
Re: [linuxtools-dev] Tmf: a model to support new trace analysis types

Hi Bernd,

Thanks for the comments. I think we're getting there. Note that I don't expect (or want) the first patch on this topic to address every point fully, just to be solid enough to get the concepts into TMF so they can start being used by actual analyses. It will improve as it goes; not all of it is necessary for a start.


On 08/01/2013 10:32 AM, Bernd Hufmann wrote:


[1] Event Based Analysis
Currently, we already have a kind of analysis framework, which is based on ITmfDataRequest/ITmfEventRequest and ITmfDataProvider. There is only one ITmfDataProvider implementation, which is the TmfTrace. The requests are sent to the ITmfDataProvider and the data provider provides data in the form of ITmfEvent. The actual analysis is done in the request, mainly in handleData(). So by implementing different request classes you can do all kinds of analyses. This is currently the core of the framework, and different types of (I call them) analysis requests exist, e.g. histogram requests, requests for building state systems (kernel state system, statistics), sequence diagram requests, search requests, filter requests and more.
Indeed, we use "analysis" to describe a few different concepts. I see the event-based analysis as internal to an analysis module; in the MVC design, it would be more at the model level. This is more in the scope of what Francis is developing, and it can certainly make use of what you guys are developing.
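To make the request-based pattern described above concrete, here is a minimal, self-contained sketch of the idea. The names mirror TMF's ITmfEventRequest/handleData() concepts, but this is a standalone illustration with hypothetical stand-in classes, not the real TMF API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for TMF's ITmfEvent; name and shape are illustrative only.
interface Event {
    String type();
}

// The analysis logic lives in the request, mainly in handleData().
abstract class EventRequest {
    private boolean completed = false;

    // Called once per event the data provider delivers.
    public abstract void handleData(Event event);

    // Called when the provider has sent all matching events.
    public void handleCompleted() {
        completed = true;
    }

    public boolean isCompleted() {
        return completed;
    }
}

// A trivial "data provider"; in TMF this role is played by TmfTrace.
class DataProvider {
    private final List<Event> events = new ArrayList<>();

    public void addEvent(Event e) {
        events.add(e);
    }

    // Sends every event to the request, then signals completion.
    public void sendRequest(EventRequest request) {
        for (Event e : events) {
            request.handleData(e);
        }
        request.handleCompleted();
    }
}

// Example analysis request: counts events of one type (histogram-bucket style).
class CountRequest extends EventRequest {
    private final String wanted;
    private int count = 0;

    CountRequest(String wanted) {
        this.wanted = wanted;
    }

    @Override
    public void handleData(Event event) {
        if (wanted.equals(event.type())) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }
}
```

Different analyses (histogram, statistics, state-system building) would each be a different request subclass sent to the same provider.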


[2] Goals of Analysis type extension point
From all the ongoing discussions, the above event-based analysis doesn't cover all use cases. There seems to be a need to
[2.3] decouple the analysis from the views (and other output)
[2.4] have a better way of triggering analysis e.g. from a menu action instead of running it when a trace is opened.
[2.5] better visualize what analysis is available for a given trace type

I think some of the goals can be achieved by the current framework.

[2.3] is the responsibility of each analysis and view. For that, a Model-View-Controller (MVC) design is needed. In the currently available analyses we only implemented it to a certain extent; the control part of the MVC design for most of the analysis views is integrated into the view itself. To fully decouple the analysis and data model generation from the view, a proper notification mechanism needs to be defined, so that the view (or other output means) can hook into it and be notified of any data model changes. I think this is not necessarily part of the analysis type definition itself; it needs to be addressed during implementation of the analysis module.
The new analysis module (advertised by the extension point) would be the controller part, making the link between the analysis itself (all that goes on behind the scenes: the event requests, etc.) and the outputs, views, and so on.

In a new patchset yesterday, I added an IAnalysisOutput that can register with an analysis. Views would be wrapped in a class implementing this. This mechanism can be used to notify the outputs when the analysis is over, and views could be updated this way.
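As a sketch of that registration idea: the interface name follows the IAnalysisOutput mentioned above, but everything else here is a hypothetical, standalone illustration of the notification mechanism, not the code under review:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for IAnalysisOutput: anything that wants to
// be told when an analysis finishes.
interface AnalysisOutput {
    void analysisCompleted(AnalysisModule module);
}

// Controller-side object: runs the analysis and notifies registered outputs.
class AnalysisModule {
    private final String name;
    private final List<AnalysisOutput> outputs = new ArrayList<>();
    private boolean finished = false;

    AnalysisModule(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void registerOutput(AnalysisOutput output) {
        outputs.add(output);
    }

    public boolean isFinished() {
        return finished;
    }

    // In TMF the analysis would run asynchronously over the trace;
    // here we just mark completion and fan out the notification.
    public void run() {
        finished = true;
        for (AnalysisOutput o : outputs) {
            o.analysisCompleted(this);
        }
    }
}

// A view would be wrapped in a class implementing the output interface.
class ViewWrapper implements AnalysisOutput {
    String lastRefreshedBy = null;

    @Override
    public void analysisCompleted(AnalysisModule module) {
        // A real view would re-query the analysis results and redraw.
        lastRefreshedBy = module.getName();
    }
}
```

Because the outputs are plain registered objects, anything other than a view (a report exporter, a command-line printer) can hook in the same way.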

[2.4] and [2.5] I agree with. With the multiplication of analysis modules and views, the framework needs to improve. Currently, all the analyses are related to a trace type and live within views. Using perspectives, we try to group which analyses are active at a time. All the analyses are triggered using the TMF signal framework for synchronizing the views. When a trace is opened, all open views and analysis modules (e.g. state system) start gathering the data and start the analysis. When a new active time range is selected, all relevant open views update their data to the active time range; same for the currently selected time. When the active trace is closed, the views clear their content. But we need more. Maybe an analysis should be triggered by a menu action (see the contribution under review https://git.eclipse.org/r/#/c/13788/ for the plotting engine). There could be a preference page defining a list of analyses to be executed when a trace opens. I think this is what you tried to show in the initial email with the new project explorer structure, and for this purpose an extension point can provide the framework.
I'll take a look at the plotting engine. Sounds interesting.
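The signal-based triggering described above can be sketched as follows. TMF's real TmfSignalManager uses annotations and reflection; this standalone version, with hypothetical names, just fans one kind of signal out to registered listeners:

```java
import java.util.ArrayList;
import java.util.List;

// A trace-related event broadcast to all interested components.
class TraceOpenedSignal {
    final String traceName;

    TraceOpenedSignal(String traceName) {
        this.traceName = traceName;
    }
}

interface SignalListener {
    void receive(TraceOpenedSignal signal);
}

// Simplified stand-in for a signal manager: every registered component
// (views, analysis modules) gets every broadcast signal.
class SignalManager {
    private final List<SignalListener> listeners = new ArrayList<>();

    public void register(SignalListener l) {
        listeners.add(l);
    }

    public void broadcast(TraceOpenedSignal signal) {
        for (SignalListener l : listeners) {
            l.receive(signal);
        }
    }
}
```

Opening a trace would broadcast one signal, and each registered view or analysis module would react by starting its own work, which is why everything currently starts at trace-open time.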

[3] Design approach
I think it's a good idea to have some code and prototypes available so that users can try them and give comments. Discussions in meetings and on mailing lists can only do so much; new ideas, problems and concerns come up during implementation and when running the features. The existing TMF framework has gone through several design stages and modifications: "we improve as we go". You already have a proposal on Gerrit, which is very good and will help us understand. However, this is only the framework part; the new analysis type is not used yet, so it's hard to see the benefit and give comments from a user's point of view. I think the next step would be to have a concrete example. I would suggest using one of the existing analysis modules (e.g. kernel state system, statistics, histogram, etc.). The new analysis type needs to work with the existing modules; otherwise there is something wrong, and we'll end up with different ways of doing analysis in the TMF framework, which will get messy in the long run.
I'm developing a few features using the analysis framework, but they are not ready to go public yet. I will also port the trace synchronization, which is on Gerrit as well but hasn't yet been reviewed in detail.


[4.2] Currently there are generic analysis types (e.g. statistics, histogram) that work for all trace types. I don't think we should display them in the project explorer as an analysis type under each of the traces.
I think for now it can stay there. I don't really like the way it is displayed in the project explorer anyway; it is just a start. In the long term, we need a more obvious and attractive way to show those analyses, like some other trace viewers do (see Florian's screenshot from Microsoft).

[4.3] There was some back-and-forth discussion about having two extension points, one for the analysis type and one for the views. Is the views extension point just there to display the available views in the project explorer under the analysis type and to be able to open them from there? I think there should instead be a way to register a view with the analysis, so that it can be easily displayed and accessed from the project explorer. This would allow plug-ins to provide views for existing analysis types.
I think the new IAnalysisOutput is a good compromise between Alex's position and mine ;-) It fits along those lines, allowing any output to be registered without needing an extra extension point.

[4.4] The trace type extension point is in tmf.ui, so the analysis type extension point might need to be moved there too, to avoid references from tmf.core to tmf.ui. If we use the trace type extension in the analysis type handling, then we need it there. The trace type extension defines the trace class implementation and event class implementation, and it might be beneficial to have access to this information. Just something to keep in mind.
Indeed, after working with a dmesg trace, it may be necessary for analyses to have access to trace types and thus go into tmf.ui...

[4.5] Command-line mode: if you want to use the RCP to trigger analyses from the command line without opening the GUI, I have to mention that the RCP application will use the GUI plug-ins even if no Display is opened. The RCP requires org.eclipse.ui. To use only the core plug-ins of TMF/LTTng, another startup procedure is needed. It's probably possible (see the JUnit tests); however, it would be another application.
OK, good to know.

[4.6] I haven't seen any comments about the handling of concurrently running analyses. This will have an impact on performance, and we need to consider it in the design of the analysis type framework. As much as possible, we should use the coalescing mechanism of the event request framework, but I can see that there is a need to improve on it. In the framework, the coalescing is done during generation of the analysis requests, but it would be beneficial to be able to join an ongoing request so that the number of times a trace is read is minimal. By the way, a prototype for this coalescing feature is currently being implemented within our team.
I guess we'll see when the problem comes. The user may also want to prioritize their analyses. For instance, if requests are coalesced and the state system is built at the same time as the execution graph and trace synchronization, that minimizes the number of times the trace is read, but the user may want the state system built faster, and so postpone the other two requests, even if it means reading the trace twice.
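A standalone sketch of the coalescing half of this discussion, with hypothetical names (TMF's actual coalescing happens inside the event-request framework): several pending requests are merged so that one pass over the trace serves all of them.

```java
import java.util.ArrayList;
import java.util.List;

// One unit of analysis work: sees each event exactly once.
interface CoalescableRequest {
    void handleEvent(String event);
}

class CoalescingReader {
    private final List<String> trace;       // the events, read in order
    private final List<CoalescableRequest> pending = new ArrayList<>();
    private int passes = 0;                 // how many times the trace was read

    CoalescingReader(List<String> trace) {
        this.trace = trace;
    }

    public void submit(CoalescableRequest request) {
        pending.add(request);
    }

    public int getPassCount() {
        return passes;
    }

    // One pass over the trace serves every pending request: this is the
    // "read the trace a minimal number of times" goal described above.
    public void runCoalesced() {
        passes++;
        for (String event : trace) {
            for (CoalescableRequest r : pending) {
                r.handleEvent(event);
            }
        }
        pending.clear();
    }
}
```

The prioritization trade-off mentioned above would sit on top of this: a scheduler could deliberately leave some requests out of a pass (accepting a second read of the trace) so that a high-priority request finishes sooner.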

That's it for now. I'm looking forward to more comments and discussion. I hope we can have a first implementation of the basic features soon, so we can really start using it and improving it.

Thanks,
Geneviève




