The scrolling of the Events View is painfully slow. Implementation of the long-term solution for the Events View (see below) is somewhat demanding, so a quick and dirty fix was introduced as a temporary solution (the ones that last...). The fix consisted of using a virtual table which gets populated on a per-need basis, i.e. with what is actually visible. This is a bit clunky since that table is nonetheless populated with N pointers to potential rows, and clearing the table (when switching experiments) takes time more or less proportional to the number of virtual elements in the table. This is particularly obvious when switching from an experiment that has millions of events to one that has just a few thousand or tens of thousands.

Also, scrolling rapidly from the beginning to the end of the trace (and back) is a bit jumpy since it relies indirectly on the checkpoints table to access a specific event. The problem is compounded by the platform, which insists on issuing one request per displayed row; this can translate into multiple, inefficient re-positionings of the trace (to checkpoints) followed by reads to get to the target event. This can easily be mitigated by implementing a simple caching mechanism.

Long-term solution: because the Events View can potentially have to display many millions of events, not to mention grow dynamically as events are streamed in, it is not realistic to just create a "regular" fixed-size SWT table to display these events (even a virtual one). The preferred solution would be to create a fixed-size table (50-100 rows) and use an adjacent vertical slider as a scrollbar to rapidly navigate the trace. The slider would approximate the position based on the *timestamp* of the event rather than on its rank, e.g. positioning the slider at 25% of its range would "select" the event at 25% of the trace time range (a sketch of this mapping follows below). There are 2 drawbacks to this approach:

1. This is slightly different from the "intuitive" notion of a scrollbar, where one would expect to go to the event at the corresponding *rank*. This can become quite obvious if there are gaps in the time distribution of the trace events. It is a minor issue since (a) the actual number of events is typically quite large, (b) the events are more or less evenly distributed, and (c) Mr. User probably doesn't care at all about the event rank.

2. While scrolling forward is fairly trivial, scrolling backward is a bit trickier: in the general case, it cannot be assumed that there are delimiters to easily locate a prior event. This can be mitigated by using checkpoints at regular intervals and by reading blocks of events at a time. This is well in line with the design of TMF (the unimplemented part of the Trace model :-).
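A minimal sketch of the slider-to-timestamp mapping, assuming the trace exposes its start and end timestamps (the class and method names here are illustrative, not actual TMF API):

final class SliderMapping {
    /**
     * Map a slider position to a timestamp within [traceStart, traceEnd].
     * E.g. a slider at 25% of its range yields the timestamp at 25% of
     * the trace time range, regardless of event ranks.
     */
    static long sliderToTimestamp(int sliderPos, int sliderMax,
                                  long traceStart, long traceEnd) {
        double ratio = (double) sliderPos / (double) sliderMax;
        return traceStart + (long) (ratio * (traceEnd - traceStart));
    }
}

The caller would then seek to the nearest event at or after the returned timestamp, typically through the checkpoints table.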
*** Bug 320410 has been marked as a duplicate of this bug. ***
Created attachment 175420 [details] TMF virtual events table
This patch 'virtualizes' the events table and can handle arbitrarily large data sets (well, up to MAXINT entries) without busting memory with virtual entries (as in regular Tables with the SWT.VIRTUAL style flag). There is still an unacceptable lag when navigating back and forth in the trace over large 'distances'. This will have to be addressed with a smarter caching scheme at the experiment level (coming up).
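For illustration, here is a rough, self-contained sketch of the fixed-size-table idea in SWT: a table capped at a constant number of rows, with an adjacent slider that repositions the visible window. This is not the attached patch; eventText() is a hypothetical accessor, and the mapping here is by rank for brevity (the proposal above maps by timestamp):

import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.*;

public class FixedSizeEventsTable {
    static final int VISIBLE_ROWS = 50;           // the table never grows past this
    static final long TOTAL_EVENTS = 10_000_000L; // assumed trace size

    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setLayout(new GridLayout(2, false));

        Table table = new Table(shell, SWT.BORDER);
        table.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true));
        for (int i = 0; i < VISIBLE_ROWS; i++) {
            new TableItem(table, SWT.NONE);
        }

        Slider slider = new Slider(shell, SWT.VERTICAL);
        slider.setLayoutData(new GridData(SWT.FILL, SWT.FILL, false, true));
        slider.setValues(0, 0, 1000, 10, 1, 10);
        slider.addListener(SWT.Selection, e -> {
            // Map the slider position to a starting rank and refill the window.
            long top = (long) ((double) slider.getSelection() / slider.getMaximum()
                    * (TOTAL_EVENTS - VISIBLE_ROWS));
            refill(table, top);
        });
        refill(table, 0);

        shell.setSize(400, 300);
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();
        }
        display.dispose();
    }

    static void refill(Table table, long topRank) {
        for (int i = 0; i < VISIBLE_ROWS; i++) {
            table.getItem(i).setText(eventText(topRank + i));
        }
    }

    // Hypothetical accessor; in TMF this would go through the experiment,
    // its checkpoints and the event cache.
    static String eventText(long rank) {
        return "Event #" + rank;
    }
}

The point of this layout is that memory usage is bounded by VISIBLE_ROWS, no matter how many events the trace holds.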
Committed changes to TRUNK and the Helios branch.
Created attachment 178708 [details] Events caching

The main problem left was that the events were read one at a time (SWT sends a distinct request for each table row...). This patch simply sets the size of the table's event cache to a more suitable value. Eventually, this should be set to the checkpoint interval, obtained from the experiment (and ultimately from a preference).
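For reference, a minimal sketch of the block-caching idea, assuming a hypothetical backend that can position on a checkpoint and then read a block of events (EventCache, Backend and fetchBlock are illustrative names, not actual TMF types):

import java.util.List;

class EventCache<E> {
    interface Backend<E> {
        /** E.g. seek to the nearest checkpoint, then read 'count' events. */
        List<E> fetchBlock(long startRank, int count);
    }

    private final Backend<E> backend;
    private final int blockSize;   // ideally the checkpoint interval
    private long blockStart = -1;  // rank of the first cached event
    private List<E> block = null;

    EventCache(Backend<E> backend, int blockSize) {
        this.backend = backend;
        this.blockSize = blockSize;
    }

    /** Return the event at 'rank', fetching a whole block on a miss. */
    E get(long rank) {
        if (block == null || rank < blockStart
                || rank >= blockStart + block.size()) {
            blockStart = (rank / blockSize) * blockSize; // align to block
            block = backend.fetchBlock(blockStart, blockSize);
        }
        return block.get((int) (rank - blockStart));
    }
}

With something like this in place, the per-row requests SWT issues for adjacent rows resolve to a single backend read per block instead of one trace repositioning per row.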
Patch committed to Helios and Indigo
Delivered with 0.7