| Summary: | Remote TreeViewer Article | | |
|---|---|---|---|
| Product: | Community | Reporter: | Peter Centgraf <peter> |
| Component: | Articles | Assignee: | Peter Centgraf <peter> |
| Status: | RESOLVED WONTFIX | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | bokowski, bradleyjames, Matthew_Hatem, mdelder, susan, tom.schindl |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
|
Description
Peter Centgraf
*** Bug 199139 has been marked as a duplicate of this bug. ***

Update: I've obviously fallen behind schedule on this. For those who are interested in this topic and want more timely info, here's the Nebula-dev mailing list message that inspired the article idea. This will be the kernel of the article. Enjoy.

--

I use the basic Table with a TableViewer and a custom implementation of ILazyContentProvider. The content provider manages a cache of already-loaded items and does paged batch loading. In other words, if the viewer requests item 0, I load items 0-499 in a single batch and cache the result in memory. Requests for items 1-499 are then extremely fast, but a request for item >= 500 will load another full page. (The user will feel a small pause while data is loading, so I wrap the call to the server in BusyIndicator.showWhile().) You could easily extend this idea to use an LRU cache.

Keep in mind that some features of a JFace Viewer that you might take for granted are not available when you are in VIRTUAL mode. Specifically, sorting and filtering must be done on the server side, since the client will not have access to a complete set of data. Also, it can be tricky to figure out when the ContentProvider should clear its cache -- I do this in the setInput() method.

I recommend using a two-phase loading API: the first phase sends sorting and filtering params and retrieves an ordered list of matching primary keys; the second phase loads the "real" data via a list of primary keys. If your data sets are extremely large, you can page the first phase with a setFirstItem()/setMaxItems() approach, then use explicit keys for the full data. You could also implement a slightly simpler approach, where the first phase merely requests the count of total matching items, and the second phase uses a more traditional first/max API. Or you can return the count AND the first page in the initial request. You get the idea.

One final issue will only bother you if users can edit data in-place.
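The paged batch loading described above can be sketched independently of the JFace types, so the caching logic stands on its own. This is only a rough illustration; `PagedCache`, `loadPage`, and the page size are hypothetical names and parameters, not part of any Eclipse API. In a real content provider, `get(int)` would back `ILazyContentProvider.updateElement(int)` and `clear()` would be called from `setInput()`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

// Hypothetical sketch: a paged, lazily loading cache in the spirit of the
// ILazyContentProvider pattern described above. JFace/SWT types are omitted.
public class PagedCache<T> {
    private final int pageSize;
    private final IntFunction<T[]> loadPage;        // fetches one full page (stand-in for a server call)
    private final Map<Integer, T[]> pages = new HashMap<>();

    public PagedCache(int pageSize, IntFunction<T[]> loadPage) {
        this.pageSize = pageSize;
        this.loadPage = loadPage;
    }

    // A request for any index in an unloaded page loads the whole page at once;
    // every other index in that page is then served from memory.
    public T get(int index) {
        int page = index / pageSize;
        T[] items = pages.computeIfAbsent(page, loadPage::apply);
        return items[index % pageSize];
    }

    // Called when the viewer's input changes (e.g. from setInput()).
    public void clear() {
        pages.clear();
    }

    public static void main(String[] args) {
        PagedCache<String> cache = new PagedCache<>(500, page -> {
            String[] batch = new String[500];
            for (int i = 0; i < 500; i++) {
                batch[i] = "item-" + (page * 500 + i);   // stand-in for server data
            }
            return batch;
        });
        System.out.println(cache.get(0));     // loads page 0 (items 0-499)
        System.out.println(cache.get(499));   // served from the cache, no load
        System.out.println(cache.get(500));   // triggers a load of page 1
    }
}
```

An LRU variant would simply evict the least-recently-used entry from `pages` once a maximum page count is exceeded.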
It is generally a bad thing to clobber local edits with a fresh copy from the server, so you'll need some way to "pin" items in the cache, and ensure that they are not overwritten when a page is re-fetched. This is why I prefer an API that passes around explicit primary keys, even though it seems more chatty. If there is a lot of local editing, a first/max API will fetch a lot of data that must be thrown away by the client. A key-based API is also safer if you use outer joins -- most databases will truncate a result set without regard for the join, so you can get a partial match from the joined table.

The content of this article will also be presented as a short talk at EclipseCon 2008, if my proposal is accepted: https://eclipsecon.greenmeetingsystems.com/submissions/view/249

We're no longer taking articles for Eclipse Corner. Further, this bug has been in the system with little activity for some time. Feel free to reopen.
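The pinning idea above might look roughly like this, assuming a key-based phase-two fetch. All names here (`EditAwareCache`, `pin`, `refresh`) are illustrative inventions, not an existing API: pinned keys are excluded from the refresh request, so fresh server data never clobbers an in-progress edit.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Hypothetical sketch of a key-based cache that "pins" locally edited items
// so a page re-fetch cannot overwrite them.
public class EditAwareCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Set<K> pinned = new HashSet<>();

    // Record a local edit and protect it from future refreshes.
    public void pin(K key, V editedValue) {
        cache.put(key, editedValue);
        pinned.add(key);
    }

    // Phase two of a key-based loading API: fetch only the unpinned keys,
    // then merge the result without touching pinned entries.
    public void refresh(List<K> keys, Function<List<K>, Map<K, V>> fetch) {
        List<K> toLoad = new ArrayList<>();
        for (K key : keys) {
            if (!pinned.contains(key)) {
                toLoad.add(key);
            }
        }
        if (!toLoad.isEmpty()) {
            cache.putAll(fetch.apply(toLoad));
        }
    }

    public V get(K key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        EditAwareCache<Integer, String> cache = new EditAwareCache<>();
        cache.pin(2, "edited locally");
        cache.refresh(Arrays.asList(1, 2, 3), keys -> {
            Map<Integer, String> fresh = new HashMap<>();
            for (Integer k : keys) {
                fresh.put(k, "server copy of " + k);   // stand-in for the server round trip
            }
            return fresh;
        });
        System.out.println(cache.get(2));   // prints "edited locally", not the server copy
    }
}
```

Because the refresh request carries explicit keys, the server is never even asked for the pinned items, which is the "chatty but safe" trade-off described above.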