| Summary: | dstore: opening a remote editor terminates background transfer operations | | |
|---|---|---|---|
| Product: | [Tools] Target Management | Reporter: | Martin Oberhuber <mober.at+eclipse> |
| Component: | RSE | Assignee: | David McKnight <dmcknigh> |
| Status: | CLOSED FIXED | QA Contact: | Martin Oberhuber <mober.at+eclipse> |
| Severity: | normal | | |
| Priority: | P2 | | |
| Version: | unspecified | | |
| Target Milestone: | 1.0 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
|
Description
Martin Oberhuber
NB: From an API point of view, it needs to be defined whether the "Files" service has to be able to process multiple simultaneous connections, or whether the RSE UI is able to serialize such connections. Currently the UI does not do that, so multiple simultaneous requests are made to the service, and it looks like the dstore service does not handle that properly. To allow for very simple service implementations, it might be worth considering a flag that marks services as multi-request-capable or not. If a service is not capable of handling multiple requests, then "Run in background" should not be available for such jobs in the UI; or jobs like "Open Editor" should show the well-known Eclipse "Waiting for background jobs to complete" dialog until all background operations have finished. In other words, for services that cannot handle multiple requests, the Eclipse Jobs framework should be used with proper locking to avoid simultaneous jobs being scheduled. The Job scheduler is capable of doing that, so it should not be too hard.

I'm thinking that the problem here is that we're not preventing multiple concurrent downloads of the same file. I've added scheduling rules to prevent this from happening. I've been able to test this with multiple copy/paste operations on the same file. Can you see if my changes fix the problem you encountered?

I'm going to assume my change fixes this. If you find you're still able to reproduce it, please reopen.

Exactly the same problem is still reproducible with HEAD as of 7-Aug-2006. I did copy & paste for the large file, then put the transfer in the background. When opening an editor for the first time, it worked OK. I then double-clicked on two other files (from other directories), and the original background transfer stalled: no more progress reports. The job itself remained in the Progress view for a minute or so and eventually vanished, but the transferred file was truncated (only 2.9 MB of 18 MB were transferred). I verified that the transfer works OK when I do not open an editor (it took 2 minutes). I then did the same operation again (copy, paste, put in background, edit). This time the transfer stalled when opening the editor for the first time; I got a "Synchronizing Resources" dialog, then "Overwrite Resource?", and chose yes. It turned out that I ran into bug 149186: the truncated file was written back to the host, thereby destroying the original file.

I discovered that the problem is as follows: with dstore downloads, a single mapping is used to determine where a downloaded file is to be received. The problem is that if the mapping changes in mid-download, the location of newly received bytes changes too; hence concurrent downloads are a problem. To fix this I've changed dstore to always assume absolute local paths rather than doing a mapping. To apply the fix to a driver, both the client and the server need to be updated.

Verified during RC3 testing.

[target cleanup] 1.0 M4 was the original target milestone for this bug
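The Eclipse Jobs framework locking discussed in the thread above is typically expressed through scheduling rules. The sketch below is a minimal, hypothetical illustration of that approach, not the actual RSE fix; the class names `RemoteFileRule` and `DownloadJob` are invented. A rule keyed on the remote file path makes two jobs that touch the same file conflict, so the Job manager runs them one after the other, while jobs on different files can still run in parallel.

```java
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.ISchedulingRule;
import org.eclipse.core.runtime.jobs.Job;

/** Hypothetical rule: two instances conflict when they refer to the same remote path. */
class RemoteFileRule implements ISchedulingRule {
    private final String remotePath;

    RemoteFileRule(String remotePath) {
        this.remotePath = remotePath;
    }

    public boolean contains(ISchedulingRule rule) {
        // A rule always contains itself; equal paths are treated the same way.
        return isConflicting(rule);
    }

    public boolean isConflicting(ISchedulingRule rule) {
        return rule instanceof RemoteFileRule
                && ((RemoteFileRule) rule).remotePath.equals(remotePath);
    }
}

/** Hypothetical download job; the Job manager serializes jobs whose rules conflict. */
class DownloadJob extends Job {
    DownloadJob(String remotePath) {
        super("Downloading " + remotePath);
        setRule(new RemoteFileRule(remotePath));
    }

    protected IStatus run(IProgressMonitor monitor) {
        // ... perform the transfer, reporting progress to the monitor ...
        return Status.OK_STATUS;
    }
}
```

Because `isConflicting` only matches rules for the same path, unrelated transfers remain concurrent; serializing all transfers against a service that cannot handle any parallelism would instead use a single shared rule.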
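To make the mapping problem described above concrete, here is a hedged sketch of the failure mode; the classes are invented for illustration and are not the dstore implementation. When the target location for incoming bytes lives in a single shared field, starting a second download redirects the bytes of the first, still-running transfer, which matches the truncated-file symptom; carrying an absolute local path with each request removes the shared state.

```java
import java.io.FileOutputStream;
import java.io.IOException;

/** Hypothetical sketch of the race; not the actual dstore code. */
class MappedReceiver {
    // Single shared mapping: the one place incoming bytes get written.
    private String currentLocalPath;

    void startDownload(String localPath) {
        // A second download started here retargets bytes that are still
        // arriving for the first, unfinished transfer.
        currentLocalPath = localPath;
    }

    void receiveChunk(byte[] chunk) throws IOException {
        FileOutputStream out = new FileOutputStream(currentLocalPath, true);
        try {
            out.write(chunk); // may now append to the wrong file
        } finally {
            out.close();
        }
    }
}

/** With an absolute local path carried per chunk there is no shared state to corrupt. */
class AbsolutePathReceiver {
    void receiveChunk(String absoluteLocalPath, byte[] chunk) throws IOException {
        FileOutputStream out = new FileOutputStream(absoluteLocalPath, true);
        try {
            out.write(chunk);
        } finally {
            out.close();
        }
    }
}
```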