I ended up working from home today, so I am using a laptop, over wifi, and then through the IBM VPN. The result is that I have about 200 ms of latency to my remote host. With this latency, the PTP RDT system is almost unusable, and it seems unreasonably slow to me.

One of the first problems I noticed is that while creating a new Remote C/C++ project, after selecting Remote Tools as my connection and setting up a connection, I typed in the project name, and it responded excruciatingly slowly. It's almost unusable. I'm guessing that for each character I type, it is going to the remote host to check whether this "new" name is a legal project name to use. Perhaps one simple optimization would be to turn off this checking until the "Next" or "Finish" button is hit. The same problem occurs when typing in the "location" field.

Once I was finally able to create that remote connection, it took a full 90 seconds to open the project. The project contains about 200 files and directories in the root directory, but even so, this seems very slow. After opening the root directory of the project, the system spends several minutes "Searching for Binaries". Why is this operation so slow? Is it not done remotely? Clicking on a given file in the directory tree takes 5 seconds to highlight, and it's even slower if I click while the search for remote binaries is in progress.

I then edited a 20-line .h file, and saving it took 10 seconds. I tried doing an "scp" of the same file to the remote machine, which requires creating an ssh tunnel, copying the file, and then tearing down the tunnel, and this took 7 seconds. Since the DStore daemon is continuously connected, it seems like DStore ought to be faster, right?

I was able to perform one index operation on my project, but it took two minutes. This same project, when indexed locally (with all of the same index settings), takes only about 10 seconds.
I shut down and then restarted Eclipse, re-opened the project, and modified one file so that I could get it to rebuild the index. This index rebuild took three minutes and 45 seconds this time. Sometimes when saving a file or performing a seemingly innocuous operation, I get the little "watch" cursor, and the IDE becomes unusable for quite a while. We had started moving in the direction of using PTP/RDT/Remote Tools as our solution for remote development, but based on today's experience, I now have very serious misgivings about that direction.
That performance is not typical. Many of the PTP committers use RDT over the VPN all the time. Latency is not the only factor that can affect performance. If there's a lot of packet loss for example, it can really bog things down. I can't really theorize very well as to what is wrong without more information. Searching for binaries shouldn't take very long. Did you select a remote toolchain when you created your project? The remote toolchains should have the binary parsers disabled. Maybe we missed this in the remote GNU toolchain?
I'm on a VPN from home, over wifi. Ping is giving me about 70 ms. Creating a remote project took only a second or so when trying to follow your scenario. Opening the project (I assume you meant expanding it to see the source files) was about a second (mine had maybe a dozen files). Opening a source file in the editor also took less than a second. Saving a file took maybe 2 seconds. I re-indexed the project and, going by the progress info in the lower right of the workbench, it took maybe a second. The behavior you describe sounds familiar; I *think* I have seen similar results before, but can't remember why. Probably chalked it up to network idiosyncrasies at the particular time. It's been a while.
I have some additional information now; unfortunately, it's a bit complicated. I tried several more experiments.

As Chris Recoskie guessed, the binary parser is enabled by default for both the Linux GNU Toolchain and the Remote Linux GNU Toolchain. Turning the binary parser off dramatically improved the situation for opening and closing remote directories; however, the behavior is still far from ideal:

- At project creation time, if I choose the Remote Provider (Remote Tools) and then a connection with long latency (100 ms+), and then start typing either the project name or the location, it is *very* slow, because, as I stated before, it checks on every character typed whether or not it is a legitimate directory or project name. This is not fixed by turning off the binary parser.

- Indexing using RDT/Remote Tools is still much slower than indexing a local project. Using the same exact source, I can index a project in 5 seconds locally, but it takes 22 seconds if I use RDT/Remote Tools *to the same laptop*. In other words, it's four times slower just by using the RDT/Remote Tools mechanism over the network loopback device.

I found that there's a Linux command that can simulate network latency on the client machine:

  sudo /sbin/tc qdisc add dev eth0 root netem delay 100ms

(Once you've done that, change "add" to "change" to adjust the latency. To simulate latency on the same machine, change "eth0" to "lo".)

I used that to simulate a 100 ms latency to our locally located remote host (its normal ping time from the office is 1-2 ms). When I increase the network latency, the indexing time does not increase much; however, the amount of time that it takes to open and then to save files increases dramatically. For example, with 100 ms of latency, it takes 2-3 seconds to open a 20-line .h file. With 200 ms, that rises to 3-4 seconds. This seems to imply that there are about 3 s / 0.2 s = 15 round-trip packet exchanges involved in retrieving one file.
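That round-trip estimate is just the extra open time divided by the round-trip latency. As a sketch of the arithmetic (the class and method names are mine, not anything in RDT):

```java
public class RoundTripEstimate {
    // Rough count of round trips implied by a remote operation's duration:
    // time spent waiting divided by one network round-trip time.
    static long roundTrips(double operationSeconds, double rttSeconds) {
        return Math.round(operationSeconds / rttSeconds);
    }

    public static void main(String[] args) {
        // ~3 s to open a 20-line .h file at a 200 ms round-trip time
        System.out.println(roundTrips(3.0, 0.2)); // prints 15
    }
}
```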
At 200ms of latency, it takes about 60 seconds to index the project. It also seems to frequently get stuck in a state where it says "refreshing workspace", which seems to take an enormous amount of time. I don't know what's going on with that, but it seems that the refresh process is heavily affected by network latency. If I reduce the latency back to 1-2ms, the refresh does complete. The other thing mentioned was packet loss. I've tried running extended pings, and I'm simply not seeing packet loss. I haven't had any other problems connecting with these machines, copying files, etc. over the net.
Bug 356934 created to track the binary parser issue.
I tried indexing a much larger project yesterday - the Linux kernel source - and am still seeing an approximately 3X slowdown in indexing performance versus indexing locally. It takes about 20 minutes when run locally, but a bit over an hour when running remotely. This was using a machine that has a low-latency connection - 1-2 ms. This probably ought to be a separate bugzilla entry, since it's not related to a high-latency connection.
*** This bug has been marked as a duplicate of bug 357697 ***
I'm re-opening this because one of the things reported in this bug is still unresolved: "... while creating a new Remote C/C++ project, after selecting Remote Tools as my connection and setting up a connection, I typed in the project name, and it responded excruciatingly slowly. It's almost unusable. I'm guessing what is happening is that for each character I type, it is going to the remote host to see if this "new" name is a legal project name to use. Perhaps one simple optimization would be to turn off this checking until the "Next" or "Finish" button is hit. The same problem occurs when typing in the "location" field."

I tracked this behavior down to modify listeners that are attached to these two fields. Those listeners call validateLocation() and validatePage(), which access the remote file system. On machines with a longish latency, this is quite annoying. My suggestion is that the modify listeners be removed, and that validatePage() be called once the user clicks Next.
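An alternative to removing the listeners outright would be to debounce them, so the expensive remote check only runs once typing pauses. A minimal sketch in plain Java (illustrative only, not RDT/SWT code; the class and names are assumptions):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Debounce sketch: rather than validating against the remote host on every
// keystroke, restart a short timer and run the check only after typing stops.
public class DebouncedValidator {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMs;
    private final Runnable validate; // e.g. would wrap validatePage()
    private ScheduledFuture<?> pending;

    public DebouncedValidator(long delayMs, Runnable validate) {
        this.delayMs = delayMs;
        this.validate = validate;
    }

    // Called from the field's modify listener on every keystroke.
    public synchronized void textChanged() {
        if (pending != null) {
            pending.cancel(false); // superseded by a newer keystroke
        }
        pending = scheduler.schedule(validate, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

With a delay of a few hundred milliseconds, a burst of keystrokes collapses into a single remote validation, while the Next/Finish handlers can still invoke the validation directly.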
Reducing the importance of this from major to normal, since it's really just an annoyance.
Assuming this will not be fixed.