| Summary: | Blocking connectors won't close the underlying socket if a chunked request has been handled w/o reading any content |
|---|---|
| Product: | [RT] Jetty |
| Component: | server |
| Status: | RESOLVED FIXED |
| Severity: | normal |
| Priority: | P3 |
| Version: | unspecified |
| Target Milestone: | 7.1.x |
| Hardware: | PC |
| OS: | Windows 7 |
| Reporter: | Kiyoshi Kamishima <kiyoshi.kamishima> |
| Assignee: | Greg Wilkins <gregw> |
| QA Contact: | |
| CC: | jesse.mcconnell, jetty-inbox |
| Whiteboard: | |
| Attachments: | |
Description
Kiyoshi Kamishima
Created attachment 181244 [details]
A sample application to demonstrate the problem
---

thanks for the report. The change was made to ensure that data sent was not discarded by a reset on close. What should happen is that the output is shut down, and then the client closes the connection. But if the client is blocked, this will not happen. I'm investigating a solution now.

I'm trying to run the demo app, but I don't have a good Windows setup to run the client from (only VirtualBox at the moment), and no development environment on that to recreate the client with the correct ports. Could you run the test, capture the output using Wireshark, and attach it to this issue? I'll try to reproduce it in a standalone test harness.

---

Created attachment 181260 [details]
Packet captures
Thank you for picking this up promptly.
Here are the captured packet data sets. Please look into the note inside the archive. I hope this helps you diagnose the problem more closely.
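The close sequence described above (shut down the output side first, then let the client close) is TCP half-close. As a minimal stdlib sketch of the difference between that and a hard close, using plain `java.net.Socket` on the loopback interface (the class name and every detail here are illustrative, not Jetty code):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Illustrative sketch (not Jetty source): graceful half-close vs. hard close.
public class HalfCloseDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {

            // Graceful half-close: send our FIN but keep the read side open,
            // so the peer can finish sending without provoking a reset.
            OutputStream out = accepted.getOutputStream();
            out.write("done".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            accepted.shutdownOutput();

            // The peer sees the data, then a clean EOF rather than an RST.
            InputStream in = client.getInputStream();
            byte[] buf = new byte[16];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, StandardCharsets.US_ASCII)); // prints "done"
            System.out.println(in.read()); // prints -1 (clean EOF)

            // A "hard close" would instead be accepted.close() while unread
            // input is still pending, which can surface on the peer as a
            // "connection aborted/reset" error -- the symptom in this report.
        }
    }
}
```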
---

for now, I will put a hard close in the finally clause of the blocking handle calls. Can you try a jetty build after r2370 to see if it fixes the problem.

---

> for now, I will put a hard close in the finally clause of the blocking handle calls.
> Can you try a jetty build after r2370 to see if it fixes the problem.

It seems promising, and I would be willing to try it. However, due to a lack of expertise and time to set up a build environment, I am afraid I cannot test it unless a pre-built package is available. I am not even sure I can retrieve the source from behind the firewall. Would you be so kind as to provide one? Otherwise I have no choice but to wait for the next release, presumably 7.2.1.

---

Thanks, I'll provide you with a link to a build shortly.

---

Created attachment 181391 [details]
Another set of packet captures

Thank you for providing the package. I tried it and confirmed that it no longer blocks the client as it did before.

However, I still see a difference in its behavior from Jetty 6 and from the non-blocking connectors in Jetty 7. With the latest snapshot version, the connection is occasionally reset before the output end of the buffer is flushed to the wire. When that happens, the client sees an exception instead of the content of the response. It tends to happen especially when the response is long enough to span multiple packets, but it can still be seen even with a very short response when the loopback interface is used.

Here is an example of the reported error on the client end:

```
Connecting to http://172.27.190.128:8765/
Exception : System.IO.IOException: ?????????????????: An established connection was aborted by the software in your host machine
 ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
   ?? System.Net.Sockets.Socket.Receive()
   ?? System.Net.Sockets.NetworkStream.Read()
   --- ???????? ???????? ---
   ?? System.Net.ConnectStream.Read()
   ?? System.IO.StreamReader.ReadBuffer()
   ?? System.IO.StreamReader.ReadToEnd()
   ?? HuggerClientNet.Program.Main()
```

(Sorry for the many ?s. They are substitutes for Japanese Kana/Kanji characters, but I believe you can still get the point.)

I attach a set of captured packets to show the difference. [xp2seven-jetty721-reset.pcap] is the case where this situation manifests using the latest snapshot version; the response is 4000 bytes long in total. [xp2seven-jetty618.pcap] is a similar case, but with Jetty 6.1.8 on the server end. In this case, the server won't close the socket until it has drained all the incoming chunks. [xp2seven-jetty721-nonblk.pcap] is a counterpart case illustrating that the non-blocking connectors in Jetty 7 behave like Jetty 6.

Perhaps another solution, one that yields more consistent and reliable behavior, needs to be pursued rather than just resetting the connection.

P.S. I won't be back until Monday. Sorry.

---

thanks for the packet captures. That made it much clearer what was going on, and I was able to reproduce it in a test harness. It was the combination with the 100-continue feature that was at the nub of the issue. I've got a new fix that works for the normal blocking connectors, but it is now breaking the SSL connectors. Looking some more. cheers

---

fix committed for SSL. Pushing snapshot now

---

Created attachment 181795 [details]
Another set of captured packets
I tried a snapshot version as of jetty-distribution-7.2.1-20101023.065142-4, and I can see a great improvement: the client no longer observes the sudden reset of the connection by the peer, which previously caused an unsolicited exception instead of a fully received response body.

However, there is a short delay (~200 ms) each time the receive buffer is depleted. By contrast, in Jetty 6 the receive buffer seems to be open almost all the time (see [xp2seven-jetty618.pcap] in the previous set).

As we have decided to change our implementation to use non-blocking connectors instead of blocking connectors, and as the original issue seems to have been fixed in the latest snapshot release, I do not personally oppose marking this bug as fixed as it is. However, I still believe there is room for improvement with regard to performance when blocking connectors are used in conjunction with chunked encoding.
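The Jetty 6 behavior noted in the captures, where the server drains the unread chunks before closing, can be sketched with the JDK's built-in `com.sun.net.httpserver` server. This is a stdlib illustration of the pattern only, not Jetty's code path, and all names and values are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Stdlib illustration (not Jetty) of handling a chunked request whose
// body the application never reads: drain it before finishing, so the
// connection can be shut down in an orderly way.
public class ChunkedDrainDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            // Drain any unread request body first. Skipping this step is
            // the situation the bug title describes: a chunked request
            // "handled w/o reading any content".
            try (InputStream in = exchange.getRequestBody()) {
                while (in.read() != -1) { /* discard */ }
            }
            byte[] body = "Boo!".getBytes(StandardCharsets.US_ASCII);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.getResponseBody().close();
        });
        server.start();

        // Client side: force Transfer-Encoding: chunked (no Content-Length).
        URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(0);
        OutputStream os = conn.getOutputStream();
        os.write(new byte[8192]);
        os.close();
        BufferedReader r = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        System.out.println(r.readLine()); // prints "Boo!"
        conn.disconnect();
        server.stop(0);
    }
}
```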
---

I will leave this bug unclosed, as I agree there is improvement to be made. If you were able to write a unit test for it, that would greatly help us address this issue. cheers

---

> I will leave this bug unclosed, as I agree there is improvement to be made.

Good to hear, as other people may be helped by a more complete solution.

> If you were able to write a unit test for it, that would greatly help us address this issue.

You may use "A sample app..." with a slight modification to src\server7\HuggerServer7.java:

```
< os.write("Boo!".getBytes());
> byte[] a = "Boo!".getBytes();
> for (int i = 0; i < 1000; i++) { os.write(a); }
> os.write('?');
```

If you run "JettyHuggerNet.exe" against it, you will observe a slight delay which lasts for a few seconds. Such delays can be observed both locally and remotely. In contrast, running "java -cp .. client\HuggerClient" will exhibit no such delay.

By the way, although I wish I could continue to contribute to the improvement of Jetty's quality, and we know such contributions would pay us back a great deal in the future, I am currently pressed for time with other tasks and I am afraid I may no longer be able to respond to this issue in a timely manner. I am sorry about that.
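As a rough sketch of the assertion such a unit test could make, here is a self-contained, hand-rolled version using plain sockets (the wire details, names, and test shape are my assumptions, not Jetty's test harness): the server answers a chunked POST without its handler consuming the body, shuts down its output, drains the leftover chunks, and the client must receive the full response followed by a clean EOF rather than a reset.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical regression-test sketch for the behavior discussed in this
// issue; all of it is illustrative, none of it is Jetty code.
public class ChunkedCloseTest {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try (Socket s = server.accept()) {
                // Respond without having read the chunked request body.
                OutputStream out = s.getOutputStream();
                out.write(("HTTP/1.1 200 OK\r\n"
                         + "Content-Length: 4\r\n"
                         + "Connection: close\r\n\r\nBoo!").getBytes(StandardCharsets.US_ASCII));
                out.flush();
                s.shutdownOutput();                 // send FIN first...
                InputStream in = s.getInputStream();
                while (in.read() != -1) { }        // ...then drain unread chunks
            } catch (IOException ignored) { }
        });
        t.start();

        try (Socket c = new Socket("127.0.0.1", server.getLocalPort())) {
            c.getOutputStream().write(("POST / HTTP/1.1\r\n"
                    + "Host: test\r\nTransfer-Encoding: chunked\r\n\r\n"
                    + "4\r\nBoo!\r\n0\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            InputStream in = c.getInputStream();
            int b;
            while ((b = in.read()) != -1) buf.write(b);   // read through to EOF
            String response = buf.toString("US-ASCII");
            // Clean EOF reached and the body arrived intact: no reset seen.
            System.out.println(response.endsWith("Boo!")); // prints "true"
        }
        t.join();
        server.close();
    }
}
```

A real test would assert this inside a test framework rather than printing, but the shape of the check is the same.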