
Bug 328199

Summary: Blocking connectors won't close the underlying socket if a chunked request has been handled without reading any content
Product: [RT] Jetty
Component: server
Status: RESOLVED FIXED
Severity: normal
Priority: P3
Version: unspecified
Target Milestone: 7.1.x
Hardware: PC
OS: Windows 7
Reporter: Kiyoshi Kamishima <kiyoshi.kamishima>
Assignee: Greg Wilkins <gregw>
CC: jesse.mcconnell, jetty-inbox
Attachments:
- A sample application to demonstrate the problem
- Packet captures
- Another set of packet captures
- Another set of captured packets

Description Kiyoshi Kamishima 2010-10-19 22:25:47 EDT
Build Identifier: 7.1.4.v20100610 or later (and at least up to 7.2.0.RC0)

If all of the following conditions are met, Jetty versions >= 7.1.4 behave differently from earlier versions, so this may be a regression:
* The application uses a blocking connector such as SocketConnector.
* The request uses chunked transfer encoding and is rather large (say, >1 MB).
* The application's handler does not read the request body at all; that is, it never calls HttpServletRequest#getInputStream().
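
For illustration, a server meeting all three conditions could look roughly like this (a sketch against the Jetty 7 API, not the attached sample; the class name and port are made up):

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.bio.SocketConnector;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class ChunkedIgnoreServer
    {
        public static void main(String[] args) throws Exception
        {
            Server server = new Server();

            // Condition 1: a blocking (BIO) connector.
            SocketConnector connector = new SocketConnector();
            connector.setPort(8765); // made-up port
            server.addConnector(connector);

            server.setHandler(new AbstractHandler()
            {
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException
                {
                    // Condition 3: respond without ever touching the request
                    // body. Condition 2 (a large chunked body) is supplied by
                    // the client.
                    response.setContentType("text/plain");
                    response.getWriter().println("Boo!");
                    baseRequest.setHandled(true);
                }
            });

            server.start();
            server.join();
        }
    }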

In versions <= 7.1.3, including the 6.x line, Jetty actively closes the socket when it finishes handling the request. Starting from 7.1.4, it shuts down the output end of the socket but not the input end. Although the input end stays open, Jetty no longer consumes any input from it either. In practice, since Jetty releases all references to the socket object anyway, a subsequent GC forcibly closes it during finalization.
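
In plain java.net terms, the contrast between the two behaviors amounts to something like this (illustrative only, not the actual Jetty code):

    import java.io.IOException;
    import java.net.Socket;

    // Illustrative only, not Jetty source: the two ways of finishing the
    // exchange that this report contrasts.
    final class FinishStrategies
    {
        // Behavior through 7.1.3: the whole socket is closed, so a client
        // still blocked writing request chunks is unblocked with an error.
        static void hardClose(Socket socket) throws IOException
        {
            socket.close();
        }

        // Behavior from 7.1.4 on: only the output side is shut down (a FIN
        // is sent); the input side stays open but is never read again, and
        // the socket is only really closed once GC finalizes it.
        static void halfClose(Socket socket) throws IOException
        {
            socket.shutdownOutput();
        }
    }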

This might have been an intentional change, though it makes me uncomfortable if so. If the client's output end, which is the input end from Jetty's perspective, is still blocked writing, it remains blocked until the next GC run finalizes the socket. The HTTP client stack in the .NET Framework Library (e.g. HttpWebRequest et al.) is one client that exhibits this behavior, and one of our applications has been caught in exactly this situation.
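
For what it's worth, a Java client can be driven into the same blocked state (a sketch using HttpURLConnection as an analogue of HttpWebRequest; the URL and sizes are made up):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Java analogue of the blocked .NET client described above (a sketch).
    public class BlockedChunkedClient
    {
        public static void main(String[] args) throws Exception
        {
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8765/").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setChunkedStreamingMode(4096); // force Transfer-Encoding: chunked

            OutputStream out = conn.getOutputStream();
            byte[] chunk = new byte[4096];
            for (int i = 0; i < 512; i++) // ~2 MB of body
            {
                // Once the server stops reading and the TCP send window
                // fills, this write() blocks; against an affected Jetty it
                // stays blocked until GC on the server finalizes the socket.
                out.write(chunk);
            }
            out.close();
            System.out.println("Status: " + conn.getResponseCode());
        }
    }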



Reproducible: Always

Steps to Reproduce:
A sample application is attached to demonstrate the problem. Please read the instructions included in the zip package.
Comment 1 Kiyoshi Kamishima 2010-10-19 22:27:08 EDT
Created attachment 181244
A sample application to demonstrate the problem
Comment 2 Greg Wilkins 2010-10-19 23:04:19 EDT
Thanks for the report.

The change was made to ensure that data already sent is not discarded by a reset on close. What should happen is that the server shuts down its output, and then the client closes the connection.

But if the client is blocked writing, that will never happen. I'm investigating a solution now.
Comment 3 Greg Wilkins 2010-10-19 23:13:21 EDT
I'm trying to run the demo app, but I don't have a good Windows setup to run the client from (only VirtualBox at the moment), and no development environment on it to rebuild the client with the correct ports.

Could you run the test, capture the traffic with Wireshark, and attach the capture to this issue?

I'll try to reproduce in a standalone test harness.
Comment 4 Kiyoshi Kamishima 2010-10-20 04:02:01 EDT
Created attachment 181260
Packet captures

Thank you for picking this up promptly.

Here are the captured packet data sets; please see the note inside the archive. I hope this helps you diagnose the problem more closely.
Comment 5 Greg Wilkins 2010-10-20 07:57:26 EDT
For now, I will put a hard close in the finally clause of the blocking handle calls.

Can you try a Jetty build after r2370 to see if it fixes the problem?
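
For reference, the shape of such a fix might be roughly as follows (the names handleRequest, requestFullyConsumed and the Socket field are illustrative stand-ins, not the actual Jetty source):

    import java.io.IOException;
    import java.net.Socket;

    // Sketch of the shape of the described fix, with illustrative names.
    abstract class BlockingHandleSketch
    {
        protected Socket socket;

        final void handle() throws IOException
        {
            try
            {
                handleRequest(); // parse headers, dispatch to the application
            }
            finally
            {
                // Hard close: if the handler left request content unread,
                // close the socket outright instead of only shutting down
                // the output, so a client blocked writing chunks is released.
                if (!requestFullyConsumed())
                    socket.close();
            }
        }

        abstract void handleRequest() throws IOException;
        abstract boolean requestFullyConsumed();
    }
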
Comment 6 Kiyoshi Kamishima 2010-10-20 09:06:02 EDT
> For now, I will put a hard close in the finally clause of the blocking
> handle calls.
> Can you try a Jetty build after r2370 to see if it fixes the problem?

It seems promising and I would be willing to try it. However, lacking the expertise and time to set up a build environment, I am afraid I cannot test it unless a pre-built package is available. I am not even sure I can retrieve the source from behind our firewall.

Would you be so kind as to provide one? Otherwise I have no choice but to wait for the next release, presumably 7.2.1.

Thanks,
Comment 7 Jesse McConnell 2010-10-20 09:17:09 EDT
I'll provide you with a link to a build shortly.
Comment 9 Kiyoshi Kamishima 2010-10-21 09:28:42 EDT
Created attachment 181391
Another set of packet captures

Thank you for providing the package.
I tried it and confirmed that it no longer blocks the client as it did before.

However, I still see a difference in behavior from Jetty 6 and from the non-blocking connectors in Jetty 7. With the latest snapshot version, the connection is occasionally reset before the output buffer has been flushed to the wire. When that happens, the client sees an exception instead of the response content. It tends to happen when the response is long enough to span multiple packets, but it can occur even with a very short response when the loopback interface is used.
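
Such a reset can be reproduced with plain sockets, independent of Jetty: closing a connection that still has unread input pending makes TCP send an RST, which can destroy response bytes the peer has not yet read. A sketch (outcomes are timing-dependent):

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Demonstrates reset-on-close-with-unread-input using plain sockets.
    public class ResetOnCloseDemo
    {
        public static void main(String[] args) throws Exception
        {
            ServerSocket listener = new ServerSocket(0);
            Socket client = new Socket("localhost", listener.getLocalPort());
            Socket server = listener.accept();

            // "Request" data the server never reads.
            client.getOutputStream().write(new byte[16 * 1024]);

            OutputStream out = server.getOutputStream();
            out.write("response".getBytes("ISO-8859-1"));
            server.close(); // unread input pending -> RST instead of a clean FIN

            try
            {
                int n = client.getInputStream().read(new byte[8192]);
                System.out.println("read returned " + n);
            }
            catch (Exception e)
            {
                // Typically "Connection reset": the response was written
                // before the close, yet the client never gets to see it.
                System.out.println("client saw: " + e);
            }
        }
    }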

Here is an example of the reported error on the client end.
--
Connecting to http://172.27.190.128:8765/
Exception : System.IO.IOException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
   at System.Net.Sockets.Socket.Receive()
   at System.Net.Sockets.NetworkStream.Read()
   --- End of inner exception stack trace ---
   at System.Net.ConnectStream.Read()
   at System.IO.StreamReader.ReadBuffer()
   at System.IO.StreamReader.ReadToEnd()
   at HuggerClientNet.Program.Main()
(Parts of the message were originally Japanese text from the client's locale and came through as "?"s; the standard English equivalents are given above.)

I attach a set of captured packets to show the difference.
[xp2seven-jetty721-reset.pcap] shows the situation manifesting with the latest snapshot version, where the response is 4000 bytes in total.
[xp2seven-jetty618.pcap] is the same case with the server end running Jetty 6.1.8 instead. In this case, the server does not close the socket until it has drained all the incoming chunks.
[xp2seven-jetty721-nonblk.pcap] is a counterpart case illustrating that the non-blocking connectors in Jetty 7 behave like Jetty 6.

Perhaps a solution other than simply resetting the connection should be pursued, one that yields more consistent and reliable behavior.
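
One shape such a solution could take, sketched with plain streams (illustrative, not Jetty code): drain whatever request content remains before closing, which is effectively what the Jetty 6 capture above shows. A real implementation would stop at the terminal chunk rather than reading to EOF.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    // Illustrative sketch, not Jetty source: consume the unread remainder
    // of the request so the client's pending writes complete and the
    // connection ends with a clean FIN rather than a reset.
    final class DrainingClose
    {
        static void drainAndClose(Socket socket) throws IOException
        {
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) >= 0)
                ; // discard the unread remainder of the request
            socket.close();
        }
    }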

P.S. I won't be back until Monday. Sorry.
Comment 10 Greg Wilkins 2010-10-22 00:18:30 EDT
Thanks for the packet captures. They made it much clearer what was going on, and I was able to reproduce the problem in a test harness. A combination with the 100-continues feature was at the nub of the issue.

I've got a new fix that works for the normal blocking connectors, but it is now breaking the SSL connectors. Looking into it some more.

cheers
Comment 11 Greg Wilkins 2010-10-22 19:12:04 EDT
Fix committed for SSL. Pushing a snapshot now.
Comment 12 Kiyoshi Kamishima 2010-10-26 22:51:46 EDT
Created attachment 181795
Another set of captured packets

I tried a snapshot version as of jetty-distribution-7.2.1-20101023.065142-4, and I can see a great improvement: the client no longer observes the sudden reset of the connection by the peer that used to raise an exception before the response body was fully received.

However, there is a short delay (~200 ms) each time the receive buffer is depleted. By contrast, in Jetty 6 the receive window seems to stay open almost all the time (see [xp2seven-jetty618.pcap] in the previous set).

As we have decided to change our implementation to use non-blocking connectors instead of blocking ones, and as the original issue appears to be fixed in the latest snapshot, I do not personally object to marking this bug fixed as it stands. However, I still believe there is room for improvement in performance when blocking connectors are used with chunked encoding.
Comment 13 Greg Wilkins 2010-10-26 23:53:14 EDT
I will leave this bug unclosed, as I agree there is improvement to be made.

If you were able to write a unit test for it, that would greatly help us address this issue.   

cheers
Comment 14 Kiyoshi Kamishima 2010-10-27 07:35:53 EDT
> I will leave this bug unclosed, as I agree there is improvement to be made.

Good to hear, as other people may be helped by a more complete solution.

> If you were able to write a unit test for it, that would greatly help us
> address this issue.

You may use the attached sample application with a slight modification:

In src\server7\HuggerServer7.java, replace

    os.write("Boo!".getBytes());

with

    byte[] a = "Boo!".getBytes();
    for (int i = 0; i < 1000; i++) { os.write(a); }
    os.write('?');

If you run "JettyHuggerNet.exe" against it, you will observe a slight delay lasting a few seconds. Such delays can be observed both locally and remotely. In contrast, running "java -cp .. client\HuggerClient" exhibits no such delay.
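
Alternatively, a rough standalone check in the spirit of the requested unit test could drive the scenario without the sample app at all (a sketch; the port and sizes are made up, and a real test would assert on the elapsed time instead of printing it):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    // Sends a large chunked POST the server is expected not to read, then
    // drains the response and reports how long the exchange took.
    public class ChunkedUnreadCheck
    {
        public static void main(String[] args) throws Exception
        {
            long start = System.currentTimeMillis();
            Socket socket = new Socket("localhost", 8765);
            OutputStream out = socket.getOutputStream();
            out.write(("POST / HTTP/1.1\r\n"
                     + "Host: localhost\r\n"
                     + "Transfer-Encoding: chunked\r\n"
                     + "\r\n").getBytes("ISO-8859-1"));

            byte[] chunk = new byte[4096];
            for (int i = 0; i < 256; i++) // ~1 MB of chunked content
            {
                out.write("1000\r\n".getBytes("ISO-8859-1")); // 0x1000 = 4096 bytes
                out.write(chunk);
                out.write("\r\n".getBytes("ISO-8859-1"));
            }
            out.write("0\r\n\r\n".getBytes("ISO-8859-1")); // terminal chunk

            // Drain the response until the server closes. With the delays
            // described above this takes seconds; a healthy server finishes
            // almost immediately.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            while (in.read(buf) >= 0)
                ;
            System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
            socket.close();
        }
    }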


By the way, although I wish I could keep contributing to the improvement of Jetty's quality, and we know such contributions would pay us back a great deal in the future, I am currently pressed for time with other tasks, so I am afraid I may no longer be able to respond on this issue in a timely manner. I am sorry about that.