Re: [jetty-users] Jetty 9 not using resources properly - N/W issue in local loopback interface
On 19/10/2013, at 19:20, dinesh kumar <dinesh12b@xxxxxxxxx> wrote:
> Hi,
> I am trying to do capacity planning for a new REST service we are developing. I would like to determine the maximum number of POSTs the server can handle under different loads. For example, for 1MB of data, what is the server overhead involved (HTTP header parsing, thread assignment from the thread pool, server context switches, etc.)? I would like to determine a rough per-core threshold for the number of parallel requests the server can handle. If there is a limiting factor in the system (say network, memory or something else), what is it?
>
> Thanks,
> Dinesh
The overhead of Jetty, in my experience, tends toward zero compared with the cost of what you actually do with that data, and especially with the real-life conditions under which that data arrives. In other words, a load test with such small payloads, against localhost, doing nothing with the data, has so many external variables that it will produce bogus results.
The first question you need to ask yourself is "what will I do with those 1MB data chunks?". If you're saving them to disk or a database, then the HTTP-side resource usage will tend toward zero.
The second question is "will I get that data from well-behaved sources, or badly behaved ones?". I had a project where the "architects" and "quality assurance" were pushing me for high performance under local-network loads and highly parallel JMeter runs of similar request types, when in reality the requests would come in slowly over the mobile network, meaning unreliable throughput and high latency.
In other words, your implementation needs to consider whether you're getting 1MB chunks from high-performance local clients, which would let you use a simple synchronous implementation like the one I sent you, where your throughput is limited roughly by the number of threads allocated to the Jetty pool divided by the time it takes to receive each 1MB body.
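By way of illustration, a minimal synchronous sketch of that kind of handler (this is not the exact code from earlier in the thread; the class name, URL pattern and buffer size are mine):

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Synchronous POST handler: the servlet thread blocks until the whole
// body has been read, so max parallelism is bounded by the Jetty pool:
// roughly (pool threads) / (time to receive one body) requests per second.
@WebServlet("/upload")
public class SyncUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        try (InputStream in = req.getInputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n; // consume and discard the body
            }
        }
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.getWriter().println("received " + total + " bytes");
    }
}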
Or, in real life, you may be serving many more clients, but each with slower throughput (network bottlenecks, client bugs, real-life Murphy's law). Jetty has always been well suited to these cases, and now with Servlet 3.1 it handles them in a standard way, because you can implement a proper async reader that only consumes resources while clients are actually sending data. The bottleneck then becomes the OS or the network hardware.
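A minimal sketch of such a Servlet 3.1 async reader, assuming an endpoint that just drains the body (class name, URL pattern and buffer size are mine):

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Servlet 3.1 non-blocking read: no thread is parked while a slow client
// trickles its upload; onDataAvailable() only runs when bytes have arrived.
@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class AsyncUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, final HttpServletResponse resp)
            throws IOException {
        final AsyncContext async = req.startAsync();
        final ServletInputStream in = req.getInputStream();
        in.setReadListener(new ReadListener() {
            private final byte[] buf = new byte[8192];

            @Override
            public void onDataAvailable() throws IOException {
                // Drain whatever is ready without blocking; the container
                // calls us again when more data arrives.
                while (in.isReady() && !in.isFinished()) {
                    if (in.read(buf) < 0) {
                        break;
                    }
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                resp.setStatus(HttpServletResponse.SC_OK);
                async.complete();
            }

            @Override
            public void onError(Throwable t) {
                async.complete();
            }
        });
    }
}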
Also, if you really want to run such a test, you need to be very careful that the bottleneck isn't in the test suite itself, as we saw at the beginning of this thread, where your JMeter setup couldn't pass 165Mbit/sec (or 200 with the other server), whilst we easily crossed 1Gbit using curl or ab.
With the 8MB POST I noticed that the times are so small that the time to process the 8MB is hardly distinguishable from the total connection setup and teardown; they are all in the 0+ms range. So if your test suite itself takes 0.5ms to prepare each connection, that setup cost alone can dominate the measurement and your apparent throughput will drop sharply.
Good luck with your tests, but please don't waste time testing what is irrelevant to the final product.