Hi Breslow. I'm using the Jetty 6/7 HttpClient to build a reverse proxy in
production, the same use case as yours. I've found what look like a few bugs,
but I'm not sure. We can discuss this topic in depth.
- The first is that the Jetty HTTP client can't work normally when an IIS
web server responds with HTTP status 100. When IIS returns 100 (Continue), it
is telling the client that it has more data to send, but the HTTP client
finalizes the interaction instead. My current workaround is to delay 100 ms and
then try to continue reading the response content.
HttpParser.java
/*
-------------------------------------------------------------------------------
*/
/**
 * Parse until next Event.
 * @returns number of bytes filled from endpoint or -1 if fill never called.
 */
public long parseNext() throws IOException
{
    //......
    case STATE_FIELD2:
        if (ch == HttpTokens.CARRIAGE_RETURN || ch == HttpTokens.LINE_FEED)
        {
            // TODO - we really should know if we are parsing request or response!
            final Buffer method = HttpMethods.CACHE.lookup(_tok0);
            if (method == _tok0 && _tok1.length() == 3 && Character.isDigit((char)_tok1.peek()))
            {
                _responseStatus = BufferUtil.toInt(_tok1);
                if (_responseStatus < 200) // handle 1xx informational statuses
                {
                    // If the final response is already buffered, skip past the
                    // interim response and restart parsing at its status line.
                    if (this.isMoreInBuffer())
                    {
                        String strTmp = new String(_buffer.array());
                        int iPos = strTmp.indexOf("\r\nHTTP/1.1");
                        if (iPos > 0)
                        {
                            _buffer.skip(iPos + 2 - _buffer.getIndex());
                            _buffer.mark();
                            _state = STATE_START;
                            continue;
                        }
                    }
                    // Otherwise poll the endpoint for up to ~1 second,
                    // waiting for the final response to arrive.
                    long lFilled = _endp.fill(_buffer);
                    int iTryCount = 1000;
                    while (lFilled <= 0 && iTryCount > 0)
                    {
                        try
                        {
                            Thread.sleep(1);
                        }
                        catch (InterruptedException e)
                        {
                            e.printStackTrace();
                        }
                        iTryCount--;
                        lFilled = _endp.fill(_buffer);
                    }
                    if (this.isMoreInBuffer())
                    {
                        String strTmp = new String(_buffer.array());
                        int iPos = strTmp.indexOf("\r\nHTTP/1.1");
                        if (iPos > 0)
                        {
                            _buffer.skip(iPos + 2 - _buffer.getIndex());
                            _buffer.mark();
                            _state = STATE_START;
                        }
                        continue;
                    }
                }
                _handler.startResponse(HttpVersions.CACHE.lookup(_tok0), _responseStatus, _buffer.sliceFromMark());
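The idea behind the patch can also be shown in isolation: an interim 1xx response carries no body and ends at the first blank line, with the final status line following on the same connection, so a parser can simply skip ahead to the next HTTP/1.1 status line. A minimal, self-contained sketch of that skipping logic (the stripInterim helper and class name are hypothetical, not part of Jetty):

```java
public class InterimResponseDemo
{
    /**
     * Skip any leading 1xx interim responses in a raw HTTP response,
     * returning the text starting at the final status line.
     * Hypothetical helper for illustration only.
     */
    static String stripInterim(String raw)
    {
        while (raw.startsWith("HTTP/1.1 1"))
        {
            // An interim response has no body: it ends at the first blank line.
            int end = raw.indexOf("\r\n\r\n");
            if (end < 0)
                break; // final response not buffered yet; caller must fill again
            raw = raw.substring(end + 4);
        }
        return raw;
    }

    public static void main(String[] args)
    {
        String raw = "HTTP/1.1 100 Continue\r\n\r\n"
                   + "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
        // Prints the response starting at the 200 status line.
        System.out.println(stripInterim(raw));
    }
}
```

Note this only handles the buffered case; the polling loop in the patch above exists for when the final response has not arrived yet.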
- The second is that the Jetty HTTP client can't work
with an IIS web server that uses NTLM authentication. I've researched it
for a long time. My conclusion is that Jetty's HttpDestination uses different
HttpConnection instances to resend NTLM's Type 2 and Type 3 messages, so IIS
can't verify the challenge response in the Type 3 message sent by the
client. The Jetty client is built around an elegant thread-pool design,
and I don't know how to improve it.
Apache HttpClient does support NTLM, so my workaround is to use both the
Jetty HTTP client and Apache HttpClient.
Please refer to:
http://hc.apache.org/httpcomponents-client/ntlm.html
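The root cause can be illustrated with a toy model (this is not real NTLM, just a sketch of its connection affinity under my assumptions; all class and method names here are hypothetical): the server binds the Type 1/Type 2 negotiation state to the TCP connection, so a Type 3 message arriving on a different pooled connection cannot be verified.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of NTLM's connection affinity; not a real NTLM implementation. */
public class NtlmAffinityDemo
{
    // The server remembers the challenge it issued, keyed by connection id.
    static Map<Integer, String> challengeByConnection = new HashMap<>();

    static String serverHandleType1(int connectionId)
    {
        String challenge = "challenge-" + connectionId; // simplified Type 2 message
        challengeByConnection.put(connectionId, challenge);
        return challenge;
    }

    static boolean serverHandleType3(int connectionId, String response)
    {
        // The server only accepts a Type 3 response on the connection
        // where it issued the matching challenge.
        String expected = challengeByConnection.get(connectionId);
        return expected != null && response.equals("answer:" + expected);
    }

    public static void main(String[] args)
    {
        String challenge = serverHandleType1(1);
        String type3 = "answer:" + challenge;

        // Same connection: the handshake succeeds.
        System.out.println(serverHandleType3(1, type3)); // true

        // Different connection (what happens when HttpDestination resends
        // on another pooled HttpConnection): verification fails.
        System.out.println(serverHandleType3(2, type3)); // false
    }
}
```

This is why a fix would need to pin the whole NTLM handshake to a single HttpConnection rather than whichever connection the pool hands out.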
Just wondering who's using the Jetty Http Client in production, what kinds of volume are being pushed through it, and what issues, if any, folks are seeing. Thanks for your help!
We've been building a proxy server using it and a variation of the HttpProxy sample servlet in the distribution, and we're seeing some strange behavior under load when one of the servers we're proxying to starts to take a long time to respond.
---Marc