Right below in that stack (duh).. it's a thread waiting for a
C3P0 db connection. Must have been more tired when I looked
before. Any way it could be jetty-related?
    at com.mchange.v2.c3p0.impl.C3P0ImplUtils.allocateIdentityToken(C3P0ImplUtils.java:192)
    at com.mchange.v2.c3p0.impl.DriverManagerDataSourceBase.<init>(DriverManagerDataSourceBase.java:205)
    at com.mchange.v2.c3p0.DriverManagerDataSource.<init>(DriverManagerDataSource.java:60)
    at com.mchange.v2.c3p0.DriverManagerDataSource.<init>(DriverManagerDataSource.java:56)
    at com.mchange.v2.c3p0.ComboPooledDataSource.<init>(ComboPooledDataSource.java:113)
    ...
    at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:467)
    at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:491)
    at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:491)
    at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:491)
    at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:505)
    at org.eclipse.jetty.jndi.java.javaRootURLContext.lookup(javaRootURLContext.java:101)
    at javax.naming.InitialContext.lookup(java.naming@11.0.5-ea/InitialContext.java:409)
    at com.priot.db.dao.DaoBase.getConn(DaoBase.java:30)
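The trace shows the ComboPooledDataSource constructor running inside the JNDI lookup that DaoBase.getConn performs, so if getConn does a fresh lookup on every call, each call may be paying for pool construction (including the identity-token allocation at the top). A minimal sketch of looking the pool up once and caching it; DaoBase/getConn come from the trace, everything else (the stand-in lookup, the counter) is hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class DaoBase {
    static final AtomicInteger lookups = new AtomicInteger();

    // Holder idiom: the lookup runs exactly once, on first use,
    // with thread safety guaranteed by class initialization.
    private static class Holder {
        static final Object DATA_SOURCE = expensiveLookup();
    }

    static Object expensiveLookup() {
        lookups.incrementAndGet();   // stands in for new InitialContext().lookup(...)
        return new Object();         // stands in for the pooled DataSource
    }

    static Object getConn() {
        return Holder.DATA_SOURCE;   // real code: ((DataSource) Holder.DATA_SOURCE).getConnection()
    }

    public static void main(String[] args) {
        getConn(); getConn(); getConn();
        System.out.println("lookups=" + lookups.get());  // prints lookups=1
    }
}
```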
On 12/3/19 11:58 AM, Bill Ross wrote:
Thanks, good to know - so far the downrev is just on the home
server, but will have to consider whether to dig deeper when I'm
ready to push again. DoS at least isn't much of a factor for my
deliberately-avoided-by-all site. :-)
Here's a peek at the threads with lots of CPU on them, in case it's
on the Jetty side:
"qtp1008925772-146" #146 prio=5 os_prio=0 cpu=648389.04ms elapsed=711.23s tid=0x00007f1a1c04b000 nid=0x6eed runnable [0x00007f1af52e9000]
   java.lang.Thread.State: RUNNABLE
    at java.util.WeakHashMap.get(java.base@11.0.5-ea/WeakHashMap.java:404)
    at com.mchange.v2.encounter.AbstractEncounterCounter.encounter(AbstractEncounterCounter.java:41)

"qtp1008925772-145" #145 prio=5 os_prio=0 cpu=546854.46ms elapsed=772.97s tid=0x00007f1a98369800 nid=0x6ecf runnable [0x00007f1a5b0f1000]
   java.lang.Thread.State: RUNNABLE
    at java.util.WeakHashMap.get(java.base@11.0.5-ea/WeakHashMap.java:404)
    at com.mchange.v2.encounter.AbstractEncounterCounter.encounter(AbstractEncounterCounter.java:41)
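Two RUNNABLE threads both sitting in WeakHashMap.get is the classic signature of an unsynchronized hash map being mutated concurrently: a racy resize can link a bucket chain into a cycle, after which get() traverses it forever, burning a full core per thread (which would line up with the cpu= figures above). Whether that is what c3p0's encounter counter is hitting here is only a guess, but the general defense is to serialize access to the map; a sketch with hypothetical names:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class SafeCounter {
    // WeakHashMap is not thread-safe; without external synchronization,
    // concurrent writes can corrupt the table and leave readers spinning in get().
    private final Map<Object, Long> counts =
            Collections.synchronizedMap(new WeakHashMap<>());

    long bump(Object key) {
        // merge() runs under the wrapper's lock, so only one writer at a time
        return counts.merge(key, 1L, Long::sum);
    }

    public static void main(String[] args) {
        SafeCounter c = new SafeCounter();
        Object k = new Object();  // strong reference keeps the weak key alive
        c.bump(k);
        c.bump(k);
        System.out.println(c.bump(k));  // prints 3
    }
}
```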
For what it's worth, I also just down-versioned from
9.4.24.v20191120 because my server was using 300% CPU
with no client activity. I can't rule out my own
changes, and a couple of out-of-practice looks at thread
dumps didn't give me an answer. But there's nothing I've
added that would keep a thread busy like that after
startup, and it happens after running a while. So far it
hasn't happened on the down rev: 9.4.12.v20180830.
I wonder if there's a thread activity monitor one could
add that would warn if a thread seemed runaway..
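On the monitoring idea: the JDK already exposes per-thread CPU time through ThreadMXBean, so a crude runaway-thread watchdog can be written in user code. A sketch, not production code; the threshold, the poll interval, and the spinner thread used to demo it are all arbitrary choices:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

public class RunawayWatch {
    // Snapshot of CPU time (nanoseconds) per live thread id.
    static Map<Long, Long> sample(ThreadMXBean mx) {
        Map<Long, Long> cpu = new HashMap<>();
        for (long id : mx.getAllThreadIds()) {
            long t = mx.getThreadCpuTime(id);  // -1 if thread died or unsupported
            if (t >= 0) cpu.put(id, t);
        }
        return cpu;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) return;

        // Burn CPU in one thread so the watchdog has something to flag.
        Thread spinner = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) { }
        }, "spinner");
        spinner.setDaemon(true);
        spinner.start();

        long intervalMs = 500;
        Map<Long, Long> before = sample(mx);
        Thread.sleep(intervalMs);
        Map<Long, Long> after = sample(mx);

        // Warn about any thread that used more than half a core over the interval.
        for (Map.Entry<Long, Long> e : after.entrySet()) {
            long prev = before.getOrDefault(e.getKey(), 0L);
            double fraction = (e.getValue() - prev) / (intervalMs * 1_000_000.0);
            ThreadInfo info = mx.getThreadInfo(e.getKey());
            if (fraction > 0.5 && info != null) {
                System.out.println("runaway? " + info.getThreadName()
                        + " used " + (int) (fraction * 100) + "% CPU");
            }
        }
        spinner.interrupt();
    }
}
```

In a real server this would run periodically in its own daemon thread rather than once in main.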
Bill
On 12/3/19 1:14 AM, Silvio Bierman wrote:
Hi Greg,
At this moment we are receiving multiple error reports
from users who suffer from malfunctioning user
interfaces. We already had received some of those before
the weekend but I did not link this to our move to
9.4.24. Now a pattern is emerging.
They are mostly from Firefox users but some come from
Safari users. The symptoms are consistently similar:
missing images, unstyled content, parts of content
missing etc.
We will probably revert to the previous Jetty version we
were running (9.4.20), to make sure we do not pick one
that behaves the same. In the meantime I would be happy
to do any testing you require.
Kind regards,
Silvio
On 12/2/19 2:42 PM, Greg Wilkins wrote:
Silvio,
This is the second time I've heard
about a problem fetching browser resources like
CSS or js. Can you attach the stacks you are
seeing?
Over the last few days, exceptions have started to
come up in the logging. We can quite easily
reproduce them by testing common parts of our
web applications using Firefox. Using Chrome, the
same actions do not produce exceptions (or
warnings).
Strangely enough this seems to intermittently
fail mostly (but not exclusively) on plain GET
requests for CSS resources that are requested by
the browser as a result of an @import from
another CSS resource.
We serve the files ourselves from a servlet.
Perhaps we are doing something that triggers
this? GET requests for which we serve the
response content dynamically seem to work fine.
The same goes for POSTs. Since it only happens
via one of our code paths I suspect we are
causing this in some way, although the code is
extremely simple.
Kind regards,
Silvio
On 11/29/19 12:27 AM, Greg Wilkins wrote:
Silvio,
I believe it is ignorable and you can
turn the HttpChannelState logger level
down to suppress them.
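With Jetty 9.4's default StdErrLog, that can be done in jetty-logging.properties; if logging is bridged to slf4j/logback instead, set the equivalent logger level there. The exact logger name to silence should be taken from the warning line itself:

```properties
## jetty-logging.properties -- silence HttpChannelState warnings (Jetty 9.4 StdErrLog)
org.eclipse.jetty.server.HttpChannelState.LEVEL=OFF
```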
However, if there is a stack trace
associated with that warning, then it is
not what I think it is and you will need
to provide more information.
What I believe is happening is that
while a request is being processed, the
associated HTTP/2 stream is being reset
(probably by the client?)
This asynchronous error is detected, but
because the request is not async it
cannot be delivered to the request, so
instead we warn. This is probably overly
verbose, as clients can do silly things
like closing mid-request.
Can anyone tell me what this means? I take
it the situation is not
critical because the application has
worked flawlessly for years with
earlier Jetty versions without these
messages. Can I turn this off?