Hi, sorry for the confusion. The profiler was configured to measure
"wall time" for all the methods rather than CPU time. During the test,
CPU usage was pretty low (< 10%).
By the way, Ion Savin suggested changing the tcp_nodelay setting to
"true": it made a big difference. Tests went down from 6 min to 1 min
in my environment, only about 10% slower than UDP.
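
For context on why that one flag matters: tcp_nodelay maps to the
standard TCP_NODELAY socket option, which disables Nagle's algorithm.
A minimal sketch of the effect at the plain java.net level (the peer
address and port here are made up):

    import java.net.Socket;

    public class NoDelaySketch {
        public static void main(String[] args) throws Exception {
            // TCP_NODELAY disables Nagle's algorithm: small packets go out
            // immediately instead of being buffered until the peer ACKs
            // previously sent data. Synchronous request/response traffic,
            // like cluster formation, is exactly the pattern Nagle penalizes.
            try (Socket sock = new Socket("127.0.0.1", 7800)) { // hypothetical peer
                sock.setTcpNoDelay(true);
                // ... exchange messages over the socket ...
            }
        }
    }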
Gustavo
On Wed, 2014-06-11 at 11:27 -0500, Dennis Reed wrote:
Can you double-check that you're interpreting the profiler data
correctly (specifically with respect to where threads are spending
time versus where they are using CPU)?

The spot you pointed out *should* show up as a place where threads
spend lots of time, as these threads just sit waiting in the read
calls for the vast majority of their life. But it should *not* be a
CPU hotspot -- these threads should be idle during that time.
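
To make the distinction concrete, here is a rough sketch of a receiver
loop shaped like the TCPConnectionMap$TCPConnection$Receiver frame in
that trace -- not the actual JGroups source; the class name and the
length-prefixed framing are assumptions for illustration:

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.Socket;

    // Hypothetical receiver thread reading length-prefixed messages.
    public class ReceiverSketch implements Runnable {
        private final Socket sock;

        ReceiverSketch(Socket sock) { this.sock = sock; }

        public void run() {
            try (DataInputStream in = new DataInputStream(
                    new BufferedInputStream(sock.getInputStream()))) {
                while (true) {
                    // The thread parks here until the next message arrives.
                    // A wall-time profiler charges all of that waiting to
                    // SocketInputStream.read(), but the thread uses no CPU
                    // while it is blocked.
                    int len = in.readInt();
                    byte[] buf = new byte[len];
                    in.readFully(buf); // read the message body
                    // ... hand buf off to the message handler ...
                }
            } catch (IOException e) {
                // socket closed -- receiver exits
            }
        }
    }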
-Dennis
On 06/11/2014 03:52 AM, Gustavo Fernandes wrote:
> Hi,
>
> While investigating some CI failures in query (timeouts), I've
> narrowed the issue down to the JGroups protocol stack being used.
> Running 'mvn clean install' in the query/ module takes about 6 min
> (when the timeout does not happen). If I run instead:
>
> mvn -Dtest.protocol.jgroups=udp clean install
>
> Time goes down to around 50s. Recent changes to core's
> jgroups-tcp.xml for the tests were the removal of loopback=true and
> the modification of the bundler_type, but neither seems to affect
> the outcome.
>
> FYI, taking a single test and stripping it down to nothing but
> cluster formation and data population (5 objects) leads to the CPU
> hotspot below, and it takes almost 1 minute.
>
> I'd be happy to change the query tests to UDP, but first I would
> like to hear your thoughts on this issue.
>
> Gustavo
>
>
> +----------------------------------------------------------------------------------+------------------+--------------------+
> | Name                                                                             | Time (ms)        | Invocation Count   |
> +----------------------------------------------------------------------------------+------------------+--------------------+
> | +---java.net.SocketInputStream.read(byte[], int, int, int)                       | 101,742  100 %   | 4,564              |
> |   +---java.net.SocketInputStream.read(byte[], int, int)                          |                  |                    |
> |     +---java.io.BufferedInputStream.fill()                                       |                  |                    |
> |       +---java.io.BufferedInputStream.read()                                     |                  |                    |
> |         +---java.io.DataInputStream.readInt()                                    |                  |                    |
> |           +---org.jgroups.blocks.TCPConnectionMap$TCPConnection$Receiver.run()   |                  |                    |
> |             +---java.lang.Thread.run()                                           |                  |                    |
> +----------------------------------------------------------------------------------+------------------+--------------------+