[infinispan-dev] again: "no physical address"

Dan Berindei dan.berindei at gmail.com
Tue Jan 31 16:55:39 EST 2012


Hi Bela

I guess it's pretty clear now... In Sanne's thread dump the main
thread is blocked in a cache.put() call after the cluster has
supposedly already formed:

"org.infinispan.benchmark.Transactional.main()" prio=10
tid=0x00007ff4045de000 nid=0x7c92 in Object.wait()
[0x00007ff40919d000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00000007f61997d0> (a
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher$FutureCollator)
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher$FutureCollator.getResponseList(CommandAwareRpcDispatcher.java:372)
        ...
        at org.infinispan.distribution.DistributionManagerImpl.retrieveFromRemoteSource(DistributionManagerImpl.java:169)
        ...
        at org.infinispan.CacheSupport.put(CacheSupport.java:52)
        at org.infinispan.benchmark.Transactional.start(Transactional.java:110)
        at org.infinispan.benchmark.Transactional.main(Transactional.java:70)

State transfer was disabled, so during cluster startup the nodes only
had to communicate with the coordinator, not with each other. The put
command had to fetch the old value from another node, so it needed
that node's physical address and had to block until PING retrieved it.
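
For reference, here is roughly the setup that hits that path. This is
only a minimal sketch assuming the 5.1-style configuration API; the
exact builder methods (e.g. stateTransfer().fetchInMemoryState(false))
may differ between versions, and the benchmark presumably does
something equivalent:

  import org.infinispan.Cache;
  import org.infinispan.configuration.cache.CacheMode;
  import org.infinispan.configuration.cache.ConfigurationBuilder;
  import org.infinispan.configuration.global.GlobalConfigurationBuilder;
  import org.infinispan.manager.DefaultCacheManager;

  public class NoPhysicalAddressRepro {
      public static void main(String[] args) {
          // Clustered transport (JGroups over UDP) with default settings.
          GlobalConfigurationBuilder global =
                  GlobalConfigurationBuilder.defaultClusteredBuilder();

          // DIST_SYNC cache with state transfer disabled: a joining node
          // pulls no state, so its first put() is also the first time it
          // talks to the owner of that key.
          ConfigurationBuilder cfg = new ConfigurationBuilder();
          cfg.clustering().cacheMode(CacheMode.DIST_SYNC)
             .stateTransfer().fetchInMemoryState(false);

          DefaultCacheManager cm =
                  new DefaultCacheManager(global.build(), cfg.build());
          Cache<String, String> cache = cm.getCache();

          // The put() does a synchronous remote get of the previous value
          // from the key's owner; if that owner's UUID has no physical
          // address mapping yet, the unicast is dropped and the caller
          // blocks until PING/retransmission supplies the address.
          cache.put("key", "value");

          cm.stop();
      }
  }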

Does PING use RSVP, or does it wait for the normal STABLE timeout
before retransmitting? Note that everything is blocked at this point:
no other message will be sent in the entire cluster until we get the
physical address.

I'm sure you've already considered this before, but why not make the
physical addresses part of the view installation message? That would
ensure that every node can communicate with every other node by the
time the view is installed.
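
Roughly what I have in mind, as a purely illustrative sketch
(ViewWithAddresses is a made-up class, not an existing JGroups type;
only Address, PhysicalAddress and View are real):

  import java.util.Map;

  import org.jgroups.Address;
  import org.jgroups.PhysicalAddress;
  import org.jgroups.View;

  // Hypothetical: a view installation message that also carries the
  // UUID -> physical address mapping of every member, so that by the
  // time the view is installed each node can reach every other node
  // without an extra discovery round-trip.
  class ViewWithAddresses {
      final View view;
      final Map<Address, PhysicalAddress> physicalAddresses;

      ViewWithAddresses(View view,
                        Map<Address, PhysicalAddress> physicalAddresses) {
          this.view = view;
          this.physicalAddresses = physicalAddresses;
      }
  }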


I'm also not sure what to make of these lines:

>>> [org.jgroups.protocols.UDP] sanne-55119: no physical address for
>>> sanne-53650, dropping message
>>> [org.jgroups.protocols.pbcast.GMS] JOIN(sanne-55119) sent to
>>> sanne-53650 timed out (after 3000 ms), retrying

It appears that sanne-55119 knows the logical name of sanne-53650,
and the fact that it's the coordinator, but not its physical address.
Shouldn't all of this information have arrived at the same time?


Cheers
Dan


On Tue, Jan 31, 2012 at 4:31 PM, Bela Ban <bban at redhat.com> wrote:
> This happens every now and then, when multiple nodes join at the same
> time, on the same host and PING has a small num_initial_mbrs.
>
> Since 2.8, the identity of a member is not an IP address:port anymore,
> but a UUID. The UUID has to be mapped to an IP address (and port), and
> every member maintains a table of UUIDs/IP addresses. This table is
> populated at startup, but the shipping of the IP address/UUID
> association is unreliable (in the case of UDP), so packets do get
> dropped when there are traffic spikes, like concurrent startup, or when
> high CPU usage slows things down.
>
> If we need to send a unicast message to P, and the table doesn't have a
> mapping for P, PING multicasts a discovery request, and drops the
> message. Every member responds with the IP address of P, which is then
> added to the table. The next time the message is sent (through
> retransmission), P's IP address will be available, and the unicast send
> should succeed.
>
> Of course, if the multicast or unicast response is dropped too, we'll
> run this protocol again... and again ... and again, until we finally
> have a valid IP address for P.
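
Just to check my understanding, the logic is then roughly the
following. This is only an illustrative sketch with made-up names,
not the real TP/PING code:

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  // Stand-ins for the UUID -> physical address cache and the discovery
  // protocol; every name here is invented for illustration.
  class LogicalAddressCache {
      private final Map<String, String> uuidToPhysical =
              new ConcurrentHashMap<String, String>();
      private final Discovery discovery;   // plays the role of PING

      LogicalAddressCache(Discovery discovery) {
          this.discovery = discovery;
      }

      // Sending a unicast to P: if P's physical address is unknown, drop
      // the message and multicast a discovery request; retransmission
      // resends it once some member has replied with the mapping.
      boolean sendUnicast(String destUuid, byte[] payload) {
          String physical = uuidToPhysical.get(destUuid);
          if (physical == null) {
              // Ask the group for P's address; members reply with it.
              discovery.multicastWhoHas(destUuid);
              return false; // message dropped now, retransmitted later
          }
          transmit(physical, payload);
          return true;
      }

      // Called when a discovery response (UUID/IP association) arrives.
      void learn(String uuid, String physicalAddress) {
          uuidToPhysical.put(uuid, physicalAddress);
      }

      private void transmit(String physicalAddress, byte[] payload) {
          // the actual datagram send would happen here
      }

      interface Discovery {
          void multicastWhoHas(String uuid);
      }
  }
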
>
>
> On 1/31/12 11:29 AM, Manik Surtani wrote:
>> I have sporadically seen this before when running some perf tests as well … curious to know what's up.
>>
>> On 30 Jan 2012, at 17:45, Sanne Grinovero wrote:
>>
>>> Hi Bela,
>>> this is the same error we were having in Boston when preparing the
>>> Infinispan nodes for some of the demos. So I hadn't seen it for a long
>>> time, but today it returned, adding a special twist to my
>>> performance tests.
>>>
>>> Dan,
>>> when this happened it looked like I had a deadlock: the benchmark was
>>> not making any more progress, and all the threads appeared to be
>>> waiting for answers. JConsole didn't detect a deadlock, and
>>> unfortunately I don't have any more logs than this from either JGroups
>>> or Infinispan (since it was supposed to be a performance test!).
>>>
>>> I'm attaching a thread dump in case it interests you, though I hope it
>>> won't be needed: this is a DIST test with 12 nodes (all in the same VM
>>> in this dump). I didn't have time to inspect it myself as I have to
>>> run, and I think the interesting news here is the "no physical
>>> address" messages.
>>>
>>> ideas?
>>>
>>> [org.jboss.logging] Logging Provider: org.jboss.logging.Log4jLoggerProvider
>>> [org.jgroups.protocols.UDP] sanne-55119: no physical address for
>>> sanne-53650, dropping message
>>> [org.jgroups.protocols.pbcast.GMS] JOIN(sanne-55119) sent to
>>> sanne-53650 timed out (after 3000 ms), retrying
>>> [org.jgroups.protocols.pbcast.GMS] sanne-55119 already present;
>>> returning existing view [sanne-53650|5] [sanne-53650, sanne-49978,
>>> sanne-27401, sanne-4741, sanne-29196, sanne-55119]
>>> [org.jgroups.protocols.UDP] sanne-39563: no physical address for
>>> sanne-53650, dropping message
>>> [org.jgroups.protocols.pbcast.GMS] JOIN(sanne-39563) sent to
>>> sanne-53650 timed out (after 3000 ms), retrying
>>> [org.jgroups.protocols.pbcast.GMS] sanne-39563 already present;
>>> returning existing view [sanne-53650|6] [sanne-53650, sanne-49978,
>>> sanne-27401, sanne-4741, sanne-29196, sanne-55119, sanne-39563]
>>> [org.jgroups.protocols.UDP] sanne-18071: no physical address for
>>> sanne-39563, dropping message
>>> [org.jgroups.protocols.UDP] sanne-18071: no physical address for
>>> sanne-55119, dropping message
>>> <threadDump.txt>
>
>
> --
> Bela Ban
> Lead JGroups (http://www.jgroups.org)
> JBoss / Red Hat


