On Wed, Oct 10, 2012 at 4:47 PM, Scott Marlow <smarlow@redhat.com> wrote:
On 10/10/2012 06:47 AM, Dan Berindei wrote:
> Hi Scott
>
> On Wed, Oct 10, 2012 at 6:20 AM, Scott Marlow <smarlow@redhat.com> wrote:
>
> I'm trying to understand more about whether it makes sense for a
> DefaultConsistentHash to be created with a non-local owner specified
> in
> the DefaultConsistentHash constructor "segmentOwners" parameter.
>
>
> It definitely makes sense for such a DefaultConsistentHash to exist
> while the cache is starting. But by the time the cache has started (i.e.
> getCache() has returned), it should have been replaced with a
> DefaultConsistentHash that contains the local node as well.
>
> During some AS7 cluster testing that I'm running on my machine, I'm
> seeing the test stall because we loop endlessly in
> KeyAffinityServiceImpl.getKeyForAddress(). We loop because
> KeyAffinityServiceImpl.generateKeys() doesn't add any keys.
>
> We don't generate any keys because
> DefaultConsistentHash.locatePrimaryOwnerForSegment() returns address
> "node-1/web", which never matches the local node's filter
> (KeyAffinityServiceImpl.interestedInAddress() only filters for local
> owners via "node-0/web").
>
>
> http://pastie.org/5027574 shows the call stack for the
> DefaultConsistentHash constructor that is the same instance that is used
> above. If you look at the call stack, it looks like the
> DefaultConsistentHash instance may have been serialized on the other node
> and sent over (which would explain why its owner is "node-1/web", but I'm
> still not sure why/how it comes into play with the local
> KeyAffinityServiceImpl.generateKeys()).
>
>
> My guess is you're able to access the cache before it has finished
> starting, and the KeyAffinityService doesn't know how to deal with a
> cache that doesn't have any local state yet. Again, this should not
>
> I instrumented the DefaultConsistentHash constructor to call
> thread.dumpStack() only if the owner is "node-1/web" (so I could track the
> origin of the wrong DefaultConsistentHash instance being used).
> Currently, I also have INFO level logging in the DefaultConsistentHash
> ctor that always shows:
> "
> DefaultConsistentHash ctor this=DefaultConsistentHash{numSegments=1,
> numOwners=2, members=[node-1/web, node-0/web], segmentOwners={0: 0 1}system
> identityHashCode=108706475, show segmentOwners[0 of 1] = [node-1/web,
> node-0/web]
> DefaultConsistentHash ctor numSegments=1, numOwners=2
> DefaultConsistentHash ctor this.segmentOwners[0][0] = node-1/web
> "
I think I see the problem now... I missed it earlier, but you have
configured numSegments = 1, which means there will only be one primary
owner for all the keys in the cache. Since node-1 is still alive, it will
remain the primary owner for the single segment, and node-0 will never
become primary owner for any key. (It will be a backup owner, but
KeyAffinityService only looks for primary owners.)
We probably need to add a check to the KeyAffinityServiceImpl constructor
and abort if there are not enough segments for each node to be a primary
owner (i.e. numSegments < numNodes). In the meantime, I think you can just
increase numSegments in your configuration so that it's greater than the
number of nodes and it should work.
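For reference, here is a minimal sketch of the equivalent programmatic
configuration (the AS7 tests configure the cache through the Infinispan
subsystem instead, so the builder calls and the values below are only
illustrative):
"
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Give the hash more segments than the expected cluster size so that every
// node can end up as the primary owner of at least one segment.
Configuration config = new ConfigurationBuilder()
   .clustering()
      .cacheMode(CacheMode.DIST_SYNC)
      .hash()
         .numSegments(60) // anything >= the number of nodes avoids the starved node
         .numOwners(2)
   .build();
"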
> Since this testing involves multiple tests
> (org.jboss.as.test.clustering.cluster.singleton.SingletonTestCase,
> org.jboss.as.test.clustering.cluster.web.ReplicationWebFailoverTestCase,
> org.jboss.as.test.clustering.cluster.web.GranularWebFailoverTestCase,
> org.jboss.as.test.clustering.cluster.web.passivation.SessionBasedSessionPassivationTestCase,
> org.jboss.as.test.clustering.cluster.web.passivation.AttributeBasedSessionPassivationTestCase,
> org.jboss.as.test.clustering.cluster.web.DistributionWebFailoverTestCase),
> it's not surprising to see that we reach the DefaultConsistentHash
> constructor 12 times. The segment owners for the 12 constructors are as
> follows:
> 1. [node-0/web]
> 2. [node-0/web]
> 3. [node-0/web, node-1/web]
> 4. [node-0/web, node-1/web]
> 5. [node-1/web]
> 6. [node-1/web]
> 7. [node-1/web]
> 8. [node-1/web, node-0/web]
> 9. [node-1/web, node-0/web]
> 10. [node-1/web, node-0/web] (we use this one when stuck in a loop)
> 11. [node-0/web]
> 12. [node-0/web]
> We are using the #10 DefaultConsistentHash constructed instance from above
> for several minutes to an hour (if I let the test run that long while the
> KeyAffinityServiceImpl.getKeyForAddress() continues in the loop).
> Could there be a problem with the ordering of the segment owners? Or is
> the more likely timing problem that we never switch to using #11/#12?
The switch to 11/12 only happens after node-1 is dead. All the nodes in the
cluster use the same consistent hash, and as long as you have only one
segment for two nodes, there's always going to be one node that isn't a
primary owner.
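To see why in plain Java (this is just an illustration, not Infinispan
internals; the member list below is made up to match your logs):
"
import java.util.Arrays;
import java.util.List;

// With numSegments = 1 there is exactly one primary owner in the whole cache:
// the first owner of segment 0. Every other member can only ever be a backup.
List<List<String>> segmentOwners =
      Arrays.asList(Arrays.asList("node-1/web", "node-0/web"));
for (int segment = 0; segment < segmentOwners.size(); segment++) {
   System.out.println("segment " + segment + " primary owner: "
         + segmentOwners.get(segment).get(0));
}
// Prints only "segment 0 primary owner: node-1/web", so a KeyAffinityService
// on node-0/web has nothing to generate keys for.
"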
> happen - getCache() should not return that soon - but it could be that
> it does happen when multiple threads try to start the same cache in
> parallel. Can you post logs with TRACE enabled for org.infinispan and/or
> a link to your test code?
>
>
> Sure, I can enable TRACE for Infinispan and attach the logs to the
> ISPN-2376 jira. I'll add links to the test code there as well (as
> comments).
> Also, KeyAffinityServiceImpl.generateKeys() contains:
> "
> // if we had too many misses, just release the lock and try again
> if (missCount < maxMisses) {
> "
> I tried changing the above to a ">=" check and also tried removing the
> check (just did the keyProducerStartLatch.close()), but neither change had
> a direct impact on the current problem.
I think it's very likely you also have a thread stuck in
KeyAffinityServiceImpl.getKeyForAddress(), and that thread will re-open the
latch - so that line doesn't really matter. But getKeyForAddress() could
throw an exception if it sees there's no way for the target nodes to ever
primary-own a key (i.e. numSegments < numNodes).
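Something along these lines, perhaps (just a rough sketch of the check I have
in mind, not actual KeyAffinityServiceImpl code - the helper name is made up):
"
import org.infinispan.distribution.ch.ConsistentHash;

// Hypothetical guard: fail fast instead of looping forever when some member
// can never become a primary owner (numSegments < number of members).
static void checkEnoughSegments(ConsistentHash ch) {
   int numSegments = ch.getNumSegments();
   int numMembers = ch.getMembers().size();
   if (numSegments < numMembers) {
      throw new IllegalStateException("Only " + numSegments + " segment(s) for "
            + numMembers + " members; some nodes can never primary-own a key");
   }
}
"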
Cheers
Dan