[ https://jira.jboss.org/jira/browse/JBCACHE-1445?page=com.atlassian.jira.p... ]
Mircea Markus commented on JBCACHE-1445:
----------------------------------------
LegacyDataGravitatorInterceptor.visitCommitCommand is supposed to broadcast the
DataGravitationCleanup, but it never gets called because this is a 1PC tx (cache mode is
ASYNC_REPL).
The fix: LegacyDataGravitatorInterceptor.visitPrepareCommand now does the same thing
(i.e. broadcasts the DataGravitationCleanup) if the tx is 1PC.
This fixed the test; waiting for full suite results.
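The 1PC/2PC split above can be sketched as follows. This is a minimal stand-alone model, not the real JBoss Cache interceptor API: the class and method names mirror the ones mentioned in the comment, but the bodies just record which phase broadcasts the cleanup.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model (not the real LegacyDataGravitatorInterceptor) of where
// the DataGravitationCleanup broadcast has to happen.
class GravitationCleanupModel {
    final List<String> broadcasts = new ArrayList<>();

    // In ASYNC_REPL the prepare carries the whole transaction (1PC), so
    // visitCommitCommand is never invoked; the cleanup must go out here.
    void visitPrepareCommand(boolean onePhaseCommit) {
        if (onePhaseCommit) {
            broadcasts.add("DataGravitationCleanup");
        }
    }

    // With a full 2PC transaction the commit phase runs and broadcasts it.
    void visitCommitCommand() {
        broadcasts.add("DataGravitationCleanup");
    }

    public static void main(String[] args) {
        GravitationCleanupModel onePc = new GravitationCleanupModel();
        onePc.visitPrepareCommand(true); // 1PC: commit never visited
        System.out.println(onePc.broadcasts.size()); // 1

        GravitationCleanupModel twoPc = new GravitationCleanupModel();
        twoPc.visitPrepareCommand(false); // 2PC: prepare does nothing extra
        twoPc.visitCommitCommand();
        System.out.println(twoPc.broadcasts.size()); // 1
    }
}
```

Either way exactly one cleanup is broadcast per transaction; before the fix, the 1PC path broadcast none.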
Problem with cleanup after data gravitation
-------------------------------------------
Key: JBCACHE-1445
URL: https://jira.jboss.org/jira/browse/JBCACHE-1445
Project: JBoss Cache
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Buddy Replication
Affects Versions: 3.0.0.GA
Reporter: Brian Stansberry
Assignee: Manik Surtani
Priority: Critical
Fix For: 3.0.1.CR1, 3.0.1.GA
JBoss AS web session replication soak testing is showing issues with buddy replication.
One of the failure modes seems to show leftover data remaining in the main tree for a
session's former owner after the session has failed over to another node.
I've taken this issue as a good chance to start filling in the test infrastructure in
the org.jboss.cache.integration package. Test
org.jboss.cache.integration.websession.BuddyReplicationFailoverTest.testFailoverAndFailBack()
shows the issue.
I think some of the tests in the buddyreplication package should be catching this;
I'm not sure why they pass. Likely some subtle variation in config.
The test uses a lot of infrastructure to mock what the AS does. But underneath it all,
the commands to the cache come down to:
node0:
getData(fqn) w/ data gravitation option // nothing there
put(fqn, map) // establishes session
put(fqn, map) // updates session
node3:
getData(fqn) w/ data gravitation option // gravitates session
put(fqn, map) // updates session
At this point the cache contents are examined, and the original node is still present in
node0's main tree. A buddy backup node for data owner node3 is also present on node0, as
it should be.
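The end state the test expects can be sketched with plain maps standing in for the cache trees. Everything here is a toy model: the Fqn string and the "main tree" / "backup region" maps are assumptions for illustration, not JBoss Cache structures.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the expected post-gravitation state: the session leaves
// node0's main tree and survives on node0 only as a backup for node3.
class GravitationExpectation {
    static final String FQN = "/session1"; // hypothetical session Fqn

    // Returns {node0 main tree still holds the session,
    //          node0's backup region for node3 holds it}.
    static boolean[] simulate() {
        Map<String, Map<String, String>> node0Main = new HashMap<>();
        Map<String, Map<String, String>> node3Main = new HashMap<>();
        Map<String, Map<String, String>> node0BackupOfNode3 = new HashMap<>();

        node0Main.put(FQN, new HashMap<>(Map.of("v", "1"))); // put establishes session
        node0Main.get(FQN).put("v", "2");                    // put updates session

        node3Main.put(FQN, node0Main.get(FQN)); // getData w/ gravitation pulls it over
        node0Main.remove(FQN);                  // DataGravitationCleanup must do this
        node0BackupOfNode3.put(FQN, node3Main.get(FQN)); // node0 keeps only a backup

        return new boolean[] { node0Main.containsKey(FQN),
                               node0BackupOfNode3.containsKey(FQN) };
    }

    public static void main(String[] args) {
        boolean[] state = simulate();
        // The failing test observed the opposite of state[0]: the
        // main-tree copy on node0 remained after gravitation.
        System.out.println("node0 main tree has session: " + state[0]); // false
        System.out.println("node0 backup region has it:  " + state[1]); // true
    }
}
```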