[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3183?page=com.atlassian.jira.plugin.... ]
Tomas Sykora edited comment on ISPN-3183 at 8/1/13 3:29 AM:
------------------------------------------------------------
I tried removing the check for accessing old data from the new node (old data = data stored in the remote cache store before the new node that connects to it was started) to find out what happens then.
The recordKnownGlobalKeyset operation issued on the source node *looks* OK, but I'm really not sure because of the exception below.
The test then tries to perform synchronizeData on the target node with this result:
testRollingUpgrades(org.infinispan.test.rollingupdates.IspnRollingUpdatesTest) Time elapsed: 36.699 sec <<< ERROR!
javax.management.MBeanException
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:271)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:527)
at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:263)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$InvokeHandler.handle(ServerProxy.java:1058)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$MessageReciever$1.run(ServerProxy.java:225)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:269)
... 9 more
Caused by: org.infinispan.commons.CacheException: ISPN020004: Could not find migration data in cache default
at org.infinispan.upgrade.hotrod.HotRodTargetMigrator.synchronizeData(HotRodTargetMigrator.java:94)
at org.infinispan.upgrade.RollingUpgradeManager.synchronizeData(RollingUpgradeManager.java:59)
... 14 more
Maybe this can help a little bit.
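For reference, a minimal sketch of the JMX sequence the test drives: recordKnownGlobalKeyset on the source node, then synchronizeData("hotrod") on the target (the call that fails above with ISPN020004). The JMX service URLs and the ObjectName pattern are placeholders/assumptions that depend on the server version and the cache/container names in use; only the operation names come from the RollingUpgradeManager component seen in the stack trace.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
public class RollingUpgradeJmxSketch {
    public static void main(String[] args) throws Exception {
        // 1) SOURCE node: record the known global key set so the target can later fetch every key.
        invoke("service:jmx:remoting-jmx://source-host:9999",          // placeholder URL
               "recordKnownGlobalKeyset", new Object[0], new String[0]);
        // 2) TARGET node: pull all data over Hot Rod; "hotrod" is the migrator name
        //    handled by HotRodTargetMigrator.
        invoke("service:jmx:remoting-jmx://target-host:9999",          // placeholder URL
               "synchronizeData",
               new Object[] { "hotrod" }, new String[] { String.class.getName() });
    }
    private static void invoke(String url, String operation, Object[] params, String[] sig) throws Exception {
        JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url));
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Hypothetical ObjectName -- the exact domain and name/manager attributes
            // differ per deployment; in practice look it up with queryNames(...).
            ObjectName rollingUpgradeManager = new ObjectName(
                    "jboss.infinispan:type=Cache,name=\"default(local)\",manager=\"local\",component=RollingUpgradeManager");
            mbsc.invoke(rollingUpgradeManager, operation, params, sig);
        } finally {
            connector.close();
        }
    }
}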
was (Author: tsykora):
I tried removing the check for accessing old data from the new node (old data = data stored in the remote cache store before the new node that connects to it was started) to find out what happens then.
The recordKnownGlobalKeyset operation issued on the source node *looks* OK, but I'm really not sure.
The test then tries to perform synchronizeData on the target node with this result:
testRollingUpgrades(org.infinispan.test.rollingupdates.IspnRollingUpdatesTest) Time elapsed: 36.699 sec <<< ERROR!
javax.management.MBeanException
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:271)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:527)
at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:263)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$InvokeHandler.handle(ServerProxy.java:1058)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$MessageReciever$1.run(ServerProxy.java:225)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:269)
... 9 more
Caused by: org.infinispan.commons.CacheException: ISPN020004: Could not find migration data in cache default
at org.infinispan.upgrade.hotrod.HotRodTargetMigrator.synchronizeData(HotRodTargetMigrator.java:94)
at org.infinispan.upgrade.RollingUpgradeManager.synchronizeData(RollingUpgradeManager.java:59)
... 14 more
Maybe this can help a little bit.
> HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3183
> URL: https://issues.jboss.org/browse/ISPN-3183
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Tristan Tarrant
> Priority: Critical
> Fix For: 6.0.0.Final
>
> Attachments: 52to52sourceTrace, 52to52targetTrace, 52to53sourceTrace, 52to53targetTrace
>
>
> Scenario (typical for rollups):
> Start source node, put entries.
> Start target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
> For 5.2 to 5.2 this works perfectly.
> For 5.2 source and 5.3 target -- we have problems here.
> Sorry that I can't provide any valuable info besides TRACEs.
> 4 TRACE logs -- rollups from 5.2 to 5.2 source log and target log + rollups from 5.2 to 5.3 source log and target log.
> Very quick summary:
> 5.2 to 5.2 on target: Entry exists in loader? true
> 5.2 to 5.3 on target:
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
> What changed in RemoteCacheStore? What changed in HotRod? Any ideas? Let me know, thank you!
>
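A minimal sketch of the reported scenario in plain Hot Rod client code, assuming the source endpoint listens on localhost:11222 and the target on localhost:11322 (both placeholders), and assuming the target cache is configured with a RemoteCacheStore pointing at the source. The RemoteCacheManager(host, port) constructor is the 5.x-era overload.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
public class RollupScenarioSketch {
    public static void main(String[] args) {
        // 1) Put entries into the SOURCE cluster (5.2) before the target node exists.
        RemoteCacheManager source = new RemoteCacheManager("localhost", 11222); // placeholder endpoint
        RemoteCache<String, String> sourceCache = source.getCache();
        sourceCache.put("key1", "value1");
        // 2) Start the TARGET node with a RemoteCacheStore pointing at the source,
        //    then read the same key through the target.
        RemoteCacheManager target = new RemoteCacheManager("localhost", 11322); // placeholder endpoint
        RemoteCache<String, String> targetCache = target.getCache();
        // Returns "value1" with a 5.2 target; per this report, a 5.3 target returns null.
        System.out.println("Read through target: " + targetCache.get("key1"));
        source.stop();
        target.stop();
    }
}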
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3183?page=com.atlassian.jira.plugin.... ]
Tomas Sykora commented on ISPN-3183:
------------------------------------
I tried removing the check for accessing old data from the new node (old data = data stored in the remote cache store before the new node that connects to it was started) to find out what happens then.
The recordKnownGlobalKeyset operation issued on the source node *looks* OK, but I'm really not sure.
The test then tries to perform synchronizeData on the target node with this result:
testRollingUpgrades(org.infinispan.test.rollingupdates.IspnRollingUpdatesTest) Time elapsed: 36.699 sec <<< ERROR!
javax.management.MBeanException
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:271)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:527)
at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:263)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$InvokeHandler.handle(ServerProxy.java:1058)
at org.jboss.remotingjmx.protocol.v1.ServerProxy$MessageReciever$1.run(ServerProxy.java:225)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:269)
... 9 more
Caused by: org.infinispan.commons.CacheException: ISPN020004: Could not find migration data in cache default
at org.infinispan.upgrade.hotrod.HotRodTargetMigrator.synchronizeData(HotRodTargetMigrator.java:94)
at org.infinispan.upgrade.RollingUpgradeManager.synchronizeData(RollingUpgradeManager.java:59)
... 14 more
Maybe this can help a little bit.
> HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3183
> URL: https://issues.jboss.org/browse/ISPN-3183
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Tristan Tarrant
> Priority: Critical
> Fix For: 6.0.0.Final
>
> Attachments: 52to52sourceTrace, 52to52targetTrace, 52to53sourceTrace, 52to53targetTrace
>
>
> Scenario (typical for rollups):
> Start source node, put entries.
> Start target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
> For 5.2 to 5.2 this works perfectly.
> For 5.2 source and 5.3 target -- we have problems here.
> Sorry that I can't provide any valuable info besides TRACEs.
> 4 TRACE logs -- rollups from 5.2 to 5.2 source log and target log + rollups from 5.2 to 5.3 source log and target log.
> Very quick summary:
> 5.2 to 5.2 on target: Entry exists in loader? true
> 5.2 to 5.3 on target:
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
> What changed in RemoteCacheStore? What changed in HotRod? Any ideas? Let me know, thank you!
>
[JBoss JIRA] (ISPN-3389) Forwarded transactions can remain stale after state transfer
by Erik Salter (JIRA)
[ https://issues.jboss.org/browse/ISPN-3389?page=com.atlassian.jira.plugin.... ]
Erik Salter updated ISPN-3389:
------------------------------
Description:
There is a scenario where a tx started on one node, moved during state transfer, and committed on the originating node won't be removed from the new owner's tx table.
The chain of events is as follows:
1. New topology comes in as part of a view change.
2. Local transaction started with the new topology ID. This transaction was started due to a LockControlCommand and has no modifications. Also important, it only has local locks.
3. Tx forwarded to new owner before the local lock is acquired and registered with the transaction.
4. Since the tx has only local locks and no modifications, it is only removed locally. No TxCompletion or Rollback are broadcast to the new owners.
This key becomes unusable not due to stale locks, but because the waitForTransaction() code will see that the old tx can "potentially" lock the key.
This easily happens with pessimistic caches, though I have seen it happen with optimistic caches (there is a delta between the transaction being created and the lock registration).
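For illustration, a minimal sketch of the kind of lock-only transaction that triggers a LockControlCommand on a pessimistic cache (step 2 above). The configuration file name, cache name, and key are placeholders; the sketch only shows the client-side operation, not the concurrent rebalance that forwards the tx.
import javax.transaction.TransactionManager;
import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
public class LockOnlyTxSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a config file declaring a transactional cache with pessimistic locking.
        DefaultCacheManager cm = new DefaultCacheManager("pessimistic-cache.xml"); // hypothetical config
        Cache<String, String> cache = cm.getCache("txCache");                      // hypothetical cache name
        AdvancedCache<String, String> advanced = cache.getAdvancedCache();
        TransactionManager tm = advanced.getTransactionManager();
        tm.begin();
        // Explicit lock only, no modifications. If a rebalance happens between this
        // point and the lock registration, the tx is forwarded to the new owner but
        // never completed there, leaving a stale entry in its transaction table.
        advanced.lock("someKey");
        tm.commit();
        cm.stop();
    }
}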
was:
There is a scenario where a tx started on one node, moved during state transfer, and committed on the originating node won't be removed from the new owner's tx table.
The chain of events is as follows:
1. New topology comes in as part of a view change.
2. Local transaction started with the new topology ID. This transaction was started due to a LockControlCommand and has no modifications. Also important, it only has local locks.
3. Tx forwarded to new owner.
4. Since the tx has only local locks and no modifications, it is only removed locally. No TxCompletion or Rollback are broadcast to the new owners.
This key becomes unusable not due to stale locks, but because the waitForTransaction() code will see that the old tx can "potentially" lock the key.
> Forwarded transactions can remain stale after state transfer
> ------------------------------------------------------------
>
> Key: ISPN-3389
> URL: https://issues.jboss.org/browse/ISPN-3389
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.7.Final
> Reporter: Erik Salter
> Assignee: Mircea Markus
>
> There is a scenario where a tx started on one node, moved during state transfer, and committed on the originating node won't be removed from the new owner's tx table.
> The chain of events is as follows:
> 1. New topology comes in as part of a view change.
> 2. Local transaction started with the new topology ID. This transaction was started due to a LockControlCommand and has no modifications. Also important, it only has local locks.
> 3. Tx forwarded to new owner before the local lock is acquired and registered with the transaction.
> 4. Since the tx has only local locks and no modifications, it is only removed locally. No TxCompletion or Rollback are broadcast to the new owners.
> This key becomes unusable not due to stale locks, but because the waitForTransaction() code will see that the old tx can "potentially" lock the key.
> This easily happens with pessimistic caches, though I have seen it happen with optimistic caches (there is a delta between the transaction being created and the lock registration).