[JBoss JIRA] (ISPN-3217) Rebalance doesn't store data into cache store
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3217?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-3217:
-----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
Pushed to master and 5.3.x. Who's porting this over to 5.2.x? It might require different code due to the change in 5.3.x that makes replication a degenerate case of distribution.
> Rebalance doesn't store data into cache store
> ---------------------------------------------
>
> Key: ISPN-3217
> URL: https://issues.jboss.org/browse/ISPN-3217
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.2.4.Final, 5.2.5.Final, 5.2.6.Final, 5.3.0.CR1
> Reporter: Takayoshi Kimura
> Assignee: Dan Berindei
> Priority: Blocker
> Fix For: 5.2.7.Final, 5.3.0.Final
>
> Attachments: ISPN-3217-logs.zip
>
>
> In DistCacheStoreInterceptor.skip():
> {noformat}
> private boolean skip(InvocationContext ctx, Object key, FlagAffectedCommand command) {
>    return skip(ctx, command) || skipKey(key)
>          || (isUsingLockDelegation && !cdl.localNodeIsPrimaryOwner(key)
>                && (!cdl.localNodeIsOwner(key) || ctx.isOriginLocal()));
> }
> {noformat}
> The third condition returns true during rebalance, so the data is not stored in the cache store.
> - The caller is org.infinispan.statetransfer.StateConsumerImpl.doApplyState
> - The invocation context is org.infinispan.context.SingleKeyNonTxInvocationContext
> - An example command is:
> {noformat}
> PutKeyValueCommand{key=ByteArrayKey{data=ByteArray{size=9, hashCode=cb62ce78, array=0x033e06666f6f3839..}}, value=CacheValue{data=ByteArray{size=6, array=0x033e03626172..}, version=4294968192}, flags=[CACHE_MODE_LOCAL, SKIP_REMOTE_LOOKUP, PUT_FOR_STATE_TRANSFER, SKIP_SHARED_CACHE_STORE, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES, SKIP_XSITE_BACKUP], putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1, successful=true}
> {noformat}
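The effect of the third condition can be reproduced in isolation. Below is a standalone model of the boolean logic only; the ownership checks normally answered by `cdl` are plain parameters here, purely for illustration.

```java
// Standalone model of the boolean logic in DistCacheStoreInterceptor.skip().
// The ownership flags stand in for the cdl.localNodeIsPrimaryOwner(key) and
// cdl.localNodeIsOwner(key) checks; they are parameters purely for illustration.
public class SkipLogicModel {

    static boolean skip(boolean usingLockDelegation,
                        boolean primaryOwner,
                        boolean owner,
                        boolean originLocal) {
        // Mirrors the third condition of the original method.
        return usingLockDelegation && !primaryOwner && (!owner || originLocal);
    }

    public static void main(String[] args) {
        // A state-transfer put on the receiving node: it is a (new) owner but
        // not the primary owner, and the origin is local.
        System.out.println(skip(true, false, true, true)); // true: the write skips the store
    }
}
```

With a local origin, the `(!owner || originLocal)` clause is always satisfied, so any non-primary owner skips the store write, which is exactly the state-transfer case above.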
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 6 months
[JBoss JIRA] (ISPN-2582) When a node rejoins a cluster , the existing node freezes for a minute
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2582?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2582:
-----------------------------------------------
Radoslav Husar <rhusar(a)redhat.com> made a comment on [bug 883558|https://bugzilla.redhat.com/show_bug.cgi?id=883558]
Shay, could you please comment on whether this BZ is still relevant? I would think this has been fixed in the 6.1 release.
> When a node rejoins a cluster , the existing node freezes for a minute
> ----------------------------------------------------------------------
>
> Key: ISPN-2582
> URL: https://issues.jboss.org/browse/ISPN-2582
> Project: Infinispan
> Issue Type: Bug
> Environment: EAP 6.0.0.GA
> Reporter: Shay Matasaro
> Assignee: Mircea Markus
> Fix For: 5.2.6.Final
>
>
> 1) 2 node cluster
> 2) distributable web app
> 3) run JMeter with 10 threads
> 4) start node 1
> 5) start node 2
> 6) shutdown node 1
> 7) restart node 1
> result:
> node 2 freezes for one minute, then throws exceptions, then continues
> 15:25:19,478 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-5,master:server-two/web) ISPN000172: Failed to prepare view CacheView{viewId=4, members=[master:server-two/web, master:server-one/web]} for cache default-host/demo7, rolling back to view CacheView{viewId=3, members=[master:server-two/web]}: java.util.concurrent.TimeoutException
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-3) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,687 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-6) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,687 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-1) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-9) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,685 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-2) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-5) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,686 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-4) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-10) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-12) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,696 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-3) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,689 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-11) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,705 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-4) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,706 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-10) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,699 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-1) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,696 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-6) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,714 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-10) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34869}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8834} org.infinispan.transaction.synchronization.SynchronizationAdapter@8853: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,708 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-3) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34867}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8832} org.infinispan.transaction.synchronization.SynchronizationAdapter@8851: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,717 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-6) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34864}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882f} org.infinispan.transaction.synchronization.SynchronizationAdapter@884e: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,699 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-9) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,704 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-5) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,707 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-12) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,704 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-2) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,722 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-5) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34866}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8831} org.infinispan.transaction.synchronization.SynchronizationAdapter@8850: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,721 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-9) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34870}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8835} org.infinispan.transaction.synchronization.SynchronizationAdapter@8854: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,712 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-11) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,723 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-12) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34868}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8833} org.infinispan.transaction.synchronization.SynchronizationAdapter@8852: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,712 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-4) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34863}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882e} org.infinispan.transaction.synchronization.SynchronizationAdapter@884d: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,724 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-2) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34862}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882d} org.infinispan.transaction.synchronization.SynchronizationAdapter@884c: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,715 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-1) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34865}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8830} org.infinispan.transaction.synchronization.SynchronizationAdapter@884f: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,726 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-11) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34871}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8836} org.infinispan.transaction.synchronization.SynchronizationAdapter@8855: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
--
[JBoss JIRA] (ISPN-2938) LuceneCacheLoader to support filtering for preload of the proper type
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2938?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-2938:
----------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> LuceneCacheLoader to support filtering for preload of the proper type
> ---------------------------------------------------------------------
>
> Key: ISPN-2938
> URL: https://issues.jboss.org/browse/ISPN-2938
> Project: Infinispan
> Issue Type: Feature Request
> Components: Lucene Directory
> Reporter: Sanne Grinovero
> Assignee: Sanne Grinovero
> Labels: stable_embedded_query
> Fix For: 6.0.0.Final
>
>
> We suggest using 3 caches to store the Lucene index, but the design of the LuceneCacheLoader assumes it is connected to a single cache which needs to load all data when preload=true.
> We should add options, or perhaps split it into different implementations, so that only the part relevant to a specifically configured cache is loaded.
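As a sketch of one possible option, a loader could preload only entries whose key types belong to the configured cache (the Lucene Directory uses distinct key classes for metadata, chunks, and locks). All names below are illustrative, not the real Infinispan loader SPI.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: preload only the entries whose key type belongs to the
// configured cache. SimpleEntry and the allowedKeyTypes parameter are
// illustrative names, not the real Infinispan loader SPI.
public class FilteredPreload {

    static final class SimpleEntry {
        final Object key;
        final Object value;
        SimpleEntry(Object key, Object value) { this.key = key; this.value = value; }
    }

    // Keep only entries whose key is an instance of one of the allowed types.
    static List<SimpleEntry> preload(List<SimpleEntry> all, Set<Class<?>> allowedKeyTypes) {
        return all.stream()
                  .filter(e -> allowedKeyTypes.stream().anyMatch(t -> t.isInstance(e.key)))
                  .collect(Collectors.toList());
    }
}
```

A per-cache type filter keeps the three caches from each pulling the whole index into memory on preload.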
--
[JBoss JIRA] (ISPN-2950) In distributed mode cache store data should be read through the main data owner (vs directly from the store)
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2950?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-2950:
----------------------------------
Issue Type: Bug (was: Feature Request)
Priority: Blocker (was: Critical)
> In distributed mode cache store data should be read through the main data owner (vs directly from the store)
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2950
> URL: https://issues.jboss.org/browse/ISPN-2950
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Mircea Markus
> Priority: Blocker
> Labels: onboard
> Fix For: 6.0.0.Final
>
>
> Dist cache with a cache store (shared or not), k owned by \{N1, N2\}, and k read on N3. Currently, if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
> - the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data requested from various nodes at the same time (see also the mailing list discussion on Lucene query performance depending on this)
> - if this is a local cache store, it might contain stale data, which would be returned to the user
> - for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
> This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the key's primary owner to read the value from there. The ClusterCacheLoader should be deprecated as well.
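The proposed read path can be sketched as follows. This is a minimal illustration, not the Infinispan API: the node ids, the owner list, and the `askPrimaryOwner` callback are all assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Minimal illustration of the proposed read path: a node consults its own
// cache store only when it owns the key; a non-owner asks the key's primary
// owner instead of touching a possibly stale local store.
public class OwnerAwareLoad {

    static Object load(String localNode,
                       Object key,
                       List<String> ownersOfKey,          // primary owner first, e.g. [N1, N2]
                       Map<Object, Object> localStore,    // this node's cache store
                       Function<Object, Object> askPrimaryOwner) {
        if (ownersOfKey.contains(localNode)) {
            // An owner may serve the value from its own store.
            return localStore.get(key);
        }
        // A non-owner never reads its local store; it goes to the primary owner.
        return askPrimaryOwner.apply(key);
    }
}
```

In the scenario above, N3 would return the owner's value even when its own local store holds a stale copy of k.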
--
[JBoss JIRA] (ISPN-2950) In distributed mode cache store data should be read through the main data owner (vs directly from the store)
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2950?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-2950:
---------------------------------------
I've changed it to a bug and raised the priority to blocker, as it affects the consistency of loaded data as well.
> In distributed mode cache store data should be read through the main data owner (vs directly from the store)
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2950
> URL: https://issues.jboss.org/browse/ISPN-2950
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Mircea Markus
> Priority: Blocker
> Labels: onboard
> Fix For: 6.0.0.Final
>
>
> Dist cache with a cache store (shared or not), k owned by \{N1, N2\}, and k read on N3. Currently, if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
> - the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data requested from various nodes at the same time (see also the mailing list discussion on Lucene query performance depending on this)
> - if this is a local cache store, it might contain stale data, which would be returned to the user
> - for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
> This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the key's primary owner to read the value from there. The ClusterCacheLoader should be deprecated as well.
--
[JBoss JIRA] (ISPN-2950) In distributed mode cache store data should be read through the main data owner (vs directly from the store)
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-2950?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-2950:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> In distributed mode cache store data should be read through the main data owner (vs directly from the store)
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2950
> URL: https://issues.jboss.org/browse/ISPN-2950
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Mircea Markus
> Priority: Critical
> Labels: onboard
> Fix For: 6.0.0.Final
>
>
> Dist cache with a cache store (shared or not), k owned by \{N1, N2\}, and k read on N3. Currently, if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
> - the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data requested from various nodes at the same time (see also the mailing list discussion on Lucene query performance depending on this)
> - if this is a local cache store, it might contain stale data, which would be returned to the user
> - for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
> This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the key's primary owner to read the value from there. The ClusterCacheLoader should be deprecated as well.
--
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3262:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=976434
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Mircea Markus
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after it has been shut down.
> It might also be considered a problem of the LevelDB implementation that it doesn't guard against this.
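One way to guard against this is a stopped flag checked before touching the underlying database. This is a minimal sketch with illustrative names, not the real LevelDBCacheStore; a real fix would also need to make stop() and in-flight loads mutually exclusive.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative guard (not the real LevelDBCacheStore): once stop() has run,
// loadAllKeys() no longer delegates to the closed database handle, so a late
// caller gets an empty result instead of an NPE.
public class GuardedStore {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    private final Map<String, byte[]> db; // stands in for the LevelDB handle

    GuardedStore(Map<String, byte[]> db) { this.db = db; }

    Set<String> loadAllKeys() {
        if (stopped.get()) {
            // Alternatively throw IllegalStateException so callers fail fast.
            return Collections.emptySet();
        }
        return db.keySet();
    }

    void stop() { stopped.set(true); }
}
```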
> These are the stack traces of the causing events:
> {code}
> DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread main
> org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
> org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
> org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
> org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
> org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
> org.infinispan.CacheImpl.stop(CacheImpl.java:604)
> org.infinispan.CacheImpl.stop(CacheImpl.java:599)
> org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
> org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
> org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> org.testng.TestRunner.privateRun(TestRunner.java:767)
> org.testng.TestRunner.run(TestRunner.java:617)
> org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
> org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
> org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> org.testng.SuiteRunner.run(SuiteRunner.java:240)
> org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> org.testng.TestNG.run(TestNG.java:1057)
> org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread OOB-1,ISPN,NodeC-18285
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> org.jgroups.JChannel.up(JChannel.java:707)
> org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> org.jgroups.protocols.FC.up(FC.java:479)
> org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> org.jgroups.protocols.Discovery.up(Discovery.java:359)
> org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> 2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
> java.lang.NullPointerException
> at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FC.up(FC.java:479)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Michal Linhard (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Michal Linhard updated ISPN-3262:
---------------------------------
Description:
I'm getting an NPE in these tests:
https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
It's caused by a thread asking the cache store to load all keys after it has been shut down.
This could also be considered a problem in the LevelDB implementation, which doesn't guard against this.
These are the stack traces of the events involved:
{code}
DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread main
org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
org.infinispan.CacheImpl.stop(CacheImpl.java:604)
org.infinispan.CacheImpl.stop(CacheImpl.java:599)
org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
org.testng.TestRunner.privateRun(TestRunner.java:767)
org.testng.TestRunner.run(TestRunner.java:617)
org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
org.testng.SuiteRunner.run(SuiteRunner.java:240)
org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
org.testng.TestNG.run(TestNG.java:1057)
org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread OOB-1,ISPN,NodeC-18285
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
org.jgroups.JChannel.up(JChannel.java:707)
org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
org.jgroups.protocols.FC.up(FC.java:479)
org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
org.jgroups.protocols.Discovery.up(Discovery.java:359)
org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)
2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
java.lang.NullPointerException
at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
at org.jgroups.JChannel.up(JChannel.java:707)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FC.up(FC.java:479)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
at org.jgroups.protocols.Discovery.up(Discovery.java:359)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
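One way the store could guard against this, sketched below as a minimal hypothetical example (the class and method names are illustrative, not the actual LevelDBCacheStore API): track the store's lifecycle with an AtomicBoolean and fail fast with a descriptive exception instead of letting the already-closed DbImpl throw an NPE.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical simplification of a cache store with a shutdown guard.
// A real implementation would delegate loadAllKeys() to the LevelDB instance.
class GuardedStore {
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    void stop() {
        // Mark the store as stopped before closing the underlying database,
        // so concurrent loaders see the flag first.
        stopped.set(true);
        // ... close the underlying LevelDB DbImpl here ...
    }

    Set<Object> loadAllKeys() {
        if (stopped.get()) {
            // Fail fast with a clear error instead of an NPE
            // from iterating a closed database.
            throw new IllegalStateException("Cache store is stopped");
        }
        return Collections.emptySet();
    }
}

public class GuardedStoreDemo {
    public static void main(String[] args) {
        GuardedStore store = new GuardedStore();
        store.loadAllKeys(); // fine while the store is running
        store.stop();
        try {
            store.loadAllKeys();
            System.out.println("no guard triggered");
        } catch (IllegalStateException e) {
            System.out.println("guard triggered: " + e.getMessage());
        }
    }
}
```

This only turns the NPE into a clearer failure; the race between the stopping thread and the state-transfer thread would still need to be resolved by the component registry's stop ordering.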
was:
I'm getting an NPE in these tests:
https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
It's caused by a thread asking the cache store to load all keys after it has been shut down.
This could also be considered a problem in the LevelDB implementation, which doesn't guard against this.
These are the stack traces of the events involved:
{code}
DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread main
org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
org.infinispan.CacheImpl.stop(CacheImpl.java:604)
org.infinispan.CacheImpl.stop(CacheImpl.java:599)
org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
org.testng.TestRunner.privateRun(TestRunner.java:767)
org.testng.TestRunner.run(TestRunner.java:617)
org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
org.testng.SuiteRunner.run(SuiteRunner.java:240)
org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
org.testng.TestNG.run(TestNG.java:1057)
org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread OOB-1,ISPN,NodeC-18285
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
org.jgroups.JChannel.up(JChannel.java:707)
org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
org.jgroups.protocols.FC.up(FC.java:479)
org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
org.jgroups.protocols.Discovery.up(Discovery.java:359)
org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)
2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
java.lang.NullPointerException
at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
at org.jgroups.JChannel.up(JChannel.java:707)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FC.up(FC.java:479)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
at org.jgroups.protocols.Discovery.up(Discovery.java:359)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Mircea Markus
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after it has been shut down.
> This could also be considered a problem in the LevelDB implementation, which doesn't guard against this.
> These are the stack traces of the events involved:
> {code}
> DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread main
> org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
> org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
> org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
> org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
> org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
> org.infinispan.CacheImpl.stop(CacheImpl.java:604)
> org.infinispan.CacheImpl.stop(CacheImpl.java:599)
> org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
> org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
> org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> org.testng.TestRunner.privateRun(TestRunner.java:767)
> org.testng.TestRunner.run(TestRunner.java:617)
> org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
> org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
> org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> org.testng.SuiteRunner.run(SuiteRunner.java:240)
> org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> org.testng.TestNG.run(TestNG.java:1057)
> org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread OOB-1,ISPN,NodeC-18285
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> org.jgroups.JChannel.up(JChannel.java:707)
> org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> org.jgroups.protocols.FC.up(FC.java:479)
> org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> org.jgroups.protocols.Discovery.up(Discovery.java:359)
> org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> 2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
> java.lang.NullPointerException
> at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FC.up(FC.java:479)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Michal Linhard (JIRA)
Michal Linhard created ISPN-3262:
------------------------------------
Summary: LevelDB cache store allows loading after shutdown
Key: ISPN-3262
URL: https://issues.jboss.org/browse/ISPN-3262
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Affects Versions: 5.2.7.Final, 5.3.0.Final
Reporter: Michal Linhard
Assignee: Mircea Markus
I'm getting an NPE in these tests:
https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
It's caused by a thread asking the cache store to load all keys after it has been shut down.
This could also be considered a problem in the LevelDB implementation, which doesn't guard against this.
These are the stack traces of the events involved:
{code}
DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread main
org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
org.infinispan.CacheImpl.stop(CacheImpl.java:604)
org.infinispan.CacheImpl.stop(CacheImpl.java:599)
org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
org.testng.TestRunner.privateRun(TestRunner.java:767)
org.testng.TestRunner.run(TestRunner.java:617)
org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
org.testng.SuiteRunner.run(SuiteRunner.java:240)
org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
org.testng.TestNG.run(TestNG.java:1057)
org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
Stack trace for thread OOB-1,ISPN,NodeC-18285
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
org.jgroups.JChannel.up(JChannel.java:707)
org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
org.jgroups.protocols.FC.up(FC.java:479)
org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
org.jgroups.protocols.Discovery.up(Discovery.java:359)
org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)
2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
java.lang.NullPointerException
at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
at org.jgroups.JChannel.up(JChannel.java:707)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FC.up(FC.java:479)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
at org.jgroups.protocols.Discovery.up(Discovery.java:359)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
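One way to close this window is to have the store fail fast once it has been stopped, instead of letting a late caller hit a closed DB handle and NPE. The sketch below is hypothetical and heavily simplified: the class name, the in-memory key set standing in for the LevelDB handle, and the method names are illustrative, not the real Infinispan/LevelDB cache store API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a store that rejects use after stop().
// All names here are illustrative, not the real LevelDBCacheStore API.
class GuardedStore {
    private final Set<String> keys = ConcurrentHashMap.newKeySet();
    private volatile boolean stopped;

    void put(String key) {
        ensureRunning();
        keys.add(key);
    }

    Set<String> loadAllKeys() {
        ensureRunning();   // fail fast with a clear error instead of an NPE
        return Set.copyOf(keys);
    }

    void stop() {
        stopped = true;    // flip the flag before releasing resources,
        keys.clear();      // which stands in for DbImpl.close() here
    }

    private void ensureRunning() {
        if (stopped) {
            throw new IllegalStateException("cache store is stopped");
        }
    }
}
```

A late `loadAllKeys()` from the state-transfer thread would then surface as an `IllegalStateException` naming the real problem, rather than an NPE from deep inside the LevelDB internals; the race itself would still need fixing in the shutdown ordering.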
--
[JBoss JIRA] (ISPN-3157) Realign Infinispan subsystem to match AS7.2/EAP6.1
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3157?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3157:
-----------------------------------------------
Shay Matasaro <smatasar(a)redhat.com> made a comment on [bug 973763|https://bugzilla.redhat.com/show_bug.cgi?id=973763]
Yes, there appears to be a problem with the mead repo and John Casey is looking into it. Will let you know as soon as it is fixed.
Tristan
On 06/20/2013 04:19 PM, Shay Matasaro wrote:
> Hi,
>
> I am trying to build the JDG 6.1.0GA tag server, using this git repo:
>
> git://git.app.eng.bos.redhat.com/srv/git/infinispan/infinispan-server.git
>
> But the builds fail because the source is looking for dependencies ending with 5.2.4-redhat-1, while the on-line brew maven repo only has 5.2.4-redhat-3, and I checked all 6 repos.
>
> Is it a bug in the parent pom, or is there another on-line brew repo?
>
> --
> Thanks,
> Shay
> Realign Infinispan subsystem to match AS7.2/EAP6.1
> --------------------------------------------------
>
> Key: ISPN-3157
> URL: https://issues.jboss.org/browse/ISPN-3157
> Project: Infinispan
> Issue Type: Task
> Components: Server
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Critical
> Fix For: 5.3.0.Final
>
>
--