[JBoss JIRA] (ISPN-2802) Cache recovery fails due to missing responses
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-2802?page=com.atlassian.jira.plugin.... ]
Radim Vansa updated ISPN-2802:
------------------------------
Description:
When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
Here are the logs (TRACE is not doable here, but I added some byteman traces - see topology.btm in the archive): http://dl.dropbox.com/u/103079234/recovery.zip
The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
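The failure mode can be sketched as follows. This is a minimal, self-contained illustration of a coordinator collecting status responses with a timeout, not Infinispan code; all class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: a coordinator waits for one status response per member.
// If any response is lost in transit, the whole recovery times out, which is
// the symptom reported above.
public class StatusRecoverySketch {
    static Map<String, String> awaitStatuses(Map<String, CompletableFuture<String>> pending,
                                             long timeoutMs) throws TimeoutException {
        Map<String, String> statuses = new HashMap<>();
        long deadline = System.currentTimeMillis() + timeoutMs;
        for (Map.Entry<String, CompletableFuture<String>> e : pending.entrySet()) {
            long remaining = deadline - System.currentTimeMillis();
            try {
                // Block until this member's response arrives or the deadline passes.
                statuses.put(e.getKey(),
                             e.getValue().get(Math.max(remaining, 0), TimeUnit.MILLISECONDS));
            } catch (InterruptedException | ExecutionException ex) {
                throw new IllegalStateException(ex);
            }
        }
        return statuses;
    }

    public static void main(String[] args) {
        Map<String, CompletableFuture<String>> pending = new LinkedHashMap<>();
        pending.put("node2", CompletableFuture.completedFuture("OK"));
        pending.put("node3", new CompletableFuture<>()); // response never arrives
        try {
            awaitStatuses(pending, 100);
            System.out.println("recovered");
        } catch (TimeoutException te) {
            System.out.println("recovery timed out"); // what the coordinator observes
        }
    }
}
```

The point of the sketch is that the sender-side traces can show every node executing the command while the collector still times out, because the wait is on the received responses, not on the command executions.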
was:
When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
Here are the logs (TRACE is not doable here, but I added some byteman traces): http://dl.dropbox.com/u/103079234/recovery.zip
The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
> Cache recovery fails due to missing responses
> ---------------------------------------------
>
> Key: ISPN-2802
> URL: https://issues.jboss.org/browse/ISPN-2802
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.CR3
> Reporter: Radim Vansa
> Assignee: Mircea Markus
>
> When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
> Here are the logs (TRACE is not doable here, but I added some byteman traces - see topology.btm in the archive): http://dl.dropbox.com/u/103079234/recovery.zip
> The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
> All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
> JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
> As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
13 years, 2 months
[JBoss JIRA] (ISPN-2802) Cache recovery fails due to missing responses
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-2802?page=com.atlassian.jira.plugin.... ]
Radim Vansa updated ISPN-2802:
------------------------------
Description:
When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
Here are the logs (TRACE is not doable here, but I added some byteman traces): http://dl.dropbox.com/u/103079234/recovery.zip
The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
was:
When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
Here are the logs (TRACE is not doable here, but I added some byteman traces): http://dl.dropbox.com/u/103079234/recovery.zip
The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but not all responses are received on node3.
JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
> Cache recovery fails due to missing responses
> ---------------------------------------------
>
> Key: ISPN-2802
> URL: https://issues.jboss.org/browse/ISPN-2802
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.CR3
> Reporter: Radim Vansa
> Assignee: Mircea Markus
>
> When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
> Here are the logs (TRACE is not doable here, but I added some byteman traces): http://dl.dropbox.com/u/103079234/recovery.zip
> The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
> All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
> JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
> As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
[JBoss JIRA] (ISPN-2802) Cache recovery fails due to missing responses
by Radim Vansa (JIRA)
Radim Vansa created ISPN-2802:
---------------------------------
Summary: Cache recovery fails due to missing responses
Key: ISPN-2802
URL: https://issues.jboss.org/browse/ISPN-2802
Project: Infinispan
Issue Type: Bug
Components: State transfer
Affects Versions: 5.2.0.CR3
Reporter: Radim Vansa
Assignee: Mircea Markus
When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
Here are the logs (TRACE is not doable here, but I added some byteman traces): http://dl.dropbox.com/u/103079234/recovery.zip
The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but not all responses are received on node3.
JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
[JBoss JIRA] (ISPN-2801) Should ByteArrayKey handling in MurmurHash3 be moved to Hot Rod server?
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-2801:
--------------------------------------
Summary: Should ByteArrayKey handling in MurmurHash3 be moved to Hot Rod server?
Key: ISPN-2801
URL: https://issues.jboss.org/browse/ISPN-2801
Project: Infinispan
Issue Type: Task
Reporter: Galder Zamarreño
Assignee: Mircea Markus
Priority: Minor
Fix For: 5.3.0.Final
[12:19] <manik> mmarkus: ok, next question. This looks like a bug.
[12:19] <manik> mmarkus: so on the client, we route requests based on the key, which is in the form of a byte[]
[12:19] <manik> mmarkus: as per org.infinispan.client.hotrod.impl.consistenthash.ConsistentHash#getServer(byte[] key)
[12:20] <mmarkus> manik: yes
[12:20] <manik> mmarkus: and the key is serialised using whatever marshaller is configured on the client
[12:20] <manik> but when that key arrives on the server
[12:20] <manik> it is wrapped into a ByteArrayKey
[12:20] <manik> and that is used on the server to map the entry to a node
[12:21] * vchepeli (chepa653@nat/redhat/x-svfcqlzqtfwwujho) has left #infinispan
[12:21] * papegaaij has quit (Remote host closed the connection)
[12:21] <manik> mmarkus: so potentially what the client sees as data owners and what the server nodes see as data owners are different?
[12:21] <manik> galderz: you got a sec for this?
[12:21] <galderz> sure manik
[12:21] <manik> galderz: pls see my comments to mmarkus above
[12:23] * sannegrinovero (Sanne@redhat/jboss/sannegrinovero) has joined #infinispan
[12:23] * ChanServ gives voice to sannegrinovero
[12:23] <galderz> manik, the hash on the ByteArrayKey is essentially a hash on the contents of the byte array
[12:24] * tsykora (tsykora@nat/redhat/x-eelluveahnctzoui) has joined #infinispan
[12:24] <galderz> to be precise: 41 + Arrays.hashCode(data)
[12:24] <galderz> that is then passed through the MurmurHash
[12:26] <galderz> manik, we handle it:
[12:26] <galderz> else if (o instanceof ByteArrayKey)
[12:26] <galderz> return hash(((ByteArrayKey) o).getData());
[12:27] <galderz> so manik, regardless of whether it's a byte[] or a ByteArrayKey, the hash should be the same, //cc mmarkus
[12:28] <mmarkus> galderz: +1
[12:28] * mmarkus has quit (Quit: Leaving.)
[12:28] <galderz> i'm quite certain we have some tests that verify this - can't remember off the top of my head
[12:28] <manik> galderz: ah, yes this is in the murmurhash3 impl
[12:29] <manik> ok, no wonder I couldn't find it
[12:29] <manik> :)
[12:29] <manik> I just saw that on one side you're calling locate() with a byte[] and on the other, with a ByteArrayKey so that set off an alarm
[12:30] <galderz> manik, you have a point though, it shouldn't be specific to MurmurHash3
[12:39] <manik> galderz: do you want to create a JIRA to refactor this somewhere else? E.g., it makes sense for this logic to actually be in the Hot Rod server code - since that's where ByteArrayKeys are used. :)
[12:40] <galderz> manik, why? clients still need this logic too
[12:40] <manik> galderz: no they dont
[12:40] <galderz> oh, u mean handling BAKs?
[12:40] <manik> yep
[12:40] <galderz> manik, maybe… but did u check the history to see why that was put there?
[12:41] <manik> No - maybe worth a chat with dberindei?
[12:41] <manik> galderz: ^^
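The dispatch discussed in the chat can be sketched as follows: a wrapped key must hash to the same value as the raw byte[], otherwise the client and the server nodes could disagree on data owners. This is a stand-alone illustration; the ByteArrayKey here is a minimal stand-in for the Infinispan class, and the placeholder hash is not the real MurmurHash3.

```java
import java.util.Arrays;

// Sketch of hashing a key consistently whether it arrives as a raw byte[]
// (client side) or wrapped in a ByteArrayKey (server side).
public class KeyHashSketch {
    // Minimal stand-in for org.infinispan.util.ByteArrayKey.
    static final class ByteArrayKey {
        private final byte[] data;
        ByteArrayKey(byte[] data) { this.data = data; }
        byte[] getData() { return data; }
    }

    // Placeholder content hash; the real implementation uses MurmurHash3.
    static int hash(byte[] payload) {
        return Arrays.hashCode(payload);
    }

    // Unwrap before hashing, mirroring the instanceof branch quoted above,
    // so both representations of the same key land on the same hash.
    static int hash(Object o) {
        if (o instanceof byte[])
            return hash((byte[]) o);
        else if (o instanceof ByteArrayKey)
            return hash(((ByteArrayKey) o).getData());
        throw new IllegalArgumentException("unsupported key type: " + o.getClass());
    }

    public static void main(String[] args) {
        byte[] raw = {1, 2, 3};
        System.out.println(hash(raw) == hash(new ByteArrayKey(raw))); // true
    }
}
```

The open question in the chat is only about where this unwrapping branch should live (hash function vs. Hot Rod server), not about its behaviour.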
[JBoss JIRA] (ISPN-2801) Should ByteArrayKey handling in MurmurHash3 be moved to Hot Rod server?
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2801?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño reassigned ISPN-2801:
--------------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
> Should ByteArrayKey handling in MurmurHash3 be moved to Hot Rod server?
> -----------------------------------------------------------------------
>
> Key: ISPN-2801
> URL: https://issues.jboss.org/browse/ISPN-2801
> Project: Infinispan
> Issue Type: Task
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.3.0.Final
>
>
> [12:19] <manik> mmarkus: ok, next question. This looks like a bug.
> [12:19] <manik> mmarkus: so on the client, we route requests based on the key, which is in the form of a byte[]
> [12:19] <manik> mmarkus: as per org.infinispan.client.hotrod.impl.consistenthash.ConsistentHash#getServer(byte[] key)
> [12:20] <mmarkus> manik: yes
> [12:20] <manik> mmarkus: and the key is serialised using whatever marshaller is configured on the client
> [12:20] <manik> but when that key arrives on the server
> [12:20] <manik> it is wrapped into a ByteArrayKey
> [12:20] <manik> and that is used on the server to map the entry to a node
> [12:21] * vchepeli (chepa653@nat/redhat/x-svfcqlzqtfwwujho) has left #infinispan
> [12:21] * papegaaij has quit (Remote host closed the connection)
> [12:21] <manik> mmarkus: so potentially what the client sees as data owners and what the server nodes see as data owners are different?
> [12:21] <manik> galderz: you got a sec for this?
> [12:21] <galderz> sure manik
> [12:21] <manik> galderz: pls see my comments to mmarkus above
> [12:23] * sannegrinovero (Sanne@redhat/jboss/sannegrinovero) has joined #infinispan
> [12:23] * ChanServ gives voice to sannegrinovero
> [12:23] <galderz> manik, the hash on the ByteArrayKey is essentially a hash on the contents of the byte array
> [12:24] * tsykora (tsykora@nat/redhat/x-eelluveahnctzoui) has joined #infinispan
> [12:24] <galderz> to be precise: 41 + Arrays.hashCode(data)
> [12:24] <galderz> that is then passed through the MurmurHash
> [12:26] <galderz> manik, we handle it:
> [12:26] <galderz> else if (o instanceof ByteArrayKey)
> [12:26] <galderz> return hash(((ByteArrayKey) o).getData());
> [12:27] <galderz> so manik, regardless of whether it's a byte[] or a ByteArrayKey, the hash should be the same, //cc mmarkus
> [12:28] <mmarkus> galderz: +1
> [12:28] * mmarkus has quit (Quit: Leaving.)
> [12:28] <galderz> i'm quite certain we have some tests that verify this - can't remember off the top of my head
> [12:28] <manik> galderz: ah, yes this is in the murmurhash3 impl
> [12:29] <manik> ok, no wonder I couldn't find it
> [12:29] <manik> :)
> [12:29] <manik> I just saw that on one side you're calling locate() with a byte[] and on the other, with a ByteArrayKey so that set off an alarm
> [12:30] <galderz> manik, you have a point though, it shouldn't be specific to MurmurHash3
> [12:39] <manik> galderz: do you want to create a JIRA to refactor this somewhere else? E.g., it makes sense for this logic to actually be in the Hot Rod server code - since that's where ByteArrayKeys are used. :)
> [12:40] <galderz> manik, why? clients still need this logic too
> [12:40] <manik> galderz: no they dont
> [12:40] <galderz> oh, u mean handling BAKs?
> [12:40] <manik> yep
> [12:40] <galderz> manik, maybe… but did u check the history to see why that was put there?
> [12:41] <manik> No - maybe worth a chat with dberindei?
> [12:41] <manik> galderz: ^^
[JBoss JIRA] (ISPN-2504) WriteSkew check fails for entries which are inserted first time
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2504?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2504:
--------------------------------
Fix Version/s: 5.2.1
5.3.0.Final
(was: 6.0.0.Final)
> WriteSkew check fails for entries which are inserted first time
> ---------------------------------------------------------------
>
> Key: ISPN-2504
> URL: https://issues.jboss.org/browse/ISPN-2504
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.Beta3
> Reporter: Radim Vansa
> Assignee: Mircea Markus
> Fix For: 5.2.1, 5.3.0.Final
>
>
> If optimistic locking and write skew check are configured and there are two concurrent transactions performing
> {code}
> read(key) -> null
> write(key, value)
> {code}
> one of them should fail (if both read {{null}}). However, both transactions succeed in this case. The reason is that the {{VersionedPrepareCommand}} has a {{null}} version for the key (because the value read was null), but in {{WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions}} there is
> {code}
> EntryVersion versionSeen = prepareCommand.getVersionsSeen().get(k);
> if (versionSeen != null) entry.setVersion(versionSeen);
> {code}
> Because the {{entry}} carries the version injected into the context from the {{dataContainer}} by {{EntryFactoryImpl.wrapInternalCacheEntryForPut}} later, during the {{VersionedPrepareCommand}} execution, and that version is not overwritten by the {{getVersionsSeen()}} value (as this is null), {{performWriteSkewCheck}} does not report this entry as changed.
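The hole described above can be reduced to a few lines. This is an illustrative model of the null-version path, not the Infinispan API; the method and parameter names are invented for the sketch.

```java
import java.util.Objects;

// Minimal model of the reported write-skew hole: when the version seen at read
// time is null (first insert), the check falls back to the version injected at
// prepare time from the data container, so the comparison trivially matches
// and the conflict between two first-insert transactions goes undetected.
public class WriteSkewSketch {
    static boolean passesSkewCheck(Long versionSeen, Long versionInContainer, Long entryVersion) {
        // Mirrors "if (versionSeen != null) entry.setVersion(versionSeen)":
        // with a null versionSeen, the entry keeps the container-injected version.
        Long effective = (versionSeen != null) ? versionSeen : entryVersion;
        return Objects.equals(effective, versionInContainer);
    }

    public static void main(String[] args) {
        // Tx1 committed first and wrote version 1 into the data container.
        Long containerVersion = 1L;
        // Tx2 read the key before it existed (versionSeen == null), but at
        // prepare time its entry was wrapped with the container's current
        // version, so the check compares 1 against 1 and passes.
        boolean tx2Passes = passesSkewCheck(null, containerVersion, containerVersion);
        System.out.println(tx2Passes); // true: the write skew is not detected
    }
}
```

With any non-null version seen at read time, the same comparison would correctly fail for a concurrently updated key; the null case is the only path that slips through.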
[JBoss JIRA] (ISPN-2441) Some core interceptors trigger custom interceptor error message
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2441?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2441:
--------------------------------
Fix Version/s: 5.2.1
5.3.0.Final
(was: 6.0.0.Final)
> Some core interceptors trigger custom interceptor error message
> ---------------------------------------------------------------
>
> Key: ISPN-2441
> URL: https://issues.jboss.org/browse/ISPN-2441
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Fix For: 5.2.1, 5.3.0.Final
>
>
> I'm not sure if this is really a problem or if it's just a superfluous error message, but I'm seeing about 6000 of these during a typical test suite run:
> {noformat}
> ISPN000173: Custom interceptor org.infinispan.interceptors.ActivationInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.CacheMgmtInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.DistCacheStoreInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.InvalidationInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> {noformat}
[JBoss JIRA] (ISPN-2582) When a node rejoins a cluster , the existing node freezes for a minute
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2582?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-2582:
-------------------------------------
Can you please give it a spin with ISPN 5.2/JDG 6.1? The new non-blocking state transfer should have significantly improved this behaviour.
> When a node rejoins a cluster , the existing node freezes for a minute
> ----------------------------------------------------------------------
>
> Key: ISPN-2582
> URL: https://issues.jboss.org/browse/ISPN-2582
> Project: Infinispan
> Issue Type: Bug
> Environment: EAP 6.0.0.GA
> Reporter: Shay Matasaro
> Assignee: Mircea Markus
>
> 1) 2 node cluster
> 2) distributable web app
> 3) run jemeter with 10 threads
> 4) start node 1
> 5) start node 2
> 6) shutdown node 1
> 7) restart node 1
> result:
> node 2 freezes for one minute, then throws exceptions, then continues
> 15:25:19,478 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-5,master:server-two/web) ISPN000172: Failed to prepare view CacheView{viewId=4, members=[master:server-two/web, master:server-one/web]} for cache default-host/demo7, rolling back to view CacheView{viewId=3, members=[master:server-two/web]}: java.util.concurrent.TimeoutException
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-3) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,687 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-6) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,687 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-1) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-9) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,685 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-2) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-5) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,686 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-4) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-10) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,688 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-12) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,696 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-3) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,689 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (ajp-/0.0.0.0:8159-11) ISPN000136: Execution error: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,705 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-4) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,706 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-10) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,699 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-1) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,696 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-6) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,714 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-10) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34869}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8834} org.infinispan.transaction.synchronization.SynchronizationAdapter@8853: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,708 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-3) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34867}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8832} org.infinispan.transaction.synchronization.SynchronizationAdapter@8851: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,717 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-6) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34864}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882f} org.infinispan.transaction.synchronization.SynchronizationAdapter@884e: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,699 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-9) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,704 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-5) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,707 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-12) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,704 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-2) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,722 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-5) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34866}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8831} org.infinispan.transaction.synchronization.SynchronizationAdapter@8850: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,721 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-9) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34870}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8835} org.infinispan.transaction.synchronization.SynchronizationAdapter@8854: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,712 ERROR [org.infinispan.transaction.TransactionCoordinator] (ajp-/0.0.0.0:8159-11) ISPN000097: Error while processing a prepare in a single-phase transaction: org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 4
> 15:25:19,723 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-12) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34868}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8833} org.infinispan.transaction.synchronization.SynchronizationAdapter@8852: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,712 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-4) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34863}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882e} org.infinispan.transaction.synchronization.SynchronizationAdapter@884d: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,724 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-2) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34862}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@882d} org.infinispan.transaction.synchronization.SynchronizationAdapter@884c: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,715 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-1) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34865}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8830} org.infinispan.transaction.synchronization.SynchronizationAdapter@884f: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
> 15:25:19,726 ERROR [org.infinispan.transaction.tm.DummyTransaction] (ajp-/0.0.0.0:8159-11) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, transaction=DummyTransaction{xid=DummyXid{id=34871}, status=3}, lockedKeys=null, backupKeyLocks=null, viewId=3} org.infinispan.transaction.synchronization.SyncLocalTransaction@8836} org.infinispan.transaction.synchronization.SynchronizationAdapter@8855: org.infinispan.CacheException: Could not commit.
> Caused by: javax.transaction.xa.XAException
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2609) Infinispan SpringCache throws java.lang.NullPointerException: Null values are not supported!
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2609?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2609:
--------------------------------
Fix Version/s: 5.2.1
5.3.0.Final
> Infinispan SpringCache throws java.lang.NullPointerException: Null values are not supported!
> --------------------------------------------------------------------------------------------
>
> Key: ISPN-2609
> URL: https://issues.jboss.org/browse/ISPN-2609
> Project: Infinispan
> Issue Type: Bug
> Components: Spring integration
> Affects Versions: 5.1.6.FINAL
> Reporter: Roland Csupor
> Assignee: Mircea Markus
> Fix For: 5.2.1, 5.3.0.Final
>
>
> I'm trying to use Infinispan as a Spring cache, but when my function returns null I get an exception, because Spring tries to cache the null result value:
> {noformat}
> Caused by: java.lang.NullPointerException: Null values are not supported!
> at org.infinispan.CacheImpl.assertKeyValueNotNull(CacheImpl.java:203) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheImpl.put(CacheImpl.java:699) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheImpl.put(CacheImpl.java:694) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheSupport.put(CacheSupport.java:53) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.spring.provider.SpringCache.put(SpringCache.java:83) ~[infinispan-spring-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.springframework.cache.interceptor.CacheAspectSupport.update(CacheAspectSupport.java:390) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> at org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:218) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> at org.springframework.cache.interceptor.CacheInterceptor.invoke(CacheInterceptor.java:66) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> {noformat}
> Did I misconfigure something?
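The stack trace above shows the failure path: Spring's CacheAspectSupport unconditionally calls put() with the method's return value, and Infinispan's CacheImpl.assertKeyValueNotNull rejects the null. Until the integration handles this, a common workaround is to wrap the cache so that null is stored as a sentinel object and unwrapped on read. The sketch below illustrates the technique with a plain ConcurrentHashMap backend; the names (NullTolerantCache, NULL_SENTINEL) are illustrative, not part of the Infinispan or Spring API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the null-sentinel workaround: the backend store never
// sees null, so a store that rejects nulls (like Infinispan's CacheImpl)
// would not throw, yet callers still observe a cached null result.
public class NullTolerantCache {
    private static final Object NULL_SENTINEL = new Object();
    private final Map<Object, Object> backend = new ConcurrentHashMap<>();

    public void put(Object key, Object value) {
        // Replace null with the sentinel before it reaches the backend.
        backend.put(key, value == null ? NULL_SENTINEL : value);
    }

    public Object get(Object key) {
        Object v = backend.get(key);
        // Unwrap the sentinel back to null for callers.
        return v == NULL_SENTINEL ? null : v;
    }

    public boolean containsKey(Object key) {
        // A cached null still counts as a hit, so the expensive method
        // is not re-invoked for keys known to map to null.
        return backend.containsKey(key);
    }

    public static void main(String[] args) {
        NullTolerantCache cache = new NullTolerantCache();
        cache.put("answer", null); // a raw null put would fail on Infinispan 5.1.x
        System.out.println(cache.containsKey("answer")); // true: null hit is cached
        System.out.println(cache.get("answer"));         // null
    }
}
```

This is the same idea later Spring versions adopted natively (an allowNullValues mode that stores a placeholder object), so once the integration supports it the wrapper can be dropped.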