[JBoss JIRA] (ISPN-2802) Cache recovery fails due to missing responses
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-2802?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-2802:
--------------------------------
And which node would that be? I don't feel like looking at 32 logs... :-)
> Cache recovery fails due to missing responses
> ---------------------------------------------
>
> Key: ISPN-2802
> URL: https://issues.jboss.org/browse/ISPN-2802
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.CR3
> Reporter: Radim Vansa
> Assignee: Mircea Markus
>
> When the cache recovery is started, the new coordinator sends CacheTopologyControlCommand.GET_STATUS to all nodes and waits for responses. However, I have a reproducible test-case where it always times out waiting for the responses.
> Here are the logs (TRACE is not doable here, but I added some byteman traces - see topology.btm in the archive): http://dl.dropbox.com/u/103079234/recovery.zip
> The problematic spot is on node3 at 05:37:57 receiving cluster view 34.
> All nodes (except the one which is killed, in this case node1) respond quickly to the GET_STATUS command (see BYTEMAN Receiving - Received pairs, these are bound to command execution in CommandAwareRpcDispatcher), but some responses are not received on node3 (look for Receiving rsp bound to GroupRequest).
> JGroups tracing could be useful here but it is not available (intensive logging often blocks on internal log4j locks and the node becomes unresponsive).
> As mentioned above, the case is reproducible, therefore if you can suggest any particular BYTEMAN hook, I can try it.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
13 years, 1 month
[JBoss JIRA] (ISPN-2802) Cache recovery fails due to missing responses
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-2802?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-2802:
-----------------------------------
I have rerun it with the reaper disabled and verbose UNICAST2 logging. It appears that the node which sent the GET_STATUS repeatedly sends XMIT_REQs; the other nodes respond, but the retransmitted message is never actually received. Here are the logs: http://dl.dropbox.com/u/103079234/recovery2.zip
[JBoss JIRA] (ISPN-2811) cassandraStore xml configuration gives parser error on attributes "username" and "password"
by Giovanni Mels (JIRA)
Giovanni Mels created ISPN-2811:
-----------------------------------
Summary: cassandraStore xml configuration gives parser error on attributes "username" and "password"
Key: ISPN-2811
URL: https://issues.jboss.org/browse/ISPN-2811
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Affects Versions: 5.2.0.Final
Reporter: Giovanni Mels
Assignee: Mircea Markus
This is because attributes "username" and "password" are in uppercase in [org.infinispan.loaders.cassandra.configuration.Attribute|https://github.c...], but in lowercase in the [schema|http://docs.jboss.org/infinispan/schemas/infinispan-cachestore-cas...].
{quote}
org.infinispan.config.ConfigurationException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[37,142]
Message: Unexpected attribute 'password' encountered
at org.infinispan.configuration.parsing.ParseUtils.unexpectedAttribute(ParseUtils.java:76)
at org.infinispan.configuration.parsing.Parser52.parseCommonStoreAttributes(Parser52.java:696)
at org.infinispan.loaders.cassandra.configuration.CassandraCacheStoreConfigurationParser52.parseCassandraStoreAttributes(CassandraCacheStoreConfigurationParser52.java:180)
at org.infinispan.loaders.cassandra.configuration.CassandraCacheStoreConfigurationParser52.parseCassandraStore(CassandraCacheStoreConfigurationParser52.java:77)
at org.infinispan.loaders.cassandra.configuration.CassandraCacheStoreConfigurationParser52.readElement(CassandraCacheStoreConfigurationParser52.java:65)
at org.infinispan.loaders.cassandra.configuration.CassandraCacheStoreConfigurationParser52.readElement(CassandraCacheStoreConfigurationParser52.java:43)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110)
at org.jboss.staxmapper.XMLExtendedStreamReaderImpl.handleAny(XMLExtendedStreamReaderImpl.java:69)
at org.infinispan.configuration.parsing.Parser52.parseLoaders(Parser52.java:588)
at org.infinispan.configuration.parsing.Parser52.parseCache(Parser52.java:180)
at org.infinispan.configuration.parsing.Parser52.parseDefaultCache(Parser52.java:145)
at org.infinispan.configuration.parsing.Parser52.readElement(Parser52.java:98)
at org.infinispan.configuration.parsing.Parser52.readElement(Parser52.java:75)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110)
at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69)
at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:77)
... 28 more
{quote}
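The case mismatch described above can be illustrated with a minimal sketch of the usual enum-based attribute lookup. The enum below is illustrative only, not the real org.infinispan.loaders.cassandra.configuration.Attribute class; it shows how registering XML names in a different case than the schema declares makes the case-sensitive lookup miss and the parser report an unexpected attribute.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical reconstruction of the lookup pattern behind the error.
enum Attribute {
    USERNAME("username"),
    PASSWORD("password"),
    UNKNOWN(null);

    private static final Map<String, Attribute> BY_NAME = new HashMap<>();
    static {
        for (Attribute a : values())
            if (a.xmlName != null) BY_NAME.put(a.xmlName, a);
    }

    private final String xmlName;
    Attribute(String xmlName) { this.xmlName = xmlName; }

    // Case-sensitive lookup: if the enum had registered "USERNAME" while the
    // schema declares "username", every document using the schema's spelling
    // would resolve to UNKNOWN and trigger "Unexpected attribute".
    static Attribute forName(String localName) {
        return BY_NAME.getOrDefault(localName, UNKNOWN);
    }
}
```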
[JBoss JIRA] (ISPN-2609) Infinispan SpringCache throws java.lang.NullPointerException: Null values are not supported!
by Marius Bogoevici (JIRA)
[ https://issues.jboss.org/browse/ISPN-2609?page=com.atlassian.jira.plugin.... ]
Marius Bogoevici commented on ISPN-2609:
----------------------------------------
So, the problem *is* that Infinispan doesn't actually support {code}cache.put(key, null){code}, and nulls have to be stored wrapped in the cache.
As a fix, we could apply Roland's change directly in SpringCache for 5.3.0. For previous versions, users would have to implement their own wrapper code (as Roland did) to work around storing nulls in the cache.
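The wrapper approach can be sketched as follows. NullValue and NullTolerantCache are illustrative names, not the actual Infinispan Spring-integration API, and the ConcurrentMap stands in for the real cache: nulls are stored as a marker object on put and unwrapped on get.

```java
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Marker object stored in place of null, since the cache rejects null values.
final class NullValue implements Serializable {
    static final NullValue INSTANCE = new NullValue();
    private NullValue() {}
}

class NullTolerantCache {
    private final ConcurrentMap<Object, Object> delegate = new ConcurrentHashMap<>();

    void put(Object key, Object value) {
        // Substitute the marker so the underlying cache never sees null.
        delegate.put(key, value != null ? value : NullValue.INSTANCE);
    }

    Object get(Object key) {
        Object v = delegate.get(key);
        return v instanceof NullValue ? null : v; // unwrap the marker on read
    }
}
```

With this wrapper, Spring's cache interceptor can store a null method result without tripping the assertKeyValueNotNull check.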
> Infinispan SpringCache throws java.lang.NullPointerException: Null values are not supported!
> --------------------------------------------------------------------------------------------
>
> Key: ISPN-2609
> URL: https://issues.jboss.org/browse/ISPN-2609
> Project: Infinispan
> Issue Type: Bug
> Components: Spring integration
> Affects Versions: 5.1.6.FINAL
> Reporter: Roland Csupor
> Assignee: Mircea Markus
> Fix For: 5.2.2, 5.3.0.Final
>
>
> I am trying to use Infinispan as a Spring cache, but if my function returns null, I get an exception, because Spring tries to cache the result value:
> {noformat}
> Caused by: java.lang.NullPointerException: Null values are not supported!
> at org.infinispan.CacheImpl.assertKeyValueNotNull(CacheImpl.java:203) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheImpl.put(CacheImpl.java:699) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheImpl.put(CacheImpl.java:694) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.CacheSupport.put(CacheSupport.java:53) ~[infinispan-core-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.infinispan.spring.provider.SpringCache.put(SpringCache.java:83) ~[infinispan-spring-5.1.6.FINAL.jar:5.1.6.FINAL]
> at org.springframework.cache.interceptor.CacheAspectSupport.update(CacheAspectSupport.java:390) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> at org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:218) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> at org.springframework.cache.interceptor.CacheInterceptor.invoke(CacheInterceptor.java:66) ~[spring-context-3.1.2.RELEASE.jar:3.1.2.RELEASE]
> {noformat}
> Did I misconfigure something?
[JBoss JIRA] (ISPN-777) Race conditions in cleaning up stale locks held by dead members in a cluster
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-777?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-777:
-------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> Race conditions in cleaning up stale locks held by dead members in a cluster
> ----------------------------------------------------------------------------
>
> Key: ISPN-777
> URL: https://issues.jboss.org/browse/ISPN-777
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 4.2.0.BETA1
> Reporter: Vladimir Blagojevic
> Assignee: Mircea Markus
> Priority: Critical
> Fix For: 5.2.2, 5.3.0.Final
>
> Attachments: CacheScheduledCounter.java, ISPN-777_output.txt
>
>
> It seems that rollback sometimes does not release acquired eager locks. See the attached test program and run two JVM instances on the same machine. The program schedules a task to run every 5 seconds. Each task simply locks a key, gets the value, increments it and puts it back, within begin/commit/rollback transaction boundaries.
> Steps to reproduce (keep repeating steps until problem is encountered):
> 1) Kill one running instance.
> 2) Restart it
> See attached example output of a run.
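The task described above follows this shape. This is an illustrative sketch, not the attached CacheScheduledCounter itself; plain-Java stand-ins replace the Infinispan cache and the eager cluster-wide lock, but the lock/read/increment/write/release sequence is the same.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class CounterTaskSketch {
    private final Map<String, Integer> cache = new HashMap<>();
    private final ReentrantLock lock = new ReentrantLock(); // stands in for the eager lock

    void increment(String key) {
        lock.lock();                                // eager lock inside tm.begin()
        try {
            Integer v = cache.getOrDefault(key, 0); // read
            cache.put(key, v + 1);                  // increment and write back
        } finally {
            lock.unlock();                          // commit/rollback releases the lock;
                                                    // a member killed before reaching here
                                                    // leaves exactly the stale lock this
                                                    // issue is about cleaning up
        }
    }

    int get(String key) {
        return cache.getOrDefault(key, 0);
    }
}
```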
[JBoss JIRA] (ISPN-962) Entries not committed w/ DistLockingInterceptor and L1 caching disabled.
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-962?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-962:
-------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> Entries not committed w/ DistLockingInterceptor and L1 caching disabled.
> ------------------------------------------------------------------------
>
> Key: ISPN-962
> URL: https://issues.jboss.org/browse/ISPN-962
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Locking and Concurrency
> Affects Versions: 4.2.0.Final
> Reporter: Shane Johnson
> Assignee: Dan Berindei
> Labels: Invalidation, Rehash
> Fix For: 5.2.2, 5.3.0.Final
>
>
> If you choose to disable the L1 cache (enabled=false AND onRehash=false) in distributed mode, the DistLockingInterceptor will NOT commit any invalidations due to a rehash back to the data container.
> The problem is in the commitEntry method.
> {code:java}
> boolean doCommit = true;
> if (!dm.isLocal(entry.getKey())) {
>    if (configuration.isL1CacheEnabled()) {
>       dm.transformForL1(entry);
>    } else {
>       doCommit = false;
>    }
> }
> if (doCommit)
>    entry.commit(dataContainer);
> else
>    entry.rollback();
> {code}
> For most commands, dm.isLocal returns TRUE and so the execution proceeds to commit. However, invalidation commands are unique in that they are executed on a remote node even though that node is NOT the owner of the entry. For that reason, the dm.isLocal returns FALSE and the execution proceeds to the L1 cache enabled check. If the L1 cache is disabled, the execution proceeds to set doCommit to false and rollback the invalidation.
> We have temporarily fixed this by updating the else block to check and see if the entry has been removed. If it has not, we set doCommit to false like it does now. Otherwise, we set it to true.
> To be honest, that was a safeguard in case we are missing something. I'm still not sure why we would ever want to set doCommit to false just because the L1 cache has been disabled. However, this change has fixed our problem with entries not being deleted from the original owners on a rehash.
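The workaround described can be reduced to the commit decision alone. In this sketch the Infinispan types are collapsed to booleans (isLocal, l1Enabled and entryRemoved stand in for dm.isLocal(...), configuration.isL1CacheEnabled() and a removed-entry check); it is not the real DistLockingInterceptor code.

```java
public class CommitDecision {
    static boolean shouldCommit(boolean isLocal, boolean l1Enabled, boolean entryRemoved) {
        boolean doCommit = true;
        if (!isLocal) {
            if (l1Enabled) {
                // the original code calls dm.transformForL1(entry) here
            } else {
                // workaround: still commit removals, so invalidations executed
                // on non-owner nodes take effect even with L1 disabled
                doCommit = entryRemoved;
            }
        }
        return doCommit;
    }
}
```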
[JBoss JIRA] (ISPN-1990) Preload sets the versions to null (repeatable read + write skew)
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1990?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1990:
--------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> Preload sets the versions to null (repeatable read + write skew)
> ----------------------------------------------------------------
>
> Key: ISPN-1990
> URL: https://issues.jboss.org/browse/ISPN-1990
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.1.3.FINAL
> Environment: Java 6 (64bits)
> Infinispan 5.2.0-SNAPSHOT
> MacOS
> Reporter: Pedro Ruivo
> Assignee: Galder Zamarreño
> Labels: preload, skew, versioning, write
> Fix For: 5.2.2, 5.3.0.Final
>
>
> I think I've spotted an issue when I use repeatable read with write skew check and I preload the cache.
>
> I've made a test case to reproduce the bug. It can be found here [1].
> The problem is that each preloaded key is put in the container with version = null. When I try to commit a transaction, I get this exception:
>
> {code}
> java.lang.IllegalStateException: Entries cannot have null versions!
> at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:44)
> at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:81)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AllNodesLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:133)
> at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:64)
> {code}
>
> I think that all info is in the test case, but if you need something let
> me know.
>
> Cheers,
> Pedro
> [1]
> https://github.com/pruivo/infinispan/blob/issue_1/core/src/test/java/org/...
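The failure mode can be shown in miniature: the write-skew check needs the version recorded when the entry was read, and preloaded entries carrying a null version make it throw before any comparison happens. Names and signatures here are illustrative, not the real ClusteredRepeatableReadEntry API.

```java
public class WriteSkewNullVersion {
    static void performWriteSkewCheck(Long entryVersion, long versionSeenAtRead) {
        // A preloaded entry arrives with a null version, so this guard fires
        // on the first transactional write after preload.
        if (entryVersion == null)
            throw new IllegalStateException("Entries cannot have null versions!");
        if (entryVersion != versionSeenAtRead)
            throw new IllegalStateException("Write skew detected");
    }
}
```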
[JBoss JIRA] (ISPN-1586) inconsistent cache data in replication cluster with local (not shared) cache store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1586?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1586:
--------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> inconsistent cache data in replication cluster with local (not shared) cache store
> ----------------------------------------------------------------------------------
>
> Key: ISPN-1586
> URL: https://issues.jboss.org/browse/ISPN-1586
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.0.0.FINAL, 5.1.0.CR1
> Environment: ISPN 5.0.0.Final and ISPN 5.1 snapshot
> Java 1.7
> Linux Cent OS
> Reporter: dex chen
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.2, 5.3.0.Final
>
>
> I reran my test (an embedded ISPN cluster) with ISPN 5.0.0.Final and 5.1 snapshot code.
> It is configured for "replication", using a local cache store, with preload=true and purgeOnStartup=false (see the whole config below).
> I will get the inconsistent data among the nodes in the following scenario:
> 1) start 2 node cluster
> 2) after the cluster is formed, add some data to the cache
> k1-->v1
> k2-->v2
> I will see the data replication working perfectly at this point.
> 3) bring node 2 down
> 4) delete entry k1-->v1 through node1
> Note: At this point, on the local (persistent) cache store on the node2 have 2 entries.
> 5) start node2, and wait to join the cluster
> 6) after state merging, you will see that node1 has 1 entry and node2 has 2 entries.
> I am expecting that the data should be consistent across the cluster.
> Here is the infinispan config:
> {code:xml}
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
> xmlns="urn:infinispan:config:5.0">
> <global>
> <transport clusterName="demoCluster"
> machineId="node1"
> rackId="r1" nodeName="dexlaptop"
> >
> <properties>
> <property name="configurationFile" value="./jgroups-tcp.xml" />
> </properties>
> </transport>
> <globalJmxStatistics enabled="true"/>
> </global>
> <default>
> <locking
> isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="5000"
> useLockStriping="false"
> />
> <jmxStatistics enabled="true"/>
> <clustering mode="replication">
> <stateRetrieval
> timeout="240000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="false"
> />
> <!--
> Network calls are synchronous.
> -->
> <sync replTimeout="20000"/>
> </clustering>
> <loaders
> passivation="false"
> shared="false"
> preload="true">
> <loader
> class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
> fetchPersistentState="true"
> purgeOnStartup="false">
> <!-- set to true for nodes other than the first in the cluster in testing/demo -->
> <properties>
> <property name="stringsTableNamePrefix" value="ISPN_STRING_TABLE"/>
> <property name="idColumnName" value="ID_COLUMN"/>
> <property name="dataColumnName" value="DATA_COLUMN"/>
> <property name="timestampColumnName" value="TIMESTAMP_COLUMN"/>
> <property name="timestampColumnType" value="BIGINT"/>
> <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
> <property name="connectionUrl" value="jdbc:h2:file:/var/tmp/h2cachestore;DB_CLOSE_DELAY=-1"/>
> <property name="userName" value="sa"/>
> <property name="driverClass" value="org.h2.Driver"/>
> <property name="idColumnType" value="VARCHAR(255)"/>
> <property name="dataColumnType" value="BINARY"/>
> <property name="dropTableOnExit" value="false"/>
> <property name="createTableOnStart" value="true"/>
> </properties>
> <!--
> <async enabled="false" />
> -->
> </loader>
> </loaders>
> </default>
> </infinispan>
> {code}
> Basically, the current ISPN state transfer implementation results in data inconsistency among nodes in replication mode when each node has a local cache store.
> I found that BaseStateTransferManagerImpl's applyState() does not remove stale data from the local cache store, resulting in inconsistent data when a node joins a cluster:
> Here is a code snippet of applyState():
> {code:java}
> public void applyState(Collection<InternalCacheEntry> state,
> Address sender, int viewId) throws InterruptedException {
> .....
>
> for (InternalCacheEntry e : state) {
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> PutKeyValueCommand put = cf.buildPutKeyValueCommand(e.getKey(), e.getValue(), e.getLifespan(), e.getMaxIdle(), ctx.getFlags());
> interceptorChain.invoke(ctx, put);
> } catch (Exception ee) {
> log.problemApplyingStateForKey(ee.getMessage(), e.getKey());
> }
> }
>
> ...
> }
> {code}
> As we can see, the code basically tries to add all data entries received from the cluster (the other node). Hence, it does not know that entries which still exist in its local cache store were previously deleted from the cluster. This is exactly my test case (my configuration is that each node has its own cache store, in replication mode).
> To fix this, we need to delete any entries from the local cache/cache store which no longer exist in the new state.
> I modified the above method by adding the following code before put loop, and it fixed the problem in my configuration:
> {code:java}
> // Remove entries which no longer exist in the new state from the local cache/cache store
> for (InternalCacheEntry ie: dataContainer.entrySet()) {
>
> if (!state.contains(ie)) {
> log.debug("Trying to delete local store entry that no longer exists in the new state: " + ie.getKey());
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> RemoveCommand remove = cf.buildRemoveCommand(ie.getKey(), ie.getValue(), ctx.getFlags());
> interceptorChain.invoke(ctx, remove);
> dataContainer.remove(ie.getKey());
> } catch (Exception ee) {
> log.error("failed to delete local store entry", ee);
> }
> }
> }
> ...
> {code}
> Obviously, the above "fix" is based on the assumption/configuration that the dataContainer holds all local entries, i.e., preload=true, no eviction, replication mode.
> The real fix, I think, is to delegate syncState(state) to the cache store implementation, where we can check the configuration and do the right thing.
> For example, in the cache store impl, we could calculate the changes based on the local data and the new state, and apply the changes there.
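The delegated sync suggested in the report could look like the following sketch: the store compares its local contents with the incoming state, removes keys the cluster no longer has, then applies the new entries. syncState is an assumed method name, not an existing Infinispan API, and plain Maps stand in for the store and the transferred state.

```java
import java.util.Map;

public class StoreSyncSketch {
    static <K, V> void syncState(Map<K, V> localStore, Map<K, V> newState) {
        // Drop entries deleted from the cluster while this node was down...
        localStore.keySet().retainAll(newState.keySet());
        // ...then apply the state received from the other nodes.
        localStore.putAll(newState);
    }
}
```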
[JBoss JIRA] (ISPN-1896) ClusteredGetCommands should never fail with a SuspectException
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1896?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1896:
--------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> ClusteredGetCommands should never fail with a SuspectException
> --------------------------------------------------------------
>
> Key: ISPN-1896
> URL: https://issues.jboss.org/browse/ISPN-1896
> Project: Infinispan
> Issue Type: Bug
> Components: RPC
> Affects Versions: 5.1.6.FINAL
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 5.2.2, 5.3.0.Final
>
>
> I have seen this exception in the core test suite logs:
> {noformat}
> 2012-03-02 15:07:19,718 ERROR (testng-VersionedDistStateTransferTest) [org.infinispan.test.fwk.UnitTestTestNGListener] Method testStateTransfer(org.infinispan.container.versioning.VersionedDistStateTransferTest) threw an exception: org.infinispan.CacheException: SuspectedException
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:524)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:168)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:478)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
> at org.infinispan.distribution.DistributionManagerImpl.retrieveFromRemoteSource(DistributionManagerImpl.java:169)
> at org.infinispan.interceptors.DistributionInterceptor.realRemoteGet(DistributionInterceptor.java:212)
> at org.infinispan.interceptors.DistributionInterceptor.remoteGetAndStoreInL1(DistributionInterceptor.java:194)
> at org.infinispan.interceptors.DistributionInterceptor.remoteGetBeforeWrite(DistributionInterceptor.java:440)
> at org.infinispan.interceptors.DistributionInterceptor.handleWriteCommand(DistributionInterceptor.java:455)
> at org.infinispan.interceptors.DistributionInterceptor.visitPutKeyValueCommand(DistributionInterceptor.java:274)
> ...
> Caused by: SuspectedException
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:349)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:263)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:163)
> ... 60 more
> {noformat}
> The remote get command should return null instead of failing, even if it had a single target node.
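The proposed behavior can be sketched as follows: treat a suspected target as "no answer" and fall through to a cache miss instead of propagating the exception. The Callable list stands in for the real RPC targets; this is not the actual RpcManager or CommandAwareRpcDispatcher API.

```java
import java.util.List;
import java.util.concurrent.Callable;

public class RemoteGetSketch {
    static Object remoteGet(List<Callable<Object>> targets) {
        for (Callable<Object> target : targets) {
            try {
                Object v = target.call();
                if (v != null) return v;   // first real answer wins
            } catch (Exception suspected) {
                // Target left the cluster mid-call: skip it rather than
                // rethrowing as a SuspectException.
            }
        }
        return null; // nobody answered: behave like a miss, as proposed
    }
}
```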
[JBoss JIRA] (ISPN-1841) Write skew checks are performed on all entries in a transaction context
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1841?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1841:
--------------------------------
Fix Version/s: 5.2.2
(was: 5.2.1)
> Write skew checks are performed on all entries in a transaction context
> -----------------------------------------------------------------------
>
> Key: ISPN-1841
> URL: https://issues.jboss.org/browse/ISPN-1841
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.1.6.FINAL
> Reporter: Manik Surtani
> Assignee: Mircea Markus
> Fix For: 5.2.2, 5.3.0.Final
>
>
> They should only be performed on entries that are read first and then updated. The current implementation doesn't cause any correctness problems; however, it performs unnecessary processing, and certain transactions may abort unnecessarily if, for example, an entry is read but not written to, and the entry changes before the transaction commits.
> From Pedro Ruivo's email to infinispan-dev, where this was reported:
> {quote}
> I've noticed that in the last version (5.1.x) the write skew check is
> performed on all keys written. However, from your documentation [1] I
> understood that the write skew was meant to be performed only on the
> written keys that were previously read.
> Is this change intentional?
> Cheers,
> Pedro Ruivo
> [1] https://docs.jboss.org/author/display/ISPN51/Data+Versioning
> "Write skew checks are performed at prepare-time to ensure a concurrent
> transaction hasn't modified an entry while it was read and potentially
> updated based on the value read."
> {quote}
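The intended rule from the description reduces to a set intersection: check only the keys that were both read and later written in the same transaction, not every written key. Method and set names below are illustrative, not the actual write-skew helper API.

```java
import java.util.HashSet;
import java.util.Set;

public class WriteSkewScope {
    static Set<String> keysToCheck(Set<String> readKeys, Set<String> writtenKeys) {
        Set<String> check = new HashSet<>(writtenKeys);
        check.retainAll(readKeys); // keep only keys written AND previously read
        return check;
    }
}
```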