[JBoss JIRA] (ISPN-3635) Out of date read after write on node losing ownership
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3635?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3635:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1019742|https://bugzilla.redhat.com/show_bug.cgi?id=1019742] from POST to ON_QA
> Out of date read after write on node losing ownership
> -----------------------------------------------------
>
> Key: ISPN-3635
> URL: https://issues.jboss.org/browse/ISPN-3635
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, State transfer
> Affects Versions: 5.3.0.Final
> Reporter: Radim Vansa
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> When a node is losing ownership of an entry (during a state transfer) and performs a write (and commits it), the change is propagated only to the new owners; the entry is not written locally. However, when the node executes a read for this key afterwards, it gets the old value, because the value is retrieved directly from the local data container.
> This bug was observed in transactional mode, but might not be limited to it.
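To make the scenario concrete, here is a minimal illustrative sketch (not taken from the report) of the read-after-write pattern described above; the cache name, key and values are made up, and the state transfer is assumed to move ownership of the key away from this node between the two transactions.
{code}
import javax.transaction.TransactionManager;

import org.infinispan.Cache;

// Hypothetical reproduction sketch: "cache" is a transactional, distributed
// cache on a node that is currently losing ownership of "key" during a rebalance.
void staleReadAfterWrite(Cache<String, String> cache) throws Exception {
   TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

   tm.begin();
   cache.put("key", "v2");   // propagated only to the new owners,
   tm.commit();              // the local data container keeps the old value

   tm.begin();
   String value = cache.get("key");   // served from the local data container
   tm.commit();

   // Expected "v2"; per this report the node still sees the old value.
   assert "v2".equals(value);
}
{code}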
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3613) Stored entries are deleted from table in rebalance
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3613?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3613:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 989927|https://bugzilla.redhat.com/show_bug.cgi?id=989927] from NEW to ON_QA
> Stored entries are deleted from table in rebalance
> --------------------------------------------------
>
> Key: ISPN-3613
> URL: https://issues.jboss.org/browse/ISPN-3613
> Project: Infinispan
> Issue Type: Bug
> Reporter: Mircea Markus
> Assignee: William Burns
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> Description of problem:
> When passivation is set to false, stored entries are deleted from the table during rebalance.
> clustered.xml
> ------------
> <distributed-cache name="myCache" mode="SYNC" start="EAGER">
> <locking isolation="READ_COMMITTED" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <eviction strategy="LIRS" max-entries="10000"/>
> <string-keyed-jdbc-store datasource="java:jboss/datasources/InfinispanDS" passivation="false" preload="true" purge="false" shared="true" fetch-state="false">
> ...
> Version-Release number of selected component (if applicable):
> JDG 6.1
> How reproducible:
> I will attach the clustered.xml and trace logs.
> Steps to Reproduce:
> 1. Start node1
> 2. Put 300 entries
> 3. Start node2
>    Check entries: select count(*) from table; returns 300
> 4. Start node3
>    Check entries: select count(*) from table; returns 0
> Actual results:
> In step 4, the DB table contains 0 entries.
> Expected results:
> In step 4, the DB table still contains 300 entries.
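For illustration only, a rough sketch of the row count check used in the steps above, done with plain JDBC; the JDBC URL and table name are placeholders and the node startup itself is left out.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical helper counting the rows backing the shared, non-passivating
// string-keyed JDBC store; URL and table name are placeholders.
static long countStoreRows(String jdbcUrl, String table) throws Exception {
   try (Connection c = DriverManager.getConnection(jdbcUrl);
        Statement s = c.createStatement();
        ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM " + table)) {
      rs.next();
      return rs.getLong(1);
   }
}

// Matching the steps above (node startup elided):
//   after node2 joins -> countStoreRows(url, table) should stay at 300
//   after node3 joins -> observed 0, expected 300
{code}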
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3704) StateTransfer's PutKeyValueCommand may trigger remote get
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-3704?page=com.atlassian.jira.plugin.... ]
Radim Vansa updated ISPN-3704:
------------------------------
Description:
In TX mode with write skew check enabled, puts executed by state transfer (ST) may trigger a remote get.
The condition in TxDistributionInterceptor.handleTxWriteCommand should probably be switched from
{code}
if (ctx.isOriginLocal() && !skipRemoteGet || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command))
remoteGetBeforeWrite(ctx, command, recipientGenerator);
{code}
to
{code}
if (!skipRemoteGet && (ctx.isOriginLocal() || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command)))
remoteGetBeforeWrite(ctx, command, recipientGenerator);
{code}
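The difference matters because && binds tighter than ||: in the original expression the write-skew-check term alone forces the remote get, regardless of skipRemoteGet or where the command originated, which is exactly the state-transfer case (assuming the ST put carries the skip-remote-get flag). A minimal sketch with plain booleans standing in for the real calls:
{code}
// Illustrative only: the four booleans stand in for ctx.isOriginLocal(),
// skipRemoteGet, command.isConditional() and the write-skew fetch check.
// Values chosen for a put applied by state transfer on a remote node.
boolean originLocal = false, skipRemoteGet = true, conditional = false, wsCheck = true;

boolean oldCondition = originLocal && !skipRemoteGet || conditional || wsCheck;    // true  -> remote get issued
boolean newCondition = !skipRemoteGet && (originLocal || conditional || wsCheck);  // false -> remote get skipped
{code}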
EDIT:
I have also observed a situation where the Prepare/Commit command was executed remotely from within the ST because the topology changed during the remote get. This should be avoided as well.
was:
In TX mode with write skew check on, ST executing puts may trigger a remote get.
The condition in TxDistributionInterceptor.handleTxWriteCommand should probably be switched from
{code}
if (ctx.isOriginLocal() && !skipRemoteGet || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command))
remoteGetBeforeWrite(ctx, command, recipientGenerator);
{code}
to
{code}
if (!skipRemoteGet && (ctx.isOriginLocal() || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command)))
remoteGetBeforeWrite(ctx, command, recipientGenerator);
{code}
> StateTransfer's PutKeyValueCommand may trigger remote get
> ---------------------------------------------------------
>
> Key: ISPN-3704
> URL: https://issues.jboss.org/browse/ISPN-3704
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 6.0.0.CR1
> Reporter: Radim Vansa
> Assignee: Mircea Markus
> Priority: Critical
>
> In TX mode with write skew check enabled, puts executed by state transfer (ST) may trigger a remote get.
> The condition in TxDistributionInterceptor.handleTxWriteCommand should probably be switched from
> {code}
> if (ctx.isOriginLocal() && !skipRemoteGet || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command))
> remoteGetBeforeWrite(ctx, command, recipientGenerator);
> {code}
> to
> {code}
> if (!skipRemoteGet && (ctx.isOriginLocal() || command.isConditional() || shouldFetchRemoteValuesForWriteSkewCheck(ctx, command)))
> remoteGetBeforeWrite(ctx, command, recipientGenerator);
> {code}
> EDIT:
> I have also observed a situation where the Prepare/Commit command was executed remotely from within the ST because the topology changed during the remote get. This should be avoided as well.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3354) Multiple events on the local node with Infinispan 5.3.0-final
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3354?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3354:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1024942|https://bugzilla.redhat.com/show_bug.cgi?id=1024942] from NEW to POST
> Multiple events on the local node with Infinispan 5.3.0-final
> -------------------------------------------------------------
>
> Key: ISPN-3354
> URL: https://issues.jboss.org/browse/ISPN-3354
> Project: Infinispan
> Issue Type: Bug
> Components: Listeners
> Affects Versions: 5.3.0.Final
> Reporter: Luca Zenti
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: 620
> Fix For: 6.1.0.Final
>
> Attachments: TestInfinispanDuplicatedEvents.java
>
>
> After upgrading to Infinispan 5.3.0-final I found a strange "intermittent" problem in my application. Digging a bit deeper, I found out it is caused by CacheEntry events being raised twice for some keys on the local node (the node where the cache operation is invoked).
> I was able to reproduce the problem and I wrote the attached test case.
> The problem happens regardless of the cluster mode, but only with non-transactional caches. I think this is due to the fact that with transactional caches the events are raised on commit.
> Also, my application used to work with an interceptor rather than an event listener, so I actually found the problem when I noticed my interceptor occasionally being executed 3 times with 2 nodes.
> I'm not sure whether the command and the interceptor chain are really meant to be executed twice on the local node, but the resulting behaviour of the events sounds like a bug.
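For context, a minimal sketch of the kind of local listener the attached test case presumably registers (the class below is illustrative, not taken from the attachment); with this bug, the per-key counter ends up at 2 on the originating node after a single put on a non-transactional cache.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

// Hypothetical listener counting post-phase creation events per key.
@Listener
public class CreationCounter {
   private final Map<Object, AtomicInteger> counts = new ConcurrentHashMap<>();

   @CacheEntryCreated
   public void onCreate(CacheEntryCreatedEvent<Object, Object> event) {
      if (!event.isPre()) {
         counts.computeIfAbsent(event.getKey(), k -> new AtomicInteger()).incrementAndGet();
      }
   }

   public int countFor(Object key) {
      AtomicInteger c = counts.get(key);
      return c == null ? 0 : c.get();
   }
}

// Register with: cache.addListener(new CreationCounter());
// After a single cache.put(k, v) the count for k should be 1, but is reported as 2 here.
{code}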
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2999) getCacheEntry not working when distributed gets go remote
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2999?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2999:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1024923|https://bugzilla.redhat.com/show_bug.cgi?id=1024923] from NEW to ON_QA
> getCacheEntry not working when distributed gets go remote
> ----------------------------------------------------------
>
> Key: ISPN-2999
> URL: https://issues.jboss.org/browse/ISPN-2999
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.5.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Priority: Critical
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> Assuming the cache contains byte[], you get this exception when calling getCacheEntry(K) for a key not available locally:
> {code}org.infinispan.server.hotrod.HotRodException: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
> at org.infinispan.server.hotrod.HotRodDecoder.createServerException(HotRodDecoder.scala:216)
> at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:79)
> at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:49)
> at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
> at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
> at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:393)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:313)
> at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
> at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
> at org.infinispan.CacheImpl.getCacheEntry(CacheImpl.java:299)
> at org.infinispan.CacheImpl.getCacheEntry(CacheImpl.java:304)
> at org.infinispan.server.core.AbstractProtocolDecoder.get(AbstractProtocolDecoder.scala:287)
> at org.infinispan.server.core.AbstractProtocolDecoder.decodeKey(AbstractProtocolDecoder.scala:117)
> at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:73)
> ... 14 more{code}
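For reference, an illustrative sketch of the failing call pattern, assuming the AdvancedCache.getCacheEntry API that the Hot Rod decoder reaches via CacheImpl in the trace above; the keys are placeholders.
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.container.entries.CacheEntry;

// Illustrative only: "cache" is a distributed cache holding byte[] values,
// as used by the Hot Rod server; localKey/remoteKey are placeholders.
void show(AdvancedCache<byte[], byte[]> cache, byte[] localKey, byte[] remoteKey) {
   // Works: the key is owned locally, so a real CacheEntry is returned.
   CacheEntry localEntry = cache.getCacheEntry(localKey);

   // Fails per this report: the get goes remote, the raw byte[] value comes
   // back, and the cast inside CacheImpl.getCacheEntry throws
   // "[B cannot be cast to org.infinispan.container.entries.CacheEntry".
   CacheEntry remoteEntry = cache.getCacheEntry(remoteKey);
}
{code}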
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3518) 1PC can cause a window of inconsistency with L1 invalidation
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3518?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3518:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1024934|https://bugzilla.redhat.com/show_bug.cgi?id=1024934] from NEW to ON_QA
> 1PC can cause a window of inconsistency with L1 invalidation
> ------------------------------------------------------------
>
> Key: ISPN-3518
> URL: https://issues.jboss.org/browse/ISPN-3518
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.3.0.Final
> Reporter: William Burns
> Assignee: William Burns
> Priority: Critical
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> The L1TxInterceptor currently doesn't block on L1 invalidations during a 1PC commit. This can cause an inconsistent view of data across non-owner nodes.
> Example:
> {quote}
> Node A owns k with value of v1
> Node B has k in L1 with value of v1
> tx1 started
> Node A put k -> v2
> Node A sends invalidation
> Node A commits
> tx1 completed
> tx2 started
> Node B get k returns v1 from L1
> tx2 completed
> Node B gets invalidation for k
> tx3 started
> Node B get k remotely retrieves v2 from Node A
> tx3 completed
> {quote}
> We need to make sure that all L1 invalidations in Tx mode are completed before completing the transaction.
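As a sketch of the intended ordering only (the method names below are placeholders, not Infinispan APIs): the 1PC commit path should block on the L1 invalidation responses before reporting the transaction as complete, closing the window in which Node B can still read the stale L1 value.
{code}
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch; sendL1Invalidation() and completeOnePhaseCommit()
// are placeholders for the RPC that L1TxInterceptor issues and for the
// remainder of the 1PC commit, respectively.
void commitWithL1Invalidation(Object key) throws Exception {
   Future<?> invalidation = sendL1Invalidation(key);   // async invalidation to L1 holders

   // Fix: wait until every L1 copy is gone *before* the transaction completes,
   // so no later transaction can still read the stale value from L1.
   invalidation.get(30, TimeUnit.SECONDS);

   completeOnePhaseCommit();
}

Future<?> sendL1Invalidation(Object key) { throw new UnsupportedOperationException(); }
void completeOnePhaseCommit() { throw new UnsupportedOperationException(); }
{code}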
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira