[JBoss JIRA] (ISPN-4751) Hibernate search, infinispan and Amazon S3 - IllegalArgumentException: bucketId: A96137216.bz2 (expected: integer)
by George Christman (JIRA)
[ https://issues.jboss.org/browse/ISPN-4751?page=com.atlassian.jira.plugin.... ]
George Christman edited comment on ISPN-4751 at 1/22/15 10:05 AM:
------------------------------------------------------------------
Hi Vojtech, I'm wondering if you happened to have any success with the
issue?
On Wed, Jan 7, 2015 at 3:21 PM, Vojtech Juranek (JIRA) <issues(a)jboss.org>
--
George Christman
CEO
www.CarDaddy.com
P.O. Box 735
Johnstown, New York
> Hibernate search, infinispan and Amazon S3 - IllegalArgumentException: bucketId: A96137216.bz2 (expected: integer)
> ------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4751
> URL: https://issues.jboss.org/browse/ISPN-4751
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: Lance Ess
> Assignee: William Burns
>
> I'm trying to use hibernate-search to host a Lucene index on Amazon S3 but I'm getting the following exception:
> {code}
> Exception in thread "LuceneIndexesData-CloudCacheStore-0" java.lang.IllegalArgumentException: bucketId: A96137216.bz2 (expected: integer)
> at org.infinispan.loaders.bucket.Bucket.setBucketId(Bucket.java:84)
> at org.infinispan.loaders.cloud.CloudCacheStore.readFromBlob(CloudCacheStore.java:450)
> at org.infinispan.loaders.cloud.CloudCacheStore.scanBlobForExpiredEntries(CloudCacheStore.java:292)
> at org.infinispan.loaders.cloud.CloudCacheStore.purge(CloudCacheStore.java:284)
> at org.infinispan.loaders.cloud.CloudCacheStore.purgeInternal(CloudCacheStore.java:336)
> at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:111)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {code}
> The documentation for persisting Lucene indexes on Amazon S3 is a little sparse, but I think I'm on the right track. I'm starting Infinispan embedded within my application, so I've specified the path to the Infinispan XML in my hibernate.cfg.xml as follows:
> {code:xml}
> <property name="hibernate.search.default.directory_provider">infinispan</property>
> <property name="hibernate.search.infinispan.configuration_resourcename">infinispan-amazons3.xml</property>
> <property name="hibernate.search.infinispan.chunk_size">300000000</property>
> {code}
> And my infinispan-amazons3.xml is:
> {code:xml}
> <infinispan>
> <default>
> <loaders>
> <cloudStore xmlns="urn:infinispan:config:cloud:5.3"
> cloudService="aws-s3"
> identity="user"
> password="password"
> bucketPrefix="bucket">
> </cloudStore>
> </loaders>
> </default>
> </infinispan>
> {code}
> I'm using the following versions (from my Maven pom.xml):
> {code}
> <dependency>
> <groupId>org.hibernate</groupId>
> <artifactId>hibernate-search</artifactId>
> <version>4.4.4.Final</version>
> </dependency>
> <dependency>
> <groupId>org.hibernate</groupId>
> <artifactId>hibernate-search-infinispan</artifactId>
> <version>4.4.4.Final</version>
> </dependency>
> <dependency>
> <groupId>org.infinispan</groupId>
> <artifactId>infinispan-cachestore-cloud</artifactId>
> <version>5.3.0.Final</version>
> </dependency>
> <dependency>
> <groupId>org.jclouds.provider</groupId>
> <artifactId>aws-s3</artifactId>
> <version>1.4.1</version>
> </dependency>
> {code}
> I initially thought this was related to ISPN-1909, but my version postdates the fix for that issue (fixed in 5.1.3.CR1 / 5.1.3.Final).
> FYI, here's my Maven dependency tree (grepped for infinispan):
> {code}
> $ mvn dependency:tree | grep infinispan
> [INFO] +- org.hibernate:hibernate-search-infinispan:jar:4.4.4.Final:compile
> [INFO] | \- org.infinispan:infinispan-lucene-directory:jar:5.3.0.Final:compile
> [INFO] +- org.infinispan:infinispan-cachestore-cloud:jar:5.3.0.Final:compile
> [INFO] | \- org.infinispan:infinispan-core:jar:5.3.0.Final:compile
> {code}
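The stack trace above comes down to a naming mismatch: the bucket-based cloud store derives its bucket IDs from key hash codes and requires them to parse as integers, while the Lucene directory stores chunks under names like A96137216.bz2. A minimal sketch of the failing validation (a hypothetical simplification of the behaviour of Bucket.setBucketId, not the actual Infinispan source):

```java
public class BucketIdSketch {
    // Hypothetical simplification of the integer-only bucket-id check.
    static int parseBucketId(String id) {
        try {
            return Integer.parseInt(id);
        } catch (NumberFormatException e) {
            // Mirrors the message seen in the stack trace above
            throw new IllegalArgumentException("bucketId: " + id + " (expected: integer)");
        }
    }

    public static void main(String[] args) {
        System.out.println(parseBucketId("96137216"));   // hash-derived ids parse fine
        try {
            parseBucketId("A96137216.bz2");              // a Lucene chunk name does not
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```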
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5168) Recovery: force commit on an orphan tx unlocks remote keys too soon
by Dan Berindei (JIRA)
Dan Berindei created ISPN-5168:
----------------------------------
Summary: Recovery: force commit on an orphan tx unlocks remote keys too soon
Key: ISPN-5168
URL: https://issues.jboss.org/browse/ISPN-5168
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 7.1.0.Beta1, 7.0.3.Final
Reporter: Dan Berindei
The force commit admin operation replays the PrepareCommand on all the owners to acquire any missing locks. But the prepare doesn't do anything if the tx already exists and is marked as prepared on the remote nodes.
However, when executing the CommitCommand, {{TxInterceptor}} realizes that the existing remote tx has an older topology id and replays the PrepareCommand. And if the originator of the tx left the cluster, {{TxInterceptor.invokeNextInterceptorAndVerifyTransaction()}} will roll back the tx and unlock all the keys. It doesn't throw an exception, so the commit still succeeds, but without holding any locks.
{noformat}
10:38:51,313 TRACE (testng-OriginatorAndOwnerFailureReplicationTest:) [JGroupsTransport] dests=[OriginatorAndOwnerFailureReplicationTest-NodeD-50040, OriginatorAndOwnerFailureReplicationTest-NodeE-44976], command=PrepareCommand {modifications=[PutKeyValueCommand{key=aKey, value=newValue, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=RecoveryAwareGlobalTransaction{xid=< 1, 64, 64, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000 >, internalId=562962838323201} GlobalTransaction:<OriginatorAndOwnerFailureReplicationTest-NodeD-50040>:2:local, cacheName='___defaultcache', topologyId=5}, mode=SYNCHRONOUS, timeout=15000
10:38:51,319 TRACE (testng-OriginatorAndOwnerFailureReplicationTest:) [JGroupsTransport] dests=null, command=CommitCommand {gtx=RecoveryAwareGlobalTransaction{xid=< 1, 64, 64, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000 >, internalId=562962838323201} GlobalTransaction:<OriginatorAndOwnerFailureReplicationTest-NodeD-50040>:2:local, cacheName='___defaultcache', topologyId=5}, mode=SYNCHRONOUS_IGNORE_LEAVERS, timeout=15000
10:38:51,322 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [TxInterceptor] Remote tx topology id 4 and command topology is 5
10:38:51,322 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [TxInterceptor] Replaying the transactions received as a result of state transfer PrepareCommand {modifications=[PutKeyValueCommand{key=aKey, value=newValue, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=RecoveryAwareGlobalTransaction{xid=< 1, 64, 64, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000 >, internalId=562962838323201} GlobalTransaction:<OriginatorAndOwnerFailureReplicationTest-NodeF-60014>:2:remote, cacheName='___defaultcache', topologyId=-1}
10:38:51,323 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [TxInterceptor] invokeNextInterceptorAndVerifyTransaction :: originatorMissing=true, alreadyCompleted=true
10:38:51,323 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [TxInterceptor] Rolling back remote transaction RecoveryAwareGlobalTransaction{xid=< 1, 64, 64, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000, -12-63-13-63-32-39-44-29-891-73-111-107-75-113-88-108-59-88120000000000000000000000000000000000000000000 >, internalId=562962838323201} GlobalTransaction:<OriginatorAndOwnerFailureReplicationTest-NodeF-60014>:2:remote because either already completed (true) or originator no longer in the cluster (true).
10:38:51,323 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [OwnableReentrantPerEntryLockContainer] Unlocking lock instance for key aKey
10:38:51,328 TRACE (remote-thread-1,OriginatorAndOwnerFailureReplicationTest-NodeE:) [ReadCommittedEntry] Updating entry (key=aKey removed=false valid=true changed=true created=true loaded=false value=newValue metadata=EmbeddedMetadata{version=null}, providedMetadata=null)
{noformat}
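The flow in the trace can be reduced to the decision described above: when the replayed prepare finds the originator gone or the tx already completed, it rolls back and releases the locks, but still signals success to the caller. A simplified sketch with hypothetical names (not the actual TxInterceptor code):

```java
public class ReplayVerifySketch {
    // Hypothetical condensation of invokeNextInterceptorAndVerifyTransaction:
    // returns true when the prepared state can be trusted.
    static boolean verifyReplayedPrepare(boolean originatorMissing, boolean alreadyCompleted) {
        if (originatorMissing || alreadyCompleted) {
            // Here the tx is rolled back and all its locks released ...
            return false; // ... yet no exception propagates, so the commit proceeds lock-free
        }
        return true;
    }

    public static void main(String[] args) {
        // The case from the trace: originatorMissing=true, alreadyCompleted=true
        System.out.println(verifyReplayedPrepare(true, true)); // false, but the caller commits anyway
    }
}
```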
--
[JBoss JIRA] (ISPN-4535) size() operator on JPACacheStore should expect larger than int
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-4535?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant commented on ISPN-4535:
---------------------------------------
That's what the JDK does for Map:
{code:java}
public interface Map<K,V> {
    // Query Operations

    /**
     * Returns the number of key-value mappings in this map. If the
     * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
     * <tt>Integer.MAX_VALUE</tt>.
     *
     * @return the number of key-value mappings in this map
     */
    int size();
}
{code}
> size() operator on JPACacheStore should expect larger than int
> --------------------------------------------------------------
>
> Key: ISPN-4535
> URL: https://issues.jboss.org/browse/ISPN-4535
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
>
> The return type of CacheStore#size() is int, but it is common for databases to store far more entries than that.
> The query performing the count operation casts the result to an integer, which will fail.
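Following the Map convention Tristan quotes, a long-backed count could clamp rather than cast. A hypothetical helper illustrating the difference (not Infinispan API):

```java
public class SizeClampSketch {
    // Clamping follows the Map#size() javadoc convention.
    static int clampedSize(long actualCount) {
        return actualCount > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) actualCount;
    }

    // A plain narrowing cast, by contrast, silently overflows.
    static int castSize(long actualCount) {
        return (int) actualCount;
    }

    public static void main(String[] args) {
        long big = Integer.MAX_VALUE + 1L;
        System.out.println(clampedSize(big)); // 2147483647
        System.out.println(castSize(big));    // -2147483648 (overflow)
    }
}
```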
--
[JBoss JIRA] (ISPN-5158) Transaction rolled back but returns successful response
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5158?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-5158:
------------------------------------
Found the problem: my fix was not throwing the exception if the transaction was already completed. In this case, the transaction was completed because {{TransactionTable.cleanupLeaverTransactions()}} had already rolled back transactions originating from edg-perf02:
{noformat}
11:57:32,123 TRACE [org.infinispan.transaction.TransactionTable] (transport-thread-23) Checking for transactions originated on leavers. Current cache members are [edg-perf02-35237, edg-perf01-13291, edg-perf04-62504], remote transactions: 7
11:57:32,142 DEBUG [org.infinispan.transaction.TransactionTable] (transport-thread-23) Rolling back transaction GlobalTransaction:<edg-perf03-14221>:47986:remote because originator edg-perf03-14221 left the cluster
{noformat}
I'm now trying to replace the "already completed" check with a "completed successfully" check to see how that works. It does appear to break a couple of existing recovery tests: OriginatorAndOwnerFailureReplicationTest and SimpleCacheRecoveryAdminTest.
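The distinction can be sketched as follows (hypothetical enum and method names): the old check treats any completed transaction as safe to ignore, while the proposed one only suppresses the exception when the transaction actually committed:

```java
public class CompletionCheckSketch {
    enum TxOutcome { IN_PROGRESS, COMMITTED, ROLLED_BACK }

    // Old behaviour: any completed tx suppresses the "originator left" exception,
    // even one that cleanupLeaverTransactions() already rolled back.
    static boolean oldCheckSuppresses(TxOutcome outcome) {
        return outcome != TxOutcome.IN_PROGRESS;
    }

    // Proposed behaviour: only a successful commit does.
    static boolean newCheckSuppresses(TxOutcome outcome) {
        return outcome == TxOutcome.COMMITTED;
    }

    public static void main(String[] args) {
        // The problematic case: rolled back by cleanup, yet the old check passes
        System.out.println(oldCheckSuppresses(TxOutcome.ROLLED_BACK)); // true
        System.out.println(newCheckSuppresses(TxOutcome.ROLLED_BACK)); // false
    }
}
```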
> Transaction rolled back but returns successful response
> -------------------------------------------------------
>
> Key: ISPN-5158
> URL: https://issues.jboss.org/browse/ISPN-5158
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.1.0.Beta1
> Reporter: Radim Vansa
> Assignee: Dan Berindei
> Priority: Critical
> Attachments: tx.txt, views.txt
>
>
> When the cluster is merging, it is possible that a node is removed from the view although it is still responsive. Eventually the cluster merges correctly, but since the node is reported as missing from the view, transactions originating from it are rolled back.
> {code}
> 10:01:36,116 TRACE [org.infinispan.interceptors.TxInterceptor] (remote-thread-151) Rolling back remote transaction GlobalTransaction:<edg-perf02-39415>:28106:remote because either already completed(false) or originator no longer in the cluster(true).
> {code}
> However, even after this a successful response is sent to the originator:
> {code}
> 10:01:36,119 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-151) About to send back response null for command PrepareCommand {modifications=[PutKeyValueCommand{key=key_0000000000001318, value=[19 #1: 1195, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=GlobalTransaction:<edg-perf02-39415>:28106:remote, cacheName='testCache', topologyId=47}
> {code}
> Originator then expects that the transaction was successfully prepared:
> {code}
> 10:01:36,124 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (DefaultStressor-9) Responses: [sender=edg-perf01-36235, received=true, suspected=false]
> [sender=edg-perf03-24110, received=true, suspected=false]
> 10:01:36,135 TRACE [org.infinispan.transaction.TransactionCoordinator] (DefaultStressor-9) Committing transaction GlobalTransaction:<edg-perf02-39415>:28106:local
> {code}
--
[JBoss JIRA] (ISPN-5167) Cache.size() returns cluster-wide entry size in int and overflow
by Takayoshi Kimura (JIRA)
Takayoshi Kimura created ISPN-5167:
--------------------------------------
Summary: Cache.size() returns cluster-wide entry size in int and overflow
Key: ISPN-5167
URL: https://issues.jboss.org/browse/ISPN-5167
Project: Infinispan
Issue Type: Feature Request
Components: Core
Affects Versions: 7.0.3.Final
Reporter: Takayoshi Kimura
We have a large cluster, and a cache will have more than Integer.MAX_VALUE entries in the near future.
It would be great to have an additional method that returns the size as a long.
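One possible shape for the request, assuming nothing about the eventual Infinispan API: keep a Map-compatible int size() and add a long variant (hypothetical interface, for illustration only):

```java
// Hypothetical interface illustrating the feature request; not Infinispan API.
interface LongSizedCache {
    long sizeAsLong(); // exact cluster-wide entry count

    // Map-compatible view, clamped per the Map#size() javadoc convention
    default int size() {
        long n = sizeAsLong();
        return n > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) n;
    }
}
```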
--
[JBoss JIRA] (ISPN-5158) Transaction rolled back but returns successful response
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-5158?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-5158:
-----------------------------------
It seems that the fix is not complete. This is what I found when running with the fixes: the transaction was rolled back but still did not throw an exception:
{code}
11:55:40,011 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,edg-perf02-35237) ISPN000093: Received new, MERGED cluster view: MergeView::[edg-perf03-14221|17] (3) [edg-perf03-14221, edg-perf01-13291, edg-perf02-35237], 8 subgroups: [edg-perf03-14221|8] (2) [edg-perf03-14221, edg-perf02-56279], [edg-perf03-14221|9] (3) [edg-perf03-14221, edg-perf01-13291, edg-perf02-56279], [edg-perf01-13291|4] (2) [edg-perf01-13291, edg-perf02-56279], [edg-perf01-13291|16] (1) [edg-perf01-13291], [edg-perf03-14221|5] (2) [edg-perf03-14221, edg-perf04-62504], [edg-perf03-14221|16] (3) [edg-perf03-14221, edg-perf04-62504, edg-perf02-35237], [edg-perf01-13291|8] (1) [edg-perf01-13291], [edg-perf03-14221|13] (2) [edg-perf03-14221, edg-perf04-62504]
11:55:40,012 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-15) Received new cluster view: 17, isCoordinator = false, becameCoordinator = false
11:57:31,680 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,edg-perf02-35237) ISPN000093: Received new, MERGED cluster view: MergeView::[edg-perf02-35237|18] (3) [edg-perf02-35237, edg-perf01-13291, edg-perf04-62504], 1 subgroups: [edg-perf04-62504|17] (1) [edg-perf04-62504]
11:57:31,683 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-18) Received new cluster view: 18, isCoordinator = true, becameCoordinator = true
11:57:31,979 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-23) Attempting to execute command on self: CacheTopologyControlCommand{cache=testCache, type=REBALANCE_START, sender=edg-perf02-35237, joinInfo=null, topologyId=53, rebalanceId=14, currentCH=DefaultConsistentHash{ns = 512, owners = (2)[edg-perf02-35237: 257+85, edg-perf01-13291: 255+86]}, pendingCH=DefaultConsistentHash{ns = 512, owners = (3)[edg-perf02-35237: 171+171, edg-perf01-13291: 171+170, edg-perf04-62504: 170+171]}, availabilityMode=null, actualMembers=[edg-perf02-35237, edg-perf01-13291, edg-perf04-62504], throwable=null, viewId=18}
11:57:32,004 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-23) Waiting on view 18 being accepted
11:57:32,069 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread-23) Starting local rebalance for cache testCache, topology = CacheTopology{id=53, rebalanceId=14, currentCH=DefaultConsistentHash{ns = 512, owners = (2)[edg-perf02-35237: 257+85, edg-perf01-13291: 255+86]}, pendingCH=DefaultConsistentHash{ns = 512, owners = (3)[edg-perf02-35237: 171+171, edg-perf01-13291: 171+170, edg-perf04-62504: 170+171]}, unionCH=null, actualMembers=[edg-perf02-35237, edg-perf01-13291, edg-perf04-62504]}
11:57:32,113 TRACE [org.infinispan.commands.tx.PrepareCommand] (remote-thread-83) Invoking remotely originated prepare: PrepareCommand {modifications=[PutKeyValueCommand{key=key_0000000000001D43, value=[29 #19:
54, 54, 85, 318, 786, 905, 985, 1276, 1313, 1464, 1551, 1585, 1621, 1972, 2014, 2319, 2319, 2415, 2471, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=Embe
ddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=GlobalTransaction:<edg-perf03-14221>:47986:remote, cacheName='testCache', topologyId=50} with invocation context: org.infinispan.context.i
mpl.RemoteTxInvocationContext@b4026157
11:57:32,113 TRACE [org.infinispan.transaction.TransactionTable] (remote-thread-83) Created and registered remote transaction RemoteTransaction{modifications=[PutKeyValueCommand{key=key_0000000000001D43, value=[29 #19: 54, 54, 85, 318, 786, 905, 985, 1276, 1313, 1464, 1551, 1585, 1621, 1972, 2014, 2319, 2319, 2415, 2471, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true}], lookedUpEntries={}, lockedKeys=null, backupKeyLocks=null, lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTransaction:<edg-perf03-14221>:47986:remote, state=null}
11:57:32,116 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-23) Signalling transaction data received for topology 53
11:57:32,142 DEBUG [org.infinispan.transaction.TransactionTable] (transport-thread-23) Rolling back transaction GlobalTransaction:<edg-perf03-14221>:47986:remote because originator edg-perf03-14221 left the cluster
11:57:32,143 TRACE [org.infinispan.transaction.TransactionTable] (transport-thread-23) Marking transaction GlobalTransaction:<edg-perf03-14221>:47986:remote as completed
11:57:32,183 TRACE [org.infinispan.interceptors.TxInterceptor] (remote-thread-83) Rolling back remote transaction GlobalTransaction:<edg-perf03-14221>:47986:remote because either already completed (true) or originator no longer in the cluster (true).
11:57:32,214 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-83) About to send back response null for command PrepareCommand {modifications=[PutKeyValueCommand{key=key_0000000000001D4
3, value=[29 #19: 54, 54, 85, 318, 786, 905, 985, 1276, 1313, 1464, 1551, 1585, 1621, 1972, 2014, 2319, 2319, 2415, 2471, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALW
AYS, metadata=EmbeddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=GlobalTransaction:<edg-perf03-14221>:47986:remote, cacheName='testCache', topologyId=50}
{code}
The above can be found in /qa/hudson_jobs/jdg-resilience-split-dist-tx/builds/6/edg-perf02.log
> Transaction rolled back but returns successful response
> -------------------------------------------------------
>
> Key: ISPN-5158
> URL: https://issues.jboss.org/browse/ISPN-5158
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.1.0.Beta1
> Reporter: Radim Vansa
> Assignee: Dan Berindei
> Priority: Critical
> Attachments: tx.txt, views.txt
>
>
> When the cluster is merging, it is possible that a node is removed from the view although it is still responsive. Eventually the cluster merges correctly, but since the node is reported as missing from the view, transactions originating from it are rolled back.
> {code}
> 10:01:36,116 TRACE [org.infinispan.interceptors.TxInterceptor] (remote-thread-151) Rolling back remote transaction GlobalTransaction:<edg-perf02-39415>:28106:remote because either already completed(false) or originator no longer in the cluster(true).
> {code}
> However, even after this a successful response is sent to the originator:
> {code}
> 10:01:36,119 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-151) About to send back response null for command PrepareCommand {modifications=[PutKeyValueCommand{key=key_0000000000001318, value=[19 #1: 1195, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true}], onePhaseCommit=false, gtx=GlobalTransaction:<edg-perf02-39415>:28106:remote, cacheName='testCache', topologyId=47}
> {code}
> Originator then expects that the transaction was successfully prepared:
> {code}
> 10:01:36,124 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (DefaultStressor-9) Responses: [sender=edg-perf01-36235, received=true, suspected=false]
> [sender=edg-perf03-24110, received=true, suspected=false]
> 10:01:36,135 TRACE [org.infinispan.transaction.TransactionCoordinator] (DefaultStressor-9) Committing transaction GlobalTransaction:<edg-perf02-39415>:28106:local
> {code}
--