[JBoss JIRA] (ISPN-3162) implement a spliterator like interface for parallel traversal of local and replicated cache
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-3162?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-3162:
-------------------------------------
Assignee: (was: Mircea Markus)
> implement a spliterator like interface for parallel traversal of local and replicated cache
> -------------------------------------------------------------------------------------------
>
> Key: ISPN-3162
> URL: https://issues.jboss.org/browse/ISPN-3162
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core, Distributed Execution and Map/Reduce, Loaders and Stores
> Affects Versions: 5.3.0.CR1
> Reporter: Mathieu Lachance
>
> The backport of ConcurrentHashMapV8 comes with an interesting interface, Spliterator, and its three implementations: KeyIterator, ValueIterator and EntryIterator.
> As Java 7 is now more widely adopted, it could be interesting to take advantage of the Java ForkJoinPool for the parallel traversal of local and replicated caches, for which the Infinispan Map/Reduce framework is perhaps less well suited.
> In time, this could be used to speed up searching a cache without the need to index it.
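> As a rough illustration of the idea (not Infinispan API: this sketch uses the Java 8 java.util.Spliterator that later standardized the CHMv8 interface, and all class names here are made up), a ForkJoinPool can traverse a ConcurrentHashMap in parallel by recursively splitting its spliterator:
> {code}
> import java.util.Map;
> import java.util.Spliterator;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ForkJoinPool;
> import java.util.concurrent.RecursiveTask;
>
> public class ParallelScan {
>     // Counts entries whose value contains "7" by recursively splitting
>     // the map's spliterator and farming the halves out to the pool.
>     static final class CountMatches extends RecursiveTask<Long> {
>         private final Spliterator<Map.Entry<Integer, String>> split;
>         CountMatches(Spliterator<Map.Entry<Integer, String>> split) { this.split = split; }
>
>         @Override
>         protected Long compute() {
>             Spliterator<Map.Entry<Integer, String>> prefix = split.trySplit();
>             if (prefix == null) {
>                 // Leaf: the spliterator refuses to split further, scan sequentially.
>                 long[] hits = {0};
>                 split.forEachRemaining(e -> { if (e.getValue().contains("7")) hits[0]++; });
>                 return hits[0];
>             }
>             CountMatches left = new CountMatches(prefix);
>             left.fork();                 // traverse one half on another worker
>             long right = compute();      // keep splitting the other half here
>             return left.join() + right;
>         }
>     }
>
>     public static void main(String[] args) {
>         ConcurrentHashMap<Integer, String> cache = new ConcurrentHashMap<>();
>         for (int i = 0; i < 100_000; i++) cache.put(i, "value-" + i);
>         long hits = ForkJoinPool.commonPool()
>                                 .invoke(new CountMatches(cache.entrySet().spliterator()));
>         System.out.println("matching entries: " + hits);
>     }
> }
> {code}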
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-3427) LevelDB Cache store testsuite failure on Windows 2008
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-3427?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-3427:
-------------------------------------
Assignee: (was: Mircea Markus)
> LevelDB Cache store testsuite failure on Windows 2008
> -----------------------------------------------------
>
> Key: ISPN-3427
> URL: https://issues.jboss.org/browse/ISPN-3427
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final
> Reporter: Michal Linhard
>
> method: org.infinispan.loaders.leveldb.LevelDBCacheStoreTest.tearDown
> exception:
> {code}
> java.nio.channels.OverlappingFileLockException
> at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
> at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
> at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1017)
> at java.nio.channels.FileChannel.tryLock(FileChannel.java:1154)
> at org.iq80.leveldb.impl.DbLock.<init>(DbLock.java:47)
> at org.iq80.leveldb.impl.DbImpl.<init>(DbImpl.java:167)
> at org.iq80.leveldb.impl.Iq80DBFactory.open(Iq80DBFactory.java:59)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.openDatabase(LevelDBCacheStore.java:83)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.reinitDatabase(LevelDBCacheStore.java:95)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.reinitAllDatabases(LevelDBCacheStore.java:99)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.clearLockSafe(LevelDBCacheStore.java:152)
> at org.infinispan.loaders.LockSupportCacheStore.clear(LockSupportCacheStore.java:270)
> at org.infinispan.loaders.BaseCacheStoreTest.tearDown(BaseCacheStoreTest.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:564)
> at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:796)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:907)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1237)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> run: http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/edg-60-ispn-testsuite...
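> For context, the JDK throws OverlappingFileLockException whenever the same JVM tries to acquire a second lock over a file region it already holds, even through a different channel; that is what the SharedFileLockTable frames above indicate. A minimal standalone reproduction, independent of LevelDB (file name is arbitrary):
> {code}
> import java.io.File;
> import java.io.RandomAccessFile;
> import java.nio.channels.FileChannel;
> import java.nio.channels.FileLock;
> import java.nio.channels.OverlappingFileLockException;
>
> public class DoubleLock {
>     public static void main(String[] args) throws Exception {
>         File f = File.createTempFile("leveldb", ".lock");
>         try (FileChannel c1 = new RandomAccessFile(f, "rw").getChannel();
>              FileChannel c2 = new RandomAccessFile(f, "rw").getChannel()) {
>             FileLock first = c1.tryLock();   // succeeds
>             try {
>                 c2.tryLock();                // same JVM, overlapping region
>             } catch (OverlappingFileLockException e) {
>                 System.out.println("second lock rejected: " + e);
>             }
>             first.release();
>         } finally {
>             f.delete();
>         }
>     }
> }
> {code}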
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-3536?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-3536:
-------------------------------------
Assignee: (was: Mircea Markus)
> Exception when handling command SingleRpcCommand
> ------------------------------------------------
>
> Key: ISPN-3536
> URL: https://issues.jboss.org/browse/ISPN-3536
> Project: Infinispan
> Issue Type: Bug
> Components: Cross-Site Replication
> Affects Versions: 5.3.0.Final, 6.0.0.Alpha4
> Environment: Linux ip-10-252-170-214 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_24"
> OpenJDK Runtime Environment (IcedTea6 1.11.11.90) (amazon-62.1.11.11.90.55.amzn1-x86_64)
> OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
> Reporter: Mark De Leon
> Attachments: pearson-clustered-xsite-bos.xml, pearson-clustered-xsite-va.xml
>
>
> The following can be referenced from the forum post by Chris Riley on ISPN000071.
> We are trying to insert a value into a distributed cache that is replicated via Cross-Site Replication in JBoss 6.0.0.Alpha4. The cross-site replication setup has two nodes in the global cluster.
> When a value is put on the local site, we get the following error on the remote site. We just upgraded to 6.0.0.Alpha4 because of JIRA issue ISPN-3346, which we reproduced in 5.3.0.Final. I have attached our JBoss configuration files for review.
>
> {code}
> 17:22:41,798 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Incoming-2,shared=tcp) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='importantCache', command=PutKeyValueCommand{key=1, value=[B@363f983f, flags=null, putIfAbsent=false, metadata=MimeMetadata(contentType=text/plain), successful=true, ignorePreviousValue=false}}: java.lang.NullPointerException
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.isBackupForRemoteCache(BackupReceiverRepositoryImpl.java:112) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.getBackupCacheManager(BackupReceiverRepositoryImpl.java:95) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.handleRemoteCommand(BackupReceiverRepositoryImpl.java:67) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromRemoteSite(CommandAwareRpcDispatcher.java:234) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:209) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:665) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:719) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1002) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:612) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:508) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:483) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:461) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.Relayer$Bridge.receive(Relayer.java:263) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:749) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1006) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:195) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:304) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.removeAndDeliver(UNICAST.java:748) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.handleBatchReceived(UNICAST.java:704) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.up(UNICAST.java:454) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FD.up(FD.java:274) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:223) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP.passBatchUp(TP.java:1409) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP$BatchHandler.run(TP.java:1564) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]
> at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
> {code}
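> For readers without the attachments, a minimal programmatic sketch of the setup described above. The site names are taken from the attachment file names, the JGroups RELAY2 stack that bridges the sites is omitted, and the builder calls reflect the Infinispan 6.0-era API as best I recall, so treat this as an approximation:
> {code}
> import org.infinispan.configuration.cache.BackupConfiguration.BackupStrategy;
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.global.GlobalConfigurationBuilder;
> import org.infinispan.manager.DefaultCacheManager;
>
> public class XSiteSketch {
>     public static void main(String[] args) {
>         // Local site "VA" backs up to remote site "BOS" (names taken from
>         // the attachment file names). RELAY2 must also be present in the
>         // JGroups stack for the relaying shown in the trace above.
>         GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
>         global.site().localSite("VA");
>
>         ConfigurationBuilder cache = new ConfigurationBuilder();
>         cache.clustering().cacheMode(CacheMode.DIST_SYNC);
>         cache.sites().addBackup().site("BOS").strategy(BackupStrategy.SYNC);
>
>         DefaultCacheManager cm = new DefaultCacheManager(global.build(), cache.build());
>         // The failing operation from the report: a put on the local site
>         // that gets relayed to the backup site, where the NPE surfaces.
>         cm.getCache("importantCache").put(1, "payload".getBytes());
>         cm.stop();
>     }
> }
> {code}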
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-3876) TcpTransportFactory stores failed SocketAddress in RequestBalancingStrategy
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-3876?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-3876:
-------------------------------------
Assignee: (was: Mircea Markus)
> TcpTransportFactory stores failed SocketAddress in RequestBalancingStrategy
> ---------------------------------------------------------------------------
>
> Key: ISPN-3876
> URL: https://issues.jboss.org/browse/ISPN-3876
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 5.2.1.Final, 5.3.0.Final, 6.0.0.Final
> Environment: Hotrod Client, Java
> Reporter: Patrick Seeber
>
> The "updateServers" Method in the TcpTransportFactory class iterates over all addedServers and adds them to the connection pool if no exceptions are thrown. Howerver, if an exception is thrown, the SocketAddress may not have been added to the conection pool but is added to the balancer afterwards. Therefore, the balancer may contain an invalid SocketAddress which is not contained in the connection pool.
> In our application with few distributed caches, we encounter situations where all servers (SocketAddresses) are corrupt and the application fails to load or store entries in/from the cache.
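> For illustration, a self-contained model of the safer ordering implied above, where a server only becomes visible to the balancer once its connection-pool entry was created successfully (all names here are hypothetical, not Infinispan's actual internals):
> {code}
> import java.net.InetSocketAddress;
> import java.net.SocketAddress;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> public class SafeServerUpdate {
>     static final Map<SocketAddress, Object> pool = new HashMap<>();
>     static final List<SocketAddress> balancer = new ArrayList<>();
>
>     // Stand-in for creating the pool entry; may fail for a bad server.
>     static void createPoolEntry(SocketAddress a) {
>         if (((InetSocketAddress) a).getPort() == 0) {
>             throw new IllegalStateException("cannot connect to " + a);
>         }
>         pool.put(a, new Object());
>     }
>
>     public static void main(String[] args) {
>         List<SocketAddress> added = Arrays.asList(
>                 new InetSocketAddress("10.0.0.1", 11222),
>                 new InetSocketAddress("10.0.0.2", 0));   // simulated failure
>         for (SocketAddress server : added) {
>             try {
>                 createPoolEntry(server);   // may throw
>                 balancer.add(server);      // only reached on success
>             } catch (RuntimeException e) {
>                 System.out.println("skipping " + server + ": " + e.getMessage());
>             }
>         }
>         // The balancer now only routes to servers that are in the pool.
>         System.out.println("balancer=" + balancer + ", pool=" + pool.keySet());
>     }
> }
> {code}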
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-2395) Key sorting done by the OptimisticLockingInterceptor is incompatible with the lock striping
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-2395?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-2395:
-------------------------------------
Assignee: (was: Mircea Markus)
> Key sorting done by the OptimisticLockingInterceptor is incompatible with the lock striping
> -------------------------------------------------------------------------------------------
>
> Key: ISPN-2395
> URL: https://issues.jboss.org/browse/ISPN-2395
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.1.5.FINAL, 5.1.6.FINAL, 5.1.7.Final, 5.2.0.Beta1
> Reporter: Nicolas Filotto
> Priority: Minor
> Attachments: TestLocking.java
>
>
> In ISPN 5.0 you provided a workaround allowing us to sort the keys ourselves in order to prevent deadlocks even when lock striping is enabled (more details in ISPN-993). Thanks to this workaround we could write a simple key comparator (that works whether lock striping is enabled or not), as follows:
> {code}
> public int compare(Object k1, Object k2) {
>     // Order keys by the id of the lock (stripe) guarding them,
>     // not by the keys themselves.
>     LockManager lm = cache.getLockManager();
>     return lm.getLockId(k1) - lm.getLockId(k2);
> }
> {code}
> Starting from ISPN 5.1 (ISPN-1132), the keys are sorted automatically by ISPN. Unfortunately, what has been done is incompatible with lock striping: the keys are sorted regardless of the lock distribution, which is a mistake, since we actually expect the keys to be sorted according to the corresponding locks and not according to the keys themselves; otherwise deadlocks can still occur.
> In the attached file you will find a test case that demonstrates the issue.
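> To make the point concrete, here is a small standalone sketch (with a hypothetical stripe function standing in for the lock id) showing that key order and stripe order generally disagree, so only an ordering derived from the lock ids is safe with striping:
> {code}
> import java.util.Arrays;
> import java.util.Comparator;
>
> public class StripeOrdering {
>     static final int STRIPES = 4;
>
>     // Hypothetical stand-in for the stripe a key's lock lives in.
>     static int stripeOf(String key) {
>         return (key.hashCode() & Integer.MAX_VALUE) % STRIPES;
>     }
>
>     public static void main(String[] args) {
>         String[] keys = {"a", "b", "c", "d"};
>         // Sorting by the key itself says nothing about stripe order:
>         String[] byKey = keys.clone();
>         Arrays.sort(byKey);
>         // Sorting by stripe id is what prevents deadlock with striping:
>         String[] byStripe = keys.clone();
>         Arrays.sort(byStripe, Comparator.comparingInt(StripeOrdering::stripeOf));
>         for (String k : keys) System.out.println(k + " -> stripe " + stripeOf(k));
>         System.out.println("key order:    " + Arrays.toString(byKey));
>         System.out.println("stripe order: " + Arrays.toString(byStripe));
>     }
> }
> {code}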
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)