[JBoss JIRA] (ISPN-4610) Implement total order for non-transactional caches
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4610?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4610:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> Implement total order for non-transactional caches
> --------------------------------------------------
>
> Key: ISPN-4610
> URL: https://issues.jboss.org/browse/ISPN-4610
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.0.Alpha5
> Reporter: Dan Berindei
> Assignee: Pedro Ruivo
> Fix For: 7.1.0.Beta1
>
>
> The current locking algorithm in non-transactional caches requires a remote thread on the primary owner to block while it replicates the update to the backup owners. That thread also holds the lock for the key, so it blocks other threads that want to write to the same key. Under heavy contention this can exhaust the remote executor thread pool and cause lock timeouts.
> Total order (TO) was designed with high contention in mind and does not block threads to acquire locks, so it should handle this scenario much better.
> An alternative solution would be the locking rework in ISPN-2849.
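> For context, a minimal sketch of how total order is enabled today for *transactional* caches with the programmatic configuration API (a sketch only, assuming Infinispan 7's ConfigurationBuilder; the class name and printed output are illustrative, and the build may be subject to additional validation rules). Extending this delivery protocol to non-transactional caches is what this request asks for:
> {code}
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.transaction.TransactionMode;
> import org.infinispan.transaction.TransactionProtocol;
>
> public class TotalOrderConfigSketch {
>    public static void main(String[] args) {
>       // Total order is currently only selectable on transactional caches;
>       // this issue asks for the same delivery protocol on non-transactional ones.
>       Configuration cfg = new ConfigurationBuilder()
>             .clustering().cacheMode(CacheMode.DIST_SYNC)
>             .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
>             .transactionProtocol(TransactionProtocol.TOTAL_ORDER)
>             .build();
>       // Starting a cache with this configuration also needs total-order
>       // delivery (TOA) available in the cache manager's JGroups stack.
>       System.out.println(cfg.transaction().transactionProtocol());
>    }
> }
> {code}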
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
[JBoss JIRA] (ISPN-4722) CLI remove is not cluster-wide
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4722?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4722:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> CLI remove is not cluster-wide
> ------------------------------
>
> Key: ISPN-4722
> URL: https://issues.jboss.org/browse/ISPN-4722
> Project: Infinispan
> Issue Type: Bug
> Components: CLI
> Affects Versions: 6.0.2.Final, 7.0.0.Beta1
> Reporter: Galder Zamarreño
> Assignee: Tristan Tarrant
> Fix For: 7.1.0.Beta1
>
>
> In the CLI, the "remove" command does not delete entries on all nodes of a clustered environment, only the local copy, whereas the "put" command does write to all nodes. Is this the expected behavior? See the example below:
> {code}
> node 1
> put k1 v1
> get k1 -> v1
> node 2
> get k1 -> v1
> node 1
> remove k1
> get k1 -> null
> node 2
> get k1 -> v1
> {code}
> I know these CLI commands are not used in the real world, but they are useful for demonstrating that a JDG cluster is configured correctly.
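> For comparison, a rough embedded-API sketch of the cluster-wide semantics the CLI is expected to mirror. This is only an illustration (two cache managers started in the same JVM, assuming they discover each other over the default JGroups stack; class and key names are made up), not part of the reported behaviour:
> {code}
> import org.infinispan.Cache;
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.global.GlobalConfigurationBuilder;
> import org.infinispan.manager.DefaultCacheManager;
>
> public class ClusterWideRemoveCheck {
>    public static void main(String[] args) {
>       DefaultCacheManager node1 = newClusteredManager();
>       DefaultCacheManager node2 = newClusteredManager();
>       Cache<String, String> c1 = node1.getCache();
>       Cache<String, String> c2 = node2.getCache();
>
>       c1.put("k1", "v1");
>       System.out.println(c2.get("k1"));   // v1: put is replicated to all nodes
>
>       c1.remove("k1");
>       System.out.println(c2.get("k1"));   // null: remove is replicated too
>
>       node1.stop();
>       node2.stop();
>    }
>
>    private static DefaultCacheManager newClusteredManager() {
>       return new DefaultCacheManager(
>             GlobalConfigurationBuilder.defaultClusteredBuilder().build(),
>             new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());
>    }
> }
> {code}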
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
[JBoss JIRA] (ISPN-4721) RestStoreParallelIterationTest.testParallelIterationWithoutValueOrMetadata randomly failing
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4721?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4721:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> RestStoreParallelIterationTest.testParallelIterationWithoutValueOrMetadata randomly failing
> -------------------------------------------------------------------------------------------
>
> Key: ISPN-4721
> URL: https://issues.jboss.org/browse/ISPN-4721
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.0.0.Beta1
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: testsuite_stability
> Fix For: 7.1.0.Beta1
>
> Attachments: Infinispan_Pull_requests_monitor_5797_2833_pruivo_ISPN-4680.log.zip
>
>
> Fails with:
> {code}
> org.infinispan.persistence.spi.PersistenceException: org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:18080 refused
> at org.infinispan.persistence.rest.RestStore.write(RestStore.java:183)
> at org.infinispan.persistence.ParallelIterationTest.insertData(ParallelIterationTest.java:199)
> at org.infinispan.persistence.ParallelIterationTest.runIterationTest(ParallelIterationTest.java:125)
> at org.infinispan.persistence.ParallelIterationTest.testParallelIterationWithoutValueOrMetadata(ParallelIterationTest.java:84)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:18080 refused
> at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
> at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
> at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:644)
> at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
> at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
> at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:827)
> at org.infinispan.persistence.rest.RestStore.write(RestStore.java:181)
> ... 23 more
> Caused by: java.net.ConnectException: Cannot assign requested address
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
> at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
> ... 29 more
> ------- Stdout: -----
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
[JBoss JIRA] (ISPN-4737) Noisy exceptions in Hot Rod client when node goes down
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4737?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4737:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> Noisy exceptions in Hot Rod client when node goes down
> ------------------------------------------------------
>
> Key: ISPN-4737
> URL: https://issues.jboss.org/browse/ISPN-4737
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.0.0.Beta2
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.1.0.Beta1
>
> Attachments: hr-client.log
>
>
> When a node goes down, the Hot Rod client prints some noisy exceptions such as:
> {code}
> 11:30:27,846 ERROR [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (main) ISPN004017: Could not fetch transport: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /127.0.0.1:11322
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:76) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:35) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:16) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220) [commons-pool-1.6.redhat-6.jar:1.6.redhat-6]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:322) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:216) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:40) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:48) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:237) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at com.example.Main.main(Main.java:41) [:]
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) [rt.jar:1.7.0_65]
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) [rt.jar:1.7.0_65]
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117) [rt.jar:1.7.0_65]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:66) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> ... 10 more
> 11:30:27,851 WARN [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (main) ISPN004022: Unable to invalidate transport for server: /127.0.0.1:11322
> 11:30:27,855 ERROR [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (main) ISPN004017: Could not fetch transport: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /127.0.0.1:11322
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:76) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:35) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:16) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220) [commons-pool-1.6.redhat-6.jar:1.6.redhat-6]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:322) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:216) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:40) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:48) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:237) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> at com.example.Main.main(Main.java:41) [:]
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) [rt.jar:1.7.0_65]
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) [rt.jar:1.7.0_65]
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117) [rt.jar:1.7.0_65]
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:66) [infinispan-client-hotrod-6.1.1.ER2-redhat-1.jar:6.1.1.ER2-redhat-1]
> ... 10 more
> {code}
> This does not cause any malfunction, but it pollutes the client logs. Hot Rod clients recover fine when nodes go down, and these exceptions eventually disappear.
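> For context, a minimal client sketch showing the failover setup in which these messages are logged (hosts, ports, and class name are illustrative, assuming the Hot Rod Java client's ConfigurationBuilder API from this era): with more than one server listed, the client picks another server when 11322 goes down, and the stack traces above are what it prints while doing so:
> {code}
> import org.infinispan.client.hotrod.RemoteCache;
> import org.infinispan.client.hotrod.RemoteCacheManager;
> import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>
> public class FailoverClient {
>    public static void main(String[] args) {
>       // Listing more than one server lets the client fail over when one goes down.
>       ConfigurationBuilder builder = new ConfigurationBuilder();
>       builder.addServers("127.0.0.1:11322;127.0.0.1:11422");
>       RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
>       RemoteCache<String, String> cache = rcm.getCache();
>       cache.put("k", "v");   // retried against the remaining server if one is down
>       rcm.stop();
>    }
> }
> {code}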
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
[JBoss JIRA] (ISPN-4846) State transfer keeps trying to fetch transaction data after the cache was stopped
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4846?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4846:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> State transfer keeps trying to fetch transaction data after the cache was stopped
> ---------------------------------------------------------------------------------
>
> Key: ISPN-4846
> URL: https://issues.jboss.org/browse/ISPN-4846
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.0.CR1
> Reporter: Dan Berindei
> Fix For: 7.1.0.Beta1
>
>
> StateConsumerImpl doesn't check whether the cache has been stopped while it is fetching transaction data; it only stops when it can no longer find providers for the transactions.
> However, JGroupsTransport throws a generic CacheException when the channel is stopped, so the state transfer thread can enter a busy-wait loop, retrying the request for transaction data and immediately getting the CacheException again, filling the log with messages like this:
> {noformat}
> 19:32:28,237 WARN (remote-thread-NodeN-p42592-t1:) [StateConsumerImpl] ISPN000209: Failed to retrieve transactions for segments [10, 11, 12, 13, 14, 15, 17, 16, 19, 18, 21, 20, 23, 22, 25, 24, 27, 26, 29, 28, 42, 43, 40, 41, 46, 47, 44, 45, 51, 50, 49, 48, 55, 54, 53, 52, 59, 58, 57, 56] of cache testCache from node NodeM-53416
> org.infinispan.commons.CacheException: java.lang.IllegalStateException: channel is not connected
> at org.infinispan.commons.util.Util.rewrapAsCacheException(Util.java:655)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:176)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:290)
> at org.infinispan.statetransfer.StateConsumerImpl.getTransactions(StateConsumerImpl.java:766)
> at org.infinispan.statetransfer.StateConsumerImpl.requestTransactions(StateConsumerImpl.java:685)
> at org.infinispan.statetransfer.StateConsumerImpl.addTransfers(StateConsumerImpl.java:629)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:331)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:43)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.rebalance(StateTransferManagerImpl.java:116)
> {noformat}
> We should check whether the cache is stopped before retrying in StateConsumerImpl.requestTransactions. I also think we should change the stop order: it would make sense to stop the remote executor threads and the RpcDispatcher before we stop the channel.
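> A generic, self-contained sketch of the suggested guard (not the actual StateConsumerImpl code; all names are hypothetical): a retry loop that checks a running flag before each attempt, so it cannot busy-wait against a closed channel after the component is stopped:
> {code}
> import java.util.List;
> import java.util.concurrent.atomic.AtomicBoolean;
>
> public class GuardedRetry<T> {
>    /** Hypothetical stand-in for "ask this provider for the transaction data". */
>    public interface Request<T> {
>       T execute(String provider);
>    }
>
>    private final AtomicBoolean running = new AtomicBoolean(true);
>
>    /** Called from stop(): from now on no further retries are attempted. */
>    public void stop() {
>       running.set(false);
>    }
>
>    /** Tries each provider in turn, giving up as soon as the component is stopped. */
>    public T retryAgainstProviders(List<String> providers, Request<T> request) {
>       for (String provider : providers) {
>          if (!running.get()) {
>             return null;   // stopped: do not keep hammering a dead channel
>          }
>          try {
>             return request.execute(provider);
>          } catch (RuntimeException e) {
>             // log and fall through to the next provider, as the retry loop does today
>          }
>       }
>       return null;
>    }
> }
> {code}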
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)