[JBoss JIRA] (ISPN-5077) Custom remote events can be slightly inefficient
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-5077?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5077:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> Custom remote events can be slightly inefficient
> ------------------------------------------------
>
> Key: ISPN-5077
> URL: https://issues.jboss.org/browse/ISPN-5077
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Affects Versions: 7.0.2.Final
> Reporter: Galder Zamarreño
> Priority: Major
> Fix For: 9.4.4.Final
>
>
> Something we might want to improve in the Hot Rod 3.0 protocol:
> [16:40] <galderz> i've been thinking further about converters, and I think i've found a slight mismatch between what converter means for embedded listeners vs remote listeners
> [16:40] <wburns> oh yeah?
> [16:40] <galderz> for embedded listeners, it essentially transforms what you see as `value`
> [16:41] <galderz> with the knowledge that key and metadata information will be shipped
> [16:41] <galderz> the way i mapped converter to remote listeners is that whatever the converter returns, we ship that, as is, to the client
> [16:41] <galderz> so, if a remote listener wants a custom event that includes key + value
> [16:41] <galderz> it needs to develop a converter impl that returns bytes containing key + value
> [16:41] <galderz> which is inefficient because you are passing around the key twice
> [16:42] <galderz> once as part of the event itself, and again inside the converted value
> [16:42] <galderz> inefficient from the POV of shipping stuff around from other nodes to where the cluster listener is located
> [16:44] <wburns> yeah makes sense
> [16:44] <galderz> not a major issue but not easy to fix without changing semantics or public protocol
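> A minimal sketch of the kind of converter discussed above, assuming the 7.x {{CacheEventConverter}} API (the class name and wire layout are illustrative): the custom payload has to carry the key itself, while the same key is also shipped in the event envelope, hence the duplication.
> {code}
> import java.nio.ByteBuffer;
> import java.nio.charset.StandardCharsets;
>
> import org.infinispan.metadata.Metadata;
> import org.infinispan.notifications.cachelistener.filter.CacheEventConverter;
> import org.infinispan.notifications.cachelistener.filter.EventType;
>
> // Illustrative converter: packs key + new value into one byte[] so the remote
> // client can rebuild the pair from the custom event. The key still travels a
> // second time as part of the event itself.
> public class KeyValuePairConverter implements CacheEventConverter<String, String, byte[]> {
>    @Override
>    public byte[] convert(String key, String oldValue, Metadata oldMetadata,
>                          String newValue, Metadata newMetadata, EventType eventType) {
>       byte[] k = key.getBytes(StandardCharsets.UTF_8);
>       byte[] v = newValue == null ? new byte[0] : newValue.getBytes(StandardCharsets.UTF_8);
>       // layout: [key length][key bytes][value bytes]
>       return ByteBuffer.allocate(4 + k.length + v.length)
>             .putInt(k.length).put(k).put(v).array();
>    }
> }
> {code}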
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-5075) org.infinispan.IllegalLifecycleStateException in query tests
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-5075?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5075:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> org.infinispan.IllegalLifecycleStateException in query tests
> ------------------------------------------------------------
>
> Key: ISPN-5075
> URL: https://issues.jboss.org/browse/ISPN-5075
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Query
> Affects Versions: 7.1.0.Alpha1
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Priority: Major
> Fix For: 9.4.4.Final
>
>
> Many query tests log an error after running:
> {code}
> Exception in thread "Hibernate Search: async deletion of index segments-1" org.infinispan.IllegalLifecycleStateException: ISPN000323: Cache 'LuceneIndexesMetadata' is in 'TERMINATED' state and so it does not accept new invocations. Either restart it or recreate the cache container.
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:89)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:77)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:44)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:422)
> at org.infinispan.cache.impl.DecoratedCache.get(DecoratedCache.java:427)
> at org.infinispan.lucene.impl.FileListOperations.getFileList(FileListOperations.java:160)
> at org.infinispan.lucene.impl.FileListOperations.deleteFileName(FileListOperations.java:129)
> at org.infinispan.lucene.impl.DirectoryImplementor.deleteFile(DirectoryImplementor.java:64)
> at org.infinispan.lucene.impl.DirectoryLuceneV4$DeleteTask.run(DirectoryLuceneV4.java:195)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> That error does not cause the tests to fail; it is caused by the recently implemented async deletion of index segments. We need to check whether the executors are flushed correctly when the cache manager stops.
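> A rough sketch of that check, under the assumption that the async deletions run on a dedicated executor (the names below are illustrative, not the actual Infinispan code): drain the executor before the index caches are stopped, so no deletion task runs against a TERMINATED cache.
> {code}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.TimeUnit;
>
> final class AsyncDeletionShutdown {
>    // Illustrative shutdown step: stop accepting new delete tasks and wait for
>    // the in-flight ones to finish before the cache manager stops the caches.
>    static void drain(ExecutorService deletionExecutor) throws InterruptedException {
>       deletionExecutor.shutdown();
>       if (!deletionExecutor.awaitTermination(30, TimeUnit.SECONDS)) {
>          deletionExecutor.shutdownNow();
>       }
>    }
> }
> {code}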
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-4903) ServerFailureRetrySingleOwnerTest doesn't actually test client retry
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-4903?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4903:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> ServerFailureRetrySingleOwnerTest doesn't actually test client retry
> --------------------------------------------------------------------
>
> Key: ISPN-4903
> URL: https://issues.jboss.org/browse/ISPN-4903
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite - Server
> Affects Versions: 7.0.0.CR2
> Reporter: Dan Berindei
> Priority: Major
> Fix For: 9.4.4.Final
>
> Attachments: ServerFailureRetrySingleOwnerTest.java
>
>
> With {{useSynchronization = true}} (the default, before ISPN-4166 is integrated), the {{SuspectException}} thrown by the listener is swallowed by the transaction manager and the client doesn't retry. The test doesn't pick that up because the exception is thrown _after_ the entry was updated in the data container (a regular SuspectException would be thrown before).
> I changed the configuration to {{useSynchronization = false}}, but it didn't work because the {{SuspectException}} is wrapped in a {{CacheListenerException}}, so the client throws an exception instead of retrying. I also changed the test to use an interceptor instead of a listener, but then I got a {{ClassCastException}}:
> {noformat}
> Caused by: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:424)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:429)
> at org.infinispan.server.hotrod.Decoder2x$.customReadKey(Decoder2x.scala:285)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeKey(HotRodDecoder.scala:156)
> at org.infinispan.server.core.AbstractProtocolDecoder.org$infinispan$server$core$AbstractProtocolDecoder$$decodeKey(AbstractProtocolDecoder.scala:176)
> at org.infinispan.server.core.AbstractProtocolDecoder.decodeDispatch(AbstractProtocolDecoder.scala:71)
> ... 14 more
> {noformat}
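> For reference, a minimal sketch of the configuration change mentioned above, assuming the programmatic {{ConfigurationBuilder}} API ({{RetryTestConfig}} is just an illustrative holder):
> {code}
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.transaction.TransactionMode;
>
> public class RetryTestConfig {
>    // Enlist the cache as a full XAResource instead of a Synchronization,
>    // i.e. the useSynchronization = false variant tried in the description.
>    static Configuration build() {
>       return new ConfigurationBuilder()
>             .transaction()
>                .transactionMode(TransactionMode.TRANSACTIONAL)
>                .useSynchronization(false)
>             .build();
>    }
> }
> {code}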
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-4879) Log a clear error message when an incompatible node joins the cluster
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-4879?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4879:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> Log a clear error message when an incompatible node joins the cluster
> ---------------------------------------------------------------------
>
> Key: ISPN-4879
> URL: https://issues.jboss.org/browse/ISPN-4879
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.0.CR2
> Reporter: Dan Berindei
> Priority: Major
> Fix For: 9.4.4.Final
>
>
> We don't check the Infinispan version when a node joins the cluster. If the node has an incompatible version, it will most likely fail to join, but the error message is not at all straightforward. As an example:
> {noformat}
> Exception in thread "main" org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.start() throws java.lang.Exception on object of type StateTransferManagerImpl
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:216)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:764)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:584)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416)
> at ch.nexustelecom.lbd.engine.ImsiCache.init(ImsiCache.java:49)
> at ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120)
> at ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169)
> at ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196)
> at ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83)
> Caused by: org.infinispan.commons.CacheException: java.lang.ClassNotFoundException: org.infinispan.partionhandling.impl.AvailabilityMode
> at org.infinispan.commons.util.Util.rewrapAsCacheException(Util.java:655)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:176)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536)
> at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:388)
> at org.infinispan.topology.LocalTopologyManagerImpl.join(LocalTopologyManagerImpl.java:102)
> at org.infinispan.statetransfer.StateTransferManagerImpl.start(StateTransferManagerImpl.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> ... 14 more
> Caused by: java.lang.ClassNotFoundException: org.infinispan.partionhandling.impl.AvailabilityMode
> at java.net.URLClassLoader$1.run(Unknown Source)
> at java.net.URLClassLoader$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(Unknown Source)
> at java.lang.ClassLoader.loadClass(Unknown Source)
> at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
> at java.lang.ClassLoader.loadClass(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Unknown Source)
> at org.jboss.marshalling.AbstractClassResolver.loadClass(AbstractClassResolver.java:131)
> at org.jboss.marshalling.AbstractClassResolver.resolveClass(AbstractClassResolver.java:112)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadClassDescriptor(RiverUnmarshaller.java:1002)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1239)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:272)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.topology.CacheStatusResponse$Externalizer.readObject(CacheStatusResponse.java:76)
> at org.infinispan.topology.CacheStatusResponse$Externalizer.readObject(CacheStatusResponse.java:62)
> at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:424)
> at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:221)
> at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.remoting.responses.SuccessfulResponse$Externalizer.readObject(SuccessfulResponse.java:79)
> at org.infinispan.remoting.responses.SuccessfulResponse$Externalizer.readObject(SuccessfulResponse.java:64)
> at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:424)
> at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:221)
> at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:135)
> at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
> at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:390)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:674)
> at org.jgroups.JChannel.up(JChannel.java:733)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
> at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:146)
> at org.jgroups.protocols.RSVP.up(RSVP.java:190)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:379)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1042)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1034)
> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:752)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:399)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:610)
> at org.jgroups.protocols.BARRIER.up(BARRIER.java:152)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:297)
> at org.jgroups.protocols.MERGE3.up(MERGE3.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:277)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1568)
> at org.jgroups.protocols.TP$MyHandler.run(TP.java:1787)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> Optionally, we could allow the user to configure an "application version" and prevent nodes with different application versions from joining the same cluster.
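> A hypothetical sketch of what such a check could look like (none of these names exist in Infinispan today): the joiner ships its version with the join request and the coordinator rejects a mismatch with an explicit message, instead of failing later with an unrelated {{ClassNotFoundException}}.
> {code}
> // Illustrative only: a join-time compatibility check with a clear error message.
> public class JoinVersionCheck {
>    static void checkCompatible(String coordinatorVersion, String joinerVersion, String joinerName) {
>       if (!coordinatorVersion.equals(joinerVersion)) {
>          throw new IllegalStateException(
>                "Node " + joinerName + " runs Infinispan " + joinerVersion +
>                " but the cluster coordinator runs " + coordinatorVersion +
>                "; refusing the join request");
>       }
>    }
> }
> {code}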
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-4846) State transfer keeps trying to fetch transaction data after the cache was stopped
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-4846?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4846:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> State transfer keeps trying to fetch transaction data after the cache was stopped
> ---------------------------------------------------------------------------------
>
> Key: ISPN-4846
> URL: https://issues.jboss.org/browse/ISPN-4846
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.0.CR1
> Reporter: Dan Berindei
> Priority: Major
> Fix For: 9.4.4.Final
>
>
> StateConsumerImpl doesn't check whether the cache is stopped while fetching transaction data; it only stops when it can no longer find providers for the transactions.
> However, JGroupsTransport throws a generic CacheException when the channel is stopped, so the state transfer thread can enter a busy-wait loop: it retries the request for transaction data, immediately gets the CacheException again, and fills the log with messages like this:
> {noformat}
> 19:32:28,237 WARN (remote-thread-NodeN-p42592-t1:) [StateConsumerImpl] ISPN000209: Failed to retrieve transactions for segments [10, 11, 12, 13, 14, 15, 17, 16, 19, 18, 21, 20, 23, 22, 25, 24, 27, 26, 29, 28, 42, 43, 40, 41, 46, 47, 44, 45, 51, 50, 49, 48, 55, 54, 53, 52, 59, 58, 57, 56] of cache testCache from node NodeM-53416
> org.infinispan.commons.CacheException: java.lang.IllegalStateException: channel is not connected
> at org.infinispan.commons.util.Util.rewrapAsCacheException(Util.java:655)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:176)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:290)
> at org.infinispan.statetransfer.StateConsumerImpl.getTransactions(StateConsumerImpl.java:766)
> at org.infinispan.statetransfer.StateConsumerImpl.requestTransactions(StateConsumerImpl.java:685)
> at org.infinispan.statetransfer.StateConsumerImpl.addTransfers(StateConsumerImpl.java:629)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:331)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:43)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.rebalance(StateTransferManagerImpl.java:116)
> {noformat}
> We should check whether the cache is stopped before retrying in StateConsumerImpl.requestTransactions. I also think we should change the stop order: it would make sense to stop the remote executor threads and the RpcDispatcher before stopping the channel.
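> A small sketch of the suggested guard (illustrative names, not the actual StateConsumerImpl code): re-check that the component is still running before every retry, so a stopped cache breaks out of the loop instead of busy-waiting on the {{CacheException}}.
> {code}
> import java.util.function.BooleanSupplier;
>
> import org.infinispan.commons.CacheException;
>
> final class TransactionDataRetry {
>    interface Source {
>       void requestTransactions() throws CacheException;
>    }
>
>    // Illustrative retry loop: give up as soon as the cache is no longer running.
>    static void retryWhileRunning(Source source, BooleanSupplier isRunning) {
>       while (isRunning.getAsBoolean()) {
>          try {
>             source.requestTransactions();
>             return;
>          } catch (CacheException e) {
>             // re-check isRunning before the next attempt instead of retrying blindly
>          }
>       }
>    }
> }
> {code}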
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-4817) Cluster Listener replication of listener to remote dist nodes needs privilege action
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-4817?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4817:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> Cluster Listener replication of listener to remote dist nodes needs privilege action
> ------------------------------------------------------------------------------------
>
> Key: ISPN-4817
> URL: https://issues.jboss.org/browse/ISPN-4817
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 7.0.0.CR1
> Reporter: William Burns
> Priority: Critical
> Fix For: 9.4.4.Final
>
>
> The ClusterListenerReplicateCallable needs to add the listener in a privileged block so that it can work properly.
> I was also wondering whether we need to change all of our messages to send along the current Subject so that it can be properly applied to remote invocations as well. In that case we wouldn't need to wrap the listener invocation.
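> A minimal sketch of the first point, assuming the standard {{AccessController}} API (the wrapper class name is illustrative): perform the listener registration inside a privileged block so it succeeds when a SecurityManager is active.
> {code}
> import java.security.AccessController;
> import java.security.PrivilegedAction;
>
> import org.infinispan.Cache;
>
> final class PrivilegedListenerRegistration {
>    // Illustrative helper: add the replicated cluster listener with elevated privileges.
>    static void addListener(Cache<?, ?> cache, Object listener) {
>       AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
>          cache.addListener(listener);
>          return null;
>       });
>    }
> }
> {code}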
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-4624) Allow custom partition handling strategy
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-4624?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4624:
----------------------------------
Fix Version/s: 9.4.4.Final
(was: 9.4.3.Final)
> Allow custom partition handling strategy
> ----------------------------------------
>
> Key: ISPN-4624
> URL: https://issues.jboss.org/browse/ISPN-4624
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.0.Beta1
> Reporter: Dan Berindei
> Priority: Major
> Labels: partition_handling
> Fix For: 9.4.4.Final
>
>
> Users should be able to configure a custom PartitionHandlingStrategy, which should also be able to specify the behaviour on merge.
> We might want to merge this with the RebalancePolicy, which was also supposed to be configurable.
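> One possible shape for such a pluggable strategy (hypothetical interface, not an existing Infinispan API), covering both the split and the merge cases:
> {code}
> import java.util.List;
>
> import org.infinispan.remoting.transport.Address;
>
> public interface CustomPartitionHandlingStrategy {
>    // Decide how a partition reacts when members are lost,
>    // e.g. stay available or enter a degraded mode.
>    void onPartition(List<Address> currentMembers, List<Address> lostMembers);
>
>    // Decide how state is reconciled when the partitions merge back together.
>    void onMerge(List<Address> mergedMembers);
> }
> {code}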
--
This message was sent by Atlassian Jira
(v7.12.1#712002)