[JBoss JIRA] (ISPN-2198) Cluster with non-shared JDBC cache store has too much entries after node failure
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2198?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2198:
------------------------------------
[~rvansa] could it be that this test was using the same DB table to store the cache entries for all the nodes? The configuration seems to suggest that, although if that were the case I would have expected primary key violations even before state transfer started.
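For reference, a minimal sketch of the two consistent shapes, based only on the attributes in the excerpt quoted below (all values are illustrative):
{code}
<!-- either: one table genuinely shared by all nodes (purge disabled so a
     joining node does not wipe data the other nodes rely on) -->
<string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                         passivation="false" preload="false" purge="false" shared="true">
    <property name="databaseType">MYSQL</property>
    <string-keyed-table prefix="memcached">
        <id-column name="id" type="VARCHAR(100)"/>
        <data-column name="value" type="BLOB(1200)"/>
    </string-keyed-table>
</string-keyed-jdbc-store>

<!-- or: shared="false", with a distinct table prefix on every node
     (node01 on the first node, node02 on the second, and so on) -->
<string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                         passivation="false" preload="false" purge="true" shared="false">
    <property name="databaseType">MYSQL</property>
    <string-keyed-table prefix="node02">
        <id-column name="id" type="VARCHAR(100)"/>
        <data-column name="value" type="BLOB(1200)"/>
    </string-keyed-table>
</string-keyed-jdbc-store>
{code}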
> Cluster with non-shared JDBC cache store has too much entries after node failure
> --------------------------------------------------------------------------------
>
> Key: ISPN-2198
> URL: https://issues.jboss.org/browse/ISPN-2198
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Affects Versions: 5.1.5.FINAL
> Reporter: Radim Vansa
> Attachments: cache_entries.csv, logs.zip, sfout.txt
>
>
> In a resilience test with a 4-node cluster where one node is killed, a weird situation appears. Before the node is killed, the nodes have these numbers of entries:
> 210602;215820;209400;203038 = 838860 entries
> After the kill the number of entries changes for a while:
> 210602;null;209400;203038
> 250602;null;269400;243038
> 290602;null;269400;273038
> 300602;null;289400;293038
> 300602;null;289400;293038
> 321218;null;296035;293038
> But then it stabilizes on
> 326899;null;305039;314165 = 946103 entries
> When node02 is restarted, it complains about duplicate entries:
> ERROR [org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore] (OOB-124,null) ISPN008024: Error while storing string key to database; key: '8Az4Ia2V5NzYzNDI=', buffer size of value: 1050 bytes: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '?8Az4Ia2V5NzYzNDI=' for key 'PRIMARY'
> Is this a bug or wrong configuration?
> Here is an excerpt from the configuration:
> <distributed-cache batching="false" indexing="NONE" l1-lifespan="0" mode="SYNC" name="memcachedCache" owners="2" remote-timeout="60000" start="EAGER" virtual-nodes="512">
>     <locking acquire-timeout="3000" concurrency-level="1000" isolation="REPEATABLE_READ" striping="false"/>
>     <transaction mode="NONE"/>
>     <state-transfer enabled="true" timeout="600000"/>
>     <eviction max-entries="-1" strategy="NONE"/>
>     <string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS" passivation="false" preload="false" purge="true" shared="false">
>         <property name="databaseType">MYSQL</property>
>         <string-keyed-table prefix="node01">
>             <id-column name="id" type="VARCHAR(100)"/>
>             <data-column name="value" type="BLOB(1200)"/>
>         </string-keyed-table>
>     </string-keyed-jdbc-store>
> </distributed-cache>
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-2853) Asymmetric Transactional Clustered Cache causes NullPointerExceptions on non Clustered members
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2853?page=com.atlassian.jira.plugin.... ]
Dan Berindei resolved ISPN-2853.
--------------------------------
Resolution: Won't Fix
This was never supposed to work: we do not support having the same cache defined with different clustering modes in the same cluster.
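For reference, this is the kind of setup the resolution refers to (a minimal sketch, not the attached Asymmetric.java; the cache name, keys and the dummy transaction manager lookup are only for illustration):
{code}
import javax.transaction.TransactionManager;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.transaction.lookup.DummyTransactionManagerLookup;

public class UnsupportedAsymmetricSetup {
   public static void main(String[] args) throws Exception {
      // Node A: "asymmetric" is a clustered, transactional cache
      ConfigurationBuilder clustered = new ConfigurationBuilder();
      clustered.clustering().cacheMode(CacheMode.REPL_SYNC)
               .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
               .transactionManagerLookup(new DummyTransactionManagerLookup());
      EmbeddedCacheManager nodeA =
            new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
      nodeA.defineConfiguration("asymmetric", clustered.build());

      // Node B: joins the same JGroups cluster but defines the *same* cache name
      // with a different clustering mode -- the combination that is not supported
      ConfigurationBuilder local = new ConfigurationBuilder();
      local.clustering().cacheMode(CacheMode.LOCAL);
      EmbeddedCacheManager nodeB =
            new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
      nodeB.defineConfiguration("asymmetric", local.build());
      nodeB.getCache("asymmetric");

      // A transactional write on node A sends prepare/commit/completion messages
      // that node B has no remote transaction state for
      TransactionManager tm =
            nodeA.getCache("asymmetric").getAdvancedCache().getTransactionManager();
      tm.begin();
      nodeA.getCache("asymmetric").put("k", "v");
      tm.commit();

      nodeA.stop();
      nodeB.stop();
   }
}
{code}
The supported setup is to define the cache with the same clustering configuration on every node that defines it.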
> Asymmetric Transactional Clustered Cache causes NullPointerExceptions on non Clustered members
> ----------------------------------------------------------------------------------------------
>
> Key: ISPN-2853
> URL: https://issues.jboss.org/browse/ISPN-2853
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.1.6.FINAL
> Reporter: William Burns
> Assignee: Adrian Nistor
> Priority: Minor
> Labels: onboard
> Attachments: Asymmetric.java
>
>
> We utilize asymmetric clusters to prevent unneeded communication with members that don't need to participate in a given cache. This works fine in that cache updates are not sent to those nodes. However, I noticed that if this cache is also transactional, then members that aren't clustered for this cache still receive transaction prepare and commit messages, which cause NullPointerExceptions since those nodes have no remote transaction registered for the cache.
> Here is trace output from a sample test case that shows the error:
> {code}
> 15164 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher - Attempting to execute command: TxCompletionNotificationCommand{ xid=null, internalId=0, gtx=GlobalTransaction:<wburns-1521>:1:local, cacheName=asymmetric} [sender=wburns-1521]
> 15164 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, gtx=GlobalTransaction:<wburns-1521>:1:local, cacheName=asymmetric}
> 15164 [OOB-3,wburns-45269] TRACE org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand - Processing completed transaction GlobalTransaction:<wburns-1521>:1:local
> 15164 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Exception executing command
> java.lang.NullPointerException
> at org.infinispan.transaction.TransactionTable.removeRemoteTransaction(TransactionTable.java:340)
> at org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand.perform(TxCompletionNotificationCommand.java:92)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:127)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:136)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:162)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:114)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:226)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:203)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:601)
> at org.jgroups.JChannel.up(JChannel.java:716)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
> at org.jgroups.protocols.RSVP.up(RSVP.java:192)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:759)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365)
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1180)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> 15167 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Unable to execute command, got invalid response ExceptionResponse
> 20170 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.jboss.AbstractJBossMarshaller - Start unmarshaller after retrieving marshaller from thread local
> 20170 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.VersionAwareMarshaller - Read version 510
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.jboss.AbstractJBossMarshaller - Start unmarshaller after retrieving marshaller from factory
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.VersionAwareMarshaller - Read version 510
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.jboss.AbstractJBossMarshaller - Stop unmarshaller
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.marshall.jboss.AbstractJBossMarshaller - Stop unmarshaller
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher - Attempting to execute command: TxCompletionNotificationCommand{ xid=null, internalId=0, gtx=GlobalTransaction:<wburns-1521>:2:local, cacheName=asymmetric} [sender=wburns-1521]
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Calling perform() on TxCompletionNotificationCommand{ xid=null, internalId=0, gtx=GlobalTransaction:<wburns-1521>:2:local, cacheName=asymmetric}
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand - Processing completed transaction GlobalTransaction:<wburns-1521>:2:local
> 20171 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Exception executing command
> java.lang.NullPointerException
> at org.infinispan.transaction.TransactionTable.removeRemoteTransaction(TransactionTable.java:340)
> at org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand.perform(TxCompletionNotificationCommand.java:92)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:127)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:136)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:162)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:114)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:226)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:203)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:601)
> at org.jgroups.JChannel.up(JChannel.java:716)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
> at org.jgroups.protocols.RSVP.up(RSVP.java:192)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:759)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365)
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1180)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> 20173 [OOB-3,wburns-45269] TRACE org.infinispan.remoting.InboundInvocationHandlerImpl - Unable to execute command, got invalid response ExceptionResponse
> {code}
> As a side note, these NPEs do not appear to be propagated to the client, since the commands are sent with a response mode of GET_NONE. However, we have a site that will occasionally get the NPE sent back to the updating member, which then causes a CacheException, forcing the original node's transaction to be rolled back and the operation to be retried.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5463) ClassCastException: org.infinispan.query.remote.indexing.ProtobufValueWrapper cannot be cast to [B
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5463?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5463:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1225104|https://bugzilla.redhat.com/show_bug.cgi?id=1225104] from ON_QA to VERIFIED
> ClassCastException: org.infinispan.query.remote.indexing.ProtobufValueWrapper cannot be cast to [B
> --------------------------------------------------------------------------------------------------
>
> Key: ISPN-5463
> URL: https://issues.jboss.org/browse/ISPN-5463
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Querying
> Affects Versions: 7.2.1.Final
> Reporter: Philippe Cuvecle
> Assignee: Adrian Nistor
> Fix For: 7.2.2.Final, 8.0.0.Alpha2, 8.0.0.Final
>
>
> I am trying to use Hot Rod + Protobuf + event notification, and I get a ClassCastException on the Infinispan server side when trying to put an object.
> Server side :
> ---------------
> Infinispan 7.2.1 in server mode
> I have declared my proto file in the MBean ProtobufMetadataManager
> Cache config :
> <replicated-cache name="myReplicatedCache" mode="SYNC" start="EAGER">
> <indexing index="ALL" auto-config="true"/>
> </replicated-cache>
> Client side
> -------------
> My cache object is modeled as a POJO with 5 string fields and its proto definition file
> I am able to put, get, and even query successfully until I try to register a ClientListener. Once I do, the first put fails with this stack trace:
> 16:37:36,186 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (HotRodServerWorker-7-5) ISPN000136: Execution error: org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [java.lang.ClassCastException] while invoking method [public void org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(org.infinispan.notifications.cachelistener.event.CacheEntryEvent)] on listener instance: org.infinispan.server.hotrod.ClientListenerRegistry$StatelessClientEventSender@3909d917
> at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:291) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:22) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1160) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invokeNoChecks(CacheNotifierImpl.java:1155) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1132) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyCacheEntryCreated(CacheNotifierImpl.java:290) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.notifyCommitEntry(ClusteringDependentLogic.java:138) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$InvalidationLogic.commitSingleEntry(ClusteringDependentLogic.java:331) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$ReplicationLogic.commitSingleEntry(ClusteringDependentLogic.java:372) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.commitEntry(ClusteringDependentLogic.java:108) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntry(EntryWrappingInterceptor.java:371) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitEntryIfNeeded(EntryWrappingInterceptor.java:549) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntries(EntryWrappingInterceptor.java:348) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:422) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:453) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:195) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.query.backend.QueryInterceptor.visitPutKeyValueCommand(QueryInterceptor.java:164)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:88) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:55) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:44) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:324) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:256) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:115) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:191) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:177) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.compat.BaseTypeConverterInterceptor.visitPutKeyValueCommand(BaseTypeConverterInterceptor.java:69) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:44) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.cache.impl.CacheImpl.putInternal(CacheImpl.java:1097) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1089) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.cache.impl.DecoratedCache.put(DecoratedCache.java:522) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.put(AbstractDelegatingAdvancedCache.java:236) [infinispan-core.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.CacheDecodeContext.put(CacheDecodeContext.scala:215) [infinispan-server-hotrod.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeValue(HotRodDecoder.scala:132) [infinispan-server-hotrod.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:50) [infinispan-server-hotrod.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:208) [infinispan-server-hotrod.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45) [infinispan-server-hotrod.jar:7.2.1.Final]
> at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:31) [infinispan-server-hotrod.jar:7.2.1.Final]
> at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32) [infinispan-server-core.jar:7.2.1.Final]
> at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:31) [infinispan-server-hotrod.jar:7.2.1.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-all-4.0.25.Final.jar:4.0.25.Final]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]
> Caused by: java.lang.ClassCastException: org.infinispan.query.remote.indexing.ProtobufValueWrapper cannot be cast to [B
> at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(ClientListenerRegistry.scala:183) [infinispan-server-hotrod.jar:7.2.1.Final]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_65]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_65]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_65]
> at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
> at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286) [infinispan-core.jar:7.2.1.Final]
> ... 73 more
> Client side code :
> ----------------------
> ...
> ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
> configurationBuilder.addServer().host(IP_ADDRESS).port(11222).marshaller(new ProtoStreamMarshaller());
> RemoteCacheManager remoteCacheManager = new RemoteCacheManager(configurationBuilder.build());
> EventPrintListener listener = new EventPrintListener();
> SerializationContext srcCtx = ProtoStreamMarshaller.getSerializationContext(remoteCacheManager);
> srcCtx.registerProtoFiles(FileDescriptorSource.fromResources("/myObject.proto"));
> srcCtx.registerMarshaller(new MyObjectMarshaller());
>
> // Obtain the cache
> RemoteCache<Integer, MyObject> remoteCache = remoteCacheManager.getCache("myReplicatedCache");
> // Add remote listener
> remoteCache.addClientListener(listener); // no issue if this is removed
>
> MyObject o = new MyObject();
> o.setA("XXXX");
> o.setB("YYYY");
> o.setC("ZZZZ");
> o.setD("AAAA");
> o.setE("BBBB");
>
> remoteCache.put(1, o); // server stacktrace
> Listener :
> -----------
> @ClientListener
> public class EventPrintListener<K> {
>
>     @ClientCacheEntryCreated
>     public void createdEntry(ClientCacheEntryCreatedEvent<K> event) {
>         System.out.printf("** Key '%s' was created\n", event.getKey());
>     }
>
>     @ClientCacheEntryModified
>     public void modifiedEntry(ClientCacheEntryModifiedEvent<K> event) {
>         System.out.printf("** Key '%s' was modified\n", event.getKey());
>     }
>
>     @ClientCacheEntryRemoved
>     public void removedEntry(ClientCacheEntryRemovedEvent<K> event) {
>         System.out.printf("** Key '%s' was removed\n", event.getKey());
>     }
> }
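> For completeness, a minimal sketch of what MyObjectMarshaller could look like (assuming MyObject also exposes getters getA()..getE(), the proto fields are named a..e, and the message is declared as "MyObject" in myObject.proto; those names are illustrative):
> {code}
> import java.io.IOException;
> import org.infinispan.protostream.MessageMarshaller;
>
> public class MyObjectMarshaller implements MessageMarshaller<MyObject> {
>
>     @Override
>     public Class<? extends MyObject> getJavaClass() {
>         return MyObject.class;
>     }
>
>     @Override
>     public String getTypeName() {
>         // must match the fully qualified message name declared in myObject.proto
>         return "MyObject";
>     }
>
>     @Override
>     public MyObject readFrom(ProtoStreamReader reader) throws IOException {
>         MyObject o = new MyObject();
>         o.setA(reader.readString("a"));
>         o.setB(reader.readString("b"));
>         o.setC(reader.readString("c"));
>         o.setD(reader.readString("d"));
>         o.setE(reader.readString("e"));
>         return o;
>     }
>
>     @Override
>     public void writeTo(ProtoStreamWriter writer, MyObject o) throws IOException {
>         writer.writeString("a", o.getA());
>         writer.writeString("b", o.getB());
>         writer.writeString("c", o.getC());
>         writer.writeString("d", o.getD());
>         writer.writeString("e", o.getE());
>     }
> }
> {code}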
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5518) Introduce a separate thread pool for async cache operations
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5518?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5518:
-------------------------------
Status: Open (was: New)
> Introduce a separate thread pool for async cache operations
> -----------------------------------------------------------
>
> Key: ISPN-5518
> URL: https://issues.jboss.org/browse/ISPN-5518
> Project: Infinispan
> Issue Type: Feature Request
> Components: Configuration, Core
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> At the moment, it is very easy for an application to start a huge number of putAsync operations and fill the transport executor's thread pool, delaying internal work such as state transfer. Increasing the size of the transport executor's thread pool won't work, because that would in turn fill the remote commands and JGroups' OOB thread pools, with the same effect.
> If the cache async operations used a different thread pool, it would be possible to configure more {{remoteCommandsThreadPool}} threads than {{asyncOperationsThreadPool}} threads, avoiding this problem.
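> As a rough sketch of the failure mode (illustrative only; the cache name and keys are made up, and {{cacheManager}} is assumed to be an already-started clustered cache manager), a burst of fire-and-forget async writes is enough:
> {code}
> Cache<String, String> cache = cacheManager.getCache("someDistributedCache");
> for (int i = 0; i < 1_000_000; i++) {
>    // at the moment each async put is executed on the shared transport executor,
>    // so a large burst keeps all of its threads busy and delays internal work
>    // such as state transfer
>    cache.putAsync("key-" + i, "value-" + i);
> }
> {code}
> With a separate {{asyncOperationsThreadPool}}, such a burst would only exhaust that pool, leaving {{remoteCommandsThreadPool}} threads free for internal work.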
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5518) Introduce a separate thread pool for async cache operations
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5518?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5518:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/3519
> Introduce a separate thread pool for async cache operations
> -----------------------------------------------------------
>
> Key: ISPN-5518
> URL: https://issues.jboss.org/browse/ISPN-5518
> Project: Infinispan
> Issue Type: Feature Request
> Components: Configuration, Core
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> At the moment, it is very easy for an application to start a huge number of putAsync operations and fill the transport executor's thread pool, delaying internal work such as state transfer. Increasing the size of the transport executor's thread pool won't work, because that would in turn fill the remote commands and JGroups' OOB thread pools, with the same effect.
> If the cache async operations used a different thread pool, it would be possible to configure more {{remoteCommandsThreadPool}} threads than {{asyncOperationsThreadPool}} threads, avoiding this problem.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-5515:
------------------------------------
> I think these are really tricky ideas which should be discussed on the mailing list, I noticed this JIRA by pure luck and find it concerning that such decisions are made without any wider discussion.
We already discussed this on the mailing list, and the conclusion was to implement [graceful restart|https://github.com/infinispan/infinispan/wiki/Graceful-shutdown-&...]. This issue is not really about implementing new functionality, it's about automating a recommendation we already have _for users who want it_.
> it's possible the new starting node starts while "thinking it's first", but then actually merges with a running cluster. The cluster detection protocols aren't foolproof, and you're relying on timeouts to be configured safely (when are they ever?).
If a node starts in a separate partition by itself, the behaviour with "purge on join" enabled will be exactly as it is now - not better, but not worse either.
> it's unrealistic to push such a requirement to the "admin's responsibility", not least because node restarts might not be under their control
A node restart will not affect nodes that are already running in any way.
Also, this option will be disabled by default, and if the admin can't control the order in which nodes start, he should definitely not enable it.
> even with this design, the majority of cachestores are cleared so there is an assumption that "data loss is fine" for the user: so why even bother trying to keep a small portion of it at risk of consistency trouble?
In a replicated cache, it's not a small portion of the data, it's all the data.
I agree that it makes a lot less sense in a distributed cache: in theory you could shut down the cluster such that all the state is transferred to a single node and all the data is preserved in that node's store, but it's definitely not something you'd want to do on a regular basis.
> this design seems to favour something else above correctness, and I'm not sure what "something else" you're aiming at.. why work hard to not wipe a single cachestore?
These are my assumptions for using this option:
* The cache is replicated and the number of nodes is small
* Losing data is not fatal, as there is a backing store
* Reading a stale value *is* fatal
* Reading data from the canonical store is slow
I realize these assumptions are quite narrow, and most users will not use this option. But for applications that do fit these assumptions, I think it will help. And it would be back-portable to 7.2.x, unlike the graceful restart work.
> I agree with you that this is an improvement over the current state, but I don't see why you would implement tricky code to provide a tricky solution when all that's needed is to remove the preloading option from the configuration. You'll be done with much less work and get a more reliable solution.
I renamed the issue, since it's not really about preload - having preload does complicate the code, but stale values are possible with or without preload enabled.
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it doesn't own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5515:
-------------------------------
Summary: Purge store if there is another node already running (was: Preload only on the node that starts up the first)
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it doesn't own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Preload only on the node that starts up the first
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-5515:
---------------------------------------
I think these are really tricky ideas which should be discussed on the mailing list, I noticed this JIRA by pure luck and find it concerning that such decisions are made without any wider discussion.
What I find most tricky:
- it's possible the new starting node starts while "thinking it's first", but then actually merges with a running cluster. The cluster detection protocols aren't foolproof, and you're relying on timeouts to be configured safely (when are they ever?).
- it's unrealistic to push such a requirement to the "admin's responsibility", not least because node restarts might not be under their control
- even with this design, the majority of cachestores are cleared so there is an assumption that "data loss is fine" for the user: so why even bother trying to keep a small portion of it at risk of consistency trouble?
- this design seems to favour something else above correctness, and I'm not sure what "something else" you're aiming at.. why work hard to not wipe a single cachestore?
I agree with you that this is an improvement over the current state, but I don't see why you would implement tricky code to provide a tricky solution when all that's needed is to remove the preloading option from the configuration. You'll be done with much less work and get a more reliable solution.
Unless I'm missing the important reason to load this data?
> Preload only on the node that starts up the first
> -------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it doesn't own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)