[JBoss JIRA] (ISPN-7672) NonTotalOrderTxPerCacheInboundInvocationHandler throws warning when adding cache entry using Spring Session
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7672?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7672:
------------------------------------
[~galder.zamarreno] -1 to store the version inside the value, I have a feeling it's going to require extra allocations. And I don't know how it would interact with [~gustavonalle]'s transcoding work... For ISPN-4972, I think the HotRod server should use the functional API to execute the replacement based solely on the version.
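For context, the replace-based-on-version idea in the last sentence can be sketched without Infinispan at all: a version-guarded replace is one atomic read-compare-swap keyed on the version, so the version never has to live inside the stored value. The types below ({{Versioned}}, {{replaceWithVersion}}) are hypothetical illustrations, not Infinispan's functional API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class VersionedReplaceSketch {
    // Value stored together with its version; record equality gives us CAS semantics.
    record Versioned<V>(V value, long version) {}

    // The replace succeeds only if the stored version matches the one the client saw.
    static <K, V> boolean replaceWithVersion(ConcurrentHashMap<K, Versioned<V>> cache,
                                             K key, long expectedVersion, V newValue) {
        Versioned<V> current = cache.get(key);
        if (current == null || current.version() != expectedVersion) {
            return false;
        }
        // ConcurrentHashMap.replace(key, old, new) is the atomic compare-and-swap step.
        return cache.replace(key, current, new Versioned<>(newValue, expectedVersion + 1));
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Versioned<String>> cache = new ConcurrentHashMap<>();
        cache.put("session", new Versioned<>("v1", 1L));
        System.out.println(replaceWithVersion(cache, "session", 1L, "v2")); // true
        System.out.println(replaceWithVersion(cache, "session", 1L, "v3")); // false: version moved to 2
    }
}
```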
> NonTotalOrderTxPerCacheInboundInvocationHandler throws warning when adding cache entry using Spring Session
> -----------------------------------------------------------------------------------------------------------
>
> Key: ISPN-7672
> URL: https://issues.jboss.org/browse/ISPN-7672
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud Integrations, Spring Integration
> Affects Versions: 9.0.0.CR4
> Reporter: Sebastian Łaskawiec
> Assignee: Galder Zamarreño
> Priority: Blocker
> Fix For: 9.1.0.Final
>
>
> When I try to add an entry using the Spring Session integration with a [transactional cache|https://github.com/slaskawi/presentations/blob/master/2017_spring_s...], the server throws a warning:
> {code}
> [transactions-repository-1-2cbrv] 06:53:40,773 WARN [org.infinispan.remoting.inboundhandler.NonTotalOrderTxPerCacheInboundInvocationHandler] (remote-thread--p2-t18) ISPN000071: Caught exception when handling command DistributedExecuteCommand [cache=Cache 'sessions'@transactions-repository-1-2cbrv, keys=[], callable=ClusterEventCallable{identifier=b345211e-fbd7-4305-b3a6-6979301e0360, events=[ClusterEvent {type=CACHE_ENTRY_CREATED, cache=Cache 'sessions'@transactions-repository-1-2cbrv, key=[B@8c75820, value=[B@76856353, oldValue=null, transaction=RecoveryAwareGlobalTransaction{xid=< 131077, 29, 36, 0000000000-1-1-84170374-96-629488-44-62370001349, 0000000000-1-1-84170374-96-629488-44-62370001400000000 >, internalId=562954248388609} GlobalTx:transactions-repository-1-cwk6f:1, retryCommand=false, origin=transactions-repository-1-cwk6f}]}]: org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [java.lang.ClassCastException] while invoking method [public void org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(org.infinispan.notifications.cachelistener.event.CacheEntryEvent)] on listener instance: org.infinispan.server.hotrod.ClientListenerRegistry$StatelessClientEventSender@7b97a57
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:401)
> [transactions-repository-1-2cbrv] at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:20)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:419)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1512)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1508)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invokeNoChecks(CacheNotifierImpl.java:1503)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyClusterListeners(CacheNotifierImpl.java:711)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.cluster.ClusterEventCallable.call(ClusterEventCallable.java:49)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.cachelistener.cluster.ClusterEventCallable.call(ClusterEventCallable.java:25)
> [transactions-repository-1-2cbrv] at org.infinispan.commands.read.DistributedExecuteCommand.invokeAsync(DistributedExecuteCommand.java:99)
> [transactions-repository-1-2cbrv] at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokeCommand(BasePerCacheInboundInvocationHandler.java:90)
> [transactions-repository-1-2cbrv] at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.invoke(BaseBlockingRunnable.java:90)
> [transactions-repository-1-2cbrv] at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runAsync(BaseBlockingRunnable.java:68)
> [transactions-repository-1-2cbrv] at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:40)
> [transactions-repository-1-2cbrv] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [transactions-repository-1-2cbrv] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [transactions-repository-1-2cbrv] at java.lang.Thread.run(Thread.java:745)
> [transactions-repository-1-2cbrv] Caused by: java.lang.ClassCastException: org.infinispan.container.versioning.SimpleClusteredVersion cannot be cast to org.infinispan.container.versioning.NumericVersion
> [transactions-repository-1-2cbrv] at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(ClientListenerRegistry.java:363)
> [transactions-repository-1-2cbrv] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [transactions-repository-1-2cbrv] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [transactions-repository-1-2cbrv] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [transactions-repository-1-2cbrv] at java.lang.reflect.Method.invoke(Method.java:498)
> [transactions-repository-1-2cbrv] at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:396)
> [transactions-repository-1-2cbrv] ... 16 more
> [transactions-repository-1-2cbrv]
> {code}
> This didn't happen in the {{CR2}} release, so something must have changed since then. I've also noticed that this sometimes leads to exceptions in the Hot Rod client.
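The {{ClassCastException}} at the bottom of the trace is the actual failure: transactional caches stamp entries with {{SimpleClusteredVersion}}, while the Hot Rod event sender unconditionally casts the version to {{NumericVersion}}. A minimal sketch of the failing pattern and a defensive alternative, using simplified stand-in types rather than the real Infinispan classes:

```java
// Simplified stand-ins for the two version types; NOT the real Infinispan classes.
interface EntryVersion {}

final class NumericVersion implements EntryVersion {
    final long version;
    NumericVersion(long version) { this.version = version; }
}

final class SimpleClusteredVersion implements EntryVersion {
    final int topologyId;
    final long version;
    SimpleClusteredVersion(int topologyId, long version) {
        this.topologyId = topologyId;
        this.version = version;
    }
}

public class ListenerCastBug {
    // Buggy pattern: assumes every entry version is numeric, which only holds
    // for non-transactional caches.
    static long versionUnsafe(EntryVersion v) {
        return ((NumericVersion) v).version; // ClassCastException for clustered versions
    }

    // Defensive pattern: handle both concrete types explicitly.
    static long versionSafe(EntryVersion v) {
        if (v instanceof NumericVersion) return ((NumericVersion) v).version;
        if (v instanceof SimpleClusteredVersion) return ((SimpleClusteredVersion) v).version;
        throw new IllegalArgumentException("Unsupported version type: " + v.getClass());
    }

    public static void main(String[] args) {
        EntryVersion clustered = new SimpleClusteredVersion(5, 42L);
        System.out.println(versionSafe(clustered)); // 42
        try {
            versionUnsafe(clustered);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the log above");
        }
    }
}
```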
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-6187) Nodes should be recognized if they join a cluster again if PartitionHandling is active
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6187?page=com.atlassian.jira.plugin.... ]
Dan Berindei reassigned ISPN-6187:
----------------------------------
Assignee: Dan Berindei
> Nodes should be recognized if they join a cluster again if PartitionHandling is active
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6187
> URL: https://issues.jboss.org/browse/ISPN-6187
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Wolf-Dieter Fink
> Assignee: Dan Berindei
> Labels: partition_handling
>
> If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
> With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
> Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
> As a result the node would rejoin as a known node and could be:
> # filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
> # filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
> # part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
--
[JBoss JIRA] (ISPN-6187) Nodes should be recognized if they join a cluster again if PartitionHandling is active
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6187?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6187:
-------------------------------
Status: Open (was: New)
> Nodes should be recognized if they join a cluster again if PartitionHandling is active
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6187
> URL: https://issues.jboss.org/browse/ISPN-6187
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Wolf-Dieter Fink
> Assignee: Dan Berindei
> Labels: partition_handling
>
> If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
> With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
> Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
> As a result the node would rejoin as a known node and could be:
> # filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
> # filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
> # part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
--
[JBoss JIRA] (ISPN-6187) Nodes should be recognized if they join a cluster again if PartitionHandling is active
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6187?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-6187:
------------------------------------
# If the partition is already AVAILABLE, we will do a rebalance and transfer state to the joiner regardless of the persistent UUID.
# Agree, we could make a DEGRADED partition AVAILABLE if the new joiner makes it a majority partition and the partition already had a copy of each segment (also possible with {{numOwners <= numNodes/2}}, as long as {{numOwners > 2}}). After we mark the partition as AVAILABLE, we're in case #1 and we should do a rebalance with the current members.
# I think the whole point of enabling partition handling is to not allow the application to proceed with incomplete data, so I don't think we should ever rebalance if we don't have a copy of each segment.
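Point 2 above can be expressed as a small predicate: a DEGRADED partition may become AVAILABLE once it holds a majority of the stable members AND at least one owner of every segment. The helper below is a hypothetical illustration of that rule, not {{PreferConsistencyStrategy}}'s actual code:

```java
import java.util.List;
import java.util.Set;

public class AvailabilityCheck {
    // currentMembers: nodes in this partition (including any rejoiner recognized
    // by its persistent UUID); stableMembers: the last stable topology;
    // segmentOwners: for each segment, the nodes that own a copy of it.
    static boolean canBecomeAvailable(Set<String> currentMembers,
                                      Set<String> stableMembers,
                                      List<Set<String>> segmentOwners) {
        boolean majority = currentMembers.size() > stableMembers.size() / 2;
        // Every segment must have at least one owner inside the current partition,
        // otherwise becoming AVAILABLE would expose incomplete data.
        boolean allSegmentsPresent = segmentOwners.stream()
                .allMatch(owners -> owners.stream().anyMatch(currentMembers::contains));
        return majority && allSegmentsPresent;
    }

    public static void main(String[] args) {
        Set<String> stable = Set.of("A", "B", "C", "D");
        Set<String> current = Set.of("A", "B", "C"); // rejoiner C tips the majority
        List<Set<String>> owners = List.of(Set.of("A", "B"), Set.of("C", "D"));
        System.out.println(canBecomeAvailable(current, stable, owners)); // true
    }
}
```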
> Nodes should be recognized if they join a cluster again if PartitionHandling is active
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6187
> URL: https://issues.jboss.org/browse/ISPN-6187
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Wolf-Dieter Fink
> Labels: partition_handling
>
> If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
> With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
> Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
> As a result the node would rejoin as a known node and could be:
> # filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
> # filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
> # part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
--
[JBoss JIRA] (ISPN-6187) Nodes should be recognized if they join a cluster again if PartitionHandling is active
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6187?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6187:
-------------------------------
Description:
If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
As a result the node would rejoin as a known node and could be:
1. filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
2. filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
3. part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
was:
If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
As a result the node would rejoin as a known node and could be:
- filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
- filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
- part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
> Nodes should be recognized if they join a cluster again if PartitionHandling is active
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6187
> URL: https://issues.jboss.org/browse/ISPN-6187
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Wolf-Dieter Fink
> Labels: partition_handling
>
> If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
> With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
> Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
> As a result the node would rejoin as a known node and could be:
> 1. filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
> 2. filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
> 3. part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
--
[JBoss JIRA] (ISPN-6187) Nodes should be recognized if they join a cluster again if PartitionHandling is active
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6187?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6187:
-------------------------------
Description:
If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
As a result the node would rejoin as a known node and could be:
# filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
# filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
# part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
was:
If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
As a result the node would rejoin as a known node and could be:
1. filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
2. filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
3. part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
> Nodes should be recognized if they join a cluster again if PartitionHandling is active
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6187
> URL: https://issues.jboss.org/browse/ISPN-6187
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Wolf-Dieter Fink
> Labels: partition_handling
>
> If a node leaves a cluster due to a crash, power loss, or network problems and is then restarted, its ID changes and it rejoins as a NEW node.
> With PartitionHandling enabled, this can leave the cluster in a state from which it is impossible to recover automatically.
> Since ISPN 8 introduces a "persistent" address, currently used only for the consistent hash (CH), this address could be used for partition handling (PH) as well.
> As a result the node would rejoin as a known node and could be:
> # filled with data via state transfer if the remaining cluster is the majority partition and AVAILABLE
> # filled with data if at least one owner is inside the remaining cluster (an equal split with numOwners > numNodes/2)
> # part of a full cluster rebalance, with a WARN/ERROR if data loss is possible
--
[JBoss JIRA] (ISPN-5655) MissingFormatArgumentException thrown by PreferConsistencyStrategy
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5655?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5655:
-----------------------------------------------
Jakub Senko <jsenko(a)redhat.com> changed the Status of [bug 1412752|https://bugzilla.redhat.com/show_bug.cgi?id=1412752] from POST to MODIFIED
> MissingFormatArgumentException thrown by PreferConsistencyStrategy
> ------------------------------------------------------------------
>
> Key: ISPN-5655
> URL: https://issues.jboss.org/browse/ISPN-5655
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.0.0.Beta2
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 8.0.0.Beta3, 8.0.0.Final
>
>
> Exception thrown by line 197 of PreferConsistencyStrategy.java:
> 2015-08-03 10:30:38,873 ERROR Unable to format msg: After merge, cache %s has recovered and is entering available mode java.util.MissingFormatArgumentException: Format specifier '%s'
> at java.util.Formatter.format(Formatter.java:2519)
> at java.util.Formatter.format(Formatter.java:2455)
> at java.lang.String.format(String.java:2928)
> at org.apache.logging.log4j.message.StringFormattedMessage.formatMessage(StringFormattedMessage.java:88)
> at org.apache.logging.log4j.message.StringFormattedMessage.getFormattedMessage(StringFormattedMessage.java:60)
> at org.apache.logging.log4j.core.pattern.MessagePatternConverter.format(MessagePatternConverter.java:68)
> at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
> at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:196)
> at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:55)
> at org.apache.logging.log4j.core.layout.AbstractStringLayout.toByteArray(AbstractStringLayout.java:71)
> at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:108)
> at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99)
> at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:430)
> at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:409)
> at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:412)
> at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367)
> at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:112)
> at org.jboss.logging.Log4j2Logger.doLogf(Log4j2Logger.java:66)
> at org.jboss.logging.Logger.logf(Logger.java:2445)
> at org.jboss.logging.DelegatingBasicLogger.debugf(DelegatingBasicLogger.java:344)
> at org.infinispan.partitionhandling.impl.PreferConsistencyStrategy.onPartitionMerge(PreferConsistencyStrategy.java:198)
> at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:509)
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:383)
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:380)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)
> at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
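The root cause is a message containing a {{%s}} format specifier being formatted without its argument. A minimal, self-contained reproduction using plain {{String.format}} (not the actual JBoss Logging call path):

```java
public class FormatBug {
    public static void main(String[] args) {
        try {
            // The %s specifier has no matching argument, exactly as in the log message above.
            String.format("After merge, cache %s has recovered and is entering available mode");
        } catch (java.util.MissingFormatArgumentException e) {
            System.out.println("MissingFormatArgumentException for specifier: " + e.getFormatSpecifier());
        }
        // The fix is simply to supply the argument:
        System.out.println(String.format("After merge, cache %s has recovered", "sessions"));
    }
}
```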
--