[JBoss JIRA] (ISPN-12231) Cache fails to start with IllegalStateException: We already had a newer topology
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-12231?page=com.atlassian.jira.plugi... ]
Dan Berindei updated ISPN-12231:
--------------------------------
Status: Open (was: New)
> Cache fails to start with IllegalStateException: We already had a newer topology
> --------------------------------------------------------------------------------
>
> Key: ISPN-12231
> URL: https://issues.redhat.com/browse/ISPN-12231
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 11.0.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 12.0.0.Dev02
>
>
> {{LocalTopologyManagerImpl.join()}} registers the {{LocalCacheStatus}} outside the {{ActionSequencer}} call, allowing another topology update command to install a topology before the join response is processed.
> This is very unlikely to happen outside of tests, but I was able to reproduce it reliably when starting lots of nodes in parallel.
> {noformat}
> org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.statetransfer.StateTransferManager
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:560)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.access$700(BasicComponentRegistryImpl.java:30)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:775)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:341)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:210)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1008)
> at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:512)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:697)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:643)
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:532)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:510)
> at org.infinispan.stress.LargeClusterStressTest.lambda$testLargeClusterStart$0(LargeClusterStressTest.java:92)
> at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
> at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
> at java.base/java.lang.Thread.run(Thread.java:832)
> Caused by: java.util.concurrent.CompletionException: java.lang.IllegalStateException: We already had a newer topology by the time we received the join response
> at org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:82)
> at org.infinispan.statetransfer.StateTransferManagerImpl.start(StateTransferManagerImpl.java:133)
> at org.infinispan.statetransfer.CorePackageImpl$1.start(CorePackageImpl.java:48)
> at org.infinispan.statetransfer.CorePackageImpl$1.start(CorePackageImpl.java:27)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:592)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:583)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:552)
> ... 20 more
> Caused by: java.lang.IllegalStateException: We already had a newer topology by the time we received the join response
> at org.infinispan.topology.LocalTopologyManagerImpl.lambda$handleJoinResponse$5(LocalTopologyManagerImpl.java:227)
> at java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1183)
> at java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2299)
> at java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:143)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleJoinResponse(LocalTopologyManagerImpl.java:225)
> at org.infinispan.topology.LocalTopologyManagerImpl.lambda$join$0(LocalTopologyManagerImpl.java:161)
> at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1146)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2137)
> at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:67)
> at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:45)
> at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1405)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1308)
> {noformat}
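A minimal sketch of the interleaving described above, with purely illustrative class and method names (this is not Infinispan's actual API): if a topology-update command runs before the join response is processed, the join response carries a stale topology id and the "newer topology" check fails.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class JoinRaceSketch {
    // Highest topology id installed so far; -1 means none.
    static final AtomicInteger installedTopologyId = new AtomicInteger(-1);

    // Simulates a topology-update command installing a newer topology.
    static void topologyUpdate(int topologyId) {
        installedTopologyId.accumulateAndGet(topologyId, Math::max);
    }

    // Simulates processing the join response: fails (returns false) if a
    // newer topology was already installed, mirroring the exception above.
    static boolean joinResponse(int joinTopologyId) {
        return installedTopologyId.get() <= joinTopologyId;
    }

    public static void main(String[] args) {
        // Buggy ordering: the status is registered outside the sequencer, a
        // topology update slips in first, then the join response arrives.
        topologyUpdate(5);
        boolean ok = joinResponse(4);
        System.out.println(ok ? "join ok" : "IllegalStateException: newer topology");
    }
}
```

Running the join inside the sequencer would force the join response to be processed before any later topology update, avoiding the race.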
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-12230) Deprecate client keySizeEstimate and valueSizeEstimate attributes
by Dan Berindei (Jira)
Dan Berindei created ISPN-12230:
-----------------------------------
Summary: Deprecate client keySizeEstimate and valueSizeEstimate attributes
Key: ISPN-12230
URL: https://issues.redhat.com/browse/ISPN-12230
Project: Infinispan
Issue Type: Bug
Components: Configuration, Hot Rod
Affects Versions: 11.0.3.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 12.0.0.Dev02
The core configuration removed the {{keySizeEstimate}} and {{valueSizeEstimate}} attributes long ago, using a {{BufferSizePredictor}} instead to dynamically estimate the size of the next key/value. {{ProtostreamMarshaller}} goes even further and ignores the estimate.
We should stop using {{keySizeEstimate}} and {{valueSizeEstimate}} on the client as well, and instead use a fixed buffer size ({{ProtostreamMarshaller}} uses 512).
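A rough sketch of the dynamic-estimation idea behind the core's predictor (the class and method names here are illustrative, not the real {{BufferSizePredictor}} API): start from a fixed default of 512 bytes and grow the estimate from the sizes actually observed, rather than relying on static configuration attributes.

```java
// Illustrative adaptive size predictor: tracks serialized sizes and
// predicts the next buffer size, replacing fixed keySizeEstimate /
// valueSizeEstimate configuration attributes.
public class AdaptiveSizePredictor {
    private int next = 512; // fixed default, matching ProtostreamMarshaller's 512

    // Record the size of the last serialized object; grow the estimate to
    // the next power of two when the observation exceeds it.
    public void recordSize(int observed) {
        if (observed > next) {
            next = Integer.highestOneBit(observed) << 1;
        }
    }

    public int nextSize() {
        return next;
    }
}
```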
[JBoss JIRA] (ISPN-12229) Indexed caches with non-indexed entities query inconsistency
by Diego Lovison (Jira)
Diego Lovison created ISPN-12229:
------------------------------------
Summary: Indexed caches with non-indexed entities query inconsistency
Key: ISPN-12229
URL: https://issues.redhat.com/browse/ISPN-12229
Project: Infinispan
Issue Type: Bug
Components: Embedded Querying, Remote Querying
Affects Versions: 11.0.3.Final
Reporter: Diego Lovison
When a cache is indexed, but the protobuf entity is not:
"FROM Entity" returns zero results
"FROM Entity WHERE <predicate>" returns results
It appears that in the first case the query goes to the index (which is empty), while in the second case it runs as a non-indexed query.
[JBoss JIRA] (ISPN-12228) Combine no route to log messages
by Diego Lovison (Jira)
Diego Lovison created ISPN-12228:
------------------------------------
Summary: Combine no route to log messages
Key: ISPN-12228
URL: https://issues.redhat.com/browse/ISPN-12228
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication
Affects Versions: 12.0.0.Dev01, 11.0.3.Final
Reporter: Diego Lovison
When performing puts in a cache that has a backup site and the backup site is unavailable, a large number of messages is printed to the logs. We could instead count the dropped messages and log only every 100 or 1000.
Current state:
{noformat}
21:42:38,542 ERROR (jgroups-146,edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping message
21:42:38,542 ERROR (irac-sender-thread-edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping message
{noformat}
Desired state: dropping X messages
{noformat}
21:42:38,542 ERROR (irac-sender-thread-edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping X messages
{noformat}
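The suggested counting could be sketched roughly like this (class and method names are illustrative, not RELAY2's real API): count drops per site and emit one log line per N drops instead of one line per dropped message.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative drop counter: returns a log message only every `interval`
// drops, and null otherwise, so the caller logs at most 1/interval as often.
public class DropCounter {
    private final AtomicLong dropped = new AtomicLong();
    private final long interval;

    public DropCounter(long interval) {
        this.interval = interval;
    }

    // Called once per dropped message; thread-safe via AtomicLong.
    public String onDrop(String site) {
        long n = dropped.incrementAndGet();
        if (n % interval == 0) {
            return "no route to " + site + ": dropped " + n + " messages";
        }
        return null;
    }
}
```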
[JBoss JIRA] (ISPN-12211) Combine no route to log messages
by Diego Lovison (Jira)
[ https://issues.redhat.com/browse/ISPN-12211?page=com.atlassian.jira.plugi... ]
Diego Lovison updated ISPN-12211:
---------------------------------
Issue Type: Bug (was: Enhancement)
> Combine no route to log messages
> --------------------------------
>
> Key: ISPN-12211
> URL: https://issues.redhat.com/browse/ISPN-12211
> Project: Infinispan
> Issue Type: Bug
> Components: Cross-Site Replication
> Affects Versions: 12.0.0.Dev01, 11.0.3.Final
> Reporter: Diego Lovison
> Priority: Major
>
> When performing puts in a cache that has a backup site and the backup site is unavailable, a large number of messages is printed to the logs. We could instead count the dropped messages and log only every 100 or 1000.
> Current state:
> {noformat}
> 21:42:38,542 ERROR (jgroups-146,edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping message
> 21:42:38,542 ERROR (irac-sender-thread-edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping message
> {noformat}
> Desired state: dropping X messages
> {noformat}
> 21:42:38,542 ERROR (irac-sender-thread-edg-perf05-62972) [org.jgroups.protocols.relay.RELAY2] edg-perf05-62972: no route to site01: dropping X messages
> {noformat}
[JBoss JIRA] (ISPN-12227) Backup CacheResource limit number of blocking threads utilised
by Ryan Emerson (Jira)
Ryan Emerson created ISPN-12227:
-----------------------------------
Summary: Backup CacheResource limit number of blocking threads utilised
Key: ISPN-12227
URL: https://issues.redhat.com/browse/ISPN-12227
Project: Infinispan
Issue Type: Enhancement
Components: Server
Reporter: Ryan Emerson
Assignee: Ryan Emerson
Fix For: 12.0.0.Final
Currently the {{BackupManager}} and associated classes make heavy use of the {{BlockingManager}} to execute concurrent tasks. We should limit the execution of these tasks to a subset of the total number of blocking threads available in order to avoid a {{CacheBackpressureFullException}} being thrown.
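One way to cap the blocking threads a burst of backup tasks can occupy is a semaphore in front of the pool. This is only a sketch of the idea, not the real {{BlockingManager}} API; the class name and method signatures below are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// Illustrative limited executor: at most maxConcurrency tasks occupy the
// blocking pool at once, so backup work cannot exhaust all blocking threads.
public class LimitedBlockingExecutor {
    private final ExecutorService blockingPool;
    private final Semaphore permits;

    public LimitedBlockingExecutor(ExecutorService blockingPool, int maxConcurrency) {
        this.blockingPool = blockingPool;
        this.permits = new Semaphore(maxConcurrency);
    }

    // Blocks the caller until a permit is free, then runs the task on the
    // pool; the permit is released when the task finishes.
    public Future<?> submit(Runnable task) throws InterruptedException {
        permits.acquire();
        return blockingPool.submit(() -> {
            try {
                task.run();
            } finally {
                permits.release();
            }
        });
    }
}
```

Blocking the submitter on `acquire()` is a simplification; a non-blocking variant would queue the task and drain the queue as permits free up.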
[JBoss JIRA] (ISPN-12226) BlockingManager enhance limited concurrency API
by Ryan Emerson (Jira)
Ryan Emerson created ISPN-12226:
-----------------------------------
Summary: BlockingManager enhance limited concurrency API
Key: ISPN-12226
URL: https://issues.redhat.com/browse/ISPN-12226
Project: Infinispan
Issue Type: Enhancement
Components: Core
Reporter: Ryan Emerson
Fix For: 12.0.0.Final
Currently the {{BlockingManager}} allows a series of tasks to be executed on a subset of the total number of blocking threads via the {{BlockingManager.BlockingExecutor}} and {{BlockingManager#limitedBlockingExecutor}}. However, the {{BlockingExecutor}} exposes only a limited number of methods compared to those provided by {{BlockingManager}}. We should provide an API that allows the full {{BlockingManager}} capabilities to be executed on a limited executor with a specified level of concurrency.