[JBoss JIRA] (ISPN-5167) Cache.size() returns cluster-wide entry size in int and overflow
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5167?page=com.atlassian.jira.plugin.... ]
William Burns edited comment on ISPN-5167 at 6/2/15 11:33 AM:
--------------------------------------------------------------
Takayoshi, with ISPN 8 we have streams. Internally their API defines .count(), which returns a long. Is it sufficient for you to just call {code}cache.keySet().stream().count(){code} to get the size as a long?
was (Author: william.burns):
Takayoshi, with ISPN 8 we have streams. Internally their API defines .count which returns a long. Is it sufficient for you to just call cache.keySet().stream().count() to get the size as a long?
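For reference, a minimal sketch of the suggestion above, assuming an embedded Infinispan 8 Cache (the helper name sizeAsLong is purely illustrative):
{code}
import org.infinispan.Cache;

public class CacheSizeExample {

   // Distributed streams expose a count() terminal operation that returns a
   // long, so the result does not overflow at Integer.MAX_VALUE entries.
   public static long sizeAsLong(Cache<?, ?> cache) {
      return cache.keySet().stream().count();
   }
}
{code}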
> Cache.size() returns cluster-wide entry size in int and overflow
> ----------------------------------------------------------------
>
> Key: ISPN-5167
> URL: https://issues.jboss.org/browse/ISPN-5167
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.3.Final
> Reporter: Takayoshi Kimura
> Assignee: William Burns
>
> We have a large cluster, and a cache will have more than Integer.MAX_VALUE entries in the near future.
> It would be great to have an additional method that returns the size as a long.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5167) Cache.size() returns cluster-wide entry size in int and overflow
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5167?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-5167:
-------------------------------------
Takayoshi, with ISPN 8 we have streams. Internally their API defines .count which returns a long. Is it sufficient for you to just call cache.keySet().stream().count() to get the size as a long?
> Cache.size() returns cluster-wide entry size in int and overflow
> ----------------------------------------------------------------
>
> Key: ISPN-5167
> URL: https://issues.jboss.org/browse/ISPN-5167
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.3.Final
> Reporter: Takayoshi Kimura
> Assignee: William Burns
>
> We have a large cluster, and a cache will have more than Integer.MAX_VALUE entries in the near future.
> It would be great to have an additional method that returns the size as a long.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-2198) Cluster with non-shared JDBC cache store has too much entries after node failure
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-2198?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-2198:
-----------------------------------
[~dan.berindei] I filed this bug almost 3 years ago, so I really can't remember the details now. Regrettably, I didn't provide the datasource configuration.
Maybe it could be closed as outdated, since all the state-transfer related code has changed anyway.
> Cluster with non-shared JDBC cache store has too much entries after node failure
> --------------------------------------------------------------------------------
>
> Key: ISPN-2198
> URL: https://issues.jboss.org/browse/ISPN-2198
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Affects Versions: 5.1.5.FINAL
> Reporter: Radim Vansa
> Attachments: cache_entries.csv, logs.zip, sfout.txt
>
>
> In a resilience test with a 4-node cluster where one node is killed, a weird situation appears. Before the node is killed, the nodes hold these numbers of entries:
> 210602;215820;209400;203038 = 838860 entries
> After the kill, the numbers of entries change for a while:
> 210602;null;209400;203038
> 250602;null;269400;243038
> 290602;null;269400;273038
> 300602;null;289400;293038
> 300602;null;289400;293038
> 321218;null;296035;293038
> But then it stabilizes at
> 326899;null;305039;314165 = 946103 entries
> When node02 is restarted, it complains about duplicate entries:
> ERROR [org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore] (OOB-124,null) ISPN008024: Error while storing string key to database; key: '8Az4Ia2V5NzYzNDI=', buffer size of value: 1050 bytes: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '?8Az4Ia2V5NzYzNDI=' for key 'PRIMARY'
> Is this a bug or a misconfiguration?
> Here is an excerpt from the configuration:
> <distributed-cache batching="false" indexing="NONE" l1-lifespan="0" mode="SYNC" name="memcachedCache" owners="2" remote-timeout="60000" start="EAGER" virtual-nodes="512">
>   <locking acquire-timeout="3000" concurrency-level="1000" isolation="REPEATABLE_READ" striping="false"/>
>   <transaction mode="NONE"/>
>   <state-transfer enabled="true" timeout="600000"/>
>   <eviction max-entries="-1" strategy="NONE"/>
>   <string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS" passivation="false" preload="false" purge="true" shared="false">
>     <property name="databaseType">MYSQL</property>
>     <string-keyed-table prefix="node01">
>       <id-column name="id" type="VARCHAR(100)"/>
>       <data-column name="value" type="BLOB(1200)"/>
>     </string-keyed-table>
>   </string-keyed-jdbc-store>
> </distributed-cache>
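For context, a standalone JDBC sketch (not Infinispan code) of the failure mode in the log above: inserting the same key twice into the node's string-keyed table violates the PRIMARY KEY on the id column. The table name and connection details are made up for illustration.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class DuplicateEntrySketch {
   public static void main(String[] args) throws Exception {
      // Placeholder connection details; any MySQL instance will do.
      try (Connection con = DriverManager.getConnection(
            "jdbc:mysql://localhost/ispn", "user", "password")) {
         try (Statement st = con.createStatement()) {
            // Mirrors the id/value columns from the configuration excerpt above.
            st.execute("CREATE TABLE IF NOT EXISTS node01_cache " +
                       "(id VARCHAR(100) PRIMARY KEY, value BLOB)");
         }
         try (PreparedStatement ps = con.prepareStatement(
               "INSERT INTO node01_cache (id, value) VALUES (?, ?)")) {
            ps.setString(1, "8Az4Ia2V5NzYzNDI=");
            ps.setBytes(2, new byte[] {1, 2, 3});
            ps.executeUpdate();
            // Second insert with the same key fails with
            // 'Duplicate entry ... for key PRIMARY', as in the error above.
            ps.executeUpdate();
         }
      }
   }
}
{code}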
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-3924) Fix test: NonTxPutIfAbsentDuringLeaveStressTest.testNodeLeavingDuringPutIfAbsent:114
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3924?page=com.atlassian.jira.plugin.... ]
Dan Berindei reopened ISPN-3924:
--------------------------------
After moving the test to the functional group, it started failing again in CI:
{noformat}
java.util.concurrent.ExecutionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from NonTxPutIfAbsentDuringLeaveStressTest-NodeD-4139, see cause for remote stack trace
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.infinispan.distribution.rehash.NonTxPutIfAbsentDuringLeaveStressTest.testNodeLeavingDuringPutIfAbsent(NonTxPutIfAbsentDuringLeaveStressTest.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from NonTxPutIfAbsentDuringLeaveStressTest-NodeD-4139, see cause for remote stack trace
at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:738)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$14(JGroupsTransport.java:586)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$$Lambda$27/1600976794.apply(Unknown Source)
at java.util.concurrent.CompletableFuture$ThenApply.run(CompletableFuture.java:717)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:193)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2345)
at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:27)
at org.jgroups.blocks.Request.checkCompletion(Request.java:169)
at org.jgroups.blocks.UnicastRequest.receiveResponse(UnicastRequest.java:83)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:398)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:675)
at org.jgroups.JChannel.up(JChannel.java:739)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1029)
at org.jgroups.protocols.RSVP.up(RSVP.java:201)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:394)
at org.jgroups.protocols.tom.TOA.up(TOA.java:121)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1042)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652)
at org.jgroups.protocols.Discovery.up(Discovery.java:291)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1577)
at org.jgroups.protocols.TP$MyHandler.run(TP.java:1796)
... 3 more
Caused by: org.infinispan.IllegalLifecycleStateException: ISPN000324: Default cache is in 'STOPPING' state and this is an invocation not belonging to an on-going transaction, so it does not accept new invocations. Either restart it or recreate the cache container.
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:91)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:44)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39)
at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48)
at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:85)
at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:32)
... 3 more
{noformat}
> Fix test: NonTxPutIfAbsentDuringLeaveStressTest.testNodeLeavingDuringPutIfAbsent:114
> ------------------------------------------------------------------------------------
>
> Key: ISPN-3924
> URL: https://issues.jboss.org/browse/ISPN-3924
> Project: Infinispan
> Issue Type: Task
> Components: Test Suite - Core
> Reporter: Sanne Grinovero
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 7.2.2.Final, 8.0.0.Alpha2
>
>
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-5515:
---------------------------------------
{quote}We already discussed this on the mailing list, and the conclusion was to implement graceful restart. {quote}
Right, that methodology is exactly what concerns me: we agreed to do something else on the mailing list!
{quote}This issue is not really about implementing new functionality, it's about automating a recommendation we already have for users who want it.{quote}
I'll have to trust you on that, as I have no idea which recommendations you are referring to. The only recommendation I would give is: don't use a shared cache store unless it's for temporary passivation (and wipe them all clean at boot).
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it doesn't own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
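For illustration only, a minimal model in plain Java (no Infinispan APIs) of the behaviour described above: once the joiner receives the first CH that includes it, preloaded entries outside its owned segments are dropped. The segment count and key-to-segment mapping are simplifications, not Infinispan's actual implementation.
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PreloadPruneSketch {
   static final int NUM_SEGMENTS = 256; // illustrative segment count

   // Stand-in for the cache's real key-to-segment mapping.
   static int segmentOf(Object key) {
      return Math.floorMod(key.hashCode(), NUM_SEGMENTS);
   }

   // Drop preloaded entries from segments this node does not own in the
   // first CH in which it is a member.
   static void pruneToOwnedSegments(Map<Object, Object> dataContainer, Set<Integer> ownedSegments) {
      dataContainer.keySet().removeIf(key -> !ownedSegments.contains(segmentOf(key)));
   }

   public static void main(String[] args) {
      Map<Object, Object> preloaded = new ConcurrentHashMap<>();
      for (int i = 0; i < 1000; i++) {
         preloaded.put("key" + i, i);
      }
      pruneToOwnedSegments(preloaded, new HashSet<>(Arrays.asList(0, 1, 2, 3)));
      System.out.println("entries kept after pruning: " + preloaded.size());
   }
}
{code}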
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-5515:
---------------------------------------
Thanks Dan, it makes sense to consider replication too. But even then, why is it important for non-primary nodes not to load any entries? If anything, I'd expect each node - even with REPL - to load only the entries for which it is the *primary owner*. Consistency is an issue either way, and I simply think one shouldn't trust any state from a shared CacheStore. But let's assume that some users really want this: wouldn't you at least spread the loading work from the CacheStore among the nodes?
I simply fail to see how picking any one node's CacheStore helps improve consistency.
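A minimal model (plain Java, not Infinispan code) of the suggestion above: each node preloads from the store only the keys for which it is the primary owner, so the preload work is spread across the cluster. The segment mapping and the round-robin owner assignment below are stand-ins for the real CH.
{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class PrimaryOwnerPreloadSketch {
   static final int NUM_SEGMENTS = 256; // illustrative segment count

   // Stand-in for the cache's real key-to-segment mapping.
   static int segmentOf(Object key) {
      return Math.floorMod(key.hashCode(), NUM_SEGMENTS);
   }

   // Round-robin stand-in for "primary owner of a segment" across numNodes nodes.
   static int primaryOwnerOf(int segment, int numNodes) {
      return segment % numNodes;
   }

   // Each node keeps only the keys whose segment it primarily owns.
   static List<Object> keysToPreload(Map<Object, Object> store, int nodeIndex, int numNodes) {
      return store.keySet().stream()
            .filter(key -> primaryOwnerOf(segmentOf(key), numNodes) == nodeIndex)
            .collect(Collectors.toList());
   }

   public static void main(String[] args) {
      Map<Object, Object> store = new ConcurrentHashMap<>();
      for (int i = 0; i < 1000; i++) {
         store.put("key" + i, i);
      }
      int numNodes = 4;
      for (int node = 0; node < numNodes; node++) {
         System.out.println("node " + node + " would preload " +
               keysToPreload(store, node, numNodes).size() + " keys");
      }
   }
}
{code}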
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Alpha2
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it doesn't own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)