[JBoss JIRA] (ISPN-4974) Cross site state transfer - CLI ops throw NPE when backup is not defined
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4974?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4974:
-----------------------------------------------
Sebastian Łaskawiec <slaskawi(a)redhat.com> changed the Status of [bug 1163332|https://bugzilla.redhat.com/show_bug.cgi?id=1163332] from POST to MODIFIED
> Cross site state transfer - CLI ops throw NPE when backup is not defined
> ------------------------------------------------------------------------
>
> Key: ISPN-4974
> URL: https://issues.jboss.org/browse/ISPN-4974
> Project: Infinispan
> Issue Type: Bug
> Components: CLI
> Affects Versions: 7.0.0.Final
> Reporter: Matej Čimbora
> Assignee: Pedro Ruivo
> Fix For: 7.1.0.Alpha1, 7.1.0.Final
>
>
> When <backups><backup site="XYZ"/></backups> is not present in the configuration of a given cache, "site" CLI operations are still available on the node. However, using them leads to NPEs being thrown, e.g.
> 22:40:13,381 ERROR [org.infinispan.cli.interpreter.Interpreter] (management-handler-thread - 4) ISPN019003: Interpreter error: java.lang.NullPointerException
> at org.infinispan.cli.interpreter.statement.SiteStatement.execute(SiteStatement.java:46) [infinispan-cli-interpreter-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.cli.interpreter.Interpreter.execute(Interpreter.java:149) [infinispan-cli-interpreter-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.server.infinispan.SecurityActions$6.run(SecurityActions.java:255) [infinispan-server-infinispan-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.server.infinispan.SecurityActions$6.run(SecurityActions.java:252) [infinispan-server-infinispan-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.security.Security.doPrivileged(Security.java:89) [infinispan-core-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.server.infinispan.SecurityActions.doPrivileged(SecurityActions.java:68) [infinispan-server-infinispan-7.0.0.Final.jar:7.0.0.Final]
> at org.infinispan.server.infinispan.SecurityActions.executeInterpreter(SecurityActions.java:258) [infinispan-server-infinispan-7.0.0.Final.jar:7.0.0.Final]
> at org.jboss.as.clustering.infinispan.subsystem.CliInterpreterHandler.execute(CliInterpreterHandler.java:49) [infinispan-server-infinispan-7.0.0.Final.jar:7.0.0.Final]
> at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:606) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:484) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.AbstractOperationContext.completeStepInternal(AbstractOperationContext.java:281) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:276) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:271) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:145) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.doExecute(ModelControllerClientOperationHandler.java:199) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.access$300(ModelControllerClientOperationHandler.java:130) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:150) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:146) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.7.0_60]
> at javax.security.auth.Subject.doAs(Subject.java:415) [rt.jar:1.7.0_60]
> at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:94) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1.execute(ModelControllerClientOperationHandler.java:146) [wildfly-controller-8.1.0.Final.jar:8.1.0.Final]
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$2$1.doExecute(AbstractMessageHandler.java:283)
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$AsyncTaskRunner.run(AbstractMessageHandler.java:504)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_60]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_60]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_60]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
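The trace points at SiteStatement.execute failing when the cache has no backup sites configured. As a hedged sketch only (the SiteOps interface and execute method below are illustrative stand-ins, not the actual Infinispan API), the fix amounts to reporting a clear error instead of dereferencing a missing site-operations component:

```java
// Illustrative sketch: guard a CLI site operation against a cache that has no
// <backups> configuration. SiteOps and execute are hypothetical names.
public class SiteStatementSketch {
    interface SiteOps { String status(String site); }

    static String execute(SiteOps ops, String site) {
        if (ops == null) {
            // Cache has no backup sites: report a clear error instead of an NPE
            return "Error: cache has no backup sites configured";
        }
        return ops.status(site);
    }

    public static void main(String[] args) {
        System.out.println(execute(null, "XYZ"));
    }
}
```

With such a guard, invoking a site command against a cache without a <backups> section would return an error message rather than ISPN019003 with an NPE.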
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
[JBoss JIRA] (ISPN-5016) Specify and document cache consistency guarantees
by Radim Vansa (JIRA)
Radim Vansa created ISPN-5016:
---------------------------------
Summary: Specify and document cache consistency guarantees
Key: ISPN-5016
URL: https://issues.jboss.org/browse/ISPN-5016
Project: Infinispan
Issue Type: Task
Components: Documentation-Core
Affects Versions: 7.0.2.Final
Reporter: Radim Vansa
Priority: Critical
We can't simply take the consistency model defined by the Java specification and broaden it to the whole cache (maybe "can't" is too strong, but we definitely don't want to do that in some cases).
By consistency guarantees/model I mostly mean the order in which writes are allowed to be observed, and we can't boil it down to causal, PRAM or any other standard consistency model, as writes can be observed as non-atomic in Infinispan.
The Infinispan documentation is quite scarce on this; the only trace I've found is in the Glossary [2]: "Infinispan has traditionally followed ACID principles as far as possible, however an eventually consistent mode embracing BASE is on the roadmap."
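To illustrate what "writes observed as non-atomic" means here, a minimal simulation (plain maps standing in for cache nodes; this is not Infinispan code) shows a single write being visible to one reader but not another at the same moment:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a "non-atomic" write is one that can be visible to reader A
// but not yet to reader B, e.g. when the two readers hit different owners of the
// same key while replication is in flight.
public class NonAtomicVisibility {
    // What a reader sees for key "K" on the given node; "V0" is the initial value
    static String read(Map<String, String> node) {
        return node.getOrDefault("K", "V0");
    }

    public static void main(String[] args) {
        Map<String, String> owner1 = new HashMap<>();
        Map<String, String> owner2 = new HashMap<>();

        owner1.put("K", "V1"); // the write has reached owner1 but not owner2

        System.out.println("A sees " + read(owner1) + ", B sees " + read(owner2));
    }
}
```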
[JBoss JIRA] (ISPN-4995) ClusteredGet served for non-member of CH
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4995?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-4995:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1167780
> ClusteredGet served for non-member of CH
> ----------------------------------------
>
> Key: ISPN-4995
> URL: https://issues.jboss.org/browse/ISPN-4995
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Reporter: Radim Vansa
> Priority: Critical
>
> When a node accepts a ClusteredGetCommand from a node that is not a member of the CH, it can happen that one thread does
> {code}
> put(K1, V1);
> put(K2, V2)
> {code}
> and another gets
> {code}
> get(K2) -> V2
> get(K1) -> V0 (some old value)
> {code}
> edg-perf01, 02 and 03 share this view and topology:
> {code}
> 04:40:08,714 TRACE [org.jgroups.protocols.FD_SOCK] (INT-8,edg-perf01-63779) edg-perf01-63779: i-have-sock: edg-perf02-45117 --> 172.18.1.3:37476 (cache is {edg-perf01-63779=172.18.1.1:40099, edg-perf02-45117=172.18.1.3:37476})
> 04:40:08,715 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p2-t6) Received new cluster view: 8, isCoordinator = true, becameCoordinator = false
> 04:40:11,203 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread--p2-t1) Updating local consistent hash(es) for cache testCache: new topology = CacheTopology{id=16, rebalanceId=4, currentC
> H=DefaultConsistentHash{ns = 512, owners = (3)[edg-perf02-45117: 171+170, edg-perf03-6264: 171+171, edg-perf01-63779: 170+171]}, pendingCH=null, unionCH=null, actualMembers=[edg-perf02-45117, edg-perf03-6264, edg-perf01-63779]}
> {code}
> Later, edg-perf02 and edg-perf03 get a new view and install a new topology, in which edg-perf01 is not present:
> {code}
> 04:41:13,681 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,edg-perf03-6264) ISPN000093: Received new, MERGED cluster view for channel default: MergeView::[edg-perf02-45117|9] (3) [edg-perf02-45117, edg-perf03-6264, edg-perf04-10989], 1 subgroups: [edg-perf04-10989|7] (1) [edg-perf04-10989]
> 04:41:13,681 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p2-t22) Received new cluster view: 9, isCoordinator = false, becameCoordinator = false
> 04:41:13,760 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (remote-thread--p3-t32) Attempting to execute non-CacheRpcCommand command: CacheTopologyControlCommand{cache=testCache, type=CH_UPDATE, sender=edg-perf02-45117, joinInfo=null, topologyId=18, rebalanceId=4, currentCH=DefaultConsistentHash{ns = 512, owners = (2)[edg-perf02-45117: 256+85, edg-perf03-6264: 256+86]}, pendingCH=null, availabilityMode=AVAILABLE, actualMembers=[edg-perf02-45117, edg-perf03-6264], throwable=null, viewId=9}[sender=edg-perf02-45117]
> {code}
> After that, edg-perf04 writes to {{key_00000000000020DB}}, which is currently owned only by edg-perf03 - this key serves as K1 in the example above. It is not backed up to edg-perf01, but edg-perf01 still thinks it's an owner of this key, as it did not get any new view (this is a log from edg-perf03):
> {code}
> 04:41:30,884 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (remote-thread--p3-t45) edg-perf03-6264 invoking PutKeyValueCommand{key=key_00000000000020DB, value=[33 #4: 0, 169, 284, 634, ], flags=[SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedMetadata{version=null}, successful=true} to recipient list [edg-perf03-6264] with options RpcOptions{timeout=60000, unit=MILLISECONDS, fifoOrder=true, totalOrder=false, responseFilter=null, responseMode=SYNCHRONOUS, skipReplicationQueue=false}
> {code}
> Later, edg-perf04 writes a value with operationId=650 (the previous value had 600) to another key, {{stressor_33}} (K2 in the example); this write is replicated to edg-perf02 and edg-perf03.
> Now a merge view with all 4 nodes is installed:
> {code}
> 04:41:31,258 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,edg-perf01-63779) ISPN000093: Received new, MERGED cluster view for channel default: MergeView::[edg-perf01-63779|10] (4) [edg-perf01-63779, edg-perf03-6264, edg-perf02-45117, edg-perf04-10989], 6 subgroups: [edg-perf02-45117|7] (2) [edg-perf02-45117, edg-perf03-6264], [edg-perf01-63779|4] (2) [edg-perf01-63779, edg-perf02-45117], [edg-perf02-45117|9] (3) [edg-perf02-45117, edg-perf03-6264, edg-perf04-10989], [edg-perf03-6264|4] (2) [edg-perf03-6264, edg-perf04-10989], [edg-perf01-63779|8] (3) [edg-perf01-63779, edg-perf02-45117, edg-perf03-6264], [edg-perf01-63779|6] (1) [edg-perf01-63779]
> 04:41:31,258 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p2-t2) Received new cluster view: 10, isCoordinator = true, becameCoordinator = false
> {code}
> edg-perf01 now issues a remote get to edg-perf02 for key stressor_33 and receives the correct answer (operationId=650):
> {code}
> 04:41:32,494 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (BackgroundOps-Checker-1) Response(s) to ClusteredGetCommand{key=stressor_33, flags=null} is {edg-perf02-45117=SuccessfulResponse{responseValue=ImmortalCacheValue {value=LastOperation{operationId=650, seed=0000A15A4C2DD25A}}} }
> {code}
> However, when edg-perf01 reads {{key_00000000000020DB}}, it loads the old value from the local data container, as no CH update/rebalance has happened so far:
> {code}
> 04:41:32,496 TRACE [org.infinispan.partitionhandling.impl.PartitionHandlingManagerImpl] (BackgroundOps-Checker-1) Checking availability for key=key_00000000000020DB, status=AVAILABLE
> 04:41:32,497 ERROR [org.radargun.stages.cache.background.LogChecker] (BackgroundOps-Checker-1) Missing operation 634 for thread 33 on key 8411 (key_00000000000020DB)
> 04:41:32,499 DEBUG [org.radargun.service.InfinispanDebugable] (BackgroundOps-Checker-1) Debug info for key testCache key_00000000000020DB: owners=edg-perf01-63779, edg-perf03-6264, local=true, uncertain=false, container.key_00000000000020DB=ImmortalCacheEntry[key=key_00000000000020DB, value=[33 #3: 0, 169, 284, ], created=-1, isCreated=false, lastUsed=-1, isChanged=false, expires=-1, isExpired=false, canExpire=false, isEvicted=true, isRemoved=false, isValid=false, lifespan=-1, maxIdle=-1], segmentId=173
> {code}
> Note that this was found on branch https://github.com/infinispan/infinispan/pull/3062/files trying to fix ISPN-4949.
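The interleaving described above can be simulated with plain maps (illustrative only, not Infinispan code): the reader sees the later write put(K2, V2) but still reads the old value for K1 from the node with the stale view, observing the two writes out of order:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative simulation of the anomaly: a reader whose get(K1) is served by a
// node with a stale topology view observes the two writes out of order.
public class StaleReadDemo {
    // The reader routes K2 to an up-to-date owner and K1 to a stale replica
    static String readerView(Map<String, String> ownerOfK2, Map<String, String> staleOwnerOfK1) {
        return "get(K2)=" + ownerOfK2.get("K2") + ", get(K1)=" + staleOwnerOfK1.get("K1");
    }

    public static void main(String[] args) {
        Map<String, String> upToDate = new HashMap<>();
        Map<String, String> stale = new HashMap<>();
        upToDate.put("K1", "V0");
        stale.put("K1", "V0");    // initial state replicated everywhere

        upToDate.put("K1", "V1"); // put(K1, V1): never reaches the stale node
        upToDate.put("K2", "V2"); // put(K2, V2): replicated normally

        // The later write (V2) is visible, the earlier one still reads as V0
        System.out.println(readerView(upToDate, stale));
    }
}
```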
[JBoss JIRA] (ISPN-5011) CacheManager not stopping when search factory not initialized
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5011?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-5011:
-------------------------------------
Integrated. Thanks [~gustavonalle]!
> CacheManager not stopping when search factory not initialized
> -------------------------------------------------------------
>
> Key: ISPN-5011
> URL: https://issues.jboss.org/browse/ISPN-5011
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 7.1.0.Alpha1, 7.0.3.Final
>
>
> The situation can be reproduced in a simple test:
> {code}
> @Test
> public void testStartAndStopWithoutIndexing() {
> EmbeddedCacheManager cacheManager = ... // With indexing enabled, using infinispan directory
> cacheManager.getCache();
> cacheManager.stop();
> assertEquals(ComponentStatus.TERMINATED, cacheManager.getStatus());
> }
> {code}
> The issue is that query-related caches are created lazily, and the stop() method fails with an NPE
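A hedged sketch of the kind of guard this implies (the class and field names below are illustrative, not the actual fix): skip teardown of a lazily-created component that was never initialized, instead of dereferencing it unconditionally:

```java
// Illustrative sketch: a holder whose component is created lazily and may still
// be null at stop() time. The null check avoids the NPE described above.
public class LazyComponentHolder {
    private volatile Runnable searchFactoryCloser; // lazily initialized, may be null

    void stop() {
        Runnable closer = searchFactoryCloser;
        if (closer != null) {       // component was never created: nothing to tear down
            closer.run();
        }
    }

    public static void main(String[] args) {
        new LazyComponentHolder().stop(); // no NPE even though nothing was initialized
        System.out.println("stopped cleanly");
    }
}
```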
[JBoss JIRA] (ISPN-5011) CacheManager not stopping when search factory not initialized
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5011?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5011:
--------------------------------
Fix Version/s: 7.0.3.Final
> CacheManager not stopping when search factory not initialized
> -------------------------------------------------------------
>
> Key: ISPN-5011
> URL: https://issues.jboss.org/browse/ISPN-5011
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 7.1.0.Alpha1, 7.0.3.Final
>
>
> The situation can be reproduced in a simple test:
> {code}
> @Test
> public void testStartAndStopWithoutIndexing() {
> EmbeddedCacheManager cacheManager = ... // With indexing enabled, using infinispan directory
> cacheManager.getCache();
> cacheManager.stop();
> assertEquals(ComponentStatus.TERMINATED, cacheManager.getStatus());
> }
> {code}
> The issue is that query-related caches are created lazily, and the stop() method fails with an NPE
[JBoss JIRA] (ISPN-5011) CacheManager not stopping when search factory not initialized
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5011?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5011:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Assignee: Gustavo Fernandes
Fix Version/s: 7.1.0.Alpha1
Resolution: Done
> CacheManager not stopping when search factory not initialized
> -------------------------------------------------------------
>
> Key: ISPN-5011
> URL: https://issues.jboss.org/browse/ISPN-5011
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 7.1.0.Alpha1
>
>
> The situation can be reproduced in a simple test:
> {code}
> @Test
> public void testStartAndStopWithoutIndexing() {
> EmbeddedCacheManager cacheManager = ... // With indexing enabled, using infinispan directory
> cacheManager.getCache();
> cacheManager.stop();
> assertEquals(ComponentStatus.TERMINATED, cacheManager.getStatus());
> }
> {code}
> The issue is that query-related caches are created lazily, and the stop() method fails with an NPE