[JBoss JIRA] (ISPN-1583) AbstractDelegatingAdvancedCache with(ClassLoader), withFlags(Flag...) logic is broken
by Paul Ferraro (Created) (JIRA)

AbstractDelegatingAdvancedCache with(ClassLoader), withFlags(Flag...) logic is broken
-------------------------------------------------------------------------------------
                 Key: ISPN-1583
                 URL: https://issues.jboss.org/browse/ISPN-1583
             Project: Infinispan
          Issue Type: Bug
          Components: Core API
    Affects Versions: 5.1.0.BETA5
            Reporter: Paul Ferraro
            Assignee: Manik Surtani
            Priority: Critical
When the withFlags(...) logic was modified to use a DecoratedCache instead of thread-local storage, any cache already decorated via AbstractDelegatingAdvancedCache broke.
Take the following code:
AdvancedCache<K, V> baseCache;
AdvancedCache<K, V> customCache = new AbstractDelegatingAdvancedCache<K, V>(baseCache) {
   @Override
   public void clear() {
      // custom clear logic
   }
};
customCache.withFlags(Flag.CACHE_MODE_LOCAL).clear();
In the above statement, the flag is not applied.
The call to withFlags(...) returns a reference to customCache, and the reference to DecoratedCache containing the flags is lost to garbage collection.
In the case of with(ClassLoader) we have the opposite problem.
customCache.with(customClassLoader).clear();
In the above statement, the native clear() method is invoked instead of my custom clear() method: with(ClassLoader) returns a reference to a DecoratedCache wrapping baseCache, so the clear() call operates on baseCache and bypasses the custom decorated cache entirely.
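For illustration, here is a rough sketch of one possible workaround (class and field names are hypothetical): the custom wrapper can override with(...) and withFlags(...) itself and re-wrap whatever the delegate returns, so that both the flag/classloader decoration and the custom overrides stay in the call chain:

public class CustomAdvancedCache<K, V> extends AbstractDelegatingAdvancedCache<K, V> {
   // keep our own reference to the wrapped cache (hypothetical field name)
   private final AdvancedCache<K, V> delegate;

   public CustomAdvancedCache(AdvancedCache<K, V> delegate) {
      super(delegate);
      this.delegate = delegate;
   }

   @Override
   public void clear() {
      // custom clear logic
   }

   @Override
   public AdvancedCache<K, V> withFlags(Flag... flags) {
      // re-wrap the flag-decorated cache so the custom overrides stay in front of it
      return new CustomAdvancedCache<K, V>(delegate.withFlags(flags));
   }

   @Override
   public AdvancedCache<K, V> with(ClassLoader classLoader) {
      // same idea for the classloader decoration
      return new CustomAdvancedCache<K, V>(delegate.with(classLoader));
   }
}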
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] Created: (ISPN-928) Interceptor that allows invocations only when cluster is formed of N nodes
by Galder Zamarreño (JIRA)

Interceptor that allows invocations only when cluster is formed of N nodes
--------------------------------------------------------------------------
                 Key: ISPN-928
                 URL: https://issues.jboss.org/browse/ISPN-928
             Project: Infinispan
          Issue Type: Feature Request
          Components: Configuration, RPC
            Reporter: Galder Zamarreño
            Assignee: Galder Zamarreño
             Fix For: 5.0.0.BETA1, 5.0.0.Final
Following from https://github.com/pmuir/infinispan-examples/commit/f5d090092fa7b3660025b...
It'd be great to have a configurable StrictCluster interceptor in Infinispan which would basically make all invocations wait until a cluster of N nodes has formed. I think it'd be a great addition and would allow clients to verify that the cluster actually forms, without needing to check whether data replicates, etc.
In principle, the configuration would be at the CacheManager, i.e.:
<transport strictNumMembers="4"... />
However, it could also be useful to configure it at the cache level. For example, you might want cache X to allow invocations the moment 2 nodes are present (even though the cluster is formed of 4 nodes), whereas cache Y should only allow invocations once 3 nodes are present.
Apart from a strict number of nodes, you could have a minimum number of nodes: allow invocations once there are 4 or more nodes. The strict value could still be useful to make sure intruding machines don't get into the cluster, i.e. I expect 4 nodes in the cluster, so if I see 5, something is wrong.
I think it's an interesting concept that would get rid of cluster validation code in examples and RadarGun.
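To make the idea concrete, here is a rough sketch of what such an interceptor could look like (class name, constructor and view-change wiring are hypothetical, not an existing Infinispan API):

import java.util.concurrent.CountDownLatch;

import org.infinispan.commands.VisitableCommand;
import org.infinispan.context.InvocationContext;
import org.infinispan.interceptors.base.CommandInterceptor;

// Hypothetical sketch: block every invocation until the cluster has reached
// the expected number of members.
public class StrictClusterInterceptor extends CommandInterceptor {

   private final int expectedMembers;
   private final CountDownLatch clusterFormed = new CountDownLatch(1);

   public StrictClusterInterceptor(int expectedMembers) {
      this.expectedMembers = expectedMembers;
   }

   // Would be invoked from a @ViewChanged listener registered on the cache manager.
   public void viewChanged(int currentMembers) {
      if (currentMembers >= expectedMembers) {
         clusterFormed.countDown();
      }
   }

   @Override
   protected Object handleDefault(InvocationContext ctx, VisitableCommand command) throws Throwable {
      // Wait until the expected cluster size has been reached, then proceed.
      clusterFormed.await();
      return invokeNextInterceptor(ctx, command);
   }
}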
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] Created: (ISPN-1220) Add classloader hooks to cache listener events
by Paul Ferraro (JIRA)

Add classloader hooks to cache listener events
----------------------------------------------
                 Key: ISPN-1220
                 URL: https://issues.jboss.org/browse/ISPN-1220
             Project: Infinispan
          Issue Type: Enhancement
          Components: Listeners
    Affects Versions: 5.0.0.CR7
            Reporter: Paul Ferraro
            Assignee: Manik Surtani
This issue seeks to extend the classloading API changes made in ISPN-1096 to the Event API.  Currently, cache listener events do not allow a classloader to be specified for any deserialization triggered by the getKey() and getValue() methods.  Can the Event API be enhanced such that calls to getKey() and getValue() use a specific classloader?
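For illustration, one possible shape for the enhancement could be classloader-aware accessors on the event; the getKey(ClassLoader)/getValue(ClassLoader) overloads below are purely hypothetical, not the current API:

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
import org.infinispan.notifications.cachelistener.event.CacheEntryModifiedEvent;

@Listener
public class DeploymentAwareListener {

   private final ClassLoader deploymentClassLoader = getClass().getClassLoader();

   @CacheEntryModified
   public void entryModified(CacheEntryModifiedEvent event) {
      // Hypothetical overloads: deserialize key and value with the supplied classloader
      Object key = event.getKey(deploymentClassLoader);
      Object value = event.getValue(deploymentClassLoader);
      // ... react to the modification ...
   }
}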
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] (ISPN-1568) Clustered Query fail when hibernate search not fully initialized
by Mathieu Lachance (Created) (JIRA)

Clustered Query fail when hibernate search not fully initialized
----------------------------------------------------------------
                 Key: ISPN-1568
                 URL: https://issues.jboss.org/browse/ISPN-1568
             Project: Infinispan
          Issue Type: Bug
          Components: Querying, RPC
    Affects Versions: 5.1.0.BETA5
            Reporter: Mathieu Lachance
            Assignee: Sanne Grinovero
Hi,
I'm running into this issue when doing a clustered query in distribution mode:
org.infinispan.CacheException: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
	at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:166)
	at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:181)
	at org.infinispan.query.clustered.ClusteredQueryInvoker.broadcast(ClusteredQueryInvoker.java:113)
	at org.infinispan.query.clustered.ClusteredCacheQueryImpl.broadcastQuery(ClusteredCacheQueryImpl.java:115)
	at org.infinispan.query.clustered.ClusteredCacheQueryImpl.iterator(ClusteredCacheQueryImpl.java:90)
	at org.infinispan.query.impl.CacheQueryImpl.iterator(CacheQueryImpl.java:129)
	at org.infinispan.query.clustered.ClusteredCacheQueryImpl.list(ClusteredCacheQueryImpl.java:133)
	at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:313)
	at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:274)
	at com.XXX.ClientCache.getClientsByServerId(ClientCache.java:127)
	at com.XXX.ClientManager.getClientsByServerId(ClientManager.java:157)
	at com.XXX$PingClient.run(PlayerBll.java:890)
	at java.util.TimerThread.mainLoop(Timer.java:512)
	at java.util.TimerThread.run(Timer.java:462)
Caused by: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
	at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:549)
	at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:493)
	at org.hibernate.search.query.engine.impl.HSQueryImpl.queryDocumentExtractor(HSQueryImpl.java:292)
	at org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery.perform(CQCreateEagerQuery.java:44)
	at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:135)
	at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:129)
	at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:170)
	at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:179)
	at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:208)
	at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:156)
	at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:162)
	at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
	at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
	at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
	at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
	at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
	at org.jgroups.JChannel.up(JChannel.java:716)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
	at org.jgroups.protocols.pbcast.StreamingStateTransfer.up(StreamingStateTransfer.java:262)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
	at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)
	at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:700)
	at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:561)
	at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140)
	at org.jgroups.protocols.FD.up(FD.java:273)
	at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284)
	at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
	at org.jgroups.protocols.Discovery.up(Discovery.java:354)
	at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
	at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709)
	at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
With the use of the following cache configuration:
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
	xmlns="urn:infinispan:config:5.1">
	<global>
		<transport clusterName="XXX-cluster" machineId="XXX" siteId="XXX" rackId="XXX" distributedSyncTimeout="15000">
			<properties>
				<property name="configurationFile" value="jgroups-jdbc-ping.xml" />
			</properties>
		</transport>
	</global>
	<default>
		<transaction
			cacheStopTimeout="30000"
			transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
			lockingMode="PESSIMISTIC"
			useSynchronization="true"
			transactionMode="TRANSACTIONAL"
			syncCommitPhase="true"
			syncRollbackPhase="false"
			>
			<recovery enabled="false" />
		</transaction>
		<clustering mode="local" />
		<indexing enabled="true" indexLocalOnly="true">
			<properties>
				<property name="hibernate.search.default.directory_provider" value="ram" />
			</properties>
		</indexing>
	</default>
	<namedCache name="XXX-Client">
		<transaction
			cacheStopTimeout="30000"
			transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
			lockingMode="PESSIMISTIC"
			useSynchronization="true"
			transactionMode="TRANSACTIONAL"
			syncCommitPhase="true"
			syncRollbackPhase="false"
			>
			<recovery enabled="false" />
		</transaction>
		<invocationBatching enabled="false" />
		<loaders passivation="false" />
		<clustering mode="distribution" >
			<sync replTimeout="15000" />
			<stateRetrieval
				timeout="240000"
				retryWaitTimeIncreaseFactor="2"
				numRetries="5"
				maxNonProgressingLogWrites="100"
				
				fetchInMemoryState="false"
				logFlushTimeout="60000"
				alwaysProvideInMemoryState="false"
			/>
		</clustering>
		<storeAsBinary enabled="false" storeValuesAsBinary="true" storeKeysAsBinary="true" />
		<deadlockDetection enabled="true" spinDuration="100" />
		<eviction strategy="NONE" threadPolicy="PIGGYBACK" maxEntries="-1" />
		<jmxStatistics enabled="true" />
		<locking writeSkewCheck="false" lockAcquisitionTimeout="10000" isolationLevel="READ_COMMITTED" useLockStriping="false" concurrencyLevel="32" />
		<expiration wakeUpInterval="60000" lifespan="-1" maxIdle="3000000" />
	</namedCache>
</infinispan>
and the following JGroups configuration:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.0.xsd">
   <TCP
        bind_port="7800"
        loopback="true"
        port_range="30"
        recv_buf_size="20000000"
        send_buf_size="640000"
        discard_incompatible_packets="true"
        max_bundle_size="64000"
        max_bundle_timeout="30"
        enable_bundling="true"
        use_send_queues="true"
        sock_conn_timeout="300"
        enable_diagnostics="false"
        thread_pool.enabled="true"
        thread_pool.min_threads="2"
        thread_pool.max_threads="30"
        thread_pool.keep_alive_time="5000"
        thread_pool.queue_enabled="false"
        thread_pool.queue_max_size="100"
        thread_pool.rejection_policy="Discard"
        oob_thread_pool.enabled="true"
        oob_thread_pool.min_threads="2"
        oob_thread_pool.max_threads="30"
        oob_thread_pool.keep_alive_time="5000"
        oob_thread_pool.queue_enabled="false"
        oob_thread_pool.queue_max_size="100"
        oob_thread_pool.rejection_policy="Discard"        
         />
   <JDBC_PING
		connection_url="jdbc:jtds:sqlserver://XXX;databaseName=XXX"
		connection_username="XXX"
		connection_password="XXX"
		connection_driver="net.sourceforge.jtds.jdbcx.JtdsDataSource"
		initialize_sql=""
	/>
   <MERGE2 max_interval="30000"
           min_interval="10000"/>
   <FD_SOCK/>
   <FD timeout="3000" max_tries="3"/>
   <VERIFY_SUSPECT timeout="1500"/>
   <pbcast.NAKACK
         use_mcast_xmit="false"
         retransmit_timeout="300,600,1200,2400,4800"
         discard_delivered_msgs="false"/>
   <UNICAST timeout="300,600,1200"/>
   <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                  max_bytes="400000"/>
   <pbcast.STATE />
   <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>
   <UFC max_credits="2000000" min_threshold="0.10"/>
   <MFC max_credits="2000000" min_threshold="0.10"/>
   <FRAG2 frag_size="60000"/>
</config>
Though my entity is properly annotated.
Here are the steps to reproduce:
1. Boot node A completely.
2. Boot node B, make all caches start (DefaultCacheManager::startCaches(...)), then breakpoint just after (see the sketch below these steps).
3. On node A, do a clustered query.
4. Node A fails because node B has not been fully initialized.
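For reference, step 2 boots node B roughly like this (config file path and cache name are placeholders, exception handling omitted):

// Node B is paused right after startCaches(...) returns.
DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan-config.xml");
cacheManager.startCaches("XXX-Client");
Cache<String, Client> clientCache = cacheManager.getCache("XXX-Client");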
Here's how I do my query:
	private CacheQuery getClusteredNonClusteredQuery(Query query)
	{
		CacheQuery cacheQuery;
		if (useClusteredQuery)
		{
			cacheQuery = searchManager.getClusteredQuery(query, cacheValueClass);
		}
		else
		{
			cacheQuery = searchManager.getQuery(query, cacheValueClass);
		}
		return cacheQuery;
	}
I've also tried without supplying any "cacheValueClass", without any success.
One ugly "workaround" I've found is to force, as early as possible in the application, the local insertion and removal of a dummy key and value so as to force initialization of the search manager, like:
cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).put("XXX", new Client("XXX"));
cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).remove("XXX");
Though this technique still won't guarantee that no clustered query happens before that initialization.
I think this might also be related to ISPN-627 (Provision to get Cache from CacheManager).
Any idea or workaround? Do you think just adding a try/catch and returning an empty list could "fix" the problem?
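For what it's worth, here is a minimal sketch of that try/catch idea, reusing the getClusteredNonClusteredQuery(...) helper above (the method name is made up; whether silently returning an empty list is acceptable is exactly my question):

	private List<Object> cacheQueryListOrEmpty(Query query)
	{
		try
		{
			// CacheQuery.list() triggers the broadcast that currently fails
			return getClusteredNonClusteredQuery(query).list();
		}
		catch (org.infinispan.CacheException e)
		{
			// a remote node may not have fully initialized hibernate search yet
			return java.util.Collections.emptyList();
		}
	}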
Thanks a lot,
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] Created: (ISPN-1293) Enable default lifespan/maxIdle values to be used by the Hot Rod server
by Galder Zamarreño (JIRA)

Enable default lifespan/maxIdle values to be used by the Hot Rod server
-----------------------------------------------------------------------
                 Key: ISPN-1293
                 URL: https://issues.jboss.org/browse/ISPN-1293
             Project: Infinispan
          Issue Type: Enhancement
          Components: Cache Server
            Reporter: Galder Zamarreño
            Assignee: Galder Zamarreño
             Fix For: 5.2.0.FINAL
Hot Rod clients should be able to tell the server that no lifespan/maxIdle value was given, so that the server uses the default lifespan and maxIdle values set in its configuration. This is not currently possible in v1 of the protocol, and so requires a protocol change.
This is the result of the investigation for ISPN-1285, and so when this is resolved:
1. Make sure you revert the javadoc added to ISPN-1285 to document the limitation
2. Enable and expand client/hotrod-client/src/test/java/org/infinispan/client/hotrod/ExpiryTest.java
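To illustrate the limitation from the client side, using the Java Hot Rod client (the comments reflect my understanding of the current wire semantics and should be treated as assumptions):

RemoteCacheManager cacheManager = new RemoteCacheManager();
RemoteCache<String, String> cache = cacheManager.getCache();

// With protocol v1 the client always sends concrete lifespan/maxIdle values,
// so "no value given, use the server defaults" cannot be expressed:
cache.put("k", "v");

// Explicit expiry works, but forces the client to hard-code values that
// really belong in the server's configuration:
cache.put("k", "v", 60, TimeUnit.SECONDS, 30, TimeUnit.SECONDS);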
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] (ISPN-1586) inconsistent cache data in replication cluster with local (not shared) cache store
by dex chen (Created) (JIRA)

inconsistent cache data in replication cluster with local (not shared) cache store
----------------------------------------------------------------------------------
                 Key: ISPN-1586
                 URL: https://issues.jboss.org/browse/ISPN-1586
             Project: Infinispan
          Issue Type: Bug
          Components: Core API
    Affects Versions: 5.0.0.FINAL
         Environment: ISPN 5.0.0.Final and ISPN 5.1 snapshot
Java 1.7
Linux Cent OS
            Reporter: dex chen
            Assignee: Manik Surtani
I reran my test (an embedded ISPN cluster) with ISPN 5.0.0.Final and 5.1 snapshot code.
It is configured for "replication", using a local cache store, with preload=true and purgeOnStartup=false (see the whole config below).
I will get the inconsistent data among the nodes in the following scenario:
1) start 2 node cluster
2) after the cluster is formed, add some data to the cache
k1-->v1
k2-->v2
I will see the data replication working perfectly at this point.
3) bring node 2 down
4) delete entry k1-->v1 through node1
Note: At this point, the local (persistent) cache store on node2 still has 2 entries.
5) start node2 and wait for it to join the cluster
6) after state merging, you will see that node1 has 1 entry and node2 has 2 entries.
I am expecting that the data should be consistent across the cluster.
Here is the infinispan config:
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
      xmlns="urn:infinispan:config:5.0">
   <global>
      <transport clusterName="demoCluster"
                machineId="node1" 
            rackId="r1" nodeName="dexlaptop"
      >
         <properties>
            <property name="configurationFile" value="./jgroups-tcp.xml" />
         </properties>
      </transport>
      <globalJmxStatistics enabled="true"/>
   </global>
   <default>
     <locking
         isolationLevel="READ_COMMITTED"
         lockAcquisitionTimeout="20000"
         writeSkewCheck="false"
         concurrencyLevel="5000"
         useLockStriping="false"
      />
      <jmxStatistics enabled="true"/>
      <clustering mode="replication">
         <stateRetrieval
            timeout="240000"
            fetchInMemoryState="true"
            alwaysProvideInMemoryState="false"
         />
         <!--
            Network calls are synchronous.
         -->
         <sync replTimeout="20000"/>
      </clustering>
      <loaders
         passivation="false"
         shared="false"
         preload="true">
         <loader
            class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
            fetchPersistentState="true"
            purgeOnStartup="false">
            <!-- set to true for not first node in the cluster in testing/demo -->
            <properties>
              <property name="stringsTableNamePrefix" value="ISPN_STRING_TABLE"/>
              <property name="idColumnName" value="ID_COLUMN"/>
              <property name="dataColumnName" value="DATA_COLUMN"/>
              <property name="timestampColumnName" value="TIMESTAMP_COLUMN"/>
              <property name="timestampColumnType" value="BIGINT"/>
              <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
              <property name="connectionUrl" value="jdbc:h2:file:/var/tmp/h2cachestore;DB_CLOSE_DELAY=-1"/>
              <property name="userName" value="sa"/>
              <property name="driverClass" value="org.h2.Driver"/>
              <property name="idColumnType" value="VARCHAR(255)"/>
              <property name="dataColumnType" value="BINARY"/>
              <property name="dropTableOnExit" value="false"/>
              <property name="createTableOnStart" value="true"/>
      </properties>
            <!--
            <async enabled="false" />
            -->
         </loader>
      </loaders>
   </default>
</infinispan>
Basically, the current ISPN state transfer implementation results in data inconsistency among nodes when running in replication mode with each node having its own local cache store.
I found that BaseStateTransferManagerImpl's applyState() does not remove stale data from the local cache store, which results in inconsistent data when a node joins the cluster.
Here is a snippet of applyState():
public void applyState(Collection<InternalCacheEntry> state,
                       Address sender, int viewId) throws InterruptedException {
   .....

   for (InternalCacheEntry e : state) {
      InvocationContext ctx = icc.createInvocationContext(false, 1);
      // locking not necessary as during rehashing we block all transactions
      ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
                   SKIP_OWNERSHIP_CHECK);
      try {
         PutKeyValueCommand put = cf.buildPutKeyValueCommand(e.getKey(), e.getValue(), e.getLifespan(), e.getMaxIdle(), ctx.getFlags());
         interceptorChain.invoke(ctx, put);
      } catch (Exception ee) {
         log.problemApplyingStateForKey(ee.getMessage(), e.getKey());
      }
   }

   ...
}
As we can see, the code basically tries to add all data entries received from the cluster (the other node). Hence, it does not know that some entries still present in its local cache store were previously deleted from the cluster. This is exactly my test case (my configuration is that each node has its own cache store, in replication mode).
To fix this, we need to delete any entries from the local cache/cache store which no longer exist in the new state.
I modified the above method by adding the following code before the put loop, and it fixed the problem in my configuration:
// Remove entries which no longer exist in the new state from the local cache/cache store
for (InternalCacheEntry ie : dataContainer.entrySet()) {
   if (!state.contains(ie)) {
      log.debug("Trying to delete local store entry that no longer exists in the new state: " + ie.getKey());
      InvocationContext ctx = icc.createInvocationContext(false, 1);
      // locking not necessary as during rehashing we block all transactions
      ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
                   SKIP_OWNERSHIP_CHECK);
      try {
         RemoveCommand remove = cf.buildRemoveCommand(ie.getKey(), ie.getValue(), ctx.getFlags());
         interceptorChain.invoke(ctx, remove);
         dataContainer.remove(ie.getKey());
      } catch (Exception ee) {
         log.error("failed to delete local store entry", ee);
      }
   }
}
...
Obviously, the above "fix" is based on the assumption/configuration that dataContainer holds all local entries, i.e., preload=true, no eviction, replication mode.
The real fix, I think, is to delegate a syncState(state) operation to the cache store implementation, where we can check the configuration and do the right thing.
For example, in the cache store implementation we could calculate the changes based on the local data and the new state, and apply the changes there.
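As a rough sketch of that idea (the syncState method and its wiring are hypothetical, not an existing SPI), a cache store implementation could reconcile its persisted entries against the incoming state like this:

   // Hypothetical method on a cache store implementation: drop what the new
   // cluster state no longer contains, then persist the incoming entries.
   public void syncState(Collection<InternalCacheEntry> newState) throws CacheLoaderException {
      Set<Object> incomingKeys = new HashSet<Object>();
      for (InternalCacheEntry e : newState) {
         incomingKeys.add(e.getKey());
      }
      // remove persisted entries that no longer exist in the cluster state
      for (Object localKey : loadAllKeys(null)) {
         if (!incomingKeys.contains(localKey)) {
            remove(localKey);
         }
      }
      // then store the incoming entries
      for (InternalCacheEntry e : newState) {
         store(e);
      }
   }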
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira