[JBoss JIRA] (ISPN-3029) IllegalMonitorStateException in LockSupportCacheStore.loadAllKeys
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3029?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3029:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> IllegalMonitorStateException in LockSupportCacheStore.loadAllKeys
> -----------------------------------------------------------------
>
> Key: ISPN-3029
> URL: https://issues.jboss.org/browse/ISPN-3029
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.5.Final
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Priority: Critical
> Labels: onboard
> Fix For: 6.0.0.Final
>
>
> Most LockSupportCacheStore methods that call {{acquireGlobalLock()}} ignore its return value and proceed as if the lock was acquired on all the buckets. ISPN-2378 partially fixed this by only attempting to unlock the global lock if the lock was actually acquired, but the processing that was supposed to be protected by the global lock is still executed even if the lock acquisition failed.
> In {{loadAllKeys}}, this doesn't usually cause any problems. But if the cache store contains expired entries, it will try to upgrade a bucket lock to a write lock in order to update the bucket on disk, and the upgrade will fail with an {{IllegalMonitorStateException}}:
> {noformat}
> > 20:41:36,960 ERROR [org.infinispan.statetransfer.OutboundTransferTask] (undefined) Failed to execute outbound transfer: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread
> > at java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:447) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:431) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1340) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:883) [rt.jar:1.7.0_09-icedtea]
> > at org.infinispan.util.concurrent.locks.StripedLock.upgradeLock(StripedLock.java:140) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.LockSupportCacheStore.upgradeLock(LockSupportCacheStore.java:106) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.bucket.BucketBasedCacheStore.access$000(BucketBasedCacheStore.java:49) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.bucket.BucketBasedCacheStore$CollectionGeneratingBucketHandler.handle(BucketBasedCacheStore.java:159) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.file.FileCacheStore.loopOverBuckets(FileCacheStore.java:102) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.bucket.BucketBasedCacheStore.loadAllKeysLockSafe(BucketBasedCacheStore.java:219) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.decorators.AbstractDelegatingStore.loadAllKeys(AbstractDelegatingStore.java:140) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.decorators.AsyncStore.loadKeys(AsyncStore.java:184) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.loaders.decorators.AsyncStore.loadAllKeys(AsyncStore.java:205) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:163) [infinispan-core-5.2.4.Final-redhat-2.jar:5.2.4.Final-redhat-2]
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.FutureTask.run(FutureTask.java:166) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.FutureTask.run(FutureTask.java:166) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_09-icedtea]
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_09-icedtea]
> > at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_09-icedtea]
> > at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA-redhat-2.jar:2.0.0.GA-redhat-2]
> {noformat}
> A simple solution would be to throw an exception whenever global lock acquisition fails, but the current global lock acquisition algorithm might need to change as well, because it seems very deadlock-prone at the moment.
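> A minimal sketch of that simple solution (the helper names follow the issue text; the exact signatures in LockSupportCacheStore may differ):
> {code:java}
> public Set<Object> loadAllKeys(Set<Object> keysToExclude) throws CacheLoaderException {
>    boolean acquired = acquireGlobalLock(false); // shared lock across all stripes
>    if (!acquired) {
>       // Fail fast instead of iterating the buckets without the lock,
>       // which is what later triggers the IllegalMonitorStateException on upgrade.
>       throw new CacheLoaderException("Timed out acquiring the global lock in loadAllKeys()");
>    }
>    try {
>       return loadAllKeysLockSafe(keysToExclude);
>    } finally {
>       releaseGlobalLock(false);
>    }
> }
> {code}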
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-3224) RemoteCacheManager of HotRod client is not able to connect to the server because of wrong parsing of IPv6 addresses on pure IPv6 machines, and gets a wrong address on dual stack machines
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3224?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3224:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> RemoteCacheManager of HotRod client is not able to connect to the server because of wrong parsing of IPv6 addresses on pure IPv6 machines, and gets a wrong address on dual stack machines
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3224
> URL: https://issues.jboss.org/browse/ISPN-3224
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Affects Versions: 5.2.4.Final
> Reporter: Vitalii Chepeliuk
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 6.0.0.Final
>
>
> ########################Running the Hot Rod client with pure IPv6#############################################
> The Hot Rod client fails when it tries to connect to the server. Below is the exception from a pure IPv6 machine; the client does not really understand the IPv6 address it should connect to. "Could not connect to server: /0.0.10.60:52" should show an IPv6 address, but instead it is some wrong IPv4 address the client tries to connect to. I use the complicated address 2620:52:0:105f:0:0:ffff:32%2:11222 as the host variable, and it is not specified in /etc/hosts.
> public RemoteCacheManager(String host, int port, boolean start, ClassLoader classLoader) {
>    config = new ConfigurationProperties(host + ":" + port); <<< host=2620:52:0:105f:0:0:ffff:32%2 and port=11222
>    this.classLoader = classLoader;
>    if (start) start();
> }
>
> Then, in the start() method:
> @Override
> public void start() {
>    // Workaround for JDK6 NPE: http://bugs.sun.com/view_bug.do?bug_id=6427854
>    SysPropertyActions.setProperty("sun.nio.ch.bugLevel", "\"\"");
>    forceReturnValueDefault = config.getForceReturnValues();
>    codec = CodecFactory.getCodec(config.getProtocolVersion());
>    String factory = config.getTransportFactory();
>    transportFactory = (TransportFactory) getInstance(factory, classLoader);
>    Collection<SocketAddress> servers = config.getServerList(); <<< we get the list of servers, but the getServerList() method should be improved, see below!
>
>    transportFactory.start(codec, config, servers, topologyId, classLoader); <<< and pass it to the transportFactory
>    if (marshaller == null) {
>       String marshallerName = config.getMarshaller();
>       setMarshaller((Marshaller) getInstance(marshallerName, classLoader));
>    }
>    if (asyncExecutorService == null) {
>       String asyncExecutorClass = config.getAsyncExecutorFactory();
>       ExecutorFactory executorFactory = (ExecutorFactory) getInstance(asyncExecutorClass, classLoader);
>       asyncExecutorService = executorFactory.getExecutor(config.getProperties());
>    }
>    synchronized (cacheName2RemoteCache) {
>       for (RemoteCacheHolder rcc : cacheName2RemoteCache.values()) {
>          startRemoteCache(rcc);
>       }
>    }
>    // Print version to help figure out which client version is running
>    log.version(org.infinispan.Version.printVersion());
>    started = true;
> }
>
> and "servers" variable contain the same IP address 2620:52:0:105f:0:0:ffff:32%2:11222
> public Collection<SocketAddress> getServerList() {
>    Set<SocketAddress> addresses = new HashSet<SocketAddress>();
>    String servers = props.getProperty(SERVER_LIST, "127.0.0.1:" + DEFAULT_HOTROD_PORT); <<< got 2620:52:0:105f:0:0:ffff:32%2:11222
>    for (String server : servers.split(";")) {
>       String[] components = server.trim().split(":"); <<< the splitting goes wrong right here: we divide the address into 9 chunks
>       String host = components[0]; <<< the host name becomes just the first chunk, 2620
>       int port = DEFAULT_HOTROD_PORT;
>       if (components.length > 1) port = Integer.parseInt(components[1]); <<< and the port becomes 52
>       addresses.add(new InetSocketAddress(host, port)); <<< so we pass the wrong parameters to this constructor: InetSocketAddress("2620", 52)
>    }
>    if (addresses.isEmpty()) throw new IllegalStateException("No Hot Rod servers specified!");
>    return addresses; <<< and we end up with a strange IPv4 address, 0.0.10.60:52
> }
> The exception is the following:
> Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /0.0.10.60:52
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:88)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:38)
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:271)
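> A rough sketch of IPv6-tolerant splitting (illustrative only, not the actual fix; it assumes bracketed literals like [::1]:11222 are the unambiguous way to combine an IPv6 host with a port):
> {code:java}
> // Split "host:port" where the host may itself contain ':' (an IPv6 literal).
> private static InetSocketAddress parseServer(String server, int defaultPort) {
>    server = server.trim();
>    String host = server;
>    int port = defaultPort;
>    if (server.startsWith("[")) {
>       // Bracketed IPv6 literal, e.g. [2620:52:0:105f::ffff:32%2]:11222
>       int end = server.indexOf(']');
>       if (end < 0) throw new IllegalArgumentException("Unclosed '[' in " + server);
>       host = server.substring(1, end);
>       if (end + 1 < server.length() && server.charAt(end + 1) == ':')
>          port = Integer.parseInt(server.substring(end + 2));
>    } else {
>       int firstColon = server.indexOf(':');
>       int lastColon = server.lastIndexOf(':');
>       if (firstColon != -1 && firstColon == lastColon) {
>          // Exactly one ':' -- an ordinary "host:port" pair.
>          host = server.substring(0, lastColon);
>          port = Integer.parseInt(server.substring(lastColon + 1));
>       }
>       // Several ':' without brackets is a raw IPv6 literal; an appended port is
>       // ambiguous there, so the whole string is treated as the host.
>    }
>    return new InetSocketAddress(host, port);
> }
> {code}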
> ########################Other issue: when the Hot Rod client connects in dual-stack mode#################
> /etc/hosts file is---------------------------------------------------------
> 127.0.0.1 myhost localhost.localdomain localhost
> ::1 myhost localhost6.localdomain6 localhost6
> ---------------------------------------------------------------------------
> Then the same problem occurs in the ConfigurationProperties.java getServerList() method, where we add the address to addresses via <<<addresses.add(new InetSocketAddress(host, port));>>>
> so the InetSocketAddress constructor is called:
> public InetSocketAddress(String hostname, int port) {
>    checkHost(hostname);
>    InetAddress addr = null;
>    String host = null;
>    try {
>       addr = InetAddress.getByName(hostname); <<< we should get the InetAddress for the hostname
>    } catch(UnknownHostException e) {
>       host = hostname;
>    }
>    holder = new InetSocketAddressHolder(host, addr, checkPort(port));
> }
> But we have two(!) different inet addresses for the same hostname: one is 127.0.0.1 and the other is ::1, and if I run on IPv6 it should be ::1 and not 127.0.0.1!
> And:
> public static InetAddress getByName(String host)
>       throws UnknownHostException {
>    return InetAddress.getAllByName(host)[0]; <<< but here we take only the first address in the array, which is always 127.0.0.1
> }
> Then the other exception is thrown:
> Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: vchepQA/127.0.0.1:11222
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:88)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:38)
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:271)
> ... 97 more
> ---------------------------------------------------------------------------
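> A rough sketch of a client-side workaround for the dual-stack case, preferring an IPv6 address when one resolves (illustrative; the JVM-wide java.net.preferIPv6Addresses system property is the standard switch for this behaviour):
> {code:java}
> // Instead of InetAddress.getByName(host), which just takes getAllByName(host)[0],
> // scan all resolved addresses and prefer an IPv6 one when running on IPv6.
> static InetAddress resolvePreferIPv6(String host) throws UnknownHostException {
>    InetAddress[] all = InetAddress.getAllByName(host);
>    for (InetAddress a : all)
>       if (a instanceof Inet6Address)
>          return a; // e.g. ::1 rather than 127.0.0.1 for "myhost"
>    return all[0]; // no IPv6 address resolved; fall back to the first one
> }
> {code}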
> I forgot to attach the trace log; just download it here: http://dropmefiles.com/en/H5wvu
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-1586) inconsistent cache data in replication cluster with local (not shared) cache store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1586?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1586:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> inconsistent cache data in replication cluster with local (not shared) cache store
> ----------------------------------------------------------------------------------
>
> Key: ISPN-1586
> URL: https://issues.jboss.org/browse/ISPN-1586
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.0.0.FINAL, 5.1.0.CR1
> Environment: ISPN 5.0.0.Final and ISPN 5.1 snapshot
> Java 1.7
> Linux CentOS
> Reporter: dex chen
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 6.0.0.Final
>
>
> I reran my test (an embedded ISPN cluster) with ISPN 5.0.0.Final and 5.1 snapshot code.
> It is configured in "replication" mode, using a local cache store, with preload=true and purgeOnStartup=false (see the whole config below).
> I get inconsistent data among the nodes in the following scenario:
> 1) start 2 node cluster
> 2) after the cluster is formed, add some data to the cache
> k1-->v1
> k2-->v2
> I will see the data replication working perfectly at this point.
> 3) bring node 2 down
> 4) delete entry k1-->v1 through node1
> Note: At this point, the local (persistent) cache store on node2 still has 2 entries.
> 5) start node2 and wait for it to join the cluster
> 6) after state merging, you will see that node1 now has 1 entry and node2 has 2 entries.
> I am expecting that the data should be consistent across the cluster.
> Here is the infinispan config:
> {code:xml}
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
> xmlns="urn:infinispan:config:5.0">
> <global>
> <transport clusterName="demoCluster"
> machineId="node1"
> rackId="r1" nodeName="dexlaptop"
> >
> <properties>
> <property name="configurationFile" value="./jgroups-tcp.xml" />
> </properties>
> </transport>
> <globalJmxStatistics enabled="true"/>
> </global>
> <default>
> <locking
> isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="5000"
> useLockStriping="false"
> />
> <jmxStatistics enabled="true"/>
> <clustering mode="replication">
> <stateRetrieval
> timeout="240000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="false"
> />
> <!--
> Network calls are synchronous.
> -->
> <sync replTimeout="20000"/>
> </clustering>
> <loaders
> passivation="false"
> shared="false"
> preload="true">
> <loader
> class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
> fetchPersistentState="true"
> purgeOnStartup="false">
> <!-- set to true for not first node in the cluster in testing/demo -->
> <properties>
> <property name="stringsTableNamePrefix" value="ISPN_STRING_TABLE"/>
> <property name="idColumnName" value="ID_COLUMN"/>
> <property name="dataColumnName" value="DATA_COLUMN"/>
> <property name="timestampColumnName" value="TIMESTAMP_COLUMN"/>
> <property name="timestampColumnType" value="BIGINT"/>
> <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
> <property name="connectionUrl" value="jdbc:h2:file:/var/tmp/h2cachestore;DB_CLOSE_DELAY=-1"/>
> <property name="userName" value="sa"/>
> <property name="driverClass" value="org.h2.Driver"/>
> <property name="idColumnType" value="VARCHAR(255)"/>
> <property name="dataColumnType" value="BINARY"/>
> <property name="dropTableOnExit" value="false"/>
> <property name="createTableOnStart" value="true"/>
> </properties>
> <!--
> <async enabled="false" />
> -->
> </loader>
> </loaders>
> </default>
> </infinispan>
> {code}
> Basically, the current ISPN state transfer implementation results in data inconsistency among nodes when they run in replication mode and each node has a local cache store.
> I found that BaseStateTransferManagerImpl's applyState() code does not remove stale data from the local cache store, which results in inconsistent data when a node joins a cluster.
> Here is the code snippet of applyState():
> {code:java}
> public void applyState(Collection<InternalCacheEntry> state,
>       Address sender, int viewId) throws InterruptedException {
>    .....
>
>    for (InternalCacheEntry e : state) {
>       InvocationContext ctx = icc.createInvocationContext(false, 1);
>       // locking not necessary as during rehashing we block all transactions
>       ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
>             SKIP_OWNERSHIP_CHECK);
>       try {
>          PutKeyValueCommand put = cf.buildPutKeyValueCommand(e.getKey(), e.getValue(), e.getLifespan(), e.getMaxIdle(), ctx.getFlags());
>          interceptorChain.invoke(ctx, put);
>       } catch (Exception ee) {
>          log.problemApplyingStateForKey(ee.getMessage(), e.getKey());
>       }
>    }
>
>    ...
> }
> {code}
> As we can see, the code basically tries to add all the data entries received from the cluster (the other node). Hence, it does not know that entries which still exist in its local cache store were previously deleted from the cluster. This is exactly my test case (my configuration is replication mode, with each node having its own cache store).
> To fix this, we need to delete any entries from the local cache/cache store which no longer exist in the new state.
> I modified the above method by adding the following code before the put loop, and it fixed the problem in my configuration:
> {code:java}
> // Remove entries which no longer exist in the new state from the local cache/cache store
> for (InternalCacheEntry ie : dataContainer.entrySet()) {
>    if (!state.contains(ie)) {
>       log.debug("Trying to delete local store entry that no longer exists in the new state: " + ie.getKey());
>       InvocationContext ctx = icc.createInvocationContext(false, 1);
>       // locking not necessary as during rehashing we block all transactions
>       ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
>             SKIP_OWNERSHIP_CHECK);
>       try {
>          RemoveCommand remove = cf.buildRemoveCommand(ie.getKey(), ie.getValue(), ctx.getFlags());
>          interceptorChain.invoke(ctx, remove);
>          dataContainer.remove(ie.getKey());
>       } catch (Exception ee) {
>          log.error("failed to delete local store entry", ee);
>       }
>    }
> }
> ...
> {code}
> Obviously, the above "fix" is based on the assumption/configuration that the dataContainer holds all local entries, i.e., preload=true, no eviction, replication mode.
> The real fix, I think, is to delegate a syncState(state) operation to the cache store implementation, where we can check the configuration and do the right thing.
> For example, in the cache store implementation we can calculate the changes based on the local data and the new state, and apply the changes there.
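> A rough sketch of what such a delegation hook could look like (syncState is the name suggested above; the body is illustrative and reuses existing CacheStore operations):
> {code:java}
> // Hypothetical CacheStore extension point: reconcile the local store with the
> // authoritative state received while joining the cluster.
> void syncState(Collection<InternalCacheEntry> state) throws CacheLoaderException {
>    Set<Object> incomingKeys = new HashSet<Object>(state.size());
>    for (InternalCacheEntry e : state)
>       incomingKeys.add(e.getKey());
>    // First drop everything the cluster no longer knows about...
>    for (Object localKey : loadAllKeys(null))
>       if (!incomingKeys.contains(localKey))
>          remove(localKey);
>    // ...then apply the incoming entries.
>    for (InternalCacheEntry e : state)
>       store(e);
> }
> {code}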
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-2475) When L1.onRehash is enabled, L1 invalidations should be sent to the previous owners
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2475?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2475:
--------------------------------
Assignee: William Burns (was: Dan Berindei)
> When L1.onRehash is enabled, L1 invalidations should be sent to the previous owners
> -----------------------------------------------------------------------------------
>
> Key: ISPN-2475
> URL: https://issues.jboss.org/browse/ISPN-2475
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.1.0.FINAL
> Reporter: Dan Berindei
> Assignee: William Burns
> Priority: Critical
> Fix For: 5.3.0.Final
>
>
> Copied from the parent:
> {quote}
> [...] we can keep track of the consistent hashes in the last 10 minutes (or whatever the L1 lifespan is) and the time of the last invalidation sent for each key. When we need to send a new invalidation, we add the owners in all the consistent hashes since the last invalidation to the invalidation command recipients.
> {quote}
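> A rough sketch of that bookkeeping (all names are illustrative; it assumes a ConsistentHash that can locate the owners of a key):
> {code:java}
> // History of consistent hashes installed in the last L1-lifespan window,
> // plus the time of the last invalidation sent for each key.
> private static final class CHSnapshot {
>    final long timestampMillis;
>    final ConsistentHash ch;
>    CHSnapshot(long timestampMillis, ConsistentHash ch) { this.timestampMillis = timestampMillis; this.ch = ch; }
> }
>
> private final Deque<CHSnapshot> chHistory = new ConcurrentLinkedDeque<CHSnapshot>();
> private final ConcurrentMap<Object, Long> lastInvalidationSent = new ConcurrentHashMap<Object, Long>();
>
> Set<Address> invalidationRecipients(Object key, ConsistentHash currentCh) {
>    Long last = lastInvalidationSent.get(key);
>    long since = last == null ? 0L : last.longValue();
>    Set<Address> recipients = new HashSet<Address>(currentCh.locateOwners(key));
>    // Add the owners from every consistent hash installed since the last invalidation for this key.
>    for (CHSnapshot snapshot : chHistory)
>       if (snapshot.timestampMillis >= since)
>          recipients.addAll(snapshot.ch.locateOwners(key));
>    lastInvalidationSent.put(key, System.currentTimeMillis());
>    return recipients;
> }
> {code}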
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-3048) Eviction needs to be transactional
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3048?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3048:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> Eviction needs to be transactional
> ----------------------------------
>
> Key: ISPN-3048
> URL: https://issues.jboss.org/browse/ISPN-3048
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.3.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 6.0.0.Final
>
>
> Currently, Infinispan eviction is non-transactional. This makes Infinispan's eviction manager virtually unusable, since non-transactional eviction can cause phantom reads and data loss by violating the isolation of concurrent transactions. This is especially problematic when using a passivation-enabled cache store: a cache eviction/passivation can cause a concurrently executed cache retrieval to return null, even though the act of passivation does not change the data, only where it is stored.
> We work around this in the AS by performing eviction manually, using pessimistic locking in combination with eager lock acquisition prior to eviction. This is unfortunate, since it prevents me from leveraging Infinispan's built-in eviction strategies.
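> A rough sketch of the shape of that manual workaround, assuming a transactional cache configured with pessimistic locking (the method and wiring are illustrative, not the actual AS code):
> {code:java}
> // Evict inside a transaction: eagerly lock the key first, so a concurrent
> // reader cannot observe the entry mid-passivation and get a phantom null.
> void evictTransactionally(Cache<Object, Object> cache, Object key) throws Exception {
>    TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>    tm.begin();
>    try {
>       cache.getAdvancedCache().lock(key); // eager, exclusive lock on the key
>       cache.evict(key);                   // evict while holding the lock
>       tm.commit();
>    } catch (Exception e) {
>       tm.rollback();
>       throw e;
>    }
> }
> {code}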
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-3048) Eviction needs to be transactional
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3048?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3048:
--------------------------------
Assignee: William Burns (was: Dan Berindei)
> Eviction needs to be transactional
> ----------------------------------
>
> Key: ISPN-3048
> URL: https://issues.jboss.org/browse/ISPN-3048
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.3.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: William Burns
> Priority: Critical
> Fix For: 6.0.0.Final
>
>
> Currently, Infinispan eviction is non-transactional. This makes Infinispan's eviction manager virtually unusable, since non-transactional eviction can cause phantom reads and data loss by violating the isolation of concurrent transactions. This is especially problematic when using a passivation-enabled cache store: a cache eviction/passivation can cause a concurrently executed cache retrieval to return null, even though the act of passivation does not change the data, only where it is stored.
> We work around this in the AS by performing eviction manually, using pessimistic locking in combination with eager lock acquisition prior to eviction. This is unfortunate, since it prevents me from leveraging Infinispan's built-in eviction strategies.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-2913) putForExternalRead leaves locks
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2913?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2913:
--------------------------------
Fix Version/s: 6.0.0.Final
> putForExternalRead leaves locks
> -------------------------------
>
> Key: ISPN-2913
> URL: https://issues.jboss.org/browse/ISPN-2913
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Adrian Nistor
> Priority: Critical
> Fix For: 6.0.0.Final
>
> Attachments: SebastianTusk_ISPN-2913.patch
>
>
> In TxDistributionInterceptor.remoteGetAndStoreInL1 locks are acquired. Without a transaction these locks are never released. The cache setup is Dist, Async, L1, 2 Nodes, 1 Owner, Optimistic Locking.
> In AbstractTxLockingInterceptor.visitGetKeyValueCommand, locks are released explicitly when outside of a transaction. I fixed this problem by doing the same in OptimisticLockingInterceptor.visitPutKeyValueCommand. It is very likely that this doesn't fix all the problems; for instance, OptimisticLockingInterceptor.visitPutMapCommand or PessimisticLockingInterceptor may be affected as well.
> Cache Config:
> {code:xml}
> <namedCache name="entity">
>    <jmxStatistics enabled="true" />
>
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="false" timeout="20000" />
>       <async />
>       <l1 enabled="true" />
>       <hash numOwners="1"/>
>    </clustering>
>    <locking isolationLevel="READ_COMMITTED"
>          lockAcquisitionTimeout="15000" useLockStriping="false" />
>
>    <eviction maxEntries="10000" strategy="LRU" />
>    <expiration maxIdle="100000" wakeUpInterval="5000"/>
>    <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
>
>    <transaction transactionMode="TRANSACTIONAL" autoCommit="false" lockingMode="OPTIMISTIC"/>
> </namedCache>
> {code}
> Fixed OptimisticLockingInterceptor.visitPutKeyValueCommand:
> {code:java}
> @Override
> public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
>    try {
>       if (command.isConditional()) markKeyAsRead(ctx, command);
>       return invokeNextInterceptor(ctx, command);
>    } catch (Throwable te) {
>       throw cleanLocksAndRethrow(ctx, te);
>    } finally {
>       // with putForExternalRead the value might be put into L1 without a transaction;
>       // we need to release any locks for these cases
>       if (!ctx.isInTxScope()) lockManager.unlockAll(ctx);
>    }
> }
> {code}
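> The same pattern would presumably carry over to the other paths the description flags; a sketch for visitPutMapCommand under the same assumption (release locks whenever we are outside a transaction scope):
> {code:java}
> @Override
> public Object visitPutMapCommand(InvocationContext ctx, PutMapCommand command) throws Throwable {
>    try {
>       return invokeNextInterceptor(ctx, command);
>    } catch (Throwable te) {
>       throw cleanLocksAndRethrow(ctx, te);
>    } finally {
>       // As in visitPutKeyValueCommand: without a transaction, nothing else
>       // will release locks acquired further down the interceptor chain.
>       if (!ctx.isInTxScope()) lockManager.unlockAll(ctx);
>    }
> }
> {code}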
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-2913) putForExternalRead leaves locks
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2913?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2913:
--------------------------------
Priority: Major (was: Critical)
> putForExternalRead leaves locks
> -------------------------------
>
> Key: ISPN-2913
> URL: https://issues.jboss.org/browse/ISPN-2913
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Adrian Nistor
> Fix For: 6.0.0.Final
>
> Attachments: SebastianTusk_ISPN-2913.patch
>
>
> In TxDistributionInterceptor.remoteGetAndStoreInL1 locks are acquired. Without a transaction these locks are never released. The cache setup is Dist, Async, L1, 2 Nodes, 1 Owner, Optimistic Locking.
> In AbstractTxLockingInterceptor.visitGetKeyValueCommand, locks are released explicitly when outside of a transaction. I fixed this problem by doing the same in OptimisticLockingInterceptor.visitPutKeyValueCommand. It is very likely that this doesn't fix all the problems; for instance, OptimisticLockingInterceptor.visitPutMapCommand or PessimisticLockingInterceptor may be affected as well.
> Cache Config:
> {code:xml}
> <namedCache name="entity">
>    <jmxStatistics enabled="true" />
>
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="false" timeout="20000" />
>       <async />
>       <l1 enabled="true" />
>       <hash numOwners="1"/>
>    </clustering>
>    <locking isolationLevel="READ_COMMITTED"
>          lockAcquisitionTimeout="15000" useLockStriping="false" />
>
>    <eviction maxEntries="10000" strategy="LRU" />
>    <expiration maxIdle="100000" wakeUpInterval="5000"/>
>    <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
>
>    <transaction transactionMode="TRANSACTIONAL" autoCommit="false" lockingMode="OPTIMISTIC"/>
> </namedCache>
> {code}
> Fixed OptimisticLockingInterceptor.visitPutKeyValueCommand:
> {code:java}
> @Override
> public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
>    try {
>       if (command.isConditional()) markKeyAsRead(ctx, command);
>       return invokeNextInterceptor(ctx, command);
>    } catch (Throwable te) {
>       throw cleanLocksAndRethrow(ctx, te);
>    } finally {
>       // with putForExternalRead the value might be put into L1 without a transaction;
>       // we need to release any locks for these cases
>       if (!ctx.isInTxScope()) lockManager.unlockAll(ctx);
>    }
> }
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months
[JBoss JIRA] (ISPN-2974) DeltaAware based fine-grained replication corrupts cache data, if eviction is enabled
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2974?page=com.atlassian.jira.plugin.... ]
Mircea Markus resolved ISPN-2974.
---------------------------------
Resolution: Done
> DeltaAware based fine-grained replication corrupts cache data, if eviction is enabled
> -------------------------------------------------------------------------------------
>
> Key: ISPN-2974
> URL: https://issues.jboss.org/browse/ISPN-2974
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.1.Final, 5.2.5.Final, 5.2.6.Final
> Reporter: Horia Chiorean
> Assignee: Adrian Nistor
> Priority: Critical
> Labels: 5.2.x
> Fix For: 5.3.0.Beta1, 5.2.6.Final
>
>
> When using a custom {{DeltaAware}} implementation in a cluster of 2 replicated nodes with eviction enabled, data transferred from one node (the writer) to another (the reader) causes entries that are stored on the reader but evicted at the time of the change to be overwritten with whatever the latest partial delta was.
> In more detail:
> * configure 2 nodes in replicated mode, with eviction enabled
> * consider NodeA the writer and NodeB the reader
> * NodeA inserts some data (custom entries) into the cache
> * NodeB correctly receives via state transfer the initial data
> * NodeA loads & partially updates some information about an entry which is not in the cache (it was evicted previously)
> * NodeB receives the partial delta with the changes from NodeA, but *instead of merging* with whatever is stored in the persistent store, *replaces the entire entry in the cache*, leaving it in effect with "partial/corrupt information"
> If eviction is not enabled, everything works as expected.
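> For reference, the contract being violated is Delta.merge(): a minimal sketch of what the receiving node is expected to do with an incoming delta (the loadFromStoreIfEvicted helper is hypothetical):
> {code:java}
> // Expected behaviour on NodeB: merge the incoming delta into the previous value,
> // reloading it from the persistent store if the entry was evicted from memory,
> // rather than replacing the whole entry with the partial delta.
> DeltaAware applyDelta(Object key, Delta incoming) {
>    DeltaAware previous = loadFromStoreIfEvicted(key); // hypothetical lookup; may return null for a new entry
>    return incoming.merge(previous);                   // Delta.merge() reconstructs the full entry
> }
> {code}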
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
11 years, 7 months