[JBoss JIRA] Created: (ISPN-833) Revisit cache name predefinition limitation for cache servers
by Galder Zamarreño (JIRA)
Revisit cache name predefinition limitation for cache servers
-------------------------------------------------------------
Key: ISPN-833
URL: https://issues.jboss.org/browse/ISPN-833
Project: Infinispan
Issue Type: Feature Request
Components: Cache Server
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.0.0.BETA1, 5.0.0.Final
There are two primary reasons why Infinispan servers require caches to be predefined at startup and do not allow invocations on undefined caches:
1. Concurrent cache startups were resulting in NPEs (ISPN-635). This has already been solved since the 4.2.x days.
2. Infinispan has issues dealing with asymmetric clusters (ISPN-658).
Once these two issues have been resolved, revisit the limitation.
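For illustration, a rough sketch of the current limitation from the programmatic side, assuming the legacy org.infinispan.config API; the cache name is made up, and the lazy definition mentioned in the last comment is the proposal, not what the server does today:
import org.infinispan.config.Configuration;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class PredefinedCacheSketch {
   public static void main(String[] args) throws Exception {
      EmbeddedCacheManager cm = new DefaultCacheManager("infinispan.xml");
      // Today every named cache the server exposes must be defined and
      // started up front, before invocations are accepted...
      cm.defineConfiguration("transactional-cache", new Configuration());
      cm.getCache("transactional-cache");
      // ...whereas once ISPN-635 and ISPN-658 are fixed, a cache could be
      // defined and started lazily on the first remote invocation instead.
      cm.stop();
   }
}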
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-847) Eviction with strategy but no maxEntries does not make sense
by Galder Zamarreño (JIRA)
Eviction with strategy but no maxEntries does not make sense
------------------------------------------------------------
Key: ISPN-847
URL: https://issues.jboss.org/browse/ISPN-847
Project: Infinispan
Issue Type: Bug
Components: Configuration, Eviction
Affects Versions: 5.0.0.ALPHA1, 4.2.0.Final
Reporter: Galder Zamarreño
Assignee: Vladimir Blagojevic
Priority: Minor
Fix For: 5.0.0.BETA1, 5.0.0.Final
A configuration like this can be confusing because no maxEntries is set, and without it there is no trigger to start evicting entries:
<default>
<eviction strategy="FIFO" wakeUpInterval="6000" /><!-- 6 seconds -->
</default>
So we should come up with a WARN/ERROR message indicating that maxEntries is missing unless the NONE strategy is used.
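A minimal sketch of what such a check could look like, assuming a validation hook that sees the parsed eviction settings (the method below is illustrative, not the actual configuration validation code):
import org.infinispan.eviction.EvictionStrategy;

public class EvictionConfigCheckSketch {
   // Illustrative check: a strategy other than NONE without a positive
   // maxEntries means nothing will ever be evicted, so warn about it.
   static void checkEviction(EvictionStrategy strategy, int maxEntries) {
      if (strategy != EvictionStrategy.NONE && maxEntries <= 0) {
         System.err.println("WARN: eviction strategy " + strategy
               + " is configured but maxEntries is not set; no entries will be evicted");
      }
   }

   public static void main(String[] args) {
      checkEviction(EvictionStrategy.FIFO, -1); // mirrors the config above
   }
}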
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-898) Migrate Hudson from DummyTM to JBossTM
by Mircea Markus (JIRA)
Migrate Hudson from DummyTM to JBossTM
---------------------------------------
Key: ISPN-898
URL: https://issues.jboss.org/browse/ISPN-898
Project: Infinispan
Issue Type: Feature Request
Components: Transactions
Affects Versions: 4.2.0.Final
Reporter: Mircea Markus
Assignee: Mircea Markus
Fix For: 5.0.0.BETA1
This implies fixing some tests as well. These tests failed on migration:
>>> org.infinispan.api.mvcc.PutForExternalReadTest.org.infinispan.api.mvcc.PutForExternalReadTest-testMemLeakOnSuspendedTransactions 30.021 3
>>> org.infinispan.api.mvcc.repeatable_read.RepeatableReadLockTest.org.infinispan.api.mvcc.repeatable_read.RepeatableReadLockTest-testRepeatableReadWithNullRemoval 0.0030 3
>>> org.infinispan.distribution.InvalidationFailureTest.org.infinispan.distribution.InvalidationFailureTest-testH1Invalidated 0.0050 3
>>> org.infinispan.distribution.SyncDistImplicitLockingTest.org.infinispan.distribution.SyncDistImplicitLockingTest-testReplaceNonExistentKey 0.375 3
>>> org.infinispan.jmx.TxInterceptorMBeanTest.org.infinispan.jmx.TxInterceptorMBeanTest-testCommit 0.0030 3
>>> org.infinispan.jmx.TxInterceptorMBeanTest.org.infinispan.jmx.TxInterceptorMBeanTest-testRemoteCommit 0.0020 3
>>> org.infinispan.lock.EagerLockingSingleLockTest.org.infinispan.lock.EagerLockingSingleLockTest-testLockOwnerFailure 0.5 3
>>> org.infinispan.replication.AsyncReplTest.org.infinispan.replication.AsyncReplTest-testWithTx 0.049 3
>>> org.infinispan.replication.SyncReplImplicitLockingTest.org.infinispan.replication.SyncReplImplicitLockingTest-testReplaceNonExistentKey 0.394 3
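For reference, a hedged sketch of pointing a cache at JBossTM via the legacy programmatic Configuration API; the cache name is made up, and the actual Hudson migration would mostly mean switching the transaction manager lookup used by the test framework rather than writing code like this:
import org.infinispan.config.Configuration;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup;

public class JBossTmLookupSketch {
   public static void main(String[] args) {
      Configuration cfg = new Configuration();
      // Resolve the standalone JBossTM (Arjuna) transaction manager instead
      // of the in-tree DummyTransactionManager used by the suite so far.
      cfg.setTransactionManagerLookupClass(JBossStandaloneJTAManagerLookup.class.getName());
      EmbeddedCacheManager cm = new DefaultCacheManager();
      cm.defineConfiguration("tx-cache", cfg);
      cm.getCache("tx-cache").put("k", "v");
      cm.stop();
   }
}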
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-868) Running out of memory using Infinispan after adding a small number of entities
by Tom Waterhouse (JIRA)
Running out of memory using Infinispan after adding a small number of entities
------------------------------------------------------------------------------
Key: ISPN-868
URL: https://issues.jboss.org/browse/ISPN-868
Project: Infinispan
Issue Type: Bug
Affects Versions: 4.2.0.Final
Environment: JBossJTA 4.14.0/Hibernate 3.6.0.Final/Spring 3.0.5
Reporter: Tom Waterhouse
Assignee: Manik Surtani
While running a load-test data builder for our application, we ran out of memory very quickly. A simple test case (attached) was created to duplicate the issue; running it illustrates that Infinispan uses a large amount of heap space.
As a reference, the same test was run using EHCache 2.2. Memory usage was much lower; we never ran out of heap space. Note that EHCache was used as a reference only; our goal is to go to production with Infinispan.
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-897) Error message during start multi-nodes cluster
by Changgeng Li (JIRA)
Error message during start multi-nodes cluster
----------------------------------------------
Key: ISPN-897
URL: https://issues.jboss.org/browse/ISPN-897
Project: Infinispan
Issue Type: Enhancement
Components: Core API
Affects Versions: 4.2.1.CR1
Reporter: Changgeng Li
Assignee: Manik Surtani
When starting a multi-node cluster, you may see the following error message:
ERROR [org.infinispan.remoting.InboundInvocationHandlerImpl] Defined caches: [StreamingDeviceCache, TOPOLOGY_QAMNAMETOSERVICEGROUPID, TOPOLOGY_DEVICE, VAID, ASSETINFO_LOCAL_CACHE, CONTENT_VOLUME_LOCAL_CACHE, TOPOLOGY_EDGEDEVICEINPUT, TOPOLOGY_SERVICEGROUP, ODRM_SESSION, eigAllocation, PAID, clientIdToSessionId, exclusion, TOPOLOGY_STREAMINGDEVICE, sopAllocation, TOPOLOGY_QAM, qamAllocation, STREAMING_CONTENT, content, SERVICE_GROUP, billing, accountIdToSessionId, TOPOLOGY_EDGEDEVICE, session, POILIST, TOPOLOGY_SOPGROUP, VAID_LOCAL_CACHE]
This is actually not an ERROR, but it confuses our QA people.
Maybe we can change it to WARN level.
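For illustration only, a sketch of the kind of change proposed; the method, logger and message below are made up and are not the actual InboundInvocationHandlerImpl code:
import java.util.logging.Logger;
import org.infinispan.manager.EmbeddedCacheManager;

public class UndefinedCacheLogSketch {
   private static final Logger log = Logger.getLogger(UndefinedCacheLogSketch.class.getName());

   // A command arrived for a cache that is not defined on this node yet;
   // during staggered startup this is expected, so WARN instead of ERROR.
   static void reportUndefinedCache(EmbeddedCacheManager cm, String cacheName) {
      log.warning("Cache " + cacheName + " not defined on this node (yet); defined caches: "
            + cm.getCacheNames());
   }
}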
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-493) Harden rehash leave process
by Vladimir Blagojevic (JIRA)
Harden rehash leave process
---------------------------
Key: ISPN-493
URL: https://jira.jboss.org/browse/ISPN-493
Project: Infinispan
Issue Type: Task
Affects Versions: 4.1.0.BETA2, 4.0.0.Final
Reporter: Vladimir Blagojevic
Assignee: Vladimir Blagojevic
Fix For: 5.0.0.BETA1, 5.0.0.Final
We need to make sure that the leave rehash process properly handles massive and rapid node failures.
Massive failures:
JGroups detects multiple node failures and pushes up to Infinispan views that are more "volatile" than we currently assume (we assume only one member can leave at a time). For example, if we have view V1={A,B,C,D,E} and a massive failure causes {C,D,E} to fail, JGroups failure detection and GMS will install a view V2={A,B} on the surviving members. LeaveTask does not handle this scenario.
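A small sketch of the point above: with more volatile views, the set of leavers is the difference between consecutive views and can contain several members at once, which is exactly the case LeaveTask does not handle today (node names are the ones from the example):
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class MultiLeaverSketch {
   public static void main(String[] args) {
      List<String> v1 = Arrays.asList("A", "B", "C", "D", "E");
      List<String> v2 = Arrays.asList("A", "B"); // view installed after the massive failure
      Set<String> leavers = new LinkedHashSet<String>(v1);
      leavers.removeAll(v2);
      // Prints [C, D, E]: three leavers in a single view change, not one at a time.
      System.out.println("Leavers: " + leavers);
   }
}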
Rapid node failure:
We need to revisit how LeaveTasks are queued up and executed/cancelled during rapid node failures. Do we always cancel currently running leave tasks? At what stage are we allowed to cancel a running leave task, and at what stage is it better to wait for it to complete?
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-902) Data consistency across rehashing
by Erik Salter (JIRA)
Data consistency across rehashing
---------------------------------
Key: ISPN-902
URL: https://issues.jboss.org/browse/ISPN-902
Project: Infinispan
Issue Type: Bug
Reporter: Erik Salter
Assignee: Manik Surtani
Priority: Critical
Attachments: cacheTest.zip
There are two scenarios we're seeing on rehashing, both of which are critical.
1. When a node leaves a running cluster, we see an inordinate number of timeout errors, such as the one below. The end result is that the cluster loses data.
org.infinispan.util.concurrent.TimeoutException: Timed out waiting for valid responses!
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:417)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:101)
at org.infinispan.distribution.DistributionManagerImpl.retrieveFromRemoteSource(DistributionManagerImpl.java:341)
at org.infinispan.interceptors.DistributionInterceptor.realRemoteGet(DistributionInterceptor.java:143)
at org.infinispan.interceptors.DistributionInterceptor.remoteGetAndStoreInL1(DistributionInterceptor.java:131)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:59)
06:07:44,097 WARN [GMS] cms-node-20192: merge leader did not get data from all partition coordinators [cms-node-20192, mydht1-18445], merge is cancelled
2. Joining a node into a running cluster causes transactional failures on the other nodes. Most of the time, depending on the load, a node can take upwards of 8 minutes to join.
I've attached a unit test that can reproduce these issues.
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira