[JBoss JIRA] (ISPN-1462) JGroupsTransport doesn't initialize properly when it receives an already-connected JGroups channel
by Dan Berindei (Created) (JIRA)
JGroupsTransport doesn't initialize properly when it receives an already-connected JGroups channel
--------------------------------------------------------------------------------------------------
Key: ISPN-1462
URL: https://issues.jboss.org/browse/ISPN-1462
Project: Infinispan
Issue Type: Bug
Components: RPC
Affects Versions: 5.1.0.BETA1
Reporter: Dan Berindei
Assignee: Dan Berindei
Priority: Blocker
Fix For: 5.1.0.BETA2
When the JGroups channel is connected outside Infinispan before being passed to JGroupsTransport, the member list is never initialized.
This can lead to strange errors such as:
java.lang.IllegalArgumentException: Invalid cache list for consistent hash: []
at org.infinispan.distribution.ch.AbstractWheelConsistentHash.setCaches(AbstractWheelConsistentHash.java:96)
at org.infinispan.distribution.ch.ConsistentHashHelper.createConsistentHash(ConsistentHashHelper.java:122)
at org.infinispan.statetransfer.ReplicatedStateTransferManagerImpl.createConsistentHash(ReplicatedStateTransferManagerImpl.java:56)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.start(BaseStateTransferManagerImpl.java:143)
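The failure mode can be sketched with a minimal toy model (hypothetical classes, not the real Infinispan/JGroups code): the transport populates its member list only from a view callback, and a channel that was connected before the transport registered its listener never delivers that callback, leaving the member list empty.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the bug. ChannelModel and TransportModel are illustrative
// stand-ins for JChannel and JGroupsTransport, not the real classes.
class ChannelModel {
    boolean connected;
    Runnable viewListener;
    void connect() {
        connected = true;
        if (viewListener != null) viewListener.run(); // view delivered on connect
    }
    void setListener(Runnable l) { viewListener = l; } // past views are not replayed
}

class TransportModel {
    final List<String> members = new ArrayList<>();
    void start(ChannelModel ch) {
        ch.setListener(() -> members.add("node-A")); // members filled on view change
        if (!ch.connected) ch.connect();             // already connected: skipped
    }
}

public class Ispn1462Sketch {
    public static void main(String[] args) {
        ChannelModel pre = new ChannelModel();
        pre.connect();                          // connected outside the transport
        TransportModel t1 = new TransportModel();
        t1.start(pre);
        System.out.println(t1.members);         // [] -> "Invalid cache list" downstream

        TransportModel t2 = new TransportModel();
        t2.start(new ChannelModel());           // transport connects it itself
        System.out.println(t2.members);         // [node-A]
    }
}
```

An empty member list is exactly what AbstractWheelConsistentHash.setCaches rejects in the stack trace above.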
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-658) DistributionManager not considerate of cache state changes
by Paul Ferraro (JIRA)
DistributionManager not considerate of cache state changes
----------------------------------------------------------
Key: ISPN-658
URL: https://jira.jboss.org/browse/ISPN-658
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 4.2.0.ALPHA2
Reporter: Paul Ferraro
Assignee: Manik Surtani
Consider a cache manager with 2 caches in DIST mode (C1 and C2) deployed on 2 nodes (N1 and N2).
Currently, the DistributionManager does not properly handle the following scenarios:
1. Stop C1 on N1. This ought to trigger a rehash for the C1 cache. Currently, rehashing is only triggered via view change. Failing to rehash when a cache stops can inadvertently cause data loss if all backups of a given cache entry have stopped.
2. A new DIST mode cache, C3, is started on N2. If N1 is the coordinator, the join request sent to N1 will get stuck in an infinite loop, since the cache manager on N1 does not contain a C3 cache.
3. Less critically, a new node, N3, is started. It does not yet have a C1 or C2 cache, though its cache manager is started. This prematurely triggers a rehash of C1 and C2, even though there are no new cache instances to consider.
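Scenario 1 above can be modelled in a few lines (hypothetical names, not Infinispan code): entries are owned by the nodes that currently run the cache, but rebalancing is only wired to node-level view changes, so stopping a cache on one node silently drops that node's copies.

```java
import java.util.*;

// Toy model of scenario 1: rehash-on-cache-stop vs. the current behaviour.
// "N2" as the rebalance target is an arbitrary illustrative choice.
public class Ispn658Sketch {
    static void stopCache(Map<String, Set<String>> owners, String node, boolean rehashOnStop) {
        for (Set<String> nodes : owners.values()) {
            if (rehashOnStop && nodes.contains(node) && nodes.size() == 1)
                nodes.add("N2");   // rehash re-replicates the entry before the stop
            nodes.remove(node);    // the stopped node's copies are gone either way
        }
    }

    public static void main(String[] args) {
        Map<String, Set<String>> owners = new HashMap<>();
        owners.put("k1", new HashSet<>(List.of("N1"))); // only copy lives on N1
        stopCache(owners, "N1", false);                  // current behaviour: no rehash
        System.out.println(owners.get("k1"));            // [] -> data loss
    }
}
```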
[JBoss JIRA] Created: (ISPN-1383) Data caching multiplying memory requirements of Hot Rod server
by Galder Zamarreño (JIRA)
Data caching multiplying memory requirements of Hot Rod server
--------------------------------------------------------------
Key: ISPN-1383
URL: https://issues.jboss.org/browse/ISPN-1383
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 5.0.1.FINAL
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.1.0.ALPHA2, 5.1.0.FINAL
After inserting a 160MB object via Hot Rod, the memory consumption of the server goes through the roof. The screenshot shows a couple of interesting things:
1. Both the HotRodDecoder and the DataContainer hold a byte[] of approximately 160MB each, so there appear to be two copies of the same byte[]. Only the cache container should hold it; any cache decoder data should be cleared when the request completes.
2. Netty's UnsafeDynamicChannelBuffer is still holding on to a byte[] of approximately 268MB. Judging by the second screenshot attached, this appears to belong to the underlying ReplayingDecoder. That should also be cleared, if possible. I'll have a look around the Netty code and maybe ping Trustin.
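The duplication in point 1 can be sketched with a toy decoder (hypothetical class, not the real HotRodDecoder): the decoder keeps its own reference to the decoded byte[] after the entry is stored, so the value stays live twice until that reference is dropped.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the leak: the decoder's "pending" field duplicates the
// reference held by the data container until requestComplete() clears it.
class DecoderModel {
    byte[] pending;                                 // payload being decoded
    byte[] decode(byte[] wire) { pending = wire; return wire; }
    void requestComplete() { pending = null; }      // the proposed fix: drop the copy
}

public class Ispn1383Sketch {
    public static void main(String[] args) {
        DecoderModel decoder = new DecoderModel();
        Map<String, byte[]> container = new HashMap<>();
        container.put("big", decoder.decode(new byte[160])); // stands in for 160MB
        System.out.println(decoder.pending != null);  // true: two live references
        decoder.requestComplete();
        System.out.println(decoder.pending == null);  // true: only the container holds it
    }
}
```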
[JBoss JIRA] (ISPN-1444) Infinispan fails to shutdown gracefully
by Luc Boudreau (Created) (JIRA)
Infinispan fails to shutdown gracefully
---------------------------------------
Key: ISPN-1444
URL: https://issues.jboss.org/browse/ISPN-1444
Project: Infinispan
Issue Type: Bug
Affects Versions: 5.0.1.FINAL, 4.2.1.FINAL
Reporter: Luc Boudreau
Assignee: Manik Surtani
Priority: Blocker
We have embedded Infinispan in our project, but once it is used, we can no longer shut down the JVM gracefully.
There are a few exceptions thrown by late access to the classloader from log4j, but these errors are easy to work around: Tomcat blocks any class loading after an application has been marked as shutting down, so I can pre-load the classes in my application's classloader at runtime and circumvent those issues.
The main problem is that the threads just hang there. They are not marked as daemon threads, so the JVM does not shut them down automatically when required. The culprit threads are:
- OOB-1
- OOB-2
- multicast receiver
- unicast-receiver
- TransferQueueBuilder
Some of these threads might be related to JGroups. Please advise, and I will create a separate ticket in their bug tracker if needed.
I know Infinispan registers a shutdown hook in order to shut down cleanly, but it looks quite unreliable and causes a lot of problems for us.
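The symptom described above is standard JVM behaviour, and a minimal sketch (the thread name is illustrative, not the actual pool code) shows it: a non-daemon thread keeps the JVM alive after main() returns, which is what the listed OOB/receiver threads do, while a daemon thread does not block shutdown.

```java
// Minimal demonstration of the shutdown symptom: without setDaemon(true),
// the JVM would stay alive until the worker finished its sleep.
public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000);           // stands in for a blocked receiver
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }, "OOB-1-like");
        worker.setDaemon(true);                 // daemon threads don't block JVM exit
        worker.start();
        // main() returns here and the JVM exits immediately
    }
}
```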