NPE from invokeRemotely on stopped cache
by Sanne Grinovero
Hello,
I got the following stacktrace using CR2:
[exec] java.lang.NullPointerException
[exec] at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:92)
[exec] at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:128)
[exec] at org.infinispan.remoting.ReplicationQueue.flush(ReplicationQueue.java:147)
[exec] at org.infinispan.remoting.ReplicationQueue$1.run(ReplicationQueue.java:99)
[exec] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
[exec] at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
[exec] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
[exec] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
[exec] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
[exec] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
[exec] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
[exec] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
[exec] at java.lang.Thread.run(Thread.java:619)
From there I went to inspect
org.infinispan.remoting.transport.jgroups.JGroupsTransport; in this
class the "members" field is initialized to Collections.emptyList(),
and the only way for it to become null appears to be stopping the cache.
Also, the "members != null" check is performed in many places, but not
in RpcManagerImpl.
Could the stop() method be changed to reset this field to emptyList()
as well, so that all those "!= null" checks could go away? Or, if you
see good reasons to fail there, what would be a more meaningful
exception to throw than an NPE?
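To illustrate the suggestion, a rough sketch (not the real class, just the idea; I haven't checked what else stop() has to do):

import java.util.Collections;
import java.util.List;
import org.jgroups.Address;

// Sketch only: keep "members" non-null across the whole lifecycle.
public class JGroupsTransportSketch {
   volatile List<Address> members = Collections.emptyList();

   public void stop() {
      // ... disconnect/close the channel as today ...
      // Reset to an empty list instead of null, so callers such as
      // RpcManagerImpl.invokeRemotely() never need a "members != null" check.
      members = Collections.emptyList();
   }
}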
I'm also wondering if this is supposed to happen at all: isn't this a
race condition during the stop process of the cache? I'd rather stop it
only after all pending replications have been sent; that seems cleaner.
Regards,
Sanne
14 years, 8 months
Infinispan's dependency injection fwk
by Mircea Markus
Hi,
I have a problem with the dependency injection fwk in Infinispan: many times I'd like to declare the injected dependencies as final fields in order to take advantage[1] of "final" semantics in Java's memory model. I cannot / don't know how to do that. Can I annotate a constructor with @Inject? If not, I think that would be useful.
[1] @Inject methods are called by the thread that starts the CM. The injected dependencies are cached as local fields and will be accessed by a different application thread, so they need volatile/synchronized for proper publishing; afaik volatile is less performant than publishing through final fields.
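To make it concrete, a sketch of what I mean (component and field names are just examples; the second variant assumes @Inject would be allowed on constructors, which is exactly the open question):

import org.infinispan.factories.annotations.Inject;
import org.infinispan.remoting.rpc.RpcManager;

// Today: method injection forces a non-final field, which then needs
// volatile (or synchronization) to be published safely to other threads.
class SomeComponentToday {
   private volatile RpcManager rpcManager;

   @Inject
   public void inject(RpcManager rpcManager) {
      this.rpcManager = rpcManager;
   }
}

// What I'd like: constructor injection, so the field can be final and the
// JMM guarantees safe publication without volatile.
class SomeComponentProposed {
   private final RpcManager rpcManager;

   @Inject
   public SomeComponentProposed(RpcManager rpcManager) {
      this.rpcManager = rpcManager;
   }
}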
wdyt?
Cheers,
Mircea
14 years, 8 months
Eviction thread and purging expired entries
by Mircea Markus
Hi,
Eviction thread does two things right now:
- evict stuff from DataContainer
- purge entries from a CacheStore
CacheStore.purge might slow down eviction, as it is generally an expensive operation, and it might not even be needed if users don't use expiration.
What about:
a) making EvictionThread.purgeCacheStore configurable
and/or
b) using another thread for purging the store (rough sketch below).
This didn't come out of the blue: there's a user who has eviction + cache store configured with the eviction thread wakeup set to 1 sec, and he still gets OOMs.
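For option b), roughly something along these lines (just a sketch; the class and wiring are made up, though CacheStore.purgeExpired() is the existing store callback):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.infinispan.loaders.CacheLoaderException;
import org.infinispan.loaders.CacheStore;

// Sketch: purge expired entries from the store on a dedicated thread, so a
// slow purge never delays in-memory eviction.
public class CacheStorePurger {
   private final ScheduledExecutorService purgeExecutor =
         Executors.newSingleThreadScheduledExecutor();

   public void start(final CacheStore store, long intervalMillis) {
      purgeExecutor.scheduleWithFixedDelay(new Runnable() {
         public void run() {
            try {
               store.purgeExpired();
            } catch (CacheLoaderException e) {
               // log and carry on; the next run will retry
            }
         }
      }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
   }

   public void stop() {
      purgeExecutor.shutdownNow();
   }
}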
Wdyt?
Cheers,
Mircea
14 years, 8 months
[ISPN-548] Discussion on updating the QueryInterceptor to be able to update old keys
by Navin Surtani
Just getting a discussion going on this since a JIRA has now been created.
Basically, the way I see it is that we can check which keys in the cache
have already been used - but that's context specific (I'll probably need
a quick explanation of how the InvocationContexts work). The question
for the gurus of ISPN here is whether or not this is going to be an issue.
The plan of attack is as follows:
1 - User does a put()
2 - Interceptor checks the set of keys used within the same context to
see if the same key has been used
3 - If it has, an UPDATE is used.
4 - If not, we do an ADD.
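A rough sketch of steps 2-4 (names are illustrative; where the "keys seen in this context" set actually lives is the InvocationContext question above):

import java.util.HashSet;
import java.util.Set;

// Sketch only: decide between ADD and UPDATE based on the keys already
// indexed within the current invocation context.
public class IndexUpdateDecisionSketch {
   private final Set<Object> keysSeenInContext = new HashSet<Object>();

   public void onPut(Object key, Object value) {
      if (keysSeenInContext.contains(key)) {
         updateIndexes(value, key);   // step 3: key already used -> UPDATE
      } else {
         keysSeenInContext.add(key);
         addToIndexes(value, key);    // step 4: first time we see it -> ADD
      }
   }

   private void addToIndexes(Object value, Object key)    { /* WorkType.ADD */ }
   private void updateIndexes(Object value, Object key)   { /* WorkType.UPDATE */ }
}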
Reckon that sounds okay?
--
Navin Surtani
Intern Infinispan
14 years, 8 months
Cutting Radegast CR2
by Manik Surtani
Guys
What are your thoughts on freezing branch 4.1.x for a CR2 release on Monday evening, and cutting the release Tuesday AM? Apart from the blocker Mircea is working on (and should hopefully finish soon), can everyone else look at either closing their open issues or postponing them?
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
14 years, 8 months
Null values
by Vladimir Blagojevic
Hey,
According to the Cache javadoc we do not support null keys or values, yet we only check for non-null keys. What is the final verdict here?
This is related to a JIRA case I am working on https://jira.jboss.org/browse/ISPN-514
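If the verdict is "no nulls at all", the fix is essentially symmetric to today's key check (sketch only; the helper name is made up):

// Sketch: reject null values the same way null keys are rejected today.
private void assertKeyAndValueNotNull(Object key, Object value) {
   if (key == null)
      throw new NullPointerException("Null keys are not supported!");
   if (value == null)
      throw new NullPointerException("Null values are not supported!");
}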
Cheers
--
Vladimir Blagojevic
JBoss Clustering Team
JBoss, by Red Hat
14 years, 8 months
Bug in queryInterceptor?
by Israel Lacerra
Hi guys,
Looks like in QueryInterceptor.addToIndexes the correct workType to perform
on the searchFactory is WorkType.UPDATE.
If the workType is ADD, then when we put a new object on an old key (using
cache.put), the old object remains in the index.
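i.e. roughly this change in addToIndexes (sketch from memory, please double-check the surrounding field and parameter names; the point is just UPDATE vs ADD):

import org.hibernate.search.backend.Work;
import org.hibernate.search.backend.WorkType;

// Sketch of the fragment inside QueryInterceptor: UPDATE deletes any document
// previously indexed under the same id before adding the new one, so put()
// over an existing key can't leave the old object behind in the index.
protected void addToIndexes(Object value, String keyAsString) {
   Work work = new Work(value, keyAsString, WorkType.UPDATE);   // was WorkType.ADD
   searchFactory.getWorker().performWork(work, transactionContext);
}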
Am I wrong?
Israel
14 years, 8 months
5 node cluster - exceptions?
by kapil nayar
I have a cluster (5 nodes) configured for distribution mode with L1 and sync
operation (Infinispan 4.1.0.BETA2).
The cluster nodes are running on Windows Server 2003 VMs (3 nodes on VM1 and
2 nodes on VM2) with JGroups configured for TCP.
A single cache instance was created on all nodes and the 5 nodes seemed to
connect successfully.
I left the cluster running overnight without any application/cache activity
and noticed the following messages and exceptions the next morning.
The JMX RpcManager statistics still show the cluster size as 5.
I need to understand whether these exceptions would have messed up the cache/
cache manager or whether the problem is only transient.
Any comments/observations are appreciated.
Thanks,
Kapil
2010-07-20 04:28:44,180 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-29440|5] [VM1-29440, VM1-57619, VM1-57675]
2010-07-20 04:28:44,195 WARN [NAKACK] VM1-57619: dropped message from VM2-33071 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-29440|5] [VM1-29440, VM1-57619, VM1-57675]
2010-07-20 04:28:44,242 WARN [NAKACK] VM1-57619: dropped message from VM2-33071 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-29440|5] [VM1-29440, VM1-57619, VM1-57675]
2010-07-20 04:28:44,258 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-29440|5] [VM1-29440, VM1-57619, VM1-57675]
2010-07-20 04:28:44,273 WARN [NAKACK] VM1-57619: dropped message from VM2-33071 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-29440|5] [VM1-29440, VM1-57619, VM1-57675]
2010-07-20 04:28:44,570 WARN [FD_SOCK] I (VM1-57619) was suspected by VM2-62323; ignoring the SUSPECT message
2010-07-20 04:48:45,124 ERROR [JoinTask] Caught exception!
org.infinispan.CacheException: Unable to retrieve old consistent hash from coordinator even after several attempts at sleeping and retrying!
        at org.infinispan.distribution.JoinTask.retrieveOldCH(JoinTask.java:191)
        at org.infinispan.distribution.JoinTask.performRehash(JoinTask.java:83)
        at org.infinispan.distribution.RehashTask.call(RehashTask.java:52)
        at org.infinispan.distribution.RehashTask.call(RehashTask.java:32)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
2010-07-20 05:51:27,370 WARN [FD] I was suspected by VM2-62323; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
2010-07-20 05:51:28,588 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:29,557 WARN [FD_SOCK] I (VM1-57619) was suspected by VM2-62323; ignoring the SUSPECT message
2010-07-20 05:51:30,370 WARN [FD] I was suspected by VM2-62323; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
2010-07-20 05:51:30,370 WARN [TCP] VM1-57619: no physical address for VM2-33071, dropping message
2010-07-20 05:51:30,604 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:30,651 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:30,698 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:30,713 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:33,354 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:39,026 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:39,042 WARN [NAKACK] VM1-57619: dropped message from VM2-62323 (not in xmit_table), keys are [VM1-57619, VM1-29440, VM1-57675], view=[VM1-57619|8] [VM1-57619, VM1-29440, VM1-57675]
2010-07-20 05:51:40,042 WARN [FD_SOCK] I (VM1-57619) was suspected by VM2-62323; ignoring the SUSPECT message
2010-07-20 06:11:40,362 ERROR [JoinTask] Caught exception!
org.infinispan.CacheException: Unable to retrieve old consistent hash from coordinator even after several attempts at sleeping and retrying!
        at org.infinispan.distribution.JoinTask.retrieveOldCH(JoinTask.java:191)
        at org.infinispan.distribution.JoinTask.performRehash(JoinTask.java:83)
        at org.infinispan.distribution.RehashTask.call(RehashTask.java:52)
        at org.infinispan.distribution.RehashTask.call(RehashTask.java:32)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
14 years, 8 months