[JBoss JIRA] (ISPN-2025) NPE in Externalizer on shutdown
by Michal Linhard (JIRA)
Michal Linhard created ISPN-2025:
------------------------------------
Summary: NPE in Externalizer on shutdown
Key: ISPN-2025
URL: https://issues.jboss.org/browse/ISPN-2025
Project: Infinispan
Issue Type: Bug
Components: Marshalling
Affects Versions: 5.1.4.FINAL
Reporter: Michal Linhard
Assignee: Galder Zamarreño
This is what I get when shutting down one of the clustered nodes (default config standalone-ha.xml) of JDG 6.0.0.ER7:
{code}
09:50:25,505 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: java.lang.NullPointerException
at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:222)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37) [jboss-marshalling-1.3.13.GA-redhat-1.jar:1.3.13.GA-redhat-1]
at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:154)
at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:114)
at org.infinispan.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:85)
at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:50)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:200)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.RSVP.up(RSVP.java:179) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:793) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.stack.Protocol.up(Protocol.java:363) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1180) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30]
at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA-redhat-1.jar:2.0.0.GA-redhat-1]
{code}
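Judging from the trace, ExternalizerTable.readObject hits a null internal lookup once the node has started shutting down. Below is a minimal sketch of a defensive guard, assuming a reader table and a started flag (both names are hypothetical, not the actual Infinispan fields); the point is to replace the raw NPE with a descriptive exception:
{code}
// Hypothetical sketch of a defensive guard inside ExternalizerTable.readObject().
// The 'readers' map and 'started' flag are assumed names for illustration only.
public Object readObject(Unmarshaller input) throws IOException, ClassNotFoundException {
   int readerIndex = input.readUnsignedByte();
   ExternalizerAdapter adapter = readers.get(readerIndex);
   if (adapter == null) {
      if (!started) {
         // The externalizer table has already been stopped (node shutting down):
         // fail with a clear message instead of a raw NullPointerException.
         throw new CacheException("Cache manager is shutting down, " +
               "no externalizer available for index " + readerIndex);
      }
      throw new CacheException("Unknown externalizer index: " + readerIndex);
   }
   return adapter.readObject(input);
}
{code}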
I've run the instances out of the box with these commands, binding them to virtual IPs on my laptop:
{code}
server1/bin/standalone.sh -b 192.168.11.101 -c standalone-ha.xml -Djboss.bind.address.management=192.168.11.101 -Djboss.node.name=node1
server2/bin/standalone.sh -b 192.168.11.102 -c standalone-ha.xml -Djboss.bind.address.management=192.168.11.102 -Djboss.node.name=node2
{code}
[JBoss JIRA] (ISPN-1855) Accessing a non-distributed cache from a RemoteCacheManager can break topology updates
by Dan Berindei (JIRA)
Dan Berindei created ISPN-1855:
----------------------------------
Summary: Accessing a non-distributed cache from a RemoteCacheManager can break topology updates
Key: ISPN-1855
URL: https://issues.jboss.org/browse/ISPN-1855
Project: Infinispan
Issue Type: Bug
Affects Versions: 5.1.1.FINAL
Reporter: Dan Berindei
Assignee: Manik Surtani
Fix For: 5.2.0.FINAL
RemoteCacheManager uses a single consistent hash to map requests to different servers, but caches on the server may have different CHs (or even no CH if the cache is not in distributed mode).
If the first request goes to a non-distributed cache, the client will never request an updated CH, so it will use a round-robin strategy for routing requests to all the servers. Obviously this is not optimal for distributed caches.
Each distributed cache can also have different members since 5.1, so it would be best if we kept a separate CH per cache on the client.
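A minimal sketch of the suggested client-side change, keeping one consistent hash per cache name and falling back to round-robin until a topology update arrives (all class and method names here are hypothetical, not the actual Hot Rod client code):
{code}
import java.net.InetSocketAddress;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: one consistent hash per cache name instead of a single
// CH shared by the whole RemoteCacheManager.
public class PerCacheRouter {
   private final ConcurrentMap<String, ConsistentHash> chPerCache =
         new ConcurrentHashMap<String, ConsistentHash>();
   private final List<InetSocketAddress> servers; // static server list for the fallback
   private final AtomicInteger next = new AtomicInteger();

   public PerCacheRouter(List<InetSocketAddress> servers) {
      this.servers = servers;
   }

   // Called whenever a topology update for a given cache arrives from a server.
   public void updateTopology(String cacheName, ConsistentHash ch) {
      chPerCache.put(cacheName, ch);
   }

   // Pick the server for a key: the cache's own CH if known, round-robin otherwise.
   public InetSocketAddress serverFor(String cacheName, byte[] key) {
      ConsistentHash ch = chPerCache.get(cacheName);
      if (ch == null)
         return servers.get(Math.abs(next.getAndIncrement() % servers.size()));
      return ch.locateOwner(key);
   }

   // Hypothetical per-cache hash abstraction.
   public interface ConsistentHash {
      InetSocketAddress locateOwner(byte[] key);
   }
}
{code}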
[JBoss JIRA] (ISPN-1583) AbstractDelegatingAdvancedCache with(ClassLoader), withFlags(Flag...) logic is broken
by Paul Ferraro (Created) (JIRA)
AbstractDelegatingAdvancedCache with(ClassLoader), withFlags(Flag...) logic is broken
-------------------------------------------------------------------------------------
Key: ISPN-1583
URL: https://issues.jboss.org/browse/ISPN-1583
Project: Infinispan
Issue Type: Bug
Components: Core API
Affects Versions: 5.1.0.BETA5
Reporter: Paul Ferraro
Assignee: Manik Surtani
Priority: Critical
When the withFlags(...) logic was modified to use a DecoratedCache instead of thread-local storage, any cache already decorated with an AbstractDelegatingAdvancedCache broke.
Take the following code:
{code}
AdvancedCache<K, V> baseCache;
AdvancedCache<K, V> customCache = new AbstractDelegatingAdvancedCache<K, V>(baseCache) {
   @Override
   public void clear() {
      // custom clear logic
   }
};
customCache.withFlags(Flag.CACHE_MODE_LOCAL).clear();
{code}
In the above statement, the flag is not applied: the call to withFlags(...) returns a reference to customCache, and the DecoratedCache containing the flags is lost to garbage collection.
In the case of with(ClassLoader) we have the opposite problem:
{code}
customCache.with(customClassLoader).clear();
{code}
In the above statement, the native clear() method is invoked instead of my custom clear() method: with(ClassLoader) returns a reference to the DecoratedCache, so the subsequent clear() operates on baseCache directly, bypassing the custom wrapper.
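One possible shape of a fix, sketched below under the assumption that the delegating cache re-wraps whatever DecoratedCache the delegate returns, so both the flags and the overridden methods stay in the call chain (the rewrap() hook is hypothetical):
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.context.Flag;

// Hypothetical sketch: keep the custom wrapper outermost by re-wrapping the
// DecoratedCache returned by the delegate's with()/withFlags() methods.
public abstract class RewrappingAdvancedCache<K, V> extends AbstractDelegatingAdvancedCache<K, V> {

   private final AdvancedCache<K, V> delegate;

   protected RewrappingAdvancedCache(AdvancedCache<K, V> delegate) {
      super(delegate);
      this.delegate = delegate;
   }

   // Subclasses wrap the given cache in a new instance of themselves so that an
   // overridden method (e.g. a custom clear()) still intercepts every call.
   protected abstract AdvancedCache<K, V> rewrap(AdvancedCache<K, V> decorated);

   @Override
   public AdvancedCache<K, V> withFlags(Flag... flags) {
      // Let the delegate build the flag-carrying DecoratedCache, then re-wrap it
      // instead of returning 'this' and losing the flags.
      return rewrap(delegate.withFlags(flags));
   }

   @Override
   public AdvancedCache<K, V> with(ClassLoader classLoader) {
      return rewrap(delegate.with(classLoader));
   }
}
{code}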
[JBoss JIRA] (ISPN-1822) Cache entry not evicted from memory on IBM JDK when another entry was loaded from a cache loader and maxEntries had been reached
by Martin Gencur (JIRA)
Martin Gencur created ISPN-1822:
-----------------------------------
Summary: Cache entry not evicted from memory on IBM JDK when another entry was loaded from a cache loader and maxEntries had been reached
Key: ISPN-1822
URL: https://issues.jboss.org/browse/ISPN-1822
Project: Infinispan
Issue Type: Bug
Components: Eviction
Affects Versions: 5.1.0.FINAL
Environment: java version "1.6.0"
Java(TM) SE Runtime Environment (build pxi3260sr9fp1-20110208_03(SR9 FP1))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux x86-32 jvmxi3260sr9-20110203_74623 (JIT enabled, AOT enabled) ;
java version "1.7.0"
Java(TM) SE Runtime Environment (build pxi3270-20110827_01)
IBM J9 VM (build 2.6, JRE 1.7.0 Linux x86-32 20110810_88604 (JIT enabled, AOT enabled)
Reporter: Martin Gencur
Assignee: Manik Surtani
This behavior is specific to the IBM JDK (I tried JDK 6 and 7); it works fine with Java HotSpot.
Steps to reproduce the problem:
1) set maxEntries for eviction to 2 and the algorithm to e.g. LRU
2) store 3 entries key1, key2, key3 in the cache (after that you can see that the cache contains only 2 entries, key2 and key3; the first one was evicted from memory)
3) call cache.get("key1")
4) PROBLEM - the cache now contains all of key1, key2, key3 even though it should contain only 2 entries; this only happens with the IBM JDK (6 or 7, it doesn't matter) - see the sketch below
I'll shortly issue a pull request with a test to ispn-core.
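A rough sketch of the scenario, assuming the 5.1 fluent configuration API (the cache store needed for key1 to be loaded back is omitted for brevity):
{code}
import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

// Sketch of the reproducer; a cache store must also be configured so that
// evicted entries can be loaded back (omitted here).
Configuration cfg = new ConfigurationBuilder()
      .eviction().strategy(EvictionStrategy.LRU).maxEntries(2)
      .build();
EmbeddedCacheManager cm = new DefaultCacheManager(cfg);
Cache<String, String> cache = cm.getCache();

cache.put("key1", "v1");
cache.put("key2", "v2");
cache.put("key3", "v3");   // key1 gets evicted: only key2 and key3 stay in memory

cache.get("key1");         // loads key1 back from the cache store

// Expected: another entry is evicted so at most 2 entries remain in memory.
// Observed on the IBM JDK: all three entries stay in the data container.
assert cache.getAdvancedCache().getDataContainer().size() <= 2;
{code}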
[JBoss JIRA] Created: (ISPN-928) Interceptor that allows invocations only when cluster is formed of N nodes
by Galder Zamarreño (JIRA)
Interceptor that allows invocations only when cluster is formed of N nodes
--------------------------------------------------------------------------
Key: ISPN-928
URL: https://issues.jboss.org/browse/ISPN-928
Project: Infinispan
Issue Type: Feature Request
Components: Configuration, RPC
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.0.0.BETA1, 5.0.0.Final
Following from https://github.com/pmuir/infinispan-examples/commit/f5d090092fa7b3660025b...
It'd be great to have a configurable StrictCluster interceptor in Infinispan which would basically make all invocations wait until a cluster of N nodes has formed. I think it'd be a great addition: it would allow clients to verify that the cluster actually forms, without the need to verify whether data replicates, etc.
In principle, the configuration would live at the CacheManager level, i.e.:
{code}
<transport strictNumMembers="4"... />
{code}
However, it could also be useful to configure it at the cache level. For example: I want cache X to allow invocations the moment I have 2 nodes (in spite of the cluster being formed of 4 nodes), whereas I want cache Y to allow invocations once I have 3 nodes.
Apart from a strict number of nodes, you could have a minimum number of nodes: allow invocations once I have 4 or more nodes. The strict value could still be useful to make sure intrusive machines don't get into the cluster, i.e. I expect 4 nodes in the cluster and if I have 5, something is wrong.
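A rough sketch of what such an interceptor could look like (the class is hypothetical and the expected size is hard-coded here; in practice it would come from the strictNumMembers configuration attribute):
{code}
import org.infinispan.commands.VisitableCommand;
import org.infinispan.context.InvocationContext;
import org.infinispan.factories.annotations.Inject;
import org.infinispan.interceptors.base.CommandInterceptor;
import org.infinispan.remoting.transport.Transport;

// Hypothetical sketch of a StrictCluster interceptor: block every invocation
// until the cluster view contains at least the expected number of members.
public class StrictClusterInterceptor extends CommandInterceptor {

   private Transport transport;
   private final int expectedMembers = 4; // would come from strictNumMembers

   @Inject
   public void init(Transport transport) {
      this.transport = transport;
   }

   @Override
   protected Object handleDefault(InvocationContext ctx, VisitableCommand command) throws Throwable {
      // Wait until enough nodes have joined the cluster before proceeding.
      while (transport.getMembers().size() < expectedMembers) {
         Thread.sleep(100);
      }
      return invokeNextInterceptor(ctx, command);
   }
}
{code}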
I think it's an interesting concept that would get rid of cluster validation code in examples and RadarGun.
[JBoss JIRA] Created: (ISPN-1220) Add classloader hooks to cache listener events
by Paul Ferraro (JIRA)
Add classloader hooks to cache listener events
----------------------------------------------
Key: ISPN-1220
URL: https://issues.jboss.org/browse/ISPN-1220
Project: Infinispan
Issue Type: Enhancement
Components: Listeners
Affects Versions: 5.0.0.CR7
Reporter: Paul Ferraro
Assignee: Manik Surtani
This issue seeks to extend the classloading API changes made in ISPN-1096 to the Event API. Currently, cache listener events do not allow a classloader to be specified for any deserialization triggered by the getKey() and getValue() methods. Can the Event API be enhanced such that calls to getKey() and getValue() use a specific classloader?
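A sketch of what the requested enhancement might look like on the listener event side (this is a purely hypothetical API, not an existing Infinispan interface):
{code}
// Purely hypothetical sketch of the requested Event API extension: let the
// listener supply the classloader used for the lazy deserialization that
// getKey()/getValue() trigger.
public interface ClassLoaderAwareEvent<K, V> {
   // Deserialize the key with the given classloader instead of the cache's default.
   K getKey(ClassLoader classLoader);

   // Deserialize the value with the given classloader.
   V getValue(ClassLoader classLoader);
}
{code}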
[JBoss JIRA] (ISPN-1568) Clustered Query fail when hibernate search not fully initialized
by Mathieu Lachance (Created) (JIRA)
Clustered Query fail when hibernate search not fully initialized
----------------------------------------------------------------
Key: ISPN-1568
URL: https://issues.jboss.org/browse/ISPN-1568
Project: Infinispan
Issue Type: Bug
Components: Querying, RPC
Affects Versions: 5.1.0.BETA5
Reporter: Mathieu Lachance
Assignee: Sanne Grinovero
Hi,
I'm running into this issue when doing a clustered query in distribution mode:
{code}
org.infinispan.CacheException: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:166)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:181)
at org.infinispan.query.clustered.ClusteredQueryInvoker.broadcast(ClusteredQueryInvoker.java:113)
at org.infinispan.query.clustered.ClusteredCacheQueryImpl.broadcastQuery(ClusteredCacheQueryImpl.java:115)
at org.infinispan.query.clustered.ClusteredCacheQueryImpl.iterator(ClusteredCacheQueryImpl.java:90)
at org.infinispan.query.impl.CacheQueryImpl.iterator(CacheQueryImpl.java:129)
at org.infinispan.query.clustered.ClusteredCacheQueryImpl.list(ClusteredCacheQueryImpl.java:133)
at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:313)
at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:274)
at com.XXX.ClientCache.getClientsByServerId(ClientCache.java:127)
at com.XXX.ClientManager.getClientsByServerId(ClientManager.java:157)
at com.XXX$PingClient.run(PlayerBll.java:890)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
Caused by: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:549)
at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:493)
at org.hibernate.search.query.engine.impl.HSQueryImpl.queryDocumentExtractor(HSQueryImpl.java:292)
at org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery.perform(CQCreateEagerQuery.java:44)
at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:135)
at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:129)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:170)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:179)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:208)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:156)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:162)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.StreamingStateTransfer.up(StreamingStateTransfer.java:262)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:700)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:561)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140)
at org.jgroups.protocols.FD.up(FD.java:273)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:354)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}
I'm using the following cache configuration:
{code}
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<global>
<transport clusterName="XXX-cluster" machineId="XXX" siteId="XXX" rackId="XXX" distributedSyncTimeout="15000">
<properties>
<property name="configurationFile" value="jgroups-jdbc-ping.xml" />
</properties>
</transport>
</global>
<default>
<transaction
cacheStopTimeout="30000"
transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
lockingMode="PESSIMISTIC"
useSynchronization="true"
transactionMode="TRANSACTIONAL"
syncCommitPhase="true"
syncRollbackPhase="false"
>
<recovery enabled="false" />
</transaction>
<clustering mode="local" />
<indexing enabled="true" indexLocalOnly="true">
<properties>
<property name="hibernate.search.default.directory_provider" value="ram" />
</properties>
</indexing>
</default>
<namedCache name="XXX-Client">
<transaction
cacheStopTimeout="30000"
transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
lockingMode="PESSIMISTIC"
useSynchronization="true"
transactionMode="TRANSACTIONAL"
syncCommitPhase="true"
syncRollbackPhase="false"
>
<recovery enabled="false" />
</transaction>
<invocationBatching enabled="false" />
<loaders passivation="false" />
<clustering mode="distribution" >
<sync replTimeout="15000" />
<stateRetrieval
timeout="240000"
retryWaitTimeIncreaseFactor="2"
numRetries="5"
maxNonProgressingLogWrites="100"
fetchInMemoryState="false"
logFlushTimeout="60000"
alwaysProvideInMemoryState="false"
/>
</clustering>
<storeAsBinary enabled="false" storeValuesAsBinary="true" storeKeysAsBinary="true" />
<deadlockDetection enabled="true" spinDuration="100" />
<eviction strategy="NONE" threadPolicy="PIGGYBACK" maxEntries="-1" />
<jmxStatistics enabled="true" />
<locking writeSkewCheck="false" lockAcquisitionTimeout="10000" isolationLevel="READ_COMMITTED" useLockStriping="false" concurrencyLevel="32" />
<expiration wakeUpInterval="60000" lifespan="-1" maxIdle="3000000" />
</namedCache>
</infinispan>
{code}
and the following JGroups configuration:
{code}
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.0.xsd">
<TCP
bind_port="7800"
loopback="true"
port_range="30"
recv_buf_size="20000000"
send_buf_size="640000"
discard_incompatible_packets="true"
max_bundle_size="64000"
max_bundle_timeout="30"
enable_bundling="true"
use_send_queues="true"
sock_conn_timeout="300"
enable_diagnostics="false"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="false"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
/>
<JDBC_PING
connection_url="jdbc:jtds:sqlserver://XXX;databaseName=XXX"
connection_username="XXX"
connection_password="XXX"
connection_driver="net.sourceforge.jtds.jdbcx.JtdsDataSource"
initialize_sql=""
/>
<MERGE2 max_interval="30000"
min_interval="10000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="3"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK
use_mcast_xmit="false"
retransmit_timeout="300,600,1200,2400,4800"
discard_delivered_msgs="false"/>
<UNICAST timeout="300,600,1200"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="400000"/>
<pbcast.STATE />
<pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>
<UFC max_credits="2000000" min_threshold="0.10"/>
<MFC max_credits="2000000" min_threshold="0.10"/>
<FRAG2 frag_size="60000"/>
</config>
{code}
Though my entity is properly annotated.
Here are the steps to reproduce:
1. Boot node A completely.
2. Boot node B, make all caches start (DefaultCacheManager::startCaches(...)), then set a breakpoint just after.
3. On node A, run a clustered query.
4. Node A fails because node B has not been fully initialized.
Here's how I do my query:
{code}
private CacheQuery getClusteredNonClusteredQuery(Query query)
{
   CacheQuery cacheQuery;
   if (useClusteredQuery)
   {
      cacheQuery = searchManager.getClusteredQuery(query, cacheValueClass);
   }
   else
   {
      cacheQuery = searchManager.getQuery(query, cacheValueClass);
   }
   return cacheQuery;
}
{code}
I've also tried without supplying any "cacheValueClass", without success.
One ugly "workaround" I've found is to force, as early as possible in the application, the local insertion and removal of a dummy key and value, so that the search manager gets initialized:
{code}
cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).put("XXX", new Client("XXX"));
cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).remove("XXX");
{code}
Though this technique still doesn't guarantee that the initialization happens before any clustered query arrives.
I think this might also be related to ISPN-627 (Provision to get Cache from CacheManager).
Any idea or workaround? Do you think just adding a try/catch and returning an empty list could "fix" the problem?
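For reference, a sketch of the try/catch fallback mentioned above; note it only masks the error by returning no results, it doesn't fix the underlying initialization race:
{code}
import java.util.Collections;
import java.util.List;
import org.infinispan.CacheException;
import org.infinispan.query.CacheQuery;

// Sketch of the try/catch fallback discussed above: it masks the error
// (returning an empty result) rather than fixing the initialization race.
private List<Object> safeCacheQueryList(CacheQuery cacheQuery) {
   try {
      return cacheQuery.list();
   } catch (CacheException e) {
      // The remote node is not fully initialized yet ("Not a mapped entity"):
      // return an empty result instead of propagating the exception.
      return Collections.emptyList();
   }
}
{code}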
Thanks a lot,
[JBoss JIRA] (ISPN-1990) Preload sets the versions to null (repeatable read + write skew)
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-1990:
---------------------------------
Summary: Preload sets the versions to null (repeatable read + write skew)
Key: ISPN-1990
URL: https://issues.jboss.org/browse/ISPN-1990
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Environment: Java 6 (64bits)
Infinispan 5.2.0-SNAPSHOT
MacOS
Reporter: Pedro Ruivo
Assignee: Manik Surtani
I think I've spotted an issue when I use repeatable read with write skew check and preload the cache.
I've made a test case to reproduce the bug. It can be found here [1].
The problem is that each preloaded key is put in the container with version = null. When I try to commit a transaction, I get this exception:
{code}
java.lang.IllegalStateException: Entries cannot have null versions!
at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:44)
at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:81)
at org.infinispan.interceptors.locking.ClusteringDependentLogic$AllNodesLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:133)
at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:64)
{code}
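A sketch of where a fix might go, assuming the preload path can seed each loaded entry with a freshly generated version instead of null, so the commit-time write skew check has something to compare against (method and parameter names are assumptions, not the actual Infinispan preload code):
{code}
import org.infinispan.container.DataContainer;
import org.infinispan.container.entries.InternalCacheEntry;
import org.infinispan.container.versioning.VersionGenerator;
import org.infinispan.loaders.CacheLoaderException;
import org.infinispan.loaders.CacheStore;

// Hypothetical sketch of a fix on the preload path: store each loaded entry
// with an initial version generated by the VersionGenerator instead of null.
void preload(CacheStore store, DataContainer container, VersionGenerator versionGenerator)
      throws CacheLoaderException {
   for (InternalCacheEntry entry : store.loadAll()) {
      container.put(entry.getKey(), entry.getValue(),
            versionGenerator.generateNew(),            // was: null
            entry.getLifespan(), entry.getMaxIdle());
   }
}
{code}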
I think that all info is in the test case, but if you need something let me know.
Cheers,
Pedro
[1] https://github.com/pruivo/infinispan/blob/issue_1/core/src/test/java/org/...