[JBoss JIRA] (ISPN-5947) Infinispan directory provider is a lot slower when lucene caches are distributed compared to replicated
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-5947?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero closed ISPN-5947.
---------------------------------
Assignee: (was: Gustavo Fernandes)
> Infinispan directory provider is a lot slower when lucene caches are distributed compared to replicated
> -------------------------------------------------------------------------------------------------------
>
> Key: ISPN-5947
> URL: https://issues.jboss.org/browse/ISPN-5947
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying
> Reporter: Jakub Markos
>
> I noticed that the performance difference between using the Infinispan Directory Provider with the Lucene data cache in distributed mode and in replicated mode is quite big. In numbers, on my computer, running a 4-node cluster with a distributed cache with indexing enabled:
> {code}
> <distributed-cache name="dist_lucene" owners="2" statistics="true">
> <indexing index="LOCAL">
> <property name="default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
> <property name="default.exclusive_index_use">true</property>
> <property name="default.metadata_cachename">lucene_metadata</property>
> <property name="default.data_cachename">lucene_data</property>
> <property name="default.locking_cachename">lucene_locking</property>
> </indexing>
> </distributed-cache>
> <replicated-cache name="lucene_metadata" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> <replicated-cache name="lucene_data" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> <replicated-cache name="lucene_locking" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> {code}
> Using 10 threads on each node, loading 100 000 entries takes ~2.5 minutes, and using 100 threads takes ~1 minute. Changing the configuration to use a distributed cache for the index data:
> {code}
> <distributed-cache name="lucene_data" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </distributed-cache>
> {code}
> leads to loading times of 3+ hours (10 threads; I stopped it at around 80,000 entries) and 22 minutes (100 threads), which is around a 20x slowdown.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-5947) Infinispan directory provider is a lot slower when lucene caches are distributed compared to replicated
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-5947?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero resolved ISPN-5947.
-----------------------------------
Resolution: Rejected
Not a bug: this is a direct consequence of the current design. We recommend using replicated caches for index storage for this reason.
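For anyone configuring this programmatically rather than via XML, the recommended shape (replicated caches backing the index, distributed cache for the data) might look roughly like the sketch below. This is a minimal illustration assuming the Infinispan 8/9 ConfigurationBuilder API; the cache names and index properties simply mirror the XML in the report, and the class name is invented for the example.
{code:java}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.Index;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class IndexCachesSketch {
   public static void main(String[] args) {
      DefaultCacheManager cm = new DefaultCacheManager(
            GlobalConfigurationBuilder.defaultClusteredBuilder().build());

      // Replicated caches backing the Lucene index, as recommended above
      ConfigurationBuilder index = new ConfigurationBuilder();
      index.clustering().cacheMode(CacheMode.REPL_SYNC).remoteTimeout(25000);
      cm.defineConfiguration("lucene_metadata", index.build());
      cm.defineConfiguration("lucene_data", index.build());
      cm.defineConfiguration("lucene_locking", index.build());

      // The user data itself stays distributed; only the index storage is replicated
      ConfigurationBuilder data = new ConfigurationBuilder();
      data.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(2);
      data.indexing().index(Index.LOCAL)
          .addProperty("default.indexmanager",
                "org.infinispan.query.indexmanager.InfinispanIndexManager")
          .addProperty("default.metadata_cachename", "lucene_metadata")
          .addProperty("default.data_cachename", "lucene_data")
          .addProperty("default.locking_cachename", "lucene_locking");
      cm.defineConfiguration("dist_lucene", data.build());

      cm.getCache("dist_lucene"); // starts the indexed cache and its index caches
   }
}
{code}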
> Infinispan directory provider is a lot slower when lucene caches are distributed compared to replicated
> -------------------------------------------------------------------------------------------------------
>
> Key: ISPN-5947
> URL: https://issues.jboss.org/browse/ISPN-5947
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying
> Reporter: Jakub Markos
> Assignee: Gustavo Fernandes
>
> I noticed that the performance difference between using the Infinispan Directory Provider with the Lucene data cache in distributed mode and in replicated mode is quite big. In numbers, on my computer, running a 4-node cluster with a distributed cache with indexing enabled:
> {code}
> <distributed-cache name="dist_lucene" owners="2" statistics="true">
> <indexing index="LOCAL">
> <property name="default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
> <property name="default.exclusive_index_use">true</property>
> <property name="default.metadata_cachename">lucene_metadata</property>
> <property name="default.data_cachename">lucene_data</property>
> <property name="default.locking_cachename">lucene_locking</property>
> </indexing>
> </distributed-cache>
> <replicated-cache name="lucene_metadata" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> <replicated-cache name="lucene_data" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> <replicated-cache name="lucene_locking" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </replicated-cache>
> {code}
> Using 10 threads on each node, loading 100 000 entries takes ~2.5 minutes, and using 100 threads takes ~1 minute. Changing the configuration to use a distributed cache for the index data:
> {code}
> <distributed-cache name="lucene_data" mode="SYNC" remote-timeout="25000">
> <indexing index="NONE"/>
> </distributed-cache>
> {code}
> leads to loading times of 3+ hours (10 threads; I stopped it at around 80,000 entries) and 22 minutes (100 threads), which is around a 20x slowdown.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-5083) Hot Rod decoder should use async Cache operations
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5083?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes resolved ISPN-5083.
-------------------------------------
Resolution: Out of Date
With the recent large refactorings in the server, most operations now run outside the server I/O loop.
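For context, the approach described in the original request looks roughly like the following (a hypothetical sketch only, not the actual server code; the response-building helpers are invented for illustration):
{code:java}
import io.netty.channel.ChannelHandlerContext;
import org.infinispan.Cache;

// Hypothetical sketch of the pattern: dispatch the cache write asynchronously and
// send the reply when the future completes, instead of blocking the Netty
// event-loop thread on a synchronous Cache.put().
public class AsyncDecoderSketch {

   void handlePut(ChannelHandlerContext ctx, byte[] key, byte[] value,
                  Cache<byte[], byte[]> cache) {
      cache.putAsync(key, value) // CompletableFuture in recent Infinispan versions
           .whenComplete((previousValue, error) -> {
              Object response = error != null
                    ? errorResponse(error)            // invented helper
                    : successResponse(previousValue); // invented helper
              ctx.writeAndFlush(response);
           });
   }

   private Object errorResponse(Throwable error) {
      return "ERROR: " + error.getMessage();
   }

   private Object successResponse(byte[] previousValue) {
      return "OK";
   }
}
{code}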
> Hot Rod decoder should use async Cache operations
> -------------------------------------------------
>
> Key: ISPN-5083
> URL: https://issues.jboss.org/browse/ISPN-5083
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 9.2.0.Final
>
>
> The Hot Rod decoder is currently tying up Netty threads by calling synchronous Infinispan operations. Instead, it should call the async operations, convert the NotifyingFutures to Scala Futures, and write the reply when it is received. This should increase performance, especially under heavy load.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-6456) OptimisticTxPartitionAndMergeDuringPrepareTest.testOriginatorIsolatedPartition fails randomly
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-6456?page=com.atlassian.jira.plugin.... ]
Ryan Emerson reassigned ISPN-6456:
----------------------------------
Assignee: Ryan Emerson (was: Galder Zamarreño)
> OptimisticTxPartitionAndMergeDuringPrepareTest.testOriginatorIsolatedPartition fails randomly
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-6456
> URL: https://issues.jboss.org/browse/ISPN-6456
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 8.1.2.Final
> Reporter: Ivan Straka
> Assignee: Ryan Emerson
>
> The test org.infinispan.partitionhandling.OptimisticTxPartitionAndMergeDuringPrepareTest.testOriginatorIsolatedPartition in the Infinispan test suite fails randomly:
> {code:java}
> java.lang.RuntimeException: Timed out waiting for rebalancing to complete on node OptimisticTxPartitionAndMergeDuringPrepareTest-NodeI-7304, expected member list is [OptimisticTxPartitionAndMergeDuringPrepareTest-NodeI-7304, OptimisticTxPartitionAndMergeDuringPrepareTest-NodeJ-31794, OptimisticTxPartitionAndMergeDuringPrepareTest-NodeK-33671, OptimisticTxPartitionAndMergeDuringPrepareTest-NodeL-56275], current member list is [OptimisticTxPartitionAndMergeDuringPrepareTest-NodeJ-31794, OptimisticTxPartitionAndMergeDuringPrepareTest-NodeK-33671, OptimisticTxPartitionAndMergeDuringPrepareTest-NodeL-56275]!
> at org.infinispan.test.TestingUtil.waitForRehashToComplete(TestingUtil.java:239)
> at org.infinispan.test.TestingUtil.waitForRehashToComplete(TestingUtil.java:249)
> at org.infinispan.partitionhandling.BaseTxPartitionAndMergeTest.mergeCluster(BaseTxPartitionAndMergeTest.java:87)
> at org.infinispan.partitionhandling.BaseOptimisticTxPartitionAndMergeTest.doTest(BaseOptimisticTxPartitionAndMergeTest.java:75)
> at org.infinispan.partitionhandling.OptimisticTxPartitionAndMergeDuringPrepareTest.testOriginatorIsolatedPartition(OptimisticTxPartitionAndMergeDuringPrepareTest.java:33)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8139) OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash random failures
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8139?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-8139:
------------------------------------
Fix Version/s: 9.1.1.Final
> OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash random failures
> ----------------------------------------------------------------------------------
>
> Key: ISPN-8139
> URL: https://issues.jboss.org/browse/ISPN-8139
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Tristan Tarrant
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.1.Final
>
>
> Stacktrace
> java.util.concurrent.TimeoutException
> at java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at org.infinispan.distribution.rehash.OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash(OptimisticPrimaryOwnerCrashDuringPrepareTest.java:58)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 16 stack frames
>
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8139) OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash random failures
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8139?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-8139:
------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash random failures
> ----------------------------------------------------------------------------------
>
> Key: ISPN-8139
> URL: https://issues.jboss.org/browse/ISPN-8139
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Tristan Tarrant
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.1.Final
>
>
> Stacktrace
> java.util.concurrent.TimeoutException
> at java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at org.infinispan.distribution.rehash.OptimisticPrimaryOwnerCrashDuringPrepareTest.testPrimaryOwnerCrash(OptimisticPrimaryOwnerCrashDuringPrepareTest.java:58)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 16 stack frames
>
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-1568) Clustered Query fail when hibernate search not fully initialized
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-1568?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero resolved ISPN-1568.
-----------------------------------
Assignee: (was: Gustavo Fernandes)
Resolution: Out of Date
I suspect this was resolved a long time ago; in addition, automatic detection of entities is now deprecated, so we can close such issues.
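For reference, with explicit entity declaration the indexing setup might be sketched along these lines (assuming the Infinispan 9.x programmatic API; the Client class below is just a stand-in for the reporter's com.XXX.Client):
{code:java}
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.Index;

public class ExplicitIndexedEntitiesSketch {

   // Stand-in for the reporter's com.XXX.Client entity
   @Indexed
   public static class Client {
      @Field
      String name;
   }

   // Declaring the indexed entity up front (instead of relying on auto-detection at
   // first use) means the index manager is initialized when the cache starts, which
   // should avoid the "Not a mapped entity" race described in this issue.
   public static Configuration indexedCacheConfig() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.indexing()
             .index(Index.LOCAL)
             .addIndexedEntity(Client.class);
      return builder.build();
   }
}
{code}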
> Clustered Query fail when hibernate search not fully initialized
> ----------------------------------------------------------------
>
> Key: ISPN-1568
> URL: https://issues.jboss.org/browse/ISPN-1568
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Embedded Querying
> Affects Versions: 5.1.0.BETA5
> Reporter: Mathieu Lachance
>
> Hi,
> I'm running into this issue when doing a clustered query in distribution mode:
> org.infinispan.CacheException: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:166)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:181)
> at org.infinispan.query.clustered.ClusteredQueryInvoker.broadcast(ClusteredQueryInvoker.java:113)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.broadcastQuery(ClusteredCacheQueryImpl.java:115)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.iterator(ClusteredCacheQueryImpl.java:90)
> at org.infinispan.query.impl.CacheQueryImpl.iterator(CacheQueryImpl.java:129)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.list(ClusteredCacheQueryImpl.java:133)
> at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:313)
> at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:274)
> at com.XXX.ClientCache.getClientsByServerId(ClientCache.java:127)
> at com.XXX.ClientManager.getClientsByServerId(ClientManager.java:157)
> at com.XXX$PingClient.run(PlayerBll.java:890)
> at java.util.TimerThread.mainLoop(Timer.java:512)
> at java.util.TimerThread.run(Timer.java:462)
> Caused by: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
> at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:549)
> at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:493)
> at org.hibernate.search.query.engine.impl.HSQueryImpl.queryDocumentExtractor(HSQueryImpl.java:292)
> at org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery.perform(CQCreateEagerQuery.java:44)
> at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:135)
> at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:129)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:170)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:179)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:208)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:156)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:162)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
> at org.jgroups.JChannel.up(JChannel.java:716)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
> at org.jgroups.protocols.pbcast.StreamingStateTransfer.up(StreamingStateTransfer.java:262)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
> at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)
> at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:700)
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:561)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140)
> at org.jgroups.protocols.FD.up(FD.java:273)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
> at org.jgroups.protocols.Discovery.up(Discovery.java:354)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> This happens with the following cache configuration:
> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
> xmlns="urn:infinispan:config:5.1">
> <global>
> <transport clusterName="XXX-cluster" machineId="XXX" siteId="XXX" rackId="XXX" distributedSyncTimeout="15000">
> <properties>
> <property name="configurationFile" value="jgroups-jdbc-ping.xml" />
> </properties>
> </transport>
> </global>
> <default>
> <transaction
> cacheStopTimeout="30000"
> transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
> lockingMode="PESSIMISTIC"
> useSynchronization="true"
> transactionMode="TRANSACTIONAL"
> syncCommitPhase="true"
> syncRollbackPhase="false"
> >
> <recovery enabled="false" />
> </transaction>
> <clustering mode="local" />
> <indexing enabled="true" indexLocalOnly="true">
> <properties>
> <property name="hibernate.search.default.directory_provider" value="ram" />
> </properties>
> </indexing>
> </default>
> <namedCache name="XXX-Client">
> <transaction
> cacheStopTimeout="30000"
> transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
> lockingMode="PESSIMISTIC"
> useSynchronization="true"
> transactionMode="TRANSACTIONAL"
> syncCommitPhase="true"
> syncRollbackPhase="false"
> >
> <recovery enabled="false" />
> </transaction>
> <invocationBatching enabled="false" />
> <loaders passivation="false" />
> <clustering mode="distribution" >
> <sync replTimeout="15000" />
> <stateRetrieval
> timeout="240000"
> retryWaitTimeIncreaseFactor="2"
> numRetries="5"
> maxNonProgressingLogWrites="100"
>
> fetchInMemoryState="false"
> logFlushTimeout="60000"
> alwaysProvideInMemoryState="false"
> />
> </clustering>
> <storeAsBinary enabled="false" storeValuesAsBinary="true" storeKeysAsBinary="true" />
> <deadlockDetection enabled="true" spinDuration="100" />
> <eviction strategy="NONE" threadPolicy="PIGGYBACK" maxEntries="-1" />
> <jmxStatistics enabled="true" />
> <locking writeSkewCheck="false" lockAcquisitionTimeout="10000" isolationLevel="READ_COMMITTED" useLockStriping="false" concurrencyLevel="32" />
> <expiration wakeUpInterval="60000" lifespan="-1" maxIdle="3000000" />
> </namedCache>
> </infinispan>
> and the following JGroups configuration:
> <config xmlns="urn:org:jgroups"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.0.xsd">
> <TCP
> bind_port="7800"
> loopback="true"
> port_range="30"
> recv_buf_size="20000000"
> send_buf_size="640000"
> discard_incompatible_packets="true"
> max_bundle_size="64000"
> max_bundle_timeout="30"
> enable_bundling="true"
> use_send_queues="true"
> sock_conn_timeout="300"
> enable_diagnostics="false"
> thread_pool.enabled="true"
> thread_pool.min_threads="2"
> thread_pool.max_threads="30"
> thread_pool.keep_alive_time="5000"
> thread_pool.queue_enabled="false"
> thread_pool.queue_max_size="100"
> thread_pool.rejection_policy="Discard"
> oob_thread_pool.enabled="true"
> oob_thread_pool.min_threads="2"
> oob_thread_pool.max_threads="30"
> oob_thread_pool.keep_alive_time="5000"
> oob_thread_pool.queue_enabled="false"
> oob_thread_pool.queue_max_size="100"
> oob_thread_pool.rejection_policy="Discard"
> />
> <JDBC_PING
> connection_url="jdbc:jtds:sqlserver://XXX;databaseName=XXX"
> connection_username="XXX"
> connection_password="XXX"
> connection_driver="net.sourceforge.jtds.jdbcx.JtdsDataSource"
> initialize_sql=""
> />
> <MERGE2 max_interval="30000"
> min_interval="10000"/>
> <FD_SOCK/>
> <FD timeout="3000" max_tries="3"/>
> <VERIFY_SUSPECT timeout="1500"/>
> <pbcast.NAKACK
> use_mcast_xmit="false"
> retransmit_timeout="300,600,1200,2400,4800"
> discard_delivered_msgs="false"/>
> <UNICAST timeout="300,600,1200"/>
> <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
> max_bytes="400000"/>
> <pbcast.STATE />
> <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>
> <UFC max_credits="2000000" min_threshold="0.10"/>
> <MFC max_credits="2000000" min_threshold="0.10"/>
> <FRAG2 frag_size="60000"/>
> </config>
> Though my entity is properly annotated.
> Here are the steps to reproduce:
> 1. Boot node A completely.
> 2. Boot node B, make all caches start (DefaultCacheManager::startCaches(...)), then breakpoint just after.
> 3. On node A, do a clustered query.
> 4. Node A fails because node B has not been fully initialized.
> Here's how I do my query:
> private CacheQuery getClusteredNonClusteredQuery(Query query)
> {
>     CacheQuery cacheQuery;
>     if (useClusteredQuery)
>     {
>         cacheQuery = searchManager.getClusteredQuery(query, cacheValueClass);
>     }
>     else
>     {
>         cacheQuery = searchManager.getQuery(query, cacheValueClass);
>     }
>     return cacheQuery;
> }
> I've also tried without supplying any "cacheValueClass", without success.
> One ugly "workaround" I've found is to force, as early as possible in the application, the local insertion and removal of a dummy key and value to initialize the search manager, like:
> cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).put("XXX", new Client("XXX"));
> cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).remove("XXX");
> Though this technique still won't guarantee that no clustered query occurs before that point.
> I think this might also be related to ISPN-627 (Provision to get Cache from CacheManager).
> Any idea or workaround? Do you think just adding a try/catch and returning an empty list could "fix" the problem?
> EDIT:
> I've added a try/catch inside org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery::perform():
> @Override
> public QueryResponse perform() {
>     query.afterDeserialise((SearchFactoryImplementor) getSearchFactory());
>     try
>     {
>         DocumentExtractor extractor = query.queryDocumentExtractor();
>         int resultSize = query.queryResultSize();
>
>         ISPNEagerTopDocs eagerTopDocs = collectKeys(extractor);
>
>         QueryResponse queryResponse = new QueryResponse(eagerTopDocs,
>                 getQueryBox().getMyId(), resultSize);
>         queryResponse.setAddress(cache.getAdvancedCache().getRpcManager()
>                 .getAddress());
>         return queryResponse;
>     }
>     catch (SearchException e)
>     {
>         QueryResponse queryResponse = new QueryResponse(new ISPNEagerTopDocs(), getQueryBox().getMyId(), 0);
>         queryResponse.setAddress(cache.getAdvancedCache().getRpcManager().getAddress());
>         return queryResponse;
>     }
> }
> And I added a default constructor in org.infinispan.query.clustered.ISPNEagerTopDocs:
> public ISPNEagerTopDocs()
> {
>     super(0, new ScoreDoc[0], 0);
>     this.keys = new Object[0];
> }
> Swallowing the exception appears to be a "good" workaround.
> Thanks a lot,
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8164) ReadAfterLostDataTest random failures
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-8164:
---------------------------------------
Summary: ReadAfterLostDataTest random failures
Key: ISPN-8164
URL: https://issues.jboss.org/browse/ISPN-8164
Project: Infinispan
Issue Type: Bug
Components: Test Suite - Core
Affects Versions: 9.1.0.Final
Reporter: Gustavo Fernandes
{noformat}
testRemove[DIST_SYNC](org.infinispan.statetransfer.ReadAfterLostDataTest) Time elapsed: 0.044 sec <<< FAILURE!
org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 15
at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259)
at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1679)
at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:661)
at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:655)
at org.infinispan.cache.impl.AbstractDelegatingCache.remove(AbstractDelegatingCache.java:363)
at org.infinispan.cache.impl.EncoderCache.remove(EncoderCache.java:664)
at org.infinispan.statetransfer.ReadAfterLostDataTest.remove(ReadAfterLostDataTest.java:223)
at org.infinispan.statetransfer.ReadAfterLostDataTest.invokeOperation(ReadAfterLostDataTest.java:175)
at org.infinispan.statetransfer.ReadAfterLostDataTest.test(ReadAfterLostDataTest.java:167)
at org.infinispan.statetransfer.ReadAfterLostDataTest.testRemove(ReadAfterLostDataTest.java:84)
{noformat}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)