[JBoss JIRA] (ISPN-5495) ConcurrentStartTest.testConcurrentStart random failures
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5495?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5495:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> ConcurrentStartTest.testConcurrentStart random failures
> -------------------------------------------------------
>
> Key: ISPN-5495
> URL: https://issues.jboss.org/browse/ISPN-5495
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 8.0.0.CR1
>
>
> {noformat}
> org.testng.internal.thread.ThreadTimeoutException: Method org.testng.internal.TestNGMethod.testConcurrentStart() didn't finish within the time-out 60000
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
> at org.infinispan.test.TestingUtil.waitForRehashToComplete(TestingUtil.java:253)
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5493) SiteProviderTopologyChangeTest.testXSiteSTDuringLeave random failures
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5493?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5493:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> SiteProviderTopologyChangeTest.testXSiteSTDuringLeave random failures
> ---------------------------------------------------------------------
>
> Key: ISPN-5493
> URL: https://issues.jboss.org/browse/ISPN-5493
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Cross-Site Replication, Test Suite - Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: testsuite_failure
> Fix For: 8.0.0.CR1
>
> Attachments: SiteProviderTopologyChangeTest.log.gz
>
>
> It looks like the node is killed before the {{XSiteStateTransferControlCommand(START_SEND)}} command has been replicated to all the nodes in the source cluster, and the resulting {{SuspectException}} stops the state push:
> {noformat}
> 23:33:22,834 DEBUG (testng-SiteProviderTopologyChangeTest:) [XSiteAdminOperations] Unable to pushState to 'NYC'.java.lang.Exception: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: SiteProviderTopologyChangeTest-NodeBG-25313
> at org.infinispan.xsite.statetransfer.XSiteStateTransferManagerImpl.startPushState(XSiteStateTransferManagerImpl.java:151)
> at org.infinispan.xsite.XSiteAdminOperations.pushState(XSiteAdminOperations.java:238)
> at org.infinispan.xsite.statetransfer.failures.AbstractTopologyChangeTest.startStateTransfer(AbstractTopologyChangeTest.java:144)
> at org.infinispan.xsite.statetransfer.failures.SiteProviderTopologyChangeTest.doXSiteStateTransferDuringTopologyChange(SiteProviderTopologyChangeTest.java:241)
> at org.infinispan.xsite.statetransfer.failures.SiteProviderTopologyChangeTest.testXSiteSTDuringLeave(SiteProviderTopologyChangeTest.java:78)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: SiteProviderTopologyChangeTest-NodeBG-25313
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.infinispan.commons.util.concurrent.NotifyingFutureImpl.get(NotifyingFutureImpl.java:77)
> at org.infinispan.xsite.statetransfer.XSiteStateTransferManagerImpl.invokeRemotelyInLocalSite(XSiteStateTransferManagerImpl.java:376)
> at org.infinispan.xsite.statetransfer.XSiteStateTransferManagerImpl.controlStateTransferOnLocalSite(XSiteStateTransferManagerImpl.java:335)
> at org.infinispan.xsite.statetransfer.XSiteStateTransferManagerImpl.startPushState(XSiteStateTransferManagerImpl.java:142)
> ... 24 more
> Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: SiteProviderTopologyChangeTest-NodeBG-25313
> at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:74)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287)
> at org.infinispan.remoting.rpc.RpcManagerImpl$2.call(RpcManagerImpl.java:382)
> at org.infinispan.remoting.rpc.RpcManagerImpl$2.call(RpcManagerImpl.java:378)
> ... 4 more
> {noformat}
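> One possible test-side mitigation, assuming the failed push surfaces as an exception whose cause chain contains the {{SuspectException}} (as in the trace above), would be to retry the push until the view has settled before proceeding with the node kill. The helper below is a hypothetical sketch, not part of the test suite or the eventual fix:
> {code}
> import org.infinispan.remoting.transport.jgroups.SuspectException;
>
> // Hypothetical helper: retries an x-site push that failed only because a
> // member was suspected while the cluster view was still settling.
> final class PushRetry {
>
>    interface Push {
>       void run() throws Exception;   // e.g. a call that starts the x-site state push
>    }
>
>    static void pushWithRetry(Push push, int maxAttempts) throws Exception {
>       for (int attempt = 1; ; attempt++) {
>          try {
>             push.run();
>             return;                    // push accepted by all members
>          } catch (Exception e) {
>             if (!hasSuspectCause(e) || attempt >= maxAttempts) {
>                throw e;                // give up on non-transient failures
>             }
>             Thread.sleep(100);         // let the view settle, then retry
>          }
>       }
>    }
>
>    private static boolean hasSuspectCause(Throwable t) {
>       for (Throwable c = t; c != null; c = c.getCause()) {
>          if (c instanceof SuspectException) return true;
>       }
>       return false;
>    }
> }
> {code}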
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5523) Enabling eager caching can lead to server throwing "OutOfMemoryError: Direct buffer memory"
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5523?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5523:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Enabling eager caching can lead to server throwing "OutOfMemoryError: Direct buffer memory"
> -------------------------------------------------------------------------------------------
>
> Key: ISPN-5523
> URL: https://issues.jboss.org/browse/ISPN-5523
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.2.4.Final, 8.0.0.CR1, 8.0.0.Final
>
>
> Some near caching tests are throwing:
> {code}
> [0m[31m04:11:24,499 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRodServerWorker-43) ISPN005009: Unexpected error before any request parameters read: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) [rt.jar:1.7.0_75]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) [rt.jar:1.7.0_75]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) [rt.jar:1.7.0_75]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:433) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:241) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:106) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_75]
> {code}
> KeyValueVersionConverter allocates a byte buffer but does not release it. It could be the cause...
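> A minimal sketch of the suspected leak pattern and its fix, assuming the converter works on a pooled Netty {{ByteBuf}}: any buffer allocated for the conversion has to be released once its contents have been copied out, otherwise the pooled direct memory is never returned to the arena. The class and helper below are illustrative, not the actual KeyValueVersionConverter code:
> {code}
> import io.netty.buffer.ByteBuf;
> import io.netty.buffer.PooledByteBufAllocator;
>
> // Illustrative only: shows the allocate/copy/release pattern, not the real converter.
> public final class BufferCopyExample {
>
>    public static byte[] copyAndRelease(int size) {
>       // Allocate a pooled direct buffer, as Netty's IO path does.
>       ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(size);
>       try {
>          buf.writeZero(size);                  // stand-in for the real payload
>          byte[] out = new byte[buf.readableBytes()];
>          buf.readBytes(out);                   // copy into a heap array we can keep
>          return out;
>       } finally {
>          buf.release();                        // without this, direct memory is leaked
>       }
>    }
> }
> {code}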
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5521) Upgrade to Hibernate ORM 5.0.0.CR1
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5521?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5521:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Upgrade to Hibernate ORM 5.0.0.CR1
> ----------------------------------
>
> Key: ISPN-5521
> URL: https://issues.jboss.org/browse/ISPN-5521
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Tristan Tarrant
> Fix For: 8.0.0.CR1
>
>
> I'm opening this to make sure we keep Infinispan aligned with the other platforms, which are now moving to Hibernate 5.
> This affects at least the JPA CacheStore; I'm not sure whether other components are affected as well.
> Among the many improvements, the one most noticeable for Infinispan is better OSGi support.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5515:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.CR1
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache waits to receive the first CH in which it is a member, and then deletes only the entries in the segments that it does not own in that CH.
> The intention of this was to remove as little as possible from the existing data, e.g. if the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
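> A rough sketch of the proposed start ordering, using hypothetical helper names that only stand in for the real cache start logic:
> {code}
> // Hypothetical sketch of the proposed start sequence; none of these helper
> // methods exist in Infinispan, they only illustrate the ordering described above.
> public final class ProposedCacheStartSequence {
>
>    public void startCache() {
>       if (isFirstNodeToStart()) {
>          // Safe to preload: no other node holds newer data for this cache.
>          preloadFromStore();
>       } else {
>          // Another node already runs the cache: purge local leftovers so stale
>          // preloaded entries can never shadow the cluster's current state, then
>          // rely entirely on the initial state transfer.
>          clearDataContainerAndStore();
>       }
>       joinClusterAndReceiveInitialState();
>    }
>
>    private boolean isFirstNodeToStart()             { return true; /* membership check */ }
>    private void preloadFromStore()                  { /* load persisted entries */ }
>    private void clearDataContainerAndStore()        { /* clear memory and store */ }
>    private void joinClusterAndReceiveInitialState() { /* normal join + state transfer */ }
> }
> {code}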
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5507) Transactions committed immediately before cache stop can block shutdown
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5507?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5507:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Transactions committed immediately before cache stop can block shutdown
> -----------------------------------------------------------------------
>
> Key: ISPN-5507
> URL: https://issues.jboss.org/browse/ISPN-5507
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.2.1.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 8.0.0.CR1
>
>
> This is causing random failures in {{DistributedEntryRetrieverTxTest.verifyNodeLeavesBeforeGettingData}}.
> The test inserts some values into the cache, starts an iteration, and then kills one of the nodes. In rare instances, the killed cache receives the {{TxCompletionNotificationCommand}} for one of the writes only after it has started shutting down, and ignores it. That leaves the remote transaction ongoing, and {{TransactionTable.shutDownGracefully()}} blocks for 30 seconds, causing a {{TimeoutException}} elsewhere in the test.
> {noformat}
> 10:52:18,129 TRACE (remote-thread-NodeAM-p12133-t6:) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=null} for command CommitCommand {gtx=GlobalTransaction:<NodeAL-45757>:22325:remote, cacheName='org.infinispan.iteration.DistributedEntryRetrieverTxTest', topologyId=4}
> 10:52:18,129 TRACE (testng-DistributedEntryRetrieverTxTest:) [JGroupsTransport] dests=[NodeAM-45518, NodeAL-45757], command=TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=4, gtx=GlobalTransaction:<NodeAL-45757>:22325:local, cacheName=org.infinispan.iteration.DistributedEntryRetrieverTxTest} , mode=ASYNCHRONOUS, timeout=15000
> 10:52:18,133 DEBUG (testng-DistributedEntryRetrieverTxTest:) [CacheImpl] Stopping cache org.infinispan.iteration.DistributedEntryRetrieverTxTest on NodeAM-45518
> 10:52:18,133 TRACE (OOB-2,NodeAM-45518:) [GlobalInboundInvocationHandler] Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=4, gtx=GlobalTransaction:<NodeAL-45757>:22325:local, cacheName=org.infinispan.iteration.DistributedEntryRetrieverTxTest} [sender=NodeAL-45757]
> 10:52:18,133 TRACE (OOB-2,NodeAM-45518:) [GlobalInboundInvocationHandler] Silently ignoring that org.infinispan.iteration.DistributedEntryRetrieverTxTest cache is not defined
> 10:52:18,133 DEBUG (testng-DistributedEntryRetrieverTxTest:) [TransactionTable] Wait for on-going transactions to finish for 30 seconds.
> 10:52:48,139 WARN (testng-DistributedEntryRetrieverTxTest:) [TransactionTable] ISPN000100: Stopping, but there are 0 local transactions and 1 remote transactions that did not finish in time.
> 10:52:48,386 ERROR (testng-DistributedEntryRetrieverTxTest:) [UnitTestTestNGListener] Test verifyNodeLeavesBeforeGettingData(org.infinispan.iteration.DistributedEntryRetrieverTxTest) failed.
> java.lang.IllegalStateException: Thread already timed out waiting for event pre_send_response_released
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:131)
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:116)
> at org.infinispan.iteration.DistributedEntryRetrieverTest.verifyNodeLeavesBeforeGettingData(DistributedEntryRetrieverTest.java:105)
> {noformat}
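> The 30-second block comes from a bounded wait for in-flight transactions during cache stop. A minimal sketch of that pattern (an illustration only, not the actual {{TransactionTable}} code):
> {code}
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.TimeUnit;
>
> // Illustrative only: a bounded wait for pending remote transactions during shutdown.
> public final class GracefulShutdownSketch {
>
>    private final Set<String> remoteTransactions = ConcurrentHashMap.newKeySet();
>
>    public void shutDownGracefully(long timeout, TimeUnit unit) throws InterruptedException {
>       long deadline = System.nanoTime() + unit.toNanos(timeout);
>       while (!remoteTransactions.isEmpty() && System.nanoTime() < deadline) {
>          Thread.sleep(50);   // poll until completion notifications arrive or the deadline passes
>       }
>       if (!remoteTransactions.isEmpty()) {
>          // Mirrors the ISPN000100 warning: transactions whose completion command
>          // arrived after cache stop began (and was ignored) are reported and abandoned.
>          System.err.printf("Stopping, but %d remote transactions did not finish in time%n",
>                remoteTransactions.size());
>       }
>    }
> }
> {code}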
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5614) Write performance regression after ISPN-5484
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5614?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5614:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Write performance regression after ISPN-5484
> --------------------------------------------
>
> Key: ISPN-5614
> URL: https://issues.jboss.org/browse/ISPN-5614
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.0.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.CR1
>
>
> The regression test shows a significant drop in throughput in the replicated and distributed write tests.
> The measurement was taken after adjusting the internal thread pool settings in the JGroups configuration: with the defaults (min=5, max=20, queue=0), the distributed read test would fail to finish.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5561) Problem in dealing with the HQL queries that contain string comparison operations, when it involves empty string as an operand
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5561?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5561:
--------------------------------
Fix Version/s: 8.0.0.CR1
(was: 8.0.0.Beta2)
> Problem in dealing with the HQL queries that contain string comparison operations, when it involves empty string as an operand
> ------------------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-5561
> URL: https://issues.jboss.org/browse/ISPN-5561
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 7.2.0.Final
> Environment: Infinispan version: Infinispan 7.2.0-Final
> Operating System: Linux
> Compatibility mode is enabled.
> Indexing is enabled and 'autoconfig' parameter is set to true.
> Reporter: Pavan Kundgol
> Assignee: Adrian Nistor
> Fix For: 7.2.4.Final, 8.0.0.CR1, 8.0.0.Final
>
>
> When an HQL query that compares a string field against an empty string is executed through the Hot Rod client (Java), it results in the following error:
> org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[7] returned server error (status=0x85): org.hibernate.search.exception.EmptyQueryException: HSEARCH000146: The query string '' applied on field 'CITY' has no meaningful tokens to be matched. Validate the query input against the Analyzer applied on this field.
> The same problem may also occur when the string comparison involves a string containing certain special keywords (Hibernate Search stop words) as operands.
> The problem seems to be associated with the 'hibernate-hql-lucene' parser used by the 'infinispan-remote-query-server' module: the parser translates comparison operations directly into Lucene keyword matching operations, irrespective of the field type, although phrase matching is more appropriate for string fields. This leads to an invalid query when the field in question is of type string and is matched against an empty string (or a string containing one or more Hibernate Search stop words).
> For example, the where condition city='' in the following HQL query
> select name from com.test.User where city=''
> is currently translated by the hql-lucene parser into the Lucene query
> queryBuilder.keyword().onField("city").matching("").createQuery()
> whereas the recommended translation is
> queryBuilder.phrase().onField("city").sentence("").createQuery()
> With the former translation, Hibernate Search query execution fails because it runs the query input through the standard analyzer. The problem can therefore also be avoided by disabling the analyzer for the query:
> queryBuilder.keyword().onField("city").ignoreAnalyzer().matching("").createQuery()
> The preferred solution may be to enhance the 'hibernate-hql-lucene' library to do type-specific translations, and specifically to translate string comparison operations of the HQL query into phrase matching operations of the Lucene query (instead of keyword matching operations).
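> To make the three translations above concrete, here is a self-contained Hibernate Search DSL sketch. It assumes an indexed entity with a string {{city}} field and a {{QueryBuilder}} obtained from the cache's SearchManager; the class and method names are illustrative:
> {code}
> import org.apache.lucene.search.Query;
> import org.hibernate.search.query.dsl.QueryBuilder;
>
> // Illustrative comparison of the three translations discussed above. The
> // QueryBuilder is assumed to come from the cache's SearchManager, e.g.
> // Search.getSearchManager(cache).buildQueryBuilderForClass(User.class).get().
> public final class EmptyStringQueryExample {
>
>    // Current translation: keyword matching on an analyzed field. With the
>    // standard analyzer an empty string produces no tokens, so Hibernate
>    // Search throws EmptyQueryException (HSEARCH000146).
>    static Query keywordQuery(QueryBuilder qb) {
>       return qb.keyword().onField("city").matching("").createQuery();
>    }
>
>    // Suggested translation: phrase matching, which is more appropriate for
>    // string fields (including empty strings and stop words).
>    static Query phraseQuery(QueryBuilder qb) {
>       return qb.phrase().onField("city").sentence("").createQuery();
>    }
>
>    // Workaround: keep keyword matching but bypass the analyzer entirely.
>    static Query keywordIgnoringAnalyzer(QueryBuilder qb) {
>       return qb.keyword().onField("city").ignoreAnalyzer().matching("").createQuery();
>    }
> }
> {code}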
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)