[JBoss JIRA] (ISPN-2918) TopologyAwareConsistentHashFactory doesn't distribute data to nodes evenly
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2918?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-2918:
------------------------------------------
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=923928, https://bugzilla.redhat.com/show_bug.cgi?id=924563 (was: https://bugzilla.redhat.com/show_bug.cgi?id=923928)
> TopologyAwareConsistentHashFactory doesn't distribute data to nodes evenly
> --------------------------------------------------------------------------
>
> Key: ISPN-2918
> URL: https://issues.jboss.org/browse/ISPN-2918
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.4.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 5.2.x, jdg6
> Fix For: 5.2.6.Final, 5.3.0.Alpha1
>
>
> When the topology of a cluster is "balanced" (i.e., all sites have the same number of racks, all racks have the same number of machines, and all machines have the same number of nodes), the number of segments owned by each node should be approximately the same.
> The current algorithm properly balances the primary owners of the segments, but the backup owners are not balanced, so a node can end up owning far more segments than expected.
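To make the imbalance concrete, here is a minimal, self-contained sketch; the segment-to-owner table below is hypothetical (in Infinispan it would come from the ConsistentHash), but it shows primary ownership coming out even while total ownership is skewed, which is the situation this issue describes:
{code}
import java.util.*;

// Tally primary and total segment ownership per node from a
// segment -> ordered-owner-list table (primary first, then backups).
public class OwnershipBalance {
    public static void main(String[] args) {
        Map<Integer, List<String>> owners = new LinkedHashMap<>();
        owners.put(0, Arrays.asList("A", "B"));
        owners.put(1, Arrays.asList("A", "B"));
        owners.put(2, Arrays.asList("B", "C"));
        owners.put(3, Arrays.asList("B", "C"));
        owners.put(4, Arrays.asList("C", "B"));
        owners.put(5, Arrays.asList("C", "B"));

        Map<String, Integer> primary = new TreeMap<>();
        Map<String, Integer> total = new TreeMap<>();
        for (List<String> segmentOwners : owners.values()) {
            primary.merge(segmentOwners.get(0), 1, Integer::sum);
            for (String node : segmentOwners)
                total.merge(node, 1, Integer::sum);
        }
        System.out.println("primary: " + primary); // {A=2, B=2, C=2} -- balanced
        System.out.println("total:   " + total);   // {A=2, B=6, C=4} -- skewed
    }
}
{code}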
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Clement Pang (JIRA)
Clement Pang created ISPN-2958:
----------------------------------
Summary: Lucene Directory Read past EOF
Key: ISPN-2958
URL: https://issues.jboss.org/browse/ISPN-2958
Project: Infinispan
Issue Type: Bug
Components: Lucene Directory
Affects Versions: 5.2.1.Final
Reporter: Clement Pang
Assignee: Sanne Grinovero
This seems to be happening rather deterministically.
Infinispan configuration (in JBoss EAP 6.1.0.Alpha):
{code}
<cache-container name="lucene">
<local-cache name="dshell-index-data" start="EAGER">
<eviction strategy="LIRS" max-entries="50000"/>
<file-store path="lucene" passivation="true" purge="false"/>
</local-cache>
<local-cache name="dshell-index-metadata" start="EAGER">
<file-store path="lucene" passivation="true" purge="false"/>
</local-cache>
<local-cache name="dshell-index-lock" start="EAGER">
<file-store path="lucene" passivation="true" purge="false"/>
</local-cache>
</cache-container>
{code}
After shutting down the server and confirming that passivation did indeed write the data to disk, the subsequent start-up fails right away with:
{code}
Caused by: org.hibernate.search.SearchException: Could not initialize index
at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:162)
at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.start(InfinispanDirectoryProvider.java:103)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:104)
at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:227)
... 64 more
Caused by: java.io.IOException: Read past EOF
at org.infinispan.lucene.SingleChunkIndexInput.readByte(SingleChunkIndexInput.java:77)
at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:41)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:86)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:272)
at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:182)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1168)
at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:157)
... 67 more
{code}
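It is worth noting that all three caches above point their {{file-store}} at the same path ({{lucene}}). Whether that is related to this failure is not established, but a variant that gives each cache its own location (the path names below are hypothetical) can rule the shared store out while diagnosing:
{code}
<cache-container name="lucene">
    <local-cache name="dshell-index-data" start="EAGER">
        <eviction strategy="LIRS" max-entries="50000"/>
        <file-store path="lucene-data" passivation="true" purge="false"/>
    </local-cache>
    <local-cache name="dshell-index-metadata" start="EAGER">
        <file-store path="lucene-metadata" passivation="true" purge="false"/>
    </local-cache>
    <local-cache name="dshell-index-lock" start="EAGER">
        <file-store path="lucene-lock" passivation="true" purge="false"/>
    </local-cache>
</cache-container>
{code}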
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2957) XMLStreamException raised by Parser52.parseInterceptor() when declaring custom interceptor with properties
by Alan STS (JIRA)
Alan STS created ISPN-2957:
------------------------------
Summary: XMLStreamException raised by Parser52.parseInterceptor() when declaring custom interceptor with properties
Key: ISPN-2957
URL: https://issues.jboss.org/browse/ISPN-2957
Project: Infinispan
Issue Type: Bug
Components: Configuration, Core API
Affects Versions: 5.2.1.Final
Reporter: Alan STS
Assignee: Mircea Markus
The Infinispan 5.2 schema allows declaring a custom interceptor with properties, as follows:
{code}
<customInterceptors>
    <interceptor position="LAST" class="com.group.awms.is.resource.dao.infinispan.interceptor.ApplicationInterceptor">
        <properties>
            <property name="cacheActivityName" value="activity" />
        </properties>
    </interceptor>
</customInterceptors>
{code}
The problem is that the {{parseInterceptor}} method of {{Parser52}} does not expect any nested content inside {{<interceptor>}}, so the following exception is raised:
{code}
Message: Unexpected element '{urn:infinispan:config:5.2}properties' encountered
at org.infinispan.configuration.parsing.ParseUtils.unexpectedElement(ParseUtils.java:57)
at org.infinispan.configuration.parsing.ParseUtils.requireNoContent(ParseUtils.java:152)
at org.infinispan.configuration.parsing.Parser52.parseInterceptor(Parser52.java:1186)
at org.infinispan.configuration.parsing.Parser52.parseCustomInterceptors(Parser52.java:1148)
at org.infinispan.configuration.parsing.Parser52.parseCache(Parser52.java:156)
at org.infinispan.configuration.parsing.Parser52.parseNamedCache(Parser52.java:139)
at org.infinispan.configuration.parsing.Parser52.readElement(Parser52.java:106)
at org.infinispan.configuration.parsing.Parser52.readElement(Parser52.java:75)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110)
at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69)
at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:77)
... 40 more
{code}
The Parser class is not in sync with the schema.
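For illustration, this is the usual StAX pattern for accepting an optional nested element; it is a self-contained sketch of what a schema-conformant {{parseInterceptor}} would have to do instead of calling {{ParseUtils.requireNoContent()}}, not the actual Parser52 code:
{code}
import java.io.StringReader;
import java.util.Properties;
import javax.xml.stream.*;

public class InterceptorParsing {
    // Reader is positioned on <interceptor>; consume the optional <properties> child.
    static Properties parseInterceptor(XMLStreamReader reader) throws XMLStreamException {
        Properties props = new Properties();
        while (reader.nextTag() != XMLStreamConstants.END_ELEMENT) {
            if ("properties".equals(reader.getLocalName())) {
                while (reader.nextTag() != XMLStreamConstants.END_ELEMENT) {
                    // <property name="..." value="..."/>
                    props.setProperty(reader.getAttributeValue(null, "name"),
                                      reader.getAttributeValue(null, "value"));
                    reader.nextTag(); // consume </property>
                }
            } else {
                throw new XMLStreamException("Unexpected element " + reader.getLocalName());
            }
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<interceptor position='LAST' class='x.Y'>"
                   + "<properties><property name='cacheActivityName' value='activity'/></properties>"
                   + "</interceptor>";
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
        r.nextTag(); // move onto <interceptor>
        System.out.println(parseInterceptor(r)); // {cacheActivityName=activity}
    }
}
{code}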
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2956) Hot Rod putIfAbsent to take version to handle edge cases
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2956:
-----------------------------------
Fix Version/s: 6.0.0.Final
Component/s: Remote protocols
> Hot Rod putIfAbsent to take version to handle edge cases
> --------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Feature Request
> Components: Remote protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote
> cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> if (remoteCache.putIfAbsent(k,v) == null) {
> // new entry.
> } else {
> // k already exists.
> }
> But no.
> putIfAbsent() for a new entry may return a non-null value if one of the
> servers crashed during the put.
> The behavior is like the following:
> 1. The client does putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to
> other servers. If the server crashes before completing replication, some
> servers own that (k,v), but others do not.
> 3. The client receives the error. putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the
> (k,v) replicated at step 2, without any error.
> So, putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We got a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique or contains a unique token,
> the client can *double-check* whether the entry is new.
> val = System.nanoTime(); // or a UUID is also useful.
> if ((ret = cache.putIfAbsent(key, val)) == null
> || ret.equals(val)) {
> // new entry, if the returned value is exactly the same.
> } else {
> // key already exists.
> }
> We are proposing this workaround, which works well in most cases.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client to generate it.
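For reference, a minimal sketch of the quoted double-check workaround against the Java Hot Rod client (the key and wiring are assumptions; note the client must be asked to return previous values, since by default Hot Rod write operations return null):
{code}
import java.util.UUID;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class PutIfAbsentDoubleCheck {
    public static void main(String[] args) {
        // Reads hotrod-client.properties from the classpath; 'true' forces
        // the server to return previous values so putIfAbsent is usable here.
        RemoteCacheManager rcm = new RemoteCacheManager();
        RemoteCache<String, String> cache = rcm.getCache(true);

        String token = UUID.randomUUID().toString(); // unique value per put
        String ret = cache.putIfAbsent("key", token);
        if (ret == null || ret.equals(token)) {
            // New entry: either the put succeeded outright, or a retry found
            // the value we ourselves wrote before a server crashed.
            System.out.println("created");
        } else {
            System.out.println("key already exists");
        }
        rcm.stop();
    }
}
{code}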
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2441) Some core interceptors trigger custom interceptor error message
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2441?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-2441:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Some core interceptors trigger custom interceptor error message
> ---------------------------------------------------------------
>
> Key: ISPN-2441
> URL: https://issues.jboss.org/browse/ISPN-2441
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Adrian Nistor
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> I'm not sure if this is really a problem or if it's just a superfluous error message, but I'm seeing about 6000 of these during a typical test suite run:
> {noformat}
> ISPN000173: Custom interceptor org.infinispan.interceptors.ActivationInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.CacheMgmtInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.DistCacheStoreInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> ISPN000173: Custom interceptor org.infinispan.interceptors.InvalidationInterceptor has used @Inject, @Start or @Stop. These methods will not be processed. Please extend org.infinispan.interceptors.base.BaseCustomInterceptor instead, and your custom interceptor will have access to a cache and cacheManager. Override stop() and start() for lifecycle methods.
> {noformat}
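For context, the pattern the warning asks for looks roughly like the sketch below (hedged: the visitor method shape follows the usual CommandInterceptor contract, and the logging calls are illustrative):
{code}
import org.infinispan.commands.write.PutKeyValueCommand;
import org.infinispan.context.InvocationContext;
import org.infinispan.interceptors.base.BaseCustomInterceptor;

// Extend BaseCustomInterceptor (which exposes the cache and cache manager)
// and override start()/stop() instead of using @Inject/@Start/@Stop.
public class AuditInterceptor extends BaseCustomInterceptor {
    @Override
    protected void start() {
        getLog().infof("Audit interceptor started for cache %s", cache.getName());
    }

    @Override
    protected void stop() {
        getLog().info("Audit interceptor stopped");
    }

    @Override
    public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
        getLog().tracef("put: %s", command.getKey());
        return invokeNextInterceptor(ctx, command);
    }
}
{code}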
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2956) Hot Rod putIfAbsent to take version to handle edge cases
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño moved ESC-20 to ISPN-2956:
-------------------------------------------
Project: Infinispan (was: Escalante)
Key: ISPN-2956 (was: ESC-20)
Workflow: GIT Pull Request workflow (was: jira)
> Hot Rod putIfAbsent to take version to handle edge cases
> --------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote
> cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> if (remoteCache.putIfAbsent(k,v) == null) {
> // new entry.
> } else {
> // k already exists.
> }
> But no.
> putIfAbsent() for a new entry may return a non-null value if one of the
> servers crashed during the put.
> The behavior is like the following:
> 1. The client does putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to
> other servers. If the server crashes before completing replication, some
> servers own that (k,v), but others do not.
> 3. The client receives the error. putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the
> (k,v) replicated at step 2, without any error.
> So, putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We got a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique or contains a unique token,
> the client can *double-check* whether the entry is new.
> val = System.nanoTime(); // or a UUID is also useful.
> if ((ret = cache.putIfAbsent(key, val)) == null
> || ret.equals(val)) {
> // new entry, if the returned value is exactly the same.
> } else {
> // key already exists.
> }
> We are proposing this workaround, which works well in most cases.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client to generate it.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2953) Alternate statistics MBeans to allow for a Hawt.io Infinispan plugin
by Manik Surtani (JIRA)
[ https://issues.jboss.org/browse/ISPN-2953?page=com.atlassian.jira.plugin.... ]
Manik Surtani commented on ISPN-2953:
-------------------------------------
Awesome. Looks good, James. Closing/rejecting this JIRA.
> Alternate statistics MBeans to allow for a Hawt.io Infinispan plugin
> --------------------------------------------------------------------
>
> Key: ISPN-2953
> URL: https://issues.jboss.org/browse/ISPN-2953
> Project: Infinispan
> Issue Type: Feature Request
> Components: JMX, reporting and management
> Affects Versions: 5.2.5.Final
> Reporter: Manik Surtani
> Assignee: Manik Surtani
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
> Attachments: hawtio-infinispan.png
>
>
> Hawt.io draws its MBean trees in a manner different from what RHQ does, and the existing statistics MBean is hard to consume from Hawt.io.
> This feature request is to register an alternate set of MBeans if a flag is passed in ({{-Dinfinispan.jmx.alternate_name_order=true}}?).
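For illustration of what an "alternate name order" buys: a JMX {{ObjectName}} preserves the key order it was created with (see {{getKeyPropertyListString()}}), and consoles that build a tree from that order group the same MBean very differently. The names below are hypothetical:
{code}
import javax.management.ObjectName;

public class NameOrder {
    public static void main(String[] args) throws Exception {
        // Same MBean, two key orders -- a tree-building console such as
        // Hawt.io groups these under entirely different branches.
        ObjectName rhqStyle = new ObjectName(
            "org.infinispan:type=Cache,name=\"users\",manager=\"default\",component=Statistics");
        ObjectName altStyle = new ObjectName(
            "org.infinispan:manager=\"default\",name=\"users\",type=Cache,component=Statistics");
        System.out.println(rhqStyle.getKeyPropertyListString());
        System.out.println(altStyle.getKeyPropertyListString());
    }
}
{code}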
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2955) Async marshalling executor retry when queue fills
by Manik Surtani (JIRA)
Manik Surtani created ISPN-2955:
-----------------------------------
Summary: Async marshalling executor retry when queue fills
Key: ISPN-2955
URL: https://issues.jboss.org/browse/ISPN-2955
Project: Infinispan
Issue Type: Enhancement
Components: Marshalling
Affects Versions: 5.2.5.Final
Reporter: Manik Surtani
Assignee: Galder Zamarreño
Fix For: 5.3.0.Alpha1, 5.3.0.Final
When using an async transport and async marshalling, an executor is used to process the marshalling task in a separate thread and the caller's thread is allowed to return immediately.
When the executor's queue fills and can accept no more tasks, it throws a {{RejectedExecutionException}}, causing a very bad user/developer experience. A more correct approach is to catch the {{RejectedExecutionException}}, block, and retry the task submission.
The end result is that, in the degenerate case (when the executor queue is full), those invocations will complete slightly more slowly instead of throwing exceptions.
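The block-and-retry idea can be sketched with a {{RejectedExecutionHandler}} that parks the submitter until the queue has room (a schematic illustration, not Infinispan's actual executor wiring):
{code}
import java.util.concurrent.*;

public class CallerBlocksExecutor {
    public static ExecutorService create(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(queueCapacity),
            new RejectedExecutionHandler() {
                @Override
                public void rejectedExecution(Runnable r, ThreadPoolExecutor pool) {
                    try {
                        // Block instead of throwing; a production version must
                        // also re-check pool.isShutdown() to avoid queueing
                        // work onto a dying pool.
                        pool.getQueue().put(r);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException("Interrupted while waiting to submit", e);
                    }
                }
            });
    }
}
{code}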
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira