[JBoss JIRA] (ISPN-2956) putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-2956:
------------------------------
Fix Version/s: 7.0.0.Beta1
(was: 7.0.0.Alpha4)
> putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
> -------------------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Priority: Minor
> Labels: hotrod-java-client, remote-clients
> Fix For: 7.0.0.Beta1
>
>
> Hot Rod's putIfAbsent might misbehave in some edge cases:
> {quote}I want to know whether the entry being put already exists in the
> remote cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> {code}
> if (remoteCache.putIfAbsent(k, v) == null) {
>     // new entry.
> } else {
>     // k already exists.
> }
> {code}
> But no: putIfAbsent() for a new entry may return a non-null value if one
> of the servers crashed during the put.
> The behavior is as follows:
> 1. The client calls putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to
> other servers. If the server crashes before completing replication, some
> servers own that (k,v) but others do not.
> 3. The client receives the error. putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the (k,v)
> replicated at step 2, without any error.
> So, putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We found a simple solution that can be applied to our customer's application.
> If the value part of each (k,v) being put is unique, or contains a unique
> token, the client can do a *double check* of whether the entry is new.
> {code}
> val = System.nanoTime(); // or uuid is also useful.
> if ((ret = cache.putIfAbsent(key, val)) == null
>         || ret.equals(val)) {
>     // new entry, if the return value is just the same.
> } else {
>     // key already exists.
> }
> {code}
> We are proposing this workaround, which works in almost all cases.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client to generate it.
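> As a concrete illustration of the quoted double-check, here is a small, self-contained sketch against the Hot Rod RemoteCache API. The class and method names are illustrative only, and note that the Hot Rod client typically needs forceReturnValues enabled for putIfAbsent to return the previous value at all:
> {code}
> import java.util.UUID;
>
> import org.infinispan.client.hotrod.RemoteCache;
>
> public final class PutIfAbsentWorkaround {
>
>     /**
>      * Returns true if this call created the entry, even when the client
>      * transparently retried against another server after a crash.
>      */
>     public static boolean createdByUs(RemoteCache<String, String> cache, String key) {
>         // Unique token per attempt; a UUID serves the same purpose as System.nanoTime().
>         String token = UUID.randomUUID().toString();
>         String previous = cache.putIfAbsent(key, token);
>         // null: the entry is new. Equal to our token: a retried request came
>         // back with our own replicated value, so we still created the entry.
>         return previous == null || previous.equals(token);
>     }
> }
> {code}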
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2958?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-2958:
------------------------------
Fix Version/s: 7.0.0.Beta1
(was: 7.0.0.Alpha4)
> Lucene Directory Read past EOF
> ------------------------------
>
> Key: ISPN-2958
> URL: https://issues.jboss.org/browse/ISPN-2958
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 5.2.1.Final
> Reporter: Clement Pang
> Assignee: Pedro Ruivo
> Labels: retest, stable_embedded_query
> Fix For: 7.0.0.Beta1
>
>
> This seems to be happening rather deterministically.
> Infinispan configuration (in JBoss EAP 6.1.0.Alpha):
> {code}
> <cache-container name="lucene">
> <local-cache name="dshell-index-data" start="EAGER">
> <eviction strategy="LIRS" max-entries="50000"/>
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-metadata" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-lock" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> </cache-container>
> {code}
> After shutting down the server and confirming that passivation did indeed write the data to disk, the subsequent start-up fails right away with:
> {code}
> Caused by: org.hibernate.search.SearchException: Could not initialize index
>     at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:162)
>     at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.start(InfinispanDirectoryProvider.java:103)
>     at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:104)
>     at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:227)
>     ... 64 more
> Caused by: java.io.IOException: Read past EOF
>     at org.infinispan.lucene.SingleChunkIndexInput.readByte(SingleChunkIndexInput.java:77)
>     at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:41)
>     at org.apache.lucene.store.DataInput.readInt(DataInput.java:86)
>     at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:272)
>     at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:182)
>     at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1168)
>     at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:157)
>     ... 67 more
> {code}
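> For orientation, a minimal sketch (not Infinispan's actual code) of the failure mode the trace points at: SingleChunkIndexInput serves reads from a single byte[] chunk fetched from the data cache, so a chunk that comes back shorter than the index metadata expects surfaces as "Read past EOF" once SegmentInfos.read() walks off its end:
> {code}
> import java.io.IOException;
>
> public class SingleChunkReadSketch {
>     private final byte[] chunk; // the one chunk loaded for this index file
>     private int position;
>
>     public SingleChunkReadSketch(byte[] chunk) {
>         this.chunk = chunk;
>     }
>
>     public byte readByte() throws IOException {
>         // A truncated or incompletely persisted chunk ends up here:
>         if (position >= chunk.length) {
>             throw new IOException("Read past EOF");
>         }
>         return chunk[position++];
>     }
> }
> {code}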
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (ISPN-3229) L1 cache entries should not be passivated
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-3229?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-3229:
------------------------------
Fix Version/s: 7.0.0.Beta1
(was: 7.0.0.Alpha4)
> L1 cache entries should not be passivated
> -----------------------------------------
>
> Key: ISPN-3229
> URL: https://issues.jboss.org/browse/ISPN-3229
> Project: Infinispan
> Issue Type: Bug
> Components: L1
> Affects Versions: 5.2.6.Final
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 7.0.0.Beta1
>
> Attachments: DistSyncL1PassivationFuncTest.java
>
>
> L1 entries are stored in the same data container as the real entries. Evicting them is fine; however, we don't want them to be passivated, since writing them to and reading them from the cache store could be costly. We should either prevent L1 cache entries from being passivated, or provide an option that allows it.
> Currently, L1 entries are only distinguishable from other entries by the fact that they are mortal, a property that other operations rely on as well; they would therefore need a placeholder of some kind to tell the container that a given entry is an L1 entry.
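> A hypothetical sketch of that placeholder idea (neither L1Marker nor PassivationFilter is a real Infinispan API): mark L1 entries with a distinct type so the passivation path can recognize and skip them.
> {code}
> public final class PassivationFilter {
>
>     /** Hypothetical marker for entries that exist only as L1 (remote-read) copies. */
>     public interface L1Marker {
>     }
>
>     /**
>      * Decides whether an evicted entry should be written to the cache store.
>      * Evicting an L1 entry is fine; persisting it is wasted I/O, because the
>      * owner nodes already hold the authoritative copy.
>      */
>     public static boolean shouldPassivate(Object entry) {
>         return !(entry instanceof L1Marker);
>     }
> }
> {code}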
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (ISPN-3244) TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-3244?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-3244:
------------------------------
Fix Version/s: 7.0.0.Beta1
(was: 7.0.0.Alpha4)
> TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3244
> URL: https://issues.jboss.org/browse/ISPN-3244
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.2.6.Final, 5.3.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 7.0.0.Beta1
>
>
> Let's say we have a cluster with 5 nodes: A(r1), B(r2), C(r2), D(r3), E(r3)
> TopologyAwareSyncConsistentHashFactory will spread the segments equally across the racks, meaning A will own twice as many segments as each of the other nodes.
> TopologyAwareConsistentHashFactory limits the maximum number of segments per node, so that A owns only as many segments as the other nodes. There is one caveat: the number of racks must be greater than numOwners, otherwise each rack must hold (at least) one copy of all the data.
> TopologyAwareSyncConsistentHashFactory is somewhat random, so we can't distribute the data perfectly, but we can cap the number of segments on each node at something like 1.5x the average number of segments.
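> A rough sketch of such a cap, with illustrative numbers (60 segments and numOwners = 2 are assumptions for the example, not taken from the report):
> {code}
> public final class SegmentCapSketch {
>
>     /** Cap each node at roughly 1.5x the average number of segment copies. */
>     public static int maxSegmentsPerNode(int numSegments, int numOwners, int numNodes) {
>         double average = (double) numSegments * numOwners / numNodes;
>         return (int) Math.ceil(1.5 * average);
>     }
>
>     public static void main(String[] args) {
>         // The 5-node example above with 60 segments and numOwners = 2:
>         // average = 60 * 2 / 5 = 24 copies per node, so the cap is 36.
>         System.out.println(maxSegmentsPerNode(60, 2, 5));
>     }
> }
> {code}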
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)