[JBoss JIRA] (ISPN-2959) Improve the release scripts
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-2959:
-------------------------------------
Summary: Improve the release scripts
Key: ISPN-2959
URL: https://issues.jboss.org/browse/ISPN-2959
Project: Infinispan
Issue Type: Enhancement
Reporter: Tristan Tarrant
Assignee: Mircea Markus
The release script should be improved in the following ways:
- if appropriate (e.g. after a Final release), it should bump the version on the released branch
- it should be generic enough to handle releases of both Infinispan and Infinispan Server
[JBoss JIRA] (ISPN-2869) Optimize GridInputStream.skip()
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2869?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño resolved ISPN-2869.
------------------------------------
Fix Version/s: 5.3.0.Alpha1, 5.3.0.Final (was: 6.0.0.Final)
Resolution: Done
> Optimize GridInputStream.skip()
> -------------------------------
>
> Key: ISPN-2869
> URL: https://issues.jboss.org/browse/ISPN-2869
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Marko Lukša
> Assignee: Marko Lukša
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> {{GridInputStream.skip()}} is currently very inefficient, especially when skipping past the currently loaded chunk.
> The method also has a small edge-case bug: when the parameter is negative, it should not skip any bytes, but it actually skips/reads all the remaining bytes of the stream.
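A minimal sketch of the intended behavior, assuming hypothetical {{position}} and {{length}} fields on the stream: clamp non-positive arguments to zero, per the general {{InputStream.skip()}} contract, and advance the position without reading through the intervening chunks.
{code}
@Override
public long skip(long n) {
   if (n <= 0) {
      return 0; // negative (or zero) argument: skip nothing
   }
   // Move the logical position directly instead of reading and discarding
   // bytes chunk by chunk; the next read() fetches the correct chunk.
   long skipped = Math.min(n, length - position);
   position += skipped;
   return skipped;
}
{code}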
[JBoss JIRA] (ISPN-1990) Preload sets the versions to null (repeatable read + write skew)
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-1990?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-1990:
-----------------------------------
Fix Version/s: (was: 5.3.0.Alpha1)
> Preload sets the versions to null (repeatable read + write skew)
> ----------------------------------------------------------------
>
> Key: ISPN-1990
> URL: https://issues.jboss.org/browse/ISPN-1990
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.1.3.FINAL
> Environment: Java 6 (64bits)
> Infinispan 5.2.0-SNAPSHOT
> MacOS
> Reporter: Pedro Ruivo
> Assignee: Galder Zamarreño
> Labels: preload, skew, versioning, write
> Fix For: 5.3.0.Final
>
>
> I think I've spotted an issue when I use repeatable read with write skew check enabled and preload the cache.
>
> I've made a test case to reproduce the bug. It can be found here [1].
> The problem is that each preloaded key is put in the container with version = null. When I try to commit a transaction, I get this exception:
>
> {code}
> java.lang.IllegalStateException: Entries cannot have null versions!
> at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:44)
> at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:81)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AllNodesLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:133)
> at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:64)
> {code}
>
> I think all the info is in the test case, but if you need anything, let me know.
>
> Cheers,
> Pedro
> [1]
> https://github.com/pruivo/infinispan/blob/issue_1/core/src/test/java/org/...
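For context, a minimal sketch of a configuration that should trigger this, using the 5.x programmatic API (builder and method names from memory; treat them as assumptions):
{code}
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction().lockingMode(LockingMode.OPTIMISTIC);
builder.locking().isolationLevel(IsolationLevel.REPEATABLE_READ)
       .writeSkewCheck(true);
builder.versioning().enable().scheme(VersioningScheme.SIMPLE);
builder.loaders().preload(true).addFileCacheStore().location("/tmp/store");
// After a restart, preloaded entries carry version == null, so the first
// transactional write fails the write-skew check at commit time.
{code}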
[JBoss JIRA] (ISPN-1990) Preload sets the versions to null (repeatable read + write skew)
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-1990?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-1990:
----------------------------------------
This is fundamentally broken, since the CacheLoaderManager cannot preload the cache by putting an entry together with its corresponding version (even if the version can be stored as such). IOW, there's no Cache.put() that takes a version. This will be resolved as part of ISPN-2281, so we'll re-evaluate once that is fixed.
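To make that concrete, a purely hypothetical sketch of the missing operation; nothing like it exists on Cache/AdvancedCache today, and all names here are invented for illustration:
{code}
// What preload would need: a put that carries the stored version into the
// data container, so the write-skew check sees a non-null version.
for (InternalCacheEntry entry : store.loadAll()) {
   cache.getAdvancedCache().put(entry.getKey(), entry.getValue(),
         entry.getVersion()); // hypothetical versioned-put overload
}
{code}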
> Preload sets the versions to null (repeatable read + write skew)
> ----------------------------------------------------------------
>
> Key: ISPN-1990
> URL: https://issues.jboss.org/browse/ISPN-1990
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.1.3.FINAL
> Environment: Java 6 (64bits)
> Infinispan 5.2.0-SNAPSHOT
> MacOS
> Reporter: Pedro Ruivo
> Assignee: Galder Zamarreño
> Labels: preload, skew, versioning, write
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> I think I've spotted an issue when I use repeatable read with write skew check enabled and preload the cache.
>
> I've made a test case to reproduce the bug. It can be found here [1].
> The problem is that each preloaded key is put in the container with version = null. When I try to commit a transaction, I get this exception:
>
> {code}
> java.lang.IllegalStateException: Entries cannot have null versions!
> at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:44)
> at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:81)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AllNodesLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:133)
> at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:64)
> {code}
>
> I think all the info is in the test case, but if you need anything, let me know.
>
> Cheers,
> Pedro
> [1]
> https://github.com/pruivo/infinispan/blob/issue_1/core/src/test/java/org/...
[JBoss JIRA] (ISPN-1990) Preload sets the versions to null (repeatable read + write skew)
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-1990?page=com.atlassian.jira.plugin.... ]
Work on ISPN-1990 started by Galder Zamarreño.
> Preload sets the versions to null (repeatable read + write skew)
> ----------------------------------------------------------------
>
> Key: ISPN-1990
> URL: https://issues.jboss.org/browse/ISPN-1990
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.1.3.FINAL
> Environment: Java 6 (64bits)
> Infinispan 5.2.0-SNAPSHOT
> MacOS
> Reporter: Pedro Ruivo
> Assignee: Galder Zamarreño
> Labels: preload, skew, versioning, write
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> I think I've spotted an issue when I use repeatable read with write skew check enabled and preload the cache.
>
> I've made a test case to reproduce the bug. It can be found here [1].
> The problem is that each preloaded key is put in the container with version = null. When I try to commit a transaction, I get this exception:
>
> {code}
> java.lang.IllegalStateException: Entries cannot have null versions!
> at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:44)
> at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:81)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AllNodesLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:133)
> at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:64)
> {code}
>
> I think all the info is in the test case, but if you need anything, let me know.
>
> Cheers,
> Pedro
> [1]
> https://github.com/pruivo/infinispan/blob/issue_1/core/src/test/java/org/...
[JBoss JIRA] (ISPN-2955) Async marshalling executor retry when queue fills
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2955?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2955:
------------------------------------
A better alternative would be to configure the async executor with a different RejectedExecutionHandler. It currently uses AbortPolicy (the default), but CallerRunsPolicy, or a CallerBlocksPolicy that performs a blocking put into the queue, would be much better.
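A sketch of that CallerBlocksPolicy idea (the class name is hypothetical; the JDK only ships AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy): when the bounded queue is full, block the submitting thread until space frees up instead of aborting.
{code}
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class CallerBlocksPolicy implements RejectedExecutionHandler {
   @Override
   public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
      if (executor.isShutdown()) {
         throw new RejectedExecutionException("Executor is shut down");
      }
      try {
         // Block until the bounded queue has room, instead of aborting.
         executor.getQueue().put(r);
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new RejectedExecutionException("Interrupted while enqueuing task", e);
      }
   }
}
{code}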
> Async marshalling executor retry when queue fills
> -------------------------------------------------
>
> Key: ISPN-2955
> URL: https://issues.jboss.org/browse/ISPN-2955
> Project: Infinispan
> Issue Type: Enhancement
> Components: Marshalling
> Affects Versions: 5.2.5.Final
> Reporter: Manik Surtani
> Assignee: Manik Surtani
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> When using an async transport and async marshalling, an executor is used to process the marshalling task in a separate thread, and the caller's thread is allowed to return immediately.
> When the executor's queue fills up and cannot accept any more tasks, it throws a {{RejectedExecutionException}}, causing a very bad user/developer experience. A more correct approach is to catch the {{RejectedExecutionException}}, block, and retry the task submission.
> The end result is that, in the degenerate case (when the executor queue is full), those invocations perform slightly slower instead of throwing exceptions.
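The catch-block-and-retry approach described above could look like this minimal sketch (the method name and back-off interval are illustrative, not from the source):
{code}
void submitWithRetry(ExecutorService executor, Runnable task) throws InterruptedException {
   while (true) {
      try {
         executor.execute(task);
         return; // submitted successfully
      } catch (RejectedExecutionException e) {
         Thread.sleep(50); // queue full: back off briefly, then retry
      }
   }
}
{code}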
[JBoss JIRA] (ISPN-2956) Hot Rod putIfAbsent to take version to handle edge cases
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-2956:
-------------------------------
Description:
Hot Rod's putIfAbsent might have issues in some edge cases:
{quote}I want to know whether the entry being put already exists in the remote cache cluster, or not.
I thought that RemoteCache.putIfAbsent() would be useful for that purpose, i.e.,
{code}
if (remoteCache.putIfAbsent(k, v) == null) {
   // new entry
} else {
   // k already exists
}
{code}
But no.
putIfAbsent() for a new entry may return a non-null value if one of the servers crashed while putting.
The behavior is as follows:
1. The client does putIfAbsent(k,v).
2. The server receives the request and sends replication requests to other servers. If the server crashed before completing replication, some servers own that (k,v), but others do not.
3. The client receives the error. putIfAbsent() internally retries the same request against the next server in the cluster server list.
4. If the next server owns the (k,v), putIfAbsent() returns the (k,v) replicated in step 2, without any error.
So, putIfAbsent() is not reliable for knowing whether the entry being put is *exactly* new.
Does anyone have any idea/workaround for this purpose?{quote}
A workaround is to do this:
{quote}We found a simple solution, which can be applied to our customer's application.
If the value part of each (k,v) being put is unique or contains a unique value, the client can *double-check* whether the entry is new.
{code}
Long val = System.nanoTime(); // a UUID would work as well
Long ret = cache.putIfAbsent(key, val);
if (ret == null || ret.equals(val)) {
   // new entry: the stored value is ours, even after an internal retry
} else {
   // key already exists
}
{code}
We are proposing this workaround, which works in most cases.{quote}
However, this is a bit of a kludge.
Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client generating it.
was:
Hot Rod's putIfAbsent might have issues in some edge cases:
{quote}I want to know whether the entry being put already exists in the remote cache cluster, or not.
I thought that RemoteCache.putIfAbsent() would be useful for that purpose, i.e.,
if (remoteCache.putIfAbsent(k, v) == null) {
   // new entry
} else {
   // k already exists
}
But no.
putIfAbsent() for a new entry may return a non-null value if one of the servers crashed while putting.
The behavior is as follows:
1. The client does putIfAbsent(k,v).
2. The server receives the request and sends replication requests to other servers. If the server crashed before completing replication, some servers own that (k,v), but others do not.
3. The client receives the error. putIfAbsent() internally retries the same request against the next server in the cluster server list.
4. If the next server owns the (k,v), putIfAbsent() returns the (k,v) replicated in step 2, without any error.
So, putIfAbsent() is not reliable for knowing whether the entry being put is *exactly* new.
Does anyone have any idea/workaround for this purpose?{quote}
A workaround is to do this:
{quote}We found a simple solution, which can be applied to our customer's application.
If the value part of each (k,v) being put is unique or contains a unique value, the client can *double-check* whether the entry is new.
Long val = System.nanoTime(); // a UUID would work as well
Long ret = cache.putIfAbsent(key, val);
if (ret == null || ret.equals(val)) {
   // new entry: the stored value is ours, even after an internal retry
} else {
   // key already exists
}
We are proposing this workaround, which works in most cases.{quote}
However, this is a bit of a kludge.
Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client generating it.
> Hot Rod putIfAbsent to take version to handle edge cases
> --------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Feature Request
> Components: Remote protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that purpose, i.e.,
> {code}
> if (remoteCache.putIfAbsent(k, v) == null) {
>    // new entry
> } else {
>    // k already exists
> }
> {code}
> But no.
> putIfAbsent() for a new entry may return a non-null value if one of the servers crashed while putting.
> The behavior is as follows:
> 1. The client does putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to other servers. If the server crashed before completing replication, some servers own that (k,v), but others do not.
> 3. The client receives the error. putIfAbsent() internally retries the same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the (k,v) replicated in step 2, without any error.
> So, putIfAbsent() is not reliable for knowing whether the entry being put is *exactly* new.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We found a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique or contains a unique value, the client can *double-check* whether the entry is new.
> {code}
> Long val = System.nanoTime(); // a UUID would work as well
> Long ret = cache.putIfAbsent(key, val);
> if (ret == null || ret.equals(val)) {
>    // new entry: the stored value is ours, even after an internal retry
> } else {
>    // key already exists
> }
> {code}
> We are proposing this workaround, which works in most cases.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client generating it.
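From the client side, the proposed operation might look like the following hypothetical sketch; putIfAbsentWithVersion does not exist on RemoteCache at this point, and its name and return type are invented purely to make the idea concrete:
{code}
// Hypothetical: putIfAbsent taking a client-supplied version, so that a
// transparently retried create can be recognized as our own write rather
// than a pre-existing entry.
long version = new java.util.Random().nextLong();
VersionedValue<String> prev = remoteCache.putIfAbsentWithVersion(key, value, version);
if (prev == null || prev.getVersion() == version) {
   // new entry (possibly our own write, replicated during a retried request)
} else {
   // key already existed with someone else's value
}
{code}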
[JBoss JIRA] (ISPN-2918) TopologyAwareConsistentHashFactory doesn't distribute data to nodes evenly
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2918?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-2918:
------------------------------------------
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=923928, https://bugzilla.redhat.com/show_bug.cgi?id=924563, https://bugzilla.redhat.com/show_bug.cgi?id=924564 (was: https://bugzilla.redhat.com/show_bug.cgi?id=923928, https://bugzilla.redhat.com/show_bug.cgi?id=924563)
> TopologyAwareConsistentHashFactory doesn't distribute data to nodes evenly
> --------------------------------------------------------------------------
>
> Key: ISPN-2918
> URL: https://issues.jboss.org/browse/ISPN-2918
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.4.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 5.2.x, jdg6
> Fix For: 5.2.6.Final, 5.3.0.Alpha1
>
>
> When the topology of a cluster is "balanced" (i.e. all sites have the same number of racks, all racks have the same number of machines, and all machines have the same number of nodes), the number of segments owned by each node should be approximately the same.
> The current algorithm properly balances the primary owners of the segments, but the backup owners are not balanced, so a node can end up owning many more segments than expected.
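To make the expectation concrete, a hedged sketch against the ConsistentHash API as of 5.2.x (method names from memory; treat them as assumptions):
{code}
// ch is the cache's current ConsistentHash; count how many segments each
// node owns, whether as primary or as backup.
Map<Address, Integer> owned = new HashMap<Address, Integer>();
for (int segment = 0; segment < ch.getNumSegments(); segment++) {
   for (Address owner : ch.locateOwnersForSegment(segment)) {
      Integer count = owned.get(owner);
      owned.put(owner, count == null ? 1 : count + 1);
   }
}
// In a balanced topology every count should be close to the even share:
// numSegments * numOwners / numNodes, e.g. 60 * 2 / 6 = 20.
{code}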