[JBoss JIRA] (ISPN-2560) Distribution ZIP file polluted
by Manik Surtani (JIRA)
Manik Surtani created ISPN-2560:
-----------------------------------
Summary: Distribution ZIP file polluted
Key: ISPN-2560
URL: https://issues.jboss.org/browse/ISPN-2560
Project: Infinispan
Issue Type: Bug
Components: Build process
Affects Versions: 5.2.0.Beta4
Reporter: Manik Surtani
Assignee: Mircea Markus
Priority: Critical
Fix For: 5.2.0.CR1, 5.2.0.Final
There appear to be a lot of files packaged up and archived in various (incorrect and superfluous) places in the ZIP archives.
1. Looking at 5.2.0.Beta4-all.zip, I see:
{noformat}
Multiverse:infinispan-5.2.0.Beta4-all manik $ jar tf infinispan-core.jar | grep "\.sh"
functions.sh
importConfig.sh
Multiverse:infinispan-5.2.0.Beta4-all manik $
{noformat}
Why are these shell scripts in the JAR file?
2. Also, I see similar things in other JAR files:
{noformat}
Multiverse:infinispan-5.2.0.Beta4-all manik $ jar tf modules/demos/ec2/infinispan-ec2-demo.jar | grep "\.sh"
runEC2Demo-all.sh
runEC2Demo-influenza.sh
runEC2Demo-nucleotide.sh
runEC2Demo-protein.sh
runEC2Demo-query.sh
runEC2Demo-reader.sh
Multiverse:infinispan-5.2.0.Beta4-all manik $
{noformat}
{noformat}
Multiverse:infinispan-5.2.0.Beta4-all manik $ jar tf modules/cli-client/infinispan-cli-client.jar | grep "\.sh"
ispn-cli.sh
Multiverse:infinispan-5.2.0.Beta4-all manik $
{noformat}
3. I see these in {{/etc/}}; if I now put {{/etc/}} on my classpath, things break in spectacular ways because the service loader picks up incorrect metadata (see the sketch after the listing below):
{noformat}
Multiverse:infinispan-5.2.0.Beta4-all manik $ ls etc/META-INF/services/
org.infinispan.cli.commands.Command
org.infinispan.cli.connection.Connector
org.infinispan.commands.module.ModuleCommandExtensions
org.infinispan.configuration.parsing.ConfigurationParser
org.infinispan.distexec.mapreduce.spi.MapReduceTaskLifecycle
org.infinispan.distexec.spi.DistributedTaskLifecycle
org.infinispan.factories.components.ModuleMetadataFileFinder
org.infinispan.lifecycle.ModuleLifecycle
Multiverse:infinispan-5.2.0.Beta4-all manik $
{noformat}
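For context on why point 3 breaks things, here is a minimal sketch of how {{java.util.ServiceLoader}} resolves providers; only the {{ModuleLifecycle}} interface name is taken from the listing above, the rest is illustrative. {{ServiceLoader}} reads every {{META-INF/services/<interface>}} file visible on the classpath, so with {{etc/}} on the classpath the stray copies there are consulted in addition to the ones packaged inside the JARs.
{noformat}
import java.util.ServiceLoader;

import org.infinispan.lifecycle.ModuleLifecycle;   // one of the service interfaces listed above

public class ServiceLoaderSketch {
   public static void main(String[] args) {
      // ServiceLoader consults META-INF/services/org.infinispan.lifecycle.ModuleLifecycle
      // on *every* classpath entry; with etc/ on the classpath, the copy under
      // etc/META-INF/services is read too, which is how wrong or duplicate metadata
      // ends up being loaded.
      ServiceLoader<ModuleLifecycle> loader = ServiceLoader.load(ModuleLifecycle.class);
      for (ModuleLifecycle lifecycle : loader)
         System.out.println("Found provider: " + lifecycle.getClass().getName());
   }
}
{noformat}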
4. Why do we package {{etc/infinispan-query-component-metadata.dat}}? That should be a part of infinispan-query.jar, and not in etc.
5. What is in {{/etc/help}}? Looks like resource files for the CLI, which should really be in one of the CLI jars.
Marking this as critical: it is messy and confusing for users, can cause breakage when running some demos, and makes problems very hard to debug.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2240) Per-key lock container leads to superfluous TimeoutExceptions on concurrent access to same key
by Robert Stupp (JIRA)
Robert Stupp created ISPN-2240:
----------------------------------
Summary: Per-key lock container leads to superfluous TimeoutExceptions on concurrent access to same key
Key: ISPN-2240
URL: https://issues.jboss.org/browse/ISPN-2240
Project: Infinispan
Issue Type: Bug
Components: Locking and Concurrency
Affects Versions: 5.1.6.FINAL
Reporter: Robert Stupp
Assignee: Mircea Markus
Attachments: somehow.zip
Hi,
I've encountered a lot of TimeoutExceptions when just running a load test against an Infinispan cluster.
I tracked down the reason and found that the code in org.infinispan.util.concurrent.locks.containers.AbstractPerEntryLockContainer#releaseLock() causes these superfluous TimeoutExceptions.
A small test case is attached (it just prints out timeouts, too-late timeouts and "paints" a lot of dots to the console - more dots per second means better throughput ;-).
In a short test I extended the class ReentrantPerEntryLockContainer and changed the implementation of releaseLock() as follows:
{noformat}
public void releaseLock(Object lockOwner, Object key) {
   ReentrantLock l = locks.get(key);
   if (l != null) {
      if (!l.isHeldByCurrentThread())
         throw new IllegalStateException("Lock for [" + key + "] not held by current thread " + Thread.currentThread());
      // fully release the (re-entrant) lock
      while (l.isHeldByCurrentThread())
         unlock(l, lockOwner);
      // only drop the lock from the map when nobody is queued on it, so waiting
      // threads are never left blocked on a lock instance that is no longer mapped
      if (!l.hasQueuedThreads())
         locks.remove(key);
   }
   else
      throw new IllegalStateException("No lock for [" + key + ']');
}
{noformat}
The main improvement is that a lock is not removed from the concurrent map as long as other threads are still waiting on it.
If the lock is removed from the map while other threads are waiting for it, they may run into timeouts and propagate TimeoutExceptions to the client.
The above method "paints more dots per second", i.e. it gives better throughput for concurrent accesses to the same key.
The re-implemented method should also fix some replication timeout exceptions.
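To make the failure mode concrete, here is a self-contained sketch (class and method names are made up; this is not the Infinispan implementation) of the interleaving described above: once the per-key lock is removed from the map while another thread is still queued on it, a later caller installs and acquires a brand-new lock for the same key, so two threads can own the key through two different lock instances, and the spurious TimeoutExceptions follow from that broken exclusion.
{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class PerKeyLockRaceSketch {
   private final ConcurrentMap<Object, ReentrantLock> locks =
         new ConcurrentHashMap<Object, ReentrantLock>();

   boolean acquire(Object key, long timeoutMillis) throws InterruptedException {
      ReentrantLock lock = locks.get(key);
      if (lock == null) {
         ReentrantLock newLock = new ReentrantLock();
         ReentrantLock existing = locks.putIfAbsent(key, newLock);
         lock = existing == null ? newLock : existing;
      }
      // If the current owner removes "lock" from the map right now, a later caller
      // will create and acquire a brand-new ReentrantLock for the same key, so
      // mutual exclusion on the key is lost while we are still parked here.
      return lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
   }

   void releaseNaive(Object key) {
      ReentrantLock lock = locks.remove(key);   // removed even if threads are queued on it
      if (lock != null)                         // -- the behaviour the patch above avoids
         lock.unlock();
   }
}
{noformat}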
Please, please add this to 5.1.7, if possible.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2510) PrepareCommands should fail on nodes where the cache is not running
by Dan Berindei (JIRA)
Dan Berindei created ISPN-2510:
----------------------------------
Summary: PrepareCommands should fail on nodes where the cache is not running
Key: ISPN-2510
URL: https://issues.jboss.org/browse/ISPN-2510
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache, RPC
Affects Versions: 5.2.0.Beta3
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 5.2.0.Final
When the user stops a cache without stopping the cache manager on that node, subsequent PrepareCommands sent to that node will return a {{SuccessfulResponse}}.
If that node used to be the primary owner of the command's modified key, the originator will proceed with the transaction as if it had acquired a lock on that key. It is thus possible for multiple transactions to believe they hold the lock on the same key at the same time.
On the other hand, in replicated caches it is quite possible that a cache is not running on all the cluster nodes and yet PrepareCommands are broadcast to everyone in parallel. So the solution should not involve sending exceptions (which carry huge stack traces), and the originator should be able to ignore failure responses from nodes that were not targeted in the first place.
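One possible shape of that behaviour, sketched with made-up types: {{CacheNotRunningResponse}} and {{PrepareResponseHandler}} are not real Infinispan APIs, and {{SuccessfulResponse}} here only stands in for the real class of the same name. The stopped node answers with a small marker response instead of an exception, and the originator ignores it unless the responding node was actually supposed to own one of the modified keys.
{noformat}
// Hypothetical sketch only -- none of these types mirror the actual Infinispan classes.
interface Response { }
final class SuccessfulResponse implements Response { }        // stand-in for the real response
final class CacheNotRunningResponse implements Response { }   // lightweight marker, no stack trace

final class PrepareResponseHandler {
   /**
    * @param response      response received for the PrepareCommand from one node
    * @param intendedOwner true if that node owns one of the transaction's modified keys
    * @return true if the transaction may proceed as far as this node is concerned
    */
   boolean accept(Response response, boolean intendedOwner) {
      if (response instanceof SuccessfulResponse)
         return true;
      if (response instanceof CacheNotRunningResponse)
         // A replicated-cache broadcast may hit nodes that never started this cache;
         // that is harmless. But if a key owner is not running the cache, no lock was
         // acquired there and the prepare must fail instead of silently succeeding.
         return !intendedOwner;
      return false;   // any other failure aborts the prepare
   }
}
{noformat}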
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2352) Second invocation of ClusteredQueryImpl.lazyIterator() yields results in reverse order
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2352?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2352:
-----------------------------------
Fix Version/s: 5.2.0.Final
Affects Version/s: 5.1.8.Final
> Second invocation of ClusteredQueryImpl.lazyIterator() yields results in reverse order
> --------------------------------------------------------------------------------------
>
> Key: ISPN-2352
> URL: https://issues.jboss.org/browse/ISPN-2352
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 5.2.0.Alpha4, 5.1.8.Final
> Reporter: Marko Lukša
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.CR1, 5.2.0.Final
>
>
> When you invoke lazyIterator() for the 2nd time on the same ClusteredQueryImpl instance, the iterator returns results in reverse order. This only occurs when the query has a Sort specified.
> Caused by DistributedIterator.setTopDocs(), which inverts the reverse flag on SortField.
> The same is probably also true for ClusteredQueryImpl.iterator()
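To illustrate the kind of bug described here (this is a minimal model, not the actual {{DistributedIterator}} code): a method that flips a shared {{reverse}} flag on each invocation is not idempotent, so the second call undoes the first; deriving the flag from the original, unmodified {{Sort}} on every call would be.
{noformat}
// Minimal model of a non-idempotent "reverse" toggle -- not Infinispan code.
final class SortStateSketch {
   private boolean reverse;                  // shared state mutated on every call

   void setTopDocsBuggy(boolean descendingRequested) {
      reverse = !reverse;                    // second invocation flips it back: results come out reversed
   }

   void setTopDocsFixed(boolean descendingRequested) {
      reverse = descendingRequested;         // idempotent: derived from the request, never toggled
   }
}
{noformat}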
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2352) Second invocation of ClusteredQueryImpl.lazyIterator() yields results in reverse order
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2352?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2352:
----------------------------------------
[~luksa] I think I've replicated the issue.
> Second invocation of ClusteredQueryImpl.lazyIterator() yields results in reverse order
> --------------------------------------------------------------------------------------
>
> Key: ISPN-2352
> URL: https://issues.jboss.org/browse/ISPN-2352
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 5.2.0.Alpha4
> Reporter: Marko Lukša
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.CR1
>
>
> When you invoke lazyIterator() for the 2nd time on the same ClusteredQueryImpl instance, the iterator returns results in reverse order. This only occurs when the query has a Sort specified.
> Caused by DistributedIterator.setTopDocs(), which inverts the reverse flag on SortField.
> The same is probably also true for ClusteredQueryImpl.iterator()
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2205) Design HotRod protocol version 1.2
by Dan Berindei (JIRA)
Dan Berindei created ISPN-2205:
----------------------------------
Summary: Design HotRod protocol version 1.2
Key: ISPN-2205
URL: https://issues.jboss.org/browse/ISPN-2205
Project: Infinispan
Issue Type: Task
Components: Cache Server
Affects Versions: 5.2.0.ALPHA2
Reporter: Dan Berindei
Assignee: Dan Berindei
Priority: Critical
Fix For: 5.2.0.FINAL
The consistent hash representation is changing in 5.2, and we need to modify the HotRod protocol to incorporate those changes. We preserved compatibility with 1.0/1.1 clients via a workaround on the server, but it is not as efficient as it could be.
The new version could also incorporate fixes for older issues like ISPN-1293.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira