[JBoss JIRA] Created: (ISPN-803) Update status of unsupported operations in Javadoc
by Richard Achmatowicz (JIRA)
Update status of unsupported operations in Javadoc
--------------------------------------------------
Key: ISPN-803
URL: https://jira.jboss.org/browse/ISPN-803
Project: Infinispan
Issue Type: Feature Request
Components: Configuration
Reporter: Richard Achmatowicz
Assignee: Galder Zamarreño
Fix For: 4.2.0.ALPHA5
At present, there is precious little information on the server modules and their startup.
Although the Scala sources have a logging framework in place, there seems to be little information actually being logged.
It would be very helpful if a server module could have some DEBUG information added to indicate:
* the phases it goes through when starting, and confirmation of whether or not it has started
* the configuration it is using and some basic details
* similar information for shutdown (a sketch follows below)
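Purely as an illustration of the kind of DEBUG output meant here, a minimal sketch (hypothetical class; java.util.logging is used only to keep the example self-contained, the server modules have their own logging framework):

import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch, not actual server module code. FINE ~ DEBUG.
class ServerModuleStartupLogging {
   private static final Logger log = Logger.getLogger("org.infinispan.server");

   void start(String host, int port) {
      log.log(Level.FINE, "Start phase 1: initialising transport");
      // ... create and bind the transport ...
      log.log(Level.FINE, "Using configuration: host={0}, port={1}",
              new Object[] { host, port });
      // ... start worker threads ...
      log.log(Level.FINE, "Server module started");
   }

   void stop() {
      log.log(Level.FINE, "Stop phase 1: closing transport");
      // ... release resources ...
      log.log(Level.FINE, "Server module stopped");
   }
}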
[JBoss JIRA] Created: (ISPN-809) CassandraCacheLoader not working with HotRod server due to non deterministic toString() in ByteArrayKey
by Jonas Lasson (JIRA)
CassandraCacheLoader not working with HotRod server due to non deterministic toString() in ByteArrayKey
-------------------------------------------------------------------------------------------------------
Key: ISPN-809
URL: https://jira.jboss.org/browse/ISPN-809
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Affects Versions: 4.2.0.CR2
Reporter: Jonas Lasson
Assignee: Manik Surtani
Priority: Critical
Currently the Cassandra store creates Cassandra keys based on the key's toString() method, see below:
private String hashKey(Object key) {
   return entryKeyPrefix + key.toString();
}
When the HotRod server is used, the key will be a ByteArrayKey, whose toString() is non-deterministic and outputs the following (3 examples with the same byte array):
ByteArrayKey{data=ByteArray{size=8, hashCode=33d626a4, array=[2, 62, 5, 74, 79, 78, 65, 83, ..]}}
ByteArrayKey{data=ByteArray{size=8, hashCode=2ada52a1, array=[2, 62, 5, 74, 79, 78, 65, 83, ..]}}
ByteArrayKey{data=ByteArray{size=8, hashCode=5576b9ea, array=[2, 62, 5, 74, 79, 78, 65, 83, ..]}}
As you can see, the hashCode differs even though the byte array is the same.
This is because ByteArrayKey.toString() uses Util.printArray(byte[], true), where true means that a hashCode should be printed as well.
Unfortunately, that hashCode is calculated with byte[].hashCode(), which does not take the data in the byte array into account (it is the default identity hash code).
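A minimal, self-contained Java snippet (not Infinispan code) demonstrating the difference:

import java.util.Arrays;

public class ArrayHashDemo {
   public static void main(String[] args) {
      byte[] a = { 2, 62, 5, 74, 79, 78, 65, 83 };
      byte[] b = a.clone();
      // byte[].hashCode() is Object's identity hash: equal contents give
      // different values, and different values again on every JVM run.
      System.out.println(Integer.toHexString(a.hashCode()));
      System.out.println(Integer.toHexString(b.hashCode()));
      // Arrays.hashCode(byte[]) is computed from the contents: stable.
      System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // true
   }
}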
There are several solutions to the problem:
* Have another mechanism to export unique ids from the key instead of toString() (with a possible toString() fallback)
* Fix ByteArrayKey.toString() so that it returns deterministic data (still a poor solution, as the keys would be very long and meaningless).
* Special handling for ByteArrayKey to calculate the key based on the bytes in the byte array (see the sketch below).
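A minimal sketch of the third option, assuming ByteArrayKey exposes its bytes through an accessor such as getData() (not verified against the actual API):

// Sketch only: hex-encode the key bytes so equal arrays always map to
// the same Cassandra key. getData() is assumed, not verified API.
private String hashKey(Object key) {
   if (key instanceof ByteArrayKey) {
      byte[] data = ((ByteArrayKey) key).getData();
      StringBuilder sb = new StringBuilder(entryKeyPrefix);
      for (byte b : data)
         sb.append(String.format("%02x", b)); // deterministic encoding
      return sb.toString();
   }
   return entryKeyPrefix + key.toString();
}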
[JBoss JIRA] Created: (ISPN-765) Concurrent startup leading to rehashing issues
by Galder Zamarreño (JIRA)
Concurrent startup leading to rehashing issues
----------------------------------------------
Key: ISPN-765
URL: https://jira.jboss.org/browse/ISPN-765
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 4.2.0.BETA1
Reporter: Galder Zamarreño
Assignee: Mircea Markus
Fix For: 4.2.0.CR1
From ISPN-762, there seems to be a rehashing problem when several distributed caches are started concurrently:
Logs can be found in: https://jira.jboss.org/secure/attachment/12338527/topologycache.diff
"Even though all the caches are defined in the right place, there is still a problem during starting 3 nodes at a time. Look at the errors coming from the log files:
* wp-60814 log:
2010-11-08 14:29:26,687 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-2,wp-60814) This is a JOIN event! Wait for notification from new joiner wp-59660
2010-11-08 14:31:26,668 ERROR [org.infinispan.distribution.JoinTask] (Rehasher-wp-60814) Caught exception!
org.infinispan.CacheException: org.infinispan.util.concurrent.TimeoutException: Timed out after 120 seconds waiting for a response from wp-34493
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:122)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:403)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:101)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:125)
at org.infinispan.distribution.JoinTask.retrieveOldCH(JoinTask.java:192)
at org.infinispan.distribution.JoinTask.performRehash(JoinTask.java:87)
at org.infinispan.distribution.RehashTask.call(RehashTask.java:53)
at org.infinispan.distribution.RehashTask.call(RehashTask.java:33)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out after 120 seconds waiting for a response from wp-34493
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:304)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:120)
... 12 more
* wp-34493 log:
2010-11-08 14:29:26,654 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-2,wp-34493) This is a JOIN event! Wait for notification from new joiner wp-59660
2010-11-08 14:29:36,759 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (InfinispanServer-Main) Execution error:
org.infinispan.util.concurrent.TimeoutException: Timed out waiting for valid responses!
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:421)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:101)
...
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:273)
at org.infinispan.CacheDelegate.putIfAbsent(CacheDelegate.java:453)
at org.infinispan.CacheSupport.putIfAbsent(CacheSupport.java:40)
at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:85)
at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:75)
at org.infinispan.server.hotrod.HotRodServer.isViewUpdated(HotRodServer.scala:102)
at org.infinispan.server.hotrod.HotRodServer.org$infinispan$server$hotrod$HotRodServer$$updateTopologyView(HotRodServer.scala:97)
at org.infinispan.server.hotrod.HotRodServer.addSelfToTopologyView(HotRodServer.scala:75)
at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:63)
at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:70)
at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:44)
...
* wp-59660 log:
2010-11-08 14:29:26,699 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (InfinispanServer-Main) Received new cluster view: [wp-34493|2] [wp-34493, wp-60814, wp-59660]
2010-11-08 14:29:26,756 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (InfinispanServer-Main) Cache local address is wp-59660, physical addresses are [10.0.36.136:34302]
2010-11-08 14:29:26,851 ERROR [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-2,wp-59660) Defined caches: [___hotRodTopologyCache]
2010-11-08 14:29:26,851 INFO [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-2,wp-59660) Will try and wait for the cache to start
2010-11-08 14:29:56,864 INFO [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-2,wp-59660) Cache named ___hotRodTopologyCache does not exist on this cache manager!
2010-11-08 14:49:26,897 ERROR [org.infinispan.distribution.JoinTask] (Rehasher-wp-59660) Caught exception!
org.infinispan.CacheException: Unable to retrieve old consistent hash from coordinator even after several attempts at sleeping and retrying!
at org.infinispan.distribution.JoinTask.retrieveOldCH(JoinTask.java:218)
at org.infinispan.distribution.JoinTask.performRehash(JoinTask.java:87)
at org.infinispan.distribution.RehashTask.call(RehashTask.java:53)
at org.infinispan.distribution.RehashTask.call(RehashTask.java:33)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
The first node reaches rehashRpcTimeout=120000 while waiting for the old consistent hash. The second one hits the 10-second timeout while adding itself to the topology view. And the third one also fails to retrieve the old hash, but after a much longer time of 20 minutes.
It looks as though we had a kind of deadlock."
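For reference, a hedged sketch of the reported scenario in plain Java: several nodes starting their distributed caches concurrently so that their join/rehash phases overlap. Only DefaultCacheManager and getCache() are actual Infinispan API; the configuration file name is a placeholder:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.infinispan.manager.DefaultCacheManager;

// Reproduction sketch, not a test from the report: start N cache
// managers concurrently so their join/rehash phases overlap.
public class ConcurrentStartupSketch {
   public static void main(String[] args) throws Exception {
      int nodes = 3;
      CountDownLatch started = new CountDownLatch(nodes);
      ExecutorService pool = Executors.newFixedThreadPool(nodes);
      for (int i = 0; i < nodes; i++) {
         pool.submit(() -> {
            try {
               // "dist-config.xml" is a placeholder for a clustered,
               // distribution-mode configuration.
               DefaultCacheManager cm = new DefaultCacheManager("dist-config.xml");
               cm.getCache(); // starting the cache triggers the join/rehash
               started.countDown();
            } catch (Exception e) {
               e.printStackTrace();
            }
         });
      }
      started.await(); // with the bug, some joiners time out instead
      pool.shutdown();
   }
}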
[JBoss JIRA] Created: (ISPN-807) Generalize Externalizer interface
by Galder Zamarreño (JIRA)
Generalize Externalizer interface
---------------------------------
Key: ISPN-807
URL: https://jira.jboss.org/browse/ISPN-807
Project: Infinispan
Issue Type: Feature Request
Components: Marshalling
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.0.0.ALPHA1, 5.0.0.Final
Make the Externalizer interface generic in order to make it more type-safe, i.e.:
interface Externalizer<T> {
   void writeObject(ObjectOutput output, T object) throws IOException;
   T readObject(ObjectInput input) throws IOException, ClassNotFoundException;
}
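For example, a typed implementation against the proposed interface could look like this (Person is illustrative, not part of Infinispan):

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Illustrative value type.
class Person {
   final String name;
   final int age;
   Person(String name, int age) { this.name = name; this.age = age; }
}

// With the generic interface, no casts are needed in either direction.
class PersonExternalizer implements Externalizer<Person> {
   public void writeObject(ObjectOutput output, Person person) throws IOException {
      output.writeUTF(person.name);
      output.writeInt(person.age);
   }

   public Person readObject(ObjectInput input) throws IOException, ClassNotFoundException {
      // Read fields back in the order they were written.
      return new Person(input.readUTF(), input.readInt());
   }
}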
[JBoss JIRA] Created: (ISPN-812) Memcached flush_all command pipelining problem
by Galder Zamarreño (JIRA)
Memcached flush_all command pipelining problem
----------------------------------------------
Key: ISPN-812
URL: https://jira.jboss.org/browse/ISPN-812
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 4.2.0.CR2, 4.1.0.Final
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 4.2.0.CR3, 4.2.0.Final
From Michal:
"memcached server module flush_all command cannot be in the middle of pipelined command input.
(this is probably tightly connected to JBPAPP-5439, but manifests itself in different scenario as well)
testcase:
$ echo -e "flush_all\r\nget a" | nc localhost 11211
actual output:
SERVER_ERROR org.infinispan.server.core.ServerException: java.lang.NumberFormatException: For input string: "get"
expected output:
OK
END "
[JBoss JIRA] Created: (ISPN-811) Memcached unknown command causes a lost line in command pipeline
by Galder Zamarreño (JIRA)
Memcached unknown command causes a lost line in command pipeline
----------------------------------------------------------------
Key: ISPN-811
URL: https://jira.jboss.org/browse/ISPN-811
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 4.2.0.CR2, 4.1.0.Final
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 4.2.0.CR3, 4.2.0.Final
From Michal:
"memcached server consumes one more line of input than it has to when unknown command occurs.
input:
"bogus\r\ndelete a\r\n"
expected output:
ERROR
NOT_FOUND
actual output:
ERROR
input:
"bogus\r\ndelete a\r\ndelete a\r\n"
expected output:
ERROR
NOT_FOUND
NOT_FOUND
actual output:
ERROR
NOT_FOUND
input: "bogus \r\ndelete a\r\ndelete a\r" (space after bogus) behaves as expected
Check MemcachedDecoder.scala, line 45: readLine consumes the next line even when it was already consumed by readElement."
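A hedged sketch of the described pitfall in plain Java (the real code is the Scala decoder; all names here are illustrative):

import java.io.BufferedReader;
import java.io.IOException;

// Illustrative only: mirrors the described bug, not MemcachedDecoder itself.
class DecoderPitfallSketch {
   String decodeOne(BufferedReader in) throws IOException {
      String line = in.readLine();   // consumes "bogus"
      String[] parts = line.split(" ");
      if (!isKnownCommand(parts[0])) {
         // Bug pattern: reading another line at this point swallows the
         // next pipelined command ("delete a") before replying. The fix
         // is to reply ERROR without consuming any further input.
         return "ERROR";
      }
      return dispatch(parts);
   }

   boolean isKnownCommand(String cmd) { return false; } // stub
   String dispatch(String[] parts) { return "OK"; }     // stub
}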