[JBoss JIRA] (ISPN-3190) Memcached server throwing NullPointerException if the marshaller is not explicitly set
by Martin Gencur (JIRA)
Martin Gencur created ISPN-3190:
-----------------------------------
Summary: Memcached server throwing NullPointerException if the marshaller is not explicitly set
Key: ISPN-3190
URL: https://issues.jboss.org/browse/ISPN-3190
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 5.3.0.CR1
Reporter: Martin Gencur
Assignee: Galder Zamarreño
Fix For: 5.3.0.Final
According to the tutorial at https://docs.jboss.org/author/display/ISPN/Interoperability+between+Embed... it is not necessary to specify a special marshaller under normal circumstances. However, the Memcached server seems to require one in any case: the Memcached client's "get" operation fails because we did not set any marshaller for compatibility mode, so it remained null. Note that I'm not using SpyMemcached.
My understanding is that even if I use SpyMemcached to retrieve the entry, it will be found and returned, but without the marshaller I simply won't be able to interpret the returned value. Still, something will be returned. Is my assumption correct?
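For reference, a minimal sketch of the workaround, assuming the 5.3 programmatic API: explicitly setting a marshaller on the compatibility-mode configuration avoids the NPE. The choice of GenericJBossMarshaller (and its package, which has moved between versions) is an assumption; per the tutorial, none of this should be necessary.
{code}
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.marshall.jboss.GenericJBossMarshaller;

public class CompatibilityMarshallerWorkaround {
   public static void main(String[] args) {
      // Enable compatibility mode and set a marshaller explicitly, even
      // though the documentation says this should not be required.
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.compatibility().enable()
             .marshaller(new GenericJBossMarshaller());
      EmbeddedCacheManager cm = new DefaultCacheManager(builder.build());
      cm.getCache().put("4", "v1"); // matches the key/value in the trace below
      cm.stop();
   }
}
{code}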
{code}
2013-06-05 14:12:19,054 TRACE (MemcachedServerWorker-6) [org.infinispan.container.EntryFactoryImpl] Retrieved from container ImmortalCacheEntry{key=4, value=MetadataImmortalCacheValue {value=v1, metadata=EmbeddedMetadata{lifespan=-1, maxIdle=-1, version=ServerEntryVersion(1)}}}
2013-06-05 14:12:19,054 TRACE (MemcachedServerWorker-6) [org.infinispan.interceptors.CallInterceptor] Executing command: GetKeyValueCommand {key=4, flags=[OPERATION_MEMCACHED]}.
2013-06-05 14:12:19,054 TRACE (MemcachedServerWorker-6) [org.infinispan.commands.read.GetKeyValueCommand] Found entry ImmortalCacheEntry{key=4, value=MetadataImmortalCacheValue {value=v1, metadata=EmbeddedMetadata{lifespan=-1, maxIdle=-1, version=ServerEntryVersion(1)}}}
2013-06-05 14:12:19,055 ERROR (MemcachedServerWorker-6) [org.infinispan.interceptors.InvocationContextInterceptor] ISPN000136: Execution error
java.lang.NullPointerException
at org.infinispan.server.memcached.MemcachedTypeConverter.marshall(MemcachedTypeConverter.scala:58)
at org.infinispan.server.memcached.MemcachedTypeConverter.unboxValue(MemcachedTypeConverter.scala:45)
at org.infinispan.server.memcached.MemcachedTypeConverter.unboxValue(MemcachedTypeConverter.scala:37)
at org.infinispan.interceptors.compat.TypeConverterInterceptor.visitGetKeyValueCommand(TypeConverterInterceptor.java:93)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:128)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:92)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:343)
at org.infinispan.CacheImpl.getCacheEntry(CacheImpl.java:399)
at org.infinispan.DecoratedCache.getCacheEntry(DecoratedCache.java:514)
at org.infinispan.server.memcached.MemcachedDecoder.get(MemcachedDecoder.scala:122)
at org.infinispan.server.core.AbstractProtocolDecoder.decodeKey(AbstractProtocolDecoder.scala:121)
at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:75)
at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:49)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:385)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3184) The DELTA_WRITE flag should force a remote get during state transfer
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3184?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-3184:
------------------------------------
Actually, FineGrainedAtomicHashMap uses ApplyDeltaCommand, which must force a remote get as well.
> The DELTA_WRITE flag should force a remote get during state transfer
> --------------------------------------------------------------------
>
> Key: ISPN-3184
> URL: https://issues.jboss.org/browse/ISPN-3184
> Project: Infinispan
> Issue Type: Bug
> Reporter: Dan Berindei
> Assignee: Dan Berindei
>
> AtomicHashMap and FineGrainedAtomicHashMap, as well as custom DeltaAware implementations, use PutKeyValueCommands with the DELTA_WRITE flag to execute incremental updates. These commands need the previous value of the entry in order to work.
> If a node is joining and it receives a PutKeyValueCommand with the DELTA_WRITE flag before it has received the value of the affected key, it should do a remote get to retrieve the previous value and apply the change on top of that value, just like we do for conditional commands. Not doing so leads to data loss.
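For illustration, a sketch of the check described above as it might look in an interceptor; this is not the actual Infinispan code, and the helper names isStateTransferInProgress(), hasReceivedKey() and remoteGet() are hypothetical:
{code}
// Hypothetical sketch: treat DELTA_WRITE like a conditional command during
// state transfer so the delta is applied on top of the previous value.
@Override
public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
   if (command.hasFlag(Flag.DELTA_WRITE)
         && isStateTransferInProgress()          // joiner still receiving state
         && !hasReceivedKey(command.getKey())) { // value not transferred yet
      // Fetch the previous value from the old owners; applying the delta
      // to a null entry would silently lose data.
      remoteGet(ctx, command.getKey());
   }
   return invokeNextInterceptor(ctx, command);
}
{code}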
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo edited comment on ISPN-2836 at 6/5/13 5:42 AM:
-----------------------------------------------------------
It is simple to add a timeout parameter to MapReduceTask for waiting for remote replies. Note that this timeout is passed directly to JGroups, so it is not a timeout for the whole task, because the current implementation waits for the remote replies sequentially. Also, it is possible to wait forever with a timeout value <= 0. The default value will be the replication timeout.
Does it make sense to you? [~afield] [~mgencur]?
was (Author: pruivo):
It is simple to add a timeout parameter to MapReduceTask for waiting for remote replies. Note that this timeout is passed directly to JGroups, so it is not a timeout for the whole task, because the current implementation waits for the replies sequentially. Also, it is possible to wait forever with a timeout value <= 0. The default value is the replication timeout.
Does it make sense to you? [~afield] [~mgencur]?
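For illustration, the parameter proposed in the comment above might surface in the API roughly like this; the method name timeout() and the fluent style are assumptions, and cache, mapper and reducer are placeholders standing in for the WordCount example:
{code}
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.infinispan.distexec.mapreduce.MapReduceTask;

// Sketch of the proposed per-task timeout. Because remote replies are
// awaited sequentially, the overall task can still take up to
// (number of nodes) * timeout; a value <= 0 would wait forever.
MapReduceTask<String, String, String, Integer> task =
      new MapReduceTask<String, String, String, Integer>(cache);
Map<String, Integer> result = task
      .timeout(30, TimeUnit.SECONDS) // default: the replication timeout
      .mappedWith(mapper)
      .reducedWith(reducer)
      .execute();
{code}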
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: onboard
> Fix For: 5.3.0.Final
>
> Attachments: afield-tcp-521-final.txt, benchmark-mapreduce-multifilesize.xml, dist-udp-no-tx.xml, jgroups-udp.xml, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys, each with a 1MB value, produces a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo commented on ISPN-2836:
-----------------------------------
It is simple to add a timeout parameter to MapReduceTask for waiting for remote replies. Note that this timeout is passed directly to JGroups, so it is not a timeout for the whole task, because the current implementation waits for the replies sequentially. Also, it is possible to wait forever with a timeout value <= 0. The default value is the replication timeout.
Does it make sense to you? [~afield] [~mgencur]?
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: onboard
> Fix For: 5.3.0.Final
>
> Attachments: afield-tcp-521-final.txt, benchmark-mapreduce-multifilesize.xml, dist-udp-no-tx.xml, jgroups-udp.xml, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys, each with a 1MB value, produces a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3189) Extend Grouping API to mapping of DeltaCompositeKeys to the same node as the delta aware value key
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-3189:
---------------------------------
Summary: Extend Grouping API to mapping of DeltaCompositeKeys to the same node as the delta aware value key
Key: ISPN-3189
URL: https://issues.jboss.org/browse/ISPN-3189
Project: Infinispan
Issue Type: Enhancement
Components: Fine-grained API
Affects Versions: 5.3.0.CR1
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
Priority: Minor
Fix For: 6.0.0.Final
Try to understand and analyze the performance impact of extending the Grouping API to support DeltaCompositeKey and to return the owners based on the delta-aware value key.
In the current implementation, the interceptors are responsible for checking whether the key is a DeltaCompositeKey instance and for determining ownership based on the delta-aware value key. A sketch of the proposed extension follows the list below.
Pros and cons of using an extension of the Grouping API
Pros:
* the check for DeltaCompositeKey instances is done at a single point instead of being spread across the code
* less error-prone
Cons:
* possible performance impact, since every getSegment(key) invocation would have to perform the check.
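A minimal sketch of the extension expressed as a Grouper, assuming internals: DeltaCompositeKey is not public API, and the getDeltaAwareValueKey() accessor is hypothetical:
{code}
import org.infinispan.atomic.DeltaCompositeKey;
import org.infinispan.distribution.group.Grouper;

// Sketch: give a DeltaCompositeKey the same group as its underlying
// delta-aware value key, so both map to the same owners/segment.
// The getDeltaAwareValueKey() accessor is an assumption.
public class DeltaCompositeKeyGrouper implements Grouper<Object> {

   @Override
   public String computeGroup(Object key, String group) {
      if (key instanceof DeltaCompositeKey) {
         return ((DeltaCompositeKey) key).getDeltaAwareValueKey().toString();
      }
      return group; // leave all other keys untouched
   }

   @Override
   public Class<Object> getKeyType() {
      return Object.class;
   }
}
{code}
Registered via the grouping configuration, this would centralize the instance check in one place, at the cost of running it on every getSegment(key) call, which is exactly the trade-off listed above.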
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira