[JBoss JIRA] (ISPN-3173) Define a new query operation over hotrod
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3173?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3173:
--------------------------------
Description:
HR request: [op_id] [payload].
The request payload is query-specific (not defined in the HR protocol) and at this stage has the following format: [Q_TYPE] [QUERY_ST] [FIRST_INDEX] [PAGE_SIZE].
Q_TYPE is a query-type identifier; for now it will be JPAQL (the only query type we'll support initially). In the future we'll add different query types as well.
HR response: [success] [payload]
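For illustration, here is a minimal sketch of how a client might encode that request payload. The field encodings (writeUTF/writeInt) and the QueryPayloadEncoder class are assumptions of mine -- the draft above deliberately leaves them unspecified:
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical encoder for the payload sketched above:
// [Q_TYPE] [QUERY_ST] [FIRST_INDEX] [PAGE_SIZE]
public final class QueryPayloadEncoder {
    public static byte[] encode(String queryType, String queryString,
                                int firstIndex, int pageSize) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream data = new DataOutputStream(out);
        data.writeUTF(queryType);    // Q_TYPE, e.g. "JPAQL"
        data.writeUTF(queryString);  // QUERY_ST, the query text itself
        data.writeInt(firstIndex);   // FIRST_INDEX, offset of the first result
        data.writeInt(pageSize);     // PAGE_SIZE, maximum results per page
        data.flush();
        return out.toByteArray();
    }
}
{code}
The server would then answer with [success] [payload], the response payload again being query-type specific.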
> Define a new query operation over hotrod
> ----------------------------------------
>
> Key: ISPN-3173
> URL: https://issues.jboss.org/browse/ISPN-3173
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Mircea Markus
> Assignee: Mircea Markus
>
> HR request: [op_id] [payload].
> The request payload is query-specific (not defined in the HR protocol) and at this stage has the following format: [Q_TYPE] [QUERY_ST] [FIRST_INDEX] [PAGE_SIZE].
> Q_TYPE is a query-type identifier; for now it will be JPAQL (the only query type we'll support initially). In the future we'll add different query types as well.
> HR response: [success] [payload]
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3173) Add a query operation over hotrod
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3173?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3173:
--------------------------------
Summary: Add a query operation over hotrod (was: Define a new query operation over hotrod)
> Add a query operation over hotrod
> ----------------------------------
>
> Key: ISPN-3173
> URL: https://issues.jboss.org/browse/ISPN-3173
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Mircea Markus
> Assignee: Mircea Markus
>
> HR request: [op_id] [payload].
> The request payload is query-specific (not defined in the HR protocol) and at this stage has the following format: [Q_TYPE] [QUERY_ST] [FIRST_INDEX] [PAGE_SIZE].
> Q_TYPE is a query-type identifier; for now it will be JPAQL (the only query type we'll support initially). In the future we'll add different query types as well.
> HR response: [success] [payload]
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3183?page=com.atlassian.jira.plugin.... ]
Tomas Sykora commented on ISPN-3183:
------------------------------------
And, in the case of 5.2 to 5.3: if I add an entry to the target (5.3), it is replicated to the source RemoteCacheStore (5.2), so the configuration should be OK.
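For reference, the target-side wiring in this kind of rolling-upgrade setup looks roughly like the sketch below. This is a reconstruction from the 5.x programmatic API, not the actual test configuration; host, port and cache name are placeholders:
{code:java}
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.loaders.remote.configuration.RemoteCacheStoreConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Hypothetical target (5.3) cache that loads missing entries from the
// source (5.2) node through a RemoteCacheStore.
public class TargetNodeConfig {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.loaders()
               .addStore(RemoteCacheStoreConfigurationBuilder.class)
                  .remoteCacheName("default")   // placeholder: cache on the source
                  .hotRodWrapping(true)         // keep values in Hot Rod wire format
                  .addServer()
                     .host("localhost")         // placeholder: source node address
                     .port(11222);              // placeholder: source Hot Rod port
        DefaultCacheManager cm = new DefaultCacheManager(builder.build());
        cm.getCache();                          // target cache backed by the source
    }
}
{code}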
> HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
> ----------------------------------------------------------------------------------
>
> Key: ISPN-3183
> URL: https://issues.jboss.org/browse/ISPN-3183
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Mircea Markus
> Attachments: 52to52sourceTrace, 52to52targetTrace, 52to53sourceTrace, 52to53targetTrace
>
>
> Scenario (typical for rollups):
> Start the source node, put entries.
> Start the target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
> For 5.2 to 5.2 it works perfectly.
> For 5.2 source and 5.3 target -- we have problems here.
> Sorry that I can't provide any valuable info besides the TRACE logs.
> 4 TRACE logs -- rollups from 5.2 to 5.2 (source log and target log) + rollups from 5.2 to 5.3 (source log and target log).
> Very quick summary:
> 5.2 to 5.2 on target: Entry exists in loader? true
> 5.2 to 5.3 on target:
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
> What changed in RemoteCacheStore? What changed in HotRod? Any idea? Let me know, thank you!
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3183?page=com.atlassian.jira.plugin.... ]
Tomas Sykora updated ISPN-3183:
-------------------------------
Summary: HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS (was: HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS)
> HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3183
> URL: https://issues.jboss.org/browse/ISPN-3183
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Mircea Markus
> Attachments: 52to52sourceTrace, 52to52targetTrace, 52to53sourceTrace, 52to53targetTrace
>
>
> Scenario (typical for rollups):
> Start the source node, put entries.
> Start the target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
> For 5.2 to 5.2 it works perfectly.
> For 5.2 source and 5.3 target -- we have problems here.
> Sorry that I can't provide any valuable info besides the TRACE logs.
> 4 TRACE logs -- rollups from 5.2 to 5.2 (source log and target log) + rollups from 5.2 to 5.3 (source log and target log).
> Very quick summary:
> 5.2 to 5.2 on target: Entry exists in loader? true
> 5.2 to 5.3 on target:
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
> What changed in RemoteCacheStore? What changed in HotRod? Any idea? Let me know, thank you!
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3183?page=com.atlassian.jira.plugin.... ]
Tomas Sykora updated ISPN-3183:
-------------------------------
Attachment: 52to52sourceTrace
52to52targetTrace
52to53sourceTrace
52to53targetTrace
> HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
> ----------------------------------------------------------------------------------
>
> Key: ISPN-3183
> URL: https://issues.jboss.org/browse/ISPN-3183
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Mircea Markus
> Attachments: 52to52sourceTrace, 52to52targetTrace, 52to53sourceTrace, 52to53targetTrace
>
>
> Scenario (typical for rollups):
> Start the source node, put entries.
> Start the target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
> For 5.2 to 5.2 it works perfectly.
> For 5.2 source and 5.3 target -- we have problems here.
> Sorry that I can't provide any valuable info besides the TRACE logs.
> 4 TRACE logs -- rollups from 5.2 to 5.2 (source log and target log) + rollups from 5.2 to 5.3 (source log and target log).
> Very quick summary:
> 5.2 to 5.2 on target: Entry exists in loader? true
> 5.2 to 5.3 on target:
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
> 16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
> What changed in RemoteCacheStore? What changed in HotRod? Any idea? Let me know, thank you!
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3183) HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
by Tomas Sykora (JIRA)
Tomas Sykora created ISPN-3183:
----------------------------------
Summary: HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly store data from RCS
Key: ISPN-3183
URL: https://issues.jboss.org/browse/ISPN-3183
Project: Infinispan
Issue Type: Bug
Reporter: Tomas Sykora
Assignee: Mircea Markus
Scenario (typical for rollups):
Start the source node, put entries.
Start the target node, which points to the source (the source is its RemoteCacheStore now), and try to get entries.
For 5.2 to 5.2 it works perfectly.
For 5.2 source and 5.3 target -- we have problems here.
Sorry that I can't provide any valuable info besides the TRACE logs.
4 TRACE logs -- rollups from 5.2 to 5.2 (source log and target log) + rollups from 5.2 to 5.3 (source log and target log).
Very quick summary:
5.2 to 5.2 on target: Entry exists in loader? true
5.2 to 5.3 on target:
16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null
What changed in RemoteCacheStore? What changed in HotRod? Any idea? Let me know, thank you!
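To make the reproduction steps concrete, the client side of the scenario would look something like the sketch below. It uses the plain Hot Rod client API; hosts, ports and keys are placeholders, not the actual test setup:
{code:java}
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

// Hypothetical reproduction: put on the 5.2 source, then read through the
// 5.3 target, which should pull the entry from its RemoteCacheStore.
public class RollupRepro {
    public static void main(String[] args) {
        RemoteCacheManager sourceRcm = new RemoteCacheManager("localhost", 11222);
        RemoteCacheManager targetRcm = new RemoteCacheManager("localhost", 11322);

        RemoteCache<String, String> sourceCache = sourceRcm.getCache();
        sourceCache.put("k1", "v1");             // 1. put entries on the source

        RemoteCache<String, String> targetCache = targetRcm.getCache();
        String v = targetCache.get("k1");        // 2. expected "v1"; null is the bug
        System.out.println("target sees: " + v);
    }
}
{code}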
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Alan Field (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Alan Field commented on ISPN-2836:
----------------------------------
[~pruivo] No, that version will work fine to load the cache and reproduce the timeout. Thanks
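(For readers following the thread: the WordCount job in question is built on the 5.x Map/Reduce API, roughly as in the sketch below. The mapper/reducer bodies are illustrative stand-ins of mine, not the actual RadarGun/Infinispan example classes.)
{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.StringTokenizer;
import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.MapReduceTask;
import org.infinispan.distexec.mapreduce.Reducer;

// Illustrative WordCount over a distributed cache; MapReduceTask.execute()
// is the blocking call where the TimeoutException below surfaces.
public class WordCount {
    static class WordMapper implements Mapper<String, String, String, Integer> {
        @Override
        public void map(String key, String value, Collector<String, Integer> c) {
            StringTokenizer tokens = new StringTokenizer(value);
            while (tokens.hasMoreTokens())
                c.emit(tokens.nextToken(), 1);  // one count per word occurrence
        }
    }

    static class WordReducer implements Reducer<String, Integer> {
        @Override
        public Integer reduce(String word, Iterator<Integer> counts) {
            int sum = 0;
            while (counts.hasNext())
                sum += counts.next();
            return sum;                         // total occurrences of the word
        }
    }

    public static Map<String, Integer> run(Cache<String, String> cache) {
        return new MapReduceTask<String, String, String, Integer>(cache)
                .mappedWith(new WordMapper())
                .reducedWith(new WordReducer())
                .execute();
    }
}
{code}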
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: onboard
> Fix For: 5.3.0.Final
>
> Attachments: afield-tcp-521-final.txt, benchmark-mapreduce-multifilesize.xml, dist-udp-no-tx.xml, jgroups-udp.xml, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys with a value size of 1MB is producing a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo commented on ISPN-2836:
-----------------------------------
[~afield] No problem. I've rolled back the change to:
cacheWrapper.put(bucket, key, buffer.asCharBuffer().toString());
Is it different from what you have?
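(A hedged reading of that line, assuming buffer is the 1 MB value buffer used by the load: viewing a 1 MB ByteBuffer as a CharBuffer yields a 512K-character String, i.e. roughly 1 MB of character data, which matches the value size described in the issue. A self-contained check:)
{code:java}
import java.nio.ByteBuffer;

public class ValueSizeCheck {
    public static void main(String[] args) {
        // A 1 MB ByteBuffer viewed as a CharBuffer has two bytes per char,
        // so toString() yields a 524288-character String.
        ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
        String value = buffer.asCharBuffer().toString();
        System.out.println(value.length()); // prints 524288
    }
}
{code}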
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: onboard
> Fix For: 5.3.0.Final
>
> Attachments: afield-tcp-521-final.txt, benchmark-mapreduce-multifilesize.xml, dist-udp-no-tx.xml, jgroups-udp.xml, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys with a value size of 1MB is producing a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira