Moving JCache annotation implementation and TCK run from infinispan-cdi to infinispan-jcache
by Galder Zamarreño
Hi all,
The JSR-107 TCK includes an optional annotation TCK that can also be run.
Currently this is run as part of https://github.com/infinispan/infinispan/tree/master/cdi/tck-runner, but I'd like to integrate it with the JSR-107 implementation TCK runner so that both TCKs are run from a single project:
https://github.com/galderz/infinispan/tree/t_2639/jcache/src/it/tck-runner
Btw, I've recently discovered the Maven Invoker plugin, which is ideal for running integration tests for a particular project, such as a TCK. The nice thing about it is that if you're building integration tests for project X, it runs them as part of the build for project X, so there's no need for a separate project that you have to invoke manually (as we have now with the Lucene integration tests). For more info, check http://maven.apache.org/plugins/maven-invoker-plugin/ - I discovered it while looking at how Dagger tests its CDI implementation.
Also, I've renamed the JSR-107 module to JCache, because that's the name of the API, and in the future there could be other JSRs that deal with further improvements to JCache, so we don't really want to be tied to a particular JSR number.
The other thing I'd like to do is remove the JCache annotation support from https://github.com/infinispan/infinispan/tree/master/cdi/extension and put it in the JSR-107 implementation I'm finishing.
That way, all JCache/JSR-107 related stuff is under the same umbrella (both implementation and TCK run integration tests):
https://github.com/galderz/infinispan/tree/t_2639/jcache
This would be a trivial subtask of https://issues.jboss.org/browse/ISPN-2639
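In case it helps, this is the kind of annotation usage that would move over - a minimal, made-up example; only @CacheResult and its cacheName attribute come from the JCache (JSR-107) annotation API:

import javax.cache.annotation.CacheResult;

// Made-up service class, only for illustration.
public class GreetingService {

   // With the annotation support active, the first call per name executes
   // the method body; subsequent calls with the same argument are served
   // from the "greetings" cache.
   @CacheResult(cacheName = "greetings")
   public String greet(String name) {
      return "Hello, " + name;
   }
}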
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
JGroups 3.3.0.Beta1
by Bela Ban
FYI,
added to Nexus.
This completes the implementation of message batching in all protocols,
and contains the complete Async Invocation API as well. Once we have an
implementation of async request handling in Infinispan, which runs
independent transactions in separate threads from an internal thread
pool, I expect a significant performance increase!
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Re: [infinispan-dev] ISPN-2808 - thread pool for incoming message [feedback]
by Mircea Markus
On 27 Feb 2013, at 19:06, Pedro Ruivo wrote:
> Hi all,
>
> I'm working on ISPN-2808 and I want some feedback about it (code is here [1])
>
> I'm starting to implement this feature, but I know that the Asynchronous Invocation API is not totally finished in JGroups.
>
> My idea is to use an executor service in CommandAwareRpcDispatcher (CARD): when a request (command) is received, it checks whether it is worth moving the command execution to another thread (in this line [2])
>
> For now, I'm thinking of moving all the write commands, the lock control command, the prepare command and the commit command to the executor service (note: the commit command is only moved in DIST mode with L1 enabled).
You might also want to move the commit command there when we have a tx cache with a cache store - it's during the commit that the data is written to the cache store, and that can take time.
> first question: do you think it is fine to move the commands to the executor service in CARD, or should I move this functionality to the InboundInvocationHandler?
+1 for the InboundInvocationHandler: with ISPN-2849 we'll build the tx dependency right before invoking the interceptor chain (potentially in a new interceptor), so I think the closer you move it to the interceptor chain, the better.
> second question: do you have in mind other commands that may block the OOB/regular threads and should be moved to a thread in the executor service?
Generally, all the long-processing commands (lock acquisition or interaction with a cache store) would be better executed in this pool, in order to avoid deadlocking the OOB/regular thread pools.
Looking at the command hierarchy for long-processing commands:
- StateResponseCommand seems to be a good candidate, as it might acquire locks
- IndexUpdateCommand/ClusterQueryCommand - I'll let Sanne comment on these two, which might require an update to the query module as well
- MapCombineCommand, if a cache loader is present (it iterates over the entries in the loader)
Dan/Adrian, care to comment on the CacheTopologyControlCommand?
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
Fwd: ISPN-2808 - thread pool for incoming message [feedback]
by Pedro Ruivo
-------- Original Message --------
Subject: ISPN-2808 - thread pool for incoming message [feedback]
Date: Wed, 27 Feb 2013 19:06:49 +0000
From: Pedro Ruivo <pedro(a)infinispan.org>
To: infinispan-core-dev(a)infinispan.org <infinispan-core-dev(a)infinispan.org>
Hi all,
I'm working on ISPN-2808 and I want some feedback about it (code is here
[1])
I'm starting to implement this feature, but I know that the Asynchronous
Invocation API is not totally finished in JGroups.
My idea is to use an executor service in CommandAwareRpcDispatcher
(CARD): when a request (command) is received, it checks whether it is
worth moving the command execution to another thread (in this line [2])
For now, I'm thinking of moving all the write commands, the lock control
command, the prepare command and the commit command to the executor service
(note: the commit command is only moved in DIST mode with L1 enabled).
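To make this more concrete, here is a rough sketch of the kind of dispatch I have in mind (class and method names below are made up for illustration - this is not the actual CARD code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: commands that may block (writes, lock control, prepare,
// commit) are handed to a dedicated pool instead of being executed on the
// JGroups OOB/regular thread that delivered them.
final class RemoteCommandDispatcher {

   private final ExecutorService remoteCommandsExecutor =
         Executors.newFixedThreadPool(8);

   void handle(final Object command) {
      if (shouldOffload(command)) {
         // potentially blocking commands run in the dedicated pool, freeing
         // the JGroups thread to keep delivering messages
         remoteCommandsExecutor.execute(new Runnable() {
            public void run() {
               invoke(command);
            }
         });
      } else {
         invoke(command); // cheap commands stay on the delivery thread
      }
   }

   private boolean shouldOffload(Object command) {
      // placeholder for "is this a write/lock/prepare/commit command?"
      return command instanceof PotentiallyBlockingCommand;
   }

   private void invoke(Object command) {
      // placeholder for passing the command up to the interceptor chain
   }

   interface PotentiallyBlockingCommand {
   }
}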
first question: do you think it is fine to move the commands to the
executor service in CARD, or should I move this functionality to the
InboundInvocationHandler?
second question: do you have in mind other commands that may block the
OOB/regular threads and should be moved to a thread in the executor service?
Any other feedback is welcome.
thanks!
Cheers,
Pedro
PS. should I move this topic to ISPN-DEV?
[1] https://github.com/pruivo/infinispan/tree/ISPN-2808
[2]
https://github.com/pruivo/infinispan/commit/a267da0b2a4c785279141a9df1924...
Message batching in JGroups
by Bela Ban
I'm not sure adding receive(MessageBatch) to Receiver and to UpHandler
is a benefit to applications. The current implementation simply calls
receive(Message) or up(new Event(Event.MSG, msg)), so each message in a
batch is delivered in turn.
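For reference, this is roughly what the current behaviour boils down to - just a sketch for illustration, not the actual JGroups code; it only assumes the MessageBatch and Receiver types from 3.3:

import org.jgroups.Message;
import org.jgroups.Receiver;
import org.jgroups.util.MessageBatch;

// Sketch of the current per-message delivery.
final class BatchUnpacker {

   static void deliver(MessageBatch batch, Receiver receiver) {
      for (Message msg : batch) {
         receiver.receive(msg); // each message in the batch is delivered in turn
      }
   }
}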
I thought that - if it turns out we need this - I can always add it later,
e.g. in 4.0, where API breakage is allowed.
Thoughts? I know this is pretty new, so folks have probably not played
with this feature yet...
[1] https://issues.jboss.org/browse/JGRP-1581
--
Bela Ban, JGroups lead (http://www.jgroups.org)
L1OnRehashNotConcurrentTest
by Manik Surtani
The test passes, but dumps a lot of warnings and exceptions to the console. Is this expected?
Running org.infinispan.distribution.L1OnRehashNotConcurrentTest
Configuring TestNG with: org.apache.maven.surefire.testng.conf.TestNG652Configurator@7addd49d
Transport protocol stack used = tcp
[testng-L1OnRehashNotConcurrentTest] Test testInvalidationBehaviorOnRehash(org.infinispan.distribution.L1OnRehashNotConcurrentTest) succeeded.
Test suite progress: tests succeeded: 1, failed: 0, skipped: 0.
2013-02-27 14:42:51,252 WARN [InboundTransferTask] (transport-thread-2,L1OnRehashNotConcurrentTest-NodeD) ISPN000210: Failed to request segments [1, 3, 6] of cache dist from node L1OnRehashNotConcurrentTest-NodeB-50866 (node will not be retried)
2013-02-27 14:42:51,253 WARN [InboundTransferTask] (transport-thread-0,L1OnRehashNotConcurrentTest-NodeE) ISPN000210: Failed to request segments [4, 7, 8] of cache dist from node L1OnRehashNotConcurrentTest-NodeB-50866 (node will not be retried)
2013-02-27 14:42:51,254 WARN [InboundTransferTask] (transport-thread-4,L1OnRehashNotConcurrentTest-NodeC) ISPN000210: Failed to request segments [0, 2, 5, 44] of cache dist from node L1OnRehashNotConcurrentTest-NodeB-50866 (node will not be retried)
2013-02-27 14:42:51,262 WARN [InboundTransferTask] (transport-thread-2,L1OnRehashNotConcurrentTest-NodeD) ISPN000210: Failed to request segments [1, 3, 5, 6, 8] of cache dist from node L1OnRehashNotConcurrentTest-NodeC-26107 (node will not be retried)
2013-02-27 14:42:51,263 WARN [InboundTransferTask] (transport-thread-0,L1OnRehashNotConcurrentTest-NodeE) ISPN000210: Failed to request segments [0, 2, 3, 6, 8] of cache dist from node L1OnRehashNotConcurrentTest-NodeC-26107 (node will not be retried)
2013-02-27 14:42:51,272 WARN [StateConsumerImpl] (OOB-2,ISPN,L1OnRehashNotConcurrentTest-NodeD-31496) Received unsolicited state from node L1OnRehashNotConcurrentTest-NodeE-9441 for segment 10 of cache dist
2013-02-27 14:42:51,272 ERROR [RpcManagerImpl] (transport-thread-5,L1OnRehashNotConcurrentTest-NodeE) ISPN000073: Unexpected error while replicating
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2188)
at org.jgroups.blocks.Request.responsesComplete(Request.java:197)
at org.jgroups.blocks.Request.execute(Request.java:89)
at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:398)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:257)
at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:187)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
2013-02-27 14:42:51,272 WARN [StateConsumerImpl] (OOB-2,ISPN,L1OnRehashNotConcurrentTest-NodeD-31496) Received unsolicited state from node L1OnRehashNotConcurrentTest-NodeE-9441 for segment 41 of cache dist
2013-02-27 14:42:51,273 ERROR [OutboundTransferTask] (transport-thread-5,L1OnRehashNotConcurrentTest-NodeE) Failed to send entries to node L1OnRehashNotConcurrentTest-NodeD-31496 : java.lang.InterruptedException
org.infinispan.CacheException: java.lang.InterruptedException
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:185)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:257)
at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:187)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2188)
at org.jgroups.blocks.Request.responsesComplete(Request.java:197)
at org.jgroups.blocks.Request.execute(Request.java:89)
at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:398)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
... 12 more
2013-02-27 14:42:51,273 WARN [StateConsumerImpl] (OOB-2,ISPN,L1OnRehashNotConcurrentTest-NodeD-31496) Received unsolicited state from node L1OnRehashNotConcurrentTest-NodeE-9441 for segment 11 of cache dist
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.564 sec
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
changing the CacheQuery.list() method to retrieve a typed list.
by Xavier Coulon
Hello,
[This mail summarizes a conversation that I started with Sanne and Emmanuel, off-list.]
The current CacheQuery.list() method returns a List<Object>, which is less handy to manipulate than a typed list (List<Book>, for example).
My initial question is: could the CacheQuery.list() method be changed into something like CacheQuery.list(Book.class) which would return a List<Book> ?
After discussing this subject with Sanne and Emmanuel, it seems to be interesting but not so trivial, partly because of the way Infinispan supports projections.
Which brings me to the suggestion that, in the case of projections, the result list could still be a List<Book> in which only a subset of the attributes would be set.
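To illustrate the shape of the API I have in mind (the typed method below is hypothetical, not part of the current CacheQuery interface):

import java.util.List;

// Hypothetical sketch of the proposal; not the current CacheQuery API.
interface TypedCacheQuery {

   List<Object> list();              // what exists today

   // proposed typed variant, e.g. query.list(Book.class) returning List<Book>
   <T> List<T> list(Class<T> type);
}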
Incidentally, another user opened a similar issue last week (nice coincidence): https://issues.jboss.org/browse/ISPN-2826. Sanne also pointed me to https://issues.jboss.org/browse/ISPN-949, which is about using something similar to ResultTransformers in Infinispan.
Thanks,
Best regards,
/Xavier
Re: [infinispan-dev] Blocking issue in TO State Transfer
by Dan Berindei
On Tue, Feb 26, 2013 at 12:57 PM, Pedro Ruivo <pedro(a)infinispan.org> wrote:
> hi,
>
> I found the blocking problem with the state transfer this morning. It
> happens because of the reordering of a regular and an OOB message.
>
> Below is a simplification of what is happening for two nodes:
>
> A: total order broadcasts rebalance_start
>
> B: (incoming thread) delivers rebalance_start
> B: has no segments to request so the rebalance is done
> B: sends async request with rebalance_confirm (unicast #x)
> B: sends the rebalance_start response (unicast #x+1) (the response is a
> regular message)
>
> A: receives rebalance_start response (unicast #x+1)
> A: in UNICAST2, it detects the message is out-of-order and blocks the
> response in the sender window (i.e. the message #x is missing)
> A: receives the rebalance_confirm (unicast #x)
> A: delivers rebalance_confirm. Infinispan blocks this command until all
> the rebalance_start responses are received ==> this causes a deadlock! (because the response is blocked in the unicast layer)
> (because the response is blocked in unicast layer)
>
> Question: can the request's response message always be sent as OOB? (I
> think the answer should be no...)
>
>
We could, if Bela adds the send(Message) method to the Response
interface... and personally I think it would be better to make all
responses OOB (as in JGroups 3.2.x). I don't have any data to back this up,
though...
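For clarity, making a response OOB would mean something like this at the JGroups level - just a sketch, assuming the Message.Flag API of JGroups 3.x; this is not the actual MessageDispatcher/RequestCorrelator code:

import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.Message;

// Sketch only.
final class OobResponses {

   static void sendResponse(JChannel channel, Address requester, Object returnValue)
         throws Exception {
      Message response = new Message(requester, null, returnValue);
      // OOB messages bypass the regular per-sender ordering, so the response
      // cannot be held back behind an earlier, not-yet-delivered seqno.
      response.setFlag(Message.Flag.OOB);
      channel.send(response);
   }
}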
> My suggestion: when I deliver a rebalance_confirm command (which is sent
> async), can I move it to a thread in async_thread_pool_executor?
>
>
I have a WIP fix for https://issues.jboss.org/browse/ISPN-2825, which should
stop blocking the REBALANCE_CONFIRM commands on the coordinator:
https://github.com/danberindei/infinispan/tree/t_2825_m
I haven't issued a PR yet because I'm still getting a failure in
ClusterTopologyManagerTest, I think because of a JGroups issue (RSVP not
receiving an ACK from itself). I'll let you know when I find out...
> Weird thing: last night I tried more than 5 times in a row with UNICAST3
> and it never blocked. Could this mean a problem with UNICAST3, or did I
> just get lucky?
>
>
Even though the REBALANCE_CONFIRM command is sent async, the message is
still OOB. I think UNICAST/2/3 should not block any regular message waiting
for the processing of an OOB message, as long as that message was received,
so maybe the problem is in UNICAST2?
> Any other suggestion?
>
> Cheers,
> Pedro
>
>
>
Re: [infinispan-dev] Blocking issue in TO State Transfer
by Mircea Markus
Adding inifinispan-dev, as other people might be interested in this as well.
On 26 Feb 2013, at 10:57, Pedro Ruivo wrote:
> hi,
>
> I found the blocking problem with the state transfer this morning. It happens because of the reordering of a regular and an OOB message.
>
> Below is a simplification of what is happening for two nodes:
>
> A: total order broadcasts rebalance_start
>
> B: (incoming thread) delivers rebalance_start
> B: has no segments to request so the rebalance is done
> B: sends async request with rebalance_confirm (unicast #x)
> B: sends the rebalance_start response (unicast #x+1) (the response is a regular message)
>
> A: receives rebalance_start response (unicast #x+1)
> A: in UNICAST2, it detects the message is out-of-order and blocks the response in the sender window (i.e. the message #x is missing)
> A: receives the rebalance_confirm (unicast #x)
> A: delivers rebalance_confirm. Infinispan blocks this command until all the rebalance_start responses are received ==> this causes a deadlock! (because the response is blocked in the unicast layer)
I'm wondering why this happens: the "rebalance_start response" (regular unicast #x+1) should not wait for the rebalance_confirm (an OOB message), as there's no ordering between them; it should just be passed up the stack by JGroups.
>
> Question: can the request's response message always be sent as OOB? (I think the answer should be no...)
>
> My suggestion: when I deliver a rebalance_confirm command (which is sent async), can I move it to a thread in async_thread_pool_executor?
>
> Weird thing: last night I tried more than 5 times in a row with UNICAST3 and it never blocked. Could this mean a problem with UNICAST3, or did I just get lucky?
>
> Any other suggestion?
>
> Cheers,
> Pedro
>
>
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)