Keeping track of locked nodes
by Sanne Grinovero
I just noticed that org.infinispan.transaction.LocalTransaction is
keeping track of the Addresses on which locks were acquired.
That surprises me... why should it ever be interested in the
specific Address? I'd expect it to be able to figure that out when
needed, especially since the Address owning the lock might change over
time, so I don't understand why it tracks a specific node.
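A minimal sketch of what I'd expect instead (DistributionManager.locate()
is the real API as far as I remember; the wrapper class is just
illustration):

    import java.util.List;
    import org.infinispan.distribution.DistributionManager;
    import org.infinispan.remoting.transport.Address;

    class LockOwnerResolver {
        // Resolve the current lock owner on demand instead of storing
        // the Address at lock-acquisition time; this way a topology
        // change cannot leave a stale Address behind.
        Address currentLockOwner(Object key, DistributionManager dm) {
            List<Address> owners = dm.locate(key);
            return owners.get(0);   // first owner in the consistent hash
        }
    }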
Cheers,
Sanne
Fwd: looking again at AS7-3290 and Hibernate 4.0.1...
by Galder Zamarreño
Btw, this discussion should be extended to the Infinispan dev list in case we can improve on the custom commands SPI.
In the case of Hibernate 2LC, the custom command is really a cache-specific one, so we could potentially tie cache-specific custom commands to the cache lifecycle, but I still don't see how that can really help with the issue below.
Begin forwarded message:
> From: Galder Zamarreño <galder@redhat.com>
> Subject: Re: looking again at AS7-3290 and Hibernate 4.0.1...
> Date: February 6, 2012 10:49:03 AM GMT+01:00
> To: Scott Marlow <smarlow@redhat.com>
> Cc: Tristan Tarrant <ttarrant@redhat.com>, Steve Ebersole <steve@hibernate.org>, Paul Ferraro <pferraro@redhat.com>
>
>
> On Feb 3, 2012, at 6:43 PM, Scott Marlow wrote:
>
>> On 02/03/2012 10:33 AM, Scott Marlow wrote:
>>> I was just reviewing the change we made to address
>>> https://issues.jboss.org/browse/AS7-3290 (Infinispan needed to see the
>>> Hibernate-Infinispan modules services, when AS7 constructs the
>>> Infinispan global component registry).
>>>
>>> This turns into a dependency from the AS7
>>> org.jboss.as.clustering.infinispan module onto our
>>> org.hibernate.infinispan module (contains the Hibernate-Infinispan 4.0.1
>>> jar).
>>>
>>> Community users are also asking to use the Infinispan 2LC with Hibernate
>>> 3.6.x, which we don't ship. I would imagine that the same request
>>> will come in for OGM in the future.
>
> If they wanna use Infinispan 2LC with Hibernate 3.6.x, they won't be able to use Infinispan 5.1.x. So far, 3.6.x has only been tested with 4.2.x.
>
> Are you sure this is gonna be supported??
>
>>>
>>> Is there another way to avoid this dependency between the constructor of
>>> the Infinispan global component registry and all of the persistence
>>> provider modules that want to use the 2lc?
>>
>> Is this the commit that brought this requirement in? https://github.com/hibernate/hibernate-orm/commit/cc9fbf42a9a75a231767590...
>>
>> Could this be made optional or reverted in a 4.0.2.Final build?
>
> Hmmmm, that would require going back to using a cache as a way to send evict-all invalidation messages around the cluster, which is a hack. Custom commands are the best way to handle this use case IMO.
>
> At the moment, these commands and their factories are loaded on startup, when the GCR is created.
>
> I wonder if it would help if Infinispan loaded them lazily? RemoteCommandFactory could potentially look up the command factory when it receives a request for a custom command, but you still have the same issue of needing to find Hibernate 2LC classes from the Infinispan jar.
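>
> To make the lazy idea concrete, something along these lines (the
> CommandFactory interface and its getCommandId() are invented for this
> sketch; the real SPI type is ModuleCommandFactory):
>
>     import java.util.ServiceLoader;
>     import java.util.concurrent.ConcurrentHashMap;
>     import java.util.concurrent.ConcurrentMap;
>
>     class LazyCommandFactoryRegistry {
>         interface CommandFactory {
>             byte getCommandId();
>         }
>
>         private final ConcurrentMap<Byte, CommandFactory> factories =
>             new ConcurrentHashMap<Byte, CommandFactory>();
>
>         // Resolve factories only when the first custom command arrives,
>         // instead of eagerly when the GCR is created.
>         CommandFactory factoryFor(byte commandId) {
>             CommandFactory f = factories.get(commandId);
>             if (f == null) {
>                 // ServiceLoader hits the same classloader problem:
>                 // Infinispan's CL still needs to see the 2LC classes.
>                 for (CommandFactory c : ServiceLoader.load(CommandFactory.class))
>                     factories.putIfAbsent(c.getCommandId(), c);
>                 f = factories.get(commandId);
>             }
>             return f;
>         }
>     }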
>
> Btw, I don't see how this problem differs from a user defining a custom cache loader (or any other SPI we have) and wanting to plug it into Infinispan. The CL (classloader) where Infinispan lives is going to need to know about those custom classes. The same thing happens for Hibernate 2LC, which is implementing an SPI.
>
>>
>>>
>>> Scott
>>
>
> --
> Galder Zamarreño
> Sr. Software Engineer
> Infinispan, JBoss Cache
>
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
state transfer exceptions at REPL
by Sanne Grinovero
Can anyone explain this error?
I'm updating Hibernate Search, and I have a simple test which does the
following in a loop (rough sketch below):
- write to shared index
- add a node / remove a node
- wait for joins
- verify index state
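Roughly, in code (every helper name here is an invented stand-in for
the actual test harness):

    abstract class ClusterResizeTest {
        static final int ITERATIONS = 20;

        // invented stand-ins for the actual harness operations
        abstract void writeToSharedIndex();
        abstract void startNewNode();
        abstract void killOneNode();
        abstract void waitForJoins();
        abstract void assertIndexConsistent();

        void loop() {
            for (int i = 0; i < ITERATIONS; i++) {
                writeToSharedIndex();              // write to shared index
                if (i % 2 == 0) startNewNode();    // add a node...
                else killOneNode();                // ...or remove one
                waitForJoins();                    // wait for joins
                assertIndexConsistent();           // verify index state
            }
        }
    }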
This is expected to work, as it already did with all previous
Infinispan versions.
Using Infinispan 5.1.1.FINAL and JGroups 3.0.5.Final.
2012-02-07 10:42:38,668 WARN [CacheViewControlCommand]
(OOB-4,sanne-20017) ISPN000071: Caught exception when handling command
CacheViewControlCommand{cache=LuceneIndexesMetadata,
type=PREPARE_VIEW, sender=sanne-3158, newViewId=8,
newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158,
sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,706 WARN [CacheViewControlCommand]
(OOB-5,sanne-20017) ISPN000071: Caught exception when handling command
CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW,
sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971,
sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7,
oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511]}
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,684 WARN [UNICAST2] (OOB-7,sanne-2794)
sanne-2794: my conn_id (6) != received conn_id (1); discarding STABLE
message !
2012-02-07 10:42:38,671 WARN [CacheViewControlCommand]
(OOB-3,sanne-63971) ISPN000071: Caught exception when handling command
CacheViewControlCommand{cache=LuceneIndexesMetadata,
type=PREPARE_VIEW, sender=sanne-3158, newViewId=8,
newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158,
sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-63971, viewId=8, state=24}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-63971, viewId=8, state=24}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,677 WARN [CacheViewControlCommand]
(OOB-4,sanne-63971) ISPN000071: Caught exception when handling command
CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW,
sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971,
sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7,
oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511]}
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-63971, viewId=8, state=22}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-63971, viewId=8, state=22}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,718 WARN [CacheViewControlCommand]
(OOB-6,sanne-25511) ISPN000071: Caught exception when handling command
CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW,
sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971,
sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7,
oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511]}
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-25511, viewId=8, state=19}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-25511, viewId=8, state=19}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,733 ERROR [CacheViewsManagerImpl]
(CacheViewInstaller-1,sanne-3158) ISPN000172: Failed to prepare view
CacheView{viewId=8, members=[sanne-3158, sanne-63971, sanne-20017,
sanne-2794, sanne-25511, sanne-30075]} for cache
LuceneIndexesMetadata, rolling back to view CacheView{viewId=7,
members=[sanne-3158, sanne-63971, sanne-20017, sanne-2794,
sanne-25511]}
java.util.concurrent.ExecutionException:
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:319)
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250)
at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:876)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesMetadata,
type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
2012-02-07 10:42:38,737 ERROR [CacheViewsManagerImpl]
(CacheViewInstaller-3,sanne-3158) ISPN000172: Failed to prepare view
CacheView{viewId=8, members=[sanne-3158, sanne-63971, sanne-20017,
sanne-2794, sanne-25511, sanne-30075]} for cache LuceneIndexesData,
rolling back to view CacheView{viewId=7, members=[sanne-3158,
sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException:
java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:319)
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250)
at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:876)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException:
org.infinispan.remoting.transport.jgroups.SuspectException: One or
more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
at org.jgroups.JChannel.up(JChannel.java:716)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
at org.jgroups.protocols.Discovery.up(Discovery.java:355)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
Eviction maxEntries analysis
by Martin Gencur
Hi all,
I ran a few tests to find out the actual number of entries held
in a cache when a certain "maxEntries" param is set for eviction and I
store more than maxEntries entries. I tested with HotSpot JDK 6 [1] and IBM
JDK 6 and 7 [2]. OpenJDK 6 seems to give the same results as HotSpot JDK.
Results:
maxEntries being set -> actual number of entries held in the cache
HotSpot JDK:
------------
2 -> 2
4 -> 4
6 -> 4
8 -> 8
10 -> 8
256 -> 232
300 -> 266
IBM JDK (both 6, 7):
--------------------
2 -> 4 (2 with LIRS)
4 -> 6 (4 with LIRS)
6 -> 10
8 -> 11 (8 with LIRS)
10 -> 13 (8 with LIRS)
256 -> 247 (232 with LIRS)
300 -> 287 (266 with LIRS)
I modified one test in ispn-core to do this testing:
https://github.com/mgencur/infinispan/commit/837a1c752fa7fbfb3f05738dd873...
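The essence of the test, trimmed down (not the exact committed code;
the 5.1 configuration API is from memory, so double-check the details):

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.eviction.EvictionStrategy;
    import org.infinispan.manager.DefaultCacheManager;

    public class EvictionCheck {
        public static void main(String[] args) {
            ConfigurationBuilder cb = new ConfigurationBuilder();
            cb.eviction().strategy(EvictionStrategy.LRU).maxEntries(256);
            DefaultCacheManager cm = new DefaultCacheManager(cb.build());
            Cache<Integer, String> cache = cm.getCache();
            for (int i = 0; i < 1000; i++)
                cache.put(i, "v" + i);          // store more than maxEntries
            System.out.println(cache.size());   // 232 on HotSpot, not 256
            cm.stop();
        }
    }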
Any thoughts? :)
[1]
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
[2]
java version "1.6.0"
Java(TM) SE Runtime Environment (build pxi3260sr9fp1-20110208_03(SR9
FP1))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux x86-32
jvmxi3260sr9-20110203_74623 (JIT enabled, AOT enabled)
java version "1.7.0"
Java(TM) SE Runtime Environment (build pxi3270-20110827_01)
IBM J9 VM (build 2.6, JRE 1.7.0 Linux x86-32 20110810_88604 (JIT
enabled, AOT enabled)
--
Martin Gencur
--
JBoss QE, Enterprise Data Grid
Desk phone: +420 532 294 192, ext. 62192
Remove/rename log4j.xml
by Sanne Grinovero
Hi all,
we have a log4j.xml as a test resource of the Infinispan core
module. Since I depend on the testing module in other projects, this
is annoying as there is no easy way to exclude it in the IDE.
Would anyone object to me removing it? Or do you need it renamed?
Which transactions API?
by Sanne Grinovero
I'm seeing the transaction API defined by both of the following artifacts:
Hibernate core depends on:
<parent>
<groupId>org.jboss.spec</groupId>
<artifactId>jboss-specs-parent</artifactId>
<version>1.0.0.Beta2</version>
</parent>
<groupId>org.jboss.spec.javax.transaction</groupId>
<artifactId>jboss-transaction-api_1.1_spec</artifactId>
<version>1.0.0.Final</version>
Infinispan core depends on:
<parent>
<groupId>org.jboss.javaee</groupId>
<artifactId>jboss-javaee-parent</artifactId>
<version>5.2.0.Beta1</version>
</parent>
<groupId>org.jboss.javaee</groupId>
<artifactId>jboss-transaction-api</artifactId>
<version>1.0.1.GA</version>
I'd like to introduce some consistency, especially since some projects use both.
Which one should we use?
AS 7.1 [master]
is using
<dependency>
<groupId>org.jboss.spec.javax.transaction</groupId>
<artifactId>jboss-transaction-api_1.1_spec</artifactId>
<version>1.0.0.Final</version>
</dependency>
So I guess Infinispan should pick the same?
Proposal: ISPN-1394 Manual rehashing in 5.2
by Sanne Grinovero
I think this is an important feature to have soon.
My understanding of it:
We default to the feature being off, and newly discovered nodes are
added/removed as usual. With a JMX-operable switch, one can disable
this:
If a remote node is joining the JGroups view but rehash is off, it
will be added to a to-be-installed view, but this view won't be
installed until rehash is enabled again. This gives time to add more
changes before starting the rehash, and would help a lot when starting
larger clusters.
If the [self] node is booting and joining a cluster with manual rehash
off, the start process and any getCache() invocation should block and
wait for it to be enabled. This would of course need to override the
usually low timeouts.
When a node is suspected it's a bit of a different story, as we need to
make sure no data is lost. The principle is the same, but maybe we
should have two flags: one which is a "soft request" to avoid rehashes
when fewer than N members remain (and refuse N >= numOwners?), and one
which just disables rehashing outright and doesn't care: data might be
in a cachestore, or the data might not be important. Which reminds me,
we should also consider a JMX command to flush the container to the
CacheLoader.
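On the JMX side it could look something like this (all names invented,
just to make the proposal concrete):

    public interface RehashControlMBean {
        // queue joins/leaves into a to-be-installed view, don't install it
        void disableRehash();
        // install the queued view and resume automatic rehashing
        void enableRehash();
        boolean isRehashEnabled();
        // the "soft" variant: only rehash when at least minMembers remain
        void setMinMembersForRehash(int minMembers);
        // flush the data container to the configured CacheLoader
        void flushToCacheStore();
    }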
--Sanne
JBoss Libra
by Galder Zamarreño
Just saw this: https://github.com/wolfc/jboss-libra
We should investigate the possibility of adding this to Infinispan to provide memory-size-based eviction, WDYT?
The performance impact would need to be measured too.
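Rough idea of the hook (Libra's entry point is getFootprint() if I
remember right, needs checking; everything else here is invented):

    import org.jboss.libra.Libra;   // requires -javaagent:jboss-libra.jar

    class MemoryBoundedEviction {
        private final long maxBytes;

        MemoryBoundedEviction(long maxBytes) { this.maxBytes = maxBytes; }

        // Deep-measure each stored entry and report whether the cache
        // is over its configured memory budget.
        boolean overBudget(Iterable<Object> entries) {
            long used = 0;
            for (Object e : entries)
                used += Libra.getFootprint(e);
            return used > maxBytes;
        }
    }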
EhCache has apparently done something similar but from what I heard, it's full of hacks to work on different platforms...
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache