Staggering remote GET calls
by Manik Surtani
Guys,
I have a topic branch with a fix for ISPN-825, to stagger remote GET calls. (See the JIRA for details on this patch).
This should have the interesting effect of greatly reducing pressure on the OOB thread pool. It isn't a *real* fix for the problem that Radim reported (Pedro is working on that with Bela), but reduced pressure on the OOB thread pool is a welcome side effect of this change.
It should generally make things faster too, with less traffic on the network. Radim, I'd be curious to see how this branch impacts your tests - please give it a try:
https://github.com/maniksurtani/infinispan/tree/t_825
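In case it helps frame the review, here is a rough, hypothetical sketch of the staggering idea - not the actual patch; the RemoteGet type below is just a stand-in for the real RPC plumbing. Ask the primary owner first, and only fall back to the other owners if no response arrives within a small delay:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StaggeredGetSketch {

   /** Stand-in for the real remote invocation; purely illustrative. */
   interface RemoteGet {
      CompletableFuture<Object> get(String owner, Object key);
   }

   private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

   CompletableFuture<Object> staggeredGet(List<String> owners, Object key,
                                          RemoteGet rpc, long staggerDelayMillis) {
      CompletableFuture<Object> result = new CompletableFuture<>();
      // Ask the primary owner straight away; the first response wins.
      rpc.get(owners.get(0), key).thenAccept(result::complete);
      // Contact each backup only if nothing has arrived after the stagger delay.
      for (int i = 1; i < owners.size(); i++) {
         String backup = owners.get(i);
         timer.schedule(() -> {
            if (!result.isDone()) {
               rpc.get(backup, key).thenAccept(result::complete);
            }
         }, staggerDelayMillis * i, TimeUnit.MILLISECONDS);
      }
      return result;
   }
}

The net effect is that in the common case only one owner ever sees the GET, which is where the reduced OOB pressure and network traffic come from.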
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
More verbose logging
by Manik Surtani
Guys,
I see this emitted by the CommandAwareRpcDispatcher:
2013-02-19 16:22:13,988 TRACE [CommandAwareRpcDispatcher] (OOB-1,ISPN,NodeB-11464) Attempting to execute command: StateResponseCommand{cache=___defaultcache, origin=NodeA-62578, topologyId=1, stateChunks=[StateChunk{segmentId=0, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=1, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=2, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=3, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=4, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=5, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=6, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=7, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=8, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=9, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=10, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=11, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=12, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=13, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=14, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=15, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=17, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=16, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=19, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=18, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=21, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=20, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=23, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=22, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=25, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=24, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=27, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=26, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=29, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=28, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=31, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=30, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=34, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=35, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=32, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=33, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=38, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=39, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=36, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=37, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=42, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=43, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=40, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=41, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=46, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=47, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=44, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=45, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=51, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=50, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=49, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=48, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=55, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=54, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=53, cacheEntries=[], isLastChunk=true}, 
StateChunk{segmentId=52, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=59, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=58, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=57, cacheEntries=[], isLastChunk=true}, StateChunk{segmentId=56, cacheEntries=[], isLastChunk=true}]} [sender=NodeA-62578]
Again, just 3 nodes, default segment size, but with TRACE level logging.
a) This would explode to something huge if we had, say, a cluster of 100+ nodes with 5000 segments.
b) Each StateChunk lists all of its entries! Here that's fine since the cache is empty, but on a running system with a few hundred GB of data, enabling TRACE logging would be akin to a DoS attack.
I agree that TRACE level logging on a running system is not recommended and will generally slow things down a lot, but this has the potential to bring a cluster down. I imagine such information is only really useful in a lab or test environment when tracing edge-case consistency issues - so maybe we need a separate flag? Perhaps a -Dinfinispan.logging.verbose=true, which would alter the output of StateResponseCommand.toString() accordingly?
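Something along these lines is what I have in mind - a minimal sketch only, not the current implementation; the field names simply mirror the log output above:

class StateResponseToStringSketch {
   String cacheName;
   Object origin;
   int topologyId;
   java.util.List<Object> stateChunks;

   @Override
   public String toString() {
      boolean verbose = Boolean.getBoolean("infinispan.logging.verbose");
      StringBuilder sb = new StringBuilder("StateResponseCommand{cache=").append(cacheName)
            .append(", origin=").append(origin)
            .append(", topologyId=").append(topologyId);
      if (verbose) {
         sb.append(", stateChunks=").append(stateChunks);   // full dump, lab/test use only
      } else {
         sb.append(", stateChunks=").append(stateChunks == null ? 0 : stateChunks.size())
           .append(" chunk(s)");                            // compact summary by default
      }
      return sb.append('}').toString();
   }
}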
- M
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
Status update for week 7 and 8
by Galder Zamarreño
Hi,
Sorry I couldn't make it to the IRC meeting. Here's my status update for the last two weeks (we didn't have a meeting last week):
11-15 February:
- 11, 12, 13: PTO
- JSR-107 work to complete missing pieces separate from subtasks in https://issues.jboss.org/browse/ISPN-2639
18-22 February:
- Add support for cache writers and various TCK fixes (https://issues.jboss.org/browse/ISPN-2639)
- Add support for storeAsValue (https://issues.jboss.org/browse/ISPN-2767)
- Sort out Manik's locate() pull req, add support for invokeEntryProcessor (https://issues.jboss.org/browse/ISPN-2824)
- Work to adapt JSR-107 notification model to Infinispan (https://issues.jboss.org/browse/ISPN-2766), skipping notifications with containsKey
- Added a skip-notification flag to be able to fulfill TCK requirements (related to ISPN-2766)
Remaining work on JSR-107:
- I'm currently trying to finish up the listener work in JSR-107 to pass all tests. This should be finished today or tomorrow, and we should then have a 100% pass rate.
- Once I've got a 100% pass rate, I'll send a pull request for code review etc., and we can do an alpha release claiming that we implement version 0.6 of the API and pass the TCK.
- IMPORTANT NOTE:
- I've discovered that there are parts of the spec not yet tested by the TCK, e.g. cache-expired listeners or async notifications.
- I will create subtasks for these missing pieces in https://issues.jboss.org/browse/ISPN-2639, but I'll leave them for later in the 5.3 series.
This week:
- Get to 100% pass rate and send pull req for JSR-107 work so far.
- Fix other 5.3 alpha1 tagged issues
- Start looking into the inter-server endpoint compatibility mode: https://issues.jboss.org/browse/ISPN-2281
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
Re: [infinispan-dev] StoreByValueTest tck test
by Vladimir Blagojevic
No valid reason, Manik. In summary, I thought I would get our keys/values
serialized even in the local VM if I turned on storeAsBinary, but
that does not seem to be the case. I need storeAsBinary to
complete a JSR 107 feature that allows storing key/value pairs as
serialized values rather than simple references.
TBH, I am not sure how we can do this given the mechanisms we have in place.
I would have to implement serialization/deserialization in our JSR 107
project, but that would be the wrong path if we can somehow turn on our
existing storeAsBinary for in-VM stored objects (see Galder's email on
what is currently done).
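Just to illustrate what I mean by the "wrong path" above, the fallback would be defensive copies via plain Java serialization in the adapter - roughly something like this (class and method names are made up):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

final class ValueCopier {

   /** Deep-copies a Serializable object by round-tripping it through serialization. */
   @SuppressWarnings("unchecked")
   static <T extends Serializable> T copy(T original) {
      try {
         ByteArrayOutputStream bytes = new ByteArrayOutputStream();
         try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);                               // serialize to a byte[]
         }
         try (ObjectInputStream in = new ObjectInputStream(
               new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();                              // read back a detached copy
         }
      } catch (IOException | ClassNotFoundException e) {
         throw new IllegalStateException("Store-by-value copy failed", e);
      }
   }
}

The adapter's put() would then store copy(value) and get() would return copy(storedValue), so callers never see the cached reference - but that duplicates what storeAsBinary is already meant to do, hence my question.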
Regards,
Vladimir
On 13-01-24 7:09 AM, Manik Surtani wrote:
> JSR 107's storeAsBinary and our storeAsBinary are conceptually the same. You get a defensive copy and this should work.
>
> But see my comment below:
>
> Also adding Mircea in cc. Any reason why you're not using infinispan-dev for this?
>
> On 24 Jan 2013, at 12:00, Galder Zamarreño <galder(a)redhat.com> wrote:
>
>> Hey Vladimir,
>>
>> IIRC, for performance reasons, even with storeAsBinary, Infinispan keeps the data as a normal instance locally. When data is serialized and sent to other nodes, again for performance reasons, it keeps it in raw byte[] format.
>>
>> So, storing objects by value only happens on limited occasions when storeAsBinary is enabled.
>>
>> You can track it by using a debugger and seeing how the MarshalledValue instances are created.
>>
>> Not sure how to fix this without some extra configuration option.
>>
>> Cheers,
>>
>> On Jan 23, 2013, at 5:38 PM, Vladimir Blagojevic <vblagoje(a)redhat.com> wrote:
>>
>>> Galder,
>>>
>>> A quick request for help from you, because you are more familiar with this area (storeAsBinary) than I am. There is a TCK test that checks storing of objects by value, not by reference, in the cache [1]. I thought that if we set our underlying cache to be storeAsBinary we would handle this TCK requirement (store by value if needed rather than by reference). However, StoreByValueTest fails although I set our underlying Infinispan cache to be storeAsBinary. I am using a local cache, although I tried with transport and a dist_async setup as well - same result. Any ideas what is going on?
>>>
>>> Have a look at the test [1]; the results I get are below:
>>>
>>>
>>> -------------------------------------------------------
>>> Running org.jsr107.tck.StoreByValueTest
>>> Jan 23, 2013 12:35:29 PM org.jsr107.tck.util.ExcludeList <init>
>>> INFO: ===== ExcludeList url=file:/Users/vladimir/workspace/jsr107/jsr107tck/implementation-tester/target/test-classes/ExcludeList
>>> Defined org.jsr107.tck.StoreByValueTest config StoreAsBinaryConfiguration{enabled=true, storeKeysAsBinary=true, storeValuesAsBinary=true}
>>> Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 21.852 sec <<< FAILURE!
>>>
>>> Results :
>>>
>>> Failed tests: get_Existing_MutateValue(org.jsr107.tck.StoreByValueTest): expected: java.util.Date<Wed Jan 23 12:35:34 EST 2013> but was: java.util.Date<Wed Jan 23 12:35:34 EST 2013>
> ?? These seem the same to me? How is the TCK testing for these two values? By reference? Or using .equals()?
>
>>> get_Existing_MutateKey(org.jsr107.tck.StoreByValueTest): expected:<Wed Jan 23 12:35:38 EST 2013> but was:<null>
> This seems a bigger issue. You might want to look at Infinispan logs here?
>
>>> getAndPut_NotThere(org.jsr107.tck.StoreByValueTest): expected: java.util.Date<Wed Jan 23 12:35:41 EST 2013> but was: java.util.Date<Wed Jan 23 12:35:41 EST 2013>
> Again, see my first comment.
>
>>> getAndPut_Existing_MutateValue(org.jsr107.tck.StoreByValueTest): expected: java.util.Date<Wed Jan 23 12:35:45 EST 2013> but was: java.util.Date<Wed Jan 23 12:35:45 EST 2013>
> Again, see my first comment.
>
>>> getAndPut_Existing_NonSameKey_MutateValue(org.jsr107.tck.StoreByValueTest): expected: java.util.Date<Wed Jan 23 12:35:48 EST 2013> but was: java.util.Date<Wed Jan 23 12:35:48 EST 2013>
> Again, see my first comment.
>
>>> getAndPut_Existing_NonSameKey_MutateKey(org.jsr107.tck.StoreByValueTest): expected:<Wed Jan 23 12:35:51 EST 2013> but was:<null>
>>>
>>> Tests run: 6, Failures: 6, Errors: 0, Skipped: 0
>>>
>>> [1] https://github.com/jsr107/jsr107tck/blob/master/cache-tests/src/test/java...
>>
>> --
>> Galder Zamarreño
>> galder(a)redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
> --
> Manik Surtani
> manik(a)jboss.org
> twitter.com/maniksurtani
>
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
>
Adding JSR-107 support for invokeEntryProcessor
by Galder Zamarreño
Hi all,
We're meant to implement this method in JSR-107:
https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cach...
The interesting bit comes in the javadoc of EntryProcessor: https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cach...
To be more precise:
" * Allows execution of code which may mutate a cache entry with exclusive
* access (including reads) to that entry.
* <p/>
* Any mutations will not take effect till after the processor has completed; if an exception is
* thrown inside the processor, the exception will be returned wrapped in an
* ExecutionException. No changes will be made to the cache.
* <p/>
* This enables a way to perform compound operations without transactions
* involving a cache entry atomically. Such operations may include mutations."
Having quickly glanced at it, there are several things that need addressing from an Infinispan internals perspective:
1. Implies that we need to be able to lock a key without a transaction, something we don't currently support.
2. We need an unlock()
3. Requires exclusive access, even for read operations. Our lock() implementation still allows read operations.
These are fairly substantial changes (I'm planning to add them as subtasks to https://issues.jboss.org/browse/ISPN-2639), particularly 1) and 3), so I wanted to share some thoughts:
For 1 and 2, the easiest way I can think of doing this is by having a new LockingInterceptor that is similar to NonTransactionalLockingInterceptor, but unlocks only when unlock is called (as opposed to after each operation finishes).
For 3, we'd either need to add a new lock() method that supports locking read+write, or change lock() behaviour to also lock reads. The latter could break old clients, so I'd go for a new lock method, i.e. lockExclusively(). Again, to support this, a different NonTransactionalLockingInterceptor is needed so that locks are acquired on read operations as well.
Finally, any new configurations could be avoided at this stage by simply having the JSR-107 adapter inject the right locking interceptor. IOW, if you use JSR-107, we'll swap NonTransactionalLockingInterceptor for JSR107FriendlyNonTransactionalLockingInterceptor.
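To make the proposal a bit more concrete, here's a very rough sketch of how the adapter could drive invokeEntryProcessor on top of the proposed primitives. Everything except the spec's EntryProcessor is hypothetical (advancedCache, MutableEntryImpl, lockExclusively/unlock):

<K, V, T> T invokeEntryProcessor(K key, EntryProcessor<K, V, T> processor) throws ExecutionException {
   advancedCache.lockExclusively(key);                // exclusive access, blocks readers too (point 3)
   try {
      MutableEntryImpl<K, V> entry = new MutableEntryImpl<>(key, advancedCache.get(key));
      T result;
      try {
         result = processor.process(entry);           // user code runs against a detached entry
      } catch (Exception e) {
         throw new ExecutionException(e);             // per the javadoc: wrap, and make no changes
      }
      if (entry.isUpdated()) {
         advancedCache.put(key, entry.getValue());    // mutations only applied after the processor completes
      } else if (entry.isRemoved()) {
         advancedCache.remove(key);
      }
      return result;
   } finally {
      advancedCache.unlock(key);                      // always release the exclusive lock (point 2)
   }
}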
Before I get started with this, I wanted to get the thoughts/opinions of the list.
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
Verbose logging in state transfer
by Manik Surtani
Guys,
I had a test fail (for some completely unrelated reason) and I found the following in WARN level logging:
2013-02-19 15:56:07,016 WARN [CommandAwareRpcDispatcher] (OOB-2,ISPN,NodeC-28368) Problems invoking command CacheTopologyControlCommand{cache=___defaultcache, type=REBALANCE_START, sender=NodeA-13767, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeB-35945, NodeC-28368], owners={0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 1, 11: 1, 12: 1, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1, 19: 1, 20: 1, 21: 1, 22: 1, 23: 1, 24: 1, 25: 1, 26: 1, 27: 1, 28: 1, 29: 1, 30: 0, 31: 0, 32: 0, 33: 0, 34: 0, 35: 0, 36: 0, 37: 0, 38: 0, 39: 0, 40: 0 1, 41: 0 1, 42: 0 1, 43: 0 1, 44: 0 1, 45: 0 1, 46: 0 1, 47: 0 1, 48: 0 1, 49: 0 1, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0, 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0}, pendingCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeB-35945, NodeC-28368], owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1, 8: 0 1, 9: 0 1, 10: 1 0, 11: 1 0, 12: 1 0, 13: 1 0, 14: 1 0, 15: 1 0, 16: 1 0, 17: 1 0, 18: 1 0, 19: 1 0, 20: 1 0, 21: 1 0, 22: 1 0, 23: 1 0, 24: 1 0, 25: 1 0, 26: 1 0, 27: 1 0, 28: 1 0, 29: 1 0, 30: 0 1, 31: 0 1, 32: 0 1, 33: 0 1, 34: 0 1, 35: 0 1, 36: 0 1, 37: 0 1, 38: 0 1, 39: 0 1, 40: 0 1, 41: 0 1, 42: 0 1, 43: 0 1, 44: 0 1, 45: 0 1, 46: 0 1, 47: 0 1, 48: 0 1, 49: 0 1, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0, 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0}, throwable=null, viewId=2}
This test only contained 3 nodes as well. That's a lot of detailed info there - dumping the entire consistent hash with all of its segments. In a more realistic scenario with, say, 100+ nodes and maybe 2000 segments, this would drive any sysadmin insane. :)
Do we need to be so verbose?
- Manik
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
NBST test failures in Query
by Sanne Grinovero
I'm having a couple of tests in the Query module sporadically (but
often) failing because of exceptions like:
- Received invalid rebalance confirmation from NodeC-62354 for cache
___defaultcache, we don't have a rebalance in progress
- Suspected members
- WARN [InboundTransferTask] (transport-thread-1,NodeD) ISPN000210:
Failed to request segments [0, 2, 4, 36, 6, 42, 8, 43, 40, 10, 41, 11,
12, 13, 14, 50, ...
- ISPN000071: Caught exception when handling command
CacheTopologyControlCommand ...
Looks very much state-transfer related to me. Could one of the NBST experts have a look?
There are some interesting tests there:
org.infinispan.query.distributed.MultiNodeDistributedTest
and its extensions explicitly attempt to start additional nodes,
scaling up and down while performing other operations. To be fair, I
wish they would really do things in parallel, but they are politely blocking and waiting for rehashing to complete at each step.
I would expect at this point to see some tests doing operations while - in parallel - nodes are added and removed. Are there no such tests in core?
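A rough sketch of the kind of test I have in mind - helper names like anyCache(), startNewNode() and stopRandomNode() are just placeholders for whatever the harness provides:

ExecutorService workers = Executors.newFixedThreadPool(4);
AtomicBoolean stop = new AtomicBoolean(false);
// Readers and writers keep hammering the cache the whole time...
for (int i = 0; i < 4; i++) {
   workers.submit(() -> {
      Random rnd = new Random();
      while (!stop.get()) {
         Cache<Integer, String> cache = anyCache();
         int k = rnd.nextInt(1000);
         cache.put(k, "v" + k);
         cache.get(rnd.nextInt(1000));
      }
   });
}
// ...while the topology changes underneath them, without politely waiting for rehash.
for (int round = 0; round < 5; round++) {
   startNewNode();
   stopRandomNode();
}
stop.set(true);
workers.shutdown();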
Sanne
Infinispan 5.3.0 roadmap
by Mircea Markus
Hi,
Based on product and community requirements and team discussions here's the roadmap for Infinispan 5.3.0. Please let me know any suggestions you might have:
1. stabilise the test suite: (32 intermittent failures http://goo.gl/M3pAu)
2. stabilise Infinispan (NBST, other known bugs)
3. new features
- developed in-house:
- Implement REPLICATED mode as a degenerated DISTRIBUTED mode (Adrian, ISPN-2772)
- faster file cache store (Dan, ISPN-2806)
- avoid deadlocks for OOB threads (Dan - ISPN-2808)
- server end point compatibility (Galder, ISPN-2281)
- JSR 107 implementation (ISPN-2639, TBC by Galder next week - depending on how close to finish we are based on Vladimir's work)
- rolling upgrades for REST and Memcached (Tristan, ISPN-2638, ISPN-2637)
- use the JDG harness to launch Infinispan servers (Tristan, ISPN-2809)
- contributed
- xsite state transfer (ISPN-2342 - Erik Salter + Mircea. Erik will send a pull request in the following weeks; needs review + testing)
- TOA/TOM (Pedro/Dan/Mircea - ISPN-2635 and ISPN-2636, integration taking place in a F2F meeting in London, 18-22 Feb)
- Transactions over Hot Rod (Michael Musgrove/Mircea - ISPN-375 - depending on Michael's availability)
Timeline:
- first Alpha on 28 Feb (one month after development starts; we'll begin with test suite fixes that might not be relevant for the community)
- 2-week release cycles after that, with 4 Betas
- 2 CRs with 1.5 weeks in between (code freeze)
- final on 31 May
[1] http://goo.gl/cSWEz
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
Infinispan and JPA entities
by Randall Hauch
I've been wondering about this particular use case for a while:
A client application simply uses get, put, and query for objects stored in Infinispan, where the objects really are mapped to a real schema in a relational database. If the objects were JPA-like entities, the database mapping could be defined via a subset of the JPA annotations. Essentially, Infinispan becomes a key-value store on top of a traditional database with a domain-specific schema. Add some JAXB annotations, and it quickly becomes possible to expose these entities via a simple RESTful service. A new cache store implementation could persist the entities to JDBC using the annotations.
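To make that concrete, a hypothetical entity might look like this - JPA annotations drive the relational mapping a (yet to be written) cache store would use, and JAXB makes it trivially exposable over REST:

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.xml.bind.annotation.XmlRootElement;

@Entity
@Table(name = "CUSTOMER")              // relational mapping the cache store would use
@XmlRootElement                        // lets a RESTful layer marshal the entity directly
public class Customer implements Serializable {

   @Id
   private Long id;

   @Column(name = "FULL_NAME")
   private String name;

   // getters/setters omitted for brevity
}

The client would then simply do cache.put(customer.getId(), customer) and query by attributes, with the cache store translating those entities to rows behind the scenes.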
This may seem a bit odd at first. Why not just use JPA directly? IMO, for a certain class of applications, this scenario is architecturally easier to understand. Plus, if you put this on top of Teiid's ability to create a virtual database (with a virtual schema that matches what you want the objects to be), then you could put these new entities on top of an existing database with a schema that doesn't necessarily mirror the entity structure.
Is this crazy? Is there a better way of achieving this?
Randall
Protecting ourselves against naive JSR-107 usages in app server environments
by Galder Zamarreño
Hi all,
We've got a small class loading puzzle to solve in our JSR-107 implementation.
JSR-107 has a class called Caching which keeps a singleton enum reference (AFAIK, same semantics as static) to the system's CacheManagerFactory, which in our case would be InfinispanCacheManagerFactory:
https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cach...
A naive user of JSR-107 could decide to use this Caching class in an app server environment and get a reference to the CMF through it, which could cause major classloading issues if we don't protect ourselves.
Within our CMF implementation, we need to keep some kind of mapping which, given a name *and* a classloader, can find the CacheManager instance associated with it.
This poses a potential risk of a static strong reference being held indirectly on the classloader associated with the Infinispan Cache Manager (amongst other sensible components...).
One way to break this strong reference is for CMF implementation to hold a weak reference on the CM as done here:
https://github.com/galderz/infinispan/blob/t_2639/jsr107/src/main/java/or...
This poses a problem though, in that the Infinispan Cache Manager can be evicted from memory without its stop/shutdown method being called, leaving resources open (e.g. JGroups, JMX, etc.).
The only safe way to deal with this that I've thought of so far is to have a finalize() method in InfinispanCacheManager (the JSR-107 impl of CacheManager) that makes sure the cache manager is shut down. I'm fully aware this is an expensive operation, but so far it's the only way I can see to avoid leaking stuff while not affecting the actual Infinispan core module.
I've found a good example of this in https://github.com/jbossas/jboss-as/blob/master/controller-client/src/mai... - it even tracks creation time so that if all references to InfinispanCacheManager are lost but the ICM instance is not closed, it will print a warning message.
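Roughly, the combination I'm describing looks like this - all class and field names below are illustrative only, not what's in the branch:

import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CacheManagerRegistrySketch {
   // Keyed on (name, classloader); weak values avoid the static strong reference
   // that would otherwise pin the deployment's classloader.
   private final ConcurrentMap<Object, WeakReference<InfinispanCacheManagerSketch>> managers =
         new ConcurrentHashMap<>();
}

class InfinispanCacheManagerSketch {
   private volatile boolean closed;

   public void close() {
      closed = true;
      // stop the underlying EmbeddedCacheManager: JGroups channel, JMX registrations, etc.
   }

   @Override
   protected void finalize() throws Throwable {
      try {
         if (!closed) {
            close();   // last-resort cleanup; ideally we also warn that close() was never called
         }
      } finally {
         super.finalize();
      }
   }
}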
If anyone has any other thoughts, it'd be interesting to hear about them.
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org