Moving Infinispan wikis to Infinispan space?
by Galder Zamarreno
Hi all,
Libor, from the JBoss.org community team, has contacted me and asked
whether we'd like to move the Infinispan wikis to the Infinispan space,
rather than keeping them in the general Wiki space.
I've asked Libor what would happen with wikis that affect several
spaces, e.g. if the Hibernate wikis are moved to their own space, then
http://community.jboss.org/docs/DOC-14105 would be a prime example of a
wiki that affects several spaces. He said that it could stay in the
shared space and be tagged accordingly.
Libor, what are the benefits of moving Infinispan wikis to our own
Infinispan space?
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
When do you use JdbcStringBasedCacheStore instead of JdbcBinaryCacheStore? And viceversa?
by Galder Zamarreno
Hi,
I'm reading the javadoc of JdbcStringBasedCacheStore and
JdbcBinaryCacheStore but I don't understand the difference between the two:
JdbcStringBasedCacheStore stores each cache entry within a row but can
also store non-string keys.
JdbcBinaryCacheStore stores each bucket as a row in the database but can
also store non-string keys.
So, if both support non-string keys, when do you use one versus the other?
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Solution for failing Isolated query cache tests in Hibernate trunk
by Galder Zamarreno
Hi all,
In Hibernate trunk, the following tests are failing:
In JBC 2LC provider:
MVCCIsolatedClassLoaderTest.testClassLoaderHandlingNamedQueryRegion
MVCCIsolatedClassLoaderTest.testClassLoaderHandlingStandardQueryCache
In Infinispan 2LC provider:
IsolatedClassLoaderTest.testClassLoaderHandlingNamedQueryRegion
IsolatedClassLoaderTest.testClassLoaderHandlingStandardQueryCache
They're failing because AccountHolder is being loaded by the system
classloader rather than the SelectedClassnameClassLoader.
The reason is that, as a result of
http://opensource.atlassian.com/projects/hibernate/browse/HHH-2990,
SerializationHelper.CustomObjectInputStream now takes the classloader in
its constructor. During the test, SerializableType.fromBytes passes
null as the classloader to this constructor. The null comes from
getReturnedClass().getClassLoader() below, which was added as a
result of HHH-2990.
private Object fromBytes(byte[] bytes) throws SerializationException {
    return SerializationHelper.deserialize( bytes,
            getReturnedClass().getClassLoader() );
}
Now, shouldn't we use Thread.currentThread().getContextClassLoader()
instead of getReturnedClass().getClassLoader()? Previously, that's what
would have happened. I don't know why getReturnedClass().getClassLoader()
was added though.
I've just tested the Thread.currentThread().getContextClassLoader()
change and the tests pass now. Steve?
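To illustrate why the distinction matters, here's a minimal, self-contained sketch (class and method names are mine, not Hibernate's): for classes loaded by the bootstrap loader, getClassLoader() returns null, while the thread's context classloader is non-null and is the one a test framework's custom loader (like SelectedClassnameClassLoader) can swap in.

```java
public class TCCLDemo {
    // Hypothetical helper mirroring the proposed fix: prefer the context
    // classloader, falling back to the class's own loader.
    static ClassLoader resolveLoader(Class<?> returnedClass) {
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        return tccl != null ? tccl : returnedClass.getClassLoader();
    }

    public static void main(String[] args) {
        // String is loaded by the bootstrap loader, so its classloader is null.
        System.out.println(String.class.getClassLoader()); // prints "null"
        // The context classloader of a normal JVM thread is non-null.
        System.out.println(resolveLoader(String.class) != null); // prints "true"
    }
}
```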
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Hot Rod - pt4
by Galder Zamarreno
Hi,
Re: http://community.jboss.org/wiki/HotRodProtocol
I've updated the wiki with the following information:
- We now differentiate between request and response op codes to make
the protocol more flexible - see the pt2 email thread for further details.
- Removed total body length and now all fields are accompanied by a
length field.
- Flags combined with XOR ops.
- Added encoding based on
http://community.jboss.org/wiki/RemoteCacheInteractions
- Updated example at bottom.
- Added event handling for cluster formation and cluster formation +
hashcodes notifications.
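As an illustration of the length-field change, a sketch of length-prefixed field encoding (this is my own simplified example with a fixed 4-byte prefix, not the actual Hot Rod wire format, which may use variable-length integers): when every field carries its own length, a total body length becomes redundant.

```java
import java.io.*;

public class LengthPrefixedField {
    // Write a field as: 4-byte length prefix, then the raw bytes.
    static byte[] writeField(byte[] value) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeInt(value.length); // length prefix
        dos.write(value);           // field payload
        return out.toByteArray();
    }

    // Read a field back: length prefix first, then exactly that many bytes.
    static byte[] readField(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] value = new byte[len];
        in.readFully(value);
        return value;
    }

    public static void main(String[] args) throws IOException {
        byte[] encoded = writeField("hello".getBytes("UTF-8"));
        DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(encoded));
        System.out.println(new String(readField(in), "UTF-8")); // prints "hello"
    }
}
```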
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Implementing LIRS eviction policy [ISPN-299]
by Vladimir Blagojevic
Hey everyone,
Some people are already familiar with this thread; they can jump to the end of this email to read a concrete proposal on how to implement LIRS in Infinispan. Others, those of you interested in obscure eviction algorithms, keep reading :)
Some time ago Manik asked me to look into implementing a new algorithm, LIRS, for cache eviction. It is a well known fact that the plain vanilla LRU algorithm, although simple and easy to understand, underperforms in cases of weak access locality (one-time-access pages are not replaced in a timely manner, pages to be accessed soonest are unfortunately replaced, and so on). There is a rather popular new algorithm called LIRS that addresses these shortcomings of LRU yet retains LRU's simplicity.
That is where the easy part ends. An eviction algorithm, if not implemented in a scalable, lock-free fashion, can seriously degrade performance. Having a lock-protected data container (to use Infinispan lingo) causes high contention, offsetting the eviction precision that we get by using an algorithm such as LIRS. That set me off on a search for a lock-free, LinkedHashMap-like structure (most suitable for LIRS and LRU). Ben Manes, recently employed by Google, has been working on this problem for a while. His first attempt to implement ConcurrentLinkedHashMap had a flaw that was discovered by the EhCache people and confirmed by Manik in his own test. Ben Manes' second design for ConcurrentLinkedHashMap uses ideas from a well-known seminal paper in the area of lock-free algorithms [1], and the new design looks valid, at least to me. His implementation of ConcurrentLinkedHashMap is not finished yet.
However, even if we had ConcurrentLinkedHashMap today, that puts us only halfway to our lock-free LIRS implementation. LIRS does not use only one stack/list, as LRU does, but two. LIRS, in some cases, performs a lot of node shifting within a list and transfers nodes from one list to another. Manik and I talked about how we could potentially change the original LIRS and fit the whole thing into one stack (ConcurrentLinkedHashMap) by using additional node markings and such. Overall, I think this is possible but full of potential pitfalls.
Just before the holidays, while bashing Google Scholar day after day, I came across a research paper [2] that I would say has a lot of potential, not only for our LIRS eviction data container implementation but for any other eviction algorithm implementation.
Instead of making a trade-off between the high hit ratio of an eviction algorithm and low lock contention, there is a third way, and dare I say an excellent idea: lock amortization. We can wrap any eviction algorithm with a framework that keeps track of cache accesses per thread (ThreadLocal) in a simple data container, say a queue. For each cache hit associated with a thread, the access history information is recorded in the thread's queue. When the thread's queue is full, or the number of accesses recorded in the queue reaches a certain pre-determined threshold, we acquire a lock and then execute the operations defined by the eviction algorithm - once for all the accesses in the queue. A thread is thus allowed to access many cache items without requesting a lock to run the eviction replacement algorithm, and without paying the lock acquisition cost. We can fully exploit non-blocking lock APIs like tryLock. As you recall, tryLock makes an attempt to get the lock and, if the lock is currently held by another thread, fails without blocking its caller thread. Although tryLock is cheap, it is not used for every cache access for obvious reasons, but rather at certain pre-determined thresholds. When a thread's queue is completely full, the lock must be explicitly requested. Intuitively this makes a lot of sense: we significantly lower the cost of lock contention, order/streamline access to locked structures, retain the precision of an eviction algorithm such as LIRS, and best of all, if we are to believe the authors' claim, we can increase throughput nearly two-fold compared to an unmodified eviction algorithm, such as LRU, while achieving scalability as good as implementations that use lock-free structures.
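A minimal sketch of the amortization idea (all names are mine, not Infinispan API, and the drain is a stand-in for the real eviction algorithm): accesses are buffered per thread, tryLock is attempted at a soft threshold, and the lock is taken unconditionally only when the buffer is full.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

public class AmortizedEvictionWrapper<K> {
    private static final int FULL = 32;       // hard limit: must drain
    private static final int SOFT = 8;        // soft threshold: tryLock only
    private final ReentrantLock lock = new ReentrantLock();
    private final ThreadLocal<List<K>> accessQueue =
            ThreadLocal.withInitial(ArrayList::new);
    private int evictionRuns; // counts drains, for illustration only

    public void recordAccess(K key) {
        List<K> queue = accessQueue.get();
        queue.add(key);
        if (queue.size() >= FULL) {
            // Queue completely full: the lock must be explicitly requested.
            lock.lock();
            try { drain(queue); } finally { lock.unlock(); }
        } else if (queue.size() % SOFT == 0 && lock.tryLock()) {
            // Below the hard limit, only batch if the lock happens to be free.
            try { drain(queue); } finally { lock.unlock(); }
        }
    }

    private void drain(List<K> queue) {
        // Here the real eviction algorithm (e.g. LIRS) would replay all
        // recorded accesses in one go, under the single lock acquisition.
        evictionRuns++;
        queue.clear();
    }

    public int evictionRuns() { return evictionRuns; }
}
```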
So how do we translate these ideas in Infinispan?
In order to implement a data container with batched, lock-amortized updates, the DataContainer is structured so that it contains two DataContainers in a chain. As far as the Infinispan code base is concerned, the DataContainer interface is still exposed as-is, but the implementation of the first DataContainer in the chain contains a reference to a delegate - the real DataContainer implementation. The first DataContainer in the chain is considered to be a lock-free buffer data container (BDC), while the delegate container is a thread-safe, interchangeable (LIRS, LRU) data container (DC). The BDC has a ConcurrentHashMap whose cache entry contents are managed as calls are unwound from the DC.
As previously discussed, shared BP-Wrapper [2] objects are used to batch updates to the DC. BP-Wrappers are envisioned as per-thread objects, each having its own queue to record cache entry accesses. As Brian mentioned, this might be perfectly fine in other systems, but it presents a problem in Infinispan, where we can potentially have hundreds of concurrent threads accessing a single data container. Many short-lived threads would never fill their queues enough to hit a threshold. Manik suggested that we share BP-Wrapper objects in a pool among all InvocationContext(s):
XYZInterceptor {
    Pool<BP-Wrapper> pool;
    // grab BP-Wrapper off pool
    // assign to InvocationContext
    // try {pass up chain}
    // finally {pull off InvocationContext, return to pool}
}
However, after thinking this through a bit more, a better solution seems to be recording all cache entry accesses in a lock-free queue within the BDC itself. All threads making invocations into the DC share one lock-free queue to record cache entry accesses, instead of having one queue per BP-Wrapper object. In this case, we do not have to manage shared BP-Wrapper objects, we do not need an extra interceptor, and so on.
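The shared-queue variant could look roughly like this (again a sketch with invented names: the delegate here is a plain hit counter standing in for the real LIRS/LRU DC, and a production version would track the queue length with a counter, since ConcurrentLinkedQueue.size() is O(n)):

```java
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;

public class BufferedDataContainer<K> {
    private static final int BATCH = 64;
    // One lock-free queue shared by all threads, inside the BDC itself.
    private final ConcurrentLinkedQueue<K> accesses = new ConcurrentLinkedQueue<>();
    private final ReentrantLock lock = new ReentrantLock();
    // Stand-in for the real delegate DC: key -> access count.
    private final ConcurrentMap<K, Long> delegate = new ConcurrentHashMap<>();

    public void recordAccess(K key) {
        accesses.add(key);
        // Whoever wins tryLock drains the whole batch into the delegate DC;
        // everyone else continues without blocking.
        if (accesses.size() >= BATCH && lock.tryLock()) {
            try {
                K k;
                while ((k = accesses.poll()) != null) {
                    // "Commit" the buffered access to the delegate container.
                    delegate.merge(k, 1L, Long::sum);
                }
            } finally {
                lock.unlock();
            }
        }
    }

    public long hits(K key) { return delegate.getOrDefault(key, 0L); }
}
```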
In order to batch updates to the DC, we need to "commit" all accessed cache entries into the DC. As of now we do not have such an API. Either we introduce a subclass of DataContainer that has the following new method, or we extend the current DataContainer and make all implementations that do not handle batch updates perform a no-op:
public void touch(List<InternalCacheEntry> updates)
Feedback appreciated.
Regards,
Vladimir
[1] http://www.cl.cam.ac.uk/research/srg/netos/papers/2001-caslists.pdf
[2] http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-09-1.pdf
Creating cache managers in your unit tests
by Manik Surtani
Guys,
As far as possible, please extend either SingleCacheManagerTest [1] or MultipleCacheManagersTest [2], and use the corresponding helper methods there to create a new CacheManager. Alternatively, if you have a unit test that needs to create its own CacheManager directly, please use TestCacheManagerFactory [3]. Do *not* use "new DefaultCacheManager()" directly, since this means the framework does not get a chance to alter config settings to work within the framework: clustering settings to make sure the test does not interfere with other tests, threadpool settings to prevent unnecessary OOMs when running the entire suite with thousands of cache managers, etc.
I have just fixed a bunch of offending tests [4], and updated the wiki page [5] on writing tests accordingly. Please follow these guidelines.
Cheers & Happy X'mas!
Manik
[1] http://fisheye.jboss.org/browse/Infinispan/trunk/core/src/test/java/org/i...
[2] http://fisheye.jboss.org/browse/Infinispan/trunk/core/src/test/java/org/i...
[3] http://fisheye.jboss.org/browse/Infinispan/trunk/core/src/test/java/org/i...
[4] http://fisheye.jboss.org/changelog/Infinispan/trunk?cs=1327
[5] http://community.jboss.org/wiki/ParallelTestSuite
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Hot Rod - pt2
by Galder Zamarreno
Hi all,
Re: http://community.jboss.org/wiki/HotRodProtocol
I've updated the wiki with the following stuff:
- Renamed replaceIfEquals to replaceIfUnmodified
- Added remove and removeIfUnmodified.
- Added containsKey command.
- Added getWithCas command so that the cas value can be returned. I
decided on a separate command rather than adding cas to the get return
because you don't always want cas to be returned; having a separate
command makes better use of network bandwidth.
- Added stats command. JMX attributes are basically accessible through
this, including cache size.
- Added error handling section and updated status codes.
Note that Mircea added some interesting comments and I replied to them
directly in the wiki.
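For illustration, the replaceIfUnmodified/getWithCas semantics can be sketched in-memory like this (hypothetical names and types, not the Hot Rod server implementation): a replace only succeeds if the caller's cas value still matches the stored one.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CasMap {
    static final class Entry {
        final String value;
        final long cas; // version stamp returned by getWithCas
        Entry(String value, long cas) { this.value = value; this.cas = cas; }
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();
    private final AtomicLong casCounter = new AtomicLong();

    // put assigns a fresh cas value and returns it.
    public long put(String key, String value) {
        long cas = casCounter.incrementAndGet();
        map.put(key, new Entry(value, cas));
        return cas;
    }

    // getWithCas: returns the entry together with its version.
    public Entry getWithCas(String key) { return map.get(key); }

    // replaceIfUnmodified: succeeds only if the stored cas still matches.
    public boolean replaceIfUnmodified(String key, String newValue, long expectedCas) {
        Entry cur = map.get(key);
        if (cur == null || cur.cas != expectedCas) return false;
        return map.replace(key, cur, new Entry(newValue, casCounter.incrementAndGet()));
    }
}
```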
Still remaining to add:
- Commands: putForExternalRead, evict, clear, version, name and quit
commands.
- Passing flags.
Regards,
p.s. Updating this has been quite a struggle due to F12 + FF 3.5.5
crashing at least 5 times, plus parts of the wiki disappearing after
publishing them!
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache