Cache chaining, async operations and transactions
by philippe van dyck
Hi all,
I am searching for the ideal configuration on EC2 and I have a question about asynchronous operations.
I would like to chain two caches: 1) a file cache and 2) an S3 cache.
I would also like the transaction to return as soon as the file cache has committed, with the S3 cache committing the same transaction asynchronously.
Is this configurable?
Right now I use the configuration below, but it is not working: I have to wait for the S3 cache to finish what seems to be an async operation.
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:infinispan:config:4.0">
   <global>
      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
         <properties>
            <property name="configurationFile" value="jgroups.xml" />
         </properties>
      </transport>
   </global>
   <default>
      <transaction transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup" />
      <clustering mode="distribution">
         <l1 enabled="true" lifespan="100000" />
         <hash numOwners="2" rehashRpcTimeout="120000" />
      </clustering>
      <loaders passivation="false" shared="true" preload="false">
         <loader class="org.infinispan.loaders.file.FileCacheStore"
                 fetchPersistentState="true" ignoreModifications="false"
                 purgeOnStartup="true">
            <properties>
               <property name="location" value="/tmp" />
            </properties>
         </loader>
         <loader class="org.infinispan.loaders.s3.S3CacheStore"
                 fetchPersistentState="false" ignoreModifications="false"
                 purgeOnStartup="false">
            <properties>
               <property name="awsAccessKey" value="***" />
               <property name="awsSecretKey" value="***" />
               <property name="bucketPrefix" value="store" />
            </properties>
            <async enabled="true" threadPoolSize="10" />
         </loader>
      </loaders>
      <eviction strategy="LRU" wakeUpInterval="10000" maxEntries="1000" />
      <unsafe unreliableReturnValues="true" />
   </default>
</infinispan>
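The desired behaviour can be sketched in plain Java, independently of the Infinispan configuration above: the first store commits in the caller's thread and the second is handed off to a thread pool, so put() returns as soon as the fast store is done. All class and method names here are illustrative, not Infinispan API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChainedStoreSketch {
    interface Store { void put(String key, String value); }

    // Stand-in for both the file store and the S3 store.
    static class MapStore implements Store {
        final Map<String, String> data = new ConcurrentHashMap<>();
        public void put(String key, String value) { data.put(key, value); }
    }

    static class ChainedStore implements Store {
        private final Store fast;  // committed in the caller's thread
        private final Store slow;  // committed on a background pool
        private final ExecutorService pool = Executors.newFixedThreadPool(10);

        ChainedStore(Store fast, Store slow) { this.fast = fast; this.slow = slow; }

        public void put(String key, String value) {
            fast.put(key, value);                     // synchronous: caller waits for this
            pool.submit(() -> slow.put(key, value));  // asynchronous: caller does not wait
        }

        void shutdown() { pool.shutdown(); }
    }
}
```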
Thanks,
Phil
Adding getMBeanServer to AdvancedCache
by Galder Zamarreno
Hi,
For implementing the stats command in the memcached text server, I'm
planning to use JMX to get the relevant stats, etc. However, to make
sure I hit the right MBeanServer, I thought of adding a getMBeanServer()
method to AdvancedCache that returns whichever MBeanServer has been used
to register the MBeans. The MBeanServer actually comes from the
corresponding MBeanServerLookup implementation.
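A minimal sketch of the proposal, using JDK classes only; the lookup interface and the cache stub are illustrative stand-ins, not the real Infinispan types:

```java
import javax.management.MBeanServer;
import java.lang.management.ManagementFactory;

public class MBeanServerSketch {
    // Stand-in for Infinispan's MBeanServerLookup.
    interface MBeanServerLookup { MBeanServer getMBeanServer(); }

    // A default lookup resolving the platform MBeanServer.
    static class PlatformLookup implements MBeanServerLookup {
        public MBeanServer getMBeanServer() {
            return ManagementFactory.getPlatformMBeanServer();
        }
    }

    // Stand-in for AdvancedCache with the proposed accessor: it remembers
    // the server the lookup resolved, so callers hit the same server that
    // holds the registered MBeans.
    static class AdvancedCacheStub {
        private final MBeanServer server;
        AdvancedCacheStub(MBeanServerLookup lookup) { this.server = lookup.getMBeanServer(); }
        MBeanServer getMBeanServer() { return server; }
    }
}
```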
Has anyone got any objections to this?
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Management of shared caches
by Brian Stansberry
Had a look at DefaultCacheManager today and I see it has no mechanism
for controlling the stopping of shared caches. Multiple independent
callers to getCache("foo") can get a ref to the foo cache, and then any
of them can call stop() on it.
A few possibilities come to mind:
1) Add a releaseCache method, do some reference counting, and stop the
cache when all refs are released. Remove the cache from the "caches" map.
2) And/or, wrap the cache in a wrapper whose stop() method doesn't call
through to the wrapped cache until stop() has been called on all wrappers
3) Advise in the javadoc that shared caches are supported, but if they
are used it's the user's responsibility to ensure that exactly one
caller calls stop() on the cache. At least for the AS use cases, this
should be OK, since an Infinispan Cache will be analogous to a JBC
Region, and there's only one user for a given region.
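Options 1 and 2 can be combined into a small reference-counting wrapper; this is a sketch with illustrative names, not the real DefaultCacheManager API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCacheSketch {
    // Stand-in for the real cache.
    static class UnderlyingCache {
        volatile boolean stopped = false;
        void stop() { stopped = true; }
    }

    static class RefCountedCache {
        private final UnderlyingCache delegate;
        private final AtomicInteger refs = new AtomicInteger();

        RefCountedCache(UnderlyingCache delegate) { this.delegate = delegate; }

        // Each getCache("foo") call would increment the count.
        RefCountedCache acquire() { refs.incrementAndGet(); return this; }

        // stop() only reaches the real cache when the last reference
        // has been released.
        void stop() {
            if (refs.decrementAndGet() == 0) delegate.stop();
        }
    }
}
```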
--
Brian Stansberry
Lead, AS Clustering
JBoss by Red Hat
ISPN-296 closed but not sure why
by Bryan Grunow
The issue has been closed but I can still reproduce it. I'm not sure if
it was closed because it could not be reproduced or because it was not
considered a bug.
https://jira.jboss.org/jira/browse/ISPN-296
I've attached a simple test to reproduce the problem. Just a reminder:
when running the test, make sure to add -Dinfinispan.query.enabled=true
-Dinfinispan.query.indexLocalOnly=true to enable query support.
Bryan
---------------------------------------------------------------------
This transmission (including any attachments) may contain confidential information, privileged material (including material protected by the solicitor-client or other applicable privileges), or constitute non-public information. Any use of this information by anyone other than the intended recipient is prohibited. If you have received this transmission in error, please immediately reply to the sender and delete this information from your system. Use, dissemination, distribution, or reproduction of this transmission by unintended recipients is not authorized and may be unlawful.
Query iterators returning null in result set
by Bryan Grunow
It appears something is broken in CR3 when iterating query results. The
count comes back correct but the values returned are all null. It does
work when getting the results via the CacheQuery.list() method.
This appears to be because the list() method uses
KeyTransformationHandler.stringToKey(key) to recover the key, whereas
neither of the iterators does, so the keys they use for the lookups
are still prefixed.
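A toy illustration of the suspected mismatch, not the real Infinispan code: the index stores keys in a prefixed string form, list() strips the prefix back to the original key before looking the entry up, but an iterator that skips the stringToKey() step looks up the still-prefixed key and gets null.

```java
public class PrefixedKeySketch {
    // Illustrative prefix; the real handler encodes the key's type.
    static final String PREFIX = "java.lang.String:";

    // What the index stores for a cache key.
    static String keyToString(String key) { return PREFIX + key; }

    // What list() does before hitting the cache; the iterators skip this.
    static String stringToKey(String s) { return s.substring(PREFIX.length()); }
}
```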
Bryan
Improving CacheStore.loadAll()
by Manik Surtani
Adrian,
Thanks for the transcript between yourself and Philippe below. Here are my thoughts:
* loadAll() is generally overused and this can get expensive. I've changed purgeExpired() in certain impls to not use loadAll().
* preloading the cache also calls loadAll(). I have a suggestion for this here - https://jira.jboss.org/jira/browse/ISPN-310 - but this won't be in place till 4.1.0.
* rehashing isn't as bad as you think - the rehashing of entries in stores only takes place when the cache store is *not* shared. Any use of an expensive, remote store (such as S3, JDBC) would typically be shared between Infinispan nodes and as such these will not be considered when rehashing.
That said, stuff can be improved a bit, specifically with the addition of something like loadKeys(Set<Object> excludes). This will allow the rehash code to load just the necessary keys, excluding keys already considered from the data container directly, and then inspect each key to test if the key needs to be rehashed elsewhere. If so, the value could be loaded using load().
I have captured this in
https://jira.jboss.org/jira/browse/ISPN-311
The problem, I think, with maintaining metadata separately is that it adds an additional synchronization point when updating that metadata, whether this is expiration data per key or even just a list of keys in the store for a quick loadKeys() impl. But I am open to ideas; after all, these are just CacheStore-specific implementation details.
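The loadKeys(Set&lt;Object&gt; excludes) idea from ISPN-311 can be sketched in plain Java; the store class below is an illustrative in-memory stand-in, not a real CacheStore implementation. The point is that rehashing asks for keys only, minus those already seen in the data container, and fetches individual values with load() only for keys that actually need to move:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LoadKeysSketch {
    static class InMemoryStore {
        final Map<Object, Object> data = new HashMap<>();

        // The proposed cheap operation: keys only, with exclusions applied
        // in the store, so no values are deserialized or transferred.
        Set<Object> loadKeys(Set<Object> excludes) {
            Set<Object> keys = new HashSet<>(data.keySet());
            keys.removeAll(excludes);
            return keys;
        }

        // Values are fetched one at a time, only when a key must move.
        Object load(Object key) { return data.get(key); }
    }
}
```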
Cheers
Manik
On 3 Dec 2009, at 16:20, Adrian Cole wrote:
> <adriancole> aloha all
> <pvdyck> hi all
> <adriancole> we are talking about the rehash concern wrt bucket-based
> cachestores
> <pvdyck> here is transcript
> <pvdyck> it seems that it first loops on the set from the store to
> compare the keys with the keys in memory
> <pvdyck> [17:01] pvdyck: the set of keys present in memory will
> always be smaller ... so maybe looping on this one and comparing with
> the keys present in the store is a good optimization
> <pvdyck> [17:01] pvdyck: I will give you the exact file and line in a moment
> <pvdyck> [17:02] pvdyck: ok LeaveTask:74
> <pvdyck> [17:03] pvdyck: actually
> org.infinispan.distribution.LeaveTask line 74 from CR2
> <adriancole> for context, the current design implies a big load, pvdyck, right?
> <pvdyck> (is it sooo early ... is there and #infinispan irc channel
> display next to the coffee machine ? ;-)
> <adriancole> :)
> <adriancole> pvdyk, I can see that changing the loop will reduce the
> possiblity for overloading a node
> <pvdyck> the design implies calling loadAllLockSafe() ... loading all
> the entries (K+V) from the cache -> very bad idea actually
> <adriancole> seems that keys should be in a separate place
> <adriancole> wdyt?
> <adriancole> a lot of large systems have a separate area for metadata
> and payload
> <adriancole> one popular one is git ;)
> <pvdyck> the simple idea of having this loadAll thing is a problem
> <pvdyck> if it ever get called ... I am quite sure the system will hang
> <pvdyck> and indeed you are right, there is no reason to bring the
> values with it ... keys are more than enough!
> <adriancole> so, here's the thing
> <adriancole> the whole bucket-based thing is suppopsed to help avoid
> killing entries who share the same hashCode
> <adriancole> and there's also another issue with encoding keys
> <adriancole> since they might be typed and not strings
> <pvdyck> is it still the case with the permanent hash ?
> <pvdyck> oops sorry ... consistent hash
> <adriancole> well, I'm talking about the hash of the thing you are putting in
> <adriancole> not the consistent hash alg
> <pvdyck> ok, understood...
> <adriancole> ya, so I think that if we address this, we're ok
> <adriancole> in the blobstore (jclouds) thing, we could address
> <adriancole> by having a partition on hashCode
> <adriancole> and encoding the object's key as the name of the thing in s3
> <adriancole> or whereever
> <adriancole> so like "apple" -> "bear"
> <adriancole> "bear".hashCode/encode("apple")
> <pvdyck> actually, I don't think the problem should end up in the
> hands of the store itself
> <adriancole> that would be convenient :)
> <adriancole> in that case, I think that ispn may need a metadata
> store and value store
> <adriancole> since currently the typed bucket object contains both
> keys and values
> <adriancole> which makes it impossible to get to one without the other
> <adriancole> I'm pretty sure
> <pvdyck> looks like a lot of changes... but it is a path to explore!
> <pvdyck> we obviously need to wait for them to appear ;-)
> <adriancole> firstly, I think we'll need a patch for the s3
> optimization so you can work :)
> <adriancole> and also, this design needs to be reworked for sure
> * Received a malformed DCC request from pvdyck.
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Howto: cglib and guice living in perfect harmony
by philippe van dyck
Hi all,
if you try to use jClouds or the Infinispan S3 cache store with cglib, as qi4j does, you will probably get an ugly exception telling you that cglib is nowhere to be found...
The only way to get rid of it is to disable guice's custom loader by adding this flag to your command line: -Dguice.custom.loader=false
You will also need to add this library to your pom.xml:
<dependency>
   <groupId>javax.inject</groupId>
   <artifactId>javax.inject</artifactId>
   <version>1</version>
</dependency>
Hope it helps,
Cheers,
Philippe
P.S.: I spent hours trying to figure out what was happening since all the libs were on the classpath.
Tests failing: ports shared across modules
by Sanne Grinovero
Hello,
Looking at hudson's tests on
http://hudson.jboss.org/hudson/view/Infinispan/job/Infinispan-trunk-JDK6-tcp
it appears the number of failed tests changes from build to build, even
with documentation changes or other unrelated changes.
Two examples:
Build #1031 :
Changes: Typos in javadocs
Test Result (4 failures / -1)
a javadoc change fixed a test?
Build #1033 :
Changes: [ISPN-301] (Closing the Lucene Directory will close the cache too)
Test Result (8 failures / +2)
So while I only changed something related to Lucene, the failures in
core increased?
In some of the test errors you can find evidence of communication
between test scenarios, like these:
http://hudson.jboss.org/hudson/view/Infinispan/job/Infinispan-trunk-JDK6-...
http://hudson.jboss.org/hudson/view/Infinispan/job/Infinispan-trunk-JDK6-...
first one from the Lucene module, second from the Tree module: they
both have the node vmg22.mw.lab.eng.bos.redhat.com-42912
and are complaining about an unexpected number of participants.
Looking at the other errors, it always looks as if "someone else"
changed the cache, but there's no evidence of who, so I think we should
solve the isolation problem first?
Both stacktraces show that they're using
org.infinispan.test.MultipleCacheManagersTest.createClusteredCaches(MultipleCacheManagersTest.java:137)
to setup the caches, so it doesn't appear to be a problem with these
two testcases.
These kinds of interactions don't seem to happen inside a single
module; could it be a classloader problem? A static threadlocal defines
the jgroups port to use, but it's "static" in a per-module world
instead of globally static.
I've added some logging to
org.infinispan.test.fwk.JGroupsConfigBuilder, 2 snippets of the
result:
[org.infinispan.test.fwk.JGroupsConfigBuilder] (pool-1-thread-1) TCP
bind_port:7900 ClassLoder:org.apache.maven.surefire.booter.IsolatedClassLoader@2e93d13f
[...many lines..]
[org.infinispan.test.fwk.JGroupsConfigBuilder] (pool-1509-thread-10)
TCP bind_port:7900
ClassLoder:org.apache.maven.surefire.booter.IsolatedClassLoader@2d1a2259
I see the two classloaders being different, and while the threads are
different they are sharing the same bind_port 7900.
Looking at http://maven.apache.org/plugins/maven-surefire-plugin/examples/class-load...
it looks like from Surefire 2.4.3 the default is to use a shared
system classloader, but there's a little warning at the bottom of the
page about it not being possible to avoid isolating the classloader
while using forkMode=none.
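If the per-module static really is the culprit, one cross-classloader-safe workaround is to keep the counter somewhere genuinely JVM-global, such as the System properties. This is only a sketch (the property name and base port are illustrative, and the real fix may belong in Surefire configuration instead):

```java
public class GlobalPortAllocator {
    static final String PROP = "test.jgroups.next.port"; // illustrative property name
    static final int BASE_PORT = 7900;

    static int nextPort() {
        // System.getProperties() returns one shared Properties instance for
        // the whole JVM, so locking on it and storing the counter in it
        // works even when this class has been loaded twice by two different
        // module classloaders, each with its own copy of any static field.
        java.util.Properties props = System.getProperties();
        synchronized (props) {
            String current = props.getProperty(PROP);
            int port = (current == null) ? BASE_PORT : Integer.parseInt(current);
            props.setProperty(PROP, Integer.toString(port + 1));
            return port;
        }
    }
}
```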
Ideas?
P.S. Where can I get the sources of maven-surefire-plugin version 2.4.3-JBOSS?
Cheers,
Sanne
Repo out of space?
by Vladimir Blagojevic
Is this some internal joke svn is playing on me or is it for real?
svn: Commit failed (details follow):
svn: Commit failed (details follow):
svn: Can't close file
'/mnt/n4aphx2-3.storage.phx2.redhat.com/svn/repos/infinispan/db/transactions/1297-2.txn/node.0.0':
No space left on device
svn: MKACTIVITY of
'/repos/infinispan/!svn/act/20cdec8c-2501-0010-b8ab-a39119f90591': 500
Internal Server Error (https://svn.jboss.org)