state transfer timed out, where to configure?
by Andreas Kruthoff
Hi dev
I'm running into the following exception on the 3rd node, which is joining an existing 2-node cluster.
Distributed cluster, file store with a few million entries.
The 3rd node times out during startup, I think: "Initial state transfer
timed out". How can I configure/increase the timeout in my infinispan.xml?
Is it <state-transfer timeout="1200000"/> within <distributed-cache/>?
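For reference, <state-transfer> under <distributed-cache> should indeed be the right place in the 7.x schema. The programmatic equivalent, as a sketch against the 7.x configuration API (the cache name is taken from the stack trace below):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class StateTransferTimeout {
       public static void main(String[] args) throws Exception {
          DefaultCacheManager cm = new DefaultCacheManager("infinispan.xml");
          Configuration cfg = new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.DIST_SYNC)
                .stateTransfer()
                   .timeout(1200000)            // ms to wait for initial state transfer
                   .awaitInitialTransfer(true)  // block cache start until state arrives
                .build();
          cm.defineConfiguration("infinicache-lbd-imei", cfg);
          cm.getCache("infinicache-lbd-imei"); // this is the call that was timing out
          cm.stop();
       }
    }

Alternatively, awaitInitialTransfer(false) lets the cache start without blocking on the initial transfer at all.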
thanks for help
-andreas
Exception in thread "main" org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
    at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170)
    at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
    at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
    at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
    at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
    at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:216)
    at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:813)
    at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:584)
    at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539)
    at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416)
    at ch.nexustelecom.lbd.engine.ImeiCache.init(ImeiCache.java:49)
    at ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120)
    at ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169)
    at ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196)
    at ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83)
Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache infinicache-lbd-imei on m4sxhpsrm672-11986
    at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
    ... 14 more
Asynchronous cache's "void put()" call expectations changed from 6.0.0 to 6.0.1/7.0
by Galder Zamarreño
Hi all,
@Paul, this might be important for WF if it uses async repl caches (I think the same applies to distributed async caches too).
Today I've been trying to upgrade the Infinispan version in Hibernate master from 6.0.0.Final to 7.0.0.Beta1. Overall it has all worked fine, but there's one test that has started failing.
Essentially, this is a clustered test for a repl async cache (w/ cluster cache loader) where a non-owner cache node does a put() and then immediately, on the same cache, calls a get(). The test is failing because the get() does not see the effects of the put(), even though both operations are called on the same cache instance.
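To make the pattern concrete, roughly this is what the test does (a sketch; the cache name and key are made up, and cacheManager belongs to a node that does not own "k"):

    Cache<String, String> cache = cacheManager.getCache("entity-cache"); // REPL_ASYNC + cluster cache loader
    cache.put("k", "v");       // async repl: returns before the write is applied
    String v = cache.get("k"); // passed on 6.0.0, but may no longer see "v"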
According to Dan, this should have been happening since [1] was implemented, but it really started happening with [2], when lock delegation was enabled for replicated caches (EntryWrappingInterceptor.isUsingLockDelegation is now true, whereas in 6.0.0 it was false).
Not sure we ever set expectations in this regard, but clearly it's a big change in terms of expectations on when "void put()" completes for async repl caches. I'm not sure how we should handle this, but it definitely needs some discussion, and the documentation/javadoc should be adjusted if needed. Can we do something differently?
Independently of how we resolve this, this is once again the result of trying to shoehorn async behaviour into sync APIs. Any async caches (DIST, INV, REPL) should really be accessed exclusively via the AsyncCache API, where you return quickly and use the future, plus any listener attached to it (a bit à la Java 8's CompletableFuture callbacks), as the way to signal that the operation has completed. Then you have an API and cache mode that make sense together and are consistent with how async APIs work.
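For illustration, this is the kind of usage it would push people towards, using the AsyncCache API we already have today (a sketch, assuming an already-started Cache<String, String> named cache):

    // Explicitly async: completion is signalled via the future/listener,
    // independently of whether the cache is configured sync or async.
    NotifyingFuture<String> f = cache.putAsync("k", "v");
    f.attachListener(done -> {
       // runs once the put has actually completed
       System.out.println("put completed");
    });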
Right now, a repl async cache's "void put()" is not very well defined. Does it return when the message has been put on the network? What impact does it have on the local cache contents?
Also, a very big problem of this change of behaviour is that, if left like that, users are forced to code differently against the very same "void put()" API depending on the configuration (async vs sync). As clearly shown by the issue above, this is very confusing. It's a lot more logical IMO, and I already sent an email on this very same topic [3] back in January, for sync vs async to be based purely on the API used, forgetting about the static configuration flag that marks the cache as async or sync.
Cheers,
[1] https://issues.jboss.org/browse/ISPN-2772
[2] https://issues.jboss.org/browse/ISPN-3354
[3] http://lists.jboss.org/pipermail/infinispan-dev/2014-January/014448.html
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Student / Contributor projects
by Tristan Tarrant
Hi all,
I was told that our student/contributor project page is awfully
out-of-date, so we're in need of a big refresh. We should also move that
page to the website.
Here are some ideas I have collected:
- ISPN-5185 Add topology headers to the RESTful server
- ISPN-5186 intelligent (L2/L3) Java REST client
- ISPN-5187 Node.js HotRod client (either pure-Javascript or based on
the C++ client)
- ISPN-5188 Support for JSON as indexable/queryable objects using the
ProtoBuf schema definitions (this could be extended to XML too)
- ISPN-5189 Allow setting a "computing" function (using JDK 8's lambdas)
on a cache, so that entries can be computed on-demand when they are
missing/expired (see the sketch after this list)
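To make ISPN-5189 concrete, a rough sketch of what such an API could look like (setComputeFunction and the quote types are hypothetical; nothing like this exists yet):

    Cache<String, Quote> quotes = cacheManager.getCache("quotes");
    // hypothetical: invoked on a cache miss or on an expired entry
    quotes.setComputeFunction(symbol -> quoteService.fetch(symbol));
    Quote q = quotes.get("RHT"); // computed on demand if missing/expired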
More ideas, please!
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
Distribution-aware ClusterLoader
by Manik Surtani
Greetings. :-)
I chatted with a few of you offline about this earlier; does anyone have
any thoughts on a ClusterLoader implementation that, instead of
broadcasting to the entire cluster, unicasts to the owners of a given key
by inspecting the DistributionManager? I'm thinking of using this as a
lazy/on-demand form of state transfer in a distributed cluster, so that
joiners don't trigger big chunks of data moving around eagerly.
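Roughly, the load path could look like this (a sketch; remoteGet is a hypothetical helper standing in for the ClusteredGetCommand plumbing the existing ClusterLoader already uses):

    public Object load(Object key) {
       DistributionManager dm = cache.getAdvancedCache().getDistributionManager();
       List<Address> owners = dm.locate(key); // owners of this key under the current CH
       // unicast the remote get to the owners only, instead of broadcasting cluster-wide
       return remoteGet(owners, key);
    }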
- M
allowDuplicateDomains set to true for CDI?
by Sebastian Łaskawiec
Hey!
When I was moving the CDI quickstart to a new repository (from
infinispan-quickstart to jboss-jdg-quickstarts), I noticed that some of
our users will probably try to put the Infinispan library inside WAR/lib
and run it locally with the CDI extension.
This ends up with a JmxDomainConflictException on WildFly (because the
JMX domain for "DefaultCacheManager" will probably already be registered).
The workaround is simple: the user has to provide his own
EmbeddedCacheManager producer with the allowDuplicateDomains option turned on.
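i.e. something like this (a sketch against the 6.x/7.x global configuration API):

    import javax.enterprise.context.ApplicationScoped;
    import javax.enterprise.inject.Produces;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class CacheManagerProducer {
       @Produces
       @ApplicationScoped
       public EmbeddedCacheManager cacheManager() {
          return new DefaultCacheManager(new GlobalConfigurationBuilder()
                .globalJmxStatistics().allowDuplicateDomains(true) // avoids JmxDomainConflictException
                .build());
       }
    }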
In my opinion this option should be enabled by default for the CDI
extension. If you agree with me, I'll make the necessary changes.
Any thoughts?
Best regards
Sebastian
Experiment: Affinity Tagging
by Sanne Grinovero
Hi all,
I'm playing with an idea for some internal components to be able to
"tag" the key of an entry so that it is stored in a very specific
segment of Infinispan's consistent hash (CH).
Conceptually the plan is easy to understand by looking at this patch:
https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174...
Hacking the change into ReplicatedConsistentHash is quite barbaric;
please bear with me, as I couldn't figure out a better way to
experiment with this. I'll probably want to extend this class, but
then I'm not sure how to plug it in.
What would you all think of such a "tagging" mechanism?
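To make the discussion concrete, the shape of the idea is roughly this (hypothetical names, not the actual patch):

    // a key that pins itself to a specific CH segment:
    public interface SegmentAffinityKey {
       int getSegmentId(); // the segment this key should map to
    }

    // a ConsistentHash implementation could then short-circuit segment lookup:
    public int getSegment(Object key) {
       if (key instanceof SegmentAffinityKey)
          return ((SegmentAffinityKey) key).getSegmentId();
       return hashBasedSegment(key); // normal hash-based routing (hypothetical helper)
    }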
# Why I didn't use the KeyAffinityService
- I need to use my own keys, not the meaningless stuff produced by the service
- the extensive usage of Random in there doesn't seem suited to a
performance-critical path
# Why I didn't use the Grouping API
- I need to pick the specific storage segment, not just co-locate with
a different key
The general goal is to make it possible to "tag" all entries of an
index, and have an independent index for each segment of the CH. The
resulting effect would be that when the primary owner of any key K
makes an update, and this triggers an index update, that update is:
A) going to happen on the same node -> no need to forward to a
"master indexing node";
B) each such write on the index happens on the same node, which is the
primary owner of all the written entries of the index.
There are two additional nice consequences:
- there would be no need to perform a reliable "master election":
single ownership is already guaranteed by Infinispan's essential
logic, so it would reuse that;
- the propagation of index writes from the primary owner (which is
the local node, by definition) to the backup owners could use
REPL_ASYNC for most practical use cases.
So the net result is that the indexing overhead is reduced to 0 (ZERO)
blocking RPCs if async replication is acceptable, or to only one blocking
round trip if very strict consistency is required.
Thanks,
Sanne