Infinispan 7.2.0.Beta1 released
by Dan Berindei
Dear Infinispan community,
Infinispan 7.2.0.Beta1 is now available!
Along with the usual assortment of bug fixes, this release includes a few
exciting new features:
* Server-side scripting with JSR-223 (ISPN-5013)
* Initial support for the JCache API over HotRod (ISPN-4955)
* Improved size-based eviction, implemented on top of Doug Lea's
ConcurrentHashMapV8 (ISPN-3023)
The blog post has more details (and links):
http://blog.infinispan.org/2015/03/infinispan-720beta1-released.html
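For readers unfamiliar with size-based eviction, here is a rough illustration of the idea only; the actual ISPN-3023 work sits on top of Doug Lea's ConcurrentHashMapV8 and is fully concurrent, unlike this single-threaded LinkedHashMap sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of size-based (LRU-style) eviction only; this is NOT the
// ISPN-3023 implementation, which is built on ConcurrentHashMapV8.
public class BoundedCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCacheSketch(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the bound is exceeded
        return size() > maxEntries;
    }
}
```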
Cheers
Dan
9 years, 7 months
CacheEntry vs. Metadata
by Radim Vansa
Hi,
I have looked at the class hierarchy of CacheEntry and its descendants
several times already. Since the documentation of those interfaces is
usually a one-liner, I'd like to ask for the big picture:
So we have CacheEntry, which implements MetadataAware - therefore it
contains metadata, which defines the lifespan, maxIdle time and version.
However, the CacheEntry interface itself also contains getters for
lifespan and idle time, and MortalCacheEntry hosts the fields - so I see
some duplication with the Metadata object.
Beyond the expiration-related stuff (and the common key-value getters),
CacheEntry has several methods for querying and manipulating its state -
isChanged, isValid, isRemoved etc. It's a bit confusing that this is
presented not as a single state but rather as a matrix of boolean flags.
When I tried to implement EntryProcessor several weeks ago (I stopped
the attempt since this should be implemented in Infinispan 8), I had
quite a hard time figuring out which flags should be set, and how, when
I wanted to update or remove the entry. undelete() and skipLookup() are
not obvious, either.
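To make the concern concrete, here is a hypothetical sketch (all names are mine, not Infinispan API) of collapsing such a flag matrix into one explicit state, so impossible combinations cannot be expressed:

```java
// Hypothetical sketch only: a single state enum instead of a matrix of
// isChanged/isValid/isRemoved booleans. Names are illustrative, not Infinispan's.
public class EntryStateSketch {
    public enum State { UNTOUCHED, CHANGED, REMOVED, INVALID }

    // Derive the single state from the flag matrix: invalidity masks
    // everything, and removal wins over modification.
    public static State fromFlags(boolean changed, boolean removed, boolean valid) {
        if (!valid) return State.INVALID;
        if (removed) return State.REMOVED;
        if (changed) return State.CHANGED;
        return State.UNTOUCHED;
    }
}
```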
Is the reason for having Immortal/Mortal/Transient/TransientMortal
entries + the Metadata* versions + the *Values variants to use memory
optimally?
Then there are the ReadCommitted and RepeatableRead entries - are these
ever stored in the data container, or just in the context? What's the
exact relation between the classes implementing InternalCacheEntry and
MVCCEntry? Then there's DeltaAwareCacheEntry - this does not fit into
the picture for me at all.
I am also not sure about the relation between EmbeddedMetadata and
InternalMetadataImpl.
Thanks for your insight!
Radim
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team
9 years, 7 months
Early Access builds for JDK 9 b53 and JDK 8u60 b05 are available on java.net
by Rory O'Donnell
Hi Galder,
The Early Access build for JDK 9 b53 <https://jdk9.java.net/download/>
is available on java.net; a summary of changes is listed here
<http://www.java.net/download/jdk9/changes/jdk9-b53.html>.
The Early Access build for JDK 8u60 b05 <http://jdk8.java.net/download.html>
is available on java.net; a summary of changes is listed here
<http://www.java.net/download/jdk8u60/changes/jdk8u60-b05.html>.
I'd also like to use this opportunity to point you to JEP 238:
Multi-Version JAR Files [0],
which is currently a Candidate JEP for JDK 9.
Its goal is to extend the JAR file format to allow multiple, JDK
release-specific versions of class files to coexist in a single file.
An additional goal is to backport the run-time changes to JDK 8u60,
thereby enabling JDK 8 to consume multi-version JARs. For a detailed
discussion, please see the corresponding thread on the core-libs-dev
mailing list. [1]
Please keep in mind that a JEP in the Candidate state is merely an idea
worthy of consideration
by JDK Release Projects and related efforts; there is no commitment that
it will be delivered in
any particular release.
Comments, questions, and suggestions are welcome on the core-libs-dev
mailing list. (If you haven't already subscribed to that list, then
please do so first, otherwise your message will be discarded as spam.)
Rgds, Rory
[0] http://openjdk.java.net/jeps/238
[1]
http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031461...
--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
9 years, 7 months
state transfer timed out, where to configure?
by Andreas Kruthoff
Hi dev
I'm running into the following exception on the 3rd node joining an
existing 2-node cluster. Distributed cluster, file store with a few
million entries.
The 3rd node times out during startup, I think: "Initial state transfer
timed out". How can I configure/increase the timeout in my infinispan.xml?
Is it <state-transfer timeout="1200000"/> within <distributed-cache/>?
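i.e. something like this in my infinispan.xml (a sketch based on my reading of the schema, with the cache name taken from the stack trace below; please correct me if the placement is wrong):

```xml
<distributed-cache name="infinicache-lbd-imei">
    <!-- initial state transfer timeout, in milliseconds (20 minutes here) -->
    <state-transfer timeout="1200000"/>
</distributed-cache>
```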
thanks for help
-andreas
Exception in thread "main" org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
        at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170)
        at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
        at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
        at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
        at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
        at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:216)
        at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:813)
        at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:584)
        at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539)
        at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416)
        at ch.nexustelecom.lbd.engine.ImeiCache.init(ImeiCache.java:49)
        at ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120)
        at ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169)
        at ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196)
        at ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83)
Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache infinicache-lbd-imei on m4sxhpsrm672-11986
        at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
        ... 14 more
9 years, 8 months
Is it time to reduce the complexity of the CacheManager?
by Sanne Grinovero
All,
at the beginning of time, the expectation was that an application
server (aka WildFly) would have a single CacheManager, and different
applications would define their different Cache configuration on this
app-server singleton.
In that primitive world that sounded reasonable, as system
administrators wouldn't want to manage firewalls and port assignments
for a new Transport for each deployed application.
Then the complexities came:
- deployments are asymmetric compared to the application server
- each deployment has its own ClassLoader
- deployments start/stop independently from each other
At that point a considerable investment was made to get lazily
starting Caches, per-Cache sets of Externalizer(s) to isolate
classloaders, ClassLoader-aware Cache decorators, up to the recently
introduced Cache-dependency rules for stopping dependent Caches last.
Not to mention that we now have complex per-Cache View handling, which
results in performance problems such as ISPN-4842.
There are some more complexities coming:
Hibernate OGM wishes to control the context of deserialization - this
is actually an important optimisation to keep garbage production under
control - and applications might also want to register custom RPC
commands; this has been a long-standing problem for Search (among
others).
Infinispan Query does have custom RPC commands, and this just happens
to work because the Infinispan core module has an explicit dependency
on query... but that's a twisted dependency scheme, as the module would
need to list each possible extension point: it's not something you can
do for all projects using it.
Interestingly enough, there is a very simple solution which wipes out
all of the above complexity, and also resolves some pain points:
today the app server supports the FORK protocol from JGroups, so we
can get rid of the idea of a single CacheManager per appserver, and
create one per classloader and *within* the classloader.
By doing so, we can delete all the code about per-Cache classloaders,
remove the CacheView concept, and also allow the deployment (the
application) which needs caching services to register whatever it
wants. It could register custom interceptors, commands, externalizers,
CacheStore(s), etc. without pain.
Finally, we could get rid of the concept that Caches start lazily. I'd
change to a simplified lifecycle which expects the CacheManager to
initialize, then allows Cache configurations to be defined, and then
starts it all atomically.
At that point, you'd no longer be responsible for complex dependency
resolution across caches.
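To illustrate, a hypothetical sketch of that lifecycle (all names are made up for this email, not a proposed API): configurations are only accepted before the single atomic start, so there is no lazy per-Cache startup and no cross-cache dependency ordering to resolve.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the proposed lifecycle; names are illustrative only.
public class AtomicStartManager {
    private final Map<String, String> configs = new LinkedHashMap<>();
    private boolean started;

    // Cache configurations may only be defined before start().
    public void defineConfiguration(String cacheName, String config) {
        if (started)
            throw new IllegalStateException("already started: no lazy cache definitions");
        configs.put(cacheName, config);
    }

    // All caches start together, atomically; returns the started cache names.
    public Set<String> start() {
        started = true;
        return Collections.unmodifiableSet(configs.keySet());
    }
}
```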
I'd hope this would allow OGM to get the features it needs, and also
significantly simplify the definition of the boot process for any user;
not least, it should simplify quite a lot of code which is now being
maintained.
A nice follow-up task would be for WildFly to be able to "pick up" a
configuration file from the deployment and inject a matching
CacheManager; this requires a bit of cooperation with the app server
team, and an agreement on a conventional configuration name.
This should be done by WildFly (and not the app), so that the user
deployment can lookup the CacheManager by JNDI without needing to
understand how to wire things up in the FORK channel.
I also believe this is a winner from a usability point of view, as many
of the troubles I see from forums and customers are about "how do I
start this up?". Remember that our guides essentially teach you to
either take the AS CacheManager, or to start your own. Neither of those
is the optimal solution, and people get into trouble.
WDYT?
Sanne
9 years, 8 months
HotRodClient SocketTimeout behavior during Continuous Queries
by Stelios Koussouris
While looking at Remote Listeners and Continuous Queries last week, I noticed that after the default 60 secs of inactivity, the socket kept open for server-to-client push of grid events was closed with a java.net.SocketTimeoutException, and the client had no way of recovering.
I am not sure whether this was an intentional feature to close inactive connections, but the way to overcome it was to set socketTimeout(0).
However, even this solution is not perfect, as in an environment where client and server are separated by a firewall, the "inactive" connection could still be closed by the firewall.
It would be useful to "refresh" the connection at SOCKET_TIMEOUT - x time to ensure this doesn't occur.
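In the meantime, a client-side workaround could be sketched like this (hypothetical names, not Hot Rod client API: `ping` stands in for any cheap remote operation, e.g. a get on a known key):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the "refresh at SOCKET_TIMEOUT - x" idea: issue a
// cheap no-op against the server with a safety margin before the timeout
// fires, so the connection never sits idle long enough to be reaped.
public class KeepAliveSketch {
    // Compute how often to ping: the socket timeout minus a safety margin.
    static long refreshPeriodMs(long socketTimeoutMs, long safetyMarginMs) {
        if (safetyMarginMs >= socketTimeoutMs)
            throw new IllegalArgumentException("margin must be smaller than the timeout");
        return socketTimeoutMs - safetyMarginMs;
    }

    // Schedule the periodic ping on the caller's executor.
    static ScheduledFuture<?> schedule(ScheduledExecutorService exec, Runnable ping,
                                       long socketTimeoutMs, long safetyMarginMs) {
        long period = refreshPeriodMs(socketTimeoutMs, safetyMarginMs);
        return exec.scheduleAtFixedRate(ping, period, period, TimeUnit.MILLISECONDS);
    }
}
```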
Stelios
9 years, 8 months
Status update
by Dan Berindei
Hi guys
I'll be missing the meeting today, so here's my update.
I've fixed some small issues:
* ISPN-5246 Re-enable the queue for the JGroups internal thread pool
in the default configuration
* ISPN-5223 NPE in InfinispanCollections.containsAny - partition handling
* ISPN-5141 SingleFileStore.process() should use sequential access
I also looked into the status of the test suite in CI and I created
some new JIRAs:
* JGRP-1916 ConcurrentModificationException in FD_ALL
* ISPN-5254 Server not always stopped properly with the IBM JDK
This week:
* ISPN-5174 Transaction cannot be recommitted after ownership changes
* ISPN-5044 ClusterTopologyManagerTest.testClusterRecoveryAfterSplitAndCoordLeave
* Review Will's LRU/LIRS changes
Cheers
Dan
9 years, 8 months
Asynchronous cache's "void put()" call expectations changed from 6.0.0 to 6.0.1/7.0
by Galder Zamarreño
Hi all,
@Paul, this might be important for WF if using async repl caches (the same, I think, applies to distributed async caches too)
Today I’ve been trying to upgrade the Infinispan version in Hibernate master from 6.0.0.Final to 7.0.0.Beta1. Overall it’s all worked fine, but there’s one test that has started failing.
Essentially, this is a clustered test for a repl async cache (w/ cluster cache loader) where a non-owner cache node does a put() and immediately, on the same cache, calls a get(). The test is failing because the get() does not see the effects of the put(), even though both operations are called on the same cache instance.
According to Dan, this should have been happening since [1] was implemented, but it really started happening in [2], when lock delegation was enabled for replicated caches (EntryWrappingInterceptor.isUsingLockDelegation is now true, whereas in 6.0.0 it was false).
Not sure we set expectations in this regard, but clearly it’s a big change in terms of expectations on when “void put()” completes for async repl caches. I’m not sure how we should handle this, but it definitely needs some discussion, and we should adjust the documentation/javadoc if needed. Can we do something differently?
Independent of how we resolve this, this is once again the result of trying to shoehorn async behaviour into sync APIs. Any async caches (DIST, INV, REPL) should really be accessed exclusively via the AsyncCache API, where you can return quickly and use the future, attaching a listener to it (a bit a la Java 8’s CompletableFuture.map lambda calls) as the signal that the operation has completed; then you have an API and cache mode that make sense and are consistent with how async APIs work.
Right now, a repl async cache’s “void put()” call is not very well defined. Does it return when the message has been put on the network? What impact does it have on the local cache contents?
Also, a very big problem with the change of behaviour is that, if left like that, you are forcing users to code differently with the same “void put()” API depending on the configuration (whether async/sync). As clearly shown by the issue above, this is very confusing. It’s a lot more logical IMO (and I already sent an email on this very same topic [3] back in January) that whether a cache is sync or async should be based purely on the API used, forgetting about the static configuration flag on whether the cache is async or sync.
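To illustrate the point, a toy sketch of the async-API style (backed by a plain ConcurrentHashMap with a simulated asynchronous write; not Infinispan code): the future’s completion, not the method return, is what signals that the write is visible.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy sketch only: an async put API where visibility is tied to the returned
// future, not to the method returning. Not Infinispan's AsyncCache.
public class AsyncCacheSketch<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final ExecutorService exec = Executors.newSingleThreadExecutor();

    // Returns immediately; the write happens on another thread, and the
    // future completes once the write is applied.
    public CompletableFuture<Void> putAsync(K key, V value) {
        return CompletableFuture.runAsync(() -> store.put(key, value), exec);
    }

    public V get(K key) {
        return store.get(key);
    }
}
```

A caller who needs read-your-writes chains on the future (or joins it) instead of assuming a void put() has already taken effect.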
Cheers,
[1] https://issues.jboss.org/browse/ISPN-2772
[2] https://issues.jboss.org/browse/ISPN-3354
[3] http://lists.jboss.org/pipermail/infinispan-dev/2014-January/014448.html
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
9 years, 8 months