Infinispan embedded off-heap cache
by yavuz gokirmak
Hi all,
Is it possible to use Infinispan as an embedded off-heap cache?
As far as I understand, this is not implemented yet.
If this is the case, we are planning to put effort into developing an
off-heap embedded cache.
I would really like to hear your advice.
best regards
10 years, 9 months
Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on the
specific Cache instance which has indexing enabled; each such
interceptor does all its work in the sole scope of the cache it was
registered in.
If you enable indexing - for example - on 3 different caches, there
will be 3 different Hibernate Search engines started in background,
and they are all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also to
call attention to something that has bothered me for some time, I'd
like to evaluate the option of having a single Hibernate Search engine
registered in the CacheManager and shared across indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even then
performance won't be great: instances will lock each other out, or at
best write in alternating turns.
B- The search engine isn't particularly "heavy"; still, it would be
nice to share some components and internal services.
C- Configuration details which need some care - like injecting a
JGroups channel for clustering - need to be done while properly
isolating each instance (so large parts of the configuration would be
quite similar but not totally equal).
D- Incoming messages into a JGroups Receiver need to be routed not
only among indexes, but also among Engine instances. This prevents
Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set on
the cache section but in the global section. I'm looking forward to
using the schema extensions anyway to provide a better configuration
experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName}
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index".
I think we can do that, but it's definitely tricky.
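For illustration, option 3#A would boil down to a composite identifier
along these lines (IndexedKey is an invented name for this sketch, not
existing API):

import java.io.Serializable;

// Hypothetical composite identifier for option 3#A: the cache name is
// stored in the index next to the primary key, so a search hit can be
// resolved back to the cache that owns the value.
public final class IndexedKey implements Serializable {
   private final String cacheName;
   private final Object primaryKey;

   public IndexedKey(String cacheName, Object primaryKey) {
      this.cacheName = cacheName;
      this.primaryKey = primaryKey;
   }

   public String cacheName()  { return cacheName; }
   public Object primaryKey() { return primaryKey; }

   // Serialized form as stored in the index, e.g. "myCache|42"
   @Override
   public String toString() {
      return cacheName + "|" + primaryKey;
   }

   @Override
   public boolean equals(Object o) {
      if (this == o) return true;
      if (!(o instanceof IndexedKey)) return false;
      IndexedKey that = (IndexedKey) o;
      return cacheName.equals(that.cacheName)
            && primaryKey.equals(that.primaryKey);
   }

   @Override
   public int hashCode() {
      return 31 * cacheName.hashCode() + primaryKey.hashCode();
   }
}

At query time the engine would parse the cacheName part of each hit and
ask the CacheManager for that cache before loading the value.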
It's likely easier to keep indexed values from different caches in
different indexes. that would mean to reject #1 and mess with the user
defined index name, to add for example the cache name to the user
defined string.
Any comment?
Cheers,
Sanne
10 years, 10 months
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache but this gets invoked numOwners times - can I make that to be invoked only once cluster wise?"
Developer: "Yes, you can! You have to do that and that..."
What about a "singleton" attribute on the Listener? Would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
11 years, 4 months
NPE in MapReduceTask running in cluster
by Matej Lazar
An NPE occurs while running the CapeDwarf cluster tests; see the stack trace below.
The null comes from MapReduceTask.invokeEverywhere.
13:36:51,053 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.30.248:8080-1) JBAS010281: Started search_capedwarf-test cache from capedwarf container
13:36:51,058 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster
*13:36:51,065 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster, results are {node-b/capedwarf=null}*
13:36:51,067 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
13:36:51,069 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
Any ideas?
Thanks,
Matej.
java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.mergeResponse(MapReduceTask.java:530)
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:439)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:328)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:692)
at org.jboss.capedwarf.search.CapedwarfSearchService.listIndexes(CapedwarfSearchService.java:94)
at org.jboss.test.capedwarf.cluster.SearchTestCase.clear(SearchTestCase.java:360)
at org.jboss.test.capedwarf.cluster.SearchTestCase.cleanpOnStart(SearchTestCase.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.jboss.arquillian.junit.Arquillian$6$1.invoke(Arquillian.java:270)
at org.jboss.arquillian.container.test.impl.execution.LocalTestExecuter.execute(LocalTestExecuter.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:115)
at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
at org.jboss.arquillian.container.test.impl.execution.ContainerTestExecuter.execute(ContainerTestExecuter.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.test(EventTestRunnerAdaptor.java:111)
at org.jboss.arquillian.junit.Arquillian$6.evaluate(Arquillian.java:263)
at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:226)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:240)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:185)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:199)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:147)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.jboss.arquillian.junit.container.JUnitTestRunner.execute(JUnitTestRunner.java:65)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.executeTest(ServletTestRunner.java:160)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.execute(ServletTestRunner.java:126)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.doGet(ServletTestRunner.java:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.weld.servlet.ConversationPropagationFilter.doFilter(ConversationPropagationFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.capedwarf.appidentity.GAEFilter.doFilter(GAEFilter.java:57)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:67)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:48)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:165)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:722)
11 years, 6 months
Re: [infinispan-dev] Removing Infinispan dependency on the Hibernate-Infinispan module in 4.x
by Galder Zamarreño
Scott, what do you suggest doing instead then? Without the commands, evictAll invalidation won't work.
Are you suggesting that I revert to using the cache as a notification bus so that regions are invalidated?
On Feb 8, 2012, at 4:13 PM, Scott Marlow wrote:
> http://lists.jboss.org/pipermail/infinispan-dev/2012-February/010125.html has more context.
>
> Since there are no easy/quick fixes that can be applied at this time to remove the AS7 Infinispan dependency on the Hibernate-Infinispan module, I think we should avoid depending on the service loader approach to supply the custom commands (in the Hibernate-Infinispan module), at least until this can be addressed elsewhere.
>
> I propose that the Hibernate-Infinispan second level cache should not use the Service Loader to pass custom commands into Infinispan. If we agree, I'll create a jira for this.
>
> Scott
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
11 years, 6 months
distributed fork-join executor prototype
by matt hoffman
Hey guys,
I've been working on a prototype of integrating Infinispan into our app.
We do a lot of distributed processing across a small cluster, so I've
played with Infinispan's existing distributed execution framework (which is
nice), as well as using Infinispan alongside a normal message queue to
distribute tasks. But I've also put together a prototype of a new
distributed execution framework using fork-join pools that you all might be
interested in. If it sounds like something that would be worthwhile for
Infinispan, I can raise a Jira and submit a pull request with what I have
so far. I'd need to get the CA and company policy stuff finalized; that
might take a couple days. Meanwhile, in case there is any interest, I've
described the approach I've taken below.
First, a little background:
A while back I worked on a side project that integrated a distributed
work-stealing algorithm into the standard JDK fork-join queue. It used
JGroups for communication, because it was quick and easy for prototyping.
So this week I thought I'd take a stab at porting that over to Infinispan.
The algorithm I came up with for Infinispan is a bit less of a
work-stealing algorithm, to take advantage of Infinispan's built-in
distribution capabilities, but I think it's still fairly efficient.
My basic approach was to take in a cache in the constructor, much like the
existing distributed executor, and then create a parallel, DIST-mode cache
that uses the same hash & grouping configuration as the original cache.
That new parallel cache is the "task cache", and we use that to distribute
available tasks across the cluster. It's a distributed cache so that tasks
are partitioned across a large cluster, and it uses the hashing config of
the original cache and a KeyAffinityService to attempt to distribute the
tasks to the same nodes that contain the data being worked on. Nodes use
cache listeners to be notified when there is new work available, and the
atomic replace() to "check out" tasks for execution and "check in" the
results.
The basic algorithm is something like this:
For a refresher, a normal FJ pool has a fork() method that takes in a task,
and then places that task on an internal queue (actually, one of several
queues). When threads are idle, they look to the nearest work queue for
work. If that work queue does not have work, they "steal" work from another
thread's queue. So in the best case, tasks remain on the same thread as
the task that spawned them, so tasks that process the same data as their
parents may still have that data in the CPU's cache, etc. There's more to
it than that, but that's the basic idea.
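(To make the refresher concrete, a plain JDK fork/join task looks like
the sketch below; this is standard java.util.concurrent usage, nothing
Infinispan-specific.)

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Standard JDK fork/join: fork() pushes the subtask onto the current
// worker's queue; idle workers steal work from other workers' queues.
public class SumTask extends RecursiveTask<Long> {
   private final long[] values;
   private final int from, to;

   public SumTask(long[] values, int from, int to) {
      this.values = values;
      this.from = from;
      this.to = to;
   }

   @Override
   protected Long compute() {
      if (to - from <= 1000) {                 // small enough: just compute
         long sum = 0;
         for (int i = from; i < to; i++)
            sum += values[i];
         return sum;
      }
      int mid = (from + to) >>> 1;
      SumTask left = new SumTask(values, from, mid);
      left.fork();                             // queued on this worker's deque
      SumTask right = new SumTask(values, mid, to);
      return right.compute() + left.join();    // compute one half inline
   }

   public static long sum(long[] values) {
      return new ForkJoinPool().invoke(new SumTask(values, 0, values.length));
   }
}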
This distributed algorithm just adds an extra layer on top for tasks that
are marked "distributable" (by extending DistributedFJTask instead of the
normal ForkJoinTask). When you call fork() with a DistributedFJTask, it
first checks to see if the local pool's work queue is empty. If so, we just
go ahead and submit it locally; there's no reason to distribute it. If not,
we put the task in the task cache, and let Infinispan distribute it. When a
node has no more work to do in its internal fork-join queues, it looks at
the task cache and tries to pull work from there.
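In code, the decision described above is roughly the following (the names
are from my prototype and purely illustrative, not final API):

import java.util.UUID;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import org.infinispan.Cache;

// Sketch of the distributable fork() path: run locally while the local
// pool is hungry, otherwise publish the task into the DIST-mode task
// cache and let Infinispan's hashing place it on some node.
public final class DistributedForkSketch {

   public static <V> void fork(ForkJoinTask<V> task,
                               ForkJoinPool localPool,
                               Cache<String, ForkJoinTask<V>> taskCache) {
      if (!localPool.hasQueuedSubmissions()
            && localPool.getQueuedTaskCount() == 0) {
         // Local queues are empty: no reason to distribute, keep locality.
         localPool.submit(task);
      } else {
         // Idle nodes listen on this cache, claim entries with the atomic
         // replace(), and check results back in. (Task serialization is
         // glossed over here.)
         taskCache.put(UUID.randomUUID().toString(), task);
      }
   }
}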
So, it isn't really a "work-stealing" algorithm, per se; the distributable
tasks are being distributed eagerly using Infinispan's normal cache
distribution. But I'm hoping that doing that also makes it easier to handle
node failure, since nodes collectively share a common picture of the work
to be done.
This approach required one change to the actual FJ classes themselves
(in org.infinispan.util.concurrent.jdk8backported).
That's probably the most controversial change. I had to make the original
ForkJoinTask's fork() method non-final in order to extend it cleanly.
There's probably a way around that, but that's the cleanest option I have
thought of thus far.
And lastly, it's not done yet: basic task distribution is working, but I
haven't tackled failover to any real extent yet. The biggest questions,
though, are around what to do with the existing distributed execution
interfaces. For example, DistributedTask has a getCallable() method because
it assumes it's wrapping a Callable. But ForkJoinTasks don't extend
Callable. I could put in a shim to wrap the DistributedFJTasks into
Callables for the sake of that method, but I don't know if it's worth it.
Similarly, the DistributedExecutorService interface exposes a lot of
submit-to-specific-address or submit-to-all-addresses methods, which are an
odd fit here since tasks are distributed via their own cache. Even if I
used a KeyAffinityService to target the task to the given Address, it might
get picked up by another node that shares that same hash. But I can add
a direct-to-single-Address capability if that seems worthwhile.
Alternately, I can just use entirely different interfaces
(DistributedFJExecutorService, DistributedFJTask?).
Thoughts? Concerns? Glaring issues?
11 years, 7 months
[Discussion] TimeService implementation
by Pedro Ruivo
Hi,
The TimeService interface is going to replace all the System.nanoTime()
and System.currentTimeMillis() calls in the code, and it will allow
replacing the implementation in the test suite in order to provide better
control over time-dependent code (if needed).
The objective of this email is to discuss a possible implementation and
interface that will cover everybody's needs (or at least try to).
Later I'll create a JIRA and start the implementation (unless someone
really wants to implement it :P)
* Interface:
//return the current time in nanoseconds and milliseconds respectively
long nowNanos()
long nowMillis()
//return the duration between now() and the startNanos/Millis
long durationNanos(long startNanos)
long durationMillis(long startMillis)
//return the duration between startNanos/startMillis and endNanos/endMillis
long durationNanos(long startNanos, long endNanos)
long durationMillis(long startMillis, long endMillis)
Any other methods? Maybe adding a TimeUnit parameter instead of duplicated
methods (e.g. now(TimeUnit))? Maybe some convert() with double
precision, since TimeUnit.convert(...) usually loses precision?
(I need double precision for the Extended Statistics, but I can put these
converters in my classes)
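For concreteness, here's a sketch of the interface above together with a
trivial default implementation (DefaultTimeService is just an
illustrative name):

// Proposed interface as listed above; the default implementation simply
// delegates to the System clock, and tests could plug in a controllable one.
public interface TimeService {
   long nowNanos();
   long nowMillis();

   long durationNanos(long startNanos);
   long durationMillis(long startMillis);

   long durationNanos(long startNanos, long endNanos);
   long durationMillis(long startMillis, long endMillis);
}

class DefaultTimeService implements TimeService {
   public long nowNanos()  { return System.nanoTime(); }
   public long nowMillis() { return System.currentTimeMillis(); }

   public long durationNanos(long startNanos) {
      return nowNanos() - startNanos;
   }
   public long durationMillis(long startMillis) {
      return nowMillis() - startMillis;
   }

   public long durationNanos(long startNanos, long endNanos) {
      return endNanos - startNanos;
   }
   public long durationMillis(long startMillis, long endMillis) {
      return endMillis - startMillis;
   }
}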
* Scope:
Should the TimeService be cache-scoped or global? My vote is per
cache...
* Configuration
Should we provide a configuration option to pass in an implementation of
the TimeService interface? If yes, we may need to change the methods above
to return some interface (e.g. TimeStamp) in order to support almost
anything.
Any other subject? Thoughts?
If I was not clear let me know...
Cheers,
Pedro Ruivo
11 years, 7 months
Infinispan 5.2 cartridge for OpenShift
by Andy Goldstein
Hi all,
Martin Gencur asked me to post a note to this list letting you know about the Infinispan 5.2 cartridge I worked on for OpenShift. In short, the cartridge allows you to create one or more gears with Infinispan running in them. If you have multiple gears, they will find each other and form a cluster. You can provide an infinispan.xml to set up your caches and anything else you can configure via the XML file.
As of now, the Infinispan cartridge is only developed in the v1 cartridge format for OpenShift. Additionally, there are some "hacks" you have to do (at least against OpenShift Enterprise 1.1) to enable users to create standalone Infinispan applications in OpenShift (see https://github.com/openshift/origin-community-cartridges/blob/master/open...).
In the future, we'll look to update the cartridge so it's in the easier-to-maintain v2 cartridge format. I'm sure there is also additional work that could go into the cartridge to make it more robust, but for right now, it's out there and has worked in my test environment. If anyone wants to try it out and provide feedback, I'd love to hear from you.
Thanks,
Andy
11 years, 7 months
TeamCity and automatic pull request build
by Mircea Markus
As you might have noticed, the "GitHub integration" plugin has been set up in CI. It posts notifications on the pull request page about the status of the build.
A few words on how it works:
- whenever a pull request is made, it triggers a build of the merge between that pull request and master. That should be the equivalent of what the integrate_pull_request script does, so it's quite reliable for integration.
- when you update the pull request, the build is triggered again, based on the new merge result
- when you update master, all the pending pull requests are merged again and rebuilt.
Ideally we should keep a small number of pull requests pending, in order to reduce the number of rebuilds.
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
11 years, 7 months