Design change in Infinispan Query
by Sanne Grinovero
Hello all,
Currently Infinispan Query is an interceptor registered on the specific Cache instance which has indexing enabled; each such interceptor does everything it needs to do in the sole scope of the cache it was registered in.
If you enable indexing on - for example - 3 different caches, 3 different Hibernate Search engines will be started in the background, and they are all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also to call attention to something that has bothered me for some time, I'd like to evaluate the option of having a single Hibernate Search engine registered in the CacheManager and shared across the indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even so
performance won't be great: instances will lock each other out, or at best write in alternating turns.
B- The search engine isn't particularly "heavy"; still, it would be nice to share some components and internal services.
C- Configuration details which need some care - like injecting a JGroups channel for clustering - need to be done correctly while isolating each instance (so large parts of the configuration would be quite similar but not exactly equal).
D- Incoming messages into a JGroups Receiver need to be routed not only among indexes, but also among Engine instances. This prevents Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- configuration format overhaul: indexing options won't be set in the cache section but in the global section. I'm looking forward to using the schema extensions anyway to provide a better configuration experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName}
3#B we actually shard the index, keeping a physically separate index per cache. This would mean searching on the joint index view but extracting hits from specific indexes to keep track of "which index". I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in different indexes. That would mean rejecting 1# and tweaking the user-defined index name, for example by appending the cache name to the user-defined string; a trivial sketch of such a naming scheme follows.
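To make that last option concrete, here is a purely illustrative Java sketch of how the effective index name could be derived; the naming scheme is an assumption of mine, not an agreed design:

// Illustrative only: derive a per-cache index name so that values of the same
// type stored in different caches end up in physically separate indexes.
public final class IndexNaming {

    private IndexNaming() {
    }

    public static String effectiveIndexName(String userDefinedIndexName, String cacheName) {
        return userDefinedIndexName + "." + cacheName;
    }

    public static void main(String[] args) {
        // The same user-defined index name maps to two distinct indexes:
        System.out.println(effectiveIndexName("com.acme.Book", "books"));   // com.acme.Book.books
        System.out.println(effectiveIndexName("com.acme.Book", "archive")); // com.acme.Book.archive
    }
}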
Any comment?
Cheers,
Sanne
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache but this gets invoked numOwners times - can I make that to be invoked only once cluster wise?"
Developer: "Yes, you can! You have to do that and that..."
What about a "singleton" attribute on the Listener? It would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
NPE in MapReduceTask running in cluster
by Matej Lazar
An NPE occurs while running the CapeDwarf cluster tests; see the stack trace below.
The null comes from MapReduceTask.invokeEverywhere:
13:36:51,053 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.30.248:8080-1) JBAS010281: Started search_capedwarf-test cache from capedwarf container
13:36:51,058 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster
*13:36:51,065 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster, results are {node-b/capedwarf=null}*
13:36:51,067 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
13:36:51,069 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
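To illustrate the suspected failure mode, here is a minimal self-contained sketch (hypothetical names, not the actual Infinispan source): the per-node results map contains a null value for node-b, as in the DEBUG output above, and merging it without a null check throws the NPE.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MergeNpeSketch {

    public static void main(String[] args) {
        // Per-node intermediate results, as returned by the remote map/combine phase;
        // node-b contributed a null instead of an (empty) map.
        Map<String, Map<String, List<Integer>>> perNodeResults = new HashMap<String, Map<String, List<Integer>>>();
        perNodeResults.put("node-b/capedwarf", null);

        // Local reduction merges each node's partial results without a null check.
        Map<String, List<Integer>> merged = new HashMap<String, List<Integer>>();
        for (Map<String, List<Integer>> partial : perNodeResults.values()) {
            merged.putAll(partial); // NullPointerException when partial is null
        }
    }
}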
Any idea?
Thanks,
Matej.
java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.mergeResponse(MapReduceTask.java:530)
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:439)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:328)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:692)
at org.jboss.capedwarf.search.CapedwarfSearchService.listIndexes(CapedwarfSearchService.java:94)
at org.jboss.test.capedwarf.cluster.SearchTestCase.clear(SearchTestCase.java:360)
at org.jboss.test.capedwarf.cluster.SearchTestCase.cleanpOnStart(SearchTestCase.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.jboss.arquillian.junit.Arquillian$6$1.invoke(Arquillian.java:270)
at org.jboss.arquillian.container.test.impl.execution.LocalTestExecuter.execute(LocalTestExecuter.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:115)
at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
at org.jboss.arquillian.container.test.impl.execution.ContainerTestExecuter.execute(ContainerTestExecuter.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.test(EventTestRunnerAdaptor.java:111)
at org.jboss.arquillian.junit.Arquillian$6.evaluate(Arquillian.java:263)
at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:226)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:240)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:185)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:199)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:147)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.jboss.arquillian.junit.container.JUnitTestRunner.execute(JUnitTestRunner.java:65)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.executeTest(ServletTestRunner.java:160)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.execute(ServletTestRunner.java:126)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.doGet(ServletTestRunner.java:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.weld.servlet.ConversationPropagationFilter.doFilter(ConversationPropagationFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.capedwarf.appidentity.GAEFilter.doFilter(GAEFilter.java:57)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:67)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:48)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:165)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:722)
Re: [infinispan-dev] Removing Infinispan dependency on the Hibernate-Infinispan module in 4.x
by Galder Zamarreño
Scott, what do you suggest doing instead then? Without the commands, evictAll invalidation won't work.
Are you suggesting that I revert back to using the cache as a notification bus so that regions are invalidated?
On Feb 8, 2012, at 4:13 PM, Scott Marlow wrote:
> http://lists.jboss.org/pipermail/infinispan-dev/2012-February/010125.html has more context.
>
> Since there are no easy/quick fixes that can be applied at this time to remove the AS7 Infinispan dependency on the Hibernate-Infinispan module, I think we should avoid depending on the service loader way to supply the custom commands (in the Hibernate-Infinispan module), at least until this can be addressed elsewhere.
>
> I propose that the Hibernate-Infinispan second level cache should not use the Service Loader to pass custom commands into Infinispan. If we agree, I'll create a jira for this.
>
> Scott
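For context, the "service loader way" referred to above is the standard java.util.ServiceLoader lookup; a minimal generic sketch of that mechanism, with a hypothetical factory interface rather than the actual Hibernate-Infinispan contract:

import java.util.ServiceLoader;

public final class CommandExtensionLookup {

    // Hypothetical SPI; the real contract lives in the Hibernate-Infinispan module.
    public interface CustomCommandFactory {
        Object createCommand(String commandName);
    }

    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services/<interface FQN> on the classpath
        // and instantiates every implementation registered there.
        for (CustomCommandFactory factory : ServiceLoader.load(CustomCommandFactory.class)) {
            System.out.println("Discovered: " + factory.getClass().getName());
        }
    }
}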
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Testsuite error jiras
by Galder Zamarreño
When you open JIRAs to fix randomly failing tests, e.g. https://issues.jboss.org/browse/ISPN-2102, please make sure the test is disabled too.
This avoids confusion between the tests that have already been identified and the ones that have not.
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Fixing ISPN-2384 (data loss with concurrent activation/passivation)
by Galder Zamarreño
Hi all,
Re: https://issues.jboss.org/browse/ISPN-2384
I've created a unit test in https://github.com/galderz/infinispan/commit/01230d40df6f26720039986916c3...
That was the easy part :). How to fix it is not so clear. In pseudo-code, the race condition happens when:
1. T1. passivate entry X to cache store
2. T2. retrieve X from memory
3. T2. activation interceptor removes X from store
4. T1. evicts X from memory
The end result is that X is gone from both memory and cache store.
I've thought of several ways of fixing it, but I'm not convinced by any:
One way to fix this is by making steps 1 & 4 atomic, and I was hoping to piggyback on the segment lock, but for that to work, step 2 (the data container get() op) would need to wait for this segment lock, which would be detrimental to performance.
Another way would be for activation to happen only if the data was retrieved from the cache store (and not from memory), since removing from the cache store when the value came from memory is rather pointless. The problem with this solution is that the source of the data is not currently shipped around in the interceptor stack. IOW, the activation interceptor doesn't know whether the data came from memory or the cache store. This could potentially be recorded in the cache entry stored in the context, but that requires some refactoring and it could still be vulnerable to this sequence of events:
1. T1. passivate entry X to cache store
2. T2. retrieve X from cache store
3. T2. store X in memory
4. T2. activation interceptor removes X from store
5. T1. evicts X from memory
So, to get around this sequence of events, since the store acquires a segment lock, passivating and evicting X could be done within the segment lock, which would make sure the eviction happens before the storage in memory:
1. T1. acquire segment lock
2. T1. passivate entry X to cache store
3. T2. retrieve X from cache store
4. T1. evicts X from memory
5. T1. releases segment lock
6. T2. acquires segment lock
7. T2. store X in memory
8. T2. activation interceptor removes X from store
9. T2. releases segment lock
I think this could work (a rough sketch follows), but I was wondering whether you can see other potential solutions?
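A minimal sketch of that ordering, with a plain lock standing in for the segment lock and hypothetical store/container interfaces (this is not the actual Infinispan internals):

import java.util.concurrent.locks.ReentrantLock;

public class PassivationOrderingSketch<K, V> {

    interface Store<K, V> { void write(K k, V v); V load(K k); void remove(K k); }
    interface Memory<K, V> { void put(K k, V v); void remove(K k); }

    private final ReentrantLock segmentLock = new ReentrantLock();

    // T1: passivation and eviction happen atomically under the segment lock
    // (steps 1-5 in the sequence above).
    void passivateAndEvict(K key, V value, Store<K, V> store, Memory<K, V> memory) {
        segmentLock.lock();
        try {
            store.write(key, value); // passivate entry X to the cache store
            memory.remove(key);      // evict X from memory
        } finally {
            segmentLock.unlock();
        }
    }

    // T2: the value may have been read from the store before the lock was acquired
    // (step 3 above); storing X in memory and removing it from the store then wait
    // for the segment lock, so they are ordered after T1's passivate/evict pair.
    void activate(K key, V loadedValue, Store<K, V> store, Memory<K, V> memory) {
        segmentLock.lock();
        try {
            memory.put(key, loadedValue); // store X in memory
            store.remove(key);            // activation removes X from the store
        } finally {
            segmentLock.unlock();
        }
    }
}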
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
Who's the lock owner in REPL? Ordering of CAS operations.
by Sanne Grinovero
Hi all,
after having a look and discussing some details with Manik about the missing CAS operation for ReplaceCommand-based APIs, we agreed that the only way to have each node apply the same value - and have each node resolve the same condition, hence making a consistent choice - is to mandate the compare on the lock owner and from there replicate the decision to its peers.
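To make the scenario concrete, this is the conditional operation in question - the ConcurrentMap-style replace(key, expectedValue, newValue) exposed by the Cache API; the point of the design above is that this compare must be evaluated once, on the lock owner, rather than re-evaluated independently on every node (the example below only shows the user-facing call, not the lock-owner routing):

import org.infinispan.Cache;

public class ConditionalReplaceExample {

    public static boolean bumpCounter(Cache<String, Integer> cache) {
        cache.putIfAbsent("counter", 0);
        // If every node re-ran this compare against its own local copy, the nodes
        // could disagree on whether the replace succeeded; evaluating it once on the
        // lock owner and replicating only the outcome keeps them consistent.
        return cache.replace("counter", 0, 1);
    }
}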
Looks good, as we were mostly discussing DIST; how should I deal with REPL?
I actually don't know how REPL manages locks and consistency; I always assumed it to be the same as DIST, but that doesn't seem to be the case. Does this imply that all the "single lock owner" improvements apply to DIST only?
Is REPL acquiring a lock on all nodes? If that's the case, then I don't
know how to fix this without being able to rely on eventual consistency.
I'm getting the idea that the only way to achieve scalable consistency is to always write to a single point for each entry (the lock owner), which then has the responsibility to replicate the writes to the other nodes; this would be nice and easy to handle, as all write operations on the same key would then have a single node defining their order (for that specific key); this needs sequential RPCs rather than parallel ones, but you don't need locks at all.
Cheers,
Sanne