Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on each specific Cache instance that has indexing enabled; each such interceptor does everything it needs to do solely within the scope of the cache it was registered in.
If you enable indexing on - for example - 3 different caches, there will be 3 different Hibernate Search engines started in the background, and they are all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also because this has bothered me for some time, I'd like to evaluate the option of having a single Hibernate Search engine registered in the CacheManager and shared across the indexed caches.
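To make the difference concrete, here is a rough lookup sketch (illustrative only: `cache` and `cacheManager` are assumed to be in scope, and whether the engine handle is exactly SearchFactoryIntegrator in the component registry is an assumption, not the final design):

// Today: each indexed cache owns its own engine, looked up per cache.
SearchFactoryIntegrator perCacheEngine = cache.getAdvancedCache()
      .getComponentRegistry()
      .getComponent(SearchFactoryIntegrator.class);

// Proposed: one engine registered once in the CacheManager and shared by all indexed caches.
SearchFactoryIntegrator sharedEngine = cacheManager
      .getGlobalComponentRegistry()
      .getComponent(SearchFactoryIntegrator.class);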
Current design limitations:
A- If they are all configured to use the same base directory to store indexes, and happen to have same-named indexes, they'll share the index without being aware of each other. This is going to break unless the user configures some tricky parameters, and even then performance won't be great: instances will lock each other out, or at best write in alternating turns.
B- The search engine isn't particularly "heavy", but it would still be nice to share some components and internal services.
C- Configuration details which need some care - like injecting a JGroups channel for clustering - need to be handled separately for each isolated instance (so large parts of the configuration would be quite similar but not identical).
D- Incoming messages into a JGroups Receiver need to be routed not only among indexes, but also among engine instances. This prevents Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is stored in different (indexed) caches, they'll share the same index. Is that a problem? I'm tempted to consider this a good thing, but I wonder if it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set in the cache section but in the global section. I'm looking forward to using the schema extensions anyway to provide a better configuration experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be able to figure out which cache the value should be loaded from.
3#A we could encode the cache name in the index as part of the identifier: {PK,cacheName} (a sketch follows below)
3#B we actually shard the index, keeping a physically separate index per cache. This would mean searching on the joint index view but extracting hits from specific indexes to keep track of "which index". I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in different indexes. That would mean rejecting #1 and messing with the user-defined index name, for example by appending the cache name to the user-defined string.
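For option 3#A, a minimal sketch of what such a composite identifier could look like (purely illustrative, not an existing class):

import java.io.Serializable;

// Stores both the entry key and the cache name, so a search hit can be traced
// back to the cache the value should be loaded from.
public final class IndexedEntryId implements Serializable {
   private final Object primaryKey;
   private final String cacheName;

   public IndexedEntryId(Object primaryKey, String cacheName) {
      this.primaryKey = primaryKey;
      this.cacheName = cacheName;
   }

   public Object getPrimaryKey() { return primaryKey; }

   public String getCacheName() { return cacheName; }

   // equals/hashCode over both fields would be needed for real use
}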
Any comment?
Cheers,
Sanne
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache, but it gets invoked numOwners times - can I make it be invoked only once cluster-wide?"
Developer: "Yes, you can! You have to do this and that..."
What about a "singleton" attribute on the listener? It would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
NPE in MapReduceTask running in cluster
by Matej Lazar
An NPE occurs while running the CapeDwarf cluster tests; see the stack trace below.
The null comes from MapReduceTask.invokeEverywhere.
13:36:51,053 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.30.248:8080-1) JBAS010281: Started search_capedwarf-test cache from capedwarf container
13:36:51,058 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster
*13:36:51,065 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster, results are {node-b/capedwarf=null}*
13:36:51,067 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
13:36:51,069 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
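For reference, a minimal sketch of the calling pattern that hits the NPE (the mapper/reducer bodies are illustrative, not the actual CapeDwarf code; in real code they should be Serializable classes):

// Simplified sketch of what CapedwarfSearchService.listIndexes does with MapReduceTask.
Cache<String, String> cache = cacheManager.getCache("search_capedwarf-test");
MapReduceTask<String, String, String, Integer> task =
      new MapReduceTask<String, String, String, Integer>(cache);
Map<String, Integer> result = task
      .mappedWith(new Mapper<String, String, String, Integer>() {
         @Override
         public void map(String key, String value, Collector<String, Integer> collector) {
            collector.emit(key, 1);
         }
      })
      .reducedWith(new Reducer<String, Integer>() {
         @Override
         public Integer reduce(String reducedKey, Iterator<Integer> iter) {
            int sum = 0;
            while (iter.hasNext()) sum += iter.next();
            return sum;
         }
      })
      // execute() sends the MapCombineCommand across the cluster; the remote node answers
      // with a null result ({node-b/capedwarf=null}) and mergeResponse() then dereferences it.
      .execute();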
Any idea?
Thanks,
Matej.
java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.mergeResponse(MapReduceTask.java:530)
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:439)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:328)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:692)
at org.jboss.capedwarf.search.CapedwarfSearchService.listIndexes(CapedwarfSearchService.java:94)
at org.jboss.test.capedwarf.cluster.SearchTestCase.clear(SearchTestCase.java:360)
at org.jboss.test.capedwarf.cluster.SearchTestCase.cleanpOnStart(SearchTestCase.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.jboss.arquillian.junit.Arquillian$6$1.invoke(Arquillian.java:270)
at org.jboss.arquillian.container.test.impl.execution.LocalTestExecuter.execute(LocalTestExecuter.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:115)
at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
at org.jboss.arquillian.container.test.impl.execution.ContainerTestExecuter.execute(ContainerTestExecuter.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.test(EventTestRunnerAdaptor.java:111)
at org.jboss.arquillian.junit.Arquillian$6.evaluate(Arquillian.java:263)
at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:226)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:240)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:185)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:199)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:147)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.jboss.arquillian.junit.container.JUnitTestRunner.execute(JUnitTestRunner.java:65)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.executeTest(ServletTestRunner.java:160)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.execute(ServletTestRunner.java:126)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.doGet(ServletTestRunner.java:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.weld.servlet.ConversationPropagationFilter.doFilter(ConversationPropagationFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.capedwarf.appidentity.GAEFilter.doFilter(GAEFilter.java:57)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:67)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:48)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:165)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:722)
Re: [infinispan-dev] Removing Infinispan dependency on the Hibernate-Infinispan module in 4.x
by Galder Zamarreño
Scott, what do you suggest doing instead then? Without the commands, evictAll invalidation won't work.
Are you suggesting that I revert to using the cache as a notification bus so that regions are invalidated?
On Feb 8, 2012, at 4:13 PM, Scott Marlow wrote:
> http://lists.jboss.org/pipermail/infinispan-dev/2012-February/010125.html has more context.
>
> Since there are no easy/quick fixes that can be applied at this time, to remove the AS7 Infinispan dependency on the Hibernate-Infinispan module, I think we should avoid depending on the service loader way to supply the custom commands (in the Hibernate-Infinispan module), at least until this can be addressed elsewhere.
>
> I propose that the Hibernate-Infinispan second level cache should not use the Service Loader to pass custom commands into Infinispan. If we agree, I'll create a jira for this.
>
> Scott
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Separate ExecutorService for map/reduce tasks?
by Vladimir Blagojevic
Hi,
Although https://issues.jboss.org/browse/ISPN-2284 is slated for 6.0, I would like to see if there is a possibility of finishing it for 5.2. I have already done most of the parallel execution work this week and last [1].
However, this change is not limited to the map/reduce package, as we might want to have a separate executor for map/reduce execution on each node. These changes affect the global configuration and are not confined to the map/reduce packages. Or should we simply use the transport executor for these tasks for now and, should the need arise, introduce a separate executor in a future release?
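To illustrate the two directions, a rough configuration sketch (the mapReduceExecutor() method is hypothetical, it does not exist in the current GlobalConfigurationBuilder):

import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.executors.DefaultExecutorFactory;

// Option A (today): let map/reduce commands ride on the existing transport executor.
GlobalConfiguration today = new GlobalConfigurationBuilder()
      .asyncTransportExecutor().factory(new DefaultExecutorFactory())
      .build();

// Option B (proposed, hypothetical API): a dedicated executor for map/reduce tasks.
// GlobalConfiguration proposed = new GlobalConfigurationBuilder()
//       .mapReduceExecutor().factory(new DefaultExecutorFactory())
//       .build();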
Regards,
Vladimir
[1] https://github.com/vblagoje/infinispan/tree/t_2284
next infinispan release
by Mircea Markus
Hi,
The next Infinispan release (5.2.0.CR1) is scheduled for Friday 21 Dec.
We still have 19 issues (blocker + critical) that must make it in for the CR [1]. Besides this, we are still waiting for input from QA - hopefully things will stabilise at this stage.
Realistically speaking, we won't be able to solve all 19 issues by the end of this week, so I've moved the release date to Friday, Jan 4th.
Also, please review the JIRAs [1] assigned to you and tackle the blocker/critical ones first.
[1] http://goo.gl/iV9MQ
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
MFC/UFC credits in default config
by Radim Vansa
Hi,
recently I synchronized our JGroups configuration with the default one shipped with Infinispan (core/src/main/resources/jgroups-(tcp|udp).xml), and it turned out that 200k credits in UFC/MFC (I keep the two values in sync) is not enough even for our smallest resilience test (killing one of four nodes). State transfer was often blocked waiting for more credits, which meant it did not complete within the time limit.
Therefore, I'd like to suggest increasing the amount of credits in the default configuration as well, because we simply cannot use the lower setting and it's preferable to keep the configurations as close as possible. The only settings we need to keep different are the thread pool sizes and the addresses and ports.
Radim
-----------------------------------------------------------
Radim Vansa
Quality Assurance Engineer
JBoss Datagrid
tel. +420532294559 ext. 62559
Red Hat Czech, s.r.o.
Brno, Purkyňova 99/71, PSČ 612 45
Czech Republic
HotRod server and Rolling Upgrades
by Tristan Tarrant
So,
I thought we had everything ready to go for HotRod rolling upgrades:
* have HotRod server full of data (the "source")
* configure a new HotRod server (the "target") with a RemoteCacheStore
pointing to the "source" (using "rawValues")
* clients switch over to the "target" server which on cache misses
should seamlessly fetch entries from the "source"
* issue a "dump keys" on the source
* fetch the "dumped keys" from the target
* disable the RCS on the target and switch off the "source" for good
* PROFIT$$$
Unfortunately, there is a teeny tiny flaw in the plan: entries in a HotRod-managed cache are ByteArrayKey/CacheValue pairs, and when the "target" reads from the RCS they get unwrapped into their byte[] equivalents.
The solutions we have are:
1. Have a special marshaller placed on the RemoteCacheStore's RemoteCacheManager which rewraps the entries. Unfortunately, marshallers can't distinguish between keys and values, so this would probably require some horrid ThreadLocal trickery.
2. Add a new option to RemoteCacheStore so that it rewraps entries in the ByteArrayKey/CacheValue format. Unfortunately the CacheValue class is part of server-core, but the dependency could be made optional, and in the context of the Rolling Upgrade scenario it is a non-issue, since it will be on the classpath.
3. Introduce a new MigrationRemoteCacheStore which does the same as the above, but without changing RCS itself.
My personal favourite is number 2, but I trust your better judgement.
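A rough sketch of what options 2/3 could look like (class and method names here are illustrative; in particular the (data, version) constructor of CacheValue is an assumption, not verified against server-core):

// Illustrative sketch only - not the real 5.2 API surface.
public class MigrationRemoteCacheStore extends RemoteCacheStore {

   // Rewrap a raw byte[] key read from the "source" into the form the HotRod cache expects.
   protected Object rewrapKey(Object key) {
      return key instanceof byte[] ? new ByteArrayKey((byte[]) key) : key;
   }

   // Rewrap a raw byte[] value; CacheValue lives in server-core, hence the optional dependency.
   protected Object rewrapValue(Object value) {
      return value instanceof byte[] ? new CacheValue((byte[]) value, 1L) : value;
   }
}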
I think these are merely workarounds, and we should have a better way for "entry wrappers" (such as the cache servers) to "localize" the entries for their own particular needs. I also believe we need a better way to attach metadata to entries in a portable way, so that we don't need these value wrappers.
Tristan