Infinispan embedded off-heap cache
by yavuz gokirmak
Hi all,
Is it possible to use Infinispan as an embedded off-heap cache?
As I understand it, this is not implemented yet.
If that is the case, we are planning to put effort into developing an
off-heap embedded cache.
I would really like to hear your advice,
best regards
10 years, 9 months
Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on the
specific Cache instance which has indexing enabled; each such
interceptor does everything it needs to do solely within the scope of
the cache it was registered in.
If you enable indexing - for example - on 3 different caches, there
will be 3 different Hibernate Search engines started in the background,
and they are all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also to
call attention to something that has bothered me for some time, I'd
like to evaluate the option of having a single Hibernate Search engine
registered in the CacheManager and shared across the indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even so
performance won't be great: instances will lock each other out, or at
best write in alternate turns.
B- The search engine isn't particularly "heavy"; still, it would be
nice to share some components and internal services.
C- Configuration details which need some care - like injecting a
JGroups channel for clustering - need to be handled while properly
isolating each instance (so large parts of the configuration would be
quite similar, but not identical).
D- Incoming messages into a JGroups Receiver need to be routed not
only among indexes, but also among Engine instances. This prevents
Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- configuration format overhaul: indexing options won't be set in
the cache section but in the global section. I'm looking forward to
using the schema extensions anyway to provide a better configuration
experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName} (a rough sketch of this idea follows below)
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index".
I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in
different indexes. That would mean rejecting 1# and altering the user-defined
index name, for example by appending the cache name to the user-defined
string.
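To make 3#A a bit more concrete, something vaguely like the following is
what I have in mind - purely an illustrative sketch, none of these class
names exist in Infinispan Query today:

// Hypothetical illustration of option 3#A: the index stores {PK, cacheName},
// so when a hit is found we know which cache to load the value from.
import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;

public class HitResolution {

   static final class IndexedId {
      final Object primaryKey;
      final String cacheName;

      IndexedId(Object primaryKey, String cacheName) {
         this.primaryKey = primaryKey;
         this.cacheName = cacheName;
      }
   }

   // On a search hit, resolve the value against whichever cache the identifier points at.
   static Object load(EmbeddedCacheManager cacheManager, IndexedId hit) {
      Cache<Object, Object> cache = cacheManager.getCache(hit.cacheName);
      return cache.get(hit.primaryKey);
   }
}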
Any comment?
Cheers,
Sanne
10 years, 10 months
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache but this gets invoked numOwners times - can I make that to be invoked only once cluster wise?"
Developer: "Yes, you can! You have to do that and that..."
What about a "singleton" attribute on the Listener? Would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
11 years, 4 months
NPE in MapReduceTask running in cluster
by Matej Lazar
An NPE occurs while running the CapeDwarf cluster tests; see the stack trace below.
The null comes from MapReduceTask.invokeEverywhere.
13:36:51,053 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.30.248:8080-1) JBAS010281: Started search_capedwarf-test cache from capedwarf container
13:36:51,058 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster
*13:36:51,065 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster, results are {node-b/capedwarf=null}*
13:36:51,067 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
13:36:51,069 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
Any idea?
Thanks,
Matej.
java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.mergeResponse(MapReduceTask.java:530)
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:439)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:328)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:692)
at org.jboss.capedwarf.search.CapedwarfSearchService.listIndexes(CapedwarfSearchService.java:94)
at org.jboss.test.capedwarf.cluster.SearchTestCase.clear(SearchTestCase.java:360)
at org.jboss.test.capedwarf.cluster.SearchTestCase.cleanpOnStart(SearchTestCase.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.jboss.arquillian.junit.Arquillian$6$1.invoke(Arquillian.java:270)
at org.jboss.arquillian.container.test.impl.execution.LocalTestExecuter.execute(LocalTestExecuter.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:115)
at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
at org.jboss.arquillian.container.test.impl.execution.ContainerTestExecuter.execute(ContainerTestExecuter.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.test(EventTestRunnerAdaptor.java:111)
at org.jboss.arquillian.junit.Arquillian$6.evaluate(Arquillian.java:263)
at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:226)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:240)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:185)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:199)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:147)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.jboss.arquillian.junit.container.JUnitTestRunner.execute(JUnitTestRunner.java:65)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.executeTest(ServletTestRunner.java:160)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.execute(ServletTestRunner.java:126)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.doGet(ServletTestRunner.java:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.weld.servlet.ConversationPropagationFilter.doFilter(ConversationPropagationFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.capedwarf.appidentity.GAEFilter.doFilter(GAEFilter.java:57)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:67)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:48)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:165)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:722)
11 years, 6 months
Re: [infinispan-dev] Removing Infinispan dependency on the Hibernate-Infinispan module in 4.x
by Galder Zamarreño
Scott, what do you suggest doing instead then? Without the commands, evictAll invalidation won't work.
Are you suggesting that I revert back to using the cache as a notification bus so that regions are invalidated?
On Feb 8, 2012, at 4:13 PM, Scott Marlow wrote:
> http://lists.jboss.org/pipermail/infinispan-dev/2012-February/010125.html has more context.
>
> Since there are no easy/quick fixes that can be applied at this time, to remove the AS7 Infinispan dependency on the Hibernate-Infinispan module, I think we should avoid depending on the service loader way to supply the custom commands (in the Hibernate-Infinispan module), at least until this can be addressed elsewhere.
>
> I propose that the Hibernate-Infinispan second level cache should not use the Service Loader to pass custom commands into Infinispan. If we agree, I'll create a jira for this.
>
> Scott
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
11 years, 7 months
distributed fork-join executor prototype
by matt hoffman
Hey guys,
I've been working on a prototype of integrating Infinispan into our app.
We do a lot of distributed processing across a small cluster, so I've
played with Infinispan's existing distributed execution framework (which is
nice), as well as using Infinispan alongside a normal message queue to
distribute tasks. But I've also put together a prototype of a new
distributed execution framework using fork-join pools that you all might be
interested in. If it sounds like something that would be worthwhile for
Infinispan, I can raise a Jira and submit a pull request with what I have
so far. I'd need to get the CA and company policy stuff finalized; that
might take a couple days. Meanwhile, in case there is any interest, I've
described the approach I've taken below.
First, a little background:
A while back I worked on a side project that integrated a distributed
work-stealing algorithm into the standard JDK fork-join queue. It used
JGroups for communication, because it was quick and easy for prototyping.
So this week I thought I'd take a stab at porting that over to Infinispan.
The algorithm I came up with for Infinispan is a bit less of a
work-stealing algorithm, to take advantage of Infinispan's built-in
distribution capabilities, but I think it's still fairly efficient.
My basic approach was to take in a cache in the constructor, much like the
existing distributed executor, and then create a parallel, DIST-mode cache
that uses the same hash & grouping configuration as the original cache.
That new parallel cache is the "task cache", and we use that to distribute
available tasks across the cluster. It's a distributed cache so that tasks
are partitioned across a large cluster, and it uses the hashing config of
the original cache and a KeyAffinityService to attempt to distribute the
tasks to the same nodes that contain the data being worked on. Nodes use
cache listeners to be notified when new work is available, and the
atomic replace() to "check out" tasks for execution and "check in" the
results.
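To give a rough idea of the check-out step, it boils down to something like
this (the names and the string-based state are purely illustrative, not the
actual prototype code):

import org.infinispan.Cache;

public class TaskCheckout {

   // Try to move a task from AVAILABLE to RUNNING. replace() compares the old
   // value with equals(), so exactly one contender succeeds in claiming the task.
   public static boolean checkOut(Cache<String, String> taskCache, String taskId) {
      String state = taskCache.get(taskId);
      if (!"AVAILABLE".equals(state)) {
         return false; // missing, or already claimed by another node
      }
      return taskCache.replace(taskId, "AVAILABLE", "RUNNING");
   }
}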
The basic algorithm is something like this:
For a refresher, a normal FJ pool has a fork() method that takes in a task,
and then places that task on an internal queue (actually, one of several
queues). When threads are idle, they look to the nearest work queue for
work. If that work queue does not have work, they "steal" work from another
thread's queue. So in the best case, tasks remain on the same thread as
the task that spawned them, so tasks that process the same data as their
parents may still have that data in the CPU's cache, etc. There's more to
it than that, but that's the basic idea.
This distributed algorithm just adds an extra layer on top for tasks that
are marked "distributable" (by extending DistributedFJTask instead of the
normal ForkJoinTask). When you call fork() with a DistributedFJTask, it
first checks to see if the local pool's work queue is empty. If so, we just
go ahead and submit it locally; there's no reason to distribute it. If not,
we put the task in the task cache, and let Infinispan distribute it. When a
node has no more work to do in its internal fork-join queues, it looks at
the task cache and tries to pull work from there.
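In code, that decision looks roughly like this - a paraphrase with
illustrative names (the actual prototype overrides fork() itself, which is
why I had to make it non-final, as mentioned below):

import java.util.UUID;
import java.util.concurrent.RecursiveTask;
import org.infinispan.Cache;

public abstract class DistributedFJTask<V> extends RecursiveTask<V> {

   // not serialized with the task; a real implementation would re-inject it on the remote node
   private transient Cache<String, DistributedFJTask<?>> taskCache;

   protected DistributedFJTask(Cache<String, DistributedFJTask<?>> taskCache) {
      this.taskCache = taskCache;
   }

   // Fork locally when this worker has no backlog; otherwise publish to the task cache.
   public DistributedFJTask<V> distributableFork() {
      if (inForkJoinPool() && getQueuedTaskCount() == 0) {
         fork(); // local queue is empty: no reason to distribute
      } else {
         // let Infinispan's normal DIST hashing decide which node picks it up;
         // the task itself needs to be serializable for this to work
         taskCache.put(UUID.randomUUID().toString(), this);
      }
      return this;
   }
}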
So, it isn't really a "work-stealing" algorithm, per se; the distributable
tasks are being distributed eagerly using Infinispan's normal cache
distribution. But I'm hoping that doing that also makes it easier to handle
node failure, since nodes collectively share a common picture of the work
to be done.
This approach required one change to the actual FJ classes themselves
(in org.infinispan.util.concurrent.jdk8backported).
That's probably the most controversial change. I had to make the original
ForkJoinTask's fork() method non-final in order to extend it cleanly.
There's probably a way around that, but that's the cleanest option I have
thought of thus far.
And lastly, it's not done yet: basic task distribution is working, but I
haven't tackled failover to any real extent yet. The biggest questions,
though, are around what to do with the existing distributed execution
interfaces. For example, DistributedTask has a getCallable() method because
it assumes it's wrapping a Callable. But ForkJoinTasks don't extend
Callable. I could put in a shim to wrap the DistributedFJTasks into
Callables for the sake of that method, but I don't know if it's worth it.
Similarly, the DistributedExecutorService interface exposes a lot of
submit-to-specific-address or submit-to-all-addresses methods, which are an
odd fit here since tasks are distributed via their own cache. Even if I
used a KeyAffinityService to target the task to the given Address, it might
get picked up by another node that shares that same hash. But I can add
a direct-to-single-Address capability if that seems worthwhile.
Alternately, I can just use entirely different interfaces
(DistributedFJExecutorService, DistributedFJTask?).
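For what it's worth, the shim for getCallable() would be tiny - something
vaguely like this (illustrative, not existing API):

import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinTask;

public class FJTaskCallableAdapter {

   // Adapt a ForkJoinTask to the Callable-based API: call() runs the task and returns its result.
   public static <V> Callable<V> asCallable(final ForkJoinTask<V> task) {
      return new Callable<V>() {
         public V call() throws Exception {
            return task.invoke();
         }
      };
   }
}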
Thoughts? Concerns? Glaring issues?
11 years, 7 months
How to run the testsuite?
by Sanne Grinovero
Hi all,
after reviewing some pull requests, I've been unable to run the testsuite
for the last couple of days; since Anna's fixes affect many modules, I'm
trying to run the testsuite of the whole project, as we should always
do, though I admit I haven't done it in a while because of the core module
failures.
So I run:
$ mvn -fn clean install
using -fn to have it continue after the core failures.
The first attempt gave me an OOM while running with a 1G heap; I'm pretty
sure this was good enough some months back.
The second attempt slowed down like crazy, and I found a warning about
having filled the code cache, so I doubled it to 200M.
Third attempt: OutOfMemoryError: PermGen space! But I'm running with
-XX:MaxPermSize=380M, which should be plenty?
This is:
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
MAVEN_OPTS=-Xmx2G -XX:MaxPermSize=380M -XX:+TieredCompilation
-Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
-XX:ReservedCodeCacheSize=200M
-Dlog4j.configuration=file:/opt/infinispan-log4j.xml
My custom log configuration just disables trace & debug.
Going to try now with a larger PermGen and different JVMs, but it looks
quite bad. Any other suggestions?
(I do have the security limits set up properly.)
Sanne
11 years, 8 months
Classloading issue with multiple modules in AS7
by Sanne Grinovero
When starting an EmbeddedCacheManager from a different module deployed
in the AS, I get this stacktrace:
Caused by: org.hibernate.search.SearchException: Unable to initialize
directory provider:
org.hibernate.search.test.integration.jbossas7.model.Member
at org.hibernate.search.store.impl.DirectoryProviderFactory.createDirectoryProvider(DirectoryProviderFactory.java:87)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.createDirectoryProvider(DirectoryBasedIndexManager.java:232)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:100)
at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:227)
... 19 more
Caused by: org.infinispan.config.ConfigurationException:
org.infinispan.CacheException: Unable to load component metadata!
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:386)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:341)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:328)
at org.hibernate.search.infinispan.CacheManagerServiceProvider.start(CacheManagerServiceProvider.java:93)
at org.hibernate.search.engine.impl.StandardServiceManager$ServiceProviderWrapper.startVirtual(StandardServiceManager.java:178)
at org.hibernate.search.engine.impl.StandardServiceManager.requestService(StandardServiceManager.java:124)
at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.initialize(InfinispanDirectoryProvider.java:86)
at org.hibernate.search.store.impl.DirectoryProviderFactory.createDirectoryProvider(DirectoryProviderFactory.java:84)
... 22 more
Caused by: org.infinispan.CacheException: Unable to load component metadata!
at org.infinispan.factories.components.ComponentMetadataRepo.initialize(ComponentMetadataRepo.java:131)
at org.infinispan.factories.GlobalComponentRegistry.<init>(GlobalComponentRegistry.java:103)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:381)
... 29 more
Caused by: java.lang.NullPointerException
at org.infinispan.factories.components.ComponentMetadataRepo.readMetadata(ComponentMetadataRepo.java:53)
at org.infinispan.factories.components.ComponentMetadataRepo.initialize(ComponentMetadataRepo.java:129)
... 31 more
The ComponentMetadataRepo is unable to load
"infinispan-core-component-metadata.dat", which contains the
critically important information for wiring together the internal
components of Infinispan core.
Now I think this is quite silly, as locating this resource is trivial:
it's in the same jar as all the Infinispan core classes,
infinispan-core-[version].jar, so patching this looks trivial.
It's currently using the ClassLoader configured as defaultClassLoader in
org.infinispan.factories.AbstractComponentRegistry, but really it
should just use something like
AbstractComponentRegistry.class.getClassLoader()?
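Just to illustrate the kind of fallback I mean (an illustrative sketch, not
the actual Infinispan code):

import java.io.InputStream;

public class MetadataLookup {

   // Prefer the configured classloader, but fall back to the loader that defined
   // the core classes, which by definition can see resources packaged in infinispan-core.
   static InputStream findComponentMetadata(ClassLoader configuredLoader) {
      String resource = "infinispan-core-component-metadata.dat";
      InputStream in = configuredLoader == null ? null : configuredLoader.getResourceAsStream(resource);
      if (in == null) {
         in = MetadataLookup.class.getClassLoader().getResourceAsStream(resource);
      }
      return in;
   }
}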
Things get a bit more tricky with extension modules, and a lot of
magic is wrapped around this defaultClassLoader that goes beyond my
understanding, so I would be glad to leave this issue to a core
Infinispan developer more familiar with the intentions here.
Sanne
11 years, 8 months
a nice HTML5 console for Infinispan & a question on MBean names...
by James Strachan
Howdy folks!
I'm working on an HTML5 web console called hawtio (http://hawt.io)
which is a pluggable & modular console for various Java libraries,
frameworks and containers. e.g. today it has plugins for Apache Camel,
ActiveMQ, Fuse Fabric as well as JMX, OSGi & Logging - then plugins
for containers like Tomcat, Jetty, JBoss, Karaf/ServiceMix. We've a
cute dashboard & wiki which uses git to store its configuration/files
too.
Anyway the reason I bring this up is yesterday we had our first
Infinispan issue with hawtio:
https://github.com/hawtio/hawtio/issues/134
it's now fixed, but it got me thinking whether we could have a nice little
hawtio plugin for Infinispan so folks could make dashboards of real
time metrics of caches, browse caches etc.
I enabled metrics and managed to get a basic JMX tree going with real
time metrics on attributes/charts on a single Cache using vanilla JMX
in hawtio with Infinispan. I've attached an example of how it looks if
you're interested.
The thing is, there's no way to easily click on a folder and get all
the MBeans for the Statistics. If that were the case then hawtio would
show a sortable table of all the metrics of all the caches in one
table view, or we could easily chart any of the metrics of all the
caches in one real-time chart.
It'd be a fairly minor change; we'd just need to change the ObjectName
used for the Cache mbeans from this naming convention:
Infinispan:type=Cache,name="drink(local)",manager="DefaultCacheManager",component=Statistics
to this (just moving the "name" property to the end)
Infinispan:type=Cache,manager="DefaultCacheManager",component=Statistics,name="drink(local)"
then in the hawtio JMX tree we could select the "Statistics" folder
and see all of the mbeans in a table and so you could sort the table
by metric, see all the values on a single screen & create real time
charts of any permutation of cache & metric.
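Purely to illustrate the proposed ordering (using the example names above;
this is not existing Infinispan code):

import javax.management.ObjectName;

public class NamingExample {
   public static void main(String[] args) throws Exception {
      // current ordering: the cache name sits before the component kind
      ObjectName current = new ObjectName(
            "Infinispan:type=Cache,name=\"drink(local)\",manager=\"DefaultCacheManager\",component=Statistics");
      // proposed ordering: component kind first, cache name last, so consoles that
      // build their JMX tree from the key order group all Statistics beans together
      ObjectName proposed = new ObjectName(
            "Infinispan:type=Cache,manager=\"DefaultCacheManager\",component=Statistics,name=\"drink(local)\"");
      System.out.println(current + " -> " + proposed);
   }
}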
I wondered if folks fancied either adopting this naming convention
(putting the name last and the component kind before it) or adding a
configuration flag so we could enable this kind of JMX naming
convention style? It'd make things much more hawt when using hawtio
and infinispan! :). You'd get nice HTML5 statistics in hawtio
instantly on all caches.
I also wondered if folks fancied adding a few more operations to the
Cache mbean so that we could build a little console in hawtio for
Infinispan; to view/update contents of the caches or flush them etc?
As background: since JMX is usually the lowest common denominator,
hawtio defaults to using it to detect what's in a JVM. If we discover a
particular kind of MBean we then enable/disable different parts of the
hawtio UI dynamically. e.g. if you're viewing a JVM then you deploy
some Apache Camel, hey presto, the Camel UI in hawtio appears as a
tab.
From a technology perspective hawtio is all static HTML5 / JavaScript
on the client; it then communicates with the MBeans via an awesome
library called jolokia (http://jolokia.org/) which exposes JMX over
HTTP/JSON.
So hawtio could totally use the Infinispan REST API (but it'd be nice
if there was an mbean registered to indicate the REST API is running
and where to find it so hawtio could discover it). However you can
usually rely more on JMX being there (folks don't always deploy
Infinispan inside a web container with a REST API). So I wondered if
folks fancied adding a simple little JMX API to query / update / flush
the cache that we could then use to build a little HTML5 console in
hawtio?
e.g. something vaguely like:
interface CacheMBean {
    ...
    String getJson(String key);
    void setJson(String key, String value);
    // lets browse the keys - in a simple paginated way somehow...
    Set<String> getCacheKeys(String from, int count);
}
You could return Objects; jolokia automarshalls things to JSON; though
maybe having a specific JSON reading/writing mbean (using
jolokia/jackson under the covers) might be neater as it'd work with
all JMX tools?
So in summary, I'm happy to help hack a little hawtio plugin for
infinispan if anyone's interested. Making the cache ObjectName change
would be a nice quick win & if more operations get added to the MBeans
we can then easily build a better hawtio plugin for Infinispan.
Thoughts?
--
James
-------
Red Hat
Email: jstracha(a)redhat.com
Web: http://fusesource.com
Twitter: jstrachan, fusenews
Blog: http://macstrac.blogspot.com/
Open Source Integration
11 years, 8 months