Design change in Infinispan Query
by Sanne Grinovero
Hello all,
currently Infinispan Query is an interceptor registered on the
specific Cache instance which has indexing enabled; each such
interceptor does all its work in the sole scope of the cache it was
registered in.
If you enable indexing on - for example - 3 different caches, 3
different Hibernate Search engines are started in the background, and
they are all unaware of each other.
After some design discussions with Ales for CapeDwarf, and also to
call attention to something that has bothered me for some time, I'd
like to evaluate the option of having a single Hibernate Search engine
registered in the CacheManager and shared across the indexed caches.
Current design limitations:
A- If they are all configured to use the same base directory to
store indexes, and happen to have same-named indexes, they'll share
the index without being aware of each other. This is going to break
unless the user configures some tricky parameters, and even so
performance won't be great: instances will lock each other out, or at
best write in alternate turns.
B- The search engine isn't particularly "heavy"; still, it would be
nice to share some components and internal services.
C- Configuration details which need some care - like injecting a
JGroups channel for clustering - must be applied separately to each
isolated instance, so large parts of the configuration end up quite
similar but not identical.
D- Incoming messages at a JGroups Receiver need to be routed not
only among indexes, but also among Engine instances. This prevents
Query from reusing code from Hibernate Search.
Problems with a unified Hibernate Search Engine:
1#- Isolation of types / indexes. If the same indexed class is
stored in different (indexed) caches, they'll share the same index. Is
it a problem? I'm tempted to consider this a good thing, but wonder if
it would surprise some users. Would you expect that?
2#- Configuration format overhaul: indexing options won't be set in
the cache section but in the global section. I'm looking forward to
using the schema extensions anyway to provide a better configuration
experience than the current <properties />.
3#- Assuming 1# is fine, when a search hit is found I'd need to be
able to figure out from which cache the value should be loaded.
3#A we could have the cache name encoded in the index, as part
of the identifier: {PK,cacheName}
3#B we actually shard the index, keeping a physically separate
index per cache. This would mean searching on the joint index view but
extracting hits from specific indexes to keep track of "which index".
I think we can do that, but it's definitely tricky.
It's likely easier to keep indexed values from different caches in
different indexes. That would mean rejecting 1# and changing the
user-defined index name, for example by appending the cache name to
the user-defined string.
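To make option 3#A concrete, here is a minimal sketch of what a
composite index identifier could look like; IndexedKey is a
hypothetical name, not an existing Infinispan type:

import java.io.Serializable;

// Hypothetical composite identifier for option 3#A: the index stores
// both the entry's primary key and the name of the cache it came
// from, so a search hit can be loaded back from the right cache.
final class IndexedKey implements Serializable {
   private final Object primaryKey;
   private final String cacheName;

   IndexedKey(Object primaryKey, String cacheName) {
      this.primaryKey = primaryKey;
      this.cacheName = cacheName;
   }

   Object getPrimaryKey() { return primaryKey; }
   String getCacheName() { return cacheName; }

   @Override
   public boolean equals(Object o) {
      if (this == o) return true;
      if (!(o instanceof IndexedKey)) return false;
      IndexedKey k = (IndexedKey) o;
      return primaryKey.equals(k.primaryKey) && cacheName.equals(k.cacheName);
   }

   @Override
   public int hashCode() {
      return 31 * primaryKey.hashCode() + cacheName.hashCode();
   }
}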
Any comment?
Cheers,
Sanne
singleton @Listeners
by Mircea Markus
This is a problem that pops up constantly:
User: "I add a listener to my distributed/replicated cache but this gets invoked numOwners times - can I make that to be invoked only once cluster wise?"
Developer: "Yes, you can! You have to do that and that..."
What about a "singleton" attribute on the Listener? That would make the reply shorter:
Developer: "Use @Listener(singleton=true)"
Cheers,
Mircea
NPE in MapReduceTask running in cluster
by Matej Lazar
An NPE occurs while running the CapeDwarf cluster tests; see the stack trace below.
The null comes from MapReduceTask.invokeEverywhere.
13:36:51,053 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.30.248:8080-1) JBAS010281: Started search_capedwarf-test cache from capedwarf container
13:36:51,058 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster
*13:36:51,065 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] across entire cluster, results are {node-b/capedwarf=null}*
13:36:51,067 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoking MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
13:36:51,069 DEBUG [org.infinispan.distexec.mapreduce.MapReduceTask] (http-/192.168.30.248:8080-1) Invoked MapCombineCommand [keys=[], taskId=14c75f18-3861-4a06-8a4a-b1592d542d14] locally
Any idea?
Thanks,
Matej.
java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.mergeResponse(MapReduceTask.java:530)
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:439)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:328)
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:692)
at org.jboss.capedwarf.search.CapedwarfSearchService.listIndexes(CapedwarfSearchService.java:94)
at org.jboss.test.capedwarf.cluster.SearchTestCase.clear(SearchTestCase.java:360)
at org.jboss.test.capedwarf.cluster.SearchTestCase.cleanpOnStart(SearchTestCase.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.jboss.arquillian.junit.Arquillian$6$1.invoke(Arquillian.java:270)
at org.jboss.arquillian.container.test.impl.execution.LocalTestExecuter.execute(LocalTestExecuter.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:115)
at org.jboss.arquillian.core.impl.EventImpl.fire(EventImpl.java:67)
at org.jboss.arquillian.container.test.impl.execution.ContainerTestExecuter.execute(ContainerTestExecuter.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.invokeObservers(EventContextImpl.java:99)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:81)
at org.jboss.arquillian.test.impl.TestContextHandler.createTestContext(TestContextHandler.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createClassContext(TestContextHandler.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.test.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.arquillian.core.impl.ObserverImpl.invoke(ObserverImpl.java:90)
at org.jboss.arquillian.core.impl.EventContextImpl.proceed(EventContextImpl.java:88)
at org.jboss.arquillian.core.impl.ManagerImpl.fire(ManagerImpl.java:135)
at org.jboss.arquillian.test.impl.EventTestRunnerAdaptor.test(EventTestRunnerAdaptor.java:111)
at org.jboss.arquillian.junit.Arquillian$6.evaluate(Arquillian.java:263)
at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:226)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:240)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:185)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:314)
at org.jboss.arquillian.junit.Arquillian.access$100(Arquillian.java:46)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:199)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:147)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.jboss.arquillian.junit.container.JUnitTestRunner.execute(JUnitTestRunner.java:65)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.executeTest(ServletTestRunner.java:160)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.execute(ServletTestRunner.java:126)
at org.jboss.arquillian.protocol.servlet.runner.ServletTestRunner.doGet(ServletTestRunner.java:90)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.weld.servlet.ConversationPropagationFilter.doFilter(ConversationPropagationFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.jboss.capedwarf.appidentity.GAEFilter.doFilter(GAEFilter.java:57)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:67)
at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:48)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:165)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:722)
Re: [infinispan-dev] Removing Infinispan dependency on the Hibernate-Infinispan module in 4.x
by Galder Zamarreño
Scott, what do you suggest doing instead then? Without the commands, evictAll invalidation won't work.
Are you suggesting that I revert to using the cache as a notification bus so that regions are invalidated?
On Feb 8, 2012, at 4:13 PM, Scott Marlow wrote:
> http://lists.jboss.org/pipermail/infinispan-dev/2012-February/010125.html has more context.
>
> Since there are no easy/quick fixes that can be applied at this time to remove the AS7 Infinispan dependency on the Hibernate-Infinispan module, I think we should avoid depending on the service loader way to supply the custom commands (in the Hibernate-Infinispan module), at least until this can be addressed elsewhere.
>
> I propose that the Hibernate-Infinispan second level cache should not use the Service Loader to pass custom commands into Infinispan. If we agree, I'll create a jira for this.
>
> Scott
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Testsuite error jiras
by Galder Zamarreño
When you open JIRAs to fix randomly failing tests, e.g. https://issues.jboss.org/browse/ISPN-2102, please make sure the test is disabled too.
This avoids confusion between the tests that have already been identified and the ones that have not.
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Command cancellation
by Vladimir Blagojevic
Hi,
I wanted to run the design of command cancellation that Manik and I
talked about recently by you guys. For more background on this task,
read https://issues.jboss.org/browse/ISPN-1042
At the originating node, each Cancellable command would create its
UUID in its constructor. The command then gets sent to the remote VMs,
and just before it executes there we associate the executing thread
with the UUID by calling CancellationService.registerThread (as the
first line of code in Command#perform). Once registered, the
Cancellable command goes into its potentially lengthy (in seconds)
execution. If needed, we can send a CancelCommand, which calls
CancellationService#cancelTask with the UUID; cancelTask interrupts
the associated thread.
WDYT?
Regards,
Vladimir
Cancellation of tasks:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CancellationService {
   // threads currently executing a cancellable command, keyed by command UUID
   private final ConcurrentMap<UUID, Thread> threads = new ConcurrentHashMap<UUID, Thread>();

   void registerThread(Thread t, UUID uuid) {
      threads.put(uuid, t); // associate the thread with the given UUID
   }

   void cancelTask(UUID uuid) {
      Thread t = threads.remove(uuid); // look up the thread, interrupt if it exists
      if (t != null) t.interrupt();
   }
}

interface CancellableCommand {
   UUID getCommandUUID();
}

class CancelCommand {
   private final UUID uuid; // UUID of the command to cancel; a parameter of this command
   private final CancellationService service;

   CancelCommand(CancellationService service, UUID uuid) {
      this.service = service;
      this.uuid = uuid;
   }

   Object perform() {
      service.cancelTask(uuid);
      return null;
   }
}
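For completeness, a sketch (hypothetical names, remote wiring and
serialization omitted) of how a cancellable command would use the
service, following the registration step described above:

import java.util.UUID;

// Hypothetical cancellable command: it creates its UUID at the
// originating node and registers the executing thread before the
// lengthy work begins, so a CancelCommand can interrupt it.
class LongRunningCommand implements CancellableCommand {
   private final UUID uuid = UUID.randomUUID(); // created in the constructor
   private final CancellationService service;

   LongRunningCommand(CancellationService service) {
      this.service = service;
   }

   public UUID getCommandUUID() {
      return uuid;
   }

   Object perform() throws InterruptedException {
      // first line of perform(), as proposed:
      service.registerThread(Thread.currentThread(), uuid);
      // ... potentially lengthy (seconds) execution, responsive to interruption ...
      Thread.sleep(10000);
      return null; // deregistration/cleanup is omitted from this sketch
   }
}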
ISPN-1586 and preloading in clustered caches
by Dan Berindei
Hi guys
During the final push for NBST I found a bug with preloading (entries
that didn't belong on a joiner weren't removed after the initial state
transfer). I decided to fix it and
https://issues.jboss.org/browse/ISPN-1586 at the same time, since the
latter was a longstanding bug and I had a reasonable idea of what to
do. However, I missed some implications and I need to fix them - there
is at least one Query test failing because of my change
(SharedCacheLoaderQueryIndexTest).
In 5.1, preloading worked like this:
1. Start the CacheLoaderManager, which preloads everything from the
cache store into memory.
2. Start the StateTransferManager, retrieving data from the other
cache members and overwriting already-preloaded values.
3. When the initial state transfer ends, entries not owned by the
local node are deleted.
The main issue with this, raised in ISPN-1586, is that entries that were
deleted on the other cache members are "revived" on the joiner when it
reads the data from the cache store. There is another performance issue,
because we load a lot of data that we then discard, but that's less
important.
With the ISPN-1586 fix, preloading should work like this:
1. Start the StateTransferManager and receive the initial CH.
2. If the local node is not the first to start up, state fetching
(either in-memory or persistent) is enabled, and the cache store is
non-shared, clear the cache store.
3. Start the CacheLoaderManager, which preloads the cache store into
memory - but only if the local node is the first one to have started
the cache OR if state fetching is disabled.
4. Run the initial state transfer, retrieving data from the other
cache members (if any, and if state fetching is enabled).
This solves ISPN-1586, but it does mean that data from non-shared cache
stores will be lost on all the nodes except the first that starts up. So if
the last node to shut down is not the first node to start back up, the
cluster will lose data.
These are the alternatives I'm considering:
a) Finish the ISPN-1586 fix and clearly document that non-shared cache
stores don't guarantee persistence after cluster restart (unless the last
cache to stop is the first to start back up and shutdown was spaced out to
allow state transfer to move everything to the last node).
b) Revert my ISPN-1586 fix and allow "zombie" cache entries on the joiners
(leaving ISPN-1586 open).
I think there may be a third option:
c) Make preload a JMX operation and allow the user to run a cluster-wide
preload once all the nodes in the cluster have started up. But this looks a
little complicated, and it would require either versioning or prohibiting
external cache writes until the cluster-wide preload is done to ensure
consistency.
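To make c) concrete, a rough sketch of the shape such an operation
could take, using Infinispan's JMX annotations; the class, name, and
body are purely illustrative, and none of the coordination/versioning
concerns are addressed here:

import org.infinispan.jmx.annotations.MBean;
import org.infinispan.jmx.annotations.ManagedOperation;

// Hypothetical JMX-exposed preload for option c): the user triggers
// it once, after all the nodes in the cluster have started.
@MBean(objectName = "Preloader", description = "Manual cluster-wide preload")
class PreloadManager {

   @ManagedOperation(description = "Load locally-owned cache store entries into memory")
   public void preload() {
      // iterate the local cache store and move the entries owned by
      // this node into the data container; implementation omitted.
   }
}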
What do you guys think? Sanne, I'm particularly interested in how you
think option a) would fit with the query module.
Cheers
Dan
optimistic (or not)
by Mircea Markus
Hi,
By default our optimistic transactions don't have writeSkewCheck enabled, so their behaviour is counterintuitive for the user: they don't fail the commit in the case of a concurrent change.
Anyone can remember why these defaults are being used?
I'd rather have optimistic transactions with writeSkewCheck enabled by default so that we won't confuse users.
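For reference, a minimal sketch of what users currently have to opt
into, assuming the 5.x programmatic configuration API:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.util.concurrent.IsolationLevel;

public class WriteSkewConfigExample {
   public static void main(String[] args) {
      // optimistic locking with the write skew check enabled - the
      // behaviour proposed above as the default
      Configuration cfg = new ConfigurationBuilder()
         .transaction()
            .lockingMode(LockingMode.OPTIMISTIC)
         .locking()
            .isolationLevel(IsolationLevel.REPEATABLE_READ) // required by the check
            .writeSkewCheck(true)
         .versioning()
            .enable()
            .scheme(VersioningScheme.SIMPLE) // versioned entries back the check
         .build();
      System.out.println(cfg);
   }
}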
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
Re: [infinispan-dev] Issue with cache blocks for local read-only cache
by Galder Zamarreño
On Sep 19, 2012, at 4:20 PM, Andrig Miller <anmiller(a)redhat.com> wrote:
> Yes, I can see how that can happen, if the data is deleted from outside the application.
^ The issue does not only happen if the data is deleted outside the application. As indicated in https://hibernate.onjira.com/browse/HHH-3817, this can happen with two competing transactions.
> If you cache something as READ_ONLY, and it gets deleted, that doesn't fit the definition of READ_ONLY though. You are using the wrong cache concurrency strategy.
>
> Even that issue outlines the scenario where the collection is updated, which means its not a READ_ONLY.
I think the update is irrelevant here. The issue is related to putFromLoad + remove, which both AFAIK, are allowed in READ_ONLY (remember that we had the discussion on whether remove should be allowed in a READ_ONLY cache: https://hibernate.onjira.com/browse/HHH-7350).
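For readers without the JIRA context, a simplified, self-contained
illustration of the race (this is not the 2LC code; names and timings
are made up):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// T1 reads the DB, T2 deletes the row and removes the cache entry in
// the window before T1's putFromLoad runs, so the put revives stale data.
public class PutFromLoadRace {
   static final ConcurrentMap<Long, String> cache = new ConcurrentHashMap<Long, String>();

   public static void main(String[] args) throws InterruptedException {
      Thread t1 = new Thread(new Runnable() {
         public void run() {
            String fromDb = "row-42"; // T1: database read sees the row
            sleep(100);               // T2 deletes in this window
            cache.put(42L, fromDb);   // T1: putFromLoad stores stale data
         }
      });
      Thread t2 = new Thread(new Runnable() {
         public void run() {
            sleep(50);
            cache.remove(42L);        // T2: row deleted, cache entry removed
         }
      });
      t1.start(); t2.start();
      t1.join(); t2.join();
      System.out.println(cache.get(42L)); // prints "row-42": a revived, stale entry
   }

   static void sleep(long ms) {
      try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
   }
}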
>
> Andy
>
> ----- Original Message -----
>> From: "Galder Zamarreño" <galder(a)redhat.com>
>> To: "infinispan -Dev List" <infinispan-dev(a)lists.jboss.org>
>> Cc: "Steve Ebersole" <steve(a)hibernate.org>, "John O'Hara" <johara(a)redhat.com>, "Andrig Miller" <anmiller(a)redhat.com>,
>> "Jeremy Whiting" <jwhiting(a)redhat.com>
>> Sent: Wednesday, September 19, 2012 2:48:37 AM
>> Subject: Re: [infinispan-dev] Issue with cache blocks for local read-only cache
>>
>> This is code written in JBoss Cache days to deal with situations
>> where putFromLoad might try to store stale data (if data is deleted
>> in between a database read and putFromLoad being called).
>>
>> Indeed this can happen with read only data, and has nothing to do
>> with clustering.
>>
>> The original issue is: https://hibernate.onjira.com/browse/HHH-3817
>>
>> I'll check how this can be improved.
>>
>> Cheers,
>>
>> On Sep 18, 2012, at 4:55 PM, Sanne Grinovero <sanne(a)infinispan.org>
>> wrote:
>>
>>> There seems to be a single (global) reentrant lock which is
>>> acquired
>>> on any put; I don't know much about the module design, but there is
>>> a
>>> comment close to the lock acquisition mentioning a need to flush an
>>> operations queue to avoid a deadlock.
>>> Could it just make sure to reorder keys, so that deadlocks are
>>> avoided?
>>>
>>> On 18 September 2012 16:26, Manik Surtani <manik(a)jboss.org> wrote:
>>>> Agreed.
>>>>
>>>> This locking appears to be in the 2LC integration code to guard a
>>>> local
>>>> collection though - there must be better, non-blocking ways to
>>>> guard this -
>>>> if it is needed at all for RO entities.
>>>>
>>>> - M
>>>>
>>>> On 18 Sep 2012, at 15:18, Andrig Miller <anmiller(a)redhat.com>
>>>> wrote:
>>>>
>>>> What I find interesting is that in this use case - READ_ONLY
>>>> concurrency strategy and a local cache (no invalidation and no
>>>> replication) - there is no need to do any locking.
>>>>
>>>> If we can get to a solution without any locking that would be
>>>> ideal.
>>>>
>>>> Andy
>>>>
>>>> ________________________________
>>>>
>>>> From: "Manik Surtani" <manik(a)jboss.org>
>>>> To: "Ståle W. Pedersen" <spederse(a)redhat.com>
>>>> Cc: "Galder Zamarreño" <galder(a)redhat.com>, "John O'Hara"
>>>> <johara(a)redhat.com>, "Jeremy Whiting" <jwhiting(a)redhat.com>,
>>>> "Andrig Miller"
>>>> <anmiller(a)redhat.com>, "Steve Ebersole" <steve(a)hibernate.org>,
>>>> "infinispan-dev" <infinispan-dev(a)lists.jboss.org>
>>>> Sent: Tuesday, September 18, 2012 8:13:43 AM
>>>> Subject: Re: Issue with cache blocks for local read-only cache
>>>>
>>>>
>>>> Looking at your profiler snapshot, these locks are in the
>>>> Hibernate 2nd
>>>> level cache implementation for Infinispan. Galder, any ideas?
>>>>
>>>> - M
>>>>
>>>> On 18 Sep 2012, at 13:43, Ståle W. Pedersen <spederse(a)redhat.com>
>>>> wrote:
>>>>
>>>> hi galder and manik, sorry for sending this mail to so many, but we've
>>>> run into an issue that prevents us from further scaling of the
>>>> specjenterprise2010 benchmark.
>>>>
>>>> so when doing specjenterprise2010 benchmark testing we've seen a lot of
>>>> blocks caused by the entity/query cache. we've been testing with caching
>>>> only a simple read-only entity bean and the queries (selects) related to
>>>> this entity.
>>>>
>>>> here is a screenshot of the hotspot found:
>>>> https://dl.dropbox.com/u/30971563/specjent_block.png
>>>>
>>>> here is the standalone.xml:
>>>> https://dl.dropbox.com/u/30971563/standalone-full.xml
>>>>
>>>> here is the orm.xml:
>>>> https://dl.dropbox.com/u/30971563/order_orm.xml
>>>>
>>>> what we don't understand is why there are so many puts into the
>>>> cache for an
>>>> object that is marked as read-only. when we're testing without
>>>> caching we do
>>>> not see any blocks.
>>>>
>>>> any help/ideas would be great. if anyone wants a jprofiler snapshot of
>>>> the run, let me know.
>>>>
>>>> regards, ståle
>>>> --
>>>> JBoss Performance Team Lead
>>>> JBoss by Red Hat
>>>>
>>>>
>>>> --
>>>> Manik Surtani
>>>> manik(a)jboss.org
>>>> twitter.com/maniksurtani
>>>>
>>>> Platform Architect, JBoss Data Grid
>>>> http://red.ht/data-grid
>>>>
>>>>
>>>> --
>>>> Manik Surtani
>>>> manik(a)jboss.org
>>>> twitter.com/maniksurtani
>>>>
>>>> Platform Architect, JBoss Data Grid
>>>> http://red.ht/data-grid
>>>>
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>> --
>> Galder Zamarreño
>> galder(a)redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
>>
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org