From galder at redhat.com Fri Jan 3 04:58:54 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Fri, 3 Jan 2014 10:58:54 +0100 Subject: [infinispan-dev] Synchronous write on cachestore In-Reply-To: References: Message-ID: Hi Guillaume, Thanks a lot for looking into these MongoDB cache store issues :). Apologies for the delay getting back to you. The test you've created does not really check that the data has been stored in MongoDB. It just checks that the cache's values() returns something, which it should since, even in the default configuration, the contents of the cache in memory should have that data. The test you've created, though, should definitely update the cache store. I'd recommend tracing it with the IDE or inspecting the logs to see why the cache store is not being updated. If you're writing a test that verifies that after passivation the cache store contains data, then you'd need an Infinispan version that has [1] fixed. Cheers, [1] https://issues.jboss.org/browse/ISPN-761 On Dec 26, 2013, at 10:32 PM, Guillaume SCHEIBEL wrote: > Hello everyone, > > I'm fixing some issues on the MongoDB cachestore configuration (v5.3). I've written a test [1] to check that the value I've added in the cache is correctly persisted into my MongoDB collection. > > The problem is that when it comes to "assert" time, the value put in the cache has still not been stored in MongoDB. > > So what can I do to have the value directly persisted into the cache store database? 
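For reference, whether a put reaches the store at put-time depends on two settings: passivation (entries written only on eviction) and asynchronous store writes. A hedged sketch in the 5.x-era XML style — the MongoDB store actually used its own schema namespace, and the class name and attributes below are illustrative assumptions, not verified configuration:

```xml
<namedCache name="mongoBackedCache">
   <!-- passivation="false": every write goes through to the store,
        not only entries evicted from memory -->
   <loaders passivation="false" shared="false">
      <loader class="org.infinispan.loaders.mongodb.MongoDBCacheStore"
              fetchPersistentState="false" purgeOnStartup="false">
         <!-- async disabled (the default): put() returns only after the
              store write, so the value should be visible in MongoDB at
              assert time -->
         <async enabled="false"/>
      </loader>
   </loaders>
</namedCache>
```

If a test still sees a stale store with a configuration like this, the write path itself is worth tracing, as suggested above.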
> > Thanks > Guillaume > > [1] https://gist.github.com/gscheibel/8138722 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From galder at redhat.com Fri Jan 3 11:38:38 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Fri, 3 Jan 2014 17:38:38 +0100 Subject: [infinispan-dev] IntelliJ 13 (133.370) and EOFException when compiling Message-ID: Hi guys, I've just come across the same problem in [1]. On OSX, I solved it by removing the ~/Library/Caches/IntelliJIdea13 folder completely. Cheers, [1] http://devnet.jetbrains.com/thread/451731 -- Galder Zamarreño galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From galder at redhat.com Tue Jan 7 03:08:32 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Tue, 7 Jan 2014 09:08:32 +0100 Subject: [infinispan-dev] Design of Remote Hot Rod events - round 2 In-Reply-To: <20131219121536.GA12590@hibernate.org> References: <11A2709F-3194-439C-8D8B-95D2FF38213C@redhat.com> <20131213160805.GA12937@hibernate.org> <283BDFDA-5F5F-43D8-897A-255010C34E74@redhat.com> <20131219121536.GA12590@hibernate.org> Message-ID: <6B23FC1C-CD2E-4846-ADDD-9ACF95766302@redhat.com> On Dec 19, 2013, at 1:15 PM, Emmanuel Bernard wrote: > On Thu 2013-12-19 9:46, Galder Zamarreño wrote: >>> == Example of continuous query atop remote listeners >>> >>> Thinking about how to implement continuous query atop this >>> infrastructure I am missing a few things. >>> >>> The primary problem is that I don't want to enlist a filter id per >>> continuous query I want to run. 
Not only that but I'd love to be able to >>> add a continuous query on the fly and disable it on the fly as well per >>> client. For that filters and converters are not flexible enough. >>> >>> What is missing is the ability to pass parameters from the client to >>> the remote filter and remote converter. Parameters should be provided >>> *per client*. Say client 1 registers the continuous query listener with >>> "where age > 19" and client 2 registers the CQ listener with "where name >>> = emmanuel". The filter, knowing for which client it filters, will be able to only >>> return the keys that match the query. >> >> This all sounds a bit like remote code execution to me? You're asking for the client to pass some kind of executable thing that acts as a filter. That's a separate feature IMO, which I believe @Tristan is looking into. Once that's in place, I'm happy to enhance stuff in the remote event side to support it. > > I don't think you are correct. > This is not remote execution in the sense of arbitrary code driven by > the client. Remote execution will likely be triggered, render a > result and stop. It will not send matching events in a continuous fashion. > Plus remote execution will likely involve dynamic languages and I'm not > sure we want to go that route for things like continuous query. Well, it's remote execution of a condition, which is a type of code :) From the Hot Rod perspective, until remote code execution is in place, we could add a list of N byte[] that are treated as parameters for the filter, and the filter deciphers what those mean. So, in your case, there would be only 1 parameter, a byte[], and it would be unmarshalled by the filter into "where age > 19". If multiple clients add the same parameter, we could use the same filter instance, that's assuming equality can be calculated based on the contents of the byte[] by Hot Rod. 
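The byte[]-parameter idea above could be sketched on the server side like this — a minimal illustration with hypothetical interface names, not the actual Infinispan filter API: the client ships an opaque byte[], the filter factory unmarshals it into a condition, and equal parameters can share one filter instance.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical filter contract; the real Infinispan API differs.
interface KeyValueFilter<K, V> {
    boolean accept(K key, V value);
}

final class AgeFilterFactory {
    // One byte[] parameter, e.g. UTF-8 "age>19", unmarshalled into a predicate.
    static KeyValueFilter<String, Integer> create(byte[] param) {
        String condition = new String(param, StandardCharsets.UTF_8);
        int threshold = Integer.parseInt(
                condition.substring(condition.indexOf('>') + 1).trim());
        return (key, age) -> age > threshold;
    }

    // Equality over the raw bytes would let the server share one filter
    // instance between clients that registered with the same parameter.
    static boolean sameParams(byte[] a, byte[] b) {
        return Arrays.equals(a, b);
    }
}
```

The same shape extends to N parameters by taking a list of byte[] and letting each factory decide how to decode them.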
WRT your suggestion to activate/deactivate the continuous query on the fly, can't we achieve that with registration/deregistration of listeners? Or are you trying to avoid all the setup involved in sending the listener registration stuff around? Adding activate/deactivate would require two new operations. Cheers, > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From brmeyer at redhat.com Tue Jan 7 14:14:22 2014 From: brmeyer at redhat.com (Brett Meyer) Date: Tue, 7 Jan 2014 14:14:22 -0500 (EST) Subject: [infinispan-dev] help with Infinispan OSGi In-Reply-To: <7313DB94-6C42-4502-8FEA-FCFA18329218@redhat.com> References: <1953691887.24945507.1386275697958.JavaMail.root@redhat.com> <05D6A159-2E02-462A-AFB7-47DC3914CB02@redhat.com> <1378853890.25611986.1386348693503.JavaMail.root@redhat.com> <2084410860.25622457.1386349002612.JavaMail.root@redhat.com> <595428667.30164908.1386870900128.JavaMail.root@redhat.com> <52AF0781.7010400@redhat.com> <7313DB94-6C42-4502-8FEA-FCFA18329218@redhat.com> Message-ID: <1797519982.45377895.1389122062165.JavaMail.root@redhat.com> Apologies for the delay -- things have been nuts. Here's the route I've taken so far. I created OsgiClassLoader that searches available Bundles for classes and resources. A new (non-static) AggregateClassLoader replaces FileLookup, CL-related methods in Util, and parts of ReflectionUtil. AggregateClassLoader extends/overrides ClassLoader and searches over a prioritized list of user/app CL, OsgiCL, System CL, TCCL, etc. This is easily wired into GlobalConfiguration concepts. However, I'm not exactly sure how this will play into Configuration and Cache. Configuration#classLoader is deprecated. 
Is that in favor of CacheImpl#getClassLoader & CacheImpl#with(ClassLoader)? Can someone describe the scoping of CL concepts between GlobalConfiguration and the Configuration/Cache? Should both have their own instance of AggregateClassLoader? Cache's instance would put the user/app provided CL on the top of the queue? Hopefully that all makes sense... Brett Meyer Red Hat, Hibernate ORM ----- Original Message ----- From: "Galder Zamarreño" To: "Eric Wittmann" Cc: "infinispan -Dev List" , "Brett Meyer" , "Sanne Grinovero" , "Pete Muir" , "Randall Hauch" , "Steve Jacobs" Sent: Wednesday, December 18, 2013 8:22:13 AM Subject: Re: [infinispan-dev] help with Infinispan OSGi On Dec 16, 2013, at 3:00 PM, Eric Wittmann wrote: > I wanted to add that in the Overlord group we're also looking into using ISPN in OSGi. Our directive is to get our projects running in Fuse 6.1. > > To that end I've been working on getting Overlord:S-RAMP up and running, which requires both ModeShape and ISPN. > > Additionally, Gary Brown uses ISPN in Overlord:RTGov and so will need to get it working directly (no ModeShape) in Fuse 6.1. > > I've made some progress on my end but have run into some of the same issues as Brett. > > An additional issue I hit was the use of Java's ServiceLoader for org.infinispan.configuration.parsing.ConfigurationParser. None of the parsers get loaded because ServiceLoader doesn't work particularly well in OSGi. We had this same issue in S-RAMP (we use ServiceLoader in a few places). I solved it by using the OSGi Service Registry when running in an OSGi container, but continuing to use ServiceLoader otherwise. ^ Can you add a JIRA for this so that we can abstract this away? I'm not sure how exactly we'd decide on the impl to use. By default it'd be the SL impl. When used in OSGi, though, an alternative service loading impl would need to be configured specifically by the user? Or would Infinispan itself detect that it's in OSGi and hence use the corresponding impl? 
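The abstraction being asked for here might look roughly like this — illustrative names only, not an actual Infinispan interface: a small service-discovery contract whose default implementation wraps java.util.ServiceLoader, leaving room for an OSGi-aware implementation backed by the OSGi Service Registry.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Hypothetical abstraction over service discovery.
interface ServiceFinder {
    <T> List<T> find(Class<T> contract, ClassLoader cl);
}

// Default implementation: plain java.util.ServiceLoader against an explicit
// classloader (avoiding implicit TCCL use). An OSGi container could swap in
// an implementation that queries the OSGi Service Registry instead.
final class ServiceLoaderFinder implements ServiceFinder {
    @Override
    public <T> List<T> find(Class<T> contract, ClassLoader cl) {
        List<T> impls = new ArrayList<>();
        for (T impl : ServiceLoader.load(contract, cl)) {
            impls.add(impl);
        }
        return impls;
    }
}
```

Whether the OSGi variant is chosen by explicit configuration or by detecting the container at startup is exactly the open question in the thread; the contract itself stays the same either way.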
I've no idea about OSGI. > In any case - I was wondering if anyone thought it might be a good idea to create a git repo where we can create some test OSGi applications that use ISPN and can be deployed (e.g. to Fuse). This would be for testing purposes only - to shake out problems. Might be useful for collaboration? A quickstart on [1] would be the perfect place for something like that, i.e. fuse + infinispan or something like that. [1] http://www.jboss.org/jdf/quickstarts/get-started/ > > -Eric > > > On 12/12/2013 12:55 PM, Brett Meyer wrote: >> I finally had a chance to start working with this, a bit, today. Here's what I've found so far. >> >> In general, I'm seeing 2 types of CL issues come up when testing w/ hibernate-infinispan: >> >> 1.) Reliance on the client bundle's CL. Take the following stack as an example: https://gist.github.com/brmeyer/c8aaa1157a4a951a462c. Hibernate's InfinispanRegionFactory is building a ConfigurationBuilderHolder. Parser60#parseTransport eventually gives the ConfigurationBuilderHolder#getClassLoader to Util#loadClass. But since this thread is happening within the hibernate-infinispan bundle, that CL instance is hibernate-infinispan's BundleWiring. If hibernate-infinispan's manifest explicitly imports the package being loaded, this works fine. But, as I hit, that's not usually the case. This stack fails when it attempted to load org.infinispan.remoting.transport.jgroups.JGroupsTransport. Adding org.infinispan.remoting.transport.jgroups to our imports worked, but that's not ideal. >> >> 2.) Reliance on TCCL. See GlobalConfigurationBuilder#cl as an example. TCCL should be avoided at all costs. Here's an example: https://gist.github.com/brmeyer/141ea83fb632dd126406. Yes, ConfigurationBuilderHolder could attempt to pass in a CL to GlobalConfigurationBuilder, but we'd run into the same situation for #1. 
In this specific example, we're trying to load the "infinispan-core-component-metadata.dat" resource within the infinispan-core bundle, not visible to the hibernate-infinispan bundle CL. >> >> commons already has a step towards a solution: OsgiFileLookup. However, it scans over *all* bundles activated in the container. There's certainly performance issues with that, but more importantly can introduce conflicts (multiple versions of Infinispan or client bundles running simultaneously, a resource existing in multiple bundles, etc.). >> >> What we did in Hibernate was to introduce an OSGi-specific implementation of ClassLoader that's aware of what bundles it needs to consider. In frameworks with multiple bundles/modules, this is definitely more complicated. For now, we limit the scope to core, entitymanager (JPA), and the "requesting bundle" (the client bundle requesting the Session). The "requesting bundle" concept was important for us since we scan and rely on the client bundle's entities, mapping files, etc. >> >> There are several routes, but all boil down to relying on OSGi APIs to use Bundles to discover classes and resources, with TCCL & Class#getClassLoader as a just-in-case backup. How the scope of that Bundle set is defined is largely up to the framework's existing architecture and dependency tree. >> >> What I might recommend as a first step would be expanding/refactoring OsgiFileLookup to include loading classes, but continue to allow it to scan all bundles (for now). That will at least remove the initial CL issues. But, that would need to be followed up. >> >> Before I keep going down the rabbit hole, just wanted to see if there were any other thoughts. I'm making general assumptions without knowing much about Infinispan's architecture. Thanks! 
>> >> Brett Meyer >> Red Hat, Hibernate ORM >> >> ----- Original Message ----- >> From: "Brett Meyer" >> To: "Randall Hauch" , "infinispan -Dev List" >> Cc: "Pete Muir" , "Steve Jacobs" >> Sent: Friday, December 6, 2013 11:56:42 AM >> Subject: Re: [infinispan-dev] help with Infinispan OSGi >> >> Sorry, forgot the link: >> >> [1] https://hibernate.atlassian.net/browse/HHH-8214 >> >> Brett Meyer >> Software Engineer >> Red Hat, Hibernate ORM >> >> ----- Original Message ----- >> From: "Brett Meyer" >> To: "Randall Hauch" , "infinispan -Dev List" >> Cc: "Pete Muir" , "Steve Jacobs" >> Sent: Friday, December 6, 2013 11:51:33 AM >> Subject: Re: [infinispan-dev] help with Infinispan OSGi >> >> Randall, that is *definitely* the case and is certainly true for Hibernate. The work involved: >> >> * correctly resolving ClassLoaders based on the activated bundles >> * supporting multiple containers and contexts (container-managed JPA, un-managed JPA/native, etc.) >> * fully supporting OSGi/Blueprint services (both for internal services as well as externally-registered) >> * bundle scanning >> * generally working towards supporting the dynamic nature >> * full unit-tests with Arquillian and an OSGi container >> >> It's a matter of holistically supporting the "OSGi way" (for better or worse), as opposed to simply ensuring the library's manifest is correct. >> >> There were a bloody ton of gotchas and caveats I hit along the way. That's more along the lines of where I might be able to help. >> >> I'm even more interested in this effort so that we can support hibernate-infinispan 2nd level caching within ORM. On the first attempt, I hit ClassLoader issues [1]. Some of that may already be resolved. >> >> The next step may simply be giving hibernate-infinispan another shot and correcting things as I find them. In parallel, feel free to let me know if there's anything else! ORM supports lots of OSGi-enabled extension points, etc. 
that are powerful for users, but obviously I don't have the Infinispan knowledge to know what would be necessary. >> >> Thanks! >> >> Brett Meyer >> Software Engineer >> Red Hat, Hibernate ORM >> >> ----- Original Message ----- >> From: "Randall Hauch" >> To: "infinispan -Dev List" >> Cc: "Pete Muir" , "Brett Meyer" >> Sent: Friday, December 6, 2013 10:57:23 AM >> Subject: Re: [infinispan-dev] help with Infinispan OSGi >> >> Brett, correct me if I'm wrong, but isn't there a difference in making some library *work* in an OSGi environment and making that library *naturally fit well* in an OSGi-enabled application? For example, making the JAR's be OSGi bundles is easy and technically makes it possible to deploy a JAR into an OSGi env, but that's not where the payoff is. IIUC what you really want is a BundleActivator or Declarative Services [1] so that the library's components are readily available in a naturally-OSGi way. >> >> [1] http://blog.knowhowlab.org/2010/10/osgi-tutorial-4-ways-to-activate-code.html >> >> On Dec 6, 2013, at 7:30 AM, Mircea Markus wrote: >> >>> + infinispan-dev >>> >>> Thanks for offering to look into this Brett! >>> We're already producing OSGi bundles for our modules, but these are not tested extensively so if you'd review them and test them a bit would be great! >>> Tristan can get you up to speed with this. >>> >>> >>>>> Sanne/Galder/Pete, >>>>> >>>>> Random question: what's the current state of making Infinispan OSGi friendly? I'm definitely interested in helping, if it's still a need. This past year, I went through the exercise of making Hibernate work well in OSGi, so all of the challenges (read: *many* of them) are still fairly fresh on my mind. Plus, I'd love for hibernate-infinispan to work in OSGi. >>>>> >>>>> If you're up for it, fill me in? I'm happy to pull everything down and start working with it. 
>>>>> >>>>> Brett Meyer >>>>> Software Engineer >>>>> Red Hat, Hibernate ORM >>>>> >>>> >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- Galder Zamarreño galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From pedro at infinispan.org Wed Jan 8 06:36:56 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 08 Jan 2014 11:36:56 +0000 Subject: [infinispan-dev] Non-tx X-Site: doubts and opinions Message-ID: <52CD3858.9030309@infinispan.org> Hi guys, I'm digging in detail into the X-Site code and I have a couple of doubts/comments. *Note*: I'm talking about non-tx caches :) First, I've noticed that we are forcing the /site_master/ node to have a reference to the cache in order to perform the modifications from other sites. IMO, it is possible that the user may not want to have that cache running on that node. WDYT? My idea is, if the /site_master/ has the reference to the cache, it can use it; otherwise it multicasts the command to all the nodes in the site. The nodes that do not have the cache (or that are not the primary owner of any key) will ignore the command :) Second and finally, I found a couple of optimizations that can be made. #1 I've noticed that we are sending the conditional commands to the backup sites without checking if the command is successful in the originator site. I think we can save network bandwidth and time by avoiding serializing the commands that will fail... #2 Also, the originator is the node that sends the command to the other sites. 
IMO, it can generate inconsistencies because 2 or more nodes in /site_a/ can update the same key concurrently and other /site_b/ can receive the commands in different order (at least, I didn't see any protection mechanism against this case) I think the backup to other sites should be sent by the /primary_owner/ (lock owner). Thanks in advance for the feedback. Cheers and Happy New Year :) Pedro From mmarkus at redhat.com Thu Jan 9 07:01:43 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 9 Jan 2014 12:01:43 +0000 Subject: [infinispan-dev] Non-tx X-Site: doubts and opinions In-Reply-To: <52CD3858.9030309@infinispan.org> References: <52CD3858.9030309@infinispan.org> Message-ID: <146660C2-9CE8-4EF7-AF7C-A25FB68154E4@redhat.com> On Jan 8, 2014, at 11:36 AM, Pedro Ruivo wrote: > Hi guys, > > I'm digging in detail in X-Site code and I have a couple of > doubts/comments. > > *Note*: I'm talking about non-tx caches :) > > First, I've noticed that we are forcing the /site_master/ node to have a > reference to the cache in order to perform the modifications from other > sites. IMO, it can be possible the user may not want to have that cache > running that node. WDYT? > > My idea is, if the /site_master/ has the reference to the cache, it can > use, otherwise it multicast the command to all the nodes in the site. > The nodes that does not have the cache (or if they are not primary owner > of any key) will ignore the command :) Alternatively it could configure a capacityFactor==0 for that node and have the same effect. Your approach is nice, but a bit more complex to implement IMO than the capacityFactor solution. > > Second and finally, I find a couple of optimization that can be done. #1 > I've noticed that we are sending the conditional commands to the backup > sites without checking if the command is successful in the originator > site. I think we can save network bandwidth and time to avoid > serializing the commands that will fail... 
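Optimization #1 above can be illustrated with a toy model in plain Java — names are illustrative and this is not Infinispan's actual backup-sending code — showing that a conditional replace which fails locally never crosses the WAN link to the backup site:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a site: local data plus a log of operations shipped to backups.
final class ToySite {
    final Map<String, String> data = new HashMap<>();
    final List<String> backupLog = new ArrayList<>();

    // Conditional replace that forwards to the backup site only on success,
    // saving the serialization and network hop for commands that failed
    // locally anyway.
    boolean replaceAndBackup(String key, String expected, String value, ToySite backup) {
        boolean success = data.replace(key, expected, value);
        if (success) {
            backup.backupLog.add(key + "=" + value);
            backup.data.put(key, value);
        }
        return success;
    }
}
```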
+1 > > #2 Also, the originator is the node that sends the command to the other > sites. IMO, it can generate inconsistencies because 2 or more nodes in > /site_a/ can update the same key concurrently and other /site_b/ can > receive the commands in different order (at least, I didn't see any > protection mechanism against this case) > I think the backup to other sites should be sent by the /primary_owner/ > (lock owner). Good point. > > Thanks in advance for the feedback. > > Cheers and Happy New Year :) Happy New Year! :-) > Pedro > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From dan.berindei at gmail.com Thu Jan 9 08:18:35 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 9 Jan 2014 15:18:35 +0200 Subject: [infinispan-dev] help with Infinispan OSGi In-Reply-To: <1797519982.45377895.1389122062165.JavaMail.root@redhat.com> References: <1953691887.24945507.1386275697958.JavaMail.root@redhat.com> <05D6A159-2E02-462A-AFB7-47DC3914CB02@redhat.com> <1378853890.25611986.1386348693503.JavaMail.root@redhat.com> <2084410860.25622457.1386349002612.JavaMail.root@redhat.com> <595428667.30164908.1386870900128.JavaMail.root@redhat.com> <52AF0781.7010400@redhat.com> <7313DB94-6C42-4502-8FEA-FCFA18329218@redhat.com> <1797519982.45377895.1389122062165.JavaMail.root@redhat.com> Message-ID: On Tue, Jan 7, 2014 at 9:14 PM, Brett Meyer wrote: > Apologies for the delay -- things have been nuts. > > Here's the route I've taken so far. I created OsgiClassLoader that > searches available Bundles for classes and resources. A new (non-static) > AggregateClassLoader replaces FileLookup, CL-related methods in Util, and > parts of ReflectionUtil. AggregateClassLoader extends/overrides > ClassLoader and searches over a prioritized list of user/app CL, OsgiCL, > System CL, TCCL, etc. 
> > This is easily wired into GlobalConfiguration concepts. However, I'm not > exactly sure how this will play into Configuration and Cache. > Configuration#classLoader is deprecated. Is that in favor of > CacheImpl#getClassLoader & CacheImpl#with(ClassLoader)? > Actually, I think the idea was to only set the classloader in the GlobalConfiguration and use a separate CacheManager for each deployment, removing the need for AdvancedCache.with(ClassLoader) as well. But I'm not sure if we'll ever get to that... AdvancedCache.with(ClassLoader) is also very limited in scope. The classloader can't be sent remotely so AFAICT it's only really useful for unmarshalling return values for get operations with storeAsBinary enabled. > Can someone describe the scoping of CL concepts between > GlobalConfiguration and the Configuration/Cache? Should both have their > own instance of AggregateClassLoader? Cache's instance would put the > user/app provided CL on the top of the queue? > Sounds about right. For marshalling, ATM we use an EmbeddedContextClassResolver, which uses the InvocationContextContainer to obtain the classloader set by the user with AdvancedCache.with(ClassLoader) (unlike JBoss Marshalling's ContextClassResolver, which uses the TCCL). You will probably want to implement your own ClassResolver based on AggregateClassLoader, but using InvocationContextContainer as well. I'm not sure about other places where we need to load classes. While working on https://issues.jboss.org/browse/ISPN-3836 I did find some instances of ServiceLoader.load(Class), which use the TCCL implicitly, but I'm trying to change that to use the global/cache configuration's classloader. > Hopefully that all makes sense... 
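The prioritized-delegate classloader being discussed in this thread can be sketched as a standalone class — an illustration, not the actual Infinispan/Hibernate implementation: try the user/app classloader (and, in OSGi, a bundle-backed loader) first, then the system classloader, and fall back to the TCCL only as a last resort.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative aggregate classloader: searches a prioritized list of delegates.
class AggregateClassLoader extends ClassLoader {
    private final List<ClassLoader> delegates = new ArrayList<>();

    AggregateClassLoader(ClassLoader... preferred) {
        super(null); // no single parent; the delegate list drives the search
        for (ClassLoader cl : preferred) {
            if (cl != null) {
                delegates.add(cl); // user/app CL, OSGi bundle CL, etc.
            }
        }
        delegates.add(ClassLoader.getSystemClassLoader());
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        for (ClassLoader cl : delegates) {
            try {
                return cl.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // fall through to the next delegate
            }
        }
        // Last resort: the thread-context classloader.
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        if (tccl != null) {
            return tccl.loadClass(name);
        }
        throw new ClassNotFoundException(name);
    }
}
```

A cache-scoped instance would simply be constructed with the deployment's classloader at the head of the list, mirroring the "user/app CL on top of the queue" idea above.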
> > Brett Meyer > Red Hat, Hibernate ORM > > ----- Original Message ----- > From: "Galder Zamarre?o" > To: "Eric Wittmann" > Cc: "infinispan -Dev List" , "Brett > Meyer" , "Sanne Grinovero" , > "Pete Muir" , "Randall Hauch" , > "Steve Jacobs" > Sent: Wednesday, December 18, 2013 8:22:13 AM > Subject: Re: [infinispan-dev] help with Infinispan OSGi > > > On Dec 16, 2013, at 3:00 PM, Eric Wittmann > wrote: > > > I wanted to add that in the Overlord group we're also looking into using > ISPN in OSGi. Our directive is to get our projects running in Fuse 6.1. > > > > To that end I've been working on getting Overlord:S-RAMP up and running, > which requires both ModeShape and ISPN. > > > > Additionally, Gary Brown uses ISPN in Overlord:RTGov and so will need to > get it working directly (no ModeShape) in Fuse 6.1. > > > > I've made some progress on my end but have run into some of the same > issues as Brett. > > > > An additional issue I hit was the use of Java's ServiceLoader for > org.infinispan.configuration.parsing.ConfigurationParser. None of the > parsers get loaded because ServiceLoader doesn't work particularly well in > OSGi. We had this same issue in S-RAMP (we use ServiceLoader in a few > places). I solved it by using the OSGi Service Registry when running in an > OSGi container, but continuing to use ServiceLoader otherwise. > > ^ Can you add a JIRA for this so that we can abstract this away? I'm not > sure how exactly we'd decide on the impl to use. By default it'd be SL > impl. When used on OSGI though, an alternative service loading impl would > need to be configured specifically by the user? Or would Infinispan itself > detect that it's in OSGi and hence used the corresponding impl? I've no > idea about OSGI. > > > In any case - I was wondering if anyone thought it might be a good idea > to create a git repo where we can create some test OSGi applications that > use ISPN and can be deployed (e.g. to Fuse). 
This would be for testing > purposes only - to shake out problems. Might be useful for collaboration? > > A quickstart on [1] would be the perfect place for something like that, > i.e. fuse + infinispan or something like that. > > [1] http://www.jboss.org/jdf/quickstarts/get-started/ > > > > > -Eric > > > > > > On 12/12/2013 12:55 PM, Brett Meyer wrote: > >> I finally had a chance to start working with this, a bit, today. > Here's what I've found so far. > >> > >> In general, I'm seeing 2 types of CL issues come up when testing w/ > hibernate-infinispan: > >> > >> 1.) Reliance on the client bundle's CL. Take the following stack as an > example: https://gist.github.com/brmeyer/c8aaa1157a4a951a462c. > Hibernate's InfinispanRegionFactory is building a > ConfigurationBuilderHolder. Parser60#parseTransport eventually gives the > ConfigurationBuilderHolder#getClassLoader to Util#loadClass. But since > this thread is happening within the hibernate-infinispan bundle, that CL > instance is hibernate-infinispan's BundleWiring. If hibernate-infinispan's > manifest explicitly imports the package being loaded, this works fine. > But, as I hit, that's not usually the case. This stack fails when it > attempted to load > org.infinispan.remoting.transport.jgroups.JGroupsTransport. Adding > org.infinispan.remoting.transport.jgroups to our imports worked, but that's > not ideal. > >> > >> 2.) Reliance on TCCL. See GlobalConfigurationBuilder#cl as an example. > TCCL should be avoided at all costs. Here's an example: > https://gist.github.com/brmeyer/141ea83fb632dd126406. Yes, > ConfigurationBuilderHolder could attempt to pass in a CL to > GlobalConfigurationBuilder, but we'd run into the same situation for #1. > In this specific example, we're trying to load the > "infinispan-core-component-metadata.dat" resource within the > infinispan-core bundle, not visible to the hibernate-infinispan bundle CL. > >> > >> commons already has a step towards a solution: OsgiFileLookup. 
> However, it scans over *all* bundles activated in the container. There's > certainly performance issues with that, but more importantly can introduce > conflicts (multiple versions of Infinispan or client bundles running > simultaneously, a resource existing in multiple bundles, etc.). > >> > >> What we did in Hibernate was to introduce an OSGi-specific > implementation of ClassLoader that's aware of what bundles it needs to > consider. In frameworks with multiple bundles/modules, this is definitely > more complicated. For now, we limit the scope to core, entitymanager > (JPA), and the "requesting bundle" (the client bundle requesting the > Session). The "requesting bundle" concept was important for us since we > scan and rely on the client bundle's entities, mapping files, etc. > >> > >> There are several routes, but all boil down to relying on OSGi APIs to > use Bundles to discover classes and resources, with TCCL & > Class#getClassLoader as a just-in-case backup. How the scope of that > Bundle set is defined is largely up to the framework's existing > architecture and dependency tree. > >> > >> What I might recommend as a first step would be expanding/refactoring > OsgiFileLookup to include loading classes, but continue to allow it to scan > all bundles (for now). That will at least remove the initial CL issues. > But, that would need to be followed up. > >> > >> Before I keep going down the rabbit hole, just wanted to see if there > were any other thoughts. I'm making general assumptions without knowing > much about Infinispan's architecture. Thanks! 
> >> > >> Brett Meyer > >> Red Hat, Hibernate ORM > >> > >> ----- Original Message ----- > >> From: "Brett Meyer" > >> To: "Randall Hauch" , "infinispan -Dev List" < > infinispan-dev at lists.jboss.org> > >> Cc: "Pete Muir" , "Steve Jacobs" > >> Sent: Friday, December 6, 2013 11:56:42 AM > >> Subject: Re: [infinispan-dev] help with Infinispan OSGi > >> > >> Sorry, forgot the link: > >> > >> [1] https://hibernate.atlassian.net/browse/HHH-8214 > >> > >> Brett Meyer > >> Software Engineer > >> Red Hat, Hibernate ORM > >> > >> ----- Original Message ----- > >> From: "Brett Meyer" > >> To: "Randall Hauch" , "infinispan -Dev List" < > infinispan-dev at lists.jboss.org> > >> Cc: "Pete Muir" , "Steve Jacobs" > >> Sent: Friday, December 6, 2013 11:51:33 AM > >> Subject: Re: [infinispan-dev] help with Infinispan OSGi > >> > >> Randall, that is *definitely* the case and is certainly true for > Hibernate. The work involved: > >> > >> * correctly resolving ClassLoaders based on the activated bundles > >> * supporting multiple containers and contexts (container-managed JPA, > un-managed JPA/native, etc.) > >> * fully supporting OSGi/Blueprint services (both for internal services > as well as externally-registered) > >> * bundle scanning > >> * generally working towards supporting the dynamic nature > >> * full unit-tests with Arquillian and an OSGi container > >> > >> It's a matter of holistically supporting the "OSGi way" (for better or > worse), as opposed to simply ensuring the library's manifest is correct. > >> > >> There were a bloody ton of gotchas and caveats I hit along the way. > That's more along the lines of where I might be able to help. > >> > >> I'm even more interested in this effort so that we can support > hibernate-infinispan 2nd level caching within ORM. On the first attempt, I > hit ClassLoader issues [1]. Some of that may already be resolved. 
> >> > >> The next step may simply be giving hibernate-infinispan another shot > and correcting things as I find them. In parallel, feel free to let me > know if there's anything else! ORM supports lots of OSGi-enabled extension > points, etc. that are powerful for users, but obviously I don't have the > Infinispan knowledge to know what would be necessary. > >> > >> Thanks! > >> > >> Brett Meyer > >> Software Engineer > >> Red Hat, Hibernate ORM > >> > >> ----- Original Message ----- > >> From: "Randall Hauch" > >> To: "infinispan -Dev List" > >> Cc: "Pete Muir" , "Brett Meyer" > >> Sent: Friday, December 6, 2013 10:57:23 AM > >> Subject: Re: [infinispan-dev] help with Infinispan OSGi > >> > >> Brett, correct me if I?m wrong, but isn?t there a difference in making > some library *work* in an OSGi environment and making that library > *naturally fit well* in an OSGi-enabled application? For example, making > the JAR?s be OSGi bundles is easy and technically makes it possible to > deploy a JAR into an OSGi env, but that?s not where the payoff is. IIUC > what you really want is a BundleActivator or Declarative Services [1] so > that the library?s components are readily available in a naturally-OSGi way. > >> > >> [1] > http://blog.knowhowlab.org/2010/10/osgi-tutorial-4-ways-to-activate-code.html > >> > >> On Dec 6, 2013, at 7:30 AM, Mircea Markus wrote: > >> > >>> + infinispan-dev > >>> > >>> Thanks for offering to look into this Brett! > >>> We're already producing OSGi bundles for our modules, but these are > not tested extensively so if you'd review them and test them a bit would be > great! > >>> Tristan can get you up to speed with this. > >>> > >>> > >>>>> Sanne/Galder/Pete, > >>>>> > >>>>> Random question: what's the current state of making Infinispan OSGi > friendly? I'm definitely interested in helping, if it's still a need. 
> This past year, I went through the exercise of making Hibernate work well > in OSGi, so all of the challenges (read: *many* of them) are still fairly fresh > on my mind. Plus, I'd love for hibernate-infinispan to work in OSGi. > >>>>> > >>>>> If you're up for it, fill me in? I'm happy to pull everything down > and start working with it. > >>>>> > >>>>> Brett Meyer > >>>>> Software Engineer > >>>>> Red Hat, Hibernate ORM > >>>>> > >>>> > >>> > >>> Cheers, > >>> -- > >>> Mircea Markus > >>> Infinispan lead (www.infinispan.org) > >>> > >>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140109/d95b6006/attachment-0001.html From mmarkus at redhat.com Thu Jan 9 10:27:12 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 9 Jan 2014 15:27:12 +0000 Subject: [infinispan-dev] Design for clustered events In-Reply-To: <52AB1747.4050409@redhat.com> References: <52AB1747.4050409@redhat.com> Message-ID: Updated: https://github.com/infinispan/infinispan/wiki/Clustered-listeners#handling-topology-changes Thank you. 
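The topology-change handling referenced in the wiki update above concerns how clustered listener registrations survive membership changes. As a purely illustrative toy model (none of these classes are Infinispan APIs and all names are invented), the expected behaviour, namely a joining node receiving the listeners registered before it joined, could be sketched as:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model ONLY -- not Infinispan code. It sketches the behaviour the
// thread asks for: when a node joins, existing registrations are pushed
// onto the joiner so events keep flowing after a topology change.
public class ListenerPropagation {

    static class Node {
        final String name;
        final Set<String> listenerIds = new HashSet<>();
        Node(String name) { this.name = name; }
    }

    static class Cluster {
        final List<Node> members = new ArrayList<>();

        // topology change: replicate every known registration to the joiner
        void join(Node newcomer) {
            for (Node n : members) {
                newcomer.listenerIds.addAll(n.listenerIds);
            }
            members.add(newcomer);
        }

        // a clustered listener registration is propagated to all members
        void register(String listenerId) {
            for (Node n : members) {
                n.listenerIds.add(listenerId);
            }
        }
    }

    public static void main(String[] args) {
        Cluster cluster = new Cluster();
        cluster.join(new Node("A"));
        cluster.register("clustered-listener-1");
        Node b = new Node("B");
        cluster.join(b);                   // B joins after the registration
        System.out.println(b.listenerIds); // B still knows the listener
    }
}
```

The model deliberately ignores the harder questions Radim raises (node crashes, reliability guarantees during the transfer); it only shows the registration hand-off itself.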
On Dec 13, 2013, at 2:18 PM, Radim Vansa wrote: > Hi Mircea, > > as we were discussing the design of remote Hot Rod events with Galder, the document regarding clustered events does not cover how should the clustered listener information be propagated in case of topology change. Could you add this info (or at least TODO so that we can see that there is more work required on the document). Also, situations related to such changes (such as reliability guarantees in case of node crash/join) should be specified. > > Thanks > > Radim > > -- > Radim Vansa > JBoss DataGrid QA > Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From anistor at redhat.com Mon Jan 13 03:41:53 2014 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 13 Jan 2014 10:41:53 +0200 Subject: [infinispan-dev] Remote queries over Hot Rod quick start guide Message-ID: <52D3A6D1.8000900@redhat.com> Just in case you missed the tweet in December, I've posted this on the Infinispan blog too: http://blog.infinispan.org/2014/01/a-new-quick-start-guide-for-remote.html From meenakrajani at gmail.com Mon Jan 13 16:29:38 2014 From: meenakrajani at gmail.com (Meena Rajani) Date: Tue, 14 Jan 2014 02:29:38 +0500 Subject: [infinispan-dev] Time stamps in infinispan cluster Message-ID: Hi How does the distributed clock work in an Infinispan/JBoss cluster? Can someone please guide me. I have read a little bit about total order messaging and vector clocks. I have extended the Infinispan API for freshness-aware caching. I have assumed the time is synchronized all the time and timestamps are comparable. But I want to know how timestamps work in Infinispan in a distributed environment, especially when the communication among the cluster nodes is in synchronous mode. Regards Meena -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140114/cae3322e/attachment.html From sanne at infinispan.org Tue Jan 14 06:59:01 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 14 Jan 2014 12:59:01 +0100 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> Message-ID: Up this: it was proposed again today at a face-to-face meeting. Apparently multiple parties have been asking to be able to run cross-cache queries. Sanne On 11 April 2012 12:47, Emmanuel Bernard wrote: > > On 10 avr. 2012, at 19:10, Sanne Grinovero wrote: > >> Hello all, >> currently Infinispan Query is an interceptor registering on the >> specific Cache instance which has indexing enabled; one such >> interceptor is doing all what it needs to do in the sole scope of the >> cache it was registered in. >> >> If you enable indexing - for example - on 3 different caches, there >> will be 3 different Hibernate Search engines started in background, >> and they are all unaware of each other. >> >> After some design discussions with Ales for CapeDwarf, but also >> calling attention on something that bothered me since some time, I'd >> evaluate the option to have a single Hibernate Search Engine >> registered in the CacheManager, and have it shared across indexed >> caches. >> >> Current design limitations: >> >> A- If they are all configured to use the same base directory to >> store indexes, and happen to have same-named indexes, they'll share >> the index without being aware of each other. This is going to break >> unless the user configures some tricky parameters, and even so >> performance won't be great: instances will lock each other out, or at >> best write in alternate turns. >> B- The search engine isn't particularly "heavy", still it would be >> nice to share some components and internal services. 
>> C- Configuration details which need some care - like injecting a >> JGroups channel for clustering - needs to be done right isolating each >> instance (so large parts of configuration would be quite similar but >> not totally equal) >> D- Incoming messages into a JGroups Receiver need to be routed not >> only among indexes, but also among Engine instances. This prevents >> Query to reuse code from Hibernate Search. >> >> Problems with a unified Hibernate Search Engine: >> >> 1#- Isolation of types / indexes. If the same indexed class is >> stored in different (indexed) caches, they'll share the same index. Is >> it a problem? I'm tempted to consider this a good thing, but wonder if >> it would surprise some users. Would you expect that? > > I would not expect that. Unicity in Hibernate Search is not defined per identity but per class + provided id. > I can see people reusing the same class as partial DTO and willing to index that. I can even see people > using the Hibernate Search programmatic API to index the "DTO" stored in cache 2 differently than the > domain class stored in cache 1. > I can concede that I am pushing a bit the use case towards bad-ish design approaches. > >> 2#- configuration format overhaul: indexing options won't be set on >> the cache section but in the global section. I'm looking forward to >> use the schema extensions anyway to provide a better configuration >> experience than the current . >> 3#- Assuming 1# is fine, when a search hit is found I'd need to be >> able to figure out from which cache the value should be loaded. >> 3#A we could have the cache name encoded in the index, as part >> of the identifier: {PK,cacheName} >> 3#B we actually shard the index, keeping a physically separate >> index per cache. This would mean searching on the joint index view but >> extracting hits from specific indexes to keep track of "which index".. >> I think we can do that but it's definitely tricky. 
>> >> It's likely easier to keep indexed values from different caches in >> different indexes. that would mean to reject #1 and mess with the user >> defined index name, to add for example the cache name to the user >> defined string. >> >> Any comment? >> >> Cheers, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ben.cotton at ALUMNI.RUTGERS.EDU Wed Jan 15 05:54:24 2014 From: ben.cotton at ALUMNI.RUTGERS.EDU (cotton-ben) Date: Wed, 15 Jan 2014 02:54:24 -0800 (PST) Subject: [infinispan-dev] Infinispan embedded off-heap cache In-Reply-To: References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com> Message-ID: <1389783264288-4028642.post@n3.nabble.com> Hi Yavuz, Tristan, et. al. I am extremely interested to learn if anything materialized from the https://issues.jboss.org/browse/ISPN-871 effort. If nothing materialized, I would like to take a stab at doing this, specifically by doing the following: 0. Use Peter Lawrey's openHFT HugeCollections project (https://github.com/OpenHFT/HugeCollections) as the off-Heap Cache implementation provider. 1. Start with Peter's net.openhft.collections.HugeHashMap implementation Class as a highly optimized off-Heap basis and a potential Cache candidate 2. Confirm from ISPN-dev team that the ambition to use org.infinispan.container.DataContainer interface as a bridge to provide potential non-ISPN built CacheImpl candidates is sound/complete (and intended) 3. Modify HugeHashMap so that it explicitly implements org.infinispan.container.DataContainer interface. 4. 
Confirm that my modified net.openhft.collections.HugeHashMap can interoperate with the full ISPN 5.3/6.x APIs, exactly as if it were a default ISPN-provided org.infinispan.CacheImpl Could anyone from the ISPN-dev team comment on whether this ambition has merit and a likelihood of "working" as outlined above (effectively resuming the work started at https://issues.jboss.org/browse/ISPN-871)? Is there any in-place ISPN documentation that advocates the use of DataContainer for taking on this type of effort? Thanks, Ben -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028642.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From ttarrant at redhat.com Wed Jan 15 06:44:00 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 15 Jan 2014 12:44:00 +0100 Subject: [infinispan-dev] Infinispan embedded off-heap cache In-Reply-To: <1389783264288-4028642.post@n3.nabble.com> References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com> <1389783264288-4028642.post@n3.nabble.com> Message-ID: <52D67480.9020908@redhat.com> Hi Ben, HugeCollections does indeed look interesting, and we'd gladly accept a DataContainer implementation as you propose :) Before you start working on it, however, I'd better expand on the reasons for implementing an off-heap DataContainer: we would ultimately like to have cache entries directly accessible as (Direct)ByteBuffers which means we could then use NIO2 to directly transfer that data over the network (JGroups, HotRod, etc) without additional copy operations. HC uses Unsafe.allocateMemory() which does not have this facility, so we would still have to copy data from there to a heap-based ByteBuffer before being able to pass that on to any NIO2 methods. This is all in the initial planning stages, so any comments are welcome Tristan On 01/15/2014 11:54 AM, cotton-ben wrote: > Hi Yavuz, Tristan, et. al. 
> > I am extremely interested to learn if anything materialized from the > https://issues.jboss.org/browse/ISPN-871 effort. > > If nothing materialized, I would like to take a stab at doing this, > specifically by doing the following: > > > 0. Use Peter Lawrey's openHFT HugeCollections project > (https://github.com/OpenHFT/HugeCollections) as the off-Heap Cache > implementation provider. > > 1. Start with Peter's net.openhft.collections.HugeHashMap > implementation Class as a highly optimized off-Heap basis and a potential > Cache candidate > > 2. Confirm from ISPN-dev team that the ambition to use > org.infinispan.container.DataContainer interface as a bridge to provide > potential non-ISPN built CacheImpl candidates is sound/complete (and > intended) > > 3. Modify HugheHashMap so that it explicitly implements > org.infinispan.container.DataContainer interface. > > 4. Confirm that modified my net.openhft.collections..HugeHashMap can > interoperate with the full ISPN 5.3/6.x APIs, exactly as if it were a > default ISPN-provided org.infinispan.CacheImpl > > Could any one from the ISPN-dev team comment if this ambition has merit and > a liklihood of "working" as outlined above (effectively resuming the work > started at https://issues.jboss.org/browse/ISPN-871)? Is there any in-place > ISPN documentation that advocates the use of DataContainer for taking on > this type of effort? > > Thanks, > Ben > > > > -- > View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028642.html > Sent from the Infinispan Developer List mailing list archive at Nabble.com. 
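Tristan's point above can be made concrete with a toy sketch. This is illustrative only and implements none of the real org.infinispan.container.DataContainer contract; it just shows the property he describes: a value already held in a direct ByteBuffer can be handed to an NIO channel as-is, while memory obtained via Unsafe.allocateMemory() has no ByteBuffer view and would first need copying into one.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Illustration ONLY (not a DataContainer implementation): values live
// off the Java heap in direct ByteBuffers, which NIO channels can
// consume without an extra copy.
public class DirectBufferStore {

    private final Map<String, ByteBuffer> entries = new HashMap<>();

    public void put(String key, byte[] value) {
        ByteBuffer buf = ByteBuffer.allocateDirect(value.length);
        buf.put(value);
        buf.flip(); // ready for reading / channel writes
        entries.put(key, buf);
    }

    public byte[] get(String key) {
        ByteBuffer buf = entries.get(key);
        if (buf == null) {
            return null;
        }
        ByteBuffer view = buf.duplicate(); // keep the stored buffer's position untouched
        byte[] out = new byte[view.remaining()];
        view.get(out);
        return out;
    }

    public static void main(String[] args) {
        DirectBufferStore store = new DirectBufferStore();
        store.put("k1", "hello".getBytes(StandardCharsets.UTF_8));
        // prints hello
        System.out.println(new String(store.get("k1"), StandardCharsets.UTF_8));
    }
}
```

A real container would of course need eviction, expiry metadata, concurrency, and explicit deallocation of the off-heap regions; none of that is modeled here.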
> _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From jaromir.hamala at gmail.com Wed Jan 15 06:59:18 2014 From: jaromir.hamala at gmail.com (Jaromir Hamala) Date: Wed, 15 Jan 2014 11:59:18 +0000 Subject: [infinispan-dev] Infinispan embedded off-heap cache In-Reply-To: <52D67480.9020908@redhat.com> References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com> <1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com> Message-ID: Hi, another option is to use the off-heap allocator from the Netty project. It's just an allocator, so it would require extra work when compared with HugeCollections. I tried this approach with Hazelcast: https://github.com/jerrinot/hugecast Cheers, Jaromir On Wed, Jan 15, 2014 at 11:44 AM, Tristan Tarrant wrote: > Hi Ben, > > HugeCollections does indeed look interesting, and we'd gladly accept a > DataContainer implementation as you propose :) > > Before you start working on it, however, I'd better expand on the > reasons for implementing an off-heap DataContainer: we would ultimately > like to have cache entries directly accessible as (Direct)ByteBuffers > which means we could then use NIO2 to directly transfer that data over > the network (JGroups, HotRod, etc) without additional copy operations. > HC uses Unsafe.allocateMemory() which does not have this facility, so we > would still have to copy data from there to a heap-based ByteBuffer > before being able to pass that on to any NIO2 methods. > > This is all in the initial planning stages, so any comments are welcome > > Tristan > > On 01/15/2014 11:54 AM, cotton-ben wrote: > > Hi Yavuz, Tristan, et. al. > > > > I am extremely interested to learn if anything materialized from the > > https://issues.jboss.org/browse/ISPN-871 effort. 
> > > > If nothing materialized, I would like to take a stab at doing this, > > specifically by doing the following: > > > > > > 0. Use Peter Lawrey's openHFT HugeCollections project > > (https://github.com/OpenHFT/HugeCollections) as the off-Heap Cache > > implementation provider. > > > > 1. Start with Peter's net.openhft.collections.HugeHashMap > > implementation Class as a highly optimized off-Heap basis and a potential > > Cache candidate > > > > 2. Confirm from ISPN-dev team that the ambition to use > > org.infinispan.container.DataContainer interface as a bridge to provide > > potential non-ISPN built CacheImpl candidates is sound/complete > (and > > intended) > > > > 3. Modify HugheHashMap so that it explicitly implements > > org.infinispan.container.DataContainer interface. > > > > 4. Confirm that modified my net.openhft.collections..HugeHashMap > can > > interoperate with the full ISPN 5.3/6.x APIs, exactly as if it were a > > default ISPN-provided org.infinispan.CacheImpl > > > > Could any one from the ISPN-dev team comment if this ambition has merit > and > > a liklihood of "working" as outlined above (effectively resuming the work > > started at https://issues.jboss.org/browse/ISPN-871)? Is there any > in-place > > ISPN documentation that advocates the use of DataContainer for taking on > > this type of effort? > > > > Thanks, > > Ben > > > > > > > > -- > > View this message in context: > http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028642.html > > Sent from the Infinispan Developer List mailing list archive at > Nabble.com. 
> > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Antoine de Saint Exupéry -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140115/fcba3ea9/attachment-0001.html From ttarrant at redhat.com Wed Jan 15 07:02:05 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 15 Jan 2014 13:02:05 +0100 Subject: [infinispan-dev] Infinispan embedded off-heap cache In-Reply-To: References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com> <1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com> Message-ID: <52D678BD.5010400@redhat.com> The Netty off-heap allocator (or rather a Java port of jemalloc) is exactly the sort of thing I had in mind. Tristan On 01/15/2014 12:59 PM, Jaromir Hamala wrote: > Hi, > > another option is to use the off-heap allocator from Netty project. > It's just an allocator, so it would require extra work when compared > with HugeCollections. 
> I tried this approach approach with Hazelcast: > https://github.com/jerrinot/hugecast > > Cheers, > Jaromir > > > On Wed, Jan 15, 2014 at 11:44 AM, Tristan Tarrant > wrote: > > Hi Ben, > > HugeCollections does indeed look interesting, and we'd gladly accept a > DataContainer implementation as you propose :) > > Before you start working on it, however, I'd better expand on the > reasons for implementing an off-heap DataContainer: we would > ultimately > like to have cache entries directly accessible as (Direct)ByteBuffers > which means we could then use NIO2 to directly transfer that data over > the network (JGroups, HotRod, etc) without additional copy operations. > HC uses Unsafe.allocateMemory() which does not have this facility, > so we > would still have to copy data from there to a heap-based ByteBuffer > before being able to pass that on to any NIO2 methods. > > This is all in the initial planning stages, so any comments are > welcome > > Tristan > > On 01/15/2014 11:54 AM, cotton-ben wrote: > > Hi Yavuz, Tristan, et. al. > > > > I am extremely interested to learn if anything materialized from the > > https://issues.jboss.org/browse/ISPN-871 effort. > > > > If nothing materialized, I would like to take a stab at doing this, > > specifically by doing the following: > > > > > > 0. Use Peter Lawrey's openHFT HugeCollections project > > (https://github.com/OpenHFT/HugeCollections) as the off-Heap Cache > > implementation provider. > > > > 1. Start with Peter's net.openhft.collections.HugeHashMap > > implementation Class as a highly optimized off-Heap basis and a > potential > > Cache candidate > > > > 2. Confirm from ISPN-dev team that the ambition to use > > org.infinispan.container.DataContainer interface as a bridge to > provide > > potential non-ISPN built CacheImpl candidates is > sound/complete (and > > intended) > > > > 3. Modify HugheHashMap so that it explicitly implements > > org.infinispan.container.DataContainer interface. > > > > 4. 
Confirm that modified my > net.openhft.collections..HugeHashMap can > > interoperate with the full ISPN 5.3/6.x APIs, exactly as if it > were a > > default ISPN-provided org.infinispan.CacheImpl > > > > Could any one from the ISPN-dev team comment if this ambition > has merit and > > a liklihood of "working" as outlined above (effectively resuming > the work > > started at https://issues.jboss.org/browse/ISPN-871)? Is there > any in-place > > ISPN documentation that advocates the use of DataContainer for > taking on > > this type of effort? > > > > Thanks, > > Ben > > > > > > > > -- > > View this message in context: > http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028642.html > > Sent from the Infinispan Developer List mailing list archive at > Nabble.com. > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > -- > ?Perfection is achieved, not when there is nothing more to add, but > when there is nothing left to take away.? 
> Antoine de Saint Exupéry > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Wed Jan 15 07:07:58 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 15 Jan 2014 13:07:58 +0100 Subject: [infinispan-dev] Infinispan embedded off-heap cache In-Reply-To: <52D678BD.5010400@redhat.com> References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com> <1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com> <52D678BD.5010400@redhat.com> Message-ID: <52D67A1E.2010509@redhat.com> Reading that, it isn't clear what I meant: A Java port of jemalloc (which the Netty off-heap allocator is) is exactly what I had in mind. Tristan On 01/15/2014 01:02 PM, Tristan Tarrant wrote: > The Netty off-heap allocator (or rather a Java port of jemalloc) is > exactly the sort of thing I had in mind. > > Tristan > > On 01/15/2014 12:59 PM, Jaromir Hamala wrote: >> Hi, >> >> another option is to use the off-heap allocator from Netty project. >> From emmanuel at hibernate.org Wed Jan 15 08:42:02 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 15 Jan 2014 14:42:02 +0100 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> Message-ID: <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. Do you have written detailed use cases somewhere for me to better understand what is really requested? Emmanuel On 14 Jan 2014, at 12:59, Sanne Grinovero wrote: > Up this: it was proposed again today ad a face to face meeting. > Apparently multiple parties have been asking to be able to run > cross-cache queries. 
> > Sanne > > On 11 April 2012 12:47, Emmanuel Bernard wrote: >> >> On 10 avr. 2012, at 19:10, Sanne Grinovero wrote: >> >>> Hello all, >>> currently Infinispan Query is an interceptor registering on the >>> specific Cache instance which has indexing enabled; one such >>> interceptor is doing all what it needs to do in the sole scope of the >>> cache it was registered in. >>> >>> If you enable indexing - for example - on 3 different caches, there >>> will be 3 different Hibernate Search engines started in background, >>> and they are all unaware of each other. >>> >>> After some design discussions with Ales for CapeDwarf, but also >>> calling attention on something that bothered me since some time, I'd >>> evaluate the option to have a single Hibernate Search Engine >>> registered in the CacheManager, and have it shared across indexed >>> caches. >>> >>> Current design limitations: >>> >>> A- If they are all configured to use the same base directory to >>> store indexes, and happen to have same-named indexes, they'll share >>> the index without being aware of each other. This is going to break >>> unless the user configures some tricky parameters, and even so >>> performance won't be great: instances will lock each other out, or at >>> best write in alternate turns. >>> B- The search engine isn't particularly "heavy", still it would be >>> nice to share some components and internal services. >>> C- Configuration details which need some care - like injecting a >>> JGroups channel for clustering - needs to be done right isolating each >>> instance (so large parts of configuration would be quite similar but >>> not totally equal) >>> D- Incoming messages into a JGroups Receiver need to be routed not >>> only among indexes, but also among Engine instances. This prevents >>> Query to reuse code from Hibernate Search. >>> >>> Problems with a unified Hibernate Search Engine: >>> >>> 1#- Isolation of types / indexes. 
If the same indexed class is >>> stored in different (indexed) caches, they'll share the same index. Is >>> it a problem? I'm tempted to consider this a good thing, but wonder if >>> it would surprise some users. Would you expect that? >> >> I would not expect that. Unicity in Hibernate Search is not defined per identity but per class + provided id. >> I can see people reusing the same class as partial DTO and willing to index that. I can even see people >> using the Hibernate Search programmatic API to index the "DTO" stored in cache 2 differently than the >> domain class stored in cache 1. >> I can concede that I am pushing a bit the use case towards bad-ish design approaches. >> >>> 2#- configuration format overhaul: indexing options won't be set on >>> the cache section but in the global section. I'm looking forward to >>> use the schema extensions anyway to provide a better configuration >>> experience than the current . >>> 3#- Assuming 1# is fine, when a search hit is found I'd need to be >>> able to figure out from which cache the value should be loaded. >>> 3#A we could have the cache name encoded in the index, as part >>> of the identifier: {PK,cacheName} >>> 3#B we actually shard the index, keeping a physically separate >>> index per cache. This would mean searching on the joint index view but >>> extracting hits from specific indexes to keep track of "which index".. >>> I think we can do that but it's definitely tricky. >>> >>> It's likely easier to keep indexed values from different caches in >>> different indexes. that would mean to reject #1 and mess with the user >>> defined index name, to add for example the cache name to the user >>> defined string. >>> >>> Any comment? 
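Option 3#A quoted above, encoding the cache name into the index identifier as {PK,cacheName}, could look roughly like the following sketch. The separator character and all class/method names are invented for illustration; a real scheme would also have to escape or forbid the separator inside cache names.

```java
// Hypothetical sketch of option 3#A: embed the cache name in the index
// identifier so a search hit can be routed back to the cache that owns
// the value. NOT Infinispan/Hibernate Search code.
public class CacheScopedId {

    private static final char SEP = '|'; // assumed illegal in cache names

    static String encode(String cacheName, String pk) {
        return cacheName + SEP + pk;
    }

    // returns {cacheName, pk}
    static String[] decode(String id) {
        int i = id.indexOf(SEP);
        return new String[] { id.substring(0, i), id.substring(i + 1) };
    }

    public static void main(String[] args) {
        String id = encode("users-cache", "42");
        String[] parts = decode(id);
        // prints users-cache -> 42
        System.out.println(parts[0] + " -> " + parts[1]);
    }
}
```

Option 3#B (one physical shard per cache) avoids touching the identifier at all, at the cost of the trickier joint-view searching the thread describes.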
>>> >>> Cheers, >>> Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From wfink at redhat.com Wed Jan 15 09:58:24 2014 From: wfink at redhat.com (Wolf-Dieter Fink) Date: Wed, 15 Jan 2014 15:58:24 +0100 Subject: [infinispan-dev] infinispan build process - fresh mvn-repo Message-ID: <52D6A210.9080907@redhat.com> Hi, I built git at github.com:infinispan/infinispan.git from scratch, following the documentation/README. I used the maven-settings.xml: mvn -s maven-settings.xml -Dmaven.test.skip=true clean install With that setting the build failed, see error "1.Build". A build with tests skipped will not work due to dependency issues: mvn -s maven-settings.xml -Dmaven.test.skip=true clean install see "2.Build". I found that "-Dmaven.test.skip.exec=true" builds correctly. After that the tests hung forever (or longer than my patience ;) Test suite progress: tests succeeded: 506, failed: 0, skipped: 7. [testng-BulkGetSimpleTest] Test testBulkGetWithSize(org.infinispan.client.hotrod.BulkGetSimpleTest) succeeded. Test suite progress: tests succeeded: 507, failed: 0, skipped: 7. [testng-ClientSocketReadTimeoutTest] Test testPutTimeout(org.infinispan.client.hotrod.ClientSocketReadTimeoutTest) succeeded. Test suite progress: tests succeeded: 508, failed: 0, skipped: 7. ==> this test hung a longer time [testng-DistributionRetryTest] Test testRemoveIfUnmodified(org.infinispan.client.hotrod.retry.DistributionRetryTest) failed. Test suite progress: tests succeeded: 508, failed: 1, skipped: 7. 
===> this test "never" came back The main problem is that the first build has issues and you need to bypass them. Second, there is a dependency issue if the tests are skipped; a hint within the documentation or README might be helpful to avoid frustration ;) And last but not least, is there a reason why the "[testng-ClientSocketReadTimeoutTest" hung? Would it be an idea to rename it if it takes long, i.e. "ClientSocket10MinuteReadTimeoutTest", to show that this test takes a long time? And also add a time limit for the test. - Wolf ------------------------ 1. Build ------------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~ ENVIRONMENT INFO ~~~~~~~~~~~~~~~~~~~~~~~~~~ Tests run: 4044, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 357.511 sec <<< FAILURE! testNoEntryInL1GetWithConcurrentReplace(org.infinispan.distribution.DistSyncL1FuncTest) Time elapsed: 0.005 sec <<< FAILURE! java.lang.AssertionError: Entry for key [key-to-the-cache] should be in L1 on cache at [DistSyncL1FuncTest-NodeA-21024]! 
        at org.infinispan.distribution.DistributionTestHelper.assertIsInL1(DistributionTestHelper.java:31)
        at org.infinispan.distribution.BaseDistFunctionalTest.assertIsInL1(BaseDistFunctionalTest.java:183)
        at org.infinispan.distribution.DistSyncL1FuncTest.testNoEntryInL1GetWithConcurrentReplace(DistSyncL1FuncTest.java:193)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
        at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
        at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
        at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
        at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
        at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
        at org.testng.TestRunner.privateRun(TestRunner.java:767)
        at org.testng.TestRunner.run(TestRunner.java:617)
        at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
        at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
        at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
        at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

testInvokeMapWithReduceExceptionPhaseInRemoteExecution(org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest)  Time elapsed: 0.018 sec  <<< FAILURE!
org.testng.TestException:
Method SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPhaseInRemoteExecution()[pri:0, instance:org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest@70bd631a] should have thrown an exception of class org.infinispan.commons.CacheException
        at org.testng.internal.Invoker.handleInvocationResults(Invoker.java:1512)
        at org.testng.internal.Invoker.invokeMethod(Invoker.java:754)
        at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
        at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
        at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
        at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
        at org.testng.TestRunner.privateRun(TestRunner.java:767)
        at org.testng.TestRunner.run(TestRunner.java:617)
        at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
        at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
        at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
        at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

Results :

Failed tests:
  DistSyncL1FuncTest.testNoEntryInL1GetWithConcurrentReplace:193->BaseDistFunctionalTest.assertIsInL1:183 Entry for key [key-to-the-cache] should be in L1 on cache at [DistSyncL1FuncTest-NodeA-21024]!
  Test Method SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPh...

Tests run: 4044, Failures: 2, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Infinispan BOM .................................... SUCCESS [0.100s]
[INFO] Infinispan Common Parent .......................... SUCCESS [1.324s]
[INFO] Infinispan Checkstyle Rules ....................... SUCCESS [2.197s]
[INFO] Infinispan Commons ................................ SUCCESS [4.583s]
[INFO] Infinispan Core ................................... FAILURE [6:21.850s]
[INFO] Infinispan Extended Statistics .................... SKIPPED
[INFO] Parent pom for server modules ..................... SKIPPED
[INFO] Infinispan Server - Core Components ............... SKIPPED
[INFO] Infinispan Query DSL API .......................... SKIPPED
[INFO] Parent pom for cachestore modules ................. SKIPPED
[INFO] Infinispan JDBC CacheStore ........................ SKIPPED
[INFO] Parent pom for the Lucene integration modules ..... SKIPPED
[INFO] Infinispan integration with Lucene v.3.x .......... SKIPPED
[INFO] Infinispan integration with Lucene v.4.x .......... SKIPPED
[INFO] Infinispan Lucene Directory Implementation ........ SKIPPED
[INFO] Infinispan Query API .............................. SKIPPED
[INFO] Infinispan Tools .................................. SKIPPED
[INFO] Infinispan Remote Query Client .................... SKIPPED
[INFO] Infinispan Remote Query Server .................... SKIPPED
[INFO] Infinispan Tree API ............................... SKIPPED
[INFO] Infinispan Hot Rod Server ......................... SKIPPED
[INFO] Infinispan Hot Rod Client ......................... SKIPPED
[INFO] Parent pom for compatibility modules .............. SKIPPED
[INFO] infinispan-custom52x-store ........................ SKIPPED
[INFO] infinispan-adaptor52x ............................. SKIPPED
[INFO] Infinispan remote CacheStore ...................... SKIPPED
[INFO] Infinispan CLI Client ............................. SKIPPED
[INFO] Infinispan Memcached Server ....................... SKIPPED
[INFO] Infinispan REST Server ............................ SKIPPED
[INFO] Infinispan CLI Server ............................. SKIPPED
[INFO] Infinispan Command Line Interface persistence ..... SKIPPED
[INFO] Infinispan LevelDB CacheStore ..................... SKIPPED
[INFO] Infinispan REST CacheStore ........................ SKIPPED
[INFO] Infinispan WebSocket Server ....................... SKIPPED
[INFO] Infinispan RHQ Plugin ............................. SKIPPED
[INFO] Infinispan Spring Integration ..................... SKIPPED
[INFO] Infinispan GUI Demo ............................... SKIPPED
[INFO] Infinispan EC2 Demo ............................... SKIPPED
[INFO] Infinispan Distributed Executors and Map/Reduce Demo SKIPPED
[INFO] Infinispan EC2 Demo UI ............................ SKIPPED
[INFO] Infinispan Directory Demo ......................... SKIPPED
[INFO] Infinispan Lucene Directory Demo .................. SKIPPED
[INFO] Infinispan GridFileSystem WebDAV interface ........ SKIPPED
[INFO] Infinispan Near Cache Demo ........................ SKIPPED
[INFO] Infinispan CDI support ............................ SKIPPED
[INFO] Infinispan Near Cache Demo Client ................. SKIPPED
[INFO] Infinispan AS/EAP modules ......................... SKIPPED
[INFO] Integration tests: Lucene Directory with Infinispan Query SKIPPED
[INFO] Infinispan JCACHE (JSR-107) implementation ........ SKIPPED
[INFO] Integration tests: AS Module Integration Tests .... SKIPPED
[INFO] Integration tests: Infinispan compatibility mode .. SKIPPED
[INFO] Integration tests: Infinispan CDI/JCache interactions SKIPPED
[INFO] infinispan-cli-migrator52x ........................ SKIPPED
[INFO] Infinispan Server - BOM ........................... SKIPPED
[INFO] Infinispan Server - JGroups Subsystem ............. SKIPPED
[INFO] Infinispan Server - Infinispan Subsystem .......... SKIPPED
[INFO] Infinispan Server - Security Subsystem ............ SKIPPED
[INFO] Infinispan Server - Endpoints Subsystem ........... SKIPPED
[INFO] Infinispan Server - Build ......................... SKIPPED
[INFO] Infinispan Server - RHQ/JON plugin ................ SKIPPED
[INFO] Infinispan Server - Test Suite .................... SKIPPED
[INFO] Infinispan Server ................................. SKIPPED
[INFO] Infinispan Distribution ........................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6:31.353s
[INFO] Finished at: Wed Jan 15 14:12:40 CET 2014
[INFO] Final Memory: 80M/1337M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on project infinispan-core: There are test failures.
[ERROR]
[ERROR] Please refer to /data/devel/github/Infinispan/infinispan/core/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :infinispan-core

--------------------------- 2.Build -------------------------------------

Downloaded: http://repo.maven.apache.org/maven2/com/clearspring/analytics/stream/2.2.0/stream-2.2.0.jar (73 KB at 1007.9 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Infinispan BOM .................................... SUCCESS [8.620s]
[INFO] Infinispan Common Parent .......................... SUCCESS [8:48.158s]
[INFO] Infinispan Checkstyle Rules ....................... SUCCESS [5:47.825s]
[INFO] Infinispan Commons ................................ SUCCESS [18.225s]
[INFO] Infinispan Core ................................... SUCCESS [34.340s]
[INFO] Infinispan Extended Statistics .................... FAILURE [5.186s]
[INFO] Parent pom for server modules ..................... SKIPPED
[INFO] Infinispan Server - Core Components ............... SKIPPED
[INFO] Infinispan Query DSL API .......................... SKIPPED
[INFO] Parent pom for cachestore modules ................. SKIPPED
[INFO] Infinispan JDBC CacheStore ........................ SKIPPED
[INFO] Parent pom for the Lucene integration modules ..... SKIPPED
[INFO] Infinispan integration with Lucene v.3.x .......... SKIPPED
[INFO] Infinispan integration with Lucene v.4.x .......... SKIPPED
[INFO] Infinispan Lucene Directory Implementation ........ SKIPPED
[INFO] Infinispan Query API .............................. SKIPPED
[INFO] Infinispan Tools .................................. SKIPPED
[INFO] Infinispan Remote Query Client .................... SKIPPED
[INFO] Infinispan Remote Query Server .................... SKIPPED
[INFO] Infinispan Tree API ............................... SKIPPED
[INFO] Infinispan Hot Rod Server ......................... SKIPPED
[INFO] Infinispan Hot Rod Client ......................... SKIPPED
[INFO] Parent pom for compatibility modules .............. SKIPPED
[INFO] infinispan-custom52x-store ........................ SKIPPED
[INFO] infinispan-adaptor52x ............................. SKIPPED
[INFO] Infinispan remote CacheStore ...................... SKIPPED
[INFO] Infinispan CLI Client ............................. SKIPPED
[INFO] Infinispan Memcached Server ....................... SKIPPED
[INFO] Infinispan REST Server ............................ SKIPPED
[INFO] Infinispan CLI Server ............................. SKIPPED
[INFO] Infinispan Command Line Interface persistence ..... SKIPPED
[INFO] Infinispan LevelDB CacheStore ..................... SKIPPED
[INFO] Infinispan REST CacheStore ........................ SKIPPED
[INFO] Infinispan WebSocket Server ....................... SKIPPED
[INFO] Infinispan RHQ Plugin ............................. SKIPPED
[INFO] Infinispan Spring Integration ..................... SKIPPED
[INFO] Infinispan GUI Demo ............................... SKIPPED
[INFO] Infinispan EC2 Demo ............................... SKIPPED
[INFO] Infinispan Distributed Executors and Map/Reduce Demo SKIPPED
[INFO] Infinispan EC2 Demo UI ............................ SKIPPED
[INFO] Infinispan Directory Demo ......................... SKIPPED
[INFO] Infinispan Lucene Directory Demo .................. SKIPPED
[INFO] Infinispan GridFileSystem WebDAV interface ........ SKIPPED
[INFO] Infinispan Near Cache Demo ........................ SKIPPED
[INFO] Infinispan CDI support ............................ SKIPPED
[INFO] Infinispan Near Cache Demo Client ................. SKIPPED
[INFO] Infinispan AS/EAP modules ......................... SKIPPED
[INFO] Integration tests: Lucene Directory with Infinispan Query SKIPPED
[INFO] Infinispan JCACHE (JSR-107) implementation ........ SKIPPED
[INFO] Integration tests: AS Module Integration Tests .... SKIPPED
[INFO] Integration tests: Infinispan compatibility mode .. SKIPPED
[INFO] Integration tests: Infinispan CDI/JCache interactions SKIPPED
[INFO] infinispan-cli-migrator52x ........................ SKIPPED
[INFO] Infinispan Server - BOM ........................... SKIPPED
[INFO] Infinispan Server - JGroups Subsystem ............. SKIPPED
[INFO] Infinispan Server - Infinispan Subsystem .......... SKIPPED
[INFO] Infinispan Server - Security Subsystem ............ SKIPPED
[INFO] Infinispan Server - Endpoints Subsystem ........... SKIPPED
[INFO] Infinispan Server - Build ......................... SKIPPED
[INFO] Infinispan Server - RHQ/JON plugin ................ SKIPPED
[INFO] Infinispan Server - Test Suite .................... SKIPPED
[INFO] Infinispan Server ................................. SKIPPED
[INFO] Infinispan Distribution ........................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19:44.748s
[INFO] Finished at: Wed Jan 15 13:55:05 CET 2014
[INFO] Final Memory: 64M/384M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project infinispan-extended-statistics: Could not resolve dependencies for project org.infinispan:infinispan-extended-statistics:jar:7.0.0-SNAPSHOT: Could not find artifact org.infinispan:infinispan-core:jar:tests:7.0.0-SNAPSHOT in redhat-earlyaccess-repository-group (http://maven.repository.redhat.com/earlyaccess/all/) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :infinispan-extended-statistics

From ben.cotton at ALUMNI.RUTGERS.EDU  Wed Jan 15 12:51:06 2014
From: ben.cotton at ALUMNI.RUTGERS.EDU (cotton-ben)
Date: Wed, 15 Jan 2014 09:51:06 -0800 (PST)
Subject: [infinispan-dev] Infinispan embedded off-heap cache
In-Reply-To: <52D67A1E.2010509@redhat.com>
References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com>
	<1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com>
	<52D678BD.5010400@redhat.com> <52D67A1E.2010509@redhat.com>
Message-ID: <1389808266330-4028649.post@n3.nabble.com>

Thanks very much Tristan and Jaromir for these responses.
Interesting that Netty's off-heap allocation management (jemalloc) may
deliver us a 'save a 1xCOPY back to the heap!' advantage that Unsafe
malloc/free does not. Peter Lawrey has commented that he will research
Netty's potential to be used in parts of OpenHFT HugeCollections ....
TBD: what is the full advantage/disadvantage accounting of replacing direct
Unsafe malloc/free with Netty? But these are nitty-gritty details.

The big-picture news is that API bridges (i.e. DataContainer) are available
(and intended) to empower the community to build their own "ISPN pluggable"
off-heap impls of Cache. Nice! The consequences of our *necessarily*
staying on-heap have been a "monstrous" experience for us (see
http://4.bp.blogspot.com/-upwza0_lLn4/TmXB4lKkPKI/AAAAAAAAAHY/9lA7VYCmSkI/s1600/heap_0001.jpg
). We are excited about this potential to bring us real remedy.

--
View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028649.html
Sent from the Infinispan Developer List mailing list archive at Nabble.com.

From ben.cotton at ALUMNI.RUTGERS.EDU  Wed Jan 15 14:26:28 2014
From: ben.cotton at ALUMNI.RUTGERS.EDU (cotton-ben)
Date: Wed, 15 Jan 2014 11:26:28 -0800 (PST)
Subject: [infinispan-dev] Infinispan embedded off-heap cache
In-Reply-To: <1389808266330-4028649.post@n3.nabble.com>
References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com>
	<1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com>
	<52D678BD.5010400@redhat.com> <52D67A1E.2010509@redhat.com>
	<1389808266330-4028649.post@n3.nabble.com>
Message-ID: <1389813988520-4028650.post@n3.nabble.com>

FYI. Some results from a test that Peter just wrote comparing the Netty
allocator vs. OpenHFT's direct invocation of Unsafe malloc/free. Indeed,
Netty's use of a PooledHeap approach does result in a 100% speed
improvement (wrt allocation events).
However, OpenHFT has a huge advantage wrt its underlying BytesMarshallable
capability to blazingly serialize/deserialize 'back to the heap!' value
object COPY transports (that could then be viewed as an NIO-operable
ByteBuffer). Interesting. Moral of the story? Netty and OpenHFT should
likely both be significant contributors to this ambition to deliver a
compelling off-heap Cache capability to ISPN.

---peter.lawrey at higherfrequencytrading.com wrote: --------------------

The first thing I noticed is that allocating using the Pooled Heap is twice
as fast on my machine: Netty creating/freeing 256 bytes is 11 million vs
DirectStore's 5.6 million per second. Note: HHM avoids doing this at all
and I suspect this difference is not important for HHM.

I re-wrote one of their tests as a performance test. That they don't appear
to performance-test their object serialization is a worry ;) but it also
means I probably didn't do it as optimally as it could be. In the following
test I serialize and deserialize an object with four fields (String, int,
double, Enum) using the same writeExternal/readExternal code.
Netty: Serialization/Deserialization latency: 327,499 us avg
Netty: Serialization/Deserialization latency: 97,419 us avg
Netty: Serialization/Deserialization latency: 54,232 us avg
Netty: Serialization/Deserialization latency: 58,950 us avg
Netty: Serialization/Deserialization latency: 53,177 us avg
Netty: Serialization/Deserialization latency: 53,189 us avg
Netty: Serialization/Deserialization latency: 53,672 us avg
Netty: Serialization/Deserialization latency: 52,871 us avg
Netty: Serialization/Deserialization latency: 52,211 us avg
Netty: Serialization/Deserialization latency: 51,924 us avg
DirectStore: Externalizable latency: 6,899 us avg
DirectStore: Externalizable latency: 825 us avg
DirectStore: Externalizable latency: 496 us avg
DirectStore: Externalizable latency: 494 us avg
DirectStore: Externalizable latency: 385 us avg
DirectStore: Externalizable latency: 212 us avg
DirectStore: Externalizable latency: 201 us avg
DirectStore: Externalizable latency: 197 us avg
DirectStore: Externalizable latency: 199 us avg
DirectStore: Externalizable latency: 203 us avg

The code is

/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */
package io.netty.handler.codec.marshalling;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.embedded.EmbeddedChannel;
import net.openhft.lang.io.Bytes;
import net.openhft.lang.io.DirectBytes;
import net.openhft.lang.io.DirectStore;
import net.openhft.lang.io.serialization.BytesMarshallable;
import org.jboss.marshalling.MarshallerFactory;
import org.jboss.marshalling.Marshalling;
import org.jboss.marshalling.MarshallingConfiguration;
import org.jboss.marshalling.Unmarshaller;
import org.jetbrains.annotations.NotNull;
import org.junit.Test;

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.lang.annotation.RetentionPolicy;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;

public class SerialMarshallingEncoderTest extends SerialCompatibleMarshallingEncoderTest {

    @Override
    protected ByteBuf truncate(ByteBuf buf) {
        buf.readInt();
        return buf;
    }

    @Override
    protected ChannelHandler createEncoder() {
        return new MarshallingEncoder(createProvider());
    }

    @Test
    public void testMarshallingPerf() throws Exception {
        MyData testObject = new MyData("Hello World", 1, 2.0, RetentionPolicy.RUNTIME);
        final MarshallerFactory marshallerFactory = createMarshallerFactory();
        final MarshallingConfiguration configuration = createMarshallingConfig();
        Unmarshaller unmarshaller = marshallerFactory.createUnmarshaller(configuration);
        for (int t = 0; t < 10; t++) {
            long start = System.nanoTime();
            int RUNS = 10000;
            for (int i = 0; i < RUNS; i++) {
                EmbeddedChannel ch = new EmbeddedChannel(createEncoder());
                ch.writeOutbound(testObject);
                assertTrue(ch.finish());
                ByteBuf buffer = ch.readOutbound();
                unmarshaller.start(Marshalling.createByteInput(truncate(buffer).nioBuffer()));
                MyData read = (MyData) unmarshaller.readObject();
                assertEquals(testObject, read);
                assertEquals(-1, unmarshaller.read());
                assertNull(ch.readOutbound());
                buffer.release();
            }
            long average = (System.nanoTime() - start) / RUNS;
            System.out.printf("Netty: Serialization/Deserialization latency: %,d us avg%n", average);
        }
        unmarshaller.finish();
        unmarshaller.close();
    }

    @Test
    public void testMarshallingPerfDirectStore() throws Exception {
        MyData testObject = new MyData("Hello World", 1, 2.0, RetentionPolicy.RUNTIME);
        MyData testObject2 = new MyData("test", 12, 222.0, RetentionPolicy.CLASS);
        DirectStore ds = DirectStore.allocateLazy(256);
        DirectBytes db = ds.createSlice();
        for (int t = 0; t < 10; t++) {
            long start = System.nanoTime();
            int RUNS = 10000;
            for (int i = 0; i < RUNS; i++) {
                db.reset();
                testObject.writeExternal(db);
                long position = db.position();
                db.reset();
                testObject2.readExternal(db);
                assertEquals(testObject, testObject2);
                assertEquals(position, db.position());
            }
            long average = (System.nanoTime() - start) / RUNS;
            System.out.printf("DirectStore: Externalizable latency: %,d us avg%n", average);
        }
        ds.free();
    }

    public static class MyData implements Externalizable {
        String text;
        int value;
        double number;
        RetentionPolicy policy;

        public MyData() {
        }

        public MyData(String text, int value, double number, RetentionPolicy policy) {
            this.text = text;
            this.value = value;
            this.number = number;
            this.policy = policy;
        }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeUTF(text);
            out.writeInt(value);
            out.writeDouble(number);
            out.writeUTF(policy.name());
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
            text = in.readUTF();
            value = in.readInt();
            number = in.readDouble();
            policy = RetentionPolicy.valueOf(in.readUTF());
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            MyData myData = (MyData) o;
            if (Double.compare(myData.number, number) != 0) return false;
            if (value != myData.value) return false;
            if (policy != myData.policy) return false;
            if (text != null ? !text.equals(myData.text) : myData.text != null) return false;
            return true;
        }
    }
}

On 15 January 2014 16:59, Peter Lawrey wrote:

Good question. I suspect there is a bunch of things it is not doing, but I
will investigate.

On 15 January 2014 16:50, Ben Cotton wrote:

Simplifying my question, is there something that Netty's jemalloc()-like
off-heap allocation management does that is somehow different
(advantageous?) when compared with straightforward usage of Unsafe
malloc/free?

--
View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028650.html
Sent from the Infinispan Developer List mailing list archive at Nabble.com.

From emmanuel at hibernate.org  Thu Jan 16 05:10:56 2014
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Thu, 16 Jan 2014 11:10:56 +0100
Subject: [infinispan-dev] Infinispan embedded off-heap cache
In-Reply-To: <1389808266330-4028649.post@n3.nabble.com>
References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com>
	<1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com>
	<52D678BD.5010400@redhat.com> <52D67A1E.2010509@redhat.com>
	<1389808266330-4028649.post@n3.nabble.com>
Message-ID: 

On 15 Jan 2014, at 18:51, cotton-ben wrote:

> Nice! The consequences of our *necessarily* staying on-heap have been a
> "monstrous" experience for us (see
> http://4.bp.blogspot.com/-upwza0_lLn4/TmXB4lKkPKI/AAAAAAAAAHY/9lA7VYCmSkI/s1600/heap_0001.jpg
> ).

As Tristan hinted, can you share (worst case privately) the reasons that
led to that horrific experience? This would be very useful to the
infinispan team to better shape and explain the off-heap approach. Off-heap
does come with non-trivial drawbacks around manual garbage collection and
memory fragmentation (at least when the data is not homogeneous).

Emmanuel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140116/7f9093d6/attachment.html

From jaromir.hamala at gmail.com  Thu Jan 16 05:15:49 2014
From: jaromir.hamala at gmail.com (Jaromir Hamala)
Date: Thu, 16 Jan 2014 10:15:49 +0000
Subject: [infinispan-dev] Infinispan embedded off-heap cache
In-Reply-To: 
References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com>
	<1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com>
	<52D678BD.5010400@redhat.com> <52D67A1E.2010509@redhat.com>
	<1389808266330-4028649.post@n3.nabble.com>
Message-ID: 

I thought this JEP was (sort of) relevant to this discussion:
http://openjdk.java.net/jeps/189

Cheers,
Jaromir

On Thu, Jan 16, 2014 at 10:10 AM, Emmanuel Bernard wrote:

>
> On 15 Jan 2014, at 18:51, cotton-ben wrote:
>
> Nice! The consequences of our *necessarily* staying on-heap have been a
> "monstrous" experience for us (see
> http://4.bp.blogspot.com/-upwza0_lLn4/TmXB4lKkPKI/AAAAAAAAAHY/9lA7VYCmSkI/s1600/heap_0001.jpg
> ).
>
> As Tristan hinted, can you share (worst case privately) the reasons that
> led to that horrific experience? This would be very useful to the
> infinispan team to better shape and explain the off-heap approach.
> Off-heap does come with non-trivial drawbacks around manual garbage
> collection and memory fragmentation (at least when the data is not
> homogeneous).
>
> Emmanuel
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

-- 
"Perfection is achieved, not when there is nothing more to add, but when
there is nothing left to take away." Antoine de Saint Exupéry
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140116/fe5af2a9/attachment.html

From ben.cotton at ALUMNI.RUTGERS.EDU  Thu Jan 16 12:37:51 2014
From: ben.cotton at ALUMNI.RUTGERS.EDU (cotton-ben)
Date: Thu, 16 Jan 2014 09:37:51 -0800 (PST)
Subject: [infinispan-dev] Infinispan embedded off-heap cache
In-Reply-To: 
References: <3BE9E09A-6651-45D9-B7F1-891C111F232C@redhat.com>
	<1389783264288-4028642.post@n3.nabble.com> <52D67480.9020908@redhat.com>
	<52D678BD.5010400@redhat.com> <52D67A1E.2010509@redhat.com>
	<1389808266330-4028649.post@n3.nabble.com>
Message-ID: <1389893871449-4028653.post@n3.nabble.com>

/> As Tristan hinted, can you share (worst case privately) the reasons that
led to that horrific experience?/

Thank you for this question. The answer is simple: managed run-time Garbage
Collection - despite its elegance, many recent advances, and real potential
and promise - has /consistently/ betrayed us (and at the most inopportune
times) with regard to our SLA to deliver to our bank stakeholders the
capability to render and aggregate real-time liquidity risk.

Without going into details, to do this in /real-time/ is a hard problem to
solve. But it is a problem that we are not only committed to solving ... it
is a problem that /is/ going to be solved.

The resources we have to empower us to solve this are ... well ... they are
/nice./ (A 240-CPU, 3TB-RAM Linux supercomputer, onto which we deploy a
90-node ISPN 5.3 DIST_SYNC data grid, to which we dispatch M-R quantitative
SCENARIOxSTRESS "risk search" algorithms (using ISPN's lovely
DistributedExecutorService/NotifyingFuture APIs) that empower us to always
accurately answer this primordial question = "what is our Liquidity Risk
indicator (LRI) wrt Position (P) on AssetClass (A) at Time (T)"?)

These resources do empower us to solve this problem ... except in the
case... and only in this case ... whenever any part of the grid (for any
reason) endures a STW GC event.
In those cases, this platform fails to solve this problem.

From our view, this is the fault of Java's run-time necessarily requiring
us to endure its GC priority. We don't want it!! We know it is elegant and
impressive, but WE DON'T WANT IT. *We want to sacrifice elegance for
capability.*

Now, we love Java, we love Infinispan, etc. etc. And we know that our
intolerance for any GC event that "totally messes us up" is a very rare
use-case -- rare enough that some Java solution providers would consider it
not worth the bother to accommodate us.

So that's our story. We crave going off-heap in as elegant a way as
possible. We are very hopeful that you guys @ISPN can feel our pain and
have interest in considering accommodating us (with your unmistakable
ambition to be the leading /community-driven/ Java solution provider).

--
View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Infinispan-embedded-off-heap-cache-tp4026102p4028653.html
Sent from the Infinispan Developer List mailing list archive at Nabble.com.

From sanne at infinispan.org  Thu Jan 16 20:18:19 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Fri, 17 Jan 2014 02:18:19 +0100
Subject: [infinispan-dev] Precision Time Protocol
Message-ID: 

This is now supported in software since RHEL 6.5:

http://en.wikipedia.org/wiki/Precision_time_protocol

Might mean that to use reliable timestamping one no longer needs
specific hardware?

Cheers,
Sanne

From rvansa at redhat.com  Fri Jan 17 08:06:12 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Fri, 17 Jan 2014 14:06:12 +0100
Subject: [infinispan-dev] Store as binary
Message-ID: <52D92AC4.7080701@redhat.com>

Hi Mircea,

I've run a simple stress test [1] in dist mode with store as binary (not
enabled, enabled keys only, enabled values only, enabled both).
The difference is < 2 % (with storeAsBinary enabled fully being slower).
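For readers trying to reproduce the four variants, they correspond to the storeAsBinary element in the declarative configuration. A sketch only -- this assumes the Infinispan 6.0-era XML schema, and the attribute names should be verified against the shipped XSD:

```xml
<!-- Hypothetical cache definition; only the storeAsBinary line matters.
     "enabled both" ~ keys AND values kept in serialized (binary) form;
     flip either attribute to false for the keys-only / values-only runs. -->
<namedCache name="binaryCache">
   <storeAsBinary enabled="true"
                  storeKeysAsBinary="true"
                  storeValuesAsBinary="true"/>
</namedCache>
```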
Radim

[1]
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html

--
Radim Vansa
JBoss DataGrid QA

From manik at infinispan.org  Fri Jan 17 18:12:36 2014
From: manik at infinispan.org (Manik Surtani)
Date: Fri, 17 Jan 2014 15:12:36 -0800
Subject: [infinispan-dev] Precision Time Protocol
In-Reply-To: 
References: 
Message-ID: 

I think JGroups should require a caesium clock [1]. So what if we need a
lead suit to install it. :)

[1] http://en.wikipedia.org/wiki/Caesium_standard

On 16 January 2014 17:18, Sanne Grinovero wrote:

> This is now supported in software since RHEL 6.5:
>
> http://en.wikipedia.org/wiki/Precision_time_protocol
>
> Might mean that to use reliable timestamping one no longer needs
> specific hardware?
>
> Cheers,
> Sanne
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140117/ce4408f4/attachment.html

From mmarkus at redhat.com  Mon Jan 20 04:41:21 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Mon, 20 Jan 2014 09:41:21 +0000
Subject: [infinispan-dev] Store as binary
In-Reply-To: <52D92AC4.7080701@redhat.com>
References: <52D92AC4.7080701@redhat.com>
Message-ID: 

Hi Radim,

I think 4 nodes with numOwners=2 is too small a cluster. My calculus
here [1] points out that for numOwners=1, the performance benefit is only
visible for clusters having more than two nodes. Following a similar logic
for numOwners=2, the benefit would only be visible for clusters having more
than 4 nodes. Would it be possible to run the test on a larger cluster, 8+
nodes?
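To make the cluster-size intuition concrete (my own back-of-envelope, not a restatement of the linked calculus): assuming keys are spread uniformly, the probability that the node performing an operation is itself an owner of the key is roughly numOwners/clusterSize, so on 4 nodes with numOwners=2 about half of all operations never serialize anything for the wire, diluting whatever storeAsBinary saves or costs.

```java
// Rough locality estimate (illustrative arithmetic only, not Infinispan's
// actual ConsistentHash): probability that the requesting node already
// owns the key, assuming uniform key distribution.
public class OwnerLocality {
    static double localOwnerProbability(int numOwners, int clusterSize) {
        return Math.min(1.0, (double) numOwners / clusterSize);
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 4, 8, 16}) {
            // The larger the cluster, the more often serialization cost
            // actually shows up on the critical path.
            System.out.printf("numOwners=2, cluster=%d -> local owner p=%.2f%n",
                    n, localOwnerProbability(2, n));
        }
    }
}
```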
[1] http://lists.jboss.org/pipermail/infinispan-dev/2009-October/004299.html

On Jan 17, 2014, at 1:06 PM, Radim Vansa wrote:

> Hi Mircea,
>
> I've run a simple stress test [1] in dist mode with store as binary (not
> enabled, enabled keys only, enabled values only, enabled both).
> The difference is < 2 % (with storeAsBinary enabled fully being slower).
>
> Radim
>
> [1]
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html
>
> --
> Radim Vansa
> JBoss DataGrid QA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From pedro at infinispan.org  Mon Jan 20 04:48:49 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Mon, 20 Jan 2014 09:48:49 +0000
Subject: [infinispan-dev] Store as binary
In-Reply-To: 
References: <52D92AC4.7080701@redhat.com>
Message-ID: <52DCF101.3020903@infinispan.org>

Hi,

IMO, we should try the worst scenario: Local Mode + Single thread.

This will show us the highest impact in performance.

Cheers,
Pedro

On 01/20/2014 09:41 AM, Mircea Markus wrote:
> Hi Radim,
>
> I think 4 nodes with numOwners=2 is too small a cluster. My calculus
> here [1] points out that for numOwners=1, the performance benefit is only
> visible for clusters having more than two nodes. Following a similar
> logic for numOwners=2, the benefit would only be visible for clusters
> having more than 4 nodes. Would it be possible to run the test on a
> larger cluster, 8+ nodes?
>
> [1] http://lists.jboss.org/pipermail/infinispan-dev/2009-October/004299.html
>
> On Jan 17, 2014, at 1:06 PM, Radim Vansa wrote:
>
>> Hi Mircea,
>>
>> I've run a simple stress test [1] in dist mode with store as binary (not
>> enabled, enabled keys only, enabled values only, enabled both).
>> The difference is < 2 % (with storeAsBinary enabled fully being slower).
>>
>> Radim
>>
>> [1]
>> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html
>>
>> --
>> Radim Vansa
>> JBoss DataGrid QA
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> Cheers,
>

From mmarkus at redhat.com  Mon Jan 20 05:07:27 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Mon, 20 Jan 2014 10:07:27 +0000
Subject: [infinispan-dev] Store as binary
In-Reply-To: <52DCF101.3020903@infinispan.org>
References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org>
Message-ID: <87020416-72D3-412E-818B-A7F9161355CC@redhat.com>

Would be interesting to see as well, though the performance figures would
not include network latency, hence it would not tell much about the
benefit of using this on a real-life system.

On Jan 20, 2014, at 9:48 AM, Pedro Ruivo wrote:

> IMO, we should try the worst scenario: Local Mode + Single thread.
>
> This will show us the highest impact in performance.

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From pedro at infinispan.org  Mon Jan 20 05:14:36 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Mon, 20 Jan 2014 10:14:36 +0000
Subject: [infinispan-dev] Store as binary
In-Reply-To: <87020416-72D3-412E-818B-A7F9161355CC@redhat.com>
References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org>
	<87020416-72D3-412E-818B-A7F9161355CC@redhat.com>
Message-ID: <52DCF70C.4090404@infinispan.org>

On 01/20/2014 10:07 AM, Mircea Markus wrote:
> Would be interesting to see as well, though the performance figures would
> not include network latency, hence it would not tell much about the
> benefit of using this on a real-life system.

That's my point.
I'm interested to see the worst scenario since all other cluster modes, will have a lower (or none) impact in performance. Of course, the best scenario would be only each node have access to remote keys... Pedro > > On Jan 20, 2014, at 9:48 AM, Pedro Ruivo wrote: > >> IMO, we should try the worst scenario: Local Mode + Single thread. >> >> this will show us the highest impact in performance. > > Cheers, > From galder at redhat.com Mon Jan 20 06:04:15 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 20 Jan 2014 12:04:15 +0100 Subject: [infinispan-dev] Time stamps in infinispan cluster In-Reply-To: References: Message-ID: Infinispan does nothing to synchronize the time in each of the nodes. On Jan 13, 2014, at 10:29 PM, Meena Rajani wrote: > Hi > > How does the distributed clock work in infinispan/jboss cluster. > Can some one please guide me. I have read a little bit about the total order messaging and vector clock. > I have extended the infinispan API for freshness Aware caching. I have assumed the time is synchronized all the time and timestamps are comparable. But I want to know how the timestamp work in Infinispan in distributed environment, specially when the communication among the cluster nodes is in synchronous mode. > > Regards > > Meena -- Galder Zamarre?o galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From galder at redhat.com Mon Jan 20 06:28:45 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 20 Jan 2014 12:28:45 +0100 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap Message-ID: Hi all, Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the F2F meeting [1]. It's complex and buggy, and we'd recommend people to use the Grouping API instead [2]. Grouping API would allow data to reside together, while the standard map API would apply per-key locking. 
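To make the recommendation concrete: the essence of grouping-based placement is that the segment (and hence the owner) is computed from a group identifier when one is present, rather than from the key itself, so related entries are co-located while locking stays per key. A toy sketch of that routing rule in plain Java — illustrative names and a simplistic hash, not the real Infinispan implementation:

```java
// Toy model of Grouping-API placement: keys that share a group hash to
// the same segment, so they land on the same owner; ungrouped keys are
// placed independently. Names are illustrative only.
public class GroupingSketch {
    static final int SEGMENTS = 60;

    // segment chosen by the group if one is defined, otherwise by the key
    static int segmentOf(String key, String group) {
        String routingKey = (group != null) ? group : key;
        return Math.floorMod(routingKey.hashCode(), SEGMENTS);
    }

    public static void main(String[] args) {
        int s1 = segmentOf("user:42:name", "user:42");
        int s2 = segmentOf("user:42:email", "user:42");
        System.out.println(s1 == s2); // true: same group, same segment

        // ungrouped keys spread out independently of each other
        System.out.println(segmentOf("user:42:name", null));
    }
}
```

In the real API the group is declared via the `@Group` annotation on the key class or through a custom `Grouper`, as described in the Grouping API chapter linked as [2].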
We don't have a timeline for this yet, but we want to get as much feedback on the topic as possible so that we can evaluate the options. Cheers, [1] https://issues.jboss.org/browse/ISPN-3901 [2] http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_grouping_api -- Galder Zamarreño galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From pedro at infinispan.org Mon Jan 20 06:32:49 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 20 Jan 2014 11:32:49 +0000 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: References: Message-ID: <52DD0961.90600@infinispan.org> Hi, On 01/20/2014 11:28 AM, Galder Zamarreño wrote: > Hi all, > > Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the F2F meeting [1]. It's complex and buggy, and we'd recommend people to use the Grouping API instead [2]. Grouping API would allow data to reside together, while the standard map API would apply per-key locking. +1. Are we going to drop the Delta stuff? > > We don't have a timeline for this yet, but we want to get as much feedback on the topic as possible so that we can evaluate the options. before starting with it, I would recommend adding the following method to the cache API: /** * returns all the keys and values associated with the group name. The Map is immutable (i.e.
read-only) **/ Map getGroup(String groupName); Cheers, Pedro > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-3901 > [2] http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_grouping_api > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Mon Jan 20 06:33:07 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 20 Jan 2014 12:33:07 +0100 Subject: [infinispan-dev] infinispan build process - fresh mvn-repo In-Reply-To: <52D6A210.9080907@redhat.com> References: <52D6A210.9080907@redhat.com> Message-ID: <7151A5A5-EE42-4CCC-8CB0-BCED0818D866@redhat.com> Did you look at http://infinispan.org/docs/6.0.x/contributing/contributing.html#_building_infinispan ? On Jan 15, 2014, at 3:58 PM, Wolf-Dieter Fink wrote: > Hi, > > I build the git at github.com:infinispan/infinispan.git from scratch and > follow the documentation/README. > > I use the maven-settings.xml > mvn -s maven-settings.xml -Dmaven.test.skip=true clean install > with that setting the build failed, see error "1.Build" > > A build with skipping test will not work due to dependency issues > mvn -s maven-settings.xml -Dmaven.test.skip=true clean install > see "2.Build" > > I found that "-Dmaven.test.skip.exec=true" will build correct. after > that the test hung forever (or longer than my patience ;) > > Test suite progress: tests succeeded: 506, failed: 0, skipped: 7. > [testng-BulkGetSimpleTest] Test > testBulkGetWithSize(org.infinispan.client.hotrod.BulkGetSimpleTest) > succeeded. > Test suite progress: tests succeeded: 507, failed: 0, skipped: 7. 
> [testng-ClientSocketReadTimeoutTest] Test > testPutTimeout(org.infinispan.client.hotrod.ClientSocketReadTimeoutTest) > succeeded. > Test suite progress: tests succeeded: 508, failed: 0, skipped: 7. > ==> this test hung a longer time > > [testng-DistributionRetryTest] Test > testRemoveIfUnmodified(org.infinispan.client.hotrod.retry.DistributionRetryTest) > failed. > Test suite progress: tests succeeded: 508, failed: 1, skipped: 7. > ===> this test "never" came back > > > > The main problem is that the first build will have issues and you need > to bypass it. > Second is that there is a dependency if the tests are skipped, a hint > within the documentation or readme might be helpful to avoid frustration ;) > And last but not least is there a reason why the > "[testng-ClientSocketReadTimeoutTest" hung? Would it be an idea to > rename it if it takes long, i.e. "ClientSocket10MinuteReadTimeoutTest"? > to show that this test takes a long time, And also a time-limit for the > test. > > > - Wolf > > > > ------------------------ 1. Build > ------------------------------------------- > ~~~~~~~~~~~~~~~~~~~~~~~~~ ENVIRONMENT INFO ~~~~~~~~~~~~~~~~~~~~~~~~~~ > Tests run: 4044, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: > 357.511 sec <<< FAILURE! > testNoEntryInL1GetWithConcurrentReplace(org.infinispan.distribution.DistSyncL1FuncTest) > Time elapsed: 0.005 sec <<< FAILURE! > java.lang.AssertionError: Entry for key [key-to-the-cache] should be in > L1 on cache at [DistSyncL1FuncTest-NodeA-21024]! 
> at > org.infinispan.distribution.DistributionTestHelper.assertIsInL1(DistributionTestHelper.java:31) > at > org.infinispan.distribution.BaseDistFunctionalTest.assertIsInL1(BaseDistFunctionalTest.java:183) > at > org.infinispan.distribution.DistSyncL1FuncTest.testNoEntryInL1GetWithConcurrentReplace(DistSyncL1FuncTest.java:193) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) > at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) > at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > at > org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > at org.testng.TestRunner.privateRun(TestRunner.java:767) > at org.testng.TestRunner.run(TestRunner.java:617) > at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > > testInvokeMapWithReduceExceptionPhaseInRemoteExecution(org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest) > Time elapsed: 0.018 sec <<< FAILURE! 
> org.testng.TestException: > Method > SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPhaseInRemoteExecution()[pri:0, > instance:org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest at 70bd631a] > should have thrown an exception of class > org.infinispan.commons.CacheException > at > org.testng.internal.Invoker.handleInvocationResults(Invoker.java:1512) > at org.testng.internal.Invoker.invokeMethod(Invoker.java:754) > at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > at > org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > at org.testng.TestRunner.privateRun(TestRunner.java:767) > at org.testng.TestRunner.run(TestRunner.java:617) > at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > > > Results : > > Failed tests: > DistSyncL1FuncTest.testNoEntryInL1GetWithConcurrentReplace:193->BaseDistFunctionalTest.assertIsInL1:183 > Entry for key [key-to-the-cache] should be in L1 on cache at > [DistSyncL1FuncTest-NodeA-21024]! > ? Test > Method SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPh... > > Tests run: 4044, Failures: 2, Errors: 0, Skipped: 0 > > [INFO] > ------------------------------------------------------------------------ > [INFO] Reactor Summary: > [INFO] > [INFO] Infinispan BOM .................................... 
SUCCESS [0.100s] > [INFO] Infinispan Common Parent .......................... SUCCESS [1.324s] > [INFO] Infinispan Checkstyle Rules ....................... SUCCESS [2.197s] > [INFO] Infinispan Commons ................................ SUCCESS [4.583s] > [INFO] Infinispan Core ................................... FAILURE > [6:21.850s] > [INFO] Infinispan Extended Statistics .................... SKIPPED > [INFO] Parent pom for server modules ..................... SKIPPED > [INFO] Infinispan Server - Core Components ............... SKIPPED > [INFO] Infinispan Query DSL API .......................... SKIPPED > [INFO] Parent pom for cachestore modules ................. SKIPPED > [INFO] Infinispan JDBC CacheStore ........................ SKIPPED > [INFO] Parent pom for the Lucene integration modules ..... SKIPPED > [INFO] Infinispan integration with Lucene v.3.x .......... SKIPPED > [INFO] Infinispan integration with Lucene v.4.x .......... SKIPPED > [INFO] Infinispan Lucene Directory Implementation ........ SKIPPED > [INFO] Infinispan Query API .............................. SKIPPED > [INFO] Infinispan Tools .................................. SKIPPED > [INFO] Infinispan Remote Query Client .................... SKIPPED > [INFO] Infinispan Remote Query Server .................... SKIPPED > [INFO] Infinispan Tree API ............................... SKIPPED > [INFO] Infinispan Hot Rod Server ......................... SKIPPED > [INFO] Infinispan Hot Rod Client ......................... SKIPPED > [INFO] Parent pom for compatibility modules .............. SKIPPED > [INFO] infinispan-custom52x-store ........................ SKIPPED > [INFO] infinispan-adaptor52x ............................. SKIPPED > [INFO] Infinispan remote CacheStore ...................... SKIPPED > [INFO] Infinispan CLI Client ............................. SKIPPED > [INFO] Infinispan Memcached Server ....................... SKIPPED > [INFO] Infinispan REST Server ............................ 
SKIPPED > [INFO] Infinispan CLI Server ............................. SKIPPED > [INFO] Infinispan Command Line Interface persistence ..... SKIPPED > [INFO] Infinispan LevelDB CacheStore ..................... SKIPPED > [INFO] Infinispan REST CacheStore ........................ SKIPPED > [INFO] Infinispan WebSocket Server ....................... SKIPPED > [INFO] Infinispan RHQ Plugin ............................. SKIPPED > [INFO] Infinispan Spring Integration ..................... SKIPPED > [INFO] Infinispan GUI Demo ............................... SKIPPED > [INFO] Infinispan EC2 Demo ............................... SKIPPED > [INFO] Infinispan Distributed Executors and Map/Reduce Demo SKIPPED > [INFO] Infinispan EC2 Demo UI ............................ SKIPPED > [INFO] Infinispan Directory Demo ......................... SKIPPED > [INFO] Infinispan Lucene Directory Demo .................. SKIPPED > [INFO] Infinispan GridFileSystem WebDAV interface ........ SKIPPED > [INFO] Infinispan Near Cache Demo ........................ SKIPPED > [INFO] Infinispan CDI support ............................ SKIPPED > [INFO] Infinispan Near Cache Demo Client ................. SKIPPED > [INFO] Infinispan AS/EAP modules ......................... SKIPPED > [INFO] Integration tests: Lucene Directory with Infinispan Query SKIPPED > [INFO] Infinispan JCACHE (JSR-107) implementation ........ SKIPPED > [INFO] Integration tests: AS Module Integration Tests .... SKIPPED > [INFO] Integration tests: Infinispan compatibility mode .. SKIPPED > [INFO] Integration tests: Infinispan CDI/JCache interactions SKIPPED > [INFO] infinispan-cli-migrator52x ........................ SKIPPED > [INFO] Infinispan Server - BOM ........................... SKIPPED > [INFO] Infinispan Server - JGroups Subsystem ............. SKIPPED > [INFO] Infinispan Server - Infinispan Subsystem .......... SKIPPED > [INFO] Infinispan Server - Security Subsystem ............ 
SKIPPED > [INFO] Infinispan Server - Endpoints Subsystem ........... SKIPPED > [INFO] Infinispan Server - Build ......................... SKIPPED > [INFO] Infinispan Server - RHQ/JON plugin ................ SKIPPED > [INFO] Infinispan Server - Test Suite .................... SKIPPED > [INFO] Infinispan Server ................................. SKIPPED > [INFO] Infinispan Distribution ........................... SKIPPED > [INFO] > ------------------------------------------------------------------------ > [INFO] BUILD FAILURE > [INFO] > ------------------------------------------------------------------------ > [INFO] Total time: 6:31.353s > [INFO] Finished at: Wed Jan 15 14:12:40 CET 2014 > [INFO] Final Memory: 80M/1337M > [INFO] > ------------------------------------------------------------------------ > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test > (default-test) on project infinispan-core: There are test failures. > [ERROR] > [ERROR] Please refer to > /data/devel/github/Infinispan/infinispan/core/target/surefire-reports > for the individual test results. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the > -e switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> [ERROR] > [ERROR] For more information about the errors and possible solutions, > please read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException > [ERROR] > [ERROR] After correcting the problems, you can resume the build with the > command > [ERROR] mvn -rf :infinispan-core > > > --------------------------- 2.Build ------------------------------------- > Downloaded: > http://repo.maven.apache.org/maven2/com/clearspring/analytics/stream/2.2.0/stream-2.2.0.jar > (73 KB at 1007.9 KB/sec) > [INFO] > ------------------------------------------------------------------------ > [INFO] Reactor Summary: > [INFO] > [INFO] Infinispan BOM .................................... SUCCESS [8.620s] > [INFO] Infinispan Common Parent .......................... SUCCESS > [8:48.158s] > [INFO] Infinispan Checkstyle Rules ....................... SUCCESS > [5:47.825s] > [INFO] Infinispan Commons ................................ SUCCESS [18.225s] > [INFO] Infinispan Core ................................... SUCCESS [34.340s] > [INFO] Infinispan Extended Statistics .................... FAILURE [5.186s] > [INFO] Parent pom for server modules ..................... SKIPPED > [INFO] Infinispan Server - Core Components ............... SKIPPED > [INFO] Infinispan Query DSL API .......................... SKIPPED > [INFO] Parent pom for cachestore modules ................. SKIPPED > [INFO] Infinispan JDBC CacheStore ........................ SKIPPED > [INFO] Parent pom for the Lucene integration modules ..... SKIPPED > [INFO] Infinispan integration with Lucene v.3.x .......... SKIPPED > [INFO] Infinispan integration with Lucene v.4.x .......... SKIPPED > [INFO] Infinispan Lucene Directory Implementation ........ SKIPPED > [INFO] Infinispan Query API .............................. SKIPPED > [INFO] Infinispan Tools .................................. SKIPPED > [INFO] Infinispan Remote Query Client .................... 
SKIPPED > [INFO] Infinispan Remote Query Server .................... SKIPPED > [INFO] Infinispan Tree API ............................... SKIPPED > [INFO] Infinispan Hot Rod Server ......................... SKIPPED > [INFO] Infinispan Hot Rod Client ......................... SKIPPED > [INFO] Parent pom for compatibility modules .............. SKIPPED > [INFO] infinispan-custom52x-store ........................ SKIPPED > [INFO] infinispan-adaptor52x ............................. SKIPPED > [INFO] Infinispan remote CacheStore ...................... SKIPPED > [INFO] Infinispan CLI Client ............................. SKIPPED > [INFO] Infinispan Memcached Server ....................... SKIPPED > [INFO] Infinispan REST Server ............................ SKIPPED > [INFO] Infinispan CLI Server ............................. SKIPPED > [INFO] Infinispan Command Line Interface persistence ..... SKIPPED > [INFO] Infinispan LevelDB CacheStore ..................... SKIPPED > [INFO] Infinispan REST CacheStore ........................ SKIPPED > [INFO] Infinispan WebSocket Server ....................... SKIPPED > [INFO] Infinispan RHQ Plugin ............................. SKIPPED > [INFO] Infinispan Spring Integration ..................... SKIPPED > [INFO] Infinispan GUI Demo ............................... SKIPPED > [INFO] Infinispan EC2 Demo ............................... SKIPPED > [INFO] Infinispan Distributed Executors and Map/Reduce Demo SKIPPED > [INFO] Infinispan EC2 Demo UI ............................ SKIPPED > [INFO] Infinispan Directory Demo ......................... SKIPPED > [INFO] Infinispan Lucene Directory Demo .................. SKIPPED > [INFO] Infinispan GridFileSystem WebDAV interface ........ SKIPPED > [INFO] Infinispan Near Cache Demo ........................ SKIPPED > [INFO] Infinispan CDI support ............................ SKIPPED > [INFO] Infinispan Near Cache Demo Client ................. 
SKIPPED > [INFO] Infinispan AS/EAP modules ......................... SKIPPED > [INFO] Integration tests: Lucene Directory with Infinispan Query SKIPPED > [INFO] Infinispan JCACHE (JSR-107) implementation ........ SKIPPED > [INFO] Integration tests: AS Module Integration Tests .... SKIPPED > [INFO] Integration tests: Infinispan compatibility mode .. SKIPPED > [INFO] Integration tests: Infinispan CDI/JCache interactions SKIPPED > [INFO] infinispan-cli-migrator52x ........................ SKIPPED > [INFO] Infinispan Server - BOM ........................... SKIPPED > [INFO] Infinispan Server - JGroups Subsystem ............. SKIPPED > [INFO] Infinispan Server - Infinispan Subsystem .......... SKIPPED > [INFO] Infinispan Server - Security Subsystem ............ SKIPPED > [INFO] Infinispan Server - Endpoints Subsystem ........... SKIPPED > [INFO] Infinispan Server - Build ......................... SKIPPED > [INFO] Infinispan Server - RHQ/JON plugin ................ SKIPPED > [INFO] Infinispan Server - Test Suite .................... SKIPPED > [INFO] Infinispan Server ................................. SKIPPED > [INFO] Infinispan Distribution ........................... 
SKIPPED > [INFO] > ------------------------------------------------------------------------ > [INFO] BUILD FAILURE > [INFO] > ------------------------------------------------------------------------ > [INFO] Total time: 19:44.748s > [INFO] Finished at: Wed Jan 15 13:55:05 CET 2014 > [INFO] Final Memory: 64M/384M > [INFO] > ------------------------------------------------------------------------ > [ERROR] Failed to execute goal on project > infinispan-extended-statistics: Could not resolve dependencies for > project > org.infinispan:infinispan-extended-statistics:jar:7.0.0-SNAPSHOT: Could > not find artifact > org.infinispan:infinispan-core:jar:tests:7.0.0-SNAPSHOT in > redhat-earlyaccess-repository-group > (http://maven.repository.redhat.com/earlyaccess/all/) -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the > -e switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, > please read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException > [ERROR] > [ERROR] After correcting the problems, you can resume the build with the > command > [ERROR] mvn -rf :infinispan-extended-statistics > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From emmanuel at hibernate.org Mon Jan 20 07:39:39 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Mon, 20 Jan 2014 13:39:39 +0100 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: References: Message-ID: <20140120123939.GC71332@hibernate.org> Then cf the detailed feedback from that mailing list from the last 
time we discussed it :) There was specifically some feedback on how we use it for OGM and how we need a way to retrieve all entries for a given group (at least). Emmanuel On Mon 2014-01-20 12:28, Galder Zamarre?o wrote: > Hi all, > > Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the F2F meeting [1]. It's complex and buggy, and we'd recommend people to use the Grouping API instead [2]. Grouping API would allow data to reside together, while the standard map API would apply per-key locking. > > We don't have a timeline for this yet, but we want to get as much feedback on the topic as possible so that we can evaluate the options. > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-3901 > [2] http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_grouping_api > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Jan 20 08:08:57 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 20 Jan 2014 14:08:57 +0100 Subject: [infinispan-dev] Moved projects Message-ID: <52DD1FE9.9010204@redhat.com> Dear all, in order to avoid confusion, I have done some Git surgery on the following projects: - infinispan-server - infinispan-cachestore-rest - infinispan-cachestore-leveldb In particular: - master has been renamed to deprecated_master - a DEPRECATED tag has been created which points to the previous HEAD - the master branch now contains a README file which tells users what has happened and to go to the main Infinispan project The new master branch was created as an "orphan" git branch and force-pushed as master, something I wouldn't usually do, but in this case it should act as an additional "barrier". 
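The orphan-branch surgery described above can be sketched end-to-end against a scratch repository; the real repository URLs and the final force-push are omitted, and the branch/tag names follow the message:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "old history"

# keep the previous history reachable under a new name and a tag
git branch deprecated_master
git tag DEPRECATED

# start a history-free branch holding only the deprecation notice
git checkout -q --orphan new_master
git rm -r -f -q --cached . 2>/dev/null || true
echo "This project has moved to the main Infinispan repository." > README
git add README
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Deprecation notice"

# replace master with the orphan branch; on a real remote this would be
# followed by `git push --force origin master`
git branch -M master
git rev-list --count master   # prints 1: the old history is gone from master
```

The force-push of the orphaned `master` is the step that rewrites published history, which is why it is flagged above as something one wouldn't usually do.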
Tristan From sanne at infinispan.org Mon Jan 20 09:01:40 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 20 Jan 2014 14:01:40 +0000 Subject: [infinispan-dev] Performance of accessing off-heap buffers: NIO & Unsafe Message-ID: At our meeting last week, there was a debate about the fact that the (various) off-heap buffer usage proposals, including NIO2 reads, would potentially be slower because of it potentially needing more "native" invocations. At the following link you can see the full list of methods which will actually be optimised using "intrinsics" i.e. being replaced by the compiler as it was a macro with highly optimized ad-hoc code which might be platform dependant (or in other words, which will be able to take best advantage of the capabilities of the executing platform): http://hg.openjdk.java.net/jdk8/awt/hotspot/file/d61761bf3050/src/share/vm/classfile/vmSymbols.hpp In particular, note the "do_intrinsic" qualifier marking all uses of Unsafe and the NIO Buffer. Hope you'll all agree now that further arguing about any of this will be dismissed unless we want to talk about measurements :-) Kudos to all scepticals (always good), still let's not dismiss the large work needed for this yet, nor let us revert from the rightful path until we know we've tried it to the end: I do not expect to see incremental performance improvements while we make progress, it might even slow down until we get to the larger rewards. 
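As a concrete illustration of the buffers under discussion, here is a minimal direct-buffer round trip in plain Java — `allocateDirect` places the storage off the Java heap, and the absolute get/put accessors are among the methods on the intrinsics list linked above. No performance claim is made; as the thread says, only measurements settle that:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Absolute get/put on a direct buffer: the storage lives outside the
// Java heap, and these accessors are intrinsified by the JIT rather
// than dispatched as ordinary native calls.
public class DirectBufferExample {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(64).order(ByteOrder.nativeOrder());

        buf.putLong(0, 0xCAFEBABEL); // occupies bytes 0-7
        buf.putInt(8, 42);           // occupies bytes 8-11

        System.out.println(buf.isDirect());                // true
        System.out.println(buf.getLong(0) == 0xCAFEBABEL); // true
        System.out.println(buf.getInt(8));                 // 42
    }
}
```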
Cheers, Sanne From ttarrant at redhat.com Mon Jan 20 09:23:14 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 20 Jan 2014 15:23:14 +0100 Subject: [infinispan-dev] Performance of accessing off-heap buffers: NIO & Unsafe In-Reply-To: References: Message-ID: <52DD3152.4030001@redhat.com> Hi Sanne, ultimately I believe that it is not about the "intrinsic" (sorry for overloading the term) performance of the memory allocation invocations, but the advantage of using ByteBuffers as the de-facto standard for passing data around between Infinispan, JGroups and any I/O layers (network, disk). Removing various points of copying, marshalling, etc is the real win. Tristan On 01/20/2014 03:01 PM, Sanne Grinovero wrote: > At our meeting last week, there was a debate about the fact that the > (various) off-heap buffer usage proposals, including NIO2 reads, would > potentially be slower because of it potentially needing more "native" > invocations. > > At the following link you can see the full list of methods which will > actually be optimised using "intrinsics" i.e. being replaced by the > compiler as it was a macro with highly optimized ad-hoc code which > might be platform dependant (or in other words, which will be able to > take best advantage of the capabilities of the executing platform): > > http://hg.openjdk.java.net/jdk8/awt/hotspot/file/d61761bf3050/src/share/vm/classfile/vmSymbols.hpp > > In particular, note the "do_intrinsic" qualifier marking all uses of > Unsafe and the NIO Buffer. > > Hope you'll all agree now that further arguing about any of this will > be dismissed unless we want to talk about measurements :-) > > Kudos to all scepticals (always good), still let's not dismiss the > large work needed for this yet, nor let us revert from the rightful > path until we know we've tried it to the end: I do not expect to see > incremental performance improvements while we make progress, it might > even slow down until we get to the larger rewards. 
> > Cheers, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From sanne at infinispan.org Mon Jan 20 10:09:54 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 20 Jan 2014 15:09:54 +0000 Subject: [infinispan-dev] Performance of accessing off-heap buffers: NIO & Unsafe In-Reply-To: <52DD3152.4030001@redhat.com> References: <52DD3152.4030001@redhat.com> Message-ID: On 20 January 2014 14:23, Tristan Tarrant wrote: > Hi Sanne, > > ultimately I believe that it is not about the "intrinsic" (sorry for > overloading the term) performance of the memory allocation invocations, > but the advantage of using ByteBuffers as the de-facto standard for > passing data around between Infinispan, JGroups and any I/O layers > (network, disk). Removing various points of copying, marshalling, etc is > the real win. Absolutely. Still there was some skepticism from others building on the amount of times we' d need to do some random access to these buffers; my point is that it's probably an unfounded concern, and I wouldn't like to have such theories to prevent evolution in this direction. Sanne > > Tristan > > On 01/20/2014 03:01 PM, Sanne Grinovero wrote: >> At our meeting last week, there was a debate about the fact that the >> (various) off-heap buffer usage proposals, including NIO2 reads, would >> potentially be slower because of it potentially needing more "native" >> invocations. >> >> At the following link you can see the full list of methods which will >> actually be optimised using "intrinsics" i.e. 
being replaced by the >> compiler as it was a macro with highly optimized ad-hoc code which >> might be platform dependant (or in other words, which will be able to >> take best advantage of the capabilities of the executing platform): >> >> http://hg.openjdk.java.net/jdk8/awt/hotspot/file/d61761bf3050/src/share/vm/classfile/vmSymbols.hpp >> >> In particular, note the "do_intrinsic" qualifier marking all uses of >> Unsafe and the NIO Buffer. >> >> Hope you'll all agree now that further arguing about any of this will >> be dismissed unless we want to talk about measurements :-) >> >> Kudos to all scepticals (always good), still let's not dismiss the >> large work needed for this yet, nor let us revert from the rightful >> path until we know we've tried it to the end: I do not expect to see >> incremental performance improvements while we make progress, it might >> even slow down until we get to the larger rewards. >> >> Cheers, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Mon Jan 20 10:48:04 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 20 Jan 2014 16:48:04 +0100 Subject: [infinispan-dev] Store as binary In-Reply-To: <52DCF70C.4090404@infinispan.org> References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> <52DCF70C.4090404@infinispan.org> Message-ID: <52DD4534.7080209@redhat.com> OK, I have results for dist-udp-no-tx or local-no-tx modes on 8 nodes (in local mode the nodes don't communicate, naturally): Dist mode: 3 % down for reads, 1 % for writes Local mode: 19 % down for reads, 16 % for writes Details in [1], ^ is for both keys and values 
stored as binary. Radim [1] https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/4/artifact/report/All_report.html On 01/20/2014 11:14 AM, Pedro Ruivo wrote: > > On 01/20/2014 10:07 AM, Mircea Markus wrote: >> Would be interesting to see as well, though performance figure would not include the network latency, hence it would not tell much about the benefit of using this on a real life system. > that's my point. I'm interested to see the worst scenario since all > other cluster modes, will have a lower (or none) impact in performance. > > Of course, the best scenario would be only each node have access to > remote keys... > > Pedro > >> On Jan 20, 2014, at 9:48 AM, Pedro Ruivo wrote: >> >>> IMO, we should try the worst scenario: Local Mode + Single thread. >>> >>> this will show us the highest impact in performance. >> Cheers, >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From sanne at infinispan.org Tue Jan 21 07:36:51 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 21 Jan 2014 12:36:51 +0000 Subject: [infinispan-dev] Store as binary In-Reply-To: <52DD4534.7080209@redhat.com> References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> <52DCF70C.4090404@infinispan.org> <52DD4534.7080209@redhat.com> Message-ID: What's the point for these tests? On 20 Jan 2014 15:48, "Radim Vansa" wrote: > OK, I have results for dist-udp-no-tx or local-no-tx modes on 8 nodes > (in local mode the nodes don't communicate, naturally): > Dist mode: 3 % down for reads, 1 % for writes > Local mode: 19 % down for reads, 16 % for writes > > Details in [1], ^ is for both keys and values stored as binary. 
> > Radim > > [1] > > https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/4/artifact/report/All_report.html > > On 01/20/2014 11:14 AM, Pedro Ruivo wrote: > > > > On 01/20/2014 10:07 AM, Mircea Markus wrote: > >> Would be interesting to see as well, though performance figure would > not include the network latency, hence it would not tell much about the > benefit of using this on a real life system. > > that's my point. I'm interested to see the worst scenario since all > > other cluster modes, will have a lower (or none) impact in performance. > > > > Of course, the best scenario would be only each node have access to > > remote keys... > > > > Pedro > > > >> On Jan 20, 2014, at 9:48 AM, Pedro Ruivo wrote: > >> > >>> IMO, we should try the worst scenario: Local Mode + Single thread. > >>> > >>> this will show us the highest impact in performance. > >> Cheers, > >> > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Tue Jan 21 08:21:43 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Tue, 21 Jan 2014 14:21:43 +0100 Subject: [infinispan-dev] Store as binary In-Reply-To: References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> <52DCF70C.4090404@infinispan.org> <52DD4534.7080209@redhat.com> Message-ID: On Jan 21, 2014, at 1:36 PM, Sanne Grinovero wrote: > What's the point for these tests?
+1 > On 20 Jan 2014 15:48, "Radim Vansa" wrote: > OK, I have results for dist-udp-no-tx or local-no-tx modes on 8 nodes > (in local mode the nodes don't communicate, naturally): > Dist mode: 3 % down for reads, 1 % for writes > Local mode: 19 % down for reads, 16 % for writes > > Details in [1], ^ is for both keys and values stored as binary. > > Radim > > [1] > https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/4/artifact/report/All_report.html > > On 01/20/2014 11:14 AM, Pedro Ruivo wrote: > > > > On 01/20/2014 10:07 AM, Mircea Markus wrote: > >> Would be interesting to see as well, though performance figure would not include the network latency, hence it would not tell much about the benefit of using this on a real life system. > > that's my point. I'm interested to see the worst scenario since all > > other cluster modes, will have a lower (or none) impact in performance. > > > > Of course, the best scenario would be only each node have access to > > remote keys... > > > > Pedro > > > >> On Jan 20, 2014, at 9:48 AM, Pedro Ruivo wrote: > >> > >>> IMO, we should try the worst scenario: Local Mode + Single thread. > >>> > >>> this will show us the highest impact in performance. 
> >> Cheers, > >> > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From mmarkus at redhat.com Tue Jan 21 08:37:08 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 13:37:08 +0000 Subject: [infinispan-dev] Store as binary In-Reply-To: References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> <52DCF70C.4090404@infinispan.org> <52DD4534.7080209@redhat.com> Message-ID: On Jan 21, 2014, at 1:21 PM, Galder Zamarre?o wrote: >> What's the point for these tests? > > +1 To validate if storing the data in binary format yields better performance than store is as a POJO. As of now, it doesn't so I need to check why. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sanne at infinispan.org Tue Jan 21 09:13:28 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 21 Jan 2014 14:13:28 +0000 Subject: [infinispan-dev] Store as binary In-Reply-To: References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> <52DCF70C.4090404@infinispan.org> <52DD4534.7080209@redhat.com> Message-ID: On 21 January 2014 13:37, Mircea Markus wrote: > > On Jan 21, 2014, at 1:21 PM, Galder Zamarre?o wrote: > >>> What's the point for these tests? 
>> >> +1 > > To validate if storing the data in binary format yields better performance than store is as a POJO. That will highly depend on the scenarios you want to test for. AFAIK this started after Paul described how session replication works in WildFly, and we already know that both strategies are suboptimal with the current options available: in his case the active node will always write on the POJO, while the backup node will essentially only need to store the buffer "just in case" he might need to take over. Sure, one will be slower, but if you want to make a suggestion to him about which configuration he should be using, we should measure his use case, not a different one. Even then as discussed in Palma, an in memory String representation might be way more compact because of pooling of strings and a very high likelihood for repeated headers (as common in web frameworks), so you might want to measure the CPU vs storage cost on the receiving side.. but then again your results will definitely depend on the input data and assumptions on likelihood of failover, how often is being written on the owner node vs on the other node (since he uses locality), etc.. many factors I'm not seeing being considered here and which could make a significant difference. > As of now, it doesn't so I need to check why. You could play with the test parameters until it produces an output you like better, but I still see no point? This is not a realistic scenario, at best it could help us document suggestions about which scenarios you'd want to keep the option enabled vs disabled, but then again I think we're wasting time as we could implement a better strategy for Paul's use case: one which never deserializes a value received from a remote node until it's been requested as a POJO, but keeps the POJO as-is when it's stored locally. I believe that would make sense also for OGM and probably most other users of Embedded. 
Basically, that would re-implement something similar to the previous design but simplify it a bit so that it doesn't allow for a back-and-forth conversion between storage types but rather dynamically favors a specific storage strategy. Cheers, Sanne > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Tue Jan 21 10:07:32 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 15:07:32 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: Hi Emmanuel, Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: RemoteClient.put(G g, K k, V v); //first param is the group RemoteClient.getGroup(G g) : Map; It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecate AtomicMaps and suggest replacing them with Grouping. This approach still has some limitations compared to the current embedded integration: - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server.
On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: > > On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: > >> It's an interesting approach that would work fine-ish for entities >> assuming the Hot Rod client is multi threaded and assuming the client >> uses Future to parallelize the calls. > > The Java Hotrod client is both multithreaded and exposes an Async API. > >> >> But it won't work for associations as we have them designed today. >> Each association - or more precisely the query results to go from an >> entity A1 to the list of entities B associated to it - is represented by >> an AtomicMap. >> Each entry in this map does correspond to an entry in the association. >> >> While we can "guess" the column names and build from the metadata the >> list of composed keys for entities, we cannot do the same for >> associations as the key is literally the (composite) id of the >> association and we cannot guess that most of the time (we can in very >> pathological cases). >> We could imagine that we list the association row keys in a special >> entry to work around that but this approach is just as problematic and >> is conceptually the same. >> The only solution would be to lock the whole association for each >> operation and I guess impose some versioning / optimistic lock. >> >> That is not a pattern that scales sufficiently from my experience. > > I think so too :-) > >> That's the problem with interconnected data :) >> >> Emmanuel >> >> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>> Neither the grouping API nor the AtomicMap work over hotrod. >>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? 
>>> >>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>> >>>> Someone mentioned the grouping API as some sort of alternative to >>>> AtomicMap. Maybe we should use that? >>>> Note that if we don't have a fine-grained approach we will need to >>>> make sure we *copy* the complex data structure upon reads to mimic >>>> proper transaction isolation. >>>> >>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>> On the transaction side, we can start without them. >>>>> >>>>> +1 on omitting transactions for now. >>>>> >>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >>>>> Would be good to eventually converge on similar featuresets on remote >>>>> vs embedded APIs. >>>>> >>>>> I know the embedded version relies on batching/transactions, but I >>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>> Hot Rod? >>>>> >>>>> Sanne >>>>> >>>>>> >>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>> Hi, >>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>> >>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>> logic. >>>>>>> At the moment I'm having two problems: >>>>>>> >>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>> an equivalent for HotRod? >>>>>>> >>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>> to a branch on Mircea repository: >>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>> Is this something I could/should use? >>>>>>> >>>>>>> Any help is appreciated. 
>>>>>>> >>>>>>> Thanks, >>>>>>> Davide >>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sanne at infinispan.org Tue Jan 21 11:08:57 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 21 Jan 2014 16:08:57 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: Hi Mircea, could you explain how Grouping is different than AtomicMaps ? 
I understand you're all suggesting to move to AtomicMaps as "the implementation is better" but is that an implementation detail, or how is it inherently different so that we can build something more reliable on it? >From the limited knowledge I have in this area, I have been assuming - since they have very similar properties - that this was essentially a different syntax to get to the same semantics but obviously I'm wrong. It would be especially helpfull to have a clear comparison on the different semantics in terms of transactions, atomicity and visibility of state across the three kinds: AtomicMaps, FineGrainedAtomicMaps, Grouping. Let's also keep in mind that Hibernate OGM uses a carefully selected combination of *both* AtomicMap and FGAM instances - depending on the desired semantics we want to achieve, so since those two where clearly different and we actually build on those differences - I'm not seeing how we could migrate two different things to the same construct without having to move "fishy locking details" out of Infinispan but in OGM, and I wouldn't be too happy with that as such logic would belong in Infinispan to provide. - Sanne On 21 January 2014 15:07, Mircea Markus wrote: > Hi Emmanuel, > > Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: > > RemoteClient.put(G g, K k, V v); //first param is the group > RemoteClinet.getGroup(G g) : Map; > > It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecated AtomicMaps and get suggest them being replaced with Grouping. > > This approach still has some limitations compared to the current embedded integration: > - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. 
> - you'd have to handle atomicity, potentially by retrying an operation > > What do you think? > > > On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: > >> >> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >> >>> It's an interesting approach that would work fine-ish for entities >>> assuming the Hot Rod client is multi threaded and assuming the client >>> uses Future to parallelize the calls. >> >> The Java Hotrod client is both multithreaded and exposes an Async API. >> >>> >>> But it won't work for associations as we have them designed today. >>> Each association - or more precisely the query results to go from an >>> entity A1 to the list of entities B associated to it - is represented by >>> an AtomicMap. >>> Each entry in this map does correspond to an entry in the association. >>> >>> While we can "guess" the column names and build from the metadata the >>> list of composed keys for entities, we cannot do the same for >>> associations as the key is literally the (composite) id of the >>> association and we cannot guess that most of the time (we can in very >>> pathological cases). >>> We could imagine that we list the association row keys in a special >>> entry to work around that but this approach is just as problematic and >>> is conceptually the same. >>> The only solution would be to lock the whole association for each >>> operation and I guess impose some versioning / optimistic lock. >>> >>> That is not a pattern that scales sufficiently from my experience. >> >> I think so too :-) >> >>> That's the problem with interconnected data :) >>> >>> Emmanuel >>> >>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>> Neither the grouping API nor the AtomicMap work over hotrod. >>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? 
>>>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >>>> >>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>> >>>>> Someone mentioned the grouping API as some sort of alternative to >>>>> AtomicMap. Maybe we should use that? >>>>> Note that if we don't have a fine-grained approach we will need to >>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>> proper transaction isolation. >>>>> >>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>> On the transaction side, we can start without them. >>>>>> >>>>>> +1 on omitting transactions for now. >>>>>> >>>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>> vs embedded APIs. >>>>>> >>>>>> I know the embedded version relies on batching/transactions, but I >>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>> Hot Rod? >>>>>> >>>>>> Sanne >>>>>> >>>>>>> >>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>> Hi, >>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>> >>>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>>> logic. >>>>>>>> At the moment I'm having two problems: >>>>>>>> >>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>>> an equivalent for HotRod? >>>>>>>> >>>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>>> to a branch on Mircea repository: >>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>> Is this something I could/should use? >>>>>>>> >>>>>>>> Any help is appreciated. 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> Davide >>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> Cheers, >>>> -- >>>> Mircea Markus >>>> Infinispan lead (www.infinispan.org) >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Tue Jan 21 11:45:52 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 16:45:52 +0000 Subject: [infinispan-dev] Store as binary In-Reply-To: References: <52D92AC4.7080701@redhat.com> <52DCF101.3020903@infinispan.org> <87020416-72D3-412E-818B-A7F9161355CC@redhat.com> 
<52DCF70C.4090404@infinispan.org> <52DD4534.7080209@redhat.com> Message-ID: <68B26C2A-389B-4C0A-A3C6-DBE3B0526DAC@redhat.com> On Jan 21, 2014, at 2:13 PM, Sanne Grinovero wrote: > On 21 January 2014 13:37, Mircea Markus wrote: >> >> On Jan 21, 2014, at 1:21 PM, Galder Zamarreño wrote: >> >>>> What's the point for these tests? >>> >>> +1 >> >> To validate if storing the data in binary format yields better performance than storing it as a POJO. > That will highly depend on the scenarios you want to test for. AFAIK > this started after Paul described how session replication works in > WildFly, and we already know that both strategies are suboptimal with > the current options available: in his case the active node will always > write on the POJO, while the backup node will essentially only need to > store the buffer "just in case" he might need to take over. Indeed as it is today, it doesn't make sense for WildFly's session replication. > > Sure, one will be slower, but if you want to make a suggestion to him > about which configuration he should be using, we should measure his > use case, not a different one. > > Even then as discussed in Palma, an in memory String representation > might be way more compact because of pooling of strings and a very > high likelihood for repeated headers (as common in web frameworks), Pooling like in String.intern()? Even so, if most of your access to the String is to serialize it and send it remotely, then you have a serialization cost (CPU) to pay for the reduced size. > so > you might want to measure the CPU vs storage cost on the receiving > side.. but then again your results will definitely depend on the input > data and assumptions on likelihood of failover, how often is being > written on the owner node vs on the other node (since he uses > locality), etc.. many factors I'm not seeing being considered here and > which could make a significant difference.
I'm looking for the default setting of storeAsBinary in the configurations we ship. I think the default configs should be optimized for distribution, random key access (every read/write for any key executes on every node of the cluster with the same probability) for both reads and writes. > >> As of now, it doesn't, so I need to check why. > > You could play with the test parameters until it produces an output > you like better, but I still see no point? The point is to provide the best default params for the default config, and see what the usefulness of storeAsBinary is. > This is not a realistic > scenario, at best it could help us document suggestions about which > scenarios you'd want to keep the option enabled vs disabled, but then > again I think we're wasting time as we could implement a better > strategy for Paul's use case: one which never deserializes a value > received from a remote node until it's been requested as a POJO, but > keeps the POJO as-is when it's stored locally. I disagree: Paul's scenario, whilst very important, is quite specific. For what I consider the general case (random key access, see above), your approach is suboptimal. > I believe that would > make sense also for OGM and probably most other users of Embedded. > Basically, that would re-implement > something similar to the previous > design but simplifying it a bit so that it doesn't allow for a > back-and-forth conversion between storage types but rather dynamically > favors a specific storage strategy. It all boils down to what we want to optimize for: random key access or some degree of affinity. I think the former is the default.
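[Editor's note] For readers trying to reproduce the measurements: the flag being debated is set per cache. Below is a minimal sketch of enabling it through the programmatic configuration API of the Infinispan 5.x/6.x line discussed in this thread; the builder method names are recalled from that era's API and may differ slightly between versions, so treat this as illustrative rather than authoritative.

```java
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Illustrative sketch: enable storeAsBinary for both keys and values,
// matching the "^" variant in Radim's report. Method names as recalled
// from the Infinispan 5.x/6.x builder API; verify against your version.
public class StoreAsBinaryConfigSketch {
    public static Configuration build() {
        return new ConfigurationBuilder()
                .storeAsBinary()
                    .enable()                 // keep entries in serialized form
                    .storeKeysAsBinary(true)  // the "^" case: keys too
                    .storeValuesAsBinary(true)
                .build();
    }
}
```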
One way or the other, from the test Radim ran with random key access, the storeAsBinary doesn't bring any benefit and it should: http://lists.jboss.org/pipermail/infinispan-dev/2009-October/004299.html > > Cheers, > Sanne > >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Tue Jan 21 11:57:50 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 16:57:50 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: <33D7BCA3-3F43-4F00-BA27-FFE5B2A07FE2@redhat.com> On Jan 21, 2014, at 4:08 PM, Sanne Grinovero wrote: > Hi Mircea, > could you explain how Grouping is different than AtomicMaps ? Here's the original thread where this has been discussed: http://goo.gl/WNs6KY I would add to that that the AtomicMap requires transactions, which grouping doesn't. Also in the context of hotrod (i.e. this email thread) FGAM is a structure that would be harder to migrate over > I understand you're all suggesting to move to AtomicMaps as "the > implementation is better" we're suggesting to move from the AM to grouping > but is that an implementation detail, or how > is it inherently different so that we can build something more > reliable on it? 
They both are doing pretty much the same thing, so it's more a matter of choosing one instead of the other. Grouping fits way nicer into the picture, both as a concept and the implementation. > >> From the limited knowledge I have in this area, I have been assuming - > since they have very similar properties - that this was essentially a > different syntax to get to the same semantics but obviously I'm wrong. > > It would be especially helpfull to have a clear comparison on the > different semantics in terms of transactions, atomicity and visibility > of state across the three kinds: AtomicMaps, FineGrainedAtomicMaps, > Grouping. > > Let's also keep in mind that Hibernate OGM uses a carefully selected > combination of *both* AtomicMap and FGAM instances - depending on the > desired semantics we want to achieve, so since those two where clearly > different and we actually build on those differences - I'm not seeing > how we could migrate two different things to the same construct > without having to move "fishy locking details" out of Infinispan but > in OGM, and I wouldn't be too happy with that as such logic would > belong in Infinispan to provide. I wasn't aware that OGM still uses AtomicMap, but the only case in which I imagine that would be useful is in order to force a lock on the whole AtomicMap. Is that so or some other aspect that I'm missing? > > - Sanne > > > On 21 January 2014 15:07, Mircea Markus wrote: >> Hi Emmanuel, >> >> Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: >> >> RemoteClient.put(G g, K k, V v); //first param is the group >> RemoteClinet.getGroup(G g) : Map; >> >> It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecated AtomicMaps and get suggest them being replaced with Grouping. 
>> >> This approach still has some limitations compared to the current embedded integration: >> - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. >> - you'd have to handle atomicity, potentially by retrying an operation >> >> What do you think? >> >> >> On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: >> >>> >>> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >>> >>>> It's an interesting approach that would work fine-ish for entities >>>> assuming the Hot Rod client is multi threaded and assuming the client >>>> uses Future to parallelize the calls. >>> >>> The Java Hotrod client is both multithreaded and exposes an Async API. >>> >>>> >>>> But it won't work for associations as we have them designed today. >>>> Each association - or more precisely the query results to go from an >>>> entity A1 to the list of entities B associated to it - is represented by >>>> an AtomicMap. >>>> Each entry in this map does correspond to an entry in the association. >>>> >>>> While we can "guess" the column names and build from the metadata the >>>> list of composed keys for entities, we cannot do the same for >>>> associations as the key is literally the (composite) id of the >>>> association and we cannot guess that most of the time (we can in very >>>> pathological cases). >>>> We could imagine that we list the association row keys in a special >>>> entry to work around that but this approach is just as problematic and >>>> is conceptually the same. >>>> The only solution would be to lock the whole association for each >>>> operation and I guess impose some versioning / optimistic lock. >>>> >>>> That is not a pattern that scales sufficiently from my experience. >>> >>> I think so too :-) >>> >>>> That's the problem with interconnected data :) >>>> >>>> Emmanuel >>>> >>>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>>> Neither the grouping API nor the AtomicMap work over hotrod. 
>>>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>>>> For now I guess you can sacrifice performance and always send the entire object across on every update instead of only the deltas? >>>>> >>>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>>> >>>>>> Someone mentioned the grouping API as some sort of alternative to >>>>>> AtomicMap. Maybe we should use that? >>>>>> Note that if we don't have a fine-grained approach we will need to >>>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>>> proper transaction isolation. >>>>>> >>>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>>> On the transaction side, we can start without them. >>>>>>> >>>>>>> +1 on omitting transactions for now. >>>>>>> >>>>>>> And on the missing AtomicMaps, I hope Infinispan will want to implement it? >>>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>>> vs embedded APIs. >>>>>>> >>>>>>> I know the embedded version relies on batching/transactions, but I >>>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>>> Hot Rod? >>>>>>> >>>>>>> Sanne >>>>>>> >>>>>>>> >>>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>>> Hi, >>>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>>> >>>>>>>>> We already have a dialect for Infinispan and I'm trying to follow the same >>>>>>>>> logic. >>>>>>>>> At the moment I'm having two problems: >>>>>>>>> >>>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>>> AtomicMapLookup but these classes don't work with the RemoteCache. Is there >>>>>>>>> an equivalent for HotRod? >>>>>>>>> >>>>>>>>> 2) As far as I know HotRod does not support transactions. 
I've found a link >>>>>>>>> to a branch on Mircea repository: >>>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>>> Is this something I could/should use? >>>>>>>>> >>>>>>>>> Any help is appreciated. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Davide >>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> Cheers, >>>>> -- >>>>> Mircea Markus >>>>> Infinispan lead (www.infinispan.org) >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > 
_______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Tue Jan 21 12:20:11 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 17:20:11 +0000 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: References: Message-ID: On Jan 20, 2014, at 11:28 AM, Galder Zamarreño wrote: > Hi all, > > Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the F2F meeting [1]. It's complex and buggy, and we'd recommend people use the Grouping API instead [2]. The Grouping API would allow data to reside together, while the standard map API would apply per-key locking. > > We don't have a timeline for this yet, but we want to get as much feedback on the topic as possible so that we can evaluate the options. +1 There's been a good discussion on this topic: http://goo.gl/WNs6KY > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-3901 > [2] http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_grouping_api > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Tue Jan 21 13:22:28 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 21 Jan 2014 18:22:28 +0000 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> Message-ID: On Jan 15, 2014, at 
1:42 PM, Emmanuel Bernard wrote: > By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. > Do you have written detailed use cases somewhere for me to better understand what is really requested? IMO from a user perspective, being able to run queries spanning several caches simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. Besides the query API that would need to be extended to support accessing multiple caches, not sure what other APIs would need to be extended to take advantage of this? > > Emmanuel > > On 14 Jan 2014, at 12:59, Sanne Grinovero wrote: > >> Up this: it was proposed again today at a face-to-face meeting. >> Apparently multiple parties have been asking to be able to run >> cross-cache queries. >> >> Sanne >> >> On 11 April 2012 12:47, Emmanuel Bernard wrote: >>> >>> On 10 Apr 2012, at 19:10, Sanne Grinovero wrote: >>> >>>> Hello all, >>>> currently Infinispan Query is an interceptor registering on the >>>> specific Cache instance which has indexing enabled; one such >>>> interceptor is doing all what it needs to do in the sole scope of the >>>> cache it was registered in. >>>> >>>> If you enable indexing - for example - on 3 different caches, there >>>> will be 3 different Hibernate Search engines started in background, >>>> and they are all unaware of each other. >>>> >>>> After some design discussions with Ales for CapeDwarf, but also >>>> calling attention on something that bothered me since some time, I'd >>>> evaluate the option to have a single Hibernate Search Engine >>>> registered in the CacheManager, and have it shared across indexed >>>> caches. 
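The cross-cache query idea above can be illustrated with a naive merge over several caches, modelled here as plain Maps keyed by cache name. This is only an assumption-laden sketch of the merged-result shape; a real implementation would push the predicate down to the per-cache indexes rather than scan values:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Naive "unified query": evaluate one predicate across every cache and merge hits.
class CrossCacheQuery {
    static <K, V> List<V> across(Map<String, Map<K, V>> cachesByName, Predicate<V> p) {
        List<V> hits = new ArrayList<>();
        for (Map<K, V> cache : cachesByName.values()) {
            for (V value : cache.values()) {
                if (p.test(value)) {
                    hits.add(value); // the value matched, regardless of which cache held it
                }
            }
        }
        return hits;
    }
}
```

Tracking which cache produced each hit - for instance via a composite identifier like {PK, cacheName}, one of the options floated in the thread - would be the natural next step.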
>>>> >>>> Current design limitations: >>>> >>>> A- If they are all configured to use the same base directory to >>>> store indexes, and happen to have same-named indexes, they'll share >>>> the index without being aware of each other. This is going to break >>>> unless the user configures some tricky parameters, and even so >>>> performance won't be great: instances will lock each other out, or at >>>> best write in alternate turns. >>>> B- The search engine isn't particularly "heavy", still it would be >>>> nice to share some components and internal services. >>>> C- Configuration details which need some care - like injecting a >>>> JGroups channel for clustering - needs to be done right isolating each >>>> instance (so large parts of configuration would be quite similar but >>>> not totally equal) >>>> D- Incoming messages into a JGroups Receiver need to be routed not >>>> only among indexes, but also among Engine instances. This prevents >>>> Query to reuse code from Hibernate Search. >>>> >>>> Problems with a unified Hibernate Search Engine: >>>> >>>> 1#- Isolation of types / indexes. If the same indexed class is >>>> stored in different (indexed) caches, they'll share the same index. Is >>>> it a problem? I'm tempted to consider this a good thing, but wonder if >>>> it would surprise some users. Would you expect that? >>> >>> I would not expect that. Unicity in Hibernate Search is not defined per identity but per class + provided id. >>> I can see people reusing the same class as partial DTO and willing to index that. I can even see people >>> using the Hibernate Search programmatic API to index the "DTO" stored in cache 2 differently than the >>> domain class stored in cache 1. >>> I can concede that I am pushing a bit the use case towards bad-ish design approaches. >>> >>>> 2#- configuration format overhaul: indexing options won't be set on >>>> the cache section but in the global section. 
I'm looking forward to >>>> use the schema extensions anyway to provide a better configuration >>>> experience than the current . >>>> 3#- Assuming 1# is fine, when a search hit is found I'd need to be >>>> able to figure out from which cache the value should be loaded. >>>> 3#A we could have the cache name encoded in the index, as part >>>> of the identifier: {PK,cacheName} >>>> 3#B we actually shard the index, keeping a physically separate >>>> index per cache. This would mean searching on the joint index view but >>>> extracting hits from specific indexes to keep track of "which index".. >>>> I think we can do that but it's definitely tricky. >>>> >>>> It's likely easier to keep indexed values from different caches in >>>> different indexes. that would mean to reject #1 and mess with the user >>>> defined index name, to add for example the cache name to the user >>>> defined string. >>>> >>>> Any comment? >>>> >>>> Cheers, >>>> Sanne >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From an1310 at hotmail.com Tue Jan 21 13:39:41 2014 From: an1310 at hotmail.com (Erik Salter) Date: Tue, 21 Jan 2014 13:39:41 -0500 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: <52DD0961.90600@infinispan.org> References: 
<52DD0961.90600@infinispan.org> Message-ID: Please don't remove the Delta stuff. That's quite useful, especially for large collections. Erik -----Original Message----- From: infinispan-dev-bounces at lists.jboss.org [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Pedro Ruivo Sent: Monday, January 20, 2014 6:33 AM To: infinispan-dev at lists.jboss.org Subject: Re: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap Hi, On 01/20/2014 11:28 AM, Galder Zamarreño wrote: > Hi all, > > Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the F2F meeting [1]. It's complex and buggy, and we'd recommend people use the Grouping API instead [2]. The Grouping API would allow data to reside together, while the standard map API would apply per-key locking. +1. Are we going to drop the Delta stuff? > > We don't have a timeline for this yet, but we want to get as much feedback on the topic as possible so that we can evaluate the options. Before starting with it, I would recommend adding the following method to the cache API: /** * returns all the keys and values associated with the group name. The Map is immutable (i.e.
read-only) */ Map getGroup(String groupName); Cheers, Pedro > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-3901 > [2] > http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_group > ing_api > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From vblagoje at redhat.com Tue Jan 21 15:42:55 2014 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Tue, 21 Jan 2014 15:42:55 -0500 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: References: <52DD0961.90600@infinispan.org> Message-ID: <52DEDBCF.7030204@redhat.com> I agree with Erik here. Deltas are used in M/R and I've never detected any problems so far. On 1/21/2014, 1:39 PM, Erik Salter wrote: > Please don't remove the Delta stuff. That's quite useful, especially for > large collections. > > Erik > From mmarkus at redhat.com Wed Jan 22 07:45:49 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 22 Jan 2014 12:45:49 +0000 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: <52DEDBCF.7030204@redhat.com> References: <52DD0961.90600@infinispan.org> <52DEDBCF.7030204@redhat.com> Message-ID: <1EB0E9C8-AFD2-4172-874F-25BC2B12C6C4@redhat.com> On Jan 21, 2014, at 8:42 PM, Vladimir Blagojevic wrote: > I agree with Erik here. Deltas are used in M/R and I've never detected > any problems so far. > On 1/21/2014, 1:39 PM, Erik Salter wrote: >> Please don't remove the Delta stuff. That's quite useful, especially for >> large collections. +1 to keep DeltaAware. 
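The delta replication that Erik and Vladimir want kept can be sketched as follows: instead of shipping a large collection wholesale on every update, only the changed and removed entries travel and are merged into the remote copy. The class and method names here are illustrative only, not Infinispan's DeltaAware API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A delta carries only the mutations made since the last replication.
class MapDelta<K, V> {
    final Map<K, V> updated = new HashMap<>();
    final Set<K> removed = new HashSet<>();

    void put(K key, V value) { updated.put(key, value); removed.remove(key); }
    void remove(K key) { removed.add(key); updated.remove(key); }

    // Applied on the receiving node to bring its copy of the collection up to date.
    void mergeInto(Map<K, V> target) {
        for (K key : removed) target.remove(key);
        target.putAll(updated);
    }
}
```

For a collection of N entries with one changed entry, this ships one entry instead of N, which is why deltas matter especially for large collections.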
Thanks for the feedback >> >> Erik >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From emmanuel at hibernate.org Wed Jan 22 08:26:45 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 22 Jan 2014 14:26:45 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: Conceptually I like the grouping API better than AtomicMap as I don't have to rely on a specific Infinispan type. We do use FineGrainedAtomicMap both for the entity and the association persistence (not AtomicMap). It is particularly critical for how we store the association navigation information. I don't want one update to literally prevent the whole association from being updated. This is the same semantics an RDBMS has and that's why Manik and I designed the FGAM requirements. So my question is what are the differences between the grouping API and the FGAM in particular for: - the amount of data sent back and forth (seems like grouping is sending the data naturally per key as "delta compared to the group" - the locking level when a new entry is added to the FGAM / Grouping API - the locking level when a new entry is removed from the FGAM / Grouping API - the locking level when a new entry is updated in the FGAM / Grouping API - the overall network verbosity - does grouping offer the same repeatable read protection that AtomicMap offers within a transaction? I think retrying as a transaction workaround is quite fragile. We can offer it as a solution but supporting or encouraging it is another story. 
Unless each OGM node behaves like a transaction, but that would be wrong. I am also concerned about reading data from a group that is inconsistent. Emmanuel On 21 Jan 2014, at 16:07, Mircea Markus wrote: > Hi Emmanuel, > > Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: > > RemoteClient.put(G g, K k, V v); //first param is the group > RemoteClient.getGroup(G g) : Map; > > It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecate AtomicMaps and suggest replacing them with Grouping. > > This approach still has some limitations compared to the current embedded integration: > - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. > - you'd have to handle atomicity, potentially by retrying an operation > > What do you think? > > > On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: > >> >> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >> >>> It's an interesting approach that would work fine-ish for entities >>> assuming the Hot Rod client is multi threaded and assuming the client >>> uses Future to parallelize the calls. >> >> The Java Hotrod client is both multithreaded and exposes an Async API. >> >>> >>> But it won't work for associations as we have them designed today. >>> Each association - or more precisely the query results to go from an >>> entity A1 to the list of entities B associated to it - is represented by >>> an AtomicMap. >>> Each entry in this map does correspond to an entry in the association. 
>>> >>> While we can "guess" the column names and build from the metadata the >>> list of composed keys for entities, we cannot do the same for >>> associations as the key is literally the (composite) id of the >>> association and we cannot guess that most of the time (we can in very >>> pathological cases). >>> We could imagine that we list the association row keys in a special >>> entry to work around that but this approach is just as problematic and >>> is conceptually the same. >>> The only solution would be to lock the whole association for each >>> operation and I guess impose some versioning / optimistic lock. >>> >>> That is not a pattern that scales sufficiently from my experience. >> >> I think so too :-) >> >>> That's the problem with interconnected data :) >>> >>> Emmanuel >>> >>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>> Neither the grouping API nor the AtomicMap work over hotrod. >>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >>>> >>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>> >>>>> Someone mentioned the grouping API as some sort of alternative to >>>>> AtomicMap. Maybe we should use that? >>>>> Note that if we don't have a fine-grained approach we will need to >>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>> proper transaction isolation. >>>>> >>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>> On the transaction side, we can start without them. >>>>>> >>>>>> +1 on omitting transactions for now. >>>>>> >>>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? 
>>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>> vs embedded APIs. >>>>>> >>>>>> I know the embedded version relies on batching/transactions, but I >>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>> Hot Rod? >>>>>> >>>>>> Sanne >>>>>> >>>>>>> >>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>> Hi, >>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>> >>>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>>> logic. >>>>>>>> At the moment I'm having two problems: >>>>>>>> >>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>>> an equivalent for HotRod? >>>>>>>> >>>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>>> to a branch on Mircea repository: >>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>> Is this something I could/should use? >>>>>>>> >>>>>>>> Any help is appreciated. 
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> Davide >>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> Cheers, >>>> -- >>>> Mircea Markus >>>> Infinispan lead (www.infinispan.org) >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Wed Jan 22 08:33:58 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 22 Jan 2014 14:33:58 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> 
<24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: BTW query support on groups (by entry of each group) is an interesting non covered use case today. On 22 Jan 2014, at 14:26, Emmanuel Bernard wrote: > Conceptually I like the grouping API better than AtomicMap as I don?t have to rely on a specific Infinispan type. > > We do use FineGrainedAtomicMap both for the entity and the association persistence (not AtomicMap). It is particularly critical for how we store the association navigation information. I don?t want one update to literally prevent the whole association from being updated. This is the same semantic a RDBMS has and that?s why Manik and I designed the FGAM requirements. > > So my question is what are the differences between the grouping API and the FGAM in particular for: > > - the amount of data sent back and forth (seems like grouping is sending the data naturally per key as ?delta compared to the group" > - the locking level when a new entry is added to the FGAM / Grouping API > - the locking level when a new entry is removed to the FGAM / Grouping API > - the locking level when a new entry is updated to the FGAM / Grouping API > - the overall network verbosity > - does grouping offer the same repeatable read protection that AtomicMap offers within a transaction? > > I think retrying as a transaction workaround is quite fragile. We can offer it as a solution but supporting or encouraging it is another story. Unless each OGM nodes do behave like a transaction but that would be wrong. I am also concerned about reading data form a group that are inconsistent. 
> > Emmanuel > > On 21 Jan 2014, at 16:07, Mircea Markus wrote: > >> Hi Emmanuel, >> >> Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: >> >> RemoteClient.put(G g, K k, V v); //first param is the group >> RemoteClinet.getGroup(G g) : Map; >> >> It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecated AtomicMaps and get suggest them being replaced with Grouping. >> >> This approach still has some limitations compared to the current embedded integration: >> - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. >> - you'd have to handle atomicity, potentially by retrying an operation >> >> What do you think? >> >> >> On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: >> >>> >>> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >>> >>>> It's an interesting approach that would work fine-ish for entities >>>> assuming the Hot Rod client is multi threaded and assuming the client >>>> uses Future to parallelize the calls. >>> >>> The Java Hotrod client is both multithreaded and exposes an Async API. >>> >>>> >>>> But it won't work for associations as we have them designed today. >>>> Each association - or more precisely the query results to go from an >>>> entity A1 to the list of entities B associated to it - is represented by >>>> an AtomicMap. >>>> Each entry in this map does correspond to an entry in the association. >>>> >>>> While we can "guess" the column names and build from the metadata the >>>> list of composed keys for entities, we cannot do the same for >>>> associations as the key is literally the (composite) id of the >>>> association and we cannot guess that most of the time (we can in very >>>> pathological cases). 
>>>> We could imagine that we list the association row keys in a special >>>> entry to work around that but this approach is just as problematic and >>>> is conceptually the same. >>>> The only solution would be to lock the whole association for each >>>> operation and I guess impose some versioning / optimistic lock. >>>> >>>> That is not a pattern that scales sufficiently from my experience. >>> >>> I think so too :-) >>> >>>> That's the problem with interconnected data :) >>>> >>>> Emmanuel >>>> >>>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>>> Neither the grouping API nor the AtomicMap work over hotrod. >>>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>>>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >>>>> >>>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>>> >>>>>> Someone mentioned the grouping API as some sort of alternative to >>>>>> AtomicMap. Maybe we should use that? >>>>>> Note that if we don't have a fine-grained approach we will need to >>>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>>> proper transaction isolation. >>>>>> >>>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>>> On the transaction side, we can start without them. >>>>>>> >>>>>>> +1 on omitting transactions for now. >>>>>>> >>>>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >>>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>>> vs embedded APIs. >>>>>>> >>>>>>> I know the embedded version relies on batching/transactions, but I >>>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>>> Hot Rod? 
>>>>>>> >>>>>>> Sanne >>>>>>> >>>>>>>> >>>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>>> Hi, >>>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>>> >>>>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>>>> logic. >>>>>>>>> At the moment I'm having two problems: >>>>>>>>> >>>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>>>> an equivalent for HotRod? >>>>>>>>> >>>>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>>>> to a branch on Mircea repository: >>>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>>> Is this something I could/should use? >>>>>>>>> >>>>>>>>> Any help is appreciated. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Davide >>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> Cheers, >>>>> -- >>>>> Mircea Markus >>>>> Infinispan lead (www.infinispan.org) >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> 
https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Wed Jan 22 08:41:11 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 22 Jan 2014 14:41:11 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: On the other hand if the group feature adds a way to: - get all the keys, - get a subset of the keys based on a filter Infinispan will be able to start supporting use cases restricted to Cassandra in the past (esp around time series). I am assuming groups offer offer a way to add / change and remove a key from / to a group without having to load all the group or even the group keys. On 22 Jan 2014, at 14:33, Emmanuel Bernard wrote: > BTW query support on groups (by entry of each group) is an interesting non covered use case today. > > On 22 Jan 2014, at 14:26, Emmanuel Bernard wrote: > >> Conceptually I like the grouping API better than AtomicMap as I don?t have to rely on a specific Infinispan type. 
>> >> We do use FineGrainedAtomicMap both for the entity and the association persistence (not AtomicMap). It is particularly critical for how we store the association navigation information. I don't want one update to literally prevent the whole association from being updated. This is the same semantic an RDBMS has and that's why Manik and I designed the FGAM requirements. >> >> So my question is what are the differences between the grouping API and the FGAM in particular for: >> >> - the amount of data sent back and forth (seems like grouping is sending the data naturally per key as "delta compared to the group" >> - the locking level when a new entry is added to the FGAM / Grouping API >> - the locking level when an entry is removed from the FGAM / Grouping API >> - the locking level when an entry is updated in the FGAM / Grouping API >> - the overall network verbosity >> - does grouping offer the same repeatable read protection that AtomicMap offers within a transaction? >> >> I think retrying as a transaction workaround is quite fragile. We can offer it as a solution but supporting or encouraging it is another story. Unless each OGM node behaves like a transaction but that would be wrong. I am also concerned about reading inconsistent data from a group. >> >> Emmanuel >> >> On 21 Jan 2014, at 16:07, Mircea Markus wrote: >> >>> Hi Emmanuel, >>> >>> Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: >>> >>> RemoteClient.put(G g, K k, V v); //first param is the group >>> RemoteClient.getGroup(G g) : Map; >>> >>> It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecate AtomicMaps and suggest they be replaced with Grouping. 
>>> >>> This approach still has some limitations compared to the current embedded integration: >>> - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. >>> - you'd have to handle atomicity, potentially by retrying an operation >>> >>> What do you think? >>> >>> >>> On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: >>> >>>> >>>> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >>>> >>>>> It's an interesting approach that would work fine-ish for entities >>>>> assuming the Hot Rod client is multi threaded and assuming the client >>>>> uses Future to parallelize the calls. >>>> >>>> The Java Hotrod client is both multithreaded and exposes an Async API. >>>> >>>>> >>>>> But it won't work for associations as we have them designed today. >>>>> Each association - or more precisely the query results to go from an >>>>> entity A1 to the list of entities B associated to it - is represented by >>>>> an AtomicMap. >>>>> Each entry in this map does correspond to an entry in the association. >>>>> >>>>> While we can "guess" the column names and build from the metadata the >>>>> list of composed keys for entities, we cannot do the same for >>>>> associations as the key is literally the (composite) id of the >>>>> association and we cannot guess that most of the time (we can in very >>>>> pathological cases). >>>>> We could imagine that we list the association row keys in a special >>>>> entry to work around that but this approach is just as problematic and >>>>> is conceptually the same. >>>>> The only solution would be to lock the whole association for each >>>>> operation and I guess impose some versioning / optimistic lock. >>>>> >>>>> That is not a pattern that scales sufficiently from my experience. 
>>>> >>>> I think so too :-) >>>> >>>>> That's the problem with interconnected data :) >>>>> >>>>> Emmanuel >>>>> >>>>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>>>> Neither the grouping API nor the AtomicMap work over hotrod. >>>>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>>>>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >>>>>> >>>>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>>>> >>>>>>> Someone mentioned the grouping API as some sort of alternative to >>>>>>> AtomicMap. Maybe we should use that? >>>>>>> Note that if we don't have a fine-grained approach we will need to >>>>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>>>> proper transaction isolation. >>>>>>> >>>>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>>>> On the transaction side, we can start without them. >>>>>>>> >>>>>>>> +1 on omitting transactions for now. >>>>>>>> >>>>>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >>>>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>>>> vs embedded APIs. >>>>>>>> >>>>>>>> I know the embedded version relies on batching/transactions, but I >>>>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>>>> Hot Rod? >>>>>>>> >>>>>>>> Sanne >>>>>>>> >>>>>>>>> >>>>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>>>> Hi, >>>>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>>>> >>>>>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>>>>> logic. 
>>>>>>>>>> At the moment I'm having two problems: >>>>>>>>>> >>>>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>>>>> an equivalent for HotRod? >>>>>>>>>> >>>>>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>>>>> to a branch on Mircea repository: >>>>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>>>> Is this something I could/should use? >>>>>>>>>> >>>>>>>>>> Any help is appreciated. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Davide >>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> Cheers, >>>>>> -- >>>>>> Mircea Markus >>>>>> Infinispan lead (www.infinispan.org) >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> Cheers, >>>> -- >>>> 
Mircea Markus >>>> Infinispan lead (www.infinispan.org) >>>> >>>> >>>> >>>> >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Wed Jan 22 08:48:10 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 22 Jan 2014 13:48:10 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: On Jan 22, 2014, at 1:26 PM, Emmanuel Bernard wrote: > Conceptually I like the grouping API better than AtomicMap as I don't have to rely on a specific Infinispan type. > > We do use FineGrainedAtomicMap both for the entity and the association persistence (not AtomicMap). So you don't use the AtomicMap (vs. FGAM) at all? Is there any place in which you require a lock on the whole map to be acquired? > It is particularly critical for how we store the association navigation information. I don't want one update to literally prevent the whole association from being updated. This is the same semantic an RDBMS has and that's why Manik and I designed the FGAM requirements. 
> > So my question is what are the differences between the grouping API and the FGAM in particular for: > > - the amount of data sent back and forth (seems like grouping is sending the data naturally per key as "delta compared to the group" - we'll only send the (group, key, value) for every group write. The same amount of info is sent for an FGAM.put > - the locking level when a new entry is added to the FGAM / Grouping API > - the locking level when an entry is removed from the FGAM / Grouping API > - the locking level when an entry is updated in the FGAM / Grouping API - in the case of grouping the lock object is the tuple (group, key), so the lock granularity is the same as FGAM, which under the hood builds a synthetic lock object based on (FGAM, innerKey) > - the overall network verbosity - same > - does grouping offer the same repeatable read protection that AtomicMap offers within a transaction? - yes, and it actually has clearer semantics, as the grouping API is entirely built on top of the basic cache operations, instead of being a first class citizen with its own transaction semantics. > > I think retrying as a transaction workaround is quite fragile. We can offer it as a solution but supporting or encouraging it is another story. Unless each OGM node behaves like a transaction but that would be wrong. I am also concerned about reading inconsistent data from a group. The grouping API I'm suggesting offers a nicer alternative to the (FG)AM approach that would also work over HotRod. ATM there's no TX support for HotRod, so it seems that to support the HotRod and OGM integration fully we'd need HR transactions as well. Do you think the integration can be done in steps: 1. add grouping over hotrod and integrate with OGM 2. add tx and make the integration use it Or should we wait till 2. and then proceed with the integration? 
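The (group, key) addressing described above can be sketched in a few lines. This is hypothetical code, not Infinispan's — GroupedKey and GroupingCache are invented names — and it only models the idea that entries live under a composite (group, key), so the lock/contention unit is one entry of a group, the same granularity FGAM gets from its synthetic (map, innerKey) lock object:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Composite address of one entry inside a group.
final class GroupedKey<G, K> {
    final G group;
    final K key;
    GroupedKey(G group, K key) { this.group = group; this.key = key; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof GroupedKey)) return false;
        GroupedKey<?, ?> other = (GroupedKey<?, ?>) o;
        return group.equals(other.group) && key.equals(other.key);
    }
    @Override public int hashCode() { return 31 * group.hashCode() + key.hashCode(); }
}

// Toy in-memory model of the proposed put(G, K, V) / get(G, K) / getGroup(G).
final class GroupingCache<G, K, V> {
    private final ConcurrentHashMap<GroupedKey<G, K>, V> store = new ConcurrentHashMap<>();

    // Writers touching different keys of the same group hit different
    // map entries, i.e. per-entry (not per-group) contention.
    public void put(G g, K k, V v) { store.put(new GroupedKey<>(g, k), v); }

    public V get(G g, K k) { return store.get(new GroupedKey<>(g, k)); }

    // Collects every entry whose composite key carries g.
    public Map<K, V> getGroup(G g) {
        Map<K, V> result = new HashMap<>();
        store.forEach((gk, v) -> { if (gk.group.equals(g)) result.put(gk.key, v); });
        return result;
    }
}
```

Note that this naive getGroup scans the whole store; avoiding the scan is exactly where a per-group key index (and its maintenance cost on every put/remove) would come in.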
> > Emmanuel > > On 21 Jan 2014, at 16:07, Mircea Markus wrote: > >> Hi Emmanuel, >> >> Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: >> >> RemoteClient.put(G g, K k, V v); //first param is the group >> RemoteClinet.getGroup(G g) : Map; >> >> It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecated AtomicMaps and get suggest them being replaced with Grouping. >> >> This approach still has some limitations compared to the current embedded integration: >> - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. >> - you'd have to handle atomicity, potentially by retrying an operation >> >> What do you think? >> >> >> On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: >> >>> >>> On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >>> >>>> It's an interesting approach that would work fine-ish for entities >>>> assuming the Hot Rod client is multi threaded and assuming the client >>>> uses Future to parallelize the calls. >>> >>> The Java Hotrod client is both multithreaded and exposes an Async API. >>> >>>> >>>> But it won't work for associations as we have them designed today. >>>> Each association - or more precisely the query results to go from an >>>> entity A1 to the list of entities B associated to it - is represented by >>>> an AtomicMap. >>>> Each entry in this map does correspond to an entry in the association. >>>> >>>> While we can "guess" the column names and build from the metadata the >>>> list of composed keys for entities, we cannot do the same for >>>> associations as the key is literally the (composite) id of the >>>> association and we cannot guess that most of the time (we can in very >>>> pathological cases). 
>>>> We could imagine that we list the association row keys in a special >>>> entry to work around that but this approach is just as problematic and >>>> is conceptually the same. >>>> The only solution would be to lock the whole association for each >>>> operation and I guess impose some versioning / optimistic lock. >>>> >>>> That is not a pattern that scales sufficiently from my experience. >>> >>> I think so too :-) >>> >>>> That's the problem with interconnected data :) >>>> >>>> Emmanuel >>>> >>>> On Mon 2013-11-18 23:05, Mircea Markus wrote: >>>>> Neither the grouping API nor the AtomicMap work over hotrod. >>>>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >>>>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >>>>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >>>>> >>>>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >>>>> >>>>>> Someone mentioned the grouping API as some sort of alternative to >>>>>> AtomicMap. Maybe we should use that? >>>>>> Note that if we don't have a fine-grained approach we will need to >>>>>> make sure we *copy* the complex data structure upon reads to mimic >>>>>> proper transaction isolation. >>>>>> >>>>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >>>>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >>>>>>>> On the transaction side, we can start without them. >>>>>>> >>>>>>> +1 on omitting transactions for now. >>>>>>> >>>>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >>>>>>> Would be good to eventually converge on similar featuresets on remote >>>>>>> vs embedded APIs. >>>>>>> >>>>>>> I know the embedded version relies on batching/transactions, but I >>>>>>> guess we could obtain a similar effect with some ad-hoc commands in >>>>>>> Hot Rod? 
>>>>>>> >>>>>>> Sanne >>>>>>> >>>>>>>> >>>>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >>>>>>>>> Hi, >>>>>>>>> I'm working on the integration between HotRod and OGM. >>>>>>>>> >>>>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >>>>>>>>> logic. >>>>>>>>> At the moment I'm having two problems: >>>>>>>>> >>>>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >>>>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >>>>>>>>> an equivalent for HotRod? >>>>>>>>> >>>>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >>>>>>>>> to a branch on Mircea repository: >>>>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >>>>>>>>> Is this something I could/should use? >>>>>>>>> >>>>>>>>> Any help is appreciated. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Davide >>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> Cheers, >>>>> -- >>>>> Mircea Markus >>>>> Infinispan lead (www.infinispan.org) >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> 
https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From dan.berindei at gmail.com Wed Jan 22 08:58:06 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 22 Jan 2014 14:58:06 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: On Tue, Jan 21, 2014 at 4:07 PM, Mircea Markus wrote: > Hi Emmanuel, > > Just had a good chat with Davide on this and one solution to overcome the > shortcoming you mentioned in the above email would be to enhance the hotrod > client to support grouping: > > RemoteClient.put(G g, K k, V v); //first param is the group > RemoteClinet.getGroup(G g) : Map; > I think you'd also need RemoteClient.get(G g, K k), as in embedded mode the group is included in the key. > > It requires an enhancement on our local grouping API: > EmbeddedCache.getGroup(G). 
This is something useful for us in a broader > context, as it is the step needed to be able to deprecated AtomicMaps and > get suggest them being replaced with Grouping. > It would also require us to keep a Set for each group, with the keys associated with that group. As such, I'm not sure it would be a lot easier to implement (correctly) than FineGrainedAtomicMap. > This approach still has some limitations compared to the current embedded > integration: > - performance caused by the lack of transactions: this means increased TCP > chattiness between the Hot Rod client and the server. > - you'd have to handle atomicity, potentially by retrying an operation > > What do you think? > > > On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: > > > > > On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard > wrote: > > > >> It's an interesting approach that would work fine-ish for entities > >> assuming the Hot Rod client is multi threaded and assuming the client > >> uses Future to parallelize the calls. > > > > The Java Hotrod client is both multithreaded and exposes an Async API. > > > >> > >> But it won't work for associations as we have them designed today. > >> Each association - or more precisely the query results to go from an > >> entity A1 to the list of entities B associated to it - is represented by > >> an AtomicMap. > >> Each entry in this map does correspond to an entry in the association. > >> > >> While we can "guess" the column names and build from the metadata the > >> list of composed keys for entities, we cannot do the same for > >> associations as the key is literally the (composite) id of the > >> association and we cannot guess that most of the time (we can in very > >> pathological cases). > >> We could imagine that we list the association row keys in a special > >> entry to work around that but this approach is just as problematic and > >> is conceptually the same. 
> >> The only solution would be to lock the whole association for each > >> operation and I guess impose some versioning / optimistic lock. > >> > >> That is not a pattern that scales sufficiently from my experience. > > > > I think so too :-) > > > >> That's the problem with interconnected data :) > >> > >> Emmanuel > >> > >> On Mon 2013-11-18 23:05, Mircea Markus wrote: > >>> Neither the grouping API nor the AtomicMap work over hotrod. > >>> Between the grouping API and AtomicMap, I think the one that would > make more sense migrating is the grouping API. > >>> One way or the other, I think the hotrod protocol would require an > enhancement - mind raising a JIRA for that? > >>> For now I guess you can sacrifice performance and always sending the > entire object across on every update instead of only the deltas? > >>> > >>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard > wrote: > >>> > >>>> Someone mentioned the grouping API as some sort of alternative to > >>>> AtomicMap. Maybe we should use that? > >>>> Note that if we don't have a fine-grained approach we will need to > >>>> make sure we *copy* the complex data structure upon reads to mimic > >>>> proper transaction isolation. > >>>> > >>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: > >>>>> On 12 November 2013 14:54, Emmanuel Bernard > wrote: > >>>>>> On the transaction side, we can start without them. > >>>>> > >>>>> +1 on omitting transactions for now. > >>>>> > >>>>> And on the missing AtomicMaps, I hope the Infinispan will want to > implement it? > >>>>> Would be good to eventually converge on similar featuresets on remote > >>>>> vs embedded APIs. > >>>>> > >>>>> I know the embedded version relies on batching/transactions, but I > >>>>> guess we could obtain a similar effect with some ad-hoc commands in > >>>>> Hot Rod? > >>>>> > >>>>> Sanne > >>>>> > >>>>>> > >>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: > >>>>>>> Hi, > >>>>>>> I'm working on the integration between HotRod and OGM. 
> >>>>>>> > >>>>>>> We already have a dialect for Inifinispan and I'm trying to follow > the same > >>>>>>> logic. > >>>>>>> At the moment I'm having two problems: > >>>>>>> > >>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the > >>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. > Is there > >>>>>>> an equivalent for HotRod? > >>>>>>> > >>>>>>> 2) As far as I know HotRod does not support transactions. I've > found a link > >>>>>>> to a branch on Mircea repository: > >>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide > >>>>>>> Is this something I could/should use? > >>>>>>> > >>>>>>> Any help is appreciated. > >>>>>>> > >>>>>>> Thanks, > >>>>>>> Davide > >>>>>> > >>>>>>> _______________________________________________ > >>>>>>> infinispan-dev mailing list > >>>>>>> infinispan-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>> > >>>>>> _______________________________________________ > >>>>>> infinispan-dev mailing list > >>>>>> infinispan-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >>> Cheers, > >>> -- > >>> Mircea Markus > >>> Infinispan lead (www.infinispan.org) > >>> > >>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> 
https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > Cheers, > > -- > > Mircea Markus > > Infinispan lead (www.infinispan.org) > > > > > > > > > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140122/ff877f4e/attachment-0001.html From mmarkus at redhat.com Wed Jan 22 09:05:15 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 22 Jan 2014 09:05:15 -0500 (EST) Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: Sent from my iPhone > On 22 Jan 2014, at 13:58, Dan Berindei wrote: > > > > >> On Tue, Jan 21, 2014 at 4:07 PM, Mircea Markus wrote: >> Hi Emmanuel, >> >> Just had a good chat with Davide on this and one solution to overcome the shortcoming you mentioned in the above email would be to enhance the hotrod client to support grouping: >> >> RemoteClient.put(G g, K k, V v); //first param is the group >> RemoteClinet.getGroup(G g) : Map; > > I think you'd also need RemoteClient.get(G g, K k), as in embedded mode the group is included in the key. Yes > > >> >> It requires an enhancement on our local grouping API: EmbeddedCache.getGroup(G). This is something useful for us in a broader context, as it is the step needed to be able to deprecated AtomicMaps and get suggest them being replaced with Grouping. > > It would also require us to keep a Set for each group, with the keys associated with that group. 
As such, I'm not sure it would be a lot easier to implement (correctly) than FineGrainedAtomicMap. > >> >> This approach still has some limitations compared to the current embedded integration: >> - performance caused by the lack of transactions: this means increased TCP chattiness between the Hot Rod client and the server. >> - you'd have to handle atomicity, potentially by retrying an operation >> >> What do you think? >> >> >> On Dec 3, 2013, at 3:10 AM, Mircea Markus wrote: >> >> > >> > On Nov 19, 2013, at 10:22 AM, Emmanuel Bernard wrote: >> > >> >> It's an interesting approach that would work fine-ish for entities >> >> assuming the Hot Rod client is multi threaded and assuming the client >> >> uses Future to parallelize the calls. >> > >> > The Java Hotrod client is both multithreaded and exposes an Async API. >> > >> >> >> >> But it won't work for associations as we have them designed today. >> >> Each association - or more precisely the query results to go from an >> >> entity A1 to the list of entities B associated to it - is represented by >> >> an AtomicMap. >> >> Each entry in this map does correspond to an entry in the association. >> >> >> >> While we can "guess" the column names and build from the metadata the >> >> list of composed keys for entities, we cannot do the same for >> >> associations as the key is literally the (composite) id of the >> >> association and we cannot guess that most of the time (we can in very >> >> pathological cases). >> >> We could imagine that we list the association row keys in a special >> >> entry to work around that but this approach is just as problematic and >> >> is conceptually the same. >> >> The only solution would be to lock the whole association for each >> >> operation and I guess impose some versioning / optimistic lock. >> >> >> >> That is not a pattern that scales sufficiently from my experience. 
>> > >> > I think so too :-) >> > >> >> That's the problem with interconnected data :) >> >> >> >> Emmanuel >> >> >> >> On Mon 2013-11-18 23:05, Mircea Markus wrote: >> >>> Neither the grouping API nor the AtomicMap work over hotrod. >> >>> Between the grouping API and AtomicMap, I think the one that would make more sense migrating is the grouping API. >> >>> One way or the other, I think the hotrod protocol would require an enhancement - mind raising a JIRA for that? >> >>> For now I guess you can sacrifice performance and always sending the entire object across on every update instead of only the deltas? >> >>> >> >>> On Nov 18, 2013, at 9:56 AM, Emmanuel Bernard wrote: >> >>> >> >>>> Someone mentioned the grouping API as some sort of alternative to >> >>>> AtomicMap. Maybe we should use that? >> >>>> Note that if we don't have a fine-grained approach we will need to >> >>>> make sure we *copy* the complex data structure upon reads to mimic >> >>>> proper transaction isolation. >> >>>> >> >>>> On Tue 2013-11-12 15:14, Sanne Grinovero wrote: >> >>>>> On 12 November 2013 14:54, Emmanuel Bernard wrote: >> >>>>>> On the transaction side, we can start without them. >> >>>>> >> >>>>> +1 on omitting transactions for now. >> >>>>> >> >>>>> And on the missing AtomicMaps, I hope the Infinispan will want to implement it? >> >>>>> Would be good to eventually converge on similar featuresets on remote >> >>>>> vs embedded APIs. >> >>>>> >> >>>>> I know the embedded version relies on batching/transactions, but I >> >>>>> guess we could obtain a similar effect with some ad-hoc commands in >> >>>>> Hot Rod? >> >>>>> >> >>>>> Sanne >> >>>>> >> >>>>>> >> >>>>>> On Tue 2013-11-12 14:34, Davide D'Alto wrote: >> >>>>>>> Hi, >> >>>>>>> I'm working on the integration between HotRod and OGM. >> >>>>>>> >> >>>>>>> We already have a dialect for Inifinispan and I'm trying to follow the same >> >>>>>>> logic. 
>> >>>>>>> At the moment I'm having two problems: >> >>>>>>> >> >>>>>>> 1) In the Infinispan dialect we are using the AtomicMap and the >> >>>>>>> AtomicMapLookup but this classes don't work with the RemoteCache. Is there >> >>>>>>> an equivalent for HotRod? >> >>>>>>> >> >>>>>>> 2) As far as I know HotRod does not support transactions. I've found a link >> >>>>>>> to a branch on Mircea repository: >> >>>>>>> https://github.com/mmarkus/ops_over_hotrod/wiki/Usage-guide >> >>>>>>> Is this something I could/should use? >> >>>>>>> >> >>>>>>> Any help is appreciated. >> >>>>>>> >> >>>>>>> Thanks, >> >>>>>>> Davide >> >>>>>> >> >>>>>>> _______________________________________________ >> >>>>>>> infinispan-dev mailing list >> >>>>>>> infinispan-dev at lists.jboss.org >> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>>> >> >>>>>> _______________________________________________ >> >>>>>> infinispan-dev mailing list >> >>>>>> infinispan-dev at lists.jboss.org >> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >>> Cheers, >> >>> -- >> >>> Mircea Markus >> >>> Infinispan lead (www.infinispan.org) >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > Cheers, >> > -- >> > 
Mircea Markus >> > Infinispan lead (www.infinispan.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140122/5f4d5a98/attachment.html From pedro at infinispan.org Wed Jan 22 09:10:02 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 22 Jan 2014 14:10:02 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: <52DFD13A.4030500@infinispan.org> On 01/22/2014 01:58 PM, Dan Berindei wrote: > > > > It would also require us to keep a Set for each group, with the keys > associated with that group. As such, I'm not sure it would be a lot > easier to implement (correctly) than FineGrainedAtomicMap. > > Dan, I didn't understand why we need to keep a Set. Can you elaborate?
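The Set being debated here is essentially a per-group key index kept next to the data container, so that getGroup() can be a single lookup instead of a scan of every entry. A minimal, self-contained sketch of the idea (toy code with hypothetical names, not the actual Infinispan implementation; keys are assumed globally unique, with the group derivable from the key):

```java
import java.util.*;
import java.util.concurrent.*;

// Toy model: alongside the main data container, keep an index from group
// name to the keys belonging to that group, so getGroup() is one index
// lookup instead of a full container scan.
class GroupIndexedCache<K, V> {
    private final ConcurrentMap<K, V> data = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Set<K>> groupIndex = new ConcurrentHashMap<>();

    void put(String group, K key, V value) {
        data.put(key, value);
        groupIndex.computeIfAbsent(group, g -> ConcurrentHashMap.newKeySet()).add(key);
    }

    void remove(String group, K key) {
        data.remove(key);
        Set<K> keys = groupIndex.get(group);
        if (keys != null) keys.remove(key); // the index must stay in sync with the container
    }

    // One index lookup instead of iterating the entire data container.
    Map<K, V> getGroup(String group) {
        Map<K, V> result = new HashMap<>();
        for (K key : groupIndex.getOrDefault(group, Collections.emptySet())) {
            V value = data.get(key);
            if (value != null) result.put(key, value);
        }
        return result;
    }
}
```

The cost being alluded to is visible here: every put/remove must also touch groupIndex, and keeping the two structures consistent under concurrent writes (and with a cache store) is the hard part.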
From emmanuel at hibernate.org Wed Jan 22 09:11:13 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 22 Jan 2014 15:11:13 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: On 22 Jan 2014, at 14:48, Mircea Markus wrote: > > On Jan 22, 2014, at 1:26 PM, Emmanuel Bernard wrote: > >> Conceptually I like the grouping API better than AtomicMap as I don't have to rely on a specific Infinispan type. >> >> We do use FineGrainedAtomicMap both for the entity and the association persistence (not AtomicMap). > > So you don't use the AtomicMap(vs FGAM) at all? Is there any place in which you require a lock in the whole map to be acquired? I will be not right now. It seems that Sanne moved both entity and association to use the FGAM to lower the lock contention. I don't quite remember if that was fully intentional or a side effect. Intuitively, I'd see us use AM for entities but that's not the case. > >> It is particularly critical for how we store the association navigation information. I don't want one update to literally prevent the whole association from being updated. This is the same semantics an RDBMS has and that's why Manik and I designed the FGAM requirements. >> >> So my question is what are the differences between the grouping API and the FGAM in particular for: >> >> - the amount of data sent back and forth (seems like grouping is sending the data naturally per key as "delta compared to the group" > > - we'll only send the (group, key, value) for every group write.
The same amount of info is sent for an FGAM.put > >> - the locking level when a new entry is added to the FGAM / Grouping API >> - the locking level when a new entry is removed from the FGAM / Grouping API >> - the locking level when a new entry is updated in the FGAM / Grouping API > > - in the case of grouping the lock object is the tuple (group, key), so the lock granularity is the same as FGAM, which under the hood builds a synthetic lock object based on (FGAM, innerKey) Doesn't FGAM use the "group"-level lock when it creates / deletes the group? What about creating / deleting keys in the group (which have to be added to a list of keys in the group)? > >> - the overall network verbosity > > - same > >> - does grouping offer the same repeatable read protection that AtomicMap offers within a transaction? > > - yes, and it actually has clearer semantics, as the grouping API is entirely built on top of the basic cache operations, instead of being a first-class citizen with its own transaction semantics. > >> >> I think retrying as a transaction workaround is quite fragile. We can offer it as a solution but supporting or encouraging it is another story. Unless each OGM node behaves like a transaction, but that would be wrong. I am also concerned about reading data from a group that is inconsistent. > > The grouping API I'm suggesting offers a nicer alternative to the (FG)AM approach that would also work over HotRod. ATM there's no TX support for HotRod, so it seems that to support the HotRod and OGM integration fully we'd need HR transactions as well. Do you think the integration can be done in steps: > 1. add grouping over hotrod and integrate with OGM > 2. add tx and make the integration use it > > Or we should wait till 2. and then proceed with the integration? We can / should do it in two steps as long as we mark it as toy / non data safe in the documentation until step 2 is done. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140122/a54388cc/attachment-0001.html From emmanuel at hibernate.org Wed Jan 22 09:13:09 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 22 Jan 2014 15:13:09 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> Message-ID: <3E81704C-DB16-4A92-A3E1-4F485045F56E@hibernate.org> On 22 Jan 2014, at 15:11, Emmanuel Bernard wrote: >> So you don't use the AtomicMap(vs FGAM) at all? Is there any place in which you require a lock in the whole map to be acquired? > > I will be not right now. Hum, it should read: It will be. But not right now. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140122/183f9d52/attachment.html From mudokonman at gmail.com Thu Jan 23 12:48:09 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 23 Jan 2014 12:48:09 -0500 Subject: [infinispan-dev] New Cache Entry Notifications Message-ID: Hello all, I have been working with notifications and most recently I have come to look into events generated when a new entry is created. Now normally I would just expect a CacheEntryCreatedEvent to be raised. However we currently raise a CacheEntryModifiedEvent event and then a CacheEntryCreatedEvent. I notice that there are comments around the code saying that tests require both to be fired. I am wondering if anyone has an objection to only raising a CacheEntryCreatedEvent on a new cache entry being created. Does anyone know why we raise both currently? Was it just so the PutKeyValueCommand could more ignorantly just raise the CacheEntryModified pre Event? Any input would be appreciated, Thanks. 
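To make the described behaviour concrete, here is a toy model (deliberately not the real Infinispan notifier, and the names are made up) of what the thread says happens today: a put on a previously absent key raises a "modified" event first and then a "created" event:

```java
import java.util.*;

// Toy model of the behaviour under discussion (NOT the real Infinispan
// notification code): on every put the cache raises a "modified" event,
// and when the key did not exist before it additionally raises "created".
class ToyNotifyingCache {
    private final Map<String, String> store = new HashMap<>();
    final List<String> events = new ArrayList<>();

    void put(String key, String value) {
        boolean created = !store.containsKey(key);
        store.put(key, value);
        events.add("modified:" + key);    // raised unconditionally
        if (created) {
            events.add("created:" + key); // the extra event raised for a new entry
        }
    }
}
```

The proposal in the mail amounts to raising only "created:" for the new-entry case, dropping the unconditional "modified:".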
- Will From mmarkus at redhat.com Thu Jan 23 12:54:46 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 23 Jan 2014 17:54:46 +0000 Subject: [infinispan-dev] New Cache Entry Notifications In-Reply-To: References: Message-ID: <1FAA84A0-3AFE-4DA7-980F-AF1FB5725F5A@redhat.com> On Jan 23, 2014, at 5:48 PM, William Burns wrote: > Hello all, > > I have been working with notifications and most recently I have come > to look into events generated when a new entry is created. Now > normally I would just expect a CacheEntryCreatedEvent to be raised. > However we currently raise a CacheEntryModifiedEvent event and then a > CacheEntryCreatedEvent. I notice that there are comments around the > code saying that tests require both to be fired. it doesn't sound right to me: modified is different than created. > > I am wondering if anyone has an objection to only raising a > CacheEntryCreatedEvent on a new cache entry being created. Does > anyone know why we raise both currently? Was it just so the > PutKeyValueCommand could more ignorantly just raise the > CacheEntryModified pre Event? > > Any input would be appreciated, Thanks. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From dan.berindei at gmail.com Thu Jan 23 13:03:44 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 23 Jan 2014 19:03:44 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <52DFD13A.4030500@infinispan.org> References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> Message-ID: On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: > > > > On 01/22/2014 01:58 PM, Dan Berindei wrote: > > > > > > It would also require us to keep a Set for each group, with the keys > > associated with that group. 
As such, I'm not sure it would be a lot > > easier to implement (correctly) than FineGrainedAtomicMap. > > > > > > Dan, I didn't understand why do we need to keep a Set. Can you > elaborate? We'd need some way to keep track of the keys that are part of the group, iterating over the entire cache for every getGroup() call would be way too slow. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140123/ba406e3d/attachment.html From mmarkus at redhat.com Fri Jan 24 10:07:59 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 24 Jan 2014 15:07:59 +0000 Subject: [infinispan-dev] major features for Infinispan 7.0 Message-ID: <10E0080A-26BF-41CA-BB4A-31FF68F4FB8C@redhat.com> Hi, Just a heads up, the next Infinispan release will be Infinispan 7.0 and the major features we plan to add are:

Server:
- .NET hot rod client (HRCPP-122, Ion)
- Remote Events of HotRod (ISPN-374, Galder)
- authentication and authorization over HotRod (ISPN-3908, ISPN-3910, Tristan)
- configuration revamp (ISPN-3930, Galder)

Core:
- Map/Reduce enhancements (Vladimir)
  - parallel iteration of keys (ISPN-2284)
  - cache the results of mapping (scale out M/R)
  - consider a Hadoop M/R adaptor
- x-site state transfer (ISPN-2342, Pedro)
- controlled cluster shutdown with data restore from persistent storage (Dan, ISPN-3351)
- handling of cluster partitions (Mircea, ISPN-263)
- transaction improvements (ISPN-3927, Ion)
- clustered listeners (ISPN-3355, Will)
- authentication and authorization in embedded mode (ISPN-3909)

Query:
- execute queries on non-indexed fields (ISPN-3917, Adrian)
- stabilize remote querying (performance, bug fixing) (Adrian, Sanne)

The target date for 7.0.Final is end of July.
Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sunimalr at gmail.com Fri Jan 24 13:50:30 2014 From: sunimalr at gmail.com (Sunimal Rathnayake) Date: Sat, 25 Jan 2014 00:20:30 +0530 Subject: [infinispan-dev] .NET hot rod client (HRCPP-122) Message-ID: Hi, I noticed that .NET hot rod client (HRCPP-122) is a major feature to be included in Infinispan 7.0. I developed a native level 1 C# .NET client as a GSoC student in 2012[1]. I also noticed that HRCPP-122 will be a wrapper to native CPP client. Since I have played with a protocol a lot I'd like to contribute to HRCPP-122. Is there some way I could contribute? =) Cheers! Sunimal [1]https://github.com/infinispan/dotnet-client -- Sunimal Rathnayake Undergraduate Department of Computer Science & Engineering University of Moratuwa Sri Lanka -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140125/ace81a6d/attachment.html From mmarkus at redhat.com Fri Jan 24 15:13:23 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 24 Jan 2014 20:13:23 +0000 Subject: [infinispan-dev] .NET hot rod client (HRCPP-122) In-Reply-To: References: Message-ID: <59A68BC3-6995-435B-AE0F-FD691C2224E6@redhat.com> Hi Sunimal, We've taken a slightly different approach with the C# client, i.e. build a thin layer on top of the CPP client we have already written. I think Ion (CC) is in a pretty advanced stage developing it, but if you're interested there are plenty of other things you could start looking at, e.g migration of the CacheStore API would be a very good start: http://goo.gl/YcFJJF On Jan 24, 2014, at 6:50 PM, Sunimal Rathnayake wrote: > Hi, > > I noticed that .NET hot rod client (HRCPP-122) is a major feature to be included in Infinispan 7.0. > I developed a native level 1 C# .NET client as a GSoC student in 2012[1]. > I also noticed that HRCPP-122 will be a wrapper to native CPP client. 
> > Since I have played with a protocol a lot I'd like to contribute to HRCPP-122. > > Is there some way I could contribute? =) > > Cheers! > Sunimal > > [1]https://github.com/infinispan/dotnet-client > > -- > Sunimal Rathnayake > Undergraduate > Department of Computer Science & Engineering > University of Moratuwa > Sri Lanka > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sunimalr at gmail.com Fri Jan 24 16:53:40 2014 From: sunimalr at gmail.com (Sunimal Rathnayake) Date: Sat, 25 Jan 2014 03:23:40 +0530 Subject: [infinispan-dev] .NET hot rod client (HRCPP-122) In-Reply-To: <59A68BC3-6995-435B-AE0F-FD691C2224E6@redhat.com> References: <59A68BC3-6995-435B-AE0F-FD691C2224E6@redhat.com> Message-ID: Hi Mircea, CacheStore migration seems a good place to start with. If there's no prioritized one, I'll start with ISPN-3546 mongoDB cache store migration.[1] I think I can use leveldb[2] one as a reference. If you think there is anything that I should look at first, let me know. Cheers! Sunimal [1] https://issues.jboss.org/browse/ISPN-3546 [2] https://github.com/infinispan/infinispan/tree/master/persistence/leveldb/src/main/java/org/infinispan/persistence/leveldb On Sat, Jan 25, 2014 at 1:43 AM, Mircea Markus wrote: > Hi Sunimal, > > We've taken a slightly different approach with the C# client, i.e. build a > thin layer on top of the CPP client we have already written. I think Ion > (CC) is in a pretty advanced stage developing it, but if you're interested > there are plenty of other things you could start looking at, e.g migration > of the CacheStore API would be a very good start: http://goo.gl/YcFJJF > > On Jan 24, 2014, at 6:50 PM, Sunimal Rathnayake > wrote: > > > Hi, > > > > I noticed that .NET hot rod client (HRCPP-122) is a major feature to be > included in Infinispan 7.0. 
> > I developed a native level 1 C# .NET client as a GSoC student in > 2012[1]. > > I also noticed that HRCPP-122 will be a wrapper to native CPP client. > > > > Since I have played with a protocol a lot I'd like to contribute to > HRCPP-122. > > > > Is there some way I could contribute? =) > > > > Cheers! > > Sunimal > > > > [1]https://github.com/infinispan/dotnet-client > > > > -- > > Sunimal Rathnayake > > Undergraduate > > Department of Computer Science & Engineering > > University of Moratuwa > > Sri Lanka > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Sunimal Rathnayake Undergraduate Department of Computer Science & Engineering University of Moratuwa Sri Lanka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140125/2deb3671/attachment-0001.html From sanne at infinispan.org Mon Jan 27 04:20:34 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 27 Jan 2014 09:20:34 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> Message-ID: On 23 January 2014 18:03, Dan Berindei wrote: > > On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: >> >> >> >> On 01/22/2014 01:58 PM, Dan Berindei wrote: >> > >> > >> > It would also require us to keep a Set for each group, with the keys >> > associated with that group. As such, I'm not sure it would be a lot >> > easier to implement (correctly) than FineGrainedAtomicMap. >> > >> > >> >> Dan, I didn't understand why do we need to keep a Set. Can you >> elaborate? > > > We'd need some way to keep track of the keys that are part of the group, > iterating over the entire cache for every getGroup() call would be way too > slow. Right, and load all entries from any CacheStore too :-/ From sanne at infinispan.org Mon Jan 27 04:35:51 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 27 Jan 2014 09:35:51 +0000 Subject: [infinispan-dev] New Cache Entry Notifications In-Reply-To: <1FAA84A0-3AFE-4DA7-980F-AF1FB5725F5A@redhat.com> References: <1FAA84A0-3AFE-4DA7-980F-AF1FB5725F5A@redhat.com> Message-ID: On 23 January 2014 17:54, Mircea Markus wrote: > > On Jan 23, 2014, at 5:48 PM, William Burns wrote: > >> Hello all, >> >> I have been working with notifications and most recently I have come >> to look into events generated when a new entry is created. Now >> normally I would just expect a CacheEntryCreatedEvent to be raised. 
>> However we currently raise a CacheEntryModifiedEvent event and then a >> CacheEntryCreatedEvent. I notice that there are comments around the >> code saying that tests require both to be fired. > > it doesn't sound right to me: modified is different than created. I'd tend to agree with you, still it's a matter of perception as I could say "a key is changing value from null to some new value so it's an update".. I realize it's a bit far fetched, still if you start introducing tombstones for eventual consistency you can have a longer history of changes to append to even if it's just a "creation". For example with this sequence of commands: Put(K1, V1), Remove(K1), Put(K1, V2) < Creation event? As the history sequence for this entry is V1 - null - V2 And in case of asynchronous CacheStores, Remote eventlistener, X-site, .. and all other features where you might have coalescing of changes applied, this sequence could be "compacted" as V1 - V2 Is the write of V2 still a creation event? I don't necessarily disagree with the choice, but it's not that simple and will have consequent complexities down the road. Personally I think you should drop the differentiation between the two event types, as it's over-promising on something that we can't deliver consistently. Sanne > >> >> I am wondering if anyone has an objection to only raising a >> CacheEntryCreatedEvent on a new cache entry being created. Does >> anyone know why we raise both currently? Was it just so the >> PutKeyValueCommand could more ignorantly just raise the >> CacheEntryModified pre Event? >> >> Any input would be appreciated, Thanks. 
> > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Mon Jan 27 04:38:44 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 27 Jan 2014 09:38:44 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> Message-ID: <52E62924.1080900@infinispan.org> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: > On 23 January 2014 18:03, Dan Berindei wrote: >> >> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: >>> >>> >>> >>> On 01/22/2014 01:58 PM, Dan Berindei wrote: >>>> >>>> >>>> It would also require us to keep a Set for each group, with the keys >>>> associated with that group. As such, I'm not sure it would be a lot >>>> easier to implement (correctly) than FineGrainedAtomicMap. >>>> >>>> >>> >>> Dan, I didn't understand why do we need to keep a Set. Can you >>> elaborate? >> >> >> We'd need some way to keep track of the keys that are part of the group, >> iterating over the entire cache for every getGroup() call would be way too >> slow. > > Right, and load all entries from any CacheStore too :-/ IMO, I prefer to iterate over the data container and cache loader when it is needed than keep the Set for each group. 
I think the memory will thank you > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Mon Jan 27 04:52:35 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 27 Jan 2014 09:52:35 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <52E62924.1080900@infinispan.org> References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> Message-ID: On 27 January 2014 09:38, Pedro Ruivo wrote: > > > On 01/27/2014 09:20 AM, Sanne Grinovero wrote: >> On 23 January 2014 18:03, Dan Berindei wrote: >>> >>> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: >>>> >>>> >>>> >>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: >>>>> >>>>> >>>>> It would also require us to keep a Set for each group, with the keys >>>>> associated with that group. As such, I'm not sure it would be a lot >>>>> easier to implement (correctly) than FineGrainedAtomicMap. >>>>> >>>>> >>>> >>>> Dan, I didn't understand why do we need to keep a Set. Can you >>>> elaborate? >>> >>> >>> We'd need some way to keep track of the keys that are part of the group, >>> iterating over the entire cache for every getGroup() call would be way too >>> slow. >> >> Right, and load all entries from any CacheStore too :-/ > > IMO, I prefer to iterate over the data container and cache loader when > it is needed than keep the Set for each group. I think the memory > will thank you Of course. I'm just highlighting how importand Dan's comment is, because we obviously don' t want to load everything from CacheStore. 
So, tracking which entries are part of the group is essential: failing this, I'm still skeptical about why the Grouping API is a better replacement than FGAM. Sanne From dan.berindei at gmail.com Mon Jan 27 05:27:48 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 27 Jan 2014 12:27:48 +0200 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: <1EB0E9C8-AFD2-4172-874F-25BC2B12C6C4@redhat.com> References: <52DD0961.90600@infinispan.org> <52DEDBCF.7030204@redhat.com> <1EB0E9C8-AFD2-4172-874F-25BC2B12C6C4@redhat.com> Message-ID: I think it's way too early to discuss removing FineGrainedAtomicMap and AtomicMap, as long as we don't have a concrete alternative with similar properties. Cache.getGroup(groupName) is just a method name at this point, we don't have any idea how it will compare to AtomicMap/FineGrainedAtomicMap from a transaction isolation or performance perspective. BTW, do we really need the group name to be a String? A good way to prove that the grouping API is a proper replacement for the atomic maps would be to replace the usage of atomic maps in the Tree module with the grouping API. Unless we plan to drop the Tree module completely... Cheers Dan On Wed, Jan 22, 2014 at 2:45 PM, Mircea Markus wrote: > > On Jan 21, 2014, at 8:42 PM, Vladimir Blagojevic > wrote: > > > I agree with Erik here. Deltas are used in M/R and I've never detected > > any problems so far. > > On 1/21/2014, 1:39 PM, Erik Salter wrote: > >> Please don't remove the Delta stuff. That's quite useful, especially > for > >> large collections. > > +1 to keep DeltaAware. 
Thanks for the feedbak > > >> > >> Erik > >> > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140127/08b866bf/attachment.html From pedro at infinispan.org Mon Jan 27 06:54:04 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 27 Jan 2014 11:54:04 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131112145454.GB5423@hibernate.org> <20131118095612.GN3262@hibernate.org> <24C30EDB-1978-4AB0-93F8-A02B35C1193C@redhat.com> <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> Message-ID: <52E648DC.1090300@infinispan.org> On 01/27/2014 09:52 AM, Sanne Grinovero wrote: > On 27 January 2014 09:38, Pedro Ruivo wrote: >> >> >> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: >>> On 23 January 2014 18:03, Dan Berindei wrote: >>>> >>>> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: >>>>> >>>>> >>>>> >>>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: >>>>>> >>>>>> >>>>>> It would also require us to keep a Set for each group, with the keys >>>>>> associated with that group. As such, I'm not sure it would be a lot >>>>>> easier to implement (correctly) than FineGrainedAtomicMap. >>>>>> >>>>>> >>>>> >>>>> Dan, I didn't understand why do we need to keep a Set. Can you >>>>> elaborate? 
>>>> >>>> >>>> We'd need some way to keep track of the keys that are part of the group, >>>> iterating over the entire cache for every getGroup() call would be way too >>>> slow. >>> >>> Right, and load all entries from any CacheStore too :-/ >> >> IMO, I prefer to iterate over the data container and cache loader when >> it is needed than keep the Set for each group. I think the memory >> will thank you > > Of course. I'm just highlighting how importand Dan's comment is, > because we obviously don' t want to load everything from CacheStore. > So, tracking which entries are part of the group is essential: > failing this, I'm still skeptical about why the Grouping API is a > better replacement than FGAM. I have one reason: FGAM does not work inside transactions... > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From emmanuel at hibernate.org Mon Jan 27 07:26:59 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Mon, 27 Jan 2014 13:26:59 +0100 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <52E648DC.1090300@infinispan.org> References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> Message-ID: <20140127122659.GN9557@hibernate.org> I'd be curious to see performance tests on Pedro's approach (ie walk through the entire data key set to find the matching elements of a given group). That might be fast enough but that looks quite scary compared to a single lookup. Any doc explaining how FGAM is broken in transactions for curiosity. 
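The isolation problem meant here (tracked as ISPN-3932) can be sketched with a toy model, simplified and with hypothetical names: a fine-grained map handed to a transaction is a live view over shared state, so writes from a concurrent transaction become visible mid-transaction, whereas repeatable read would require a snapshot taken at transaction start:

```java
import java.util.*;
import java.util.concurrent.*;

// Simplified model of the isolation problem: a live view over shared
// state leaks concurrent writes immediately, while repeatable read
// needs a stable snapshot taken when the transaction starts.
class LiveViewDemo {
    static final ConcurrentMap<String, String> sharedMap = new ConcurrentHashMap<>();

    // What repeatable read would require: a snapshot at tx start.
    static Map<String, String> snapshotView() {
        return new HashMap<>(sharedMap);
    }

    // What a live view gives you: the shared state itself.
    static Map<String, String> liveView() {
        return sharedMap;
    }
}
```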
On Mon 2014-01-27 11:54, Pedro Ruivo wrote: > > > On 01/27/2014 09:52 AM, Sanne Grinovero wrote: > > On 27 January 2014 09:38, Pedro Ruivo wrote: > >> > >> > >> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: > >>> On 23 January 2014 18:03, Dan Berindei wrote: > >>>> > >>>> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: > >>>>> > >>>>> > >>>>> > >>>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: > >>>>>> > >>>>>> > >>>>>> It would also require us to keep a Set for each group, with the keys > >>>>>> associated with that group. As such, I'm not sure it would be a lot > >>>>>> easier to implement (correctly) than FineGrainedAtomicMap. > >>>>>> > >>>>>> > >>>>> > >>>>> Dan, I didn't understand why do we need to keep a Set. Can you > >>>>> elaborate? > >>>> > >>>> > >>>> We'd need some way to keep track of the keys that are part of the group, > >>>> iterating over the entire cache for every getGroup() call would be way too > >>>> slow. > >>> > >>> Right, and load all entries from any CacheStore too :-/ > >> > >> IMO, I prefer to iterate over the data container and cache loader when > >> it is needed than keep the Set for each group. I think the memory > >> will thank you > > > > Of course. I'm just highlighting how importand Dan's comment is, > > because we obviously don' t want to load everything from CacheStore. > > So, tracking which entries are part of the group is essential: > > failing this, I'm still skeptical about why the Grouping API is a > > better replacement than FGAM. > > I have one reason: FGAM does not work inside transactions... 
> > > > > Sanne > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Mon Jan 27 07:30:19 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Mon, 27 Jan 2014 13:30:19 +0100 Subject: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap In-Reply-To: References: <52DD0961.90600@infinispan.org> <52DEDBCF.7030204@redhat.com> <1EB0E9C8-AFD2-4172-874F-25BC2B12C6C4@redhat.com> Message-ID: <20140127123019.GO9557@hibernate.org> On Mon 2014-01-27 12:27, Dan Berindei wrote: > Cache.getGroup(groupName) is just a method name at this point, we don't > have any idea how it will compare to AtomicMap/FineGrainedAtomicMap from a > transaction isolation or performance perspective. BTW, do we really need > the group name to be a String? +1 for the name not being a String. Today in OGM we use a generic Key object to represent associations. A string version has many drawbacks. From pedro at infinispan.org Mon Jan 27 07:43:01 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 27 Jan 2014 12:43:01 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <20140127122659.GN9557@hibernate.org> References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> <20140127122659.GN9557@hibernate.org> Message-ID: <52E65455.9020704@infinispan.org> On 01/27/2014 12:26 PM, Emmanuel Bernard wrote: > I'd be curious to see performance tests on Pedro's approach (ie walk > through the entire data key set to find the matching elements of a given > group). 
That might be fast enough but that looks quite scary compared to > a single lookup. I would prefer to have a performance hit than a map of sets (group name => set of keys). I also think that keep this map synchronized with the keys in data container will not be easy... > > Any doc explaining how FGAM is broken in transactions for curiosity. > well, the map is not isolated, so you can see the updates from other transactions immediately (https://issues.jboss.org/browse/ISPN-3932) It also does not work when you enable write skew check with optimistic transactions (we have a JIRA somewhere) > On Mon 2014-01-27 11:54, Pedro Ruivo wrote: >> >> >> On 01/27/2014 09:52 AM, Sanne Grinovero wrote: >>> On 27 January 2014 09:38, Pedro Ruivo wrote: >>>> >>>> >>>> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: >>>>> On 23 January 2014 18:03, Dan Berindei wrote: >>>>>> >>>>>> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: >>>>>>>> >>>>>>>> >>>>>>>> It would also require us to keep a Set for each group, with the keys >>>>>>>> associated with that group. As such, I'm not sure it would be a lot >>>>>>>> easier to implement (correctly) than FineGrainedAtomicMap. >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> Dan, I didn't understand why do we need to keep a Set. Can you >>>>>>> elaborate? >>>>>> >>>>>> >>>>>> We'd need some way to keep track of the keys that are part of the group, >>>>>> iterating over the entire cache for every getGroup() call would be way too >>>>>> slow. >>>>> >>>>> Right, and load all entries from any CacheStore too :-/ >>>> >>>> IMO, I prefer to iterate over the data container and cache loader when >>>> it is needed than keep the Set for each group. I think the memory >>>> will thank you >>> >>> Of course. I'm just highlighting how importand Dan's comment is, >>> because we obviously don' t want to load everything from CacheStore. 
>>> So, tracking which entries are part of the group is essential: >>> failing this, I'm still skeptical about why the Grouping API is a >>> better replacement than FGAM. >> >> I have one reason: FGAM does not work inside transactions... >> >>> >>> Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Mon Jan 27 08:38:32 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 27 Jan 2014 15:38:32 +0200 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <52E65455.9020704@infinispan.org> References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> <20140127122659.GN9557@hibernate.org> <52E65455.9020704@infinispan.org> Message-ID: On Mon, Jan 27, 2014 at 2:43 PM, Pedro Ruivo wrote: > > > On 01/27/2014 12:26 PM, Emmanuel Bernard wrote: > > I'd be curious to see performance tests on Pedro's approach (ie walk > > through the entire data key set to find the matching elements of a given > > group). That might be fast enough but that looks quite scary compared to > > a single lookup. > > I would prefer to have a performance hit than a map of sets (group name > => set of keys). I also think that keep this map synchronized with the > keys in data container will not be easy... > Sure, I would prefer the simpler implementation as well. 
But if changing an application to use groups instead of atomic maps will change the processing time of a request from 1ms to 1s, I'm pretty sure users will prefer to keep using the atomic maps :) > > > > Any doc explaining how FGAM is broken in transactions for curiosity. > > > > well, the map is not isolated, so you can see the updates from other > transactions immediately (https://issues.jboss.org/browse/ISPN-3932) > > Do you know if AtomicMap is affected, too? > It also does not work when you enable write skew check with optimistic > transactions (we have a JIRA somewhere) > I assume you mean https://issues.jboss.org/browse/ISPN-3939? This looks like it also affects AtomicMap, so the only workaround is to use pessimistic locking. > > > On Mon 2014-01-27 11:54, Pedro Ruivo wrote: > >> > >> > >> On 01/27/2014 09:52 AM, Sanne Grinovero wrote: > >>> On 27 January 2014 09:38, Pedro Ruivo wrote: > >>>> > >>>> > >>>> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: > >>>>> On 23 January 2014 18:03, Dan Berindei > wrote: > >>>>>> > >>>>>> On 22 Jan 2014 16:10, "Pedro Ruivo" wrote: > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: > >>>>>>>> > >>>>>>>> > >>>>>>>> It would also require us to keep a Set for each group, with > the keys > >>>>>>>> associated with that group. As such, I'm not sure it would be a > lot > >>>>>>>> easier to implement (correctly) than FineGrainedAtomicMap. > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>>> Dan, I didn't understand why do we need to keep a Set. Can you > >>>>>>> elaborate? > >>>>>> > >>>>>> > >>>>>> We'd need some way to keep track of the keys that are part of the > group, > >>>>>> iterating over the entire cache for every getGroup() call would be > way too > >>>>>> slow. > >>>>> > >>>>> Right, and load all entries from any CacheStore too :-/ > >>>> > >>>> IMO, I prefer to iterate over the data container and cache loader when > >>>> it is needed than keep the Set for each group. 
I think the memory > >>>> will thank you > >>> > >>> Of course. I'm just highlighting how importand Dan's comment is, > >>> because we obviously don' t want to load everything from CacheStore. > >>> So, tracking which entries are part of the group is essential: > >>> failing this, I'm still skeptical about why the Grouping API is a > >>> better replacement than FGAM. > >> > >> I have one reason: FGAM does not work inside transactions... > >> > >>> > >>> Sanne > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140127/5dc09e8a/attachment.html From pedro at infinispan.org Mon Jan 27 09:02:46 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 27 Jan 2014 14:02:46 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> <20140127122659.GN9557@hibernate.org> <52E65455.9020704@infinispan.org> Message-ID: <52E66706.6060600@infinispan.org> On 01/27/2014 01:38 PM, Dan Berindei wrote: > > > > On Mon, Jan 27, 2014 at 2:43 PM, Pedro Ruivo > wrote: > > > > On 01/27/2014 12:26 PM, Emmanuel Bernard wrote: > > I'd be curious to see performance tests on Pedro's approach (ie walk > > through the entire data key set to find the matching elements of > a given > > group). That might be fast enough but that looks quite scary > compared to > > a single lookup. > > I would prefer to have a performance hit than a map of sets (group name > => set of keys). I also think that keep this map synchronized with the > keys in data container will not be easy... > > > Sure, I would prefer the simpler implementation as well. But if changing > an application to use groups instead of atomic maps will change the > processing time of a request from 1ms to 1s, I'm pretty sure users will > prefer to keep use the atomic maps :) you don't need to change the application. we can implement the AtomicHashMap interface on top of grouping :D I'm expecting a negative performance impact but not that high. Also, with the current implementation, FGAHM performs a copy for writing... anyway, we should test and see how it goes :) > > > > > > Any doc explaining how FGAM is broken in transactions for curiosity. 
> > > > well, the map is not isolated, so you can see the updates from other > transactions immediately (https://issues.jboss.org/browse/ISPN-3932) > > > Do you know if AtomicMap is affected, too? I haven't tested yet, but I'm assuming the worst (i.e. yes, it is affected). I'm trying to find a way to fix it without destroying anything else :( > > It also does not work when you enable write skew check with optimistic > transactions (we have a JIRA somewhere) > > > I assume you mean https://issues.jboss.org/browse/ISPN-3939 ? > This looks like it also affects AtomicMap, so the only workaround is to > use pessimistic locking. that is cross-site replication... I mean to this: https://issues.jboss.org/browse/ISPN-2729 that is originated because we don't support version in Deltas > > > > > On Mon 2014-01-27 11:54, Pedro Ruivo wrote: > >> > >> > >> On 01/27/2014 09:52 AM, Sanne Grinovero wrote: > >>> On 27 January 2014 09:38, Pedro Ruivo > wrote: > >>>> > >>>> > >>>> On 01/27/2014 09:20 AM, Sanne Grinovero wrote: > >>>>> On 23 January 2014 18:03, Dan Berindei > > wrote: > >>>>>> > >>>>>> On 22 Jan 2014 16:10, "Pedro Ruivo" > wrote: > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> On 01/22/2014 01:58 PM, Dan Berindei wrote: > >>>>>>>> > >>>>>>>> > >>>>>>>> It would also require us to keep a Set for each group, > with the keys > >>>>>>>> associated with that group. As such, I'm not sure it would > be a lot > >>>>>>>> easier to implement (correctly) than FineGrainedAtomicMap. > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>>> Dan, I didn't understand why do we need to keep a Set. > Can you > >>>>>>> elaborate? > >>>>>> > >>>>>> > >>>>>> We'd need some way to keep track of the keys that are part > of the group, > >>>>>> iterating over the entire cache for every getGroup() call > would be way too > >>>>>> slow. 
> >>>>> > >>>>> Right, and load all entries from any CacheStore too :-/ > >>>> > >>>> IMO, I prefer to iterate over the data container and cache > loader when > >>>> it is needed than keep the Set for each group. I think the > memory > >>>> will thank you > >>> > >>> Of course. I'm just highlighting how important Dan's comment is, > >>> because we obviously don't want to load everything from > CacheStore. > >>> So, tracking which entries are part of the group is essential: > >>> failing this, I'm still skeptical about why the Grouping API is a > >>> better replacement than FGAM. > >> > >> I have one reason: FGAM does not work inside transactions... > >> > >>> > >>> Sanne > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From mmarkus at redhat.com Mon Jan 27 09:05:00 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Mon, 27 Jan 2014 14:05:00 +0000 Subject: [infinispan-dev] Java Fast Sockets Message-ID: These are the slides from a presentation given last week in London. 
Also contains some performance runs on top of JGroups, and the figures look pretty good: JFS yields 2.5x better performance than plain Java sockets (optimized for a specific HW stack: Mellanox cards + InfiniBand). Guillermo also offered us access to his lab in case we want to play with it. -------------- next part -------------- A non-text attachment was scrubbed... Name: JavaCommsFasterThanC++.pdf Type: application/pdf Size: 1746855 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140127/5155e178/attachment-0001.pdf -------------- next part -------------- Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Mon Jan 27 09:20:53 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Mon, 27 Jan 2014 14:20:53 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> <20140127122659.GN9557@hibernate.org> <52E65455.9020704@infinispan.org> Message-ID: <3BF181BE-527E-4D91-8959-43301CEDB693@redhat.com> On Jan 27, 2014, at 1:38 PM, Dan Berindei wrote: > > > > On Mon, Jan 27, 2014 at 2:43 PM, Pedro Ruivo wrote: > > > On 01/27/2014 12:26 PM, Emmanuel Bernard wrote: > > I'd be curious to see performance tests on Pedro's approach (ie walk > > through the entire data key set to find the matching elements of a given > > group). That might be fast enough but that looks quite scary compared to > > a single lookup. > > I would prefer to have a performance hit than a map of sets (group name > => set of keys). I also think that keep this map synchronized with the > keys in data container will not be easy... > > Sure, I would prefer the simpler implementation as well. 
But if changing an application to use groups instead of atomic maps will change the processing time of a request from 1ms to 1s, I'm pretty sure users will prefer to keep using the atomic maps :) +1 Also the Map<GroupName, Set<Key>> Emmanuel mentions is something that already exists within the (FG)AM. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Mon Jan 27 09:22:38 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Mon, 27 Jan 2014 14:22:38 +0000 Subject: [infinispan-dev] Integration between HotRod and OGM In-Reply-To: <52E66706.6060600@infinispan.org> References: <20131119102205.GL3262@hibernate.org> <0F4A687C-0A39-4748-A9BA-942A850457E7@redhat.com> <52DFD13A.4030500@infinispan.org> <52E62924.1080900@infinispan.org> <52E648DC.1090300@infinispan.org> <20140127122659.GN9557@hibernate.org> <52E65455.9020704@infinispan.org> <52E66706.6060600@infinispan.org> Message-ID: <9FA57AAE-68A4-49BB-9DF0-2B3829D886B8@redhat.com> On Jan 27, 2014, at 2:02 PM, Pedro Ruivo wrote: > On 01/27/2014 01:38 PM, Dan Berindei wrote: >> >> >> >> On Mon, Jan 27, 2014 at 2:43 PM, Pedro Ruivo > > wrote: >> >> >> >> On 01/27/2014 12:26 PM, Emmanuel Bernard wrote: >>> I'd be curious to see performance tests on Pedro's approach (ie walk >>> through the entire data key set to find the matching elements of >> a given >>> group). That might be fast enough but that looks quite scary >> compared to >>> a single lookup. >> >> I would prefer to have a performance hit than a map of sets (group name >> => set of keys). I also think that keep this map synchronized with the >> keys in data container will not be easy... >> >> >> Sure, I would prefer the simpler implementation as well. But if changing >> an application to use groups instead of atomic maps will change the >> processing time of a request from 1ms to 1s, I'm pretty sure users will >> prefer to keep using the atomic maps :) > > you don't need to change the application. 
we can implement the > AtomicHashMap interface on top of grouping :D > > I'm expecting a negative performance impact but not that high. Also, > with the current implementation, FGAHM performs a copy for writing... > anyway, we should test and see how it goes :) +1. We can keep both around for a while and only drop FGAM iff grouping does the job right. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mudokonman at gmail.com Tue Jan 28 09:29:27 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 28 Jan 2014 09:29:27 -0500 Subject: [infinispan-dev] L1OnRehash Discussion Message-ID: Hello everyone, I wanted to discuss what I would call the dubious benefit of L1OnRehash, especially compared to the complexity it brings. L1OnRehash is used to retain a value by moving a previously owned value into the L1 when a rehash occurs and this node no longer owns that value. Also, any current L1 values are removed when a rehash occurs. Therefore it can only save a single remote get for only a few keys when a rehash occurs. This by itself is fine; however, L1OnRehash has many edge cases to guarantee consistency, as can be seen from https://issues.jboss.org/browse/ISPN-3838. This can get quite complicated for a feature that gives marginal performance increases (especially given that this value may never have been read recently - at least normal L1 usage guarantees this). My first suggestion is instead to deprecate the L1OnRehash configuration option and to remove this logic. My second suggestion is a new implementation of L1OnRehash that is always enabled when the L1 threshold is configured to 0. For those not familiar, the L1 threshold controls whether invalidations are broadcast instead of sent as individual messages. A value of 0 means to always broadcast. This would allow for some benefits we can't currently get: 1. L1 values would never have to be invalidated on a rehash event (guaranteeing locality of reads under rehash) 2. 
L1 requestors would not have to be tracked any longer. However, every write would be required to send an invalidation, which could slow write performance in additional cases (since we currently only send invalidations when requestors are found). The difference would be lessened with UDP, which is the transport I would assume someone would use when configuring the L1 threshold to 0. What do you guys think? I am thinking that no one minds the removal of L1OnRehash that we have currently (if so let me know). I am quite curious what others think about the changes for an L1 threshold value of 0; maybe this configuration value is never used? Thanks, - Will From mmarkus at redhat.com Tue Jan 28 17:48:02 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 28 Jan 2014 22:48:02 +0000 Subject: [infinispan-dev] New Cache Entry Notifications In-Reply-To: References: <1FAA84A0-3AFE-4DA7-980F-AF1FB5725F5A@redhat.com> Message-ID: <7AF78B20-6C8E-41A9-A7D6-0657D26D8FE8@redhat.com> On Jan 27, 2014, at 9:35 AM, Sanne Grinovero wrote: > On 23 January 2014 17:54, Mircea Markus wrote: >> >> On Jan 23, 2014, at 5:48 PM, William Burns wrote: >> >>> Hello all, >>> >>> I have been working with notifications and most recently I have come >>> to look into events generated when a new entry is created. Now >>> normally I would just expect a CacheEntryCreatedEvent to be raised. >>> However we currently raise a CacheEntryModifiedEvent and then a >>> CacheEntryCreatedEvent. I notice that there are comments around the >>> code saying that tests require both to be fired. >> >> it doesn't sound right to me: modified is different than created. > > I'd tend to agree with you, still it's a matter of perception as I could say > "a key is changing value from null to some new value so it's an update".. 
> I realize it's a bit far fetched, there were actually discussions about implementing all the ISPN operations as a single generic one, so you do have a point :-) > still if you start introducing > tombstones for eventual consistency you can have a longer history of > changes to append to even if it's just a "creation". Whilst I agree with you in theory, IMO users have clear expectations of what an update vs. a create is (as in CRUD). Using tombstones is an implementation detail, but there is a generally accepted semantic difference between a create and an update. > > For example with this sequence of commands: > > Put(K1, V1), Remove(K1), Put(K1, V2) < Creation event? > > As the history sequence for this entry is > V1 - null - V2 > > And in case of asynchronous CacheStores, Remote eventlistener, X-site, > .. and all other features where you might have coalescing of changes > applied, this sequence could be "compacted" as > > V1 - V2 > > Is the write of V2 still a creation event? > > I don't necessarily disagree with the choice, but it's not that simple > and will have consequent complexities down the road. > > Personally I think you should drop the differentiation between the two > event types, as it's over-promising on something that we can't deliver > consistently. > > Sanne > >> >>> >>> I am wondering if anyone has an objection to only raising a >>> CacheEntryCreatedEvent on a new cache entry being created. Does >>> anyone know why we raise both currently? Was it just so the >>> PutKeyValueCommand could more ignorantly just raise the >>> CacheEntryModified pre Event? >>> >>> Any input would be appreciated, Thanks. 
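Sanne's Put/Remove/Put example above can be made concrete with a toy model of event coalescing. All names here are invented for illustration; this is not Infinispan's listener API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: once a per-key change history is coalesced (as an
// async store, remote listener or x-site link might do), a genuine
// create can become indistinguishable from a modify.
public class CoalesceDemo {
    enum EventType { CREATED, MODIFIED, REMOVED }

    // Classifies one transition the way a naive listener would.
    static EventType classify(Object before, Object after) {
        if (before == null) return EventType.CREATED;
        if (after == null) return EventType.REMOVED;
        return EventType.MODIFIED;
    }

    // Coalesces a history of values for one key down to (first, last).
    static EventType coalesce(List<?> history) {
        return classify(history.get(0), history.get(history.size() - 1));
    }

    public static void main(String[] args) {
        // Full history for K1: V1 -> null (Remove) -> V2 (Put)
        List<Object> history = new ArrayList<>(List.of("V1"));
        history.add(null);   // Remove(K1)
        history.add("V2");   // Put(K1, V2) -- per-operation, a creation
        // The per-operation view of the last step sees a CREATED...
        System.out.println(classify(history.get(1), history.get(2))); // CREATED
        // ...but the coalesced view (V1 -> V2) reports MODIFIED.
        System.out.println(coalesce(history)); // MODIFIED
    }
}
```

This is the consistency gap Sanne is pointing at: whether V2's write counts as a creation depends on whether intermediate states were observed.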
>> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sanne at infinispan.org Tue Jan 28 19:05:16 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 29 Jan 2014 00:05:16 +0000 Subject: [infinispan-dev] Frequent releases? Message-ID: Hi all, can I hope for a release to happen soon? I am needing releases to happen more frequently, or the various cross-project integrations can't evolve. During the 6.0 iteration nothing happened for months, then we went crazy fast and the time margin was too short for me to perform various improvements before getting at CR phases (which I consider too late): ideally I'd like to see timeboxed releases, following a reliable pattern: like every two weeks would be awesome. To make an example, I've released 8 tags (counting various projects) just this past 2 weeks to accomodate for evolution of coupled sister projects, not least to include fixes and adapt for API or SPI changes into Infinispan. Cheers, Sanne From ales.justin at gmail.com Wed Jan 29 07:08:38 2014 From: ales.justin at gmail.com (Ales Justin) Date: Wed, 29 Jan 2014 13:08:38 +0100 Subject: [infinispan-dev] wf config Message-ID: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> I'm looking at current WildFly integration. 
In CacheAdd I see this code: if ((lockingMode == LockingMode.OPTIMISTIC) && (isolationLevel == IsolationLevel.REPEATABLE_READ)) { builder.locking().writeSkewCheck(true); } but then locking has this validation: public void validate() { if (writeSkewCheck) { if (isolationLevel != IsolationLevel.REPEATABLE_READ) throw new CacheConfigurationException("Write-skew checking only allowed with REPEATABLE_READ isolation level for cache"); if (transaction().lockingMode != LockingMode.OPTIMISTIC) throw new CacheConfigurationException("Write-skew checking only allowed with OPTIMISTIC transactions"); if (!versioning().enabled || versioning().scheme != VersioningScheme.SIMPLE) throw new CacheConfigurationException( "Write-skew checking requires versioning to be enabled and versioning scheme 'SIMPLE' to be configured"); Yet there is no versioning handling in WF subsystem. (just listing what's supported) private void parseCacheElement(XMLExtendedStreamReader reader, Element element, ModelNode cache, List<ModelNode> operations) throws XMLStreamException { switch (element) { case LOCKING: { case TRANSACTION: { case EVICTION: { case EXPIRATION: { case STORE: { case FILE_STORE: { case STRING_KEYED_JDBC_STORE: { case BINARY_KEYED_JDBC_STORE: { case MIXED_KEYED_JDBC_STORE: { case REMOTE_STORE: { case INDEXING: { default: { throw ParseUtils.unexpectedElement(reader); } How do you expect the user to get past the validation, where you magically enable writeSkewCheck? -Ales From mmarkus at redhat.com Wed Jan 29 08:19:43 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 29 Jan 2014 13:19:43 +0000 Subject: [infinispan-dev] wf config In-Reply-To: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> References: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> Message-ID: <1E6CF221-8E0A-465E-8E60-C4BE6A208EF3@redhat.com> I guess this is a question for Paul. On Jan 29, 2014, at 12:08 PM, Ales Justin wrote: > I'm looking at current WildFly integration. 
> > In CacheAdd I see this code: > > if ((lockingMode == LockingMode.OPTIMISTIC) && (isolationLevel == IsolationLevel.REPEATABLE_READ)) { > builder.locking().writeSkewCheck(true); > } > > but then locking has this validation: > > public void validate() { > if (writeSkewCheck) { > if (isolationLevel != IsolationLevel.REPEATABLE_READ) > throw new CacheConfigurationException("Write-skew checking only allowed with REPEATABLE_READ isolation level for cache"); > if (transaction().lockingMode != LockingMode.OPTIMISTIC) > throw new CacheConfigurationException("Write-skew checking only allowed with OPTIMISTIC transactions"); > if (!versioning().enabled || versioning().scheme != VersioningScheme.SIMPLE) > throw new CacheConfigurationException( > "Write-skew checking requires versioning to be enabled and versioning scheme 'SIMPLE' to be configured"); > > Yet there is no versioning handling in WF subsystem. > (just listing what's supported) > > private void parseCacheElement(XMLExtendedStreamReader reader, Element element, ModelNode cache, List operations) throws XMLStreamException { > switch (element) { > case LOCKING: { > case TRANSACTION: { > case EVICTION: { > case EXPIRATION: { > case STORE: { > case FILE_STORE: { > case STRING_KEYED_JDBC_STORE: { > case BINARY_KEYED_JDBC_STORE: { > case MIXED_KEYED_JDBC_STORE: { > case REMOTE_STORE: { > case INDEXING: { > default: { > throw ParseUtils.unexpectedElement(reader); > } > > How do you expect the user to get pass the validation, > where you magically enable writeSkewCheck? 
> > -Ales > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From paul.ferraro at redhat.com Wed Jan 29 08:42:13 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 29 Jan 2014 08:42:13 -0500 Subject: [infinispan-dev] wf config In-Reply-To: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> References: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> Message-ID: <1391002933.28710.2.camel@T520> Ooops. That's a bug - I'll submit a PR momentarily. On Wed, 2014-01-29 at 13:08 +0100, Ales Justin wrote: > I'm looking at current WildFly integration. > > In CacheAdd I see this code: > > if ((lockingMode == LockingMode.OPTIMISTIC) && (isolationLevel == IsolationLevel.REPEATABLE_READ)) { > builder.locking().writeSkewCheck(true); > } > > but then locking has this validation: > > public void validate() { > if (writeSkewCheck) { > if (isolationLevel != IsolationLevel.REPEATABLE_READ) > throw new CacheConfigurationException("Write-skew checking only allowed with REPEATABLE_READ isolation level for cache"); > if (transaction().lockingMode != LockingMode.OPTIMISTIC) > throw new CacheConfigurationException("Write-skew checking only allowed with OPTIMISTIC transactions"); > if (!versioning().enabled || versioning().scheme != VersioningScheme.SIMPLE) > throw new CacheConfigurationException( > "Write-skew checking requires versioning to be enabled and versioning scheme 'SIMPLE' to be configured"); > > Yet there is no versioning handling in WF subsystem. 
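For reference, a configuration that satisfies the validate() rules quoted above has to enable versioning explicitly. Roughly, against the Infinispan 6.x ConfigurationBuilder API (a sketch of the valid combination, not the actual WildFly CacheAdd fix):

```java
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.util.concurrent.IsolationLevel;

// Sketch: the combination validate() accepts -- optimistic transactions,
// REPEATABLE_READ isolation, write-skew check, plus SIMPLE versioning.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.locking()
       .isolationLevel(IsolationLevel.REPEATABLE_READ)
       .writeSkewCheck(true);
builder.transaction()
       .lockingMode(LockingMode.OPTIMISTIC);
builder.versioning()
       .enable()
       .scheme(VersioningScheme.SIMPLE); // the part missing from the WF subsystem
```
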
> (just listing what's supported) > > private void parseCacheElement(XMLExtendedStreamReader reader, Element element, ModelNode cache, List operations) throws XMLStreamException { > switch (element) { > case LOCKING: { > case TRANSACTION: { > case EVICTION: { > case EXPIRATION: { > case STORE: { > case FILE_STORE: { > case STRING_KEYED_JDBC_STORE: { > case BINARY_KEYED_JDBC_STORE: { > case MIXED_KEYED_JDBC_STORE: { > case REMOTE_STORE: { > case INDEXING: { > default: { > throw ParseUtils.unexpectedElement(reader); > } > > How do you expect the user to get pass the validation, > where you magically enable writeSkewCheck? > > -Ales > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From paul.ferraro at redhat.com Wed Jan 29 09:19:21 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 29 Jan 2014 09:19:21 -0500 Subject: [infinispan-dev] wf config In-Reply-To: <1391002933.28710.2.camel@T520> References: <30E73386-C785-487D-96FD-295C49803CEF@gmail.com> <1391002933.28710.2.camel@T520> Message-ID: <1391005161.28710.4.camel@T520> FYI: https://issues.jboss.org/browse/WFLY-2829 https://github.com/wildfly/wildfly/pull/5808 On Wed, 2014-01-29 at 08:42 -0500, Paul Ferraro wrote: > Ooops. That's a bug - I'll submit a PR momentarily. > > On Wed, 2014-01-29 at 13:08 +0100, Ales Justin wrote: > > I'm looking at current WildFly integration. 
> > > > In CacheAdd I see this code: > > > > if ((lockingMode == LockingMode.OPTIMISTIC) && (isolationLevel == IsolationLevel.REPEATABLE_READ)) { > > builder.locking().writeSkewCheck(true); > > } > > > > but then locking has this validation: > > > > public void validate() { > > if (writeSkewCheck) { > > if (isolationLevel != IsolationLevel.REPEATABLE_READ) > > throw new CacheConfigurationException("Write-skew checking only allowed with REPEATABLE_READ isolation level for cache"); > > if (transaction().lockingMode != LockingMode.OPTIMISTIC) > > throw new CacheConfigurationException("Write-skew checking only allowed with OPTIMISTIC transactions"); > > if (!versioning().enabled || versioning().scheme != VersioningScheme.SIMPLE) > > throw new CacheConfigurationException( > > "Write-skew checking requires versioning to be enabled and versioning scheme 'SIMPLE' to be configured"); > > > > Yet there is no versioning handling in WF subsystem. > > (just listing what's supported) > > > > private void parseCacheElement(XMLExtendedStreamReader reader, Element element, ModelNode cache, List operations) throws XMLStreamException { > > switch (element) { > > case LOCKING: { > > case TRANSACTION: { > > case EVICTION: { > > case EXPIRATION: { > > case STORE: { > > case FILE_STORE: { > > case STRING_KEYED_JDBC_STORE: { > > case BINARY_KEYED_JDBC_STORE: { > > case MIXED_KEYED_JDBC_STORE: { > > case REMOTE_STORE: { > > case INDEXING: { > > default: { > > throw ParseUtils.unexpectedElement(reader); > > } > > > > How do you expect the user to get pass the validation, > > where you magically enable writeSkewCheck? 
> > > > -Ales > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From paul.ferraro at redhat.com Wed Jan 29 09:20:12 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 29 Jan 2014 09:20:12 -0500 Subject: [infinispan-dev] Store as binary In-Reply-To: <52D92AC4.7080701@redhat.com> References: <52D92AC4.7080701@redhat.com> Message-ID: <1391005212.28710.5.camel@T520> What was the read/write ratio used for this test? On Fri, 2014-01-17 at 14:06 +0100, Radim Vansa wrote: > Hi Mircea, > > I've ran a simple stress test [1] in dist mode with store as binary (not > enabled, enabled keys only, enabled values only, enabled both). > The difference is < 2 % (with storeAsBinary enabled fully being slower). > > Radim > > [1] > https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html > From rvansa at redhat.com Wed Jan 29 11:16:15 2014 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 29 Jan 2014 17:16:15 +0100 Subject: [infinispan-dev] Store as binary In-Reply-To: <1391005212.28710.5.camel@T520> References: <52D92AC4.7080701@redhat.com> <1391005212.28710.5.camel@T520> Message-ID: <52E9294F.8010701@redhat.com> 20 % writes, 80 % reads Radim On 01/29/2014 03:20 PM, Paul Ferraro wrote: > What was the read/write ratio used for this test? > > On Fri, 2014-01-17 at 14:06 +0100, Radim Vansa wrote: >> Hi Mircea, >> >> I've ran a simple stress test [1] in dist mode with store as binary (not >> enabled, enabled keys only, enabled values only, enabled both). >> The difference is < 2 % (with storeAsBinary enabled fully being slower). 
>> >> Radim >> >> [1] >> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Thu Jan 30 04:42:34 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Thu, 30 Jan 2014 15:12:34 +0530 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> Message-ID: On Jan 21, 2014, at 11:52 PM, Mircea Markus wrote: > > On Jan 15, 2014, at 1:42 PM, Emmanuel Bernard wrote: > >> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. >> Do you have written detailed use cases somewhere for me to better understand what is really requested? > > IMO from a user perspective, being able to run queries spanning several caches simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. Not sure if it simplifies things TBH if the configuration is the same. IMO, it just adds clutter. Just yesterday I discovered this gem in Scala's Shapeless extensions [1]. This is experimental stuff but essentially it allows you to define which key/value type pairs a map will contain, and it does type checking at compile time. I almost wet my pants when I saw that ;) :p. In the example, it defines a map as containing Int -> String, and String -> Int key/value pairs. If you try to add an Int -> Int, it fails compilation. 
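For comparison, the closest plain Java comes to the Shapeless HMap above is the well-known typesafe-heterogeneous-container idiom: keys carry their value type, so reads need no casts, but the pairing is enforced per Class token and checked at insert time rather than at compile time:

```java
import java.util.HashMap;
import java.util.Map;

// Typesafe heterogeneous container: each key is a Class token carrying
// its value type. Unlike Shapeless' HMap, the key -> value pairing is
// per Class, and the check happens at runtime on insert, not at compile time.
public class Favorites {
    private final Map<Class<?>, Object> favorites = new HashMap<>();

    public <T> void put(Class<T> type, T instance) {
        favorites.put(type, type.cast(instance)); // runtime type check
    }

    public <T> T get(Class<T> type) {
        return type.cast(favorites.get(type));    // callers need no unchecked cast
    }

    public static void main(String[] args) {
        Favorites f = new Favorites();
        f.put(String.class, "hello");
        f.put(Integer.class, 42);
        String s = f.get(String.class); // typed read, no cast
        System.out.println(s + " " + f.get(Integer.class)); // hello 42
    }
}
```
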
Java's type checking is not powerful enough to do this, and it's compilation logic is not extendable in the same way Scala macros does, but I think the fact that other languages are looking into this validates Paul's suggestion in [2], on top of all the benefits listed there. Cheers, [1] https://github.com/milessabin/shapeless/wiki/Feature-overview:-shapeless-2.0.0#heterogenous-maps [2] https://issues.jboss.org/browse/ISPN-3640 > Besides the query API that would need to be extended to support accessing multiple caches, not sure what other APIs would need to be extended to take advantage of this? > >> >> Emmanuel >> >> On 14 Jan 2014, at 12:59, Sanne Grinovero wrote: >> >>> Up this: it was proposed again today ad a face to face meeting. >>> Apparently multiple parties have been asking to be able to run >>> cross-cache queries. >>> >>> Sanne >>> >>> On 11 April 2012 12:47, Emmanuel Bernard wrote: >>>> >>>> On 10 avr. 2012, at 19:10, Sanne Grinovero wrote: >>>> >>>>> Hello all, >>>>> currently Infinispan Query is an interceptor registering on the >>>>> specific Cache instance which has indexing enabled; one such >>>>> interceptor is doing all what it needs to do in the sole scope of the >>>>> cache it was registered in. >>>>> >>>>> If you enable indexing - for example - on 3 different caches, there >>>>> will be 3 different Hibernate Search engines started in background, >>>>> and they are all unaware of each other. >>>>> >>>>> After some design discussions with Ales for CapeDwarf, but also >>>>> calling attention on something that bothered me since some time, I'd >>>>> evaluate the option to have a single Hibernate Search Engine >>>>> registered in the CacheManager, and have it shared across indexed >>>>> caches. >>>>> >>>>> Current design limitations: >>>>> >>>>> A- If they are all configured to use the same base directory to >>>>> store indexes, and happen to have same-named indexes, they'll share >>>>> the index without being aware of each other. 
This is going to break >>>>> unless the user configures some tricky parameters, and even so >>>>> performance won't be great: instances will lock each other out, or at >>>>> best write in alternate turns. >>>>> B- The search engine isn't particularly "heavy", still it would be >>>>> nice to share some components and internal services. >>>>> C- Configuration details which need some care - like injecting a >>>>> JGroups channel for clustering - needs to be done right isolating each >>>>> instance (so large parts of configuration would be quite similar but >>>>> not totally equal) >>>>> D- Incoming messages into a JGroups Receiver need to be routed not >>>>> only among indexes, but also among Engine instances. This prevents >>>>> Query to reuse code from Hibernate Search. >>>>> >>>>> Problems with a unified Hibernate Search Engine: >>>>> >>>>> 1#- Isolation of types / indexes. If the same indexed class is >>>>> stored in different (indexed) caches, they'll share the same index. Is >>>>> it a problem? I'm tempted to consider this a good thing, but wonder if >>>>> it would surprise some users. Would you expect that? >>>> >>>> I would not expect that. Unicity in Hibernate Search is not defined per identity but per class + provided id. >>>> I can see people reusing the same class as partial DTO and willing to index that. I can even see people >>>> using the Hibernate Search programmatic API to index the "DTO" stored in cache 2 differently than the >>>> domain class stored in cache 1. >>>> I can concede that I am pushing a bit the use case towards bad-ish design approaches. >>>> >>>>> 2#- configuration format overhaul: indexing options won't be set on >>>>> the cache section but in the global section. I'm looking forward to >>>>> use the schema extensions anyway to provide a better configuration >>>>> experience than the current . >>>>> 3#- Assuming 1# is fine, when a search hit is found I'd need to be >>>>> able to figure out from which cache the value should be loaded. 
>>>>> 3#A we could have the cache name encoded in the index, as part >>>>> of the identifier: {PK,cacheName} >>>>> 3#B we actually shard the index, keeping a physically separate >>>>> index per cache. This would mean searching on the joint index view but >>>>> extracting hits from specific indexes to keep track of "which index".. >>>>> I think we can do that but it's definitely tricky. >>>>> >>>>> It's likely easier to keep indexed values from different caches in >>>>> different indexes. that would mean to reject #1 and mess with the user >>>>> defined index name, to add for example the cache name to the user >>>>> defined string. >>>>> >>>>> Any comment? >>>>> >>>>> Cheers, >>>>> Sanne >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From rvansa at redhat.com Thu Jan 30 05:29:46 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 30 Jan 2014 11:29:46 +0100 Subject: 
[infinispan-dev] JPA Store -> Hibernate Store?
Message-ID: <52EA299A.9090900@redhat.com>

Hi,

as I am upgrading the JPA Store to work with the Infinispan 6.0 SPI, there have been several ideas/recommendations to use Hibernate-specific APIs [1][2]. Currently, the code uses javax.persistence.* only (although it runs on the Hibernate implementation). What do you think, should we:

a) stay with javax.persistence only
b) use the Hibernate API, if it offers better performance / gets rid of some problems -> should we then rename the store to infinispan-persistence-hibernate? Or is the Hibernate API an implementation detail?
c) provide both a performant (Hibernate) and a standard implementation?

My guess is b) (without renaming), as the main idea should be that we can store JPA objects into a relational DB.

Radim

[1] https://issues.jboss.org/browse/ISPN-3953
[2] https://issues.jboss.org/browse/ISPN-3954

--
Radim Vansa
JBoss DataGrid QA

From anistor at redhat.com Thu Jan 30 07:13:20 2014
From: anistor at redhat.com (Adrian Nistor)
Date: Thu, 30 Jan 2014 14:13:20 +0200
Subject: [infinispan-dev] reusing infinispan's marshalling
Message-ID: <52EA41E0.2010505@redhat.com>

Hi list!

I've been pondering about re-using the marshalling machinery of Infinispan in another project, specifically in ProtoStream, where I'm planning to add it as a test-scoped dependency so I can create a benchmark to compare marshalling performance. I'm basically interested in comparing ProtoStream and Infinispan's JBoss Marshalling based mechanism. Comparing against plain JBMAR, without the ExternalizerTable and Externalizers introduced by Infinispan, is not going to get me accurate results. But how? I see the marshalling is spread across the infinispan-commons and infinispan-core modules.

Thanks!
Adrian

From mmarkus at redhat.com Thu Jan 30 09:47:03 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Thu, 30 Jan 2014 14:47:03 +0000
Subject: [infinispan-dev] Frequent releases?
In-Reply-To: References: Message-ID:

On Jan 29, 2014, at 12:05 AM, Sanne Grinovero wrote:

> Hi all,
> can I hope for a release to happen soon?
>
> I am needing releases to happen more frequently, or the various
> cross-project integrations can't evolve.

+1. We have quite a few pending things to integrate. I'll update the release schedule for 7.0 and suggest a 1-2 week interval for releasing alphas and betas.

> During the 6.0 iteration nothing happened for months, then we went
> crazy fast

That's not so! :-) Looking at the dates it wasn't like that at all: the first 6.0 alpha was released 3 weeks after 5.3.0.Final, followed by releases every 1-2 weeks. If anything, I think the 6.0.0 release was pretty well balanced.

6.0.0.Final 18/Nov/13
6.0.0.CR1 04/Oct/13
6.0.0.Beta2 27/Sep/13
6.0.0.Beta1 19/Sep/13
6.0.0.Alpha4 06/Sep/13
6.0.0.Alpha3 21/Aug/13
6.0.0.Alpha2 02/Aug/13
6.0.0.Alpha1 17/Jul/13
5.3.0.Final 25/Jun/13

> and the time margin was too short for me to perform various
> improvements before getting at CR phases (which I consider too late):

The CR was released 4 months after the 6.0 development began.

> ideally I'd like to see timeboxed releases, following a reliable
> pattern: like every two weeks would be awesome.

+1, and that's what we actually did during 6.0 (except for the CR1-Final gap, when we were struggling with the performance problems).

> To make an example, I've released 8 tags (counting various projects)
> just this past 2 weeks to accommodate the evolution of coupled sister
> projects, not least to include fixes and adapt for API or SPI changes
> into Infinispan.
> > Cheers, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Thu Jan 30 14:51:13 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 30 Jan 2014 19:51:13 +0000 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> Message-ID: On Jan 30, 2014, at 9:42 AM, Galder Zamarre?o wrote: > > On Jan 21, 2014, at 11:52 PM, Mircea Markus wrote: > >> >> On Jan 15, 2014, at 1:42 PM, Emmanuel Bernard wrote: >> >>> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. >>> Do you have written detailed use cases somewhere for me to better understand what is really requested? >> >> IMO from a user perspective, being able to run queries spreading several caches makes the programming simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. > > Not sure if it simplifies things TBH if the configuration is the same. IMO, it just adds clutter. Not sure I follow: having a cache that contains both Cars and Persons sound more cluttering to me. I think it's cumbersome to write any kind of querying with an heterogenous cache, e.g. Map/Reduce tasks that need to count all the green Cars would need to be aware of Persons and ignore them. Not only it is harder to write, but discourages code reuse and makes it hard to maintain (if you'll add Pets in the same cache in future you need to update the M/R code as well). 
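The type-filtering clutter described above can be sketched as follows. The `Car`/`Person` classes are hypothetical, and the `BiConsumer` stands in for the `Collector` callback of Infinispan's Map/Reduce `Mapper` interface — a rough illustration, not the real API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical value types sharing one heterogeneous cache.
class Car { String colour; Car(String c) { colour = c; } }
class Person { String name; Person(String n) { name = n; } }

class GreenCarMapper {
    // With a heterogeneous cache, the mapper must type-check every value
    // and silently skip everything that is not a Car (Person, Pet, ...).
    void map(String key, Object value, BiConsumer<String, Integer> collector) {
        if (!(value instanceof Car)) {
            return; // boilerplate that a Car-only cache would not need
        }
        Car car = (Car) value;
        if ("green".equals(car.colour)) {
            collector.accept("green", 1);
        }
    }
}
```

With one entity type per cache, the `instanceof` guard disappears and the mapper signature could be typed as `Car` directly.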
And of course there are also different cache-based configuration options that are not immediately obvious (at design time) but will be in the future (there are more Persons than Cars, they live longer/expiry etc): mixing everything together in the same cache from the begging is a design decision that might bite you in the future. The way I see it - and very curious to see your opinion on this - following an database analogy, the CacheManager corresponds to an Database and the Cache to a Table. Hence my thought that queries spreading multiple caches are both useful and needed (same as query spreading over multiple tables). > > Just yesterday I discovered this gem in Scala's Shapeless extensions [1]. This is experimental stuff but essentially it allows to define what the key/value type pairs a map will contain, and it does type checking at compile time. I almost wet my pants when I saw that ;) :p. In the example, it defines a map as containing Int -> String, and String -> Int key/value pairs. If you try to add an Int -> Int, it fails compilation. Agreed the compile time check is pretty awesome :-) Still mix and matching types in a Map doesn't look great to me for ISPN. > > Java's type checking is not powerful enough to do this, and it's compilation logic is not extendable in the same way Scala macros does, but I think the fact that other languages are looking into this validates Paul's suggestion in [2], on top of all the benefits listed there. > > Cheers, > > [1] https://github.com/milessabin/shapeless/wiki/Feature-overview:-shapeless-2.0.0#heterogenous-maps > [2] https://issues.jboss.org/browse/ISPN-3640 > >> Besides the query API that would need to be extended to support accessing multiple caches, not sure what other APIs would need to be extended to take advantage of this? >> >>> >>> Emmanuel >>> >>> On 14 Jan 2014, at 12:59, Sanne Grinovero wrote: >>> >>>> Up this: it was proposed again today ad a face to face meeting. 
>>>> Apparently multiple parties have been asking to be able to run >>>> cross-cache queries. >>>> >>>> Sanne >>>> >>>> On 11 April 2012 12:47, Emmanuel Bernard wrote: >>>>> >>>>> On 10 avr. 2012, at 19:10, Sanne Grinovero wrote: >>>>> >>>>>> Hello all, >>>>>> currently Infinispan Query is an interceptor registering on the >>>>>> specific Cache instance which has indexing enabled; one such >>>>>> interceptor is doing all what it needs to do in the sole scope of the >>>>>> cache it was registered in. >>>>>> >>>>>> If you enable indexing - for example - on 3 different caches, there >>>>>> will be 3 different Hibernate Search engines started in background, >>>>>> and they are all unaware of each other. >>>>>> >>>>>> After some design discussions with Ales for CapeDwarf, but also >>>>>> calling attention on something that bothered me since some time, I'd >>>>>> evaluate the option to have a single Hibernate Search Engine >>>>>> registered in the CacheManager, and have it shared across indexed >>>>>> caches. >>>>>> >>>>>> Current design limitations: >>>>>> >>>>>> A- If they are all configured to use the same base directory to >>>>>> store indexes, and happen to have same-named indexes, they'll share >>>>>> the index without being aware of each other. This is going to break >>>>>> unless the user configures some tricky parameters, and even so >>>>>> performance won't be great: instances will lock each other out, or at >>>>>> best write in alternate turns. >>>>>> B- The search engine isn't particularly "heavy", still it would be >>>>>> nice to share some components and internal services. >>>>>> C- Configuration details which need some care - like injecting a >>>>>> JGroups channel for clustering - needs to be done right isolating each >>>>>> instance (so large parts of configuration would be quite similar but >>>>>> not totally equal) >>>>>> D- Incoming messages into a JGroups Receiver need to be routed not >>>>>> only among indexes, but also among Engine instances. 
This prevents >>>>>> Query to reuse code from Hibernate Search. >>>>>> >>>>>> Problems with a unified Hibernate Search Engine: >>>>>> >>>>>> 1#- Isolation of types / indexes. If the same indexed class is >>>>>> stored in different (indexed) caches, they'll share the same index. Is >>>>>> it a problem? I'm tempted to consider this a good thing, but wonder if >>>>>> it would surprise some users. Would you expect that? >>>>> >>>>> I would not expect that. Unicity in Hibernate Search is not defined per identity but per class + provided id. >>>>> I can see people reusing the same class as partial DTO and willing to index that. I can even see people >>>>> using the Hibernate Search programmatic API to index the "DTO" stored in cache 2 differently than the >>>>> domain class stored in cache 1. >>>>> I can concede that I am pushing a bit the use case towards bad-ish design approaches. >>>>> >>>>>> 2#- configuration format overhaul: indexing options won't be set on >>>>>> the cache section but in the global section. I'm looking forward to >>>>>> use the schema extensions anyway to provide a better configuration >>>>>> experience than the current . >>>>>> 3#- Assuming 1# is fine, when a search hit is found I'd need to be >>>>>> able to figure out from which cache the value should be loaded. >>>>>> 3#A we could have the cache name encoded in the index, as part >>>>>> of the identifier: {PK,cacheName} >>>>>> 3#B we actually shard the index, keeping a physically separate >>>>>> index per cache. This would mean searching on the joint index view but >>>>>> extracting hits from specific indexes to keep track of "which index".. >>>>>> I think we can do that but it's definitely tricky. >>>>>> >>>>>> It's likely easier to keep indexed values from different caches in >>>>>> different indexes. that would mean to reject #1 and mess with the user >>>>>> defined index name, to add for example the cache name to the user >>>>>> defined string. >>>>>> >>>>>> Any comment? 
>>>>>> >>>>>> Cheers, >>>>>> Sanne >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From galder at redhat.com Fri Jan 31 02:08:24 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Fri, 31 Jan 2014 12:38:24 +0530 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? Message-ID: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> Hi all, The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. 
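In code, the proposal amounts to moving the sync/async choice from configuration to the call site. A sketch with a minimal hypothetical cache — Infinispan's real async methods live on the `Cache` interface (e.g. `putAsync`, which in 6.x returns a `NotifyingFuture`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for a cache where put() is always synchronous
// and asynchrony is explicit, per operation, via putAsync().
class SketchCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    V put(K key, V value) {              // returns once the write is applied
        return store.put(key, value);
    }

    Future<V> putAsync(K key, V value) { // caller decides whether to wait
        return executor.submit(() -> store.put(key, value));
    }

    V get(K key) { return store.get(key); }

    void stop() { executor.shutdown(); }
}
```

The caller, not the cache configuration, then expresses the intent: `cache.put(k, v)` when the write must be visible on return, `cache.putAsync(k, v)` when it need not be.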
Instead, whoever wants to store something asyncronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync. This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. WDYT? Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz Project Lead, Escalante http://escalante.io Engineer, Infinispan http://infinispan.org From rvansa at redhat.com Fri Jan 31 02:30:56 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 31 Jan 2014 08:30:56 +0100 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> Message-ID: <52EB5130.7010907@redhat.com> On 01/30/2014 08:51 PM, Mircea Markus wrote: > On Jan 30, 2014, at 9:42 AM, Galder Zamarre?o wrote: > >> On Jan 21, 2014, at 11:52 PM, Mircea Markus wrote: >> >>> On Jan 15, 2014, at 1:42 PM, Emmanuel Bernard wrote: >>> >>>> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. >>>> Do you have written detailed use cases somewhere for me to better understand what is really requested? >>> IMO from a user perspective, being able to run queries spreading several caches makes the programming simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. >> Not sure if it simplifies things TBH if the configuration is the same. IMO, it just adds clutter. > Not sure I follow: having a cache that contains both Cars and Persons sound more cluttering to me. I think it's cumbersome to write any kind of querying with an heterogenous cache, e.g. 
Map/Reduce tasks that need to count all the green Cars would need to be aware of Persons and ignore them. Not only it is harder to write, but discourages code reuse and makes it hard to maintain (if you'll add Pets in the same cache in future you need to update the M/R code as well). And of course there are also different cache-based configuration options that are not immediately obvious (at design time) but will be in the future (there are more Persons than Cars, they live longer/expiry etc): mixing everything together in the same cache from the begging is a design decision that might bite you in the future. > > The way I see it - and very curious to see your opinion on this - following an database analogy, the CacheManager corresponds to an Database and the Cache to a Table. Hence my thought that queries spreading multiple caches are both useful and needed (same as query spreading over multiple tables). I would be all hands for this approach, but there's still one thing where it makes sense - Animal cache with Cats and Dogs. Radim From dereed at redhat.com Fri Jan 31 02:32:39 2014 From: dereed at redhat.com (Dennis Reed) Date: Fri, 31 Jan 2014 01:32:39 -0600 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? In-Reply-To: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> Message-ID: <52EB5197.4050801@redhat.com> It would be a loss of functionality. As a common example, the AS web session replication cache is configured for ASYNC by default, for performance reasons. But it can be changed to SYNC to guarantee that when the request finishes that the session was replicated. That wouldn't be possible if you could no longer switch between ASYNC/SYNC with just a configuration change. 
-Dennis On 01/31/2014 01:08 AM, Galder Zamarre?o wrote: > Hi all, > > The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. > > Instead, whoever wants to store something asyncronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync. This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. > > WDYT? > > Cheers, > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > Project Lead, Escalante > http://escalante.io > > Engineer, Infinispan > http://infinispan.org > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Fri Jan 31 03:28:24 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 31 Jan 2014 09:28:24 +0100 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> Message-ID: <2C233AC3-BEFC-4FD5-A297-A854FEA8165D@hibernate.org> > On 30 janv. 2014, at 20:51, Mircea Markus wrote: > > >> On Jan 30, 2014, at 9:42 AM, Galder Zamarre?o wrote: >> >> >>> On Jan 21, 2014, at 11:52 PM, Mircea Markus wrote: >>> >>> >>>> On Jan 15, 2014, at 1:42 PM, Emmanuel Bernard wrote: >>>> >>>> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. >>>> Do you have written detailed use cases somewhere for me to better understand what is really requested? 
>>> >>> IMO from a user perspective, being able to run queries spreading several caches makes the programming simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. >> >> Not sure if it simplifies things TBH if the configuration is the same. IMO, it just adds clutter. > > Not sure I follow: having a cache that contains both Cars and Persons sound more cluttering to me. I think it's cumbersome to write any kind of querying with an heterogenous cache, e.g. Map/Reduce tasks that need to count all the green Cars would need to be aware of Persons and ignore them. Not only it is harder to write, but discourages code reuse and makes it hard to maintain (if you'll add Pets in the same cache in future you need to update the M/R code as well). And of course there are also different cache-based configuration options that are not immediately obvious (at design time) but will be in the future (there are more Persons than Cars, they live longer/expiry etc): mixing everything together in the same cache from the begging is a design decision that might bite you in the future. > > The way I see it - and very curious to see your opinion on this - following an database analogy, the CacheManager corresponds to an Database and the Cache to a Table. Hence my thought that queries spreading multiple caches are both useful and needed (same as query spreading over multiple tables). I know Sanne and you are keen to have one entity type per cache to be able to fine tune the configuration. I am a little more skeptical but I don't have strong opinions on the subject. 
However, I don't think you can forbid the case where people want to store heterogenous types in the same cache: - it's easy to start with - configuration is indeed simpler - when you work in the same service with cats, dogs, owners, addresses and refuges, juggling between these n Cache instances begins to be fugly I suspect - should write some application code to confirm - people will add to the grid types unknown at configuration time. They might want a single bucket. Btw with the distributed execution engine, it looks reasonably simple to migrate data from one cache to another. I imagine you can also focus only on the keys whose node is primary which should limit data transfers. Am I missing something? From mmarkus at redhat.com Fri Jan 31 04:39:56 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 31 Jan 2014 09:39:56 +0000 Subject: [infinispan-dev] Design change in Infinispan Query In-Reply-To: <52EB5130.7010907@redhat.com> References: <888EA204-30A1-4BFF-9469-7118996024A1@hibernate.org> <6211A55D-9F1D-4686-9EF4-373C216E4927@hibernate.org> <52EB5130.7010907@redhat.com> Message-ID: <09F7770F-E174-41AB-ADB0-605ABF0FCBB6@redhat.com> On Jan 31, 2014, at 7:30 AM, Radim Vansa wrote: > On 01/30/2014 08:51 PM, Mircea Markus wrote: >> On Jan 30, 2014, at 9:42 AM, Galder Zamarre?o wrote: >> >>> On Jan 21, 2014, at 11:52 PM, Mircea Markus wrote: >>> >>>> On Jan 15, 2014, at 1:42 PM, Emmanuel Bernard wrote: >>>> >>>>> By the way, people looking for that feature are also asking for a unified Cache API accessing these several caches right? Otherwise I am not fully understanding why they ask for a unified query. >>>>> Do you have written detailed use cases somewhere for me to better understand what is really requested? >>>> IMO from a user perspective, being able to run queries spreading several caches makes the programming simplifies the programming model: each cache corresponding to a single entity type, with potentially different configuration. 
>>> Not sure if it simplifies things TBH if the configuration is the same. IMO, it just adds clutter.
>> Not sure I follow: having a cache that contains both Cars and Persons sounds more cluttering to me. I think it's cumbersome to write any kind of querying with a heterogeneous cache, e.g. Map/Reduce tasks that need to count all the green Cars would need to be aware of Persons and ignore them. Not only is it harder to write, but it discourages code reuse and makes it hard to maintain (if you add Pets to the same cache in the future you need to update the M/R code as well). And of course there are also different cache-based configuration options that are not immediately obvious (at design time) but will be in the future (there are more Persons than Cars, they live longer/expiry etc): mixing everything together in the same cache from the beginning is a design decision that might bite you in the future.
>>
>> The way I see it - and very curious to see your opinion on this - following a database analogy, the CacheManager corresponds to a Database and the Cache to a Table. Hence my thought that queries spreading multiple caches are both useful and needed (same as queries spreading over multiple tables).
> I would be all hands for this approach, but there's still one thing
> where it makes sense - an Animal cache with Cats and Dogs.

Not a good idea to keep cats and dogs together :-) Would it be a problem if the user works with Animals only?

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From ttarrant at redhat.com Fri Jan 31 05:48:12 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Fri, 31 Jan 2014 11:48:12 +0100
Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores?
In-Reply-To: <52EB5197.4050801@redhat.com>
References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> <52EB5197.4050801@redhat.com>
Message-ID: <52EB7F6C.505@redhat.com>

Couldn't this be handled higher up in our implementation then?
If I enable an async mode, all puts / gets become putAsync/getAsync transparently to both the application and to the state transfer. Tristan On 01/31/2014 08:32 AM, Dennis Reed wrote: > It would be a loss of functionality. > > As a common example, the AS web session replication cache is configured > for ASYNC by default, for performance reasons. > But it can be changed to SYNC to guarantee that when the request > finishes that the session was replicated. > > That wouldn't be possible if you could no longer switch between > ASYNC/SYNC with just a configuration change. > > -Dennis > > On 01/31/2014 01:08 AM, Galder Zamarre?o wrote: >> Hi all, >> >> The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. >> >> Instead, whoever wants to store something asyncronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync. This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. >> >> WDYT? >> >> Cheers, >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> Project Lead, Escalante >> http://escalante.io >> >> Engineer, Infinispan >> http://infinispan.org >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Fri Jan 31 05:59:45 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 31 Jan 2014 10:59:45 +0000 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? 
In-Reply-To: <52EB7F6C.505@redhat.com> References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> <52EB5197.4050801@redhat.com> <52EB7F6C.505@redhat.com> Message-ID: Generally I like the systems designed with SYNC_DIST + async shared cachestore. It's probably the best setup we can offer: - you need a shared cachestore for persistence consistency - using SYNC distribution to other replicas provides a fairly decent resilience - if your cachestore needs to be updated in sync, your write performance will be limited by the cachestore performance: this prevents you from using Infinispan as a buffer, absorbing write spikes, and reducing write latency But I agree we should investigate removing duplicate "asynchronizations" where they are not needed, there might be some opportunities to remove thread switching and blocking. On 31 January 2014 10:48, Tristan Tarrant wrote: > Couldn't this be handled higher up in our implementation then ? > > If I enable an async mode, all puts / gets become putAsync/getAsync > transparently to both the application and to the state transfer. > > Tristan > > On 01/31/2014 08:32 AM, Dennis Reed wrote: >> It would be a loss of functionality. >> >> As a common example, the AS web session replication cache is configured >> for ASYNC by default, for performance reasons. >> But it can be changed to SYNC to guarantee that when the request >> finishes the session was replicated. >> >> That wouldn't be possible if you could no longer switch between >> ASYNC/SYNC with just a configuration change. >> >> -Dennis >> >> On 01/31/2014 01:08 AM, Galder Zamarreño wrote: >>> Hi all, >>> >>> The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. >>> >>> Instead, whoever wants to store something asynchronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync.
This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. >>> >>> WDYT? >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> Project Lead, Escalante >>> http://escalante.io >>> >>> Engineer, Infinispan >>> http://infinispan.org >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Fri Jan 31 07:33:18 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 31 Jan 2014 12:33:18 +0000 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? In-Reply-To: References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> <52EB5197.4050801@redhat.com> <52EB7F6C.505@redhat.com> Message-ID: <12300D48-F6C2-410B-B397-5B9D0818C194@redhat.com> On Jan 31, 2014, at 10:59 AM, Sanne Grinovero wrote: > Generally I like the systems designed with SYNC_DIST + async shared cachestore. > > It's probably the best setup we can offer: > - you need a shared cachestore for persistence consistency > - using SYNC distribution to other replicas provides a fairly decent resilience > - if your cachestore needs to be updated in sync, your write > performance will be limited by the cachestore performance: this > prevents you to use Infinispan to buffer, absorbing write spikes, and > reducing write latency +1. I would add to that that the async store also shields the user from e.g. 
database's availability or slowness: if the write to the database takes time then entries are queued up in memory and written when the backend can handle it. > > But I agree we should investigate on removing duplicate > "asynchronizations" where they are not needed, there might be some > opportunities to remove thread switching and blocking. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From rvansa at redhat.com Fri Jan 31 07:35:21 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 31 Jan 2014 13:35:21 +0100 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? In-Reply-To: <52EB7F6C.505@redhat.com> References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> <52EB5197.4050801@redhat.com> <52EB7F6C.505@redhat.com> Message-ID: <52EB9889.9070800@redhat.com> Worth noting that Infinispan does not have true async operations - executing a synchronous request in another threadpool is a rather simplistic solution that has serious drawbacks (I can imagine a situation where I'd do 100 async gets in parallel, but this would drain the whole threadpool). Implementing that would require serious changes in all interceptors, because you wouldn't be able to call visitWhateverCommand(command) { /* do something */ try { invokeNextInterceptor(command); } finally { /* do another stuff */ } } - you'd have to move all local state into the context prior to invoking the next interceptor. And you'd need twice as many methods, because now the code would explicitly traverse the interceptor stack in both directions. Still, I believe that this may be something to consider/plan for the future. And then, yes, you'd need just put(key, value) { future = putAsync(key, value); return sync ? future.get() : null; } Radim On 01/31/2014 11:48 AM, Tristan Tarrant wrote: > Couldn't this be handled higher up in our implementation then ? > > If I enable an async mode, all puts / gets become putAsync/getAsync > transparently to both the application and to the state transfer.
> > Tristan > > On 01/31/2014 08:32 AM, Dennis Reed wrote: >> It would be a loss of functionality. >> >> As a common example, the AS web session replication cache is configured >> for ASYNC by default, for performance reasons. >> But it can be changed to SYNC to guarantee that when the request >> finishes that the session was replicated. >> >> That wouldn't be possible if you could no longer switch between >> ASYNC/SYNC with just a configuration change. >> >> -Dennis >> >> On 01/31/2014 01:08 AM, Galder Zamarre?o wrote: >>> Hi all, >>> >>> The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. >>> >>> Instead, whoever wants to store something asyncronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync. This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. >>> >>> WDYT? 
>>> >>> Cheers, >>> -- >>> Galder Zamarreño >>> galder at redhat.com >>> twitter.com/galderz >>> >>> Project Lead, Escalante >>> http://escalante.io >>> >>> Engineer, Infinispan >>> http://infinispan.org >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From mudokonman at gmail.com Fri Jan 31 08:26:45 2014 From: mudokonman at gmail.com (William Burns) Date: Fri, 31 Jan 2014 08:26:45 -0500 Subject: [infinispan-dev] Ditching ASYNC modes for REPL/DIST/INV/CacheStores? In-Reply-To: <52EB9889.9070800@redhat.com> References: <57740CDD-8EFD-4D3E-9395-FABCF19B2448@redhat.com> <52EB5197.4050801@redhat.com> <52EB7F6C.505@redhat.com> <52EB9889.9070800@redhat.com> Message-ID: +1 to moving to the async methods only. I have mentioned this as well in passing when discussing L1 as there is no way to ensure consistency with an async transport. Although if we fire the async methods with either SKIP_REMOTE_LOOKUP/IGNORE_RETURN_VALUES flag then this consistency is still lost, as I am guessing some people will want this to reduce network overhead. When I thought about this before these were the drawbacks I could think of: 1. Another configuration option that may need to be tweaked (async executor properties) - although we could then get rid of AsyncConfiguration so overall less configuration 2. The submitting node would hold memory for a bit longer since it has to keep the callable in memory during the call instead of until it is sent through JGroups 3.
You can overwhelm the thread pool with requests and use more threads, technically you could swamp the async transport as well 4. We don't get the benefit of batching requests with the replication queue 5. We have to process the response since we aren't using GET_NONE with JGroups - although we can't guarantee consistency without doing so. I personally think to reduce complexity of code/configuration and provide consistency these are probably fine. Also, note that if we moved to async methods it should be faster for any invoker since it doesn't even have to traverse the interceptor chain at all in the calling thread (implicit async marshalling). This also means that every operation from the same node will not be ordered using the OOB thread pool, allowing messages to the same node to operate in parallel. - Will On Fri, Jan 31, 2014 at 7:35 AM, Radim Vansa wrote: > Worth to note that Infinispan does not have true async operation - > executing synchronous request in another threadpool is rather simplistic > solution that has serious drawbacks (I can imagine a situation where I'd > do 100 async gets in parallel, but this would drain the whole threadpool). I agree; if we could optimize this with batching it would make it better. > > Implementing that would require serious changes in all interceptors, > because you wouldn't be able to call > > visitWhateverCommand(command) { > /* do something */ > try { > invokeNextInterceptor(command); > } finally { > /* do another stuff */ > } > } > > - you'd have to put all local state prior to invoking next interceptor > to context. And you'd need twice as many methods, because now the code > would explicitly traverse interceptor stack in both directions. I am not quite sure what you mean here. Async transport currently traverses the interceptors for originator and receiver (albeit originator goes back up without a response). > > Still, I believe that this may be something to consider/plan for future.
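The put()-over-putAsync wrapper Radim sketches in this thread can be illustrated in plain Java with CompletableFuture — a minimal sketch with hypothetical names, not Infinispan's actual internals:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical cache with a single async code path; the sync flavour is
// derived by blocking on the returned future, as in Radim's pseudocode.
public class AsyncFirstCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final boolean sync;

    public AsyncFirstCache(boolean sync) { this.sync = sync; }

    // The only real write path: hand the operation to another thread.
    public CompletableFuture<V> putAsync(K key, V value) {
        return CompletableFuture.supplyAsync(() -> store.put(key, value), executor);
    }

    // put() is just putAsync() plus an optional blocking join().
    public V put(K key, V value) {
        CompletableFuture<V> future = putAsync(key, value);
        return sync ? future.join() : null;
    }

    public V get(K key) { return store.get(key); }

    public void stop() { executor.shutdown(); }
}
```

This also makes Radim's drawback concrete: every "async" put still occupies an executor thread for the full duration of the operation, so many parallel calls can drain the pool.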
> > And then, yes, you'd need just > > put(key, value) { > future = putAsync(key, value); > return sync ? future.get() : null; > } For sync we would want to invoke directly to avoid context switching. > > Radim > > On 01/31/2014 11:48 AM, Tristan Tarrant wrote: >> Couldn't this be handled higher up in our implementatoin then ? >> >> If I enable an async mode, all puts / gets become putAsync/getAsync >> transparently to both the application and to the state transfer. >> >> Tristan >> >> On 01/31/2014 08:32 AM, Dennis Reed wrote: >>> It would be a loss of functionality. >>> >>> As a common example, the AS web session replication cache is configured >>> for ASYNC by default, for performance reasons. >>> But it can be changed to SYNC to guarantee that when the request >>> finishes that the session was replicated. >>> >>> That wouldn't be possible if you could no longer switch between >>> ASYNC/SYNC with just a configuration change. >>> >>> -Dennis >>> >>> On 01/31/2014 01:08 AM, Galder Zamarre?o wrote: >>>> Hi all, >>>> >>>> The following came to my mind yesterday: I think we should ditch ASYNC modes for DIST/REPL/INV and our async cache store functionality. >>>> >>>> Instead, whoever wants to store something asyncronously should use asynchronous methods, i.e. call putAsync. So, this would mean that when you call put(), it's always sync. This would reduce the complexity and configuration of our code base, without affecting our functionality, and it would make things more logical IMO. >>>> >>>> WDYT? 
>>>> >>>> Cheers, >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> Project Lead, Escalante >>>> http://escalante.io >>>> >>>> Engineer, Infinispan >>>> http://infinispan.org >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Jan 31 08:44:03 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 31 Jan 2014 13:44:03 +0000 Subject: [infinispan-dev] JPA Store -> Hibernate Store? In-Reply-To: <52EA299A.9090900@redhat.com> References: <52EA299A.9090900@redhat.com> Message-ID: +1 on b without renaming. The fact that we're using Hibernate is an implementation detail, I think we should focus on the user contract: the end user is supposed to provide annotated entities. The user is free to use JPA annotations only in his mapping, so I think the name is not too bad. Still in practice the user is also free to use Hibernate specific annotations, but you have this same liberty when deploying a JPA based application in an pplication server: we don't strictly ban their usage, but that doesn't imply a name change either. 
Sanne On 30 January 2014 10:29, Radim Vansa wrote: > Hi, > > as I am upgrading the JPA Store to work with Infinispan 6.0 SPI, there > have been several ideas/recommendations to use Hibernate-specific API > [1][2]. Currently, the code uses javax.persistence.* stuff only > (although it runs on the Hibernate implementation). > > What do you think, should we: > a) stay with javax.persistence only > b) use hibernate API, if it offers better performance / gets rid of some > problems -> should we then rename the store to > infinispan-persistence-hibernate? Or is the Hibernate API an > implementation detail? > c) provide performant (hibernate) and standard implementation? > > My guess is b) (without renaming) as the main idea should be that we can > store JPA objects into relational DB > > Radim > > [1] https://issues.jboss.org/browse/ISPN-3953 > [2] https://issues.jboss.org/browse/ISPN-3954 > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Fri Jan 31 08:54:34 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 31 Jan 2014 13:54:34 +0000 Subject: [infinispan-dev] JPA Store -> Hibernate Store? In-Reply-To: References: <52EA299A.9090900@redhat.com> Message-ID: <3BB7EE19-7FDB-4503-83CD-D9647C94C528@redhat.com> On Jan 31, 2014, at 1:44 PM, Sanne Grinovero wrote: > +1 on b without renaming. > > The fact that we're using Hibernate is an implementation detail, I > think we should focus on the user contract: the end user is supposed > to provide annotated entities. > > The user is free to use JPA annotations only in his mapping, so I > think the name is not too bad.
> Still in practice the user is also free to use Hibernate specific > annotations, but you have this same liberty when deploying a JPA based > application in an application server: we don't strictly ban their > usage, but that doesn't imply a name change either. +1 on Hibernate being an implementation detail. > > Sanne > > > On 30 January 2014 10:29, Radim Vansa wrote: >> Hi, >> >> as I am upgrading the JPA Store to work with Infinispan 6.0 SPI, there >> have been several ideas/recommendations to use Hibernate-specific API >> [1][2]. Currently, the code uses javax.persistence.* stuff only >> (although it runs on the Hibernate implementation). >> >> What do you think, should we: >> a) stay with javax.persistence only >> b) use hibernate API, if it offers better performance / gets rid of some >> problems -> should we then rename the store to >> infinispan-persistence-hibernate? Or is the Hibernate API an >> implementation detail? >> c) provide performant (hibernate) and standard implementation?
>> >> My guess is b) (without renaming) as the main idea should be that we can >> store JPA objects into relational DB >> >> Radim >> >> [1] https://issues.jboss.org/browse/ISPN-3953 >> [2] https://issues.jboss.org/browse/ISPN-3954 >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From vblagoje at redhat.com Fri Jan 31 11:29:49 2014 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Fri, 31 Jan 2014 11:29:49 -0500 Subject: [infinispan-dev] reusing infinispan's marshalling In-Reply-To: <52EA41E0.2010505@redhat.com> References: <52EA41E0.2010505@redhat.com> Message-ID: <52EBCF7D.2030707@redhat.com> Not 100% related to what you are asking about but have a look at this post and the discussion that "erupted": http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html Vladimir On 1/30/2014, 7:13 AM, Adrian Nistor wrote: > Hi list! > > I've been pondering about re-using the marshalling machinery of > Infinispan in another project, specifically in ProtoStream, where I'm > planning to add it as a test scoped dependency so I can create a > benchmark to compare marshalling performace. I'm basically interested > in comparing ProtoStream and Infinispan's JBoss Marshalling based > mechanism. Comparing against plain JBMAR, without using the > ExternalizerTable and Externalizers introduced by Infinispan is not > going to get me accurate results. > > But how? I see the marshaling is spread across infinispan-commons and > infinispan-core modules. > > Thanks! 
> Adrian > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Jan 31 11:59:09 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 31 Jan 2014 16:59:09 +0000 Subject: [infinispan-dev] Kyro performance (Was: reusing infinispan's marshalling) Message-ID: Changing the subject, as Adrian will need a reply to his (more important) question. I don't think we should go shopping for different marshaller implementations, especially given other priorities. I've been keeping an eye on Kryo for a while and it looks very good indeed, but JBMarshaller is serving us pretty well and I'm loving its reliability. If we need more speed in this area, I'd rather see us perform some very accurate benchmark development and try to understand why Kryo is faster than JBM (if it really is), and potentially improve JBM. For example as I've already suggested, it's using an internal identityMap to detect graphs, and often we might not need that, or also it would be nice to refactor it to write to an existing byte stream rather than having it allocate internal buffers, and finally we might want a "stateless edition" so as to get rid of the need for pooling of JBMar instances. -- Sanne On 31 January 2014 16:29, Vladimir Blagojevic wrote: > Not 100% related to what you are asking about but have a look at this > post and the discussion that "erupted": > > http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html > > Vladimir > On 1/30/2014, 7:13 AM, Adrian Nistor wrote: >> Hi list! >> >> I've been pondering about re-using the marshalling machinery of >> Infinispan in another project, specifically in ProtoStream, where I'm >> planning to add it as a test scoped dependency so I can create a >> benchmark to compare marshalling performance.
I'm basically interested >> in comparing ProtoStream and Infinispan's JBoss Marshalling based >> mechanism. Comparing against plain JBMAR, without using the >> ExternalizerTable and Externalizers introduced by Infinispan is not >> going to get me accurate results. >> >> But how? I see the marshaling is spread across infinispan-commons and >> infinispan-core modules. >> >> Thanks! >> Adrian >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From anistor at redhat.com Fri Jan 31 13:05:14 2014 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 31 Jan 2014 20:05:14 +0200 Subject: [infinispan-dev] reusing infinispan's marshalling In-Reply-To: <52EBCF7D.2030707@redhat.com> References: <52EA41E0.2010505@redhat.com> <52EBCF7D.2030707@redhat.com> Message-ID: <52EBE5DA.9030106@redhat.com> Thanks Vladimir! It's a really fun and interesting discussion going on there :) On 01/31/2014 06:29 PM, Vladimir Blagojevic wrote: > Not 100% related to what you are asking about but have a look at this > post and the discussion that "erupted": > > http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html > > Vladimir > On 1/30/2014, 7:13 AM, Adrian Nistor wrote: >> Hi list! >> >> I've been pondering about re-using the marshalling machinery of >> Infinispan in another project, specifically in ProtoStream, where I'm >> planning to add it as a test scoped dependency so I can create a >> benchmark to compare marshalling performace. I'm basically interested >> in comparing ProtoStream and Infinispan's JBoss Marshalling based >> mechanism. 
Comparing against plain JBMAR, without using the >> ExternalizerTable and Externalizers introduced by Infinispan is not >> going to get me accurate results. >> >> But how? I see the marshalling is spread across infinispan-commons and >> infinispan-core modules. >> >> Thanks! >> Adrian >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From anistor at redhat.com Fri Jan 31 13:22:57 2014 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 31 Jan 2014 20:22:57 +0200 Subject: [infinispan-dev] Kyro performance (Was: reusing infinispan's marshalling) In-Reply-To: References: Message-ID: <52EBEA01.60209@redhat.com> Indeed, I'm not looking for a JBMAR replacement, just trying to create a comparative benchmark between it and Protobuf/ProtoStream. I'm trying to make an apples-to-apples comparison by marshalling the same domain model with both libraries. My (incipient) test currently indicates a 2x better write perf and 3x better read perf with ProtoStream but I'm sure it is flawed because I do not have custom JBMAR externalizers for my entities so I suspect it is basically resorting to plain old serialization. Was hoping to reuse this part from infinispan but it seems to be very tied to core. Need to dig deeper into that awesome jbmar user guide :) On 01/31/2014 06:59 PM, Sanne Grinovero wrote: > Changing the subject, as Adrian will need a reply to his (more > important) question. > > I don't think we should go shopping for different marshaller > implementations, especially given other priorities. > > I've been keeping an eye on Kryo for a while and it looks very good > indeed, but JBMarshaller is serving us pretty well and I'm loving its > reliability.
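The comparative benchmark Adrian describes can be sketched as a plain round-trip harness. Here java.io serialization stands in for the marshaller under test (JBMAR or ProtoStream would plug in behind the same marshal/unmarshal pair); the warm-up loop is there so JIT compilation does not skew the timing. Names are illustrative, not from either library:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal marshalling micro-benchmark harness sketch.
public class MarshallingBench {

    static byte[] marshal(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    static Object unmarshal(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    // Time N full round-trips after a warm-up pass.
    static long roundTripNanos(Serializable payload, int iterations) throws Exception {
        for (int i = 0; i < 1_000; i++) {
            unmarshal(marshal(payload)); // warm-up so the JIT settles
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            unmarshal(marshal(payload));
        }
        return System.nanoTime() - start;
    }
}
```

Running the same payload through two implementations of the marshal/unmarshal pair gives the apples-to-apples numbers the thread is after, without either library's setup costs leaking into the measured loop.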
> > If we need more speed in this area, I'd rather see us perform some > very accurate benchmark development and try to understand why Kyro is > faster than JBM (if it really is), and potentially improve JBM. > For example as I've already suggested, it's using an internal > identityMap to detect graphs, and often we might not need that, or > also it would be nice to refactor it to write to an existing byte > stream rather than having it allocate internal buffers, and finally we > might want a "stateless edition" so to get rid of need for pooling of > JBMar instances. > > -- Sanne > > > > On 31 January 2014 16:29, Vladimir Blagojevic wrote: >> Not 100% related to what you are asking about but have a look at this >> post and the discussion that "erupted": >> >> http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html >> >> Vladimir >> On 1/30/2014, 7:13 AM, Adrian Nistor wrote: >>> Hi list! >>> >>> I've been pondering about re-using the marshalling machinery of >>> Infinispan in another project, specifically in ProtoStream, where I'm >>> planning to add it as a test scoped dependency so I can create a >>> benchmark to compare marshalling performace. I'm basically interested >>> in comparing ProtoStream and Infinispan's JBoss Marshalling based >>> mechanism. Comparing against plain JBMAR, without using the >>> ExternalizerTable and Externalizers introduced by Infinispan is not >>> going to get me accurate results. >>> >>> But how? I see the marshaling is spread across infinispan-commons and >>> infinispan-core modules. >>> >>> Thanks! 
>>> Adrian >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev