From ttarrant at redhat.com Mon Aug 3 10:39:24 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 03 Aug 2015 16:39:24 +0200 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-08-03 Message-ID: <55BF7D1C.3020500@redhat.com> Hi all, the minutes for this week's IRC meeting are at http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-08-03-14.00.log.html Enjoy Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat
From galder at redhat.com Wed Aug 5 04:48:41 2015 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 5 Aug 2015 04:48:41 -0400 (EDT) Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: <55B64E6C.6020706@redhat.com> References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> Message-ID: <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> Indeed, JCache, MR and DistExec assume you'll be given a fully fledged Cache instance that allows them to do things that go beyond the basics, so as correctly pointed out here, it's hard to make the distinction purely based on the configuration. My gut feeling is that we need a way to specifically build a simple/basic cache directly based on your use case. With existing usages out there, you can't simply get a simple/basic cache just like that, since a lot of the existing use cases expect to be able to use advanced features. An easy solution, as hinted by Radim, would be to have a wrapper for a simple/basic cache, which takes a standard Cache in but doesn't go as far as allowing dynamic switching. E.g. if you choose to build a simple/basic cache, then things like adding an interceptor would fail...etc. I think this would work well for scenarios such as 2LC, where we can control how the cache to be used is constructed. However, in scenarios where we expect it to work magically with existing code, it wouldn't work, due to the need to know about the wrapper. Cheers, -- Galder Zamarreño Infinispan, Red Hat ----- Original Message ----- > There's one glitch that needs to be stressed: some limitations of > simplified cache are not discoverable on creation time. While > persistence, tx and others are, adding custom interceptors and running > map-reduce or distributed-executors can't be guessed when the cache is > created. > I could (theoretically) implement MR and DistExec, but never the custom > interceptors: the idea of simple cache is that there are *no > interceptors*. And regrettably, this is not as rare case as I have > initially assumed, as for example JCaches grab any cache, insert their > interceptor and provide the wrapper. > > One way to go would be to not return the simple cache directly, but wrap > it in a delegating cache that would switch the implementation on the fly > as soon as someone tries to play with interceptors. However, this is not > without cost - the delegate would have to read a volatile field and > execute megamorphic call upon every cache operation. Applications could > get around that by doing instanceof and calling unwrap method during > initialization, but it's not really elegant solution. > > I wanted the choice transparent to the user from the beginning, but it's > not a way to go without penalties. > > For those who will suggest 'just a flag on local cache': Following the > 'less configuration, not more' I believe that the amount of > runtime-prohibited configurations should be kept at minimum.
With such > flag, we would expand the state space of configuration 2 times, while > 95% of the configurations would be illegal. That's why I have rather > used new cache mode than adding a flag. > > Radim > > On 07/27/2015 04:41 PM, Tristan Tarrant wrote: > > Hi all, > > > > I wanted to bring attention to some discussion that has happened in the > > context of Radim's work on simplified code for specific cache types [1]. > > > > In particular, Radim proposes adding explicit configuration options > > (i.e. a new simple-cache cache type) to the programmatic/declarative API > > to ensure that a user is aware of the limitations of the resulting cache > > type (no interceptors, no persistence, no tx, etc). > > > > My opinion is that we should aim for "less" configuration and not > > "more", and that optimizations such as these should get enabled > > implicitly when the parameters allow it: if the configuration code > > detects it can use a "simple" cache. > > > > Also, this choice should happen at cache construction time, and not > > dynamically at cache usage time. > > > > WDYT ? > > > > Tristan > > > > [1] https://github.com/infinispan/infinispan/pull/3577 > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev >
From galder at redhat.com Wed Aug 5 05:06:44 2015 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 5 Aug 2015 05:06:44 -0400 (EDT) Subject: [infinispan-dev] Question about Hibernate ORM 5.0 + Infinispan 8.0... In-Reply-To: References: <55A501B9.7060608@redhat.com> <55B91FE3.70008@redhat.com> <55B92E55.9030709@redhat.com> <55B93E87.2060604@redhat.com> Message-ID: <1291825164.7071494.1438765604385.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 31 July 2015 at 11:30, Dan Berindei wrote: > > Hi Sanne > > > > Does Hibernate really need to use the Infinispan test helpers? They > > don't really do much unless you run the tests in parallel and you need > > them to be isolated... > > Good point. I don't know why it's using them exactly, but I guess it's > also to reduce thread pools and similar. We should ask Galder. There's not much to say other than Hibernate 2LC uses test helpers to make things easier to test, e.g. create clusters more easily, use test ping, mbeans...etc. > But I thought there was an intention to ultimately suggest Infinispan > end users to use our test helpers for development too? I talked about this at our last F2F; at some point we should create a testkit project with all the helpers...etc, independent of the testing framework used by the user. Right now, both sets of helpers (test framework, and cache/cache manager creation ones) are bundled into one, and there's a fair bit of duplication around. > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev >
From sanne at infinispan.org Wed Aug 5 06:04:42 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 5 Aug 2015 11:04:42 +0100 Subject: [infinispan-dev] Question about Hibernate ORM 5.0 + Infinispan 8.0...
In-Reply-To: <1291825164.7071494.1438765604385.JavaMail.zimbra@redhat.com> References: <55A501B9.7060608@redhat.com> <55B91FE3.70008@redhat.com> <55B92E55.9030709@redhat.com> <55B93E87.2060604@redhat.com> <1291825164.7071494.1438765604385.JavaMail.zimbra@redhat.com> Message-ID: On 5 August 2015 at 10:06, Galder Zamarreno wrote: > ----- Original Message ----- >> On 31 July 2015 at 11:30, Dan Berindei wrote: >> > Hi Sanne >> > >> > Does Hibernate really need to use the Infinispan test helpers? They >> > don't really do much unless you run the tests in parallel and you need >> > them to be isolated... >> >> Good point. I don't know why it's using them exactly, but I guess it's >> also to reduce thread pools and similar. We should ask Galder. > > There's not much to say other than Hibernate 2LC uses test helpers to make things easier to test, e.g. create clusters more easily, use test ping, mbeans...etc. > >> But I thought there was an intention to ultimately suggest Infinispan >> end users to use our test helpers for development too? > > I talked about this at our last F2F; at some point we should create a testkit project with all the helpers...etc, independent of the testing framework used by the user. Right now, both sets of helpers (test framework, and cache/cache manager creation ones) are bundled into one, and there's a fair bit of duplication around. +1 and apparently it would also be useful to provide a somewhat stable API for testing utilities: these are useful for Hibernate but are likely useful for others too. Sanne
From dan.berindei at gmail.com Wed Aug 5 09:37:16 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 5 Aug 2015 16:37:16 +0300 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> Message-ID: Radim's implementation already throws exceptions when the application tries to use unsupported features, such as adding custom interceptors. The question is how to choose the simple cache: a new CacheMode/XML element, an attribute on the local-cache element, or reusing the existing configuration to figure out whether the user needs advanced features. Radim's implementation uses a new CacheMode and a new "simple-cache" XML element. I feel this makes it too visible, since it's based on what we can do now without an interceptor stack, and that might change in the future. I'm in the "new local-cache attribute" camp, because the programmatic configuration has to validate all those impossible configurations anyway. In the UI as well, when a user tries to create a cache with a store, I think it's better to tell him explicitly that he can't add a store to a simple cache, than let him wonder why there isn't any option to add a store in Infinispan. I don't really like the idea of switching the cache implementation dynamically, either. From the JIT's point of view, I think a call site in an application is likely to always use the same kind of cache, so the call will be monomorphic most of the time. But as a user, I'd rather have something that's constantly slow than something that's initially fast and then suddenly gets slower without me knowing why.
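To make the trade-off concrete, here is a minimal programmatic sketch of the "attribute on local-cache" option. Note that simpleCache(boolean) is only illustrative of the proposal, not an API that exists at the time of writing, and the commented-out store line shows the kind of validation failure I would expect at build time:

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class SimpleCacheSketch {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            // Proposed attribute: opt in to the optimized, interceptor-less cache.
            builder.simpleCache(true);
            // builder.persistence().addSingleFileStore();
            // ^ should be rejected by validation: a store needs the interceptor
            //   stack, which a simple cache does not have.
            DefaultCacheManager cm = new DefaultCacheManager();
            try {
                cm.defineConfiguration("local-simple", builder.build());
                Cache<String, String> cache = cm.getCache("local-simple");
                cache.put("k", "v"); // plain map-style operations work as usual
            } finally {
                cm.stop();
            }
        }
    }

A nice property of the attribute is that the same validation logic would serve both the declarative and the programmatic path.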
Cheers Dan On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno wrote: > Indeed, JCache, MR and DistExec assume you'll be given a fully fledged Cache instance that allows them to do things that go beyond the basics, so as correctly pointed out here, it's hard to make the distinction purely based on the configuration. > > My gut feeling is that we need a way to specifically build a simple/basic cache directly based on your use case. With existing usages out there, you can't simply get a simple/basic cache just like that since a lot of the existing use cases expect to be able to use advanced features. An easy solution, as hinted by Radim, would be to have a wrapper for a simple/basic cache, which takes a standard Cache in, but don't go as far as to allow dynamic switching. E.g. if you chose to build a simple/basic cache, then things like add interceptor would fail...etc. I think this would work well for scenarios such as 2LC where we can control how the cache to be used is constructed. However, in scenarios where we expect it to work magically with existing code, it'd not work due to the need to know about the wrapper. > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > > ----- Original Message ----- >> There's one glitch that needs to be stressed: some limitations of >> simplified cache are not discoverable on creation time. While >> persistence, tx and others are, adding custom interceptors and running >> map-reduce or distributed-executors can't be guessed when the cache is >> created. >> I could (theoretically) implement MR and DistExec, but never the custom >> interceptors: the idea of simple cache is that there are *no >> interceptors*. And regrettably, this is not as rare case as I have >> initially assumed, as for example JCaches grab any cache, insert their >> interceptor and provide the wrapper. >> >> One way to go would be to not return the simple cache directly, but wrap >> it in a delegating cache that would switch the implementation on the fly >> as soon as someone tries to play with interceptors. However, this is not >> without cost - the delegate would have to read a volatile field and >> execute megamorphic call upon every cache operation. Applications could >> get around that by doing instanceof and calling unwrap method during >> initialization, but it's not really elegant solution. >> >> I wanted the choice transparent to the user from the beginning, but it's >> not a way to go without penalties. >> >> For those who will suggest 'just a flag on local cache': Following the >> 'less configuration, not more' I believe that the amount of >> runtime-prohibited configurations should be kept at minimum. With such >> flag, we would expand the state space of configuration 2 times, while >> 95% of the configurations would be illegal. That's why I have rather >> used new cache mode than adding a flag. >> >> Radim >> >> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: >> > Hi all, >> > >> > I wanted to bring attention to some discussion that has happened in the >> > context of Radim's work on simplified code for specific cache types [1]. >> > >> > In particular, Radim proposes adding explicit configuration options >> > (i.e. a new simple-cache cache type) to the programmatic/declarative API >> > to ensure that a user is aware of the limitations of the resulting cache >> > type (no interceptors, no persistence, no tx, etc). 
>> > >> > My opinion is that we should aim for "less" configuration and not >> > "more", and that optimizations such as these should get enabled >> > implicitly when the parameters allow it: if the configuration code >> > detects it can use a "simple" cache. >> > >> > Also, this choice should happen at cache construction time, and not >> > dynamically at cache usage time. >> > >> > WDYT ? >> > >> > Tristan >> > >> > [1] https://github.com/infinispan/infinispan/pull/3577 >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
From rvansa at redhat.com Wed Aug 5 10:31:23 2015 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 05 Aug 2015 16:31:23 +0200 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> Message-ID: <55C21E3B.3020302@redhat.com> On 08/05/2015 03:37 PM, Dan Berindei wrote: > Radim's implementation already throws exceptions when the application > tries to use unsupported features, such as adding custom interceptors. The > question is how to choose the simple cache: a new CacheMode/XML > element, an attribute on the local-cache element, or reusing the > existing configuration to figure out whether the user needs advanced > features. > > Radim's implementation uses a new CacheMode and a new "simple-cache" > XML element. I feel this makes it too visible, since it's based on > what we can do now without an interceptor stack, and that might change > in the future. > > I'm in the "new local-cache attribute" camp, because the programmatic > configuration has to validate all those impossible configurations > anyway. In the UI as well, when a user tries to create a cache with a > store, I think it's better to tell him explicitly that he can't add a > store to a simple cache, than let him wonder why there isn't any > option to add a store in Infinispan. What UI do you mean? IDE with XSD, or does Infinispan have any tool with Mr. Clippy? Not having a button/configuration element is IMO the _proper_ way to tell the user 'You can't do that', rather than showing a pop-up/throwing an exception with 'Don't press this button, please!'. I admit that an exception with a link to the docs is more _BFU-proof_, though. If users really cared about the schema, there wouldn't be so many threads where they try to copy-paste embedded configuration into the server. The parser error message should be more ironic, like 'Something's wrong. I won't tell you what, but your XSD schema validator will!' > > I don't really like the idea of switching the cache implementation > dynamically, either. From the JIT's point of view, I think a call site > in an application is likely to always use the same kind of cache, so > the call will be monomorphic most of the time. But as a user, I'd > rather have something that's constantly slow than something that's > initially fast and then suddenly gets slower without me knowing why. +1 I was about to write the dynamic switcher, but having consistent performance is a strong argument against that.
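For the record, the rejected switcher would look roughly like the minimal sketch below (the names are made up for illustration, not real Infinispan types): every operation pays the volatile read, and once both implementations have been observed the call site goes megamorphic.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    final class SwitchingCache<K, V> {
        // Starts out as the interceptor-less "simple" implementation.
        private volatile Map<K, V> delegate = new ConcurrentHashMap<>();

        V get(K key) {
            return delegate.get(key); // volatile read on every single operation
        }

        V put(K key, V value) {
            return delegate.put(key, value);
        }

        // Called the first time someone adds an interceptor: copy the data over
        // and route all subsequent operations through the full implementation.
        synchronized void upgrade(Map<K, V> fullImplementation) {
            fullImplementation.putAll(delegate);
            delegate = fullImplementation;
        }
    }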
Radim > > Cheers > Dan > > > > On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno wrote: >> Indeed, JCache, MR and DistExec assume you'll be given a fully fledged Cache instance that allows them to do things that go beyond the basics, so as correctly pointed out here, it's hard to make the distinction purely based on the configuration. >> >> My gut feeling is that we need a way to specifically build a simple/basic cache directly based on your use case. With existing usages out there, you can't simply get a simple/basic cache just like that since a lot of the existing use cases expect to be able to use advanced features. An easy solution, as hinted by Radim, would be to have a wrapper for a simple/basic cache, which takes a standard Cache in, but don't go as far as to allow dynamic switching. E.g. if you chose to build a simple/basic cache, then things like add interceptor would fail...etc. I think this would work well for scenarios such as 2LC where we can control how the cache to be used is constructed. However, in scenarios where we expect it to work magically with existing code, it'd not work due to the need to know about the wrapper. >> >> Cheers, >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> ----- Original Message ----- >>> There's one glitch that needs to be stressed: some limitations of >>> simplified cache are not discoverable on creation time. While >>> persistence, tx and others are, adding custom interceptors and running >>> map-reduce or distributed-executors can't be guessed when the cache is >>> created. >>> I could (theoretically) implement MR and DistExec, but never the custom >>> interceptors: the idea of simple cache is that there are *no >>> interceptors*. And regrettably, this is not as rare case as I have >>> initially assumed, as for example JCaches grab any cache, insert their >>> interceptor and provide the wrapper. >>> >>> One way to go would be to not return the simple cache directly, but wrap >>> it in a delegating cache that would switch the implementation on the fly >>> as soon as someone tries to play with interceptors. However, this is not >>> without cost - the delegate would have to read a volatile field and >>> execute megamorphic call upon every cache operation. Applications could >>> get around that by doing instanceof and calling unwrap method during >>> initialization, but it's not really elegant solution. >>> >>> I wanted the choice transparent to the user from the beginning, but it's >>> not a way to go without penalties. >>> >>> For those who will suggest 'just a flag on local cache': Following the >>> 'less configuration, not more' I believe that the amount of >>> runtime-prohibited configurations should be kept at minimum. With such >>> flag, we would expand the state space of configuration 2 times, while >>> 95% of the configurations would be illegal. That's why I have rather >>> used new cache mode than adding a flag. >>> >>> Radim >>> >>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: >>>> Hi all, >>>> >>>> I wanted to bring attention to some discussion that has happened in the >>>> context of Radim's work on simplified code for specific cache types [1]. >>>> >>>> In particular, Radim proposes adding explicit configuration options >>>> (i.e. a new simple-cache cache type) to the programmatic/declarative API >>>> to ensure that a user is aware of the limitations of the resulting cache >>>> type (no interceptors, no persistence, no tx, etc). 
>>>> >>>> My opinion is that we should aim for "less" configuration and not >>>> "more", and that optimizations such as these should get enabled >>>> implicitly when the parameters allow it: if the configuration code >>>> detects it can use a "simple" cache. >>>> >>>> Also, this choice should happen at cache construction time, and not >>>> dynamically at cache usage time. >>>> >>>> WDYT ? >>>> >>>> Tristan >>>> >>>> [1] https://github.com/infinispan/infinispan/pull/3577 >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team
From rory.odonnell at oracle.com Wed Aug 5 10:52:40 2015 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Wed, 5 Aug 2015 15:52:40 +0100 Subject: [infinispan-dev] Early Access builds for JDK 8u60 b26 and JDK 9 b75 are available on java.net Message-ID: <55C22338.90905@oracle.com> Hi Galder, Early Access build for JDK 8u60 b26 is available on java.net; a summary of the changes is listed here. As we enter the later phases of development for JDK 8u60, please log any show stoppers as soon as possible. Early Access build for JDK 9 b75 is available on java.net; a summary of the changes is listed here. With respect to ongoing JDK 9 development, there are two new Candidate JEPs I'd like to draw your attention to. Firstly, Mark Reinhold has put forward JEP 260: Encapsulate Most Internal APIs to make most of the JDK's internal APIs inaccessible by default but leave a few critical, widely-used internal APIs accessible, until supported replacements exist for all or most of their functionality. You can find the JEP here: http://openjdk.java.net/jeps/260 - and an introductory e-mail and discussion thread on the OpenJDK jigsaw-dev mailing list, starting at http://mail.openjdk.java.net/pipermail/jigsaw-dev/2015-August/004433.html . If you would like to provide additional feedback, please join the jigsaw-dev mailing list, and contribute to the discussion there. Secondly, Mandy Chung has put forward JEP 259: Stack-Walking API to define an efficient standard API for stack walking that allows easy filtering of, and lazy access to, the information in stack traces. You can find the JEP here: http://openjdk.java.net/jeps/259 . If you would like to provide feedback on JEP 259, please join the OpenJDK core-libs-dev mailing list and contribute to the discussion there. Finally, we are looking for feedback via a survey on Java Style Guidelines. Please see here for more details. http://mail.openjdk.java.net/pipermail/discuss/2015-August/003766.html Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150805/12cf7838/attachment.html
From galder at redhat.com Wed Aug 5 12:51:24 2015 From: galder at redhat.com (Galder Zamarreno) Date: Wed, 5 Aug 2015 12:51:24 -0400 (EDT) Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-08-03 In-Reply-To: <55BF7D1C.3020500@redhat.com> References: <55BF7D1C.3020500@redhat.com> Message-ID: <1975031191.7397333.1438793484840.JavaMail.zimbra@redhat.com> Hi all, The week before my vacation I was primarily implementing functional listeners and their tests, as well as discussing extensive feedback from Dan and Will WRT the functional API. Out of that resulted a list of tasks that I'm working on this week, as well as catching up with a fair few emails and other topics, including Radim's Hibernate 2LC work. Cheers, -- Galder Zamarreño Infinispan, Red Hat ----- Original Message ----- > Hi all, > > the minutes for this week's IRC meeting are at > > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-08-03-14.00.log.html > > Enjoy > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev >
From dan.berindei at gmail.com Wed Aug 5 16:24:48 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 5 Aug 2015 16:24:48 +0300 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: <55C21E3B.3020302@redhat.com> References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> <55C21E3B.3020302@redhat.com> Message-ID: On Wed, Aug 5, 2015 at 5:31 PM, Radim Vansa wrote: > On 08/05/2015 03:37 PM, Dan Berindei wrote: >> Radim's implementation already throws exceptions when the application >> tries to use unsupported features, such as adding custom interceptors. The >> question is how to choose the simple cache: a new CacheMode/XML >> element, an attribute on the local-cache element, or reusing the >> existing configuration to figure out whether the user needs advanced >> features. >> >> Radim's implementation uses a new CacheMode and a new "simple-cache" >> XML element. I feel this makes it too visible, since it's based on >> what we can do now without an interceptor stack, and that might change >> in the future. >> >> I'm in the "new local-cache attribute" camp, because the programmatic >> configuration has to validate all those impossible configurations >> anyway. In the UI as well, when a user tries to create a cache with a >> store, I think it's better to tell him explicitly that he can't add a >> store to a simple cache, than let him wonder why there isn't any >> option to add a store in Infinispan. What UI do you mean? IDE with XSD, or does Infinispan have any tool with > Mr. Clippy? Not having a button/configuration element is IMO the _proper_ way > to tell the user 'You can't do that', rather than showing a pop-up/throwing > an exception with 'Don't press this button, please!'. I admit that > an exception with a link to the docs is more _BFU-proof_, though. If users really > cared about the schema, there wouldn't be so many threads where they try > to copy-paste embedded configuration into the server. The parser error > message should be more ironic, like 'Something's wrong.
I won't tell you > what, but your XSD schema validator will!' > I admit having only the options that really work in the XSD and relying on the XSD to point out mistakes seems cleaner. My concern is discoverability: the user may be looking for an option that's only available on a local-cache, and there's nothing telling them to replace simple-cache with local-cache. >> >> I don't really like the idea of switching the cache implementation >> dynamically, either. From the JIT's point of view, I think a call site >> in an application is likely to always use the same kind of cache, so >> the call will be monomorphic most of the time. But as a user, I'd >> rather have something that's constantly slow than something that's >> initially fast and then suddenly gets slower without me knowing why. > > +1 I was about to write the dynamic switcher, but having consistent > performance is strong argument against that. > > Radim > >> >> Cheers >> Dan >> >> >> >> On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno wrote: >>> Indeed, JCache, MR and DistExec assume you'll be given a fully fledged Cache instance that allows them to do things that go beyond the basics, so as correctly pointed out here, it's hard to make the distinction purely based on the configuration. >>> >>> My gut feeling is that we need a way to specifically build a simple/basic cache directly based on your use case. With existing usages out there, you can't simply get a simple/basic cache just like that since a lot of the existing use cases expect to be able to use advanced features. An easy solution, as hinted by Radim, would be to have a wrapper for a simple/basic cache, which takes a standard Cache in, but don't go as far as to allow dynamic switching. E.g. if you chose to build a simple/basic cache, then things like add interceptor would fail...etc. I think this would work well for scenarios such as 2LC where we can control how the cache to be used is constructed. However, in scenarios where we expect it to work magically with existing code, it'd not work due to the need to know about the wrapper. >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> ----- Original Message ----- >>>> There's one glitch that needs to be stressed: some limitations of >>>> simplified cache are not discoverable on creation time. While >>>> persistence, tx and others are, adding custom interceptors and running >>>> map-reduce or distributed-executors can't be guessed when the cache is >>>> created. >>>> I could (theoretically) implement MR and DistExec, but never the custom >>>> interceptors: the idea of simple cache is that there are *no >>>> interceptors*. And regrettably, this is not as rare case as I have >>>> initially assumed, as for example JCaches grab any cache, insert their >>>> interceptor and provide the wrapper. >>>> >>>> One way to go would be to not return the simple cache directly, but wrap >>>> it in a delegating cache that would switch the implementation on the fly >>>> as soon as someone tries to play with interceptors. However, this is not >>>> without cost - the delegate would have to read a volatile field and >>>> execute megamorphic call upon every cache operation. Applications could >>>> get around that by doing instanceof and calling unwrap method during >>>> initialization, but it's not really elegant solution. >>>> >>>> I wanted the choice transparent to the user from the beginning, but it's >>>> not a way to go without penalties. 
>>>> >>>> For those who will suggest 'just a flag on local cache': Following the >>>> 'less configuration, not more' I believe that the amount of >>>> runtime-prohibited configurations should be kept at minimum. With such >>>> flag, we would expand the state space of configuration 2 times, while >>>> 95% of the configurations would be illegal. That's why I have rather >>>> used new cache mode than adding a flag. >>>> >>>> Radim >>>> >>>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: >>>>> Hi all, >>>>> >>>>> I wanted to bring attention to some discussion that has happened in the >>>>> context of Radim's work on simplified code for specific cache types [1]. >>>>> >>>>> In particular, Radim proposes adding explicit configuration options >>>>> (i.e. a new simple-cache cache type) to the programmatic/declarative API >>>>> to ensure that a user is aware of the limitations of the resulting cache >>>>> type (no interceptors, no persistence, no tx, etc). >>>>> >>>>> My opinion is that we should aim for "less" configuration and not >>>>> "more", and that optimizations such as these should get enabled >>>>> implicitly when the parameters allow it: if the configuration code >>>>> detects it can use a "simple" cache. >>>>> >>>>> Also, this choice should happen at cache construction time, and not >>>>> dynamically at cache usage time. >>>>> >>>>> WDYT ? >>>>> >>>>> Tristan >>>>> >>>>> [1] https://github.com/infinispan/infinispan/pull/3577 >>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Aug 5 17:13:22 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 6 Aug 2015 00:13:22 +0300 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: References: <55ACBA42.5070507@redhat.com> Message-ID: On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero wrote: > On 20 July 2015 at 11:02, Dan Berindei wrote: >> Sanne, I think changing the cache store API is actually the most >> painful part, so we should only do it if we gain a concrete advantage >> from doing it. From a compatibility point of view, implementing a new >> interface vs implementing the same interface with completely different >> methods is just as bad. > > Right, from that perspective it's a quite horrible proposal. > > But I think we can agree that only the "SharedCacheStore" deserves to > be considered an SPI, right? > That's the one people will normally customize to map stuff to other > stores one might have. > > I think it's important that beyond Infinispan 8.0 API's freeze, we can > make any change to the non-shared SPI > without affecting users who implement a custom shared cachestore. 
> > I highly doubt someone will implement a high-performance custom off > heap swap strategy, but if someone does he should contribute it and > will probably need to make integration level changes. > > We probably won't have the time to implement a new super efficient > local-only cachestore to replace the leveldb one, but I'd like to keep > the possibility open to do that beyond 8.0, *especially* without > breaking compatibility for other people. We already have a new super efficient local-only cachestore :) https://github.com/infinispan/infinispan/tree/master/persistence/soft-index > > Sanne > > >> >> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero wrote: >>> +1 for incremental changes.. >>> >>> I'd see the first step as defining two different interfaces; >>> essentially we need to choose two good names. >>> >>> Then we could have both interfaces still implement the same identical >>> methods, but go through each implementation and decide to "mark" it as >>> shared-only or never-shared. >>> >>> That would make it simpler to make concrete change proposals on each >>> of them and start taking some advantage from the split. I think you'll >>> need the two different interfaces to implement the validations you >>> mentioned. >>> >>> For Infinispan 8's goals, I'd be happy enough to keep the >>> "shared-only" interface quite similar to the current one, but mark the >>> never-shared one as a private or experimental SPI to allow ourselves >>> some more flexibility in performance oriented changes. >>> >>> Thanks, >>> Sanne >>> >>> On 20 July 2015 at 10:07, Tristan Tarrant wrote: >>>> Sanne, well written. >>>> Before actually implementing any of the optimizations/changes you >>>> mention, I think the lowest-hanging fruit we should grab now is just to >>>> add checks to all of our cachestores to actually throw an exception when >>>> they are being enabled in unsupported configurations. >>>> >>>> I've created [1] to get us started >>>> >>>> Tristan >>>> >>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>> >>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>> I would like to propose a clear cut separation between our shared and >>>>> non-shared CacheStores, >>>>> in all terms such as: >>>>> - Configuration options >>>>> - Integration contracts (Split the CacheStore SPI) >>>>> - Implementations >>>>> - Terminology, to avoid any further confusion around valid >>>>> configurations and sensible architectures >>>>> >>>>> We have loads of examples of users who get in trouble by configuring >>>>> one incorrectly, but also there are plenty of efficiency improvements >>>>> we could take advantage of by clearly splitting the integration points >>>>> and the implementations in two categories. >>>>> >>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>> Infinispan is able to restore a consistent state after having stopped >>>>> a DIST cluster which passivated into non-shared CacheStore instances, >>>>> or even REPL clusters when they don't shutdown all at the same exact >>>>> time (and "exact same time" is a strange concept at least..). We need >>>>> to clarify the different options, tradeoffs and their consequences.. >>>>> to users and ourselves, as a clearly defined use case will avoid bugs >>>>> and simplify implementations. >>>>> >>>>> # The purpose of each >>>>> I think that people should use a non-shared (local?) CacheStore for >>>>> the sole purpose of expanding to storage capacity of each single >>>>> node.. 
be it because you don't have enough memory at all, or be it >>>>> because you prefer some extra safety margin because either your >>>>> estimates are complex, or maybe because we live in a real world were >>>>> the hashing function might not be perfect in practice. I hope we all >>>>> agree that Infinispan should be able to take such situations with at >>>>> worst a graceful performance degradatation, rather than complain >>>>> sending OOMs to the admin and setting the service on strike. >>>>> >>>>> A Shared CacheStore is useful for very different purposes; primarily >>>>> to implement a Cache on some other service - for example your (single, >>>>> shared) RDBMs, a slow (or expensive) webservice your organization has >>>>> to call frequently, etc.. Or it's useful even as a write-through cache >>>>> on a similar service, maybe internal but not able to handle the high >>>>> variation of load spikes which Infinsipan can handle better. >>>>> Finally, a great use case is to have a consistent backup of all your >>>>> data-grid content, possibly in some "reference" form such as JPA >>>>> mapped entities. >>>>> >>>>> # Benefits of a Non-Shared >>>>> A non-shared CacheStore implementor should be able to take advantage >>>>> of *its purpose*, among the big ones I see: >>>>> - Exclusive usage -> locking of a specific entry can be handled at >>>>> datacontainer level, can simplify quite some internal code. >>>>> - Reliability -> since a clustered node needs to wipe its state at >>>>> reboot (after a crash), it's much simpler to code any such CacheStore >>>>> to avoid any form of disk synch or persistance guarantees. >>>>> - Encoding format -> this can be controlled entirely by Infinispan, >>>>> and no need to take factors like rolling upgrade compatible encodings >>>>> in mind. JBoss Marshalling would be good enough, or some >>>>> implementations might not need to serialize at all. >>>>> >>>>> Our non-shared CacheStore implentation(s) could take advantage of >>>>> lower level more complex code optimisations and interfaces, as users >>>>> would rarely want to customize one of these, while the use case of >>>>> mapping data to a shared service needs a more user friendly SPI so to >>>>> keep it simple to plug in custom stores: custom data formats, custom >>>>> connectors, get some help in implementing concurrency correctly. >>>>> Proper Transaction integration for the CacheStore has been on our >>>>> wishlist for some time too, I suspect that accepting that we have been >>>>> mixing up two different things under a same name so far, would make it >>>>> simpler to implement further improvements such as transactions: the >>>>> way to do such a thing is very different in each of these use cases, >>>>> so it would help at least to implement it on a subset first, or maybe >>>>> only if it turns out there's no need for such things in the context of >>>>> the local-only-dedicated "swapfile". >>>>> >>>>> # Mixed types should be killed >>>>> I'm aware that some of our current implementations _could_ work both as >>>>> shared or non-shared, for example the JDBC or JPACacheStore or the >>>>> Remote Cachestore.. but in most cases it doesn't make much sense. Why >>>>> would you ever want to use the JPACacheStore if not to share data with >>>>> a _shared_ database? >>>>> >>>>> We should take such options away, and by doing so focus on the use >>>>> cases which actually matter and simplify the implementations and >>>>> improve the configuration validations. 
>>>>> >>>>> If ever a compelling storage technology is identified which we'd like to >>>>> offer as an option for both shared or non-shared, I would still >>>>> recommend to make two different implementations, as there certainly are >>>>> different requirements and assumptions when coding such a thing. >>>>> >>>>> Not least, I would very like to see a default local CacheStore: >>>>> picking one for local "emergency swapping" should be a no-brainer for >>>>> users; we could setup one by default and not bother newcomers with >>>>> complex choices. >>>>> >>>>> If we simplify the requirement of such a thing, it should be easy to >>>>> write one on standard Java NIO2 APIs and get rid of the complexities of >>>>> maintaining the native integration with things like LevelDB, not least >>>>> the inefficiency of Java to make such native calls. >>>>> >>>>> Then as a second step, we should attack the other use case: backups; >>>>> from a *purpose driven perspective* I'd then see us revive the Cassandra >>>>> integration; obviously as a shared-only option. >>>>> >>>>> Cheers, >>>>> Sanne >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> -- >>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
From sanne at infinispan.org Wed Aug 5 17:57:36 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 5 Aug 2015 22:57:36 +0100 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: References: <55ACBA42.5070507@redhat.com> Message-ID: I don't doubt Radim's code :) but I'm pretty confident that even that implementation is limited by the constraints of the general-purpose API. For example it seems Bela will soon allow more flexibility in JGroups regarding buffer representations. We need to commit to a stable API for end user integrations (shared cachestore implementors), but we also need to keep options open to soon play with other approaches. That's why I think this separation should be done before Infinispan 8.0.0.Final even if I don't have a concrete proposal for what this other API should look like: I don't presume to be able to anticipate which API exactly will be best, but I think we can all see that we will want to change that. There should be a private internal contract which we can change even in micro versions without concerns of compatibility, so as to allow R&D progress in the most performance sensitive areas w/o this being a problem for integrators and users. Better configuration validations are additional (strong) benefits: we've seen lots of misunderstandings about which CacheStores / configuration combinations are valid.
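As a strawman only: the split could start with two contracts that are method-for-method identical, so that each existing store is simply re-tagged as shared or non-shared. All names below are hypothetical rather than a committed SPI, and the signatures are simplified stand-ins for the real CacheLoader/CacheWriter operations; the non-shared contract is exactly the part we would keep private and free to change.

    // Stable, user-facing contract: custom stores mapping the grid to
    // external systems (RDBMS, web services, backups...).
    interface SharedStore<K, V> {
        void write(K key, V value);
        V load(K key);
        boolean delete(K key);
    }

    // Private/experimental contract for node-local stores (e.g. swap-to-disk);
    // free to change between micro versions for performance work.
    interface NonSharedStore<K, V> {
        void write(K key, V value);
        V load(K key);
        boolean delete(K key);
        // A local store may wipe its state when the node restarts after a
        // crash, so implementations need no durability guarantees.
        void clearOnRestart();
    }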
Thanks, Sanne On 5 August 2015 at 22:13, Dan Berindei wrote: > On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero wrote: >> On 20 July 2015 at 11:02, Dan Berindei wrote: >>> Sanne, I think changing the cache store API is actually the most >>> painful part, so we should only do it if we gain a concrete advantage >>> from doing it. From a compatibility point of view, implementing a new >>> interface vs implementing the same interface with completely different >>> methods is just as bad. >> >> Right, from that perspective it's a quite horrible proposal. >> >> But I think we can agree that only the "SharedCacheStore" deserves to >> be considered an SPI, right? >> That's the one people will normally customize to map stuff to other >> stores one might have. >> >> I think it's important that beyond Infinispan 8.0 API's freeze, we can >> make any change to the non-shared SPI >> without affecting users who implement a custom shared cachestore. >> >> I highly doubt someone will implement a high-performance custom off >> heap swap strategy, but if someone does he should contribute it and >> will probably need to make integration level changes. >> >> We probably won't have the time to implement a new super efficient >> local-only cachestore to replace the leveldb one, but I'd like to keep >> the possibility open to do that beyond 8.0, *especially* without >> breaking compatibility for other people. > > We already have a new super efficient local-only cachestore :) > > https://github.com/infinispan/infinispan/tree/master/persistence/soft-index > > >> >> Sanne >> >> >>> >>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero wrote: >>>> +1 for incremental changes.. >>>> >>>> I'd see the first step as defining two different interfaces; >>>> essentially we need to choose two good names. >>>> >>>> Then we could have both interfaces still implement the same identical >>>> methods, but go through each implementation and decide to "mark" it as >>>> shared-only or never-shared. >>>> >>>> That would make it simpler to make concrete change proposals on each >>>> of them and start taking some advantage from the split. I think you'll >>>> need the two different interfaces to implement the validations you >>>> mentioned. >>>> >>>> For Infinispan 8's goals, I'd be happy enough to keep the >>>> "shared-only" interface quite similar to the current one, but mark the >>>> never-shared one as a private or experimental SPI to allow ourselves >>>> some more flexibility in performance oriented changes. >>>> >>>> Thanks, >>>> Sanne >>>> >>>> On 20 July 2015 at 10:07, Tristan Tarrant wrote: >>>>> Sanne, well written. >>>>> Before actually implementing any of the optimizations/changes you >>>>> mention, I think the lowest-hanging fruit we should grab now is just to >>>>> add checks to all of our cachestores to actually throw an exception when >>>>> they are being enabled in unsupported configurations. 
>>>>> >>>>> I've created [1] to get us started >>>>> >>>>> Tristan >>>>> >>>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>>> >>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>>> I would like to propose a clear cut separation between our shared and >>>>>> non-shared CacheStores, >>>>>> in all terms such as: >>>>>> - Configuration options >>>>>> - Integration contracts (Split the CacheStore SPI) >>>>>> - Implementations >>>>>> - Terminology, to avoid any further confusion around valid >>>>>> configurations and sensible architectures >>>>>> >>>>>> We have loads of examples of users who get in trouble by configuring >>>>>> one incorrectly, but also there are plenty of efficiency improvements >>>>>> we could take advantage of by clearly splitting the integration points >>>>>> and the implementations in two categories. >>>>>> >>>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>>> Infinispan is able to restore a consistent state after having stopped >>>>>> a DIST cluster which passivated into non-shared CacheStore instances, >>>>>> or even REPL clusters when they don't shutdown all at the same exact >>>>>> time (and "exact same time" is a strange concept at least..). We need >>>>>> to clarify the different options, tradeoffs and their consequences.. >>>>>> to users and ourselves, as a clearly defined use case will avoid bugs >>>>>> and simplify implementations. >>>>>> >>>>>> # The purpose of each >>>>>> I think that people should use a non-shared (local?) CacheStore for >>>>>> the sole purpose of expanding to storage capacity of each single >>>>>> node.. be it because you don't have enough memory at all, or be it >>>>>> because you prefer some extra safety margin because either your >>>>>> estimates are complex, or maybe because we live in a real world were >>>>>> the hashing function might not be perfect in practice. I hope we all >>>>>> agree that Infinispan should be able to take such situations with at >>>>>> worst a graceful performance degradatation, rather than complain >>>>>> sending OOMs to the admin and setting the service on strike. >>>>>> >>>>>> A Shared CacheStore is useful for very different purposes; primarily >>>>>> to implement a Cache on some other service - for example your (single, >>>>>> shared) RDBMs, a slow (or expensive) webservice your organization has >>>>>> to call frequently, etc.. Or it's useful even as a write-through cache >>>>>> on a similar service, maybe internal but not able to handle the high >>>>>> variation of load spikes which Infinsipan can handle better. >>>>>> Finally, a great use case is to have a consistent backup of all your >>>>>> data-grid content, possibly in some "reference" form such as JPA >>>>>> mapped entities. >>>>>> >>>>>> # Benefits of a Non-Shared >>>>>> A non-shared CacheStore implementor should be able to take advantage >>>>>> of *its purpose*, among the big ones I see: >>>>>> - Exclusive usage -> locking of a specific entry can be handled at >>>>>> datacontainer level, can simplify quite some internal code. >>>>>> - Reliability -> since a clustered node needs to wipe its state at >>>>>> reboot (after a crash), it's much simpler to code any such CacheStore >>>>>> to avoid any form of disk synch or persistance guarantees. >>>>>> - Encoding format -> this can be controlled entirely by Infinispan, >>>>>> and no need to take factors like rolling upgrade compatible encodings >>>>>> in mind. JBoss Marshalling would be good enough, or some >>>>>> implementations might not need to serialize at all. 
>>>>>> >>>>>> Our non-shared CacheStore implentation(s) could take advantage of >>>>>> lower level more complex code optimisations and interfaces, as users >>>>>> would rarely want to customize one of these, while the use case of >>>>>> mapping data to a shared service needs a more user friendly SPI so to >>>>>> keep it simple to plug in custom stores: custom data formats, custom >>>>>> connectors, get some help in implementing concurrency correctly. >>>>>> Proper Transaction integration for the CacheStore has been on our >>>>>> wishlist for some time too, I suspect that accepting that we have been >>>>>> mixing up two different things under a same name so far, would make it >>>>>> simpler to implement further improvements such as transactions: the >>>>>> way to do such a thing is very different in each of these use cases, >>>>>> so it would help at least to implement it on a subset first, or maybe >>>>>> only if it turns out there's no need for such things in the context of >>>>>> the local-only-dedicated "swapfile". >>>>>> >>>>>> # Mixed types should be killed >>>>>> I'm aware that some of our current implementations _could_ work both as >>>>>> shared or non-shared, for example the JDBC or JPACacheStore or the >>>>>> Remote Cachestore.. but in most cases it doesn't make much sense. Why >>>>>> would you ever want to use the JPACacheStore if not to share data with >>>>>> a _shared_ database? >>>>>> >>>>>> We should take such options away, and by doing so focus on the use >>>>>> cases which actually matter and simplify the implementations and >>>>>> improve the configuration validations. >>>>>> >>>>>> If ever a compelling storage technology is identified which we'd like to >>>>>> offer as an option for both shared or non-shared, I would still >>>>>> recommend to make two different implementations, as there certainly are >>>>>> different requirements and assumptions when coding such a thing. >>>>>> >>>>>> Not least, I would very like to see a default local CacheStore: >>>>>> picking one for local "emergency swapping" should be a no-brainer for >>>>>> users; we could setup one by default and not bother newcomers with >>>>>> complex choices. >>>>>> >>>>>> If we simplify the requirement of such a thing, it should be easy to >>>>>> write one on standard Java NIO2 APIs and get rid of the complexities of >>>>>> maintaining the native integration with things like LevelDB, not least >>>>>> the inefficiency of Java to make such native calls. >>>>>> >>>>>> Then as a second step, we should attack the other use case: backups; >>>>>> from a *purpose driven perspective* I'd then see us revive the Cassandra >>>>>> integration; obviously as a shared-only option. 
>>>>>> >>>>>> Cheers, >>>>>> Sanne >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>> >>>>> -- >>>>> Tristan Tarrant >>>>> Infinispan Lead >>>>> JBoss, a division of Red Hat >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From smarlow at redhat.com Wed Aug 5 19:35:51 2015 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 5 Aug 2015 19:35:51 -0400 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... Message-ID: <55C29DD7.8010000@redhat.com> http://pastebin.com/59X7aXaX shows the "NoSuchMethodError: org.infinispan.AdvancedCache.keySet()Lorg/infinispan/commons/util/CloseableIteratorSet" that we are getting when running the WildFly 10 testsuite with Hibernate ORM 5.0.0.CR4 and Infinispan 8.0.0.Beta2. Suggestions? Scott From mudokonman at gmail.com Wed Aug 5 22:01:29 2015 From: mudokonman at gmail.com (William Burns) Date: Thu, 06 Aug 2015 02:01:29 +0000 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: <55C29DD7.8010000@redhat.com> References: <55C29DD7.8010000@redhat.com> Message-ID: It seems ORM was compiled with a version earlier than Beta1 but then ran with Beta2? The keySet method was changed to return a subclass of CloseableIteratorSet with Beta2 to support distributed streams [1]. [1] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/CacheSet.java - Will On Wed, Aug 5, 2015 at 7:36 PM Scott Marlow wrote: > http://pastebin.com/59X7aXaX shows the "NoSuchMethodError: > > org.infinispan.AdvancedCache.keySet()Lorg/infinispan/commons/util/CloseableIteratorSet" > that we are getting when running the WildFly 10 testsuite with Hibernate > ORM 5.0.0.CR4 and Infinispan 8.0.0.Beta2. > > Suggestions? > > Scott > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150806/6aa75ecc/attachment.html From sanne at infinispan.org Thu Aug 6 04:30:41 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 6 Aug 2015 09:30:41 +0100 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... 
In-Reply-To: References: <55C29DD7.8010000@redhat.com> Message-ID: On 6 August 2015 at 03:01, William Burns wrote: > It seems ORM was compiled with a version earlier than Beta1 but then ran > with Beta2? The keySet method was changed to return a subclass of > CloseableIteratorSet with Beta2 to support distributed streams [1]. Right, that's my same conclusion. Hibernate ORM is not updating to Infinispan 8 as it needs to be released with Infinispan 7; we previously adjusted the Hibernate code to verify that it would work with Infinispan 8 too, but I did that by rebuilding ORM and temporarily changing the property which defines the Infinispan dependency version (no source code changes of course). I proposed a Jenkins job but that didn't seem popular :-/ So it turns out it was source-code compatible but not at runtime; my bad, sorry I didn't think to verify that. But now if we don't get it to work at runtime, that's a problem to get Infinispan 8 in WildFly 10. Scott, I think the most reliable way is to have you set up a WildFly job which tests snapshots of Hibernate and Infinispan of the branches you intend to merge next in WildFly. AFAIK it would also save you some time, as you seem to constantly run such builds? BTW that CacheSet API change looks like it was intended to be backwards compatible? It's not, as we just realised. If you want it to be backwards compatible you'll have to revert that API change. BTW 2 .. when I verified those things it was passing the ORM testsuite, but latest ORM with latest Infinispan now have a lot of test failures. I had proposed to set up a Jenkins job to monitor the combination, but need a volunteer to act on the reports. It turns out it wouldn't even give us all the coverage we need, but it would be a start. Sanne > > [1] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/CacheSet.java > > - Will > > On Wed, Aug 5, 2015 at 7:36 PM Scott Marlow wrote: >> >> http://pastebin.com/59X7aXaX shows the "NoSuchMethodError: >> >> org.infinispan.AdvancedCache.keySet()Lorg/infinispan/commons/util/CloseableIteratorSet" >> that we are getting when running the WildFly 10 testsuite with Hibernate >> ORM 5.0.0.CR4 and Infinispan 8.0.0.Beta2. >> >> Suggestions? >> >> Scott From ttarrant at redhat.com Thu Aug 6 04:31:18 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 06 Aug 2015 10:31:18 +0200 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: References: <55ACBA42.5070507@redhat.com> Message-ID: <55C31B56.8090005@redhat.com> On 05/08/2015 23:57, Sanne Grinovero wrote: > For example it seems Bela will soon allow more flexibility in JGroups > regarding buffer representations. We need to commit on a stable API > for end user integrations (shared cachestore implementors), but we > also need to keep options open to soon play with other approaches. > > That's why I think this separation should be done before Infinispan > 8.0.0.Final even if I don't have a concrete proposal for how this If we don't have a concrete proposal by the end of this week, I think we should forgo this until Infinispan 9 and until we've clearly defined what we need/want.
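For concreteness, the separation under discussion might look roughly like the first step sketched earlier in the thread: two contracts that start out with identical methods, so that each existing store implementation can be marked as one or the other. All type and method names below are hypothetical illustrations, not an existing Infinispan SPI:

    // Hypothetical sketch only: none of these types exist in Infinispan
    // under these names.
    // Public SPI: custom user stores (JDBC, JPA, remote services...)
    // implement this, so it must stay backwards compatible across releases.
    interface SharedCacheStore<K, V> {
        void write(K key, V value);
        V load(K key);
        boolean delete(K key);
    }

    // Private/experimental SPI: identical methods today, but free to change
    // even in micro releases to chase performance (buffers, bulk I/O, etc.).
    interface LocalCacheStore<K, V> {
        void write(K key, V value);
        V load(K key);
        boolean delete(K key);
    }

With two distinct contracts, configuration validation could also reject a local-only store configured as shared (and vice versa) at cache start time, instead of failing later at runtime.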
I am pretty annoyed, and I'm certain our users even more so, by the SPI and configuration changes that have happened over all of our major versions, and I don't want to inflict that pain again, even though we might reap some benefits by redesigning (yet again) the store SPI. The solution I would like to see would be some backward- and forward-compatible way of a store exposing the "capabilities" it supports (e.g. SHARED, TRANSACTIONAL, etc.) so that the PersistenceManager deals with them accordingly. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Thu Aug 6 04:42:31 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 06 Aug 2015 10:42:31 +0200 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: References: <55C29DD7.8010000@redhat.com> Message-ID: <55C31DF7.9040602@redhat.com> On 06/08/2015 10:30, Sanne Grinovero wrote: > On 6 August 2015 at 03:01, William Burns wrote: >> It seems ORM was compiled with a version earlier than Beta1 but then ran >> with Beta2? The keySet method was changed to return a subclass of >> CloseableIteratorSet with Beta2 to support distributed streams [1]. > > BTW that CacheSet API change looks like it was intended to be > backwards compatible? It's not, as we just realised. If you want it to > be backwards compatible you'll have to revert that API change. From the ORM version numbers (CR4) I guess we are near endgame and we MUST ensure that ORM works with both Infinispan 7.x and 8.x. Will, is it possible to make the signature of the method backwards compatible? If this is inconvenient, what can be done in Hibernate Infinispan to insulate it from this change? Still, I'm wondering whether hibernate-infinispan shouldn't be subclassed into multiple versions so that we can be a bit more liberal with some changes. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rvansa at redhat.com Thu Aug 6 04:39:31 2015 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 06 Aug 2015 10:39:31 +0200 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: References: <55ACBA42.5070507@redhat.com> Message-ID: <55C31D43.30106@redhat.com> I understand that shared cache stores will be the more common ones to implement, but I don't think that non-shared stores should be considered a 'private interface'. Separating them would really give us the opportunity to change the non-shared SPI more often if needed, without breaking the shared one. However, hot-gluing in a cool new interface without a reference implementation that supports transactions and solves the ton of issues described in [1] is not a wise move, IMO. And there's no time to implement this before 8.0.0.Final. Radim [1] https://github.com/infinispan/infinispan/wiki/Consistency-guarantees-in-Infinispan On 08/05/2015 11:57 PM, Sanne Grinovero wrote: > I don't doubt Radim's code :) but I'm pretty confident that even that > implementation is limited by the constraints of the general-purpose > API. > > For example it seems Bela will soon allow more flexibility in JGroups > regarding buffer representations. We need to commit on a stable API > for end user integrations (shared cachestore implementors), but we > also need to keep options open to soon play with other approaches.
> > That's why I think this separation should be done before Infinispan > 8.0.0.Final even if I don't have a concrete proposal for how this > other API should look like: I don't presume to be able to anticipate > which API exactly will be best, but I think we can all see that we > will want to change that. There should be a private internal contract > which we can change even in micro versions without concerns of > compatibility, so to allow R&D progress in the most performance > sensitive areas w/o this being a problem for integrators and users. > > Better configuration validations are additional (strong) benefits: > we've seen lots of misunderstandings about which CacheStores / > configuration combinations are valid. > > Thanks, > Sanne > > On 5 August 2015 at 22:13, Dan Berindei wrote: >> On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero wrote: >>> On 20 July 2015 at 11:02, Dan Berindei wrote: >>>> Sanne, I think changing the cache store API is actually the most >>>> painful part, so we should only do it if we gain a concrete advantage >>>> from doing it. From a compatibility point of view, implementing a new >>>> interface vs implementing the same interface with completely different >>>> methods is just as bad. >>> Right, from that perspective it's a quite horrible proposal. >>> >>> But I think we can agree that only the "SharedCacheStore" deserves to >>> be considered an SPI, right? >>> That's the one people will normally customize to map stuff to other >>> stores one might have. >>> >>> I think it's important that beyond Infinispan 8.0 API's freeze, we can >>> make any change to the non-shared SPI >>> without affecting users who implement a custom shared cachestore. >>> >>> I highly doubt someone will implement a high-performance custom off >>> heap swap strategy, but if someone does he should contribute it and >>> will probably need to make integration level changes. >>> >>> We probably won't have the time to implement a new super efficient >>> local-only cachestore to replace the leveldb one, but I'd like to keep >>> the possibility open to do that beyond 8.0, *especially* without >>> breaking compatibility for other people. >> We already have a new super efficient local-only cachestore :) >> >> https://github.com/infinispan/infinispan/tree/master/persistence/soft-index >> >> >>> Sanne >>> >>> >>>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero wrote: >>>>> +1 for incremental changes.. >>>>> >>>>> I'd see the first step as defining two different interfaces; >>>>> essentially we need to choose two good names. >>>>> >>>>> Then we could have both interfaces still implement the same identical >>>>> methods, but go through each implementation and decide to "mark" it as >>>>> shared-only or never-shared. >>>>> >>>>> That would make it simpler to make concrete change proposals on each >>>>> of them and start taking some advantage from the split. I think you'll >>>>> need the two different interfaces to implement the validations you >>>>> mentioned. >>>>> >>>>> For Infinispan 8's goals, I'd be happy enough to keep the >>>>> "shared-only" interface quite similar to the current one, but mark the >>>>> never-shared one as a private or experimental SPI to allow ourselves >>>>> some more flexibility in performance oriented changes. >>>>> >>>>> Thanks, >>>>> Sanne >>>>> >>>>> On 20 July 2015 at 10:07, Tristan Tarrant wrote: >>>>>> Sanne, well written. 
>>>>>> Before actually implementing any of the optimizations/changes you >>>>>> mention, I think the lowest-hanging fruit we should grab now is just to >>>>>> add checks to all of our cachestores to actually throw an exception when >>>>>> they are being enabled in unsupported configurations. >>>>>> >>>>>> I've created [1] to get us started >>>>>> >>>>>> Tristan >>>>>> >>>>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>>>> >>>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>>>> I would like to propose a clear cut separation between our shared and >>>>>>> non-shared CacheStores, >>>>>>> in all terms such as: >>>>>>> - Configuration options >>>>>>> - Integration contracts (Split the CacheStore SPI) >>>>>>> - Implementations >>>>>>> - Terminology, to avoid any further confusion around valid >>>>>>> configurations and sensible architectures >>>>>>> >>>>>>> We have loads of examples of users who get in trouble by configuring >>>>>>> one incorrectly, but also there are plenty of efficiency improvements >>>>>>> we could take advantage of by clearly splitting the integration points >>>>>>> and the implementations in two categories. >>>>>>> >>>>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>>>> Infinispan is able to restore a consistent state after having stopped >>>>>>> a DIST cluster which passivated into non-shared CacheStore instances, >>>>>>> or even REPL clusters when they don't shutdown all at the same exact >>>>>>> time (and "exact same time" is a strange concept at least..). We need >>>>>>> to clarify the different options, tradeoffs and their consequences.. >>>>>>> to users and ourselves, as a clearly defined use case will avoid bugs >>>>>>> and simplify implementations. >>>>>>> >>>>>>> # The purpose of each >>>>>>> I think that people should use a non-shared (local?) CacheStore for >>>>>>> the sole purpose of expanding the storage capacity of each single >>>>>>> node.. be it because you don't have enough memory at all, or be it >>>>>>> because you prefer some extra safety margin because either your >>>>>>> estimates are complex, or maybe because we live in a real world where >>>>>>> the hashing function might not be perfect in practice. I hope we all >>>>>>> agree that Infinispan should be able to take such situations with at >>>>>>> worst a graceful performance degradation, rather than complain >>>>>>> sending OOMs to the admin and setting the service on strike. >>>>>>> >>>>>>> A Shared CacheStore is useful for very different purposes; primarily >>>>>>> to implement a Cache on some other service - for example your (single, >>>>>>> shared) RDBMs, a slow (or expensive) webservice your organization has >>>>>>> to call frequently, etc.. Or it's useful even as a write-through cache >>>>>>> on a similar service, maybe internal but not able to handle the high >>>>>>> variation of load spikes which Infinispan can handle better. >>>>>>> Finally, a great use case is to have a consistent backup of all your >>>>>>> data-grid content, possibly in some "reference" form such as JPA >>>>>>> mapped entities. >>>>>>> >>>>>>> # Benefits of a Non-Shared >>>>>>> A non-shared CacheStore implementor should be able to take advantage >>>>>>> of *its purpose*, among the big ones I see: >>>>>>> - Exclusive usage -> locking of a specific entry can be handled at >>>>>>> datacontainer level, can simplify quite some internal code. >>>>>>> - Reliability -> since a clustered node needs to wipe its state at >>>>>>> reboot (after a crash), it's much simpler to code any such CacheStore >>>>>>> to avoid any form of disk synch or persistence guarantees. >>>>>>> - Encoding format -> this can be controlled entirely by Infinispan, >>>>>>> and no need to take factors like rolling upgrade compatible encodings >>>>>>> into account. JBoss Marshalling would be good enough, or some >>>>>>> implementations might not need to serialize at all. >>>>>>> [...] >>>>>>> >>>>>>> Cheers, >>>>>>> Sanne >>>>>> -- >>>>>> Tristan Tarrant >>>>>> Infinispan Lead >>>>>> JBoss, a division of Red Hat -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Thu Aug 6 06:51:07 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 6 Aug 2015 11:51:07 +0100 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: <55C31DF7.9040602@redhat.com> References: <55C29DD7.8010000@redhat.com> <55C31DF7.9040602@redhat.com> Message-ID: On 6 August 2015 at 09:42, Tristan Tarrant wrote: > > > On 06/08/2015 10:30, Sanne Grinovero wrote: >> On 6 August 2015 at 03:01, William Burns wrote: >>> It seems ORM was compiled with a version earlier than Beta1 but then ran >>> with Beta2? The keySet method was changed to return a subclass of >>> CloseableIteratorSet with Beta2 to support distributed streams [1]. >> >> BTW that CacheSet API change looks like it was intended to be >> backwards compatible? It's not, as we just realised. If you want it to >> be backwards compatible you'll have to revert that API change. > > From the ORM version numbers (CR4) I guess we are near endgame and we > MUST ensure that ORM works with both Infinispan 7.x and 8.x. > Will, is it possible to make the signature of the method backwards > compatible ? > If this is inconvenient, what can be done in Hibernate Infinispan to > insulate it from this change ? > > Still, I'm wondering whether hibernate-infinispan shouldn't be > subclassed into multiple versions so that we can be a bit more liberal > with some changes. It has always been maintained by the Infinispan team. You can do that, or you could even move the sources to the Infinispan project so you get to pick the versions; that would solve the current version entanglement but I wonder if we'd end up being stuck in the opposite but similar situation.. just something to consider. In particular the upgrade to Hibernate ORM 5 did change the Caching SPI, but that's not happening often at all. It looks like Infinispan moves much quicker, so owning the module in Infinispan might make these issues less frequent.
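To make the failure mode in this thread concrete: keySet() kept its name but narrowed its declared return type (CloseableIteratorSet became the new CacheSet subtype). That is source-compatible but not binary-compatible, because compiled callers link against the exact return type recorded in the method descriptor. A minimal illustration, using hypothetical stand-in types rather than the real Infinispan classes:

    // Hypothetical stand-ins, to show the mechanism only.
    interface OldSet<E> { }                    // plays CloseableIteratorSet
    interface NewSet<E> extends OldSet<E> { }  // plays CacheSet

    class CacheBeta1 {                         // the cache class as of "Beta1"
        OldSet<String> keySet() { return null; }
    }

    class CacheBeta2 {                         // the same class as of "Beta2"
        NewSet<String> keySet() { return null; }
    }

    // A caller compiled against the Beta1 version embeds the descriptor
    //     keySet()LOldSet;
    // in its bytecode. Resolving that against the Beta2 class, which only
    // declares keySet()LNewSet;, fails with NoSuchMethodError -- the
    // AdvancedCache.keySet() error Scott reported. Recompiling the caller
    // makes javac emit the new descriptor, which is why the break only
    // shows up at runtime, never at compile time.

A per-version static helper with a frozen declared return type (as proposed later in this thread) avoids this for the same reason: each Infinispan version ships its own copy compiled against its own keySet(), while the helper's signature never changes, so callers keep linking against an unchanged descriptor.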
On a different note: it turns out it's quite easy for Gradle to compile vs Infinispan 7 but test vs Infinispan 8, so I've set up such a job now to trigger on each Hibernate commit, but it will depend on a hardcoded version Infinispan 8.0.0.Beta2 because that's all we have on Nexus. You can check out Hibernate ORM's master and build it with: ./gradlew clean :hibernate-infinispan:test -PoverrideInfinispanVersionForTesting=8.0.0.Beta2 to have it run the Infinispan integration tests only. Or replace the property value to test vs a different Infinispan version: '8.0.0.Beta1' works fine, '8.0.0.Beta2' produces quite some failures; we'll probably switch to use '8.0.0-SNAPSHOT' on ci.hibernate.org when Sebastian gets Infinispan to upload nightly builds, but ideally you should get similar jobs set up on ci.infinispan.org. HTH Sanne From smarlow at redhat.com Thu Aug 6 08:33:55 2015 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 6 Aug 2015 08:33:55 -0400 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: References: <55C29DD7.8010000@redhat.com> Message-ID: <55C35433.3060807@redhat.com> > > Scott, I think the most reliable way is to have you set up a WildFly > job which tests snapshots of Hibernate and Infinispan of the branches > you intend to merge next in WildFly. AFAIK it would also save you some > time, as you seem to constantly run such builds? I think that we talked about doing that a few years ago. I set up a CI job that built Hibernate master, Infinispan master and tried to build WildFly with the two but it didn't work. One of the problems was that WildFly wouldn't build without code/configuration changes to adjust for the Infinispan (master) changes that had been made since the version of Infinispan that was integrated. https://github.com/wildfly/wildfly/pull/7896 is the pull request for upgrading WildFly 10 to use Infinispan 8. I'm not sure how much of it is optional but I'm guessing a lot of the changes are needed for WildFly to work at all with Infinispan 8. In order for this testing to work, Infinispan would need to maintain compatibility (e.g. configuration/api/spi) for at least one major version back. I'm not sure if there are any other problems to overcome before doing WildFly/Hibernate/Infinispan testing. I suppose another problem is that when we integrate the latest WildFly master with the latest master branches for Infinispan/Hibernate, we will likely see failures as new features are being developed (not everything works on day one of a new major release cycle). It's probably closer to the end of a development cycle (when all three development teams are ready to be spammed that they have integration bugs that need to be fixed). But, if we all agree that we need to keep integration working from the start of a new major release cycle, this could work.
In-Reply-To: <55C31DF7.9040602@redhat.com> References: <55C29DD7.8010000@redhat.com> <55C31DF7.9040602@redhat.com> Message-ID: On Thu, Aug 6, 2015 at 4:42 AM Tristan Tarrant wrote: > > > On 06/08/2015 10:30, Sanne Grinovero wrote: > > On 6 August 2015 at 03:01, William Burns wrote: > >> It seems ORM was compiled with a version earlier than Beta1 but then ran > >> with Beta2? The keySet method was changed to return a subclass of > >> CloseableIteratorSet with Beta2 to support distributed streams [1]. > > > > BTW that CacheSet API change looks like it was intended to be > > backwards compatible? It's not, as we just realised. If you want it to > > be backwards compatible you'll have to revert that API change. > > From the ORM version numbers (CR4) I guess we are near endgame and we > MUST ensure that ORM works with both Infinispan 7.x and 8.x. > Will, is it possible to make the signature of the method backwards > compatible ? > The method is backwards compatible when compiling and running against ISPN 8. I don't think there is any way here to make this backwards compatible changing only ISPN with varying versions at runtime without adding new methods in ISPN 8 which are identical to keySet, values and entrySet except that their return types are defined as the new CacheSet/CacheCollection interfaces. The underlying implementation for the duplicates would be identical, however. Without those new methods we wouldn't have access to the more advanced options when using distributed streams unless we manually cast. This doesn't sound too appealing to me. > If this is inconvenient, what can be done in Hibernate Infinispan to > insulate it from this change ? > This code is essentially doing a clear, but can this code live with the semantics of clear [1] that are the same between ISPN 7 & 8? I am guessing not. We could add a utility method in ISPN 7 & 8 that is defined as returning a CloseableIteratorSet/Collection and that would be identical in both versions, along the lines of [2]. Then if ORM calls the appropriate static method instead of keySet, values, entrySet it would be backwards runtime compatible as well. > > Still, I'm wondering whether hibernate-infinispan shouldn't be > subclassed into multiple versions so that we can be a bit more liberal > with some changes. > This does hamper changes quite a bit. This change was pretty minor; I can't imagine if they were using some class/method that we are removing in ISPN 8. > > Tristan > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat [1] https://github.com/infinispan/infinispan/blob/7.2.x/core/src/main/java/org/infinispan/Cache.java#L329 [2] https://gist.github.com/wburns/c3d3d95483d35be4b8c6 From sanne at infinispan.org Thu Aug 6 14:49:20 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 6 Aug 2015 19:49:20 +0100 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2...
In-Reply-To: <55C35433.3060807@redhat.com> References: <55C29DD7.8010000@redhat.com> <55C35433.3060807@redhat.com> Message-ID: Hi Scott, On 6 August 2015 at 13:33, Scott Marlow wrote: >> >> Scott, I think the most reliable way is to have you set up a WildFly >> job which tests snapshots of Hibernate and Infinispan of the branches >> you intend to merge next in WildFly. AFAIK it would also save you some >> time, as you seem to constantly run such builds? > > I think that we talked about doing that a few years ago. I set up a CI > job that built Hibernate master, Infinispan master and tried to build > WildFly with the two but it didn't work. One of the problems was that > WildFly wouldn't build without code/configuration changes to adjust for > the Infinispan (master) changes that had been made since the version of > Infinispan that was integrated. > https://github.com/wildfly/wildfly/pull/7896 is the pull request for > upgrading WildFly 10 to use Infinispan 8. I'm not sure how much of it > is optional but I'm guessing a lot of the changes are needed for WildFly > to work at all with Infinispan 8. I agree, that would be horrible. The two are different independent projects, and hoping that the two masters work flawlessly with each other every day would be a foolish assumption: that's *not* what I suggested. What I suggested is that you have a job running which monitors the two branches which you're planning to get working together, and ideally within WildFly too. In this specific case, they happen to be {master} and {master} of each project, but that's a coincidence as that's the current aim: we intentionally want these two versions to work together in the same platform. And things seemed to work a week ago... what was really disappointing today was to figure out that there was a regression *after* we tagged the ORM release. When we reach such a point in which it looks like they are converging fine, that's when we need to set up such a WildFly CI job: if only we had known yesterday before the release, you'd be merging them both in WildFly today. Or someone could have run a manual test, but this could be done by a bot. > In order for this testing to work, Infinispan would need to maintain > compatibility (e.g. configuration/api/spi) for at least one major > version back. I'm not sure if there are any other problems to overcome > before doing WildFly/Hibernate/Infinispan testing. Both projects need to be able to freely evolve between major versions. But occasionally, someone from the WildFly team is going to start planning and pick versions from these projects - not necessarily the latest - at that point you start testing and see if the choice is valid, request integration fixes, and should also reconfigure the "convergence jobs". It sounds like you spend a lot of time testing such integrations, so we should automate those builds and make sure we get preventive reports: saves a lot of critical time. > I suppose another problem is that when we integrate the latest WildFly > master with the latest master branches for Infinispan/Hibernate, we will > likely see failures as new features are being developed (not everything > works on day one of a new major release cycle). It's probably closer to > the end of a development cycle (when all three development teams are > ready to be spammed that they have integration bugs that need to be > fixed). But, if we all agree that we need to keep integration working > from the start of a new major release cycle, this could work. We also > need to sign off on keeping compatibility as mentioned above. > > Does Infinispan want 8.0 to be fully (configuration/API/SPI) compatible > with 7.x? How about 9.0 with 8.0? No, those are obviously non-viable options; for example I think the specific change in Infinispan which caused all this trouble is perfectly fine: there are lots of things which can be done, the problem is just that we're notified of these problems very late in the game. Both projects need to experiment and evolve without expensive constraints; we only need specifically selected branches to work together. Thanks, Sanne > > Scott From rvansa at redhat.com Thu Aug 6 10:50:40 2015 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 06 Aug 2015 16:50:40 +0200 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: References: <55C29DD7.8010000@redhat.com> <55C31DF7.9040602@redhat.com> Message-ID: <55C37440.7010700@redhat.com> I have already solved that using filterEntries [1], but that's not the end of the pain. Another change is ISPN removing ExtendedModuleCommandsFactory - I could not get around that with the Infinispan snapshot. And if Infinispan does not keep a fixed API even between major versions, these attempts to sync it seem quite futile. Radim [1] https://github.com/hibernate/hibernate-orm/pull/1045 On 08/06/2015 04:17 PM, William Burns wrote: > [...] From mudokonman at gmail.com Fri Aug 7 00:04:37 2015 From: mudokonman at gmail.com (William Burns) Date: Fri, 07 Aug 2015 04:04:37 +0000 Subject: [infinispan-dev] Hibernate ORM 5.0.0.CR4 not working so well with Infinispan 8.0.0.Beta2... In-Reply-To: <55C37440.7010700@redhat.com> References: <55C29DD7.8010000@redhat.com> <55C31DF7.9040602@redhat.com> <55C37440.7010700@redhat.com> Message-ID: On Thu, Aug 6, 2015 at 10:39 PM Radim Vansa wrote: > I have already solved that using filterEntries [1], but that's not the end > of the pain. Another change is ISPN removing > ExtendedModuleCommandsFactory - I could not get around that with the > Infinispan snapshot. And if Infinispan does not keep a fixed API even > between major versions, these attempts to sync it seem quite futile. > I integrated the module command factory change in ISPN master earlier today. I was able to run the infinispan hibernate tests and they worked for me with your PR applied. Is there some other thing I needed to run? And since we know to watch out for the binary compatibility, we can make sure it stays that way for the 8.0 Final release. > [...] From rvansa at redhat.com Fri Aug 7 05:04:13 2015 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 07 Aug 2015 11:04:13 +0200 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> <55C21E3B.3020302@redhat.com> Message-ID: <55C4748D.30005@redhat.com> It seems that I am outnumbered by the 'new local-cache attribute' camp (though not convinced!).
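(For reference, the "attribute on the local cache" variant being argued for would presumably be configured along these lines; a hypothetical sketch, since the final API had not been settled at this point in the thread:)

    // Hypothetical sketch; names and defaults were still being decided.
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class SimpleCacheConfigSketch {
        public static Configuration simpleLocalCache() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.simpleCache(true); // the proposed local-cache attribute, explicit opt-in
            // Combinations a simple cache cannot support would be rejected
            // when the configuration is built, rather than at runtime, e.g.:
            // builder.persistence().addSingleFileStore(); // -> validation error
            return builder.build();
        }
    }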
If there is not any other input on this topic, I'll migrate that to local attribute, since I want to squeeze simple cache to 8.0.0.Final (That attribute will need to be explicitly set, I will not implement any hot-switch) Radim On 08/05/2015 10:24 PM, Dan Berindei wrote: > On Wed, Aug 5, 2015 at 5:31 PM, Radim Vansa wrote: >> On 08/05/2015 03:37 PM, Dan Berindei wrote: >>> Radim's implementation already throws exceptions when the application >>> tries to use unsupported features like throwing exceptions. The >>> question is how to choose the simple cache: a new CacheMode/XML >>> element, an attribute on the local-cache element, or reusing the >>> existing configuration to figure out whether the user needs advanced >>> features. >>> >>> Radim's implementation uses a new CacheMode and a new "simple-cache" >>> XML element. I feel this makes it too visible, since it's based on >>> what we can do now without an interceptor stack, and that might change >>> in the future. >>> >>> I'm in the "new local-cache attribute" camp, because the programmatic >>> configuration has to validate all those impossible configurations >>> anyway. In the UI as well, when a user tries to create a cache with a >>> store, I think it's better to tell him explicitly that he can't add a >>> store to a simple cache, than let him wonder why there isn't any >>> option to add a store in Infinispan. >> What UI do you mean? IDE with XSD, or does Infinispan have any tool with >> Mr. Clippy? > I meant the server (and WildFly) management console. No Clippy there, > at least not yet :) > >> Not having a button/configuration element is IMO the _proper_ way to >> tell the user 'You can't do that', rather than showing pop-up/throwing >> exception with 'Don't press this button, please!'. I admit that >> exception with link to docs is more _BFU-proof_, though. If users really >> cared about the schema, there wouldn't be so many threads where they try >> to copy-paste embedded configuration into server. The parser error >> message should be more ironic, like 'Something's wrong. I won't tell you >> what, but your XSD schema validator will!' >> > I admit having only the options that really work in the XSD and > relying on the XSD to point out mistakes seems cleaner. My concern is > discoverability: the user may be looking for an option that's only > available on a local-cache, and there's nothing telling them to > replace simple-cache with local-cache. > >>> I don't really like the idea of switching the cache implementation >>> dynamically, either. From the JIT's point of view, I think a call site >>> in an application is likely to always use the same kind of cache, so >>> the call will be monomorphic most of the time. But as a user, I'd >>> rather have something that's constantly slow than something that's >>> initially fast and then suddenly gets slower without me knowing why. >> +1 I was about to write the dynamic switcher, but having consistent >> performance is strong argument against that. >> >> Radim >> >>> Cheers >>> Dan >>> >>> >>> >>> On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno wrote: >>>> Indeed, JCache, MR and DistExec assume you'll be given a fully fledged Cache instance that allows them to do things that go beyond the basics, so as correctly pointed out here, it's hard to make the distinction purely based on the configuration. >>>> >>>> My gut feeling is that we need a way to specifically build a simple/basic cache directly based on your use case. 
With existing usages out there, you can't simply get a simple/basic cache just like that since a lot of the existing use cases expect to be able to use advanced features. An easy solution, as hinted by Radim, would be to have a wrapper for a simple/basic cache, which takes a standard Cache in, but don't go as far as to allow dynamic switching. E.g. if you chose to build a simple/basic cache, then things like add interceptor would fail...etc. I think this would work well for scenarios such as 2LC where we can control how the cache to be used is constructed. However, in scenarios where we expect it to work magically with existing code, it'd not work due to the need to know about the wrapper. >>>> >>>> Cheers, >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> >>>> ----- Original Message ----- >>>>> There's one glitch that needs to be stressed: some limitations of >>>>> simplified cache are not discoverable on creation time. While >>>>> persistence, tx and others are, adding custom interceptors and running >>>>> map-reduce or distributed-executors can't be guessed when the cache is >>>>> created. >>>>> I could (theoretically) implement MR and DistExec, but never the custom >>>>> interceptors: the idea of simple cache is that there are *no >>>>> interceptors*. And regrettably, this is not as rare case as I have >>>>> initially assumed, as for example JCaches grab any cache, insert their >>>>> interceptor and provide the wrapper. >>>>> >>>>> One way to go would be to not return the simple cache directly, but wrap >>>>> it in a delegating cache that would switch the implementation on the fly >>>>> as soon as someone tries to play with interceptors. However, this is not >>>>> without cost - the delegate would have to read a volatile field and >>>>> execute megamorphic call upon every cache operation. Applications could >>>>> get around that by doing instanceof and calling unwrap method during >>>>> initialization, but it's not really elegant solution. >>>>> >>>>> I wanted the choice transparent to the user from the beginning, but it's >>>>> not a way to go without penalties. >>>>> >>>>> For those who will suggest 'just a flag on local cache': Following the >>>>> 'less configuration, not more' I believe that the amount of >>>>> runtime-prohibited configurations should be kept at minimum. With such >>>>> flag, we would expand the state space of configuration 2 times, while >>>>> 95% of the configurations would be illegal. That's why I have rather >>>>> used new cache mode than adding a flag. >>>>> >>>>> Radim >>>>> >>>>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: >>>>>> Hi all, >>>>>> >>>>>> I wanted to bring attention to some discussion that has happened in the >>>>>> context of Radim's work on simplified code for specific cache types [1]. >>>>>> >>>>>> In particular, Radim proposes adding explicit configuration options >>>>>> (i.e. a new simple-cache cache type) to the programmatic/declarative API >>>>>> to ensure that a user is aware of the limitations of the resulting cache >>>>>> type (no interceptors, no persistence, no tx, etc). >>>>>> >>>>>> My opinion is that we should aim for "less" configuration and not >>>>>> "more", and that optimizations such as these should get enabled >>>>>> implicitly when the parameters allow it: if the configuration code >>>>>> detects it can use a "simple" cache. >>>>>> >>>>>> Also, this choice should happen at cache construction time, and not >>>>>> dynamically at cache usage time. >>>>>> >>>>>> WDYT ? 
>>>>>> >>>>>> Tristan >>>>>> >>>>>> [1] https://github.com/infinispan/infinispan/pull/3577 >>>>> -- >>>>> Radim Vansa >>>>> JBoss Performance Team >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Fri Aug 7 05:32:38 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 7 Aug 2015 10:32:38 +0100 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: <55C4748D.30005@redhat.com> References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> <55C21E3B.3020302@redhat.com> <55C4748D.30005@redhat.com> Message-ID: +1 If it doesn't get too complex, I would love to see that packaged in a low-dependency module. That's of course secondary, but we'd be using it in many more projects. Thanks, Sanne On 7 Aug 2015 10:05, "Radim Vansa" wrote: > It seems that I am outnumbered by the 'new local-cache attribute' camp > (though not convinced!). If there is not any other input on this topic, > I'll migrate that to local attribute, since I want to squeeze simple > cache to 8.0.0.Final > (That attribute will need to be explicitly set, I will not implement any > hot-switch) > > Radim > > On 08/05/2015 10:24 PM, Dan Berindei wrote: > > On Wed, Aug 5, 2015 at 5:31 PM, Radim Vansa wrote: > >> On 08/05/2015 03:37 PM, Dan Berindei wrote: > >>> Radim's implementation already throws exceptions when the application > >>> tries to use unsupported features like throwing exceptions. The > >>> question is how to choose the simple cache: a new CacheMode/XML > >>> element, an attribute on the local-cache element, or reusing the > >>> existing configuration to figure out whether the user needs advanced > >>> features. > >>> > >>> Radim's implementation uses a new CacheMode and a new "simple-cache" > >>> XML element. I feel this makes it too visible, since it's based on > >>> what we can do now without an interceptor stack, and that might change > >>> in the future. > >>> > >>> I'm in the "new local-cache attribute" camp, because the programmatic > >>> configuration has to validate all those impossible configurations > >>> anyway. In the UI as well, when a user tries to create a cache with a > >>> store, I think it's better to tell him explicitly that he can't add a > >>> store to a simple cache, than let him wonder why there isn't any > >>> option to add a store in Infinispan. > >> What UI do you mean? IDE with XSD, or does Infinispan have any tool with > >> Mr. Clippy? > > I meant the server (and WildFly) management console. 
No Clippy there, > > at least not yet :) > > > >> Not having a button/configuration element is IMO the _proper_ way to > >> tell the user 'You can't do that', rather than showing pop-up/throwing > >> exception with 'Don't press this button, please!'. I admit that > >> exception with link to docs is more _BFU-proof_, though. If users really > >> cared about the schema, there wouldn't be so many threads where they try > >> to copy-paste embedded configuration into server. The parser error > >> message should be more ironic, like 'Something's wrong. I won't tell you > >> what, but your XSD schema validator will!' > >> > > I admit having only the options that really work in the XSD and > > relying on the XSD to point out mistakes seems cleaner. My concern is > > discoverability: the user may be looking for an option that's only > > available on a local-cache, and there's nothing telling them to > > replace simple-cache with local-cache. > > > >>> I don't really like the idea of switching the cache implementation > >>> dynamically, either. From the JIT's point of view, I think a call site > >>> in an application is likely to always use the same kind of cache, so > >>> the call will be monomorphic most of the time. But as a user, I'd > >>> rather have something that's constantly slow than something that's > >>> initially fast and then suddenly gets slower without me knowing why. > >> +1 I was about to write the dynamic switcher, but having consistent > >> performance is strong argument against that. > >> > >> Radim > >> > >>> Cheers > >>> Dan > >>> > >>> > >>> > >>> On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno > wrote: > >>>> Indeed, JCache, MR and DistExec assume you'll be given a fully > fledged Cache instance that allows them to do things that go beyond the > basics, so as correctly pointed out here, it's hard to make the distinction > purely based on the configuration. > >>>> > >>>> My gut feeling is that we need a way to specifically build a > simple/basic cache directly based on your use case. With existing usages > out there, you can't simply get a simple/basic cache just like that since a > lot of the existing use cases expect to be able to use advanced features. > An easy solution, as hinted by Radim, would be to have a wrapper for a > simple/basic cache, which takes a standard Cache in, but don't go as far as > to allow dynamic switching. E.g. if you chose to build a simple/basic > cache, then things like add interceptor would fail...etc. I think this > would work well for scenarios such as 2LC where we can control how the > cache to be used is constructed. However, in scenarios where we expect it > to work magically with existing code, it'd not work due to the need to know > about the wrapper. > >>>> > >>>> Cheers, > >>>> -- > >>>> Galder Zamarre?o > >>>> Infinispan, Red Hat > >>>> > >>>> ----- Original Message ----- > >>>>> There's one glitch that needs to be stressed: some limitations of > >>>>> simplified cache are not discoverable on creation time. While > >>>>> persistence, tx and others are, adding custom interceptors and > running > >>>>> map-reduce or distributed-executors can't be guessed when the cache > is > >>>>> created. > >>>>> I could (theoretically) implement MR and DistExec, but never the > custom > >>>>> interceptors: the idea of simple cache is that there are *no > >>>>> interceptors*. And regrettably, this is not as rare case as I have > >>>>> initially assumed, as for example JCaches grab any cache, insert > their > >>>>> interceptor and provide the wrapper. 
> >>>>> > >>>>> One way to go would be to not return the simple cache directly, but > wrap > >>>>> it in a delegating cache that would switch the implementation on the > fly > >>>>> as soon as someone tries to play with interceptors. However, this is > not > >>>>> without cost - the delegate would have to read a volatile field and > >>>>> execute megamorphic call upon every cache operation. Applications > could > >>>>> get around that by doing instanceof and calling unwrap method during > >>>>> initialization, but it's not really elegant solution. > >>>>> > >>>>> I wanted the choice transparent to the user from the beginning, but > it's > >>>>> not a way to go without penalties. > >>>>> > >>>>> For those who will suggest 'just a flag on local cache': Following > the > >>>>> 'less configuration, not more' I believe that the amount of > >>>>> runtime-prohibited configurations should be kept at minimum. With > such > >>>>> flag, we would expand the state space of configuration 2 times, while > >>>>> 95% of the configurations would be illegal. That's why I have rather > >>>>> used new cache mode than adding a flag. > >>>>> > >>>>> Radim > >>>>> > >>>>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: > >>>>>> Hi all, > >>>>>> > >>>>>> I wanted to bring attention to some discussion that has happened in > the > >>>>>> context of Radim's work on simplified code for specific cache types > [1]. > >>>>>> > >>>>>> In particular, Radim proposes adding explicit configuration options > >>>>>> (i.e. a new simple-cache cache type) to the > programmatic/declarative API > >>>>>> to ensure that a user is aware of the limitations of the resulting > cache > >>>>>> type (no interceptors, no persistence, no tx, etc). > >>>>>> > >>>>>> My opinion is that we should aim for "less" configuration and not > >>>>>> "more", and that optimizations such as these should get enabled > >>>>>> implicitly when the parameters allow it: if the configuration code > >>>>>> detects it can use a "simple" cache. > >>>>>> > >>>>>> Also, this choice should happen at cache construction time, and not > >>>>>> dynamically at cache usage time. > >>>>>> > >>>>>> WDYT ? 
> >>>>>> > >>>>>> Tristan > >>>>>> > >>>>>> [1] https://github.com/infinispan/infinispan/pull/3577 > >>>>> -- > >>>>> Radim Vansa > >>>>> JBoss Performance Team > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> -- > >> Radim Vansa > >> JBoss Performance Team > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150807/56fa68d9/attachment.html From rvansa at redhat.com Fri Aug 7 06:35:25 2015 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 07 Aug 2015 12:35:25 +0200 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> <55C21E3B.3020302@redhat.com> <55C4748D.30005@redhat.com> Message-ID: <55C489ED.7050206@redhat.com> The simple cache is just a thin wrapper over DataContainer, and uses listeners, CacheNotifier and all that stuff from infinispan-core. The low-dependency part is BoundedConcurrentHashMap. Radim On 08/07/2015 11:32 AM, Sanne Grinovero wrote: > > +1 > If it doesn't get too complex, I would love to see that packaged in a > low-dependency module. That's of course secondary, but we'd be using > it in many more projects. > > Thanks, > Sanne > > On 7 Aug 2015 10:05, "Radim Vansa" > wrote: > > It seems that I am outnumbered by the 'new local-cache attribute' camp > (though not convinced!). If there is not any other input on this > topic, > I'll migrate that to local attribute, since I want to squeeze simple > cache to 8.0.0.Final > (That attribute will need to be explicitly set, I will not > implement any > hot-switch) > > Radim > > On 08/05/2015 10:24 PM, Dan Berindei wrote: > > On Wed, Aug 5, 2015 at 5:31 PM, Radim Vansa > wrote: > >> On 08/05/2015 03:37 PM, Dan Berindei wrote: > >>> Radim's implementation already throws exceptions when the > application > >>> tries to use unsupported features like throwing exceptions. The > >>> question is how to choose the simple cache: a new CacheMode/XML > >>> element, an attribute on the local-cache element, or reusing the > >>> existing configuration to figure out whether the user needs > advanced > >>> features. > >>> > >>> Radim's implementation uses a new CacheMode and a new > "simple-cache" > >>> XML element. 
I feel this makes it too visible, since it's based on > >>> what we can do now without an interceptor stack, and that > might change > >>> in the future. > >>> > >>> I'm in the "new local-cache attribute" camp, because the > programmatic > >>> configuration has to validate all those impossible configurations > >>> anyway. In the UI as well, when a user tries to create a cache > with a > >>> store, I think it's better to tell him explicitly that he > can't add a > >>> store to a simple cache, than let him wonder why there isn't any > >>> option to add a store in Infinispan. > >> What UI do you mean? IDE with XSD, or does Infinispan have any > tool with > >> Mr. Clippy? > > I meant the server (and WildFly) management console. No Clippy > there, > > at least not yet :) > > > >> Not having a button/configuration element is IMO the _proper_ > way to > >> tell the user 'You can't do that', rather than showing > pop-up/throwing > >> exception with 'Don't press this button, please!'. I admit that > >> exception with link to docs is more _BFU-proof_, though. If > users really > >> cared about the schema, there wouldn't be so many threads where > they try > >> to copy-paste embedded configuration into server. The parser error > >> message should be more ironic, like 'Something's wrong. I won't > tell you > >> what, but your XSD schema validator will!' > >> > > I admit having only the options that really work in the XSD and > > relying on the XSD to point out mistakes seems cleaner. My > concern is > > discoverability: the user may be looking for an option that's only > > available on a local-cache, and there's nothing telling them to > > replace simple-cache with local-cache. > > > >>> I don't really like the idea of switching the cache implementation > >>> dynamically, either. From the JIT's point of view, I think a > call site > >>> in an application is likely to always use the same kind of > cache, so > >>> the call will be monomorphic most of the time. But as a user, I'd > >>> rather have something that's constantly slow than something that's > >>> initially fast and then suddenly gets slower without me > knowing why. > >> +1 I was about to write the dynamic switcher, but having consistent > >> performance is strong argument against that. > >> > >> Radim > >> > >>> Cheers > >>> Dan > >>> > >>> > >>> > >>> On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno > > wrote: > >>>> Indeed, JCache, MR and DistExec assume you'll be given a > fully fledged Cache instance that allows them to do things that go > beyond the basics, so as correctly pointed out here, it's hard to > make the distinction purely based on the configuration. > >>>> > >>>> My gut feeling is that we need a way to specifically build a > simple/basic cache directly based on your use case. With existing > usages out there, you can't simply get a simple/basic cache just > like that since a lot of the existing use cases expect to be able > to use advanced features. An easy solution, as hinted by Radim, > would be to have a wrapper for a simple/basic cache, which takes a > standard Cache in, but don't go as far as to allow dynamic > switching. E.g. if you chose to build a simple/basic cache, then > things like add interceptor would fail...etc. I think this would > work well for scenarios such as 2LC where we can control how the > cache to be used is constructed. However, in scenarios where we > expect it to work magically with existing code, it'd not work due > to the need to know about the wrapper. 
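> >>>>
> >>>> A minimal sketch of that wrapper idea (class name invented, and the
> >>>> exact base-class constructor is from memory; only the interceptor-related
> >>>> operations get blocked):
> >>>>
> >>>>    import org.infinispan.AdvancedCache;
> >>>>    import org.infinispan.cache.impl.AbstractDelegatingAdvancedCache;
> >>>>    import org.infinispan.interceptors.base.CommandInterceptor;
> >>>>
> >>>>    public class SimpleCacheWrapper<K, V> extends AbstractDelegatingAdvancedCache<K, V> {
> >>>>       public SimpleCacheWrapper(AdvancedCache<K, V> cache) {
> >>>>          super(cache);
> >>>>       }
> >>>>
> >>>>       @Override
> >>>>       public void addInterceptor(CommandInterceptor i, int position) {
> >>>>          throw new UnsupportedOperationException("simple cache: no interceptor stack");
> >>>>       }
> >>>>
> >>>>       // ...likewise for addInterceptorBefore/After, removeInterceptor, etc.
> >>>>    }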
> >>>> > >>>> Cheers, > >>>> -- > >>>> Galder Zamarre?o > >>>> Infinispan, Red Hat > >>>> > >>>> ----- Original Message ----- > >>>>> There's one glitch that needs to be stressed: some > limitations of > >>>>> simplified cache are not discoverable on creation time. While > >>>>> persistence, tx and others are, adding custom interceptors > and running > >>>>> map-reduce or distributed-executors can't be guessed when > the cache is > >>>>> created. > >>>>> I could (theoretically) implement MR and DistExec, but never > the custom > >>>>> interceptors: the idea of simple cache is that there are *no > >>>>> interceptors*. And regrettably, this is not as rare case as > I have > >>>>> initially assumed, as for example JCaches grab any cache, > insert their > >>>>> interceptor and provide the wrapper. > >>>>> > >>>>> One way to go would be to not return the simple cache > directly, but wrap > >>>>> it in a delegating cache that would switch the > implementation on the fly > >>>>> as soon as someone tries to play with interceptors. However, > this is not > >>>>> without cost - the delegate would have to read a volatile > field and > >>>>> execute megamorphic call upon every cache operation. > Applications could > >>>>> get around that by doing instanceof and calling unwrap > method during > >>>>> initialization, but it's not really elegant solution. > >>>>> > >>>>> I wanted the choice transparent to the user from the > beginning, but it's > >>>>> not a way to go without penalties. > >>>>> > >>>>> For those who will suggest 'just a flag on local cache': > Following the > >>>>> 'less configuration, not more' I believe that the amount of > >>>>> runtime-prohibited configurations should be kept at minimum. > With such > >>>>> flag, we would expand the state space of configuration 2 > times, while > >>>>> 95% of the configurations would be illegal. That's why I > have rather > >>>>> used new cache mode than adding a flag. > >>>>> > >>>>> Radim > >>>>> > >>>>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: > >>>>>> Hi all, > >>>>>> > >>>>>> I wanted to bring attention to some discussion that has > happened in the > >>>>>> context of Radim's work on simplified code for specific > cache types [1]. > >>>>>> > >>>>>> In particular, Radim proposes adding explicit configuration > options > >>>>>> (i.e. a new simple-cache cache type) to the > programmatic/declarative API > >>>>>> to ensure that a user is aware of the limitations of the > resulting cache > >>>>>> type (no interceptors, no persistence, no tx, etc). > >>>>>> > >>>>>> My opinion is that we should aim for "less" configuration > and not > >>>>>> "more", and that optimizations such as these should get enabled > >>>>>> implicitly when the parameters allow it: if the > configuration code > >>>>>> detects it can use a "simple" cache. > >>>>>> > >>>>>> Also, this choice should happen at cache construction time, > and not > >>>>>> dynamically at cache usage time. > >>>>>> > >>>>>> WDYT ? 
> >>>>>> > >>>>>> Tristan > >>>>>> > >>>>>> [1] https://github.com/infinispan/infinispan/pull/3577 > >>>>> -- > >>>>> Radim Vansa > > >>>>> JBoss Performance Team > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> -- > >> Radim Vansa > > >> JBoss Performance Team > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Fri Aug 7 07:40:24 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 7 Aug 2015 12:40:24 +0100 Subject: [infinispan-dev] Special cache types and their configuration (or lack of) In-Reply-To: <55C489ED.7050206@redhat.com> References: <55B64303.4070006@redhat.com> <55B64E6C.6020706@redhat.com> <63584839.7062351.1438764521498.JavaMail.zimbra@redhat.com> <55C21E3B.3020302@redhat.com> <55C4748D.30005@redhat.com> <55C489ED.7050206@redhat.com> Message-ID: On 7 August 2015 at 11:35, Radim Vansa wrote: > The simple cache is just a thin wrapper over DataContainer, and uses > listeners, CacheNotifier and all that stuff from infinispan-core. The > low-dependency part is BoundedConcurrentHashMap. Ok, it was worth a try ;-) Cheers, Sanne > > Radim > > On 08/07/2015 11:32 AM, Sanne Grinovero wrote: >> >> +1 >> If it doesn't get too complex, I would love to see that packaged in a >> low-dependency module. That's of course secondary, but we'd be using >> it in many more projects. >> >> Thanks, >> Sanne >> >> On 7 Aug 2015 10:05, "Radim Vansa" > > wrote: >> >> It seems that I am outnumbered by the 'new local-cache attribute' camp >> (though not convinced!). If there is not any other input on this >> topic, >> I'll migrate that to local attribute, since I want to squeeze simple >> cache to 8.0.0.Final >> (That attribute will need to be explicitly set, I will not >> implement any >> hot-switch) >> >> Radim >> >> On 08/05/2015 10:24 PM, Dan Berindei wrote: >> > On Wed, Aug 5, 2015 at 5:31 PM, Radim Vansa > > wrote: >> >> On 08/05/2015 03:37 PM, Dan Berindei wrote: >> >>> Radim's implementation already throws exceptions when the >> application >> >>> tries to use unsupported features like throwing exceptions. 
The >> >>> question is how to choose the simple cache: a new CacheMode/XML >> >>> element, an attribute on the local-cache element, or reusing the >> >>> existing configuration to figure out whether the user needs >> advanced >> >>> features. >> >>> >> >>> Radim's implementation uses a new CacheMode and a new >> "simple-cache" >> >>> XML element. I feel this makes it too visible, since it's based on >> >>> what we can do now without an interceptor stack, and that >> might change >> >>> in the future. >> >>> >> >>> I'm in the "new local-cache attribute" camp, because the >> programmatic >> >>> configuration has to validate all those impossible configurations >> >>> anyway. In the UI as well, when a user tries to create a cache >> with a >> >>> store, I think it's better to tell him explicitly that he >> can't add a >> >>> store to a simple cache, than let him wonder why there isn't any >> >>> option to add a store in Infinispan. >> >> What UI do you mean? IDE with XSD, or does Infinispan have any >> tool with >> >> Mr. Clippy? >> > I meant the server (and WildFly) management console. No Clippy >> there, >> > at least not yet :) >> > >> >> Not having a button/configuration element is IMO the _proper_ >> way to >> >> tell the user 'You can't do that', rather than showing >> pop-up/throwing >> >> exception with 'Don't press this button, please!'. I admit that >> >> exception with link to docs is more _BFU-proof_, though. If >> users really >> >> cared about the schema, there wouldn't be so many threads where >> they try >> >> to copy-paste embedded configuration into server. The parser error >> >> message should be more ironic, like 'Something's wrong. I won't >> tell you >> >> what, but your XSD schema validator will!' >> >> >> > I admit having only the options that really work in the XSD and >> > relying on the XSD to point out mistakes seems cleaner. My >> concern is >> > discoverability: the user may be looking for an option that's only >> > available on a local-cache, and there's nothing telling them to >> > replace simple-cache with local-cache. >> > >> >>> I don't really like the idea of switching the cache implementation >> >>> dynamically, either. From the JIT's point of view, I think a >> call site >> >>> in an application is likely to always use the same kind of >> cache, so >> >>> the call will be monomorphic most of the time. But as a user, I'd >> >>> rather have something that's constantly slow than something that's >> >>> initially fast and then suddenly gets slower without me >> knowing why. >> >> +1 I was about to write the dynamic switcher, but having consistent >> >> performance is strong argument against that. >> >> >> >> Radim >> >> >> >>> Cheers >> >>> Dan >> >>> >> >>> >> >>> >> >>> On Wed, Aug 5, 2015 at 11:48 AM, Galder Zamarreno >> > wrote: >> >>>> Indeed, JCache, MR and DistExec assume you'll be given a >> fully fledged Cache instance that allows them to do things that go >> beyond the basics, so as correctly pointed out here, it's hard to >> make the distinction purely based on the configuration. >> >>>> >> >>>> My gut feeling is that we need a way to specifically build a >> simple/basic cache directly based on your use case. With existing >> usages out there, you can't simply get a simple/basic cache just >> like that since a lot of the existing use cases expect to be able >> to use advanced features. 
An easy solution, as hinted by Radim, >> would be to have a wrapper for a simple/basic cache, which takes a >> standard Cache in, but don't go as far as to allow dynamic >> switching. E.g. if you chose to build a simple/basic cache, then >> things like add interceptor would fail...etc. I think this would >> work well for scenarios such as 2LC where we can control how the >> cache to be used is constructed. However, in scenarios where we >> expect it to work magically with existing code, it'd not work due >> to the need to know about the wrapper. >> >>>> >> >>>> Cheers, >> >>>> -- >> >>>> Galder Zamarre?o >> >>>> Infinispan, Red Hat >> >>>> >> >>>> ----- Original Message ----- >> >>>>> There's one glitch that needs to be stressed: some >> limitations of >> >>>>> simplified cache are not discoverable on creation time. While >> >>>>> persistence, tx and others are, adding custom interceptors >> and running >> >>>>> map-reduce or distributed-executors can't be guessed when >> the cache is >> >>>>> created. >> >>>>> I could (theoretically) implement MR and DistExec, but never >> the custom >> >>>>> interceptors: the idea of simple cache is that there are *no >> >>>>> interceptors*. And regrettably, this is not as rare case as >> I have >> >>>>> initially assumed, as for example JCaches grab any cache, >> insert their >> >>>>> interceptor and provide the wrapper. >> >>>>> >> >>>>> One way to go would be to not return the simple cache >> directly, but wrap >> >>>>> it in a delegating cache that would switch the >> implementation on the fly >> >>>>> as soon as someone tries to play with interceptors. However, >> this is not >> >>>>> without cost - the delegate would have to read a volatile >> field and >> >>>>> execute megamorphic call upon every cache operation. >> Applications could >> >>>>> get around that by doing instanceof and calling unwrap >> method during >> >>>>> initialization, but it's not really elegant solution. >> >>>>> >> >>>>> I wanted the choice transparent to the user from the >> beginning, but it's >> >>>>> not a way to go without penalties. >> >>>>> >> >>>>> For those who will suggest 'just a flag on local cache': >> Following the >> >>>>> 'less configuration, not more' I believe that the amount of >> >>>>> runtime-prohibited configurations should be kept at minimum. >> With such >> >>>>> flag, we would expand the state space of configuration 2 >> times, while >> >>>>> 95% of the configurations would be illegal. That's why I >> have rather >> >>>>> used new cache mode than adding a flag. >> >>>>> >> >>>>> Radim >> >>>>> >> >>>>> On 07/27/2015 04:41 PM, Tristan Tarrant wrote: >> >>>>>> Hi all, >> >>>>>> >> >>>>>> I wanted to bring attention to some discussion that has >> happened in the >> >>>>>> context of Radim's work on simplified code for specific >> cache types [1]. >> >>>>>> >> >>>>>> In particular, Radim proposes adding explicit configuration >> options >> >>>>>> (i.e. a new simple-cache cache type) to the >> programmatic/declarative API >> >>>>>> to ensure that a user is aware of the limitations of the >> resulting cache >> >>>>>> type (no interceptors, no persistence, no tx, etc). >> >>>>>> >> >>>>>> My opinion is that we should aim for "less" configuration >> and not >> >>>>>> "more", and that optimizations such as these should get enabled >> >>>>>> implicitly when the parameters allow it: if the >> configuration code >> >>>>>> detects it can use a "simple" cache. 
>> >>>>>> Also, this choice should happen at cache construction time, and not
>> >>>>>> dynamically at cache usage time.
>> >>>>>>
>> >>>>>> WDYT ?
>> >>>>>>
>> >>>>>> Tristan
>> >>>>>>
>> >>>>>> [1] https://github.com/infinispan/infinispan/pull/3577
>> >>>>>
>> >>>>> --
>> >>>>> Radim Vansa
>> >>>>> JBoss Performance Team
>> >>>>>
>> >>>>> _______________________________________________
>> >>>>> infinispan-dev mailing list
>> >>>>> infinispan-dev at lists.jboss.org
>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> --
>> Radim Vansa
>> JBoss Performance Team
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Radim Vansa
> JBoss Performance Team
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From christian at sweazer.com Sun Aug 9 12:02:13 2015
From: christian at sweazer.com (Christian Beikov)
Date: Sun, 9 Aug 2015 18:02:13 +0200
Subject: [infinispan-dev] JCache integration with Wildfly provided configuration
Message-ID: <55C77985.7080706@sweazer.com>

Hello,

I am using Infinispan 7.2.3.Final within Wildfly 9.0.1 and I would like
to use the JCache integration, but I struggle a bit.

I configured the JGroups subsystem in the standalone.xml of my Wildfly
installation to enable clustering of Infinispan caches. That works as
expected, but I wasn't sure how to get my caches clustered too. I
thought of some possible solutions, but neither is really what I am
looking for:

1. Put the cache container configuration into standalone.xml
2. Copy the JGroups configuration and create a new transport in a
   custom infinispan configuration

When doing 1. I can't really use the JCache integration because there
is no way to tell the caching provider that I want a CacheManager for a
specific cache container. If you recommend doing 1., then it would be
nice if the caching provider accepted not only file URIs, but also
something like JNDI names. By doing that, I could reference existing
cache containers, which at least solves the problem with the JCache
integration. Still, I would prefer option 2., because I wouldn't have
to change the standalone.xml every time I add a cache.

When doing 2.
I can use the infinispan configuration file as URI when creating the
cache manager, so the JCache integration works without a problem. The
only thing bothering me is that I have to copy the JGroups configuration
to get a separate transport for my application's cache container. I
can't seem to reference the transport that I configured in the
standalone.xml, nor does it default to that. I would really like to
reuse the JGroups channel that is already established.

What I would like to know is whether there is a possibility to make use
of the JGroups configuration I did in the standalone.xml. If there
isn't, what should I do when wanting to cluster my caches? Just go with
option 1?

Regards,
Christian Beikov

From slaskawi at redhat.com Mon Aug 10 05:37:19 2015
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Mon, 10 Aug 2015 11:37:19 +0200
Subject: [infinispan-dev] Exposing Configuration through JMX (ISPN-5340)
Message-ID:

Hey!

I'm working on ISPN-5340 [1] (exposing Cache Configuration through JMX).

Before diving into the code I prepared a really short design doc which
describes how I would like to implement it.

Please have a look at it [2] and share your thoughts.

Thanks
Sebastian

[1] https://issues.jboss.org/browse/ISPN-5340
[2] https://github.com/infinispan/infinispan/wiki/Dynamic-JMX-exposer-for-Configuration

From rvansa at redhat.com Mon Aug 10 06:13:08 2015
From: rvansa at redhat.com (Radim Vansa)
Date: Mon, 10 Aug 2015 12:13:08 +0200
Subject: [infinispan-dev] Exposing Configuration through JMX (ISPN-5340)
In-Reply-To:
References:
Message-ID: <55C87934.8010605@redhat.com>

Isn't the Configuration instance meant to be immutable? IIUC you aim at
changing the Configuration instance directly. However, it seems to me
that you should rather prepare a new ConfigurationBuilder and add a
method that will apply the overrides (only the modified properties),
creating a new Configuration instance.

Radim

On 08/10/2015 11:37 AM, Sebastian Laskawiec wrote:
> Hey!
>
> I'm working on ISPN-5340 [1] (exposing Cache Configuration through JMX).
>
> Before diving into the code I prepared a really short design doc which
> describes how I would like to implement it.
>
> Please have a look at it [2] and share your thoughts.
>
> Thanks
> Sebastian
>
> [1] https://issues.jboss.org/browse/ISPN-5340
> [2]
> https://github.com/infinispan/infinispan/wiki/Dynamic-JMX-exposer-for-Configuration
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss Performance Team

_______________________________________________
infinispan-dev mailing list
infinispan-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

From slaskawi at redhat.com Mon Aug 10 06:24:34 2015
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Mon, 10 Aug 2015 12:24:34 +0200
Subject: [infinispan-dev] Exposing Configuration through JMX (ISPN-5340)
In-Reply-To: <55C87934.8010605@redhat.com>
References: <55C87934.8010605@redhat.com>
Message-ID:

Some parts of the configuration are mutable (see Attribute#isImmutable)
and you may also attach a listener (Attribute#addListener) and get
notified once they are modified.
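For example, something along these lines works with the commons
attributes API (a rough sketch; the exact listener signature is from
memory and may differ slightly):

import org.infinispan.commons.configuration.attributes.Attribute;
import org.infinispan.commons.configuration.attributes.AttributeDefinition;
import org.infinispan.commons.configuration.attributes.AttributeSet;

// a standalone attribute definition (definitions are mutable unless
// the builder marks them immutable())
AttributeDefinition<Integer> MAX_ENTRIES =
      AttributeDefinition.builder("maxEntries", 1000).build();
AttributeSet set = new AttributeSet("example", MAX_ENTRIES);

Attribute<Integer> maxEntries = set.attribute(MAX_ENTRIES);
if (!maxEntries.isImmutable()) {
   // the listener receives the attribute plus its previous value
   maxEntries.addListener((attribute, oldValue) ->
         System.out.printf("maxEntries: %s -> %s%n", oldValue, attribute.get()));
}
maxEntries.set(2000); // triggers the listener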
Thanks
Sebastian

On Mon, Aug 10, 2015 at 12:13 PM, Radim Vansa <rvansa at redhat.com> wrote:

> Isn't the Configuration instance meant to be immutable? IIUC you aim at
> changing the Configuration instance directly. However, it seems to me
> that you should rather prepare a new ConfigurationBuilder and add a
> method that will apply the overrides (only the modified properties),
> creating a new Configuration instance.
>
> Radim
>
> On 08/10/2015 11:37 AM, Sebastian Laskawiec wrote:
> > Hey!
> >
> > I'm working on ISPN-5340 [1] (exposing Cache Configuration through JMX).
> >
> > Before diving into the code I prepared a really short design doc which
> > describes how I would like to implement it.
> >
> > Please have a look at it [2] and share your thoughts.
> >
> > Thanks
> > Sebastian
> >
> > [1] https://issues.jboss.org/browse/ISPN-5340
> > [2]
> > https://github.com/infinispan/infinispan/wiki/Dynamic-JMX-exposer-for-Configuration
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Radim Vansa
> JBoss Performance Team
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From sanne at infinispan.org Mon Aug 10 14:46:06 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Mon, 10 Aug 2015 19:46:06 +0100
Subject: [infinispan-dev] Hidden failures in the testsuite
Message-ID:

Hi all,

I just updated my local master fork and started the testsuite, as I
sometimes do. It's great to see that the build was successful, and no
tests *appeared* to have failed.

But! Lazily scrolling up in the console, I see lots of exceptions which
don't look intentional (I'm aware that some tests intentionally create
error conditions). Also, some tests are extremely verbose, which might
be why nobody noticed these.

Some examples:
 - org.infinispan.it.compatibility.EmbeddedRestHotRodTest seems to log
   TRACE to the console (and probably the whole module does)
 - CDI tests such as org.infinispan.cdi.InfinispanExtensionRemote seem
   to fail in great numbers because of some ClassNotFoundException(s)
   and/or ResourceLoadingException(s)
 - OSGi integration tests seem to be all broken by some invalid
   integration with Aries / Geronimo
 - OSGi integration tests dump a lot of unnecessary information to the
   build console
 - the Infinispan Query tests log lots of WARN too, around missing
   configuration properties and in some cases concerning exceptions;
   I'm pretty sure I had resolved those in the past; it seems some
   refactorings were done without considering the log output.

Please don't ignore the output; if it's too verbose to watch, that
needs to be resolved too. I also monitor the "expected execution time"
of some modules I'm interested in; that's been useful in some cases to
figure out that there was a regression.

One big question: why is it that so many tests "appear to be good" but
are actually broken? I would like to understand that.
Thanks, Sanne From galder at redhat.com Mon Aug 10 16:23:41 2015 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 10 Aug 2015 16:23:41 -0400 (EDT) Subject: [infinispan-dev] Infinispan 8.0.0.Beta3 out with Lucene 5, Functional API, Configuration Templates...etc Message-ID: <1127166190.10120703.1439238221647.JavaMail.zimbra@redhat.com> Hi all, You can read all about Infinispan 8.0.0.Beta3: http://blog.infinispan.org/2015/08/infinispan-800beta3-out-with-lucene-5.html Cheers, -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Tue Aug 11 04:18:14 2015 From: galder at redhat.com (Galder Zamarreno) Date: Tue, 11 Aug 2015 04:18:14 -0400 (EDT) Subject: [infinispan-dev] Infinispan 7.2.4.Final is out with fixes in async store, Hot Rod server/client...etc In-Reply-To: <1220274112.10432818.1439281058576.JavaMail.zimbra@redhat.com> Message-ID: <1299145500.10432996.1439281094772.JavaMail.zimbra@redhat.com> Hi all, Infinispan 7.2.4.Final is out with a few fixes in async store, Hot Rod server/client...etc. You can read all about it here: http://blog.infinispan.org/2015/08/infinispan-724final-including-fixes-for.html Cheers, -- Galder Zamarre?o Infinispan, Red Hat From paul.ferraro at redhat.com Tue Aug 11 08:29:21 2015 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 11 Aug 2015 08:29:21 -0400 (EDT) Subject: [infinispan-dev] JCache integration with Wildfly provided configuration In-Reply-To: <55C77985.7080706@sweazer.com> References: <55C77985.7080706@sweazer.com> Message-ID: <716579693.8548846.1439296161099.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Christian Beikov" > To: infinispan-dev at lists.jboss.org > Sent: Sunday, August 9, 2015 12:02:13 PM > Subject: [infinispan-dev] JCache integration with Wildfly provided configuration > > Hello, > > I am using Infinispan 7.2.3.Final within Wildfly 9.0.1 and I would like to > use the JCache integration but I struggle a bit. > > I configured the JGroups subsystem in the standalone.xml of my Wildfly > installation to enable clustering of Infinispan caches. That works as > expected, but I wasn't sure how I would have my caches clustered too. I > thought of some possible solutions but they both aren't really what I am > looking for. > > > 1. Put the cache container configuration into standalone.xml > 2. Copy the JGroups configuration and create a new transport in a custom > infinispan configuration > > When doing 1. I can't really use the JCache integration because there is no > way to tell the caching provider, that I want a CacheManager for a specific > cache container. If you would recommend doing 1. then it would be nice if > the caching provider would not only accept file URIs, but also something > like JNDI names. By doing that, I could reference existing cache containers > which at least solves the problem with the JCache integration. Still I would > prefer option 2. because I wouldn't have to change the standalone.xml every > time I add a cache. The trouble with JCache integration into WildFly is that JCache is not an EE spec - and lacks any details for how cache resources should be managed by a container - or how a user would go about accessing container managed caches. Nor is there any concept of sharing cache resources between applications (i.e. each manages the cache lifecycle manually) - or about isolation of cache entries beyond ClassLoader. The API itself couples your code to a specific JCache implementation, since the URI of a cache manager is, by nature, provider specific. 
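To make the coupling concrete - the bootstrap itself looks portable, but
the URI semantics are entirely provider-defined. A minimal sketch (with
Infinispan's provider the URI is taken as the location of its XML
configuration file, so the lookup silently stops being portable):

import java.net.URI;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

CachingProvider provider = Caching.getCachingProvider();
// "infinispan.xml" only means something to Infinispan's provider;
// another implementation is free to interpret the URI differently.
CacheManager manager = provider.getCacheManager(
      URI.create("infinispan.xml"),
      Thread.currentThread().getContextClassLoader());
Cache<String, String> cache = manager.getCache("app-cache");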
Consequently, we've deferred integrating JCache into WildFly until its
next evolution (i.e. 1.1/2.0) - or unless it somehow gets pulled into EE8.

So, if you really want to use the JCache API, you'll have to stick with
option #2 for now.

> When doing 2. I can use the infinispan configuration file as URI when
> creating the cache manager, so the JCache integration works without a
> problem. The only thing bothering me is that I have to copy the JGroups
> configuration to get a separate transport for my application's cache
> container. I can't seem to reference the transport that I configured in
> the standalone.xml, nor does it default to that. I would really like to
> reuse the JGroups channel that is already established.

Infinispan requires a separate JGroups channel per cache manager - this
is a limitation of Infinispan itself. The only means of changing this is
via a custom Transport implementation (as we do in WildFly).
WildFly works around this limitation by providing Infinispan with a
unique ForkChannel using a common JChannel. You might try configuring
Infinispan with a custom transport based on the implementation from
WildFly. See:
https://github.com/wildfly/wildfly/blob/9.0.1.Final/clustering/infinispan/extension/src/main/java/org/jboss/as/clustering/infinispan/ChannelTransport.java

> What I would like to know is whether there is a possibility to make use
> of the JGroups configuration I did in the standalone.xml. If there
> isn't, what should I do when wanting to cluster my caches? Just go with
> option 1?

Not without customization, see above.

> Regards,
> Christian Beikov
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From christian at sweazer.com Mon Aug 17 15:48:11 2015
From: christian at sweazer.com (Christian Beikov)
Date: Mon, 17 Aug 2015 21:48:11 +0200
Subject: [infinispan-dev] JCache integration with Wildfly provided configuration
In-Reply-To: <716579693.8548846.1439296161099.JavaMail.zimbra@redhat.com>
References: <55C77985.7080706@sweazer.com>
	<716579693.8548846.1439296161099.JavaMail.zimbra@redhat.com>
Message-ID: <55D23A7B.4020609@sweazer.com>

Thanks for the quick answer!

Could you maybe tell me if what I did is OK? So far I don't seem to have
a problem with the following solution.
I put my cache configuration into a custom cache container in my
standalone.xml and inject that cache container just to produce a JSR 107
CacheManager, like this:

@Singleton
public class ClusteredCacheManagerProducer {

    @Resource(lookup = "java:jboss/infinispan/container/app")
    private EmbeddedCacheManager cacheManager;

    @Produces
    @ClusteredCache
    @ApplicationScoped
    public CacheManager produceJcacheCacheManager() {
        return new org.infinispan.jcache.JCacheManager(URI.create("app"),
                cacheManager, Caching.getCachingProvider());
    }

    public void dispose(@Disposes @ClusteredCache CacheManager jcacheManager) {
        jcacheManager.close();
    }
}

Do you see anything problematic with that approach?

Regards,
Christian

On 11.08.2015 at 14:29, Paul Ferraro wrote:
> ----- Original Message -----
>> From: "Christian Beikov"
>> To: infinispan-dev at lists.jboss.org
>> Sent: Sunday, August 9, 2015 12:02:13 PM
>> Subject: [infinispan-dev] JCache integration with Wildfly provided configuration
>>
>> Hello,
>>
>> I am using Infinispan 7.2.3.Final within Wildfly 9.0.1 and I would like to
>> use the JCache integration but I struggle a bit.
>> >> I configured the JGroups subsystem in the standalone.xml of my Wildfly >> installation to enable clustering of Infinispan caches. That works as >> expected, but I wasn't sure how I would have my caches clustered too. I >> thought of some possible solutions but they both aren't really what I am >> looking for. >> >> >> 1. Put the cache container configuration into standalone.xml >> 2. Copy the JGroups configuration and create a new transport in a custom >> infinispan configuration >> >> When doing 1. I can't really use the JCache integration because there is no >> way to tell the caching provider, that I want a CacheManager for a specific >> cache container. If you would recommend doing 1. then it would be nice if >> the caching provider would not only accept file URIs, but also something >> like JNDI names. By doing that, I could reference existing cache containers >> which at least solves the problem with the JCache integration. Still I would >> prefer option 2. because I wouldn't have to change the standalone.xml every >> time I add a cache. > The trouble with JCache integration into WildFly is that JCache is not an EE spec - and lacks any details for how cache resources should be managed by a container - or how a user would go about accessing container managed caches. Nor is there any concept of sharing cache resources between applications (i.e. each manages the cache lifecycle manually) - or about isolation of cache entries beyond ClassLoader. The API itself couples your code to a specific JCache implementation, since the URI of a cache manager is, by nature, provider specific. > Consequently, we've deferred integrating JCache into WildFly until it's next evolution (i.e. 1.1/2.0) - or unless it somehow gets pulled into EE8. > > Consequently, if you really want to use the JCache API, you'll have to stick with option #2 for now. > >> When doing 2. I can use the infinispan configuration file as URI when >> creating the cache manager so the JCache integration works without a >> problem. The only thing that is bothering me is, that I have to copy the >> JGroups configuration to have a separate transport for my applications cache >> container. I can't seem to reference the transport that I configured in the >> standalone.xml nor does it default to that. I would really like to reuse the >> JGroup channel that is already established. > Infinispan requires a separate jgroups channel per cache manager - this is a limitation of Infinispan itself. The only means of changing this is via a custom Transport implementation (as we do in WildFly). > WildFly works around this limitation by providing Infinispan with a unique ForkChannel using a common JChannel. You might try configuring Infinispan with a custom transport based on the implementation from WildFly. See: > https://github.com/wildfly/wildfly/blob/9.0.1.Final/clustering/infinispan/extension/src/main/java/org/jboss/as/clustering/infinispan/ChannelTransport.java > >> What I would like to know is, whether there is a possibility to make use of >> the JGroups configuration I did in the standalone.xml. If there isn't, what >> should I do when wanting to cluster my caches? Just go with option 1? > Not without customization, see above. 
>
>> Regards,
>> Christian Beikov
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From anujshahwork at gmail.com Tue Aug 18 07:55:45 2015
From: anujshahwork at gmail.com (Anuj Shah)
Date: Tue, 18 Aug 2015 12:55:45 +0100
Subject: [infinispan-dev] Enhanced safety of Lucene directory reader locker
Message-ID:

Hello,

When using the Lucene directory in production we have encountered many
instances of files being removed incorrectly. After much investigation,
we identified and logged the following issues:

 - ISPN-4497
 - ISPN-4777

After a long period of stability, we've still found rare problems
related to when applications are shut down. I don't need to go into the
details here.

We've now decided that, rather than trying to fix the problems, we would
enhance the DistributedSegmentReadLocker to prevent unwarranted file
deletes. The mechanism is quite simple: when the directory really wants
a file deleted, we add an additional marker to the cache and then
proceed with a real delete only if this marker is present.

You can see the changes here:
https://github.com/anujshahwork/infinispan/commit/a31e93cee452549e8820f7247a79396de0312f83

I would appreciate feedback on this idea, and whether we should create a
feature request along with a pull request.

Thanks
Anuj Shah

From paul.ferraro at redhat.com Tue Aug 18 21:40:03 2015
From: paul.ferraro at redhat.com (Paul Ferraro)
Date: Tue, 18 Aug 2015 21:40:03 -0400 (EDT)
Subject: [infinispan-dev] JCache integration with Wildfly provided configuration
In-Reply-To: <55D23A7B.4020609@sweazer.com>
References: <55C77985.7080706@sweazer.com>
	<716579693.8548846.1439296161099.JavaMail.zimbra@redhat.com>
	<55D23A7B.4020609@sweazer.com>
Message-ID: <353573766.13487707.1439948403327.JavaMail.zimbra@redhat.com>

I see no reason why your producer wouldn't work - however,
JCacheManager.close() will attempt to close the EmbeddedCacheManager.
You shouldn't do that. The server manages the lifecycle of that resource.

----- Original Message -----
> From: "Christian Beikov"
> To: infinispan-dev at lists.jboss.org
> Sent: Monday, August 17, 2015 3:48:11 PM
> Subject: Re: [infinispan-dev] JCache integration with Wildfly provided configuration
>
> Thanks for the quick answer!
>
> Could you maybe tell me if what I did is OK? So far I don't seem to have
> a problem with the following solution.
> I put my cache configuration into a custom cache container in my
> standalone.xml and inject that cache container just to produce a JSR 107
> CacheManager, like this:
>
> @Singleton
> public class ClusteredCacheManagerProducer {
>
>     @Resource(lookup = "java:jboss/infinispan/container/app")
>     private EmbeddedCacheManager cacheManager;
>
>     @Produces
>     @ClusteredCache
>     @ApplicationScoped
>     public CacheManager produceJcacheCacheManager() {
>         return new org.infinispan.jcache.JCacheManager(URI.create("app"),
>                 cacheManager, Caching.getCachingProvider());
>     }
>
>     public void dispose(@Disposes @ClusteredCache CacheManager jcacheManager) {
>         jcacheManager.close();
>     }
> }
>
> Do you see anything problematic with that approach?
> > Regards, > Christian > > Am 11.08.2015 um 14:29 schrieb Paul Ferraro: > > ----- Original Message ----- > >> From: "Christian Beikov" > >> To: infinispan-dev at lists.jboss.org > >> Sent: Sunday, August 9, 2015 12:02:13 PM > >> Subject: [infinispan-dev] JCache integration with Wildfly provided > >> configuration > >> > >> Hello, > >> > >> I am using Infinispan 7.2.3.Final within Wildfly 9.0.1 and I would like to > >> use the JCache integration but I struggle a bit. > >> > >> I configured the JGroups subsystem in the standalone.xml of my Wildfly > >> installation to enable clustering of Infinispan caches. That works as > >> expected, but I wasn't sure how I would have my caches clustered too. I > >> thought of some possible solutions but they both aren't really what I am > >> looking for. > >> > >> > >> 1. Put the cache container configuration into standalone.xml > >> 2. Copy the JGroups configuration and create a new transport in a > >> custom > >> infinispan configuration > >> > >> When doing 1. I can't really use the JCache integration because there is > >> no > >> way to tell the caching provider, that I want a CacheManager for a > >> specific > >> cache container. If you would recommend doing 1. then it would be nice if > >> the caching provider would not only accept file URIs, but also something > >> like JNDI names. By doing that, I could reference existing cache > >> containers > >> which at least solves the problem with the JCache integration. Still I > >> would > >> prefer option 2. because I wouldn't have to change the standalone.xml > >> every > >> time I add a cache. > > The trouble with JCache integration into WildFly is that JCache is not an > > EE spec - and lacks any details for how cache resources should be managed > > by a container - or how a user would go about accessing container managed > > caches. Nor is there any concept of sharing cache resources between > > applications (i.e. each manages the cache lifecycle manually) - or about > > isolation of cache entries beyond ClassLoader. The API itself couples > > your code to a specific JCache implementation, since the URI of a cache > > manager is, by nature, provider specific. > > Consequently, we've deferred integrating JCache into WildFly until it's > > next evolution (i.e. 1.1/2.0) - or unless it somehow gets pulled into EE8. > > > > Consequently, if you really want to use the JCache API, you'll have to > > stick with option #2 for now. > > > >> When doing 2. I can use the infinispan configuration file as URI when > >> creating the cache manager so the JCache integration works without a > >> problem. The only thing that is bothering me is, that I have to copy the > >> JGroups configuration to have a separate transport for my applications > >> cache > >> container. I can't seem to reference the transport that I configured in > >> the > >> standalone.xml nor does it default to that. I would really like to reuse > >> the > >> JGroup channel that is already established. > > Infinispan requires a separate jgroups channel per cache manager - this is > > a limitation of Infinispan itself. The only means of changing this is via > > a custom Transport implementation (as we do in WildFly). > > WildFly works around this limitation by providing Infinispan with a unique > > ForkChannel using a common JChannel. You might try configuring Infinispan > > with a custom transport based on the implementation from WildFly. 
See: > > https://github.com/wildfly/wildfly/blob/9.0.1.Final/clustering/infinispan/extension/src/main/java/org/jboss/as/clustering/infinispan/ChannelTransport.java > > > >> What I would like to know is, whether there is a possibility to make use > >> of > >> the JGroups configuration I did in the standalone.xml. If there isn't, > >> what > >> should I do when wanting to cluster my caches? Just go with option 1? > > Not without customization, see above. > > > >> Regards, > >> Christian Beikov > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From christian at sweazer.com Wed Aug 19 05:54:49 2015 From: christian at sweazer.com (Christian Beikov) Date: Wed, 19 Aug 2015 11:54:49 +0200 Subject: [infinispan-dev] JCache integration with Wildfly provided configuration In-Reply-To: <353573766.13487707.1439948403327.JavaMail.zimbra@redhat.com> References: <55C77985.7080706@sweazer.com> <716579693.8548846.1439296161099.JavaMail.zimbra@redhat.com> <55D23A7B.4020609@sweazer.com> <353573766.13487707.1439948403327.JavaMail.zimbra@redhat.com> Message-ID: <55D45269.7040509@sweazer.com> Thanks for that info. Maybe you shouldn't actually close the underlying cache manager when managedCacheManager is true? Am 19.08.2015 um 03:40 schrieb Paul Ferraro: > I see no reason why your producer wouldn't work - however, JCacheManager.close() will attempt to close the EmbeddedCacheManager. You shouldn't do that. The server manages the lifecycle of that resource. > > ----- Original Message ----- >> From: "Christian Beikov" >> To: infinispan-dev at lists.jboss.org >> Sent: Monday, August 17, 2015 3:48:11 PM >> Subject: Re: [infinispan-dev] JCache integration with Wildfly provided configuration >> >> Thanks for the quick answer! >> >> Could you maybe tell me if what I did is ok? So far I don't seem to have >> a problem with the following solution. >> I put my cache configuration into a custom cache container of my >> standalone.xml and inject the cache container just to produce a JSR 107 >> CacheManager like that: >> >> @Singleton >> public class ClusteredCacheManagerProducer { >> >> @Resource(lookup = "java:jboss/infinispan/container/app") >> private EmbeddedCacheManager cacheManager; >> >> @Produces >> @ClusteredCache >> @ApplicationScoped >> public CacheManager produceJcacheCacheManager() { >> return new >> org.infinispan.jcache.JCacheManager(URI.create("app"), cacheManager, >> Caching.getCachingProvider()); >> } >> >> public void dispose(@Disposes @ClusteredCache CacheManager >> jcacheManager) { >> jcacheManager.close(); >> } >> } >> >> Do you see anything problematic with that approach? 
>> >> Regards, >> Christian >> >> Am 11.08.2015 um 14:29 schrieb Paul Ferraro: >>> ----- Original Message ----- >>>> From: "Christian Beikov" >>>> To: infinispan-dev at lists.jboss.org >>>> Sent: Sunday, August 9, 2015 12:02:13 PM >>>> Subject: [infinispan-dev] JCache integration with Wildfly provided >>>> configuration >>>> >>>> Hello, >>>> >>>> I am using Infinispan 7.2.3.Final within Wildfly 9.0.1 and I would like to >>>> use the JCache integration but I struggle a bit. >>>> >>>> I configured the JGroups subsystem in the standalone.xml of my Wildfly >>>> installation to enable clustering of Infinispan caches. That works as >>>> expected, but I wasn't sure how I would have my caches clustered too. I >>>> thought of some possible solutions but they both aren't really what I am >>>> looking for. >>>> >>>> >>>> 1. Put the cache container configuration into standalone.xml >>>> 2. Copy the JGroups configuration and create a new transport in a >>>> custom >>>> infinispan configuration >>>> >>>> When doing 1. I can't really use the JCache integration because there is >>>> no >>>> way to tell the caching provider, that I want a CacheManager for a >>>> specific >>>> cache container. If you would recommend doing 1. then it would be nice if >>>> the caching provider would not only accept file URIs, but also something >>>> like JNDI names. By doing that, I could reference existing cache >>>> containers >>>> which at least solves the problem with the JCache integration. Still I >>>> would >>>> prefer option 2. because I wouldn't have to change the standalone.xml >>>> every >>>> time I add a cache. >>> The trouble with JCache integration into WildFly is that JCache is not an >>> EE spec - and lacks any details for how cache resources should be managed >>> by a container - or how a user would go about accessing container managed >>> caches. Nor is there any concept of sharing cache resources between >>> applications (i.e. each manages the cache lifecycle manually) - or about >>> isolation of cache entries beyond ClassLoader. The API itself couples >>> your code to a specific JCache implementation, since the URI of a cache >>> manager is, by nature, provider specific. >>> Consequently, we've deferred integrating JCache into WildFly until it's >>> next evolution (i.e. 1.1/2.0) - or unless it somehow gets pulled into EE8. >>> >>> Consequently, if you really want to use the JCache API, you'll have to >>> stick with option #2 for now. >>> >>>> When doing 2. I can use the infinispan configuration file as URI when >>>> creating the cache manager so the JCache integration works without a >>>> problem. The only thing that is bothering me is, that I have to copy the >>>> JGroups configuration to have a separate transport for my applications >>>> cache >>>> container. I can't seem to reference the transport that I configured in >>>> the >>>> standalone.xml nor does it default to that. I would really like to reuse >>>> the >>>> JGroup channel that is already established. >>> Infinispan requires a separate jgroups channel per cache manager - this is >>> a limitation of Infinispan itself. The only means of changing this is via >>> a custom Transport implementation (as we do in WildFly). >>> WildFly works around this limitation by providing Infinispan with a unique >>> ForkChannel using a common JChannel. You might try configuring Infinispan >>> with a custom transport based on the implementation from WildFly. 
>>> See:
>>> https://github.com/wildfly/wildfly/blob/9.0.1.Final/clustering/infinispan/extension/src/main/java/org/jboss/as/clustering/infinispan/ChannelTransport.java
>>>
>>>> What I would like to know is whether there is a possibility to make use
>>>> of the JGroups configuration I did in the standalone.xml. If there
>>>> isn't, what should I do when wanting to cluster my caches? Just go with
>>>> option 1?
>>>
>>> Not without customization, see above.
>>>
>>>> Regards,
>>>> Christian Beikov
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From ttarrant at redhat.com Fri Aug 21 05:10:42 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Fri, 21 Aug 2015 11:10:42 +0200
Subject: [infinispan-dev] Shared vs Non-Shared CacheStores
In-Reply-To: <55C31D43.30106@redhat.com>
References: <55ACBA42.5070507@redhat.com> <55C31D43.30106@redhat.com>
Message-ID: <55D6EB12.5040203@redhat.com>

I've been thinking more about this issue after talking with Sanne, and
here's my (possibly faulty) analysis:

I don't think this is so dramatic or urgent that we need a solution
(i.e. a distinct SPI for embedded cachestores) in place by 8.0. This is
something that we can design and introduce as a private-only SPI during
the 8.x series, migrating our stores to it accordingly. Note that such
an SPI would be more closely tied to the DataContainer, so it may not
even have a relationship with the PersistenceManager.

What I would like to see in the current SPI for 8.0, however, is an
extensible way for cachestores to expose "capabilities", so that not
only can we prevent potentially broken configurations, but we can also
declare support for advanced functionality (shared, transactional,
schema-aware, etc.).
I'm not fond of marker-only interfaces (see
org.infinispan.persistence.spi.LocalOnlyCacheLoader), so I'd prefer an
annotation-based approach.

Tristan

On 06/08/2015 10:39, Radim Vansa wrote:
> I understand that shared cache stores will be the more commonly
> implemented kind, but I don't think that non-shared stores should be
> considered a 'private interface'. Separating them, though, would really
> give us the opportunity to change the non-shared SPI more often if
> needed, without breaking the shared one.
> However, hot-glueing in a new cool interface without a reference
> implementation that supports transactions and solves the ton of issues
> described in [1] is not a wise move, IMO. And there's no time to
> implement this before 8.0.0.Final.
>
> Radim
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Consistency-guarantees-in-Infinispan
>
> On 08/05/2015 11:57 PM, Sanne Grinovero wrote:
>> I don't doubt Radim's code :) but I'm pretty confident that even that
>> implementation is limited by the constraints of the general-purpose
>> API.
>>
>> For example it seems Bela will soon allow more flexibility in JGroups
>> regarding buffer representations.
We need to commit on a stable API >> for end user integrations (shared cachestore implementors), but we >> also need to keep options open to soon play with other approaches. >> >> That's why I think this separation should be done before Infinispan >> 8.0.0.Final even if I don't have a concrete proposal for how this >> other API should look like: I don't presume to be able to anticipate >> which API exactly will be best, but I think we can all see that we >> will want to change that. There should be a private internal contract >> which we can change even in micro versions without concerns of >> compatibility, so to allow R&D progress in the most performance >> sensitive areas w/o this being a problem for integrators and users. >> >> Better configuration validations are additional (strong) benefits: >> we've seen lots of misunderstandings about which CacheStores / >> configuration combinations are valid. >> >> Thanks, >> Sanne >> >> On 5 August 2015 at 22:13, Dan Berindei wrote: >>> On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero wrote: >>>> On 20 July 2015 at 11:02, Dan Berindei wrote: >>>>> Sanne, I think changing the cache store API is actually the most >>>>> painful part, so we should only do it if we gain a concrete advantage >>>>> from doing it. From a compatibility point of view, implementing a new >>>>> interface vs implementing the same interface with completely different >>>>> methods is just as bad. >>>> Right, from that perspective it's a quite horrible proposal. >>>> >>>> But I think we can agree that only the "SharedCacheStore" deserves to >>>> be considered an SPI, right? >>>> That's the one people will normally customize to map stuff to other >>>> stores one might have. >>>> >>>> I think it's important that beyond Infinispan 8.0 API's freeze, we can >>>> make any change to the non-shared SPI >>>> without affecting users who implement a custom shared cachestore. >>>> >>>> I highly doubt someone will implement a high-performance custom off >>>> heap swap strategy, but if someone does he should contribute it and >>>> will probably need to make integration level changes. >>>> >>>> We probably won't have the time to implement a new super efficient >>>> local-only cachestore to replace the leveldb one, but I'd like to keep >>>> the possibility open to do that beyond 8.0, *especially* without >>>> breaking compatibility for other people. >>> We already have a new super efficient local-only cachestore :) >>> >>> https://github.com/infinispan/infinispan/tree/master/persistence/soft-index >>> >>> >>>> Sanne >>>> >>>> >>>>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero wrote: >>>>>> +1 for incremental changes.. >>>>>> >>>>>> I'd see the first step as defining two different interfaces; >>>>>> essentially we need to choose two good names. >>>>>> >>>>>> Then we could have both interfaces still implement the same identical >>>>>> methods, but go through each implementation and decide to "mark" it as >>>>>> shared-only or never-shared. >>>>>> >>>>>> That would make it simpler to make concrete change proposals on each >>>>>> of them and start taking some advantage from the split. I think you'll >>>>>> need the two different interfaces to implement the validations you >>>>>> mentioned. >>>>>> >>>>>> For Infinispan 8's goals, I'd be happy enough to keep the >>>>>> "shared-only" interface quite similar to the current one, but mark the >>>>>> never-shared one as a private or experimental SPI to allow ourselves >>>>>> some more flexibility in performance oriented changes. 
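>>>>>>
>>>>>> To fix ideas, the first step could be as small as this (the names
>>>>>> are placeholders, still to be chosen):
>>>>>>
>>>>>>    import org.infinispan.persistence.spi.CacheLoader;
>>>>>>    import org.infinispan.persistence.spi.CacheWriter;
>>>>>>
>>>>>>    // public SPI: stays stable for users mapping caches to external stores
>>>>>>    public interface SharedCacheStore<K, V> extends CacheLoader<K, V>, CacheWriter<K, V> {
>>>>>>    }
>>>>>>
>>>>>>    // private SPI: identical methods today, free to diverge for performance
>>>>>>    public interface LocalCacheStore<K, V> extends CacheLoader<K, V>, CacheWriter<K, V> {
>>>>>>    }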
>>>>>> Thanks, >>>>>> Sanne >>>>>> >>>>>> On 20 July 2015 at 10:07, Tristan Tarrant wrote: >>>>>>> Sanne, well written. >>>>>>> Before actually implementing any of the optimizations/changes you >>>>>>> mention, I think the lowest-hanging fruit we should grab now is just to >>>>>>> add checks to all of our cachestores to actually throw an exception when >>>>>>> they are being enabled in unsupported configurations. >>>>>>> >>>>>>> I've created [1] to get us started >>>>>>> >>>>>>> Tristan >>>>>>> >>>>>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>>>>> >>>>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>>>>> I would like to propose a clear-cut separation between our shared and >>>>>>>> non-shared CacheStores, >>>>>>>> in all terms such as: >>>>>>>> - Configuration options >>>>>>>> - Integration contracts (Split the CacheStore SPI) >>>>>>>> - Implementations >>>>>>>> - Terminology, to avoid any further confusion around valid >>>>>>>> configurations and sensible architectures >>>>>>>> >>>>>>>> We have loads of examples of users who get in trouble by configuring >>>>>>>> one incorrectly, but also there are plenty of efficiency improvements >>>>>>>> we could take advantage of by clearly splitting the integration points >>>>>>>> and the implementations in two categories. >>>>>>>> >>>>>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>>>>> Infinispan is able to restore a consistent state after having stopped >>>>>>>> a DIST cluster which passivated into non-shared CacheStore instances, >>>>>>>> or even REPL clusters when they don't shut down all at the same exact >>>>>>>> time (and "exact same time" is a strange concept at least..). We need >>>>>>>> to clarify the different options, tradeoffs and their consequences.. >>>>>>>> to users and ourselves, as a clearly defined use case will avoid bugs >>>>>>>> and simplify implementations. >>>>>>>> >>>>>>>> # The purpose of each >>>>>>>> I think that people should use a non-shared (local?) CacheStore for >>>>>>>> the sole purpose of expanding the storage capacity of each single >>>>>>>> node.. be it because you don't have enough memory at all, or be it >>>>>>>> because you prefer some extra safety margin because either your >>>>>>>> estimates are complex, or maybe because we live in a real world where >>>>>>>> the hashing function might not be perfect in practice. I hope we all >>>>>>>> agree that Infinispan should be able to handle such situations with at >>>>>>>> worst a graceful performance degradation, rather than complain by >>>>>>>> sending OOMs to the admin and setting the service on strike. >>>>>>>> >>>>>>>> A Shared CacheStore is useful for very different purposes; primarily >>>>>>>> to implement a Cache on some other service - for example your (single, >>>>>>>> shared) RDBMS, a slow (or expensive) webservice your organization has >>>>>>>> to call frequently, etc.. Or it's useful even as a write-through cache >>>>>>>> on a similar service, maybe internal but not able to handle the high >>>>>>>> variation of load spikes which Infinispan can handle better. >>>>>>>> Finally, a great use case is to have a consistent backup of all your >>>>>>>> data-grid content, possibly in some "reference" form such as JPA >>>>>>>> mapped entities.
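
As a concrete picture of the first shared-store use case above (a cache in front of a single shared database), a minimal sketch of the load/write path such a store would implement; the SharedStore interface and the table layout are strawmen for this example only, not the current CacheStore SPI:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Strawman contract for a shared store, for discussion purposes only.
    interface SharedStore<K, V> {
       V load(K key) throws SQLException;
       void write(K key, V value) throws SQLException;
       void delete(K key) throws SQLException;
    }

    // Every node points at the same database, so the store must tolerate
    // concurrent access from the whole cluster and must never wipe state on boot.
    class JdbcSharedStore implements SharedStore<String, String> {
       private final DataSource ds;

       JdbcSharedStore(DataSource ds) { this.ds = ds; }

       public String load(String key) throws SQLException {
          try (Connection c = ds.getConnection();
               PreparedStatement ps = c.prepareStatement("SELECT v FROM entries WHERE k = ?")) {
             ps.setString(1, key);
             try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
             }
          }
       }

       public void write(String key, String value) throws SQLException {
          // MERGE is the H2-style upsert; the real statement depends on the database.
          try (Connection c = ds.getConnection();
               PreparedStatement ps = c.prepareStatement(
                     "MERGE INTO entries (k, v) KEY(k) VALUES (?, ?)")) {
             ps.setString(1, key);
             ps.setString(2, value);
             ps.executeUpdate();
          }
       }

       public void delete(String key) throws SQLException {
          try (Connection c = ds.getConnection();
               PreparedStatement ps = c.prepareStatement("DELETE FROM entries WHERE k = ?")) {
             ps.setString(1, key);
             ps.executeUpdate();
          }
       }
    }
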
>>>>>>>> >>>>>>>> # Benefits of a Non-Shared >>>>>>>> A non-shared CacheStore implementor should be able to take advantage >>>>>>>> of *its purpose*, among the big ones I see: >>>>>>>> - Exclusive usage -> locking of a specific entry can be handled at >>>>>>>> data-container level, which can simplify quite some internal code. >>>>>>>> - Reliability -> since a clustered node needs to wipe its state at >>>>>>>> reboot (after a crash), it's much simpler to code any such CacheStore >>>>>>>> to avoid any form of disk sync or persistence guarantees. >>>>>>>> - Encoding format -> this can be controlled entirely by Infinispan, >>>>>>>> with no need to keep factors like rolling-upgrade-compatible encodings >>>>>>>> in mind. JBoss Marshalling would be good enough, or some >>>>>>>> implementations might not need to serialize at all. >>>>>>>> >>>>>>>> Our non-shared CacheStore implementation(s) could take advantage of >>>>>>>> lower-level, more complex code optimisations and interfaces, as users >>>>>>>> would rarely want to customize one of these, while the use case of >>>>>>>> mapping data to a shared service needs a more user-friendly SPI so as to >>>>>>>> keep it simple to plug in custom stores: custom data formats, custom >>>>>>>> connectors, and some help in implementing concurrency correctly. >>>>>>>> Proper Transaction integration for the CacheStore has been on our >>>>>>>> wishlist for some time too; I suspect that accepting that we have been >>>>>>>> mixing up two different things under the same name so far would make it >>>>>>>> simpler to implement further improvements such as transactions: the >>>>>>>> way to do such a thing is very different in each of these use cases, >>>>>>>> so it would help at least to implement it on a subset first, or maybe >>>>>>>> only if it turns out there's no need for such things in the context of >>>>>>>> the local-only-dedicated "swapfile". >>>>>>>> >>>>>>>> # Mixed types should be killed >>>>>>>> I'm aware that some of our current implementations _could_ work both as >>>>>>>> shared or non-shared, for example the JDBC or JPACacheStore or the >>>>>>>> Remote CacheStore.. but in most cases it doesn't make much sense. Why >>>>>>>> would you ever want to use the JPACacheStore if not to share data with >>>>>>>> a _shared_ database? >>>>>>>> >>>>>>>> We should take such options away, and by doing so focus on the use >>>>>>>> cases which actually matter and simplify the implementations and >>>>>>>> improve the configuration validations. >>>>>>>> >>>>>>>> If ever a compelling storage technology is identified which we'd like to >>>>>>>> offer as an option for both shared or non-shared, I would still >>>>>>>> recommend making two different implementations, as there certainly are >>>>>>>> different requirements and assumptions when coding such a thing. >>>>>>>> >>>>>>>> Not least, I would very much like to see a default local CacheStore: >>>>>>>> picking one for local "emergency swapping" should be a no-brainer for >>>>>>>> users; we could set one up by default and not bother newcomers with >>>>>>>> complex choices. >>>>>>>> >>>>>>>> If we simplify the requirements of such a thing, it should be easy to >>>>>>>> write one on standard Java NIO2 APIs and get rid of the complexities of >>>>>>>> maintaining the native integration with things like LevelDB, not least >>>>>>>> the inefficiency of Java making such native calls.
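
To back the NIO2 point just made: a deliberately naive sketch of a local-only store built purely on java.nio.file, one file per entry. A real implementation would need batching, indexing and space management, but the point is that nothing native is required:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Base64;

    // Local-only: the directory is private to this node, may be wiped at boot,
    // and needs no crash-proof durability guarantees (see "Reliability" above).
    class NioLocalStore {
       private final Path dir;

       NioLocalStore(Path dir) throws IOException {
          this.dir = Files.createDirectories(dir);
       }

       private Path fileFor(byte[] key) {
          // URL-safe Base64 turns arbitrary binary keys into valid file names.
          return dir.resolve(Base64.getUrlEncoder().withoutPadding().encodeToString(key));
       }

       void write(byte[] key, byte[] value) {
          try {
             Files.write(fileFor(key), value); // no fsync, on purpose
          } catch (IOException e) {
             throw new UncheckedIOException(e);
          }
       }

       byte[] load(byte[] key) {
          try {
             Path f = fileFor(key);
             return Files.exists(f) ? Files.readAllBytes(f) : null;
          } catch (IOException e) {
             throw new UncheckedIOException(e);
          }
       }

       void delete(byte[] key) {
          try {
             Files.deleteIfExists(fileFor(key));
          } catch (IOException e) {
             throw new UncheckedIOException(e);
          }
       }
    }
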
>>>>>>>> >>>>>>>> Then as a second step, we should attack the other use case: backups; >>>>>>>> from a *purpose-driven perspective* I'd then see us revive the Cassandra >>>>>>>> integration; obviously as a shared-only option. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Sanne >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>> -- >>>>>>> Tristan Tarrant >>>>>>> Infinispan Lead >>>>>>> JBoss, a division of Red Hat >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Fri Aug 21 06:16:18 2015 From: galder at redhat.com (Galder Zamarreno) Date: Fri, 21 Aug 2015 06:16:18 -0400 (EDT) Subject: [infinispan-dev] New Functional Map API in Infinispan 8 - Introduction In-Reply-To: <967584384.16541578.1440152071121.JavaMail.zimbra@redhat.com> Message-ID: <2039468103.16541760.1440152178960.JavaMail.zimbra@redhat.com> Hi all, In Infinispan 8, we're introducing a new experimental Functional, Asynchronous, Lambda-based Map API. I've written a blog post doing an overall introduction of the API: http://blog.infinispan.org/2015/08/new-functional-map-api-in-infinispan-8.html Cheers, -- Galder Zamarreño Infinispan, Red Hat From sanne at infinispan.org Fri Aug 21 08:21:52 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 21 Aug 2015 13:21:52 +0100 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: <55D6EB12.5040203@redhat.com> References: <55ACBA42.5070507@redhat.com> <55C31D43.30106@redhat.com> <55D6EB12.5040203@redhat.com> Message-ID: +1 I like that plan; however, I don't have any problem with marker interfaces either. I remember the annotations used internally by Infinispan were (a long time ago) the cause of a very slow start, which was then fixed by indexing the annotations at compile time; loading this index as a resource at runtime has caused some unnecessary complexity in some modular environments. No biggie, just saying annotations have some tradeoffs too ;) On 21 August 2015 at 10:10, Tristan Tarrant wrote: > I've been thinking more about this issue, after talking with Sanne, and > here's my (possibly faulty) analysis: > > I don't think this is so dramatic or urgent that we need a solution > (i.e. a distinct SPI for embedded cachestores) in place by 8.0.
This is > something that we can design and introduce as a private-only SPI during > the 8.x series and migrate our stores to use it accordingly. Note that > such a SPI would be more closely tied to the DataContainer so it may not > even have a relationship with the PersistenceManager. > > What I would like to see in the current SPI for 8.0, however, is an > extensible way for cachestores to expose "capabilities" so that not only > can we prevent potentially broken configurations, but we can also > declare support for advanced functionality (shared, transactional, > schema-aware, etc). I'm not fond of marker-only interfaces (see > org.infinispan.persistence.spi.LocalOnlyCacheLoader), so I'd prefer an > annotation-based approach. > > Tristan > > On 06/08/2015 10:39, Radim Vansa wrote: >> I understand that shared cache stores will be more common to be >> implemented, I don't think that non-shared stores should be considered >> 'private interface'. But separating them would really give the >> oportunity to change this non-shared SPI more often if needed without >> breaking shared one. >> However, hot-glueing a new cool interface without referential >> implementation that supports transaction, solves the ton of issues >> described in [1] is not a wise move, IMO. And there's no time to >> implement this before 8.0.0.Final. >> >> Radim >> >> [1] >> https://github.com/infinispan/infinispan/wiki/Consistency-guarantees-in-Infinispan >> >> On 08/05/2015 11:57 PM, Sanne Grinovero wrote: >>> I don't doubt Radim's code :) but I'm pretty confident that even that >>> implementation is limited by the constraints of the general-purpose >>> API. >>> >>> For example it seems Bela will soon allow more flexibility in JGroups >>> regarding buffer representations. We need to commit on a stable API >>> for end user integrations (shared cachestore implementors), but we >>> also need to keep options open to soon play with other approaches. >>> >>> That's why I think this separation should be done before Infinispan >>> 8.0.0.Final even if I don't have a concrete proposal for how this >>> other API should look like: I don't presume to be able to anticipate >>> which API exactly will be best, but I think we can all see that we >>> will want to change that. There should be a private internal contract >>> which we can change even in micro versions without concerns of >>> compatibility, so to allow R&D progress in the most performance >>> sensitive areas w/o this being a problem for integrators and users. >>> >>> Better configuration validations are additional (strong) benefits: >>> we've seen lots of misunderstandings about which CacheStores / >>> configuration combinations are valid. >>> >>> Thanks, >>> Sanne >>> >>> On 5 August 2015 at 22:13, Dan Berindei wrote: >>>> On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero wrote: >>>>> On 20 July 2015 at 11:02, Dan Berindei wrote: >>>>>> Sanne, I think changing the cache store API is actually the most >>>>>> painful part, so we should only do it if we gain a concrete advantage >>>>>> from doing it. From a compatibility point of view, implementing a new >>>>>> interface vs implementing the same interface with completely different >>>>>> methods is just as bad. >>>>> Right, from that perspective it's a quite horrible proposal. >>>>> >>>>> But I think we can agree that only the "SharedCacheStore" deserves to >>>>> be considered an SPI, right? >>>>> That's the one people will normally customize to map stuff to other >>>>> stores one might have. 
>>>>> >>>>> I think it's important that beyond Infinispan 8.0 API's freeze, we can >>>>> make any change to the non-shared SPI >>>>> without affecting users who implement a custom shared cachestore. >>>>> >>>>> I highly doubt someone will implement a high-performance custom off >>>>> heap swap strategy, but if someone does he should contribute it and >>>>> will probably need to make integration level changes. >>>>> >>>>> We probably won't have the time to implement a new super efficient >>>>> local-only cachestore to replace the leveldb one, but I'd like to keep >>>>> the possibility open to do that beyond 8.0, *especially* without >>>>> breaking compatibility for other people. >>>> We already have a new super efficient local-only cachestore :) >>>> >>>> https://github.com/infinispan/infinispan/tree/master/persistence/soft-index >>>> >>>> >>>>> Sanne >>>>> >>>>> >>>>>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero wrote: >>>>>>> +1 for incremental changes.. >>>>>>> >>>>>>> I'd see the first step as defining two different interfaces; >>>>>>> essentially we need to choose two good names. >>>>>>> >>>>>>> Then we could have both interfaces still implement the same identical >>>>>>> methods, but go through each implementation and decide to "mark" it as >>>>>>> shared-only or never-shared. >>>>>>> >>>>>>> That would make it simpler to make concrete change proposals on each >>>>>>> of them and start taking some advantage from the split. I think you'll >>>>>>> need the two different interfaces to implement the validations you >>>>>>> mentioned. >>>>>>> >>>>>>> For Infinispan 8's goals, I'd be happy enough to keep the >>>>>>> "shared-only" interface quite similar to the current one, but mark the >>>>>>> never-shared one as a private or experimental SPI to allow ourselves >>>>>>> some more flexibility in performance oriented changes. >>>>>>> >>>>>>> Thanks, >>>>>>> Sanne >>>>>>> >>>>>>> On 20 July 2015 at 10:07, Tristan Tarrant wrote: >>>>>>>> Sanne, well written. >>>>>>>> Before actually implementing any of the optimizations/changes you >>>>>>>> mention, I think the lowest-hanging fruit we should grab now is just to >>>>>>>> add checks to all of our cachestores to actually throw an exception when >>>>>>>> they are being enabled in unsupported configurations. >>>>>>>> >>>>>>>> I've created [1] to get us started >>>>>>>> >>>>>>>> Tristan >>>>>>>> >>>>>>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>>>>>> >>>>>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>>>>>> I would like to propose a clear cut separation between our shared and >>>>>>>>> non-shared CacheStores, >>>>>>>>> in all terms such as: >>>>>>>>> - Configuration options >>>>>>>>> - Integration contracts (Split the CacheStore SPI) >>>>>>>>> - Implementations >>>>>>>>> - Terminology, to avoid any further confusion around valid >>>>>>>>> configurations and sensible architectures >>>>>>>>> >>>>>>>>> We have loads of examples of users who get in trouble by configuring >>>>>>>>> one incorrectly, but also there are plenty of efficiency improvements >>>>>>>>> we could take advantage of by clearly splitting the integration points >>>>>>>>> and the implementations in two categories. 
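
A small illustration of the kind of up-front check being proposed here (throwing at configuration time rather than failing mysteriously later); all names below are made up for the example:

    // Made-up configuration view, just to show the shape of the checks.
    interface StoreConfig {
       String name();
       boolean shared();       // the user marked the store as shared
       boolean localOnly();    // the implementation is a never-shared store
       boolean passivation();  // entries are moved (not copied) to the store
    }

    final class StoreConfigValidator {
       static void validate(StoreConfig cfg) {
          if (cfg.shared() && cfg.localOnly()) {
             throw new IllegalStateException(
                   "Store '" + cfg.name() + "' is local-only and cannot be shared");
          }
          if (cfg.shared() && cfg.passivation()) {
             // passivated entries exist only in the store; sharing that store
             // across nodes is the classic data-loss trap described in this thread
             throw new IllegalStateException(
                   "Store '" + cfg.name() + "' cannot combine passivation with shared");
          }
       }
    }
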
>>>>>>>>> >>>>>>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>>>>>> Infinispan is able to restore a consistent state after having stopped >>>>>>>>> a DIST cluster which passivated into non-shared CacheStore instances, >>>>>>>>> or even REPL clusters when they don't shutdown all at the same exact >>>>>>>>> time (and "exact same time" is a strange concept at least..). We need >>>>>>>>> to clarify the different options, tradeoffs and their consequences.. >>>>>>>>> to users and ourselves, as a clearly defined use case will avoid bugs >>>>>>>>> and simplify implementations. >>>>>>>>> >>>>>>>>> # The purpose of each >>>>>>>>> I think that people should use a non-shared (local?) CacheStore for >>>>>>>>> the sole purpose of expanding to storage capacity of each single >>>>>>>>> node.. be it because you don't have enough memory at all, or be it >>>>>>>>> because you prefer some extra safety margin because either your >>>>>>>>> estimates are complex, or maybe because we live in a real world were >>>>>>>>> the hashing function might not be perfect in practice. I hope we all >>>>>>>>> agree that Infinispan should be able to take such situations with at >>>>>>>>> worst a graceful performance degradatation, rather than complain >>>>>>>>> sending OOMs to the admin and setting the service on strike. >>>>>>>>> >>>>>>>>> A Shared CacheStore is useful for very different purposes; primarily >>>>>>>>> to implement a Cache on some other service - for example your (single, >>>>>>>>> shared) RDBMs, a slow (or expensive) webservice your organization has >>>>>>>>> to call frequently, etc.. Or it's useful even as a write-through cache >>>>>>>>> on a similar service, maybe internal but not able to handle the high >>>>>>>>> variation of load spikes which Infinsipan can handle better. >>>>>>>>> Finally, a great use case is to have a consistent backup of all your >>>>>>>>> data-grid content, possibly in some "reference" form such as JPA >>>>>>>>> mapped entities. >>>>>>>>> >>>>>>>>> # Benefits of a Non-Shared >>>>>>>>> A non-shared CacheStore implementor should be able to take advantage >>>>>>>>> of *its purpose*, among the big ones I see: >>>>>>>>> - Exclusive usage -> locking of a specific entry can be handled at >>>>>>>>> datacontainer level, can simplify quite some internal code. >>>>>>>>> - Reliability -> since a clustered node needs to wipe its state at >>>>>>>>> reboot (after a crash), it's much simpler to code any such CacheStore >>>>>>>>> to avoid any form of disk synch or persistance guarantees. >>>>>>>>> - Encoding format -> this can be controlled entirely by Infinispan, >>>>>>>>> and no need to take factors like rolling upgrade compatible encodings >>>>>>>>> in mind. JBoss Marshalling would be good enough, or some >>>>>>>>> implementations might not need to serialize at all. >>>>>>>>> >>>>>>>>> Our non-shared CacheStore implentation(s) could take advantage of >>>>>>>>> lower level more complex code optimisations and interfaces, as users >>>>>>>>> would rarely want to customize one of these, while the use case of >>>>>>>>> mapping data to a shared service needs a more user friendly SPI so to >>>>>>>>> keep it simple to plug in custom stores: custom data formats, custom >>>>>>>>> connectors, get some help in implementing concurrency correctly. 
>>>>>>>>> Proper Transaction integration for the CacheStore has been on our >>>>>>>>> wishlist for some time too, I suspect that accepting that we have been >>>>>>>>> mixing up two different things under a same name so far, would make it >>>>>>>>> simpler to implement further improvements such as transactions: the >>>>>>>>> way to do such a thing is very different in each of these use cases, >>>>>>>>> so it would help at least to implement it on a subset first, or maybe >>>>>>>>> only if it turns out there's no need for such things in the context of >>>>>>>>> the local-only-dedicated "swapfile". >>>>>>>>> >>>>>>>>> # Mixed types should be killed >>>>>>>>> I'm aware that some of our current implementations _could_ work both as >>>>>>>>> shared or non-shared, for example the JDBC or JPACacheStore or the >>>>>>>>> Remote Cachestore.. but in most cases it doesn't make much sense. Why >>>>>>>>> would you ever want to use the JPACacheStore if not to share data with >>>>>>>>> a _shared_ database? >>>>>>>>> >>>>>>>>> We should take such options away, and by doing so focus on the use >>>>>>>>> cases which actually matter and simplify the implementations and >>>>>>>>> improve the configuration validations. >>>>>>>>> >>>>>>>>> If ever a compelling storage technology is identified which we'd like to >>>>>>>>> offer as an option for both shared or non-shared, I would still >>>>>>>>> recommend to make two different implementations, as there certainly are >>>>>>>>> different requirements and assumptions when coding such a thing. >>>>>>>>> >>>>>>>>> Not least, I would very like to see a default local CacheStore: >>>>>>>>> picking one for local "emergency swapping" should be a no-brainer for >>>>>>>>> users; we could setup one by default and not bother newcomers with >>>>>>>>> complex choices. >>>>>>>>> >>>>>>>>> If we simplify the requirement of such a thing, it should be easy to >>>>>>>>> write one on standard Java NIO2 APIs and get rid of the complexities of >>>>>>>>> maintaining the native integration with things like LevelDB, not least >>>>>>>>> the inefficiency of Java to make such native calls. >>>>>>>>> >>>>>>>>> Then as a second step, we should attack the other use case: backups; >>>>>>>>> from a *purpose driven perspective* I'd then see us revive the Cassandra >>>>>>>>> integration; obviously as a shared-only option. 
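
For reference, this is roughly how the JPA store is wired up today for that shared-database case, which makes the "mixed type" ambiguity visible in configuration; the builder and method names are recalled from the 7.x/8.x programmatic API and may be slightly off:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.persistence.jpa.configuration.JpaStoreConfigurationBuilder;

    // Placeholder JPA entity, standing in for whatever "reference" form the data has.
    @javax.persistence.Entity
    class ExampleEntity {
       @javax.persistence.Id
       String id;
    }

    class JpaSharedStoreConfig {
       static Configuration build() {
          ConfigurationBuilder builder = new ConfigurationBuilder();
          builder.persistence()
                 .addStore(JpaStoreConfigurationBuilder.class)
                    .persistenceUnitName("org.example.pu") // placeholder persistence unit
                    .entityClass(ExampleEntity.class)
                    .shared(true); // only meaningful because the database itself is shared
          return builder.build();
       }
    }
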
>>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Sanne >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> >>>>>>>> -- >>>>>>>> Tristan Tarrant >>>>>>>> Infinispan Lead >>>>>>>> JBoss, a division of Red Hat >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavo at infinispan.org Sat Aug 22 08:04:34 2015 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Sat, 22 Aug 2015 13:04:34 +0100 Subject: [infinispan-dev] Infinispan 8.0.0.CR1 released! Message-ID: Dear community, It's my pleasure to announce the first release candidate of Infinispan 8! All details are on our blog: http://blog.infinispan.org/2015/08/infinispan-800cr1-is-out.html Cheers, Gustavo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150822/c784cce3/attachment.html From christian at sweazer.com Sun Aug 23 10:33:59 2015 From: christian at sweazer.com (Christian Beikov) Date: Sun, 23 Aug 2015 16:33:59 +0200 Subject: [infinispan-dev] Blue-Green deployment scenario Message-ID: <55D9D9D7.7000009@sweazer.com> Hello, I have been reading the rolling upgrade chapter[1] from the documentation and I have some questions. 1. The documentation states that in the target cluster, every cache that should be migrated, should use a CLI cache loader pointing to the source cluster. I suppose that this can only be configured via XML but not via the CLI or JMX? That would be bad because after a node restart the cache loader would be enabled again. 2. How would the JMX URL look like if I wanted to connect to a secured Wildfly over HTTP? I was thinking of jmx:http-remoting-jmx://USER:PASSWORD at HOST:PORT/CACHEMANAGER/CACHE 3. What do I need to do to rollback to the source cluster after switching a few nodes to the target cluster? Thanks in advance! Regards, Christian [1] http://infinispan.org/docs/7.2.x/user_guide/user_guide.html#_rolling_upgrades_for_infinispan_library_embedded_mode -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150823/e587dafd/attachment.html From galder at redhat.com Mon Aug 24 03:38:48 2015 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 24 Aug 2015 03:38:48 -0400 (EDT) Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: References: <55C31D43.30106@redhat.com> <55D6EB12.5040203@redhat.com> Message-ID: <2043960465.17436804.1440401928420.JavaMail.zimbra@redhat.com> ----- Original Message ----- > +1 > > I like that plan, however I don't have any problem with marker > interfaces either. > > I remember the annotations used internally by Infinispan were (a long > time ago) the cause for a very slow start, which was then fixed by > indexing the annotations at compile time; loading this index as a > resource at runtime time has caused some unnecessary complexity in > some modular environments. No biggie, just saying annotations have > some tradeoffs too ;) +1 to what Sanne says. Unless we're going to generate boiler-plate code at compile time from annotations (something WF has been doing), I would not like to extend our usage of annotations internally, since as Sanne says, it requires further annotation indexing and that's caused problems. Also, from a user POV, annotations might look useful for Java users, but for other JVM langs not using annotations, e.g. Clojure, they can be problematic. Clojure guys have been asking us to have interface-based listeners which we currently don't support (except for functional and jcache APIs) Cheers, > > > On 21 August 2015 at 10:10, Tristan Tarrant wrote: > > I've been thinking more about this issue, after talking with Sanne, and > > here's my (possibly faulty) analysis: > > > > I don't think this is so dramatic or urgent that we need a solution > > (i.e. a distinct SPI for embedded cachestores) in place by 8.0. This is > > something that we can design and introduce as a private-only SPI during > > the 8.x series and migrate our stores to use it accordingly. Note that > > such a SPI would be more closely tied to the DataContainer so it may not > > even have a relationship with the PersistenceManager. > > > > What I would like to see in the current SPI for 8.0, however, is an > > extensible way for cachestores to expose "capabilities" so that not only > > can we prevent potentially broken configurations, but we can also > > declare support for advanced functionality (shared, transactional, > > schema-aware, etc). I'm not fond of marker-only interfaces (see > > org.infinispan.persistence.spi.LocalOnlyCacheLoader), so I'd prefer an > > annotation-based approach. > > > > Tristan > > > > On 06/08/2015 10:39, Radim Vansa wrote: > >> I understand that shared cache stores will be more common to be > >> implemented, I don't think that non-shared stores should be considered > >> 'private interface'. But separating them would really give the > >> oportunity to change this non-shared SPI more often if needed without > >> breaking shared one. > >> However, hot-glueing a new cool interface without referential > >> implementation that supports transaction, solves the ton of issues > >> described in [1] is not a wise move, IMO. And there's no time to > >> implement this before 8.0.0.Final. 
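
To contrast with the annotation route, the marker-interface pattern argued for above, and for which LocalOnlyCacheLoader is the existing precedent, would look roughly like this; all names other than LocalOnlyCacheLoader are invented:

    // Marker interface: no methods, the type itself carries the capability.
    interface LocalOnly { }

    class OffHeapSwapStore implements LocalOnly { /* never-shared store */ }

    final class MarkerChecks {
       // A plain instanceof test: no annotation index to load at boot, and
       // trivially implementable from non-Java JVM languages such as Clojure.
       static void rejectSharedUse(Object store) {
          if (store instanceof LocalOnly) {
             throw new IllegalStateException(
                   store.getClass().getName() + " is local-only and must not be shared");
          }
       }
    }
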
> >> > >> Radim > >> > >> [1] > >> https://github.com/infinispan/infinispan/wiki/Consistency-guarantees-in-Infinispan > >> > >> On 08/05/2015 11:57 PM, Sanne Grinovero wrote: > >>> I don't doubt Radim's code :) but I'm pretty confident that even that > >>> implementation is limited by the constraints of the general-purpose > >>> API. > >>> > >>> For example it seems Bela will soon allow more flexibility in JGroups > >>> regarding buffer representations. We need to commit on a stable API > >>> for end user integrations (shared cachestore implementors), but we > >>> also need to keep options open to soon play with other approaches. > >>> > >>> That's why I think this separation should be done before Infinispan > >>> 8.0.0.Final even if I don't have a concrete proposal for how this > >>> other API should look like: I don't presume to be able to anticipate > >>> which API exactly will be best, but I think we can all see that we > >>> will want to change that. There should be a private internal contract > >>> which we can change even in micro versions without concerns of > >>> compatibility, so to allow R&D progress in the most performance > >>> sensitive areas w/o this being a problem for integrators and users. > >>> > >>> Better configuration validations are additional (strong) benefits: > >>> we've seen lots of misunderstandings about which CacheStores / > >>> configuration combinations are valid. > >>> > >>> Thanks, > >>> Sanne > >>> > >>> On 5 August 2015 at 22:13, Dan Berindei wrote: > >>>> On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero > >>>> wrote: > >>>>> On 20 July 2015 at 11:02, Dan Berindei wrote: > >>>>>> Sanne, I think changing the cache store API is actually the most > >>>>>> painful part, so we should only do it if we gain a concrete advantage > >>>>>> from doing it. From a compatibility point of view, implementing a new > >>>>>> interface vs implementing the same interface with completely different > >>>>>> methods is just as bad. > >>>>> Right, from that perspective it's a quite horrible proposal. > >>>>> > >>>>> But I think we can agree that only the "SharedCacheStore" deserves to > >>>>> be considered an SPI, right? > >>>>> That's the one people will normally customize to map stuff to other > >>>>> stores one might have. > >>>>> > >>>>> I think it's important that beyond Infinispan 8.0 API's freeze, we can > >>>>> make any change to the non-shared SPI > >>>>> without affecting users who implement a custom shared cachestore. > >>>>> > >>>>> I highly doubt someone will implement a high-performance custom off > >>>>> heap swap strategy, but if someone does he should contribute it and > >>>>> will probably need to make integration level changes. > >>>>> > >>>>> We probably won't have the time to implement a new super efficient > >>>>> local-only cachestore to replace the leveldb one, but I'd like to keep > >>>>> the possibility open to do that beyond 8.0, *especially* without > >>>>> breaking compatibility for other people. > >>>> We already have a new super efficient local-only cachestore :) > >>>> > >>>> https://github.com/infinispan/infinispan/tree/master/persistence/soft-index > >>>> > >>>> > >>>>> Sanne > >>>>> > >>>>> > >>>>>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero > >>>>>> wrote: > >>>>>>> +1 for incremental changes.. > >>>>>>> > >>>>>>> I'd see the first step as defining two different interfaces; > >>>>>>> essentially we need to choose two good names. 
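
Something like the following sketch is presumably what that first step would amount to; the two names below are placeholders, since picking the real ones is exactly the open question:

    // Step 1: same methods, two distinct types (placeholder names).
    interface CacheStoreBase<K, V> {
       V load(K key);
       void write(K key, V value);
       void delete(K key);
    }

    // Public, stable SPI: what users implement to map data to external systems.
    interface SharedCacheStore<K, V> extends CacheStoreBase<K, V> { }

    // Private/experimental SPI: free to evolve in micro releases for performance work.
    interface LocalCacheStore<K, V> extends CacheStoreBase<K, V> { }

    // Step 2: "marking" an existing implementation is then a one-line change,
    // and validations can dispatch on the declared type.
    class ExampleMarkedStore implements SharedCacheStore<String, Object> {
       public Object load(String key) { return null; /* ... */ }
       public void write(String key, Object value) { /* ... */ }
       public void delete(String key) { /* ... */ }
    }
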
> >>>>>>> > >>>>>>> Then we could have both interfaces still implement the same identical > >>>>>>> methods, but go through each implementation and decide to "mark" it > >>>>>>> as > >>>>>>> shared-only or never-shared. > >>>>>>> > >>>>>>> That would make it simpler to make concrete change proposals on each > >>>>>>> of them and start taking some advantage from the split. I think > >>>>>>> you'll > >>>>>>> need the two different interfaces to implement the validations you > >>>>>>> mentioned. > >>>>>>> > >>>>>>> For Infinispan 8's goals, I'd be happy enough to keep the > >>>>>>> "shared-only" interface quite similar to the current one, but mark > >>>>>>> the > >>>>>>> never-shared one as a private or experimental SPI to allow ourselves > >>>>>>> some more flexibility in performance oriented changes. > >>>>>>> > >>>>>>> Thanks, > >>>>>>> Sanne > >>>>>>> > >>>>>>> On 20 July 2015 at 10:07, Tristan Tarrant > >>>>>>> wrote: > >>>>>>>> Sanne, well written. > >>>>>>>> Before actually implementing any of the optimizations/changes you > >>>>>>>> mention, I think the lowest-hanging fruit we should grab now is just > >>>>>>>> to > >>>>>>>> add checks to all of our cachestores to actually throw an exception > >>>>>>>> when > >>>>>>>> they are being enabled in unsupported configurations. > >>>>>>>> > >>>>>>>> I've created [1] to get us started > >>>>>>>> > >>>>>>>> Tristan > >>>>>>>> > >>>>>>>> [1] https://issues.jboss.org/browse/ISPN-5617 > >>>>>>>> > >>>>>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: > >>>>>>>>> I would like to propose a clear cut separation between our shared > >>>>>>>>> and > >>>>>>>>> non-shared CacheStores, > >>>>>>>>> in all terms such as: > >>>>>>>>> - Configuration options > >>>>>>>>> - Integration contracts (Split the CacheStore SPI) > >>>>>>>>> - Implementations > >>>>>>>>> - Terminology, to avoid any further confusion around valid > >>>>>>>>> configurations and sensible architectures > >>>>>>>>> > >>>>>>>>> We have loads of examples of users who get in trouble by > >>>>>>>>> configuring > >>>>>>>>> one incorrectly, but also there are plenty of efficiency > >>>>>>>>> improvements > >>>>>>>>> we could take advantage of by clearly splitting the integration > >>>>>>>>> points > >>>>>>>>> and the implementations in two categories. > >>>>>>>>> > >>>>>>>>> Not least, it's a very common and dangerous pitfall to assume that > >>>>>>>>> Infinispan is able to restore a consistent state after having > >>>>>>>>> stopped > >>>>>>>>> a DIST cluster which passivated into non-shared CacheStore > >>>>>>>>> instances, > >>>>>>>>> or even REPL clusters when they don't shutdown all at the same > >>>>>>>>> exact > >>>>>>>>> time (and "exact same time" is a strange concept at least..). We > >>>>>>>>> need > >>>>>>>>> to clarify the different options, tradeoffs and their > >>>>>>>>> consequences.. > >>>>>>>>> to users and ourselves, as a clearly defined use case will avoid > >>>>>>>>> bugs > >>>>>>>>> and simplify implementations. > >>>>>>>>> > >>>>>>>>> # The purpose of each > >>>>>>>>> I think that people should use a non-shared (local?) CacheStore for > >>>>>>>>> the sole purpose of expanding to storage capacity of each single > >>>>>>>>> node.. be it because you don't have enough memory at all, or be it > >>>>>>>>> because you prefer some extra safety margin because either your > >>>>>>>>> estimates are complex, or maybe because we live in a real world > >>>>>>>>> were > >>>>>>>>> the hashing function might not be perfect in practice. 
I hope we > >>>>>>>>> all > >>>>>>>>> agree that Infinispan should be able to take such situations with > >>>>>>>>> at > >>>>>>>>> worst a graceful performance degradatation, rather than complain > >>>>>>>>> sending OOMs to the admin and setting the service on strike. > >>>>>>>>> > >>>>>>>>> A Shared CacheStore is useful for very different purposes; > >>>>>>>>> primarily > >>>>>>>>> to implement a Cache on some other service - for example your > >>>>>>>>> (single, > >>>>>>>>> shared) RDBMs, a slow (or expensive) webservice your organization > >>>>>>>>> has > >>>>>>>>> to call frequently, etc.. Or it's useful even as a write-through > >>>>>>>>> cache > >>>>>>>>> on a similar service, maybe internal but not able to handle the > >>>>>>>>> high > >>>>>>>>> variation of load spikes which Infinsipan can handle better. > >>>>>>>>> Finally, a great use case is to have a consistent backup of all > >>>>>>>>> your > >>>>>>>>> data-grid content, possibly in some "reference" form such as JPA > >>>>>>>>> mapped entities. > >>>>>>>>> > >>>>>>>>> # Benefits of a Non-Shared > >>>>>>>>> A non-shared CacheStore implementor should be able to take > >>>>>>>>> advantage > >>>>>>>>> of *its purpose*, among the big ones I see: > >>>>>>>>> - Exclusive usage -> locking of a specific entry can be handled > >>>>>>>>> at > >>>>>>>>> datacontainer level, can simplify quite some internal code. > >>>>>>>>> - Reliability -> since a clustered node needs to wipe its state > >>>>>>>>> at > >>>>>>>>> reboot (after a crash), it's much simpler to code any such > >>>>>>>>> CacheStore > >>>>>>>>> to avoid any form of disk synch or persistance guarantees. > >>>>>>>>> - Encoding format -> this can be controlled entirely by > >>>>>>>>> Infinispan, > >>>>>>>>> and no need to take factors like rolling upgrade compatible > >>>>>>>>> encodings > >>>>>>>>> in mind. JBoss Marshalling would be good enough, or some > >>>>>>>>> implementations might not need to serialize at all. > >>>>>>>>> > >>>>>>>>> Our non-shared CacheStore implentation(s) could take advantage of > >>>>>>>>> lower level more complex code optimisations and interfaces, as > >>>>>>>>> users > >>>>>>>>> would rarely want to customize one of these, while the use case of > >>>>>>>>> mapping data to a shared service needs a more user friendly SPI so > >>>>>>>>> to > >>>>>>>>> keep it simple to plug in custom stores: custom data formats, > >>>>>>>>> custom > >>>>>>>>> connectors, get some help in implementing concurrency correctly. > >>>>>>>>> Proper Transaction integration for the CacheStore has been on our > >>>>>>>>> wishlist for some time too, I suspect that accepting that we have > >>>>>>>>> been > >>>>>>>>> mixing up two different things under a same name so far, would make > >>>>>>>>> it > >>>>>>>>> simpler to implement further improvements such as transactions: the > >>>>>>>>> way to do such a thing is very different in each of these use > >>>>>>>>> cases, > >>>>>>>>> so it would help at least to implement it on a subset first, or > >>>>>>>>> maybe > >>>>>>>>> only if it turns out there's no need for such things in the context > >>>>>>>>> of > >>>>>>>>> the local-only-dedicated "swapfile". > >>>>>>>>> > >>>>>>>>> # Mixed types should be killed > >>>>>>>>> I'm aware that some of our current implementations _could_ work > >>>>>>>>> both as > >>>>>>>>> shared or non-shared, for example the JDBC or JPACacheStore or the > >>>>>>>>> Remote Cachestore.. but in most cases it doesn't make much sense. 
> >>>>>>>>> Why > >>>>>>>>> would you ever want to use the JPACacheStore if not to share data > >>>>>>>>> with > >>>>>>>>> a _shared_ database? > >>>>>>>>> > >>>>>>>>> We should take such options away, and by doing so focus on the use > >>>>>>>>> cases which actually matter and simplify the implementations and > >>>>>>>>> improve the configuration validations. > >>>>>>>>> > >>>>>>>>> If ever a compelling storage technology is identified which we'd > >>>>>>>>> like to > >>>>>>>>> offer as an option for both shared or non-shared, I would still > >>>>>>>>> recommend to make two different implementations, as there certainly > >>>>>>>>> are > >>>>>>>>> different requirements and assumptions when coding such a thing. > >>>>>>>>> > >>>>>>>>> Not least, I would very like to see a default local CacheStore: > >>>>>>>>> picking one for local "emergency swapping" should be a no-brainer > >>>>>>>>> for > >>>>>>>>> users; we could setup one by default and not bother newcomers with > >>>>>>>>> complex choices. > >>>>>>>>> > >>>>>>>>> If we simplify the requirement of such a thing, it should be easy > >>>>>>>>> to > >>>>>>>>> write one on standard Java NIO2 APIs and get rid of the > >>>>>>>>> complexities of > >>>>>>>>> maintaining the native integration with things like LevelDB, not > >>>>>>>>> least > >>>>>>>>> the inefficiency of Java to make such native calls. > >>>>>>>>> > >>>>>>>>> Then as a second step, we should attack the other use case: > >>>>>>>>> backups; > >>>>>>>>> from a *purpose driven perspective* I'd then see us revive the > >>>>>>>>> Cassandra > >>>>>>>>> integration; obviously as a shared-only option. > >>>>>>>>> > >>>>>>>>> Cheers, > >>>>>>>>> Sanne > >>>>>>>>> _______________________________________________ > >>>>>>>>> infinispan-dev mailing list > >>>>>>>>> infinispan-dev at lists.jboss.org > >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>>>>> > >>>>>>>> -- > >>>>>>>> Tristan Tarrant > >>>>>>>> Infinispan Lead > >>>>>>>> JBoss, a division of Red Hat > >>>>>>>> _______________________________________________ > >>>>>>>> infinispan-dev mailing list > >>>>>>>> infinispan-dev at lists.jboss.org > >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>>> _______________________________________________ > >>>>>>> infinispan-dev mailing list > >>>>>>> infinispan-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>> _______________________________________________ > >>>>>> infinispan-dev mailing list > >>>>>> infinispan-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > 
infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From vchepeli at redhat.com Mon Aug 24 04:04:34 2015 From: vchepeli at redhat.com (Vitalii Chepeliuk) Date: Mon, 24 Aug 2015 04:04:34 -0400 (EDT) Subject: [infinispan-dev] Hidden failures in the testsuite In-Reply-To: References: Message-ID: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com> Hey Sanne! Yep, you are right, ignoring the output is a BAD IDEA. I realized that it's difficult to look through all the logs manually, so we should probably write some parser in Python or bash to grep them, and put it into the bin/ folder with the other scripts. That way we can at least run the script after all the tests have run and analyze the output. And about "appear to be good": nobody knows why. It could be a TestNG/JUnit issue, as we mix them a lot. So this needs further discussion and analysis. Vitalii ----- Original Message ----- From: "Sanne Grinovero" To: "infinispan -Dev List" Sent: Monday, 10 August 2015 20:46:06 Subject: [infinispan-dev] Hidden failures in the testsuite Hi all, I just updated my local master fork and started the testsuite, as I sometimes do. It's great to see that the build was successful, and no tests *appeared* to have failed. But! Lazily scrolling up in the console, I see lots of exceptions which don't look intentional (I'm aware that some tests intentionally create error conditions). Also some tests are extremely verbose, which might be the reason for nobody noticing these. Some examples: - org.infinispan.it.compatibility.EmbeddedRestHotRodTest seems to log TRACE to the console (and probably the whole module) - CDI tests such as org.infinispan.cdi.InfinispanExtensionRemote seem to fail in great numbers because of some ClassNotFoundException(s) and/or ResourceLoadingException(s) - OSGi integration tests seem to be all broken by some invalid integration with Aries / Geronimo - OSGi integration tests dump a lot of unnecessary information to the build console - the Infinispan Query tests log lots of WARN too, around missing configuration properties and in some cases concerning exceptions; I'm pretty sure that I had resolved those in the past, it seems some refactorings were done w/o considering the log outputs. Please don't ignore the output; if it's too verbose to watch, that needs to be resolved too. I also monitor the "expected execution time" of some modules I'm interested in; that's been useful in some cases to figure out that there was some regression. One big question: why is it that so many tests "appear to be good" but are actually broken? I would like to understand that. Thanks, Sanne _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavo at infinispan.org Mon Aug 24 04:59:48 2015 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 24 Aug 2015 09:59:48 +0100 Subject: [infinispan-dev] Infinispan 8.0.0.CR1 released! In-Reply-To: References: Message-ID: Apologies for the incorrect rendering of the link below, here's the correct destination: http://blog.infinispan.org/2015/08/infinispan-800cr1-is-out.html Cheers, Gustavo On Sat, Aug 22, 2015 at 1:04 PM, Gustavo Fernandes wrote: > Dear community, > > It's my pleasure to announce the first release candidate of Infinispan 8!
> > All details are on our blog: > http://blog.infinispan.org/2015/08/infinispan-800cr1-is-out.html > > > Cheers, > Gustavo > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150824/6cf5876c/attachment.html From ttarrant at redhat.com Mon Aug 24 05:55:41 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 24 Aug 2015 11:55:41 +0200 Subject: [infinispan-dev] Shared vs Non-Shared CacheStores In-Reply-To: <2043960465.17436804.1440401928420.JavaMail.zimbra@redhat.com> References: <55C31D43.30106@redhat.com> <55D6EB12.5040203@redhat.com> <2043960465.17436804.1440401928420.JavaMail.zimbra@redhat.com> Message-ID: <55DAEA1D.1060006@redhat.com> Actually, can't LocalOnlyCacheLoader already be used for this purpose ? Tristan On 24/08/2015 09:38, Galder Zamarreno wrote: > ----- Original Message ----- >> +1 >> >> I like that plan, however I don't have any problem with marker >> interfaces either. >> >> I remember the annotations used internally by Infinispan were (a long >> time ago) the cause for a very slow start, which was then fixed by >> indexing the annotations at compile time; loading this index as a >> resource at runtime time has caused some unnecessary complexity in >> some modular environments. No biggie, just saying annotations have >> some tradeoffs too ;) > > +1 to what Sanne says. Unless we're going to generate boiler-plate code at compile time from annotations (something WF has been doing), I would not like to extend our usage of annotations internally, since as Sanne says, it requires further annotation indexing and that's caused problems. > > Also, from a user POV, annotations might look useful for Java users, but for other JVM langs not using annotations, e.g. Clojure, they can be problematic. Clojure guys have been asking us to have interface-based listeners which we currently don't support (except for functional and jcache APIs) > > Cheers, > >> >> >> On 21 August 2015 at 10:10, Tristan Tarrant wrote: >>> I've been thinking more about this issue, after talking with Sanne, and >>> here's my (possibly faulty) analysis: >>> >>> I don't think this is so dramatic or urgent that we need a solution >>> (i.e. a distinct SPI for embedded cachestores) in place by 8.0. This is >>> something that we can design and introduce as a private-only SPI during >>> the 8.x series and migrate our stores to use it accordingly. Note that >>> such a SPI would be more closely tied to the DataContainer so it may not >>> even have a relationship with the PersistenceManager. >>> >>> What I would like to see in the current SPI for 8.0, however, is an >>> extensible way for cachestores to expose "capabilities" so that not only >>> can we prevent potentially broken configurations, but we can also >>> declare support for advanced functionality (shared, transactional, >>> schema-aware, etc). I'm not fond of marker-only interfaces (see >>> org.infinispan.persistence.spi.LocalOnlyCacheLoader), so I'd prefer an >>> annotation-based approach. >>> >>> Tristan >>> >>> On 06/08/2015 10:39, Radim Vansa wrote: >>>> I understand that shared cache stores will be more common to be >>>> implemented, I don't think that non-shared stores should be considered >>>> 'private interface'. But separating them would really give the >>>> oportunity to change this non-shared SPI more often if needed without >>>> breaking shared one. 
>>>> However, hot-glueing a new cool interface without referential >>>> implementation that supports transaction, solves the ton of issues >>>> described in [1] is not a wise move, IMO. And there's no time to >>>> implement this before 8.0.0.Final. >>>> >>>> Radim >>>> >>>> [1] >>>> https://github.com/infinispan/infinispan/wiki/Consistency-guarantees-in-Infinispan >>>> >>>> On 08/05/2015 11:57 PM, Sanne Grinovero wrote: >>>>> I don't doubt Radim's code :) but I'm pretty confident that even that >>>>> implementation is limited by the constraints of the general-purpose >>>>> API. >>>>> >>>>> For example it seems Bela will soon allow more flexibility in JGroups >>>>> regarding buffer representations. We need to commit on a stable API >>>>> for end user integrations (shared cachestore implementors), but we >>>>> also need to keep options open to soon play with other approaches. >>>>> >>>>> That's why I think this separation should be done before Infinispan >>>>> 8.0.0.Final even if I don't have a concrete proposal for how this >>>>> other API should look like: I don't presume to be able to anticipate >>>>> which API exactly will be best, but I think we can all see that we >>>>> will want to change that. There should be a private internal contract >>>>> which we can change even in micro versions without concerns of >>>>> compatibility, so to allow R&D progress in the most performance >>>>> sensitive areas w/o this being a problem for integrators and users. >>>>> >>>>> Better configuration validations are additional (strong) benefits: >>>>> we've seen lots of misunderstandings about which CacheStores / >>>>> configuration combinations are valid. >>>>> >>>>> Thanks, >>>>> Sanne >>>>> >>>>> On 5 August 2015 at 22:13, Dan Berindei wrote: >>>>>> On Fri, Jul 31, 2015 at 3:30 PM, Sanne Grinovero >>>>>> wrote: >>>>>>> On 20 July 2015 at 11:02, Dan Berindei wrote: >>>>>>>> Sanne, I think changing the cache store API is actually the most >>>>>>>> painful part, so we should only do it if we gain a concrete advantage >>>>>>>> from doing it. From a compatibility point of view, implementing a new >>>>>>>> interface vs implementing the same interface with completely different >>>>>>>> methods is just as bad. >>>>>>> Right, from that perspective it's a quite horrible proposal. >>>>>>> >>>>>>> But I think we can agree that only the "SharedCacheStore" deserves to >>>>>>> be considered an SPI, right? >>>>>>> That's the one people will normally customize to map stuff to other >>>>>>> stores one might have. >>>>>>> >>>>>>> I think it's important that beyond Infinispan 8.0 API's freeze, we can >>>>>>> make any change to the non-shared SPI >>>>>>> without affecting users who implement a custom shared cachestore. >>>>>>> >>>>>>> I highly doubt someone will implement a high-performance custom off >>>>>>> heap swap strategy, but if someone does he should contribute it and >>>>>>> will probably need to make integration level changes. >>>>>>> >>>>>>> We probably won't have the time to implement a new super efficient >>>>>>> local-only cachestore to replace the leveldb one, but I'd like to keep >>>>>>> the possibility open to do that beyond 8.0, *especially* without >>>>>>> breaking compatibility for other people. 
>>>>>> We already have a new super efficient local-only cachestore :) >>>>>> >>>>>> https://github.com/infinispan/infinispan/tree/master/persistence/soft-index >>>>>> >>>>>> >>>>>>> Sanne >>>>>>> >>>>>>> >>>>>>>> On Mon, Jul 20, 2015 at 12:41 PM, Sanne Grinovero >>>>>>>> wrote: >>>>>>>>> +1 for incremental changes.. >>>>>>>>> >>>>>>>>> I'd see the first step as defining two different interfaces; >>>>>>>>> essentially we need to choose two good names. >>>>>>>>> >>>>>>>>> Then we could have both interfaces still implement the same identical >>>>>>>>> methods, but go through each implementation and decide to "mark" it >>>>>>>>> as >>>>>>>>> shared-only or never-shared. >>>>>>>>> >>>>>>>>> That would make it simpler to make concrete change proposals on each >>>>>>>>> of them and start taking some advantage from the split. I think >>>>>>>>> you'll >>>>>>>>> need the two different interfaces to implement the validations you >>>>>>>>> mentioned. >>>>>>>>> >>>>>>>>> For Infinispan 8's goals, I'd be happy enough to keep the >>>>>>>>> "shared-only" interface quite similar to the current one, but mark >>>>>>>>> the >>>>>>>>> never-shared one as a private or experimental SPI to allow ourselves >>>>>>>>> some more flexibility in performance oriented changes. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Sanne >>>>>>>>> >>>>>>>>> On 20 July 2015 at 10:07, Tristan Tarrant >>>>>>>>> wrote: >>>>>>>>>> Sanne, well written. >>>>>>>>>> Before actually implementing any of the optimizations/changes you >>>>>>>>>> mention, I think the lowest-hanging fruit we should grab now is just >>>>>>>>>> to >>>>>>>>>> add checks to all of our cachestores to actually throw an exception >>>>>>>>>> when >>>>>>>>>> they are being enabled in unsupported configurations. >>>>>>>>>> >>>>>>>>>> I've created [1] to get us started >>>>>>>>>> >>>>>>>>>> Tristan >>>>>>>>>> >>>>>>>>>> [1] https://issues.jboss.org/browse/ISPN-5617 >>>>>>>>>> >>>>>>>>>> On 16/07/2015 15:32, Sanne Grinovero wrote: >>>>>>>>>>> I would like to propose a clear cut separation between our shared >>>>>>>>>>> and >>>>>>>>>>> non-shared CacheStores, >>>>>>>>>>> in all terms such as: >>>>>>>>>>> - Configuration options >>>>>>>>>>> - Integration contracts (Split the CacheStore SPI) >>>>>>>>>>> - Implementations >>>>>>>>>>> - Terminology, to avoid any further confusion around valid >>>>>>>>>>> configurations and sensible architectures >>>>>>>>>>> >>>>>>>>>>> We have loads of examples of users who get in trouble by >>>>>>>>>>> configuring >>>>>>>>>>> one incorrectly, but also there are plenty of efficiency >>>>>>>>>>> improvements >>>>>>>>>>> we could take advantage of by clearly splitting the integration >>>>>>>>>>> points >>>>>>>>>>> and the implementations in two categories. >>>>>>>>>>> >>>>>>>>>>> Not least, it's a very common and dangerous pitfall to assume that >>>>>>>>>>> Infinispan is able to restore a consistent state after having >>>>>>>>>>> stopped >>>>>>>>>>> a DIST cluster which passivated into non-shared CacheStore >>>>>>>>>>> instances, >>>>>>>>>>> or even REPL clusters when they don't shutdown all at the same >>>>>>>>>>> exact >>>>>>>>>>> time (and "exact same time" is a strange concept at least..). We >>>>>>>>>>> need >>>>>>>>>>> to clarify the different options, tradeoffs and their >>>>>>>>>>> consequences.. >>>>>>>>>>> to users and ourselves, as a clearly defined use case will avoid >>>>>>>>>>> bugs >>>>>>>>>>> and simplify implementations. >>>>>>>>>>> >>>>>>>>>>> # The purpose of each >>>>>>>>>>> I think that people should use a non-shared (local?) 
>>>>>>>>>>> I think that people should use a non-shared (local?) CacheStore for the sole purpose of expanding the storage capacity of each single node.. be it because you don't have enough memory at all, or because you prefer some extra safety margin, either because your estimates are complex or because we live in a real world where the hashing function might not be perfect in practice. I hope we all agree that Infinispan should be able to take such situations with, at worst, a graceful performance degradation, rather than complaining by sending OOMs to the admin and setting the service on strike.
>>>>>>>>>>>
>>>>>>>>>>> A Shared CacheStore is useful for very different purposes; primarily to implement a Cache on some other service - for example your (single, shared) RDBMS, a slow (or expensive) web service your organization has to call frequently, etc.. Or it's useful even as a write-through cache on a similar service, maybe internal but not able to handle the high variation of load spikes which Infinispan can handle better. Finally, a great use case is to have a consistent backup of all your data-grid content, possibly in some "reference" form such as JPA-mapped entities.
>>>>>>>>>>>
>>>>>>>>>>> # Benefits of a Non-Shared
>>>>>>>>>>> A non-shared CacheStore implementor should be able to take advantage of *its purpose*; among the big ones I see:
>>>>>>>>>>> - Exclusive usage -> locking of a specific entry can be handled at the data container level, which can simplify quite some internal code.
>>>>>>>>>>> - Reliability -> since a clustered node needs to wipe its state at reboot (after a crash), it's much simpler to code any such CacheStore to avoid any form of disk sync or persistence guarantees.
>>>>>>>>>>> - Encoding format -> this can be controlled entirely by Infinispan, with no need to keep factors like rolling-upgrade-compatible encodings in mind. JBoss Marshalling would be good enough, or some implementations might not need to serialize at all.
>>>>>>>>>>>
>>>>>>>>>>> Our non-shared CacheStore implementation(s) could take advantage of lower-level, more complex code optimisations and interfaces, as users would rarely want to customize one of these, while the use case of mapping data to a shared service needs a more user-friendly SPI so as to keep it simple to plug in custom stores: custom data formats, custom connectors, and some help in implementing concurrency correctly.
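A minimal sketch of what such a split could look like. All names below are placeholders invented for illustration, not the actual Infinispan SPI, and the operations are reduced to the bare minimum:

    // Hypothetical contracts; names and signatures are illustrative only,
    // not the real org.infinispan.persistence.spi types.
    interface BaseStore<K, V> {
        void write(K key, V value);
        V load(K key);
        boolean delete(K key);
    }

    // Shared stores map entries onto an external system (RDBMS, web
    // service, ...); this would stay a stable, user-friendly SPI for
    // integrators to implement.
    interface SharedStore<K, V> extends BaseStore<K, V> {
    }

    // Non-shared (local) stores only extend a single node's capacity;
    // marked private/experimental so internals may change in micro releases.
    interface LocalStore<K, V> extends BaseStore<K, V> {
        // a node wipes local state after a crash, so no durability guarantees
        void clearOnRestart();
    }

Keeping the two contracts separate is what would later let the never-shared side adopt lower-level buffer representations without breaking shared-store integrators.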
>>>>>>>>>>> Proper transaction integration for the CacheStore has been on our wishlist for some time too. I suspect that accepting that we have been mixing up two different things under the same name so far would make it simpler to implement further improvements such as transactions: the way to do such a thing is very different in each of these use cases, so it would help at least to implement it on a subset first, or perhaps only there, if it turns out there's no need for such things in the context of the local-only, dedicated "swapfile".
>>>>>>>>>>>
>>>>>>>>>>> # Mixed types should be killed
>>>>>>>>>>> I'm aware that some of our current implementations _could_ work both as shared or non-shared, for example the JDBC or JPACacheStore or the Remote CacheStore.. but in most cases it doesn't make much sense. Why would you ever want to use the JPACacheStore if not to share data with a _shared_ database?
>>>>>>>>>>>
>>>>>>>>>>> We should take such options away, and by doing so focus on the use cases which actually matter, simplify the implementations and improve the configuration validations.
>>>>>>>>>>>
>>>>>>>>>>> If ever a compelling storage technology is identified which we'd like to offer as an option for both shared and non-shared use, I would still recommend making two different implementations, as there certainly are different requirements and assumptions when coding such a thing.
>>>>>>>>>>>
>>>>>>>>>>> Not least, I would very much like to see a default local CacheStore: picking one for local "emergency swapping" should be a no-brainer for users; we could set one up by default and not bother newcomers with complex choices.
>>>>>>>>>>>
>>>>>>>>>>> If we simplify the requirements of such a thing, it should be easy to write one on the standard Java NIO2 APIs and get rid of the complexities of maintaining the native integration with things like LevelDB, not least the inefficiency of making such native calls from Java.
>>>>>>>>>>>
>>>>>>>>>>> Then as a second step, we should attack the other use case: backups; from a *purpose-driven perspective* I'd then see us revive the Cassandra integration, obviously as a shared-only option.
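The configuration-validation side of this could be as simple as the sketch below; the validator class and enum are invented for illustration (the concrete work is tracked in ISPN-5617), but the idea is just to fail fast at cache construction time rather than risk silent data loss later:

    // Illustrative only: these names are made up, not Infinispan's classes.
    public final class StoreConfigValidator {

        public enum StoreKind { SHARED_ONLY, NEVER_SHARED }

        public static void validate(StoreKind kind, boolean configuredAsShared) {
            // e.g. a JPA store only makes sense against a shared database
            if (kind == StoreKind.SHARED_ONLY && !configuredAsShared) {
                throw new IllegalStateException(
                        "this store must be configured as shared");
            }
            // e.g. a local "swapfile" store must never be marked shared
            if (kind == StoreKind.NEVER_SHARED && configuredAsShared) {
                throw new IllegalStateException(
                        "this store must not be configured as shared");
            }
        }
    }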
>>>>>>>>>>>
>>>>>>>>>>> Cheers,
>>>>>>>>>>> Sanne
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From ttarrant at redhat.com Mon Aug 24 11:06:09 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 24 Aug 2015 17:06:09 +0200
Subject: [infinispan-dev] Weekly Infinispan IRC Meeting minutes 2015-08-24
Message-ID: <55DB32E1.6050004@redhat.com>

Hi all,

here are the minutes from this week's #infinispan IRC meeting:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-08-24-14.01.log.html

Enjoy

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From spaulger at codezen.co.uk Thu Aug 27 06:49:26 2015
From: spaulger at codezen.co.uk (Simon Paulger)
Date: Thu, 27 Aug 2015 11:49:26 +0100
Subject: [infinispan-dev] Redis infinispan cache store
In-Reply-To: <55BB2B48.5080802@redhat.com>
References: <55B8A148.1090709@redhat.com> <55BB2B48.5080802@redhat.com>
Message-ID: 

Hi,

This is now done and is available to see here: https://github.com/spaulg/infinispan-cachestore-redis. I used the remote store within the Infinispan repo as a reference.

There are some points that may be worth further discussion. They are:

1. The cache loader size method return type is limited to int.
Redis servers can hold far more than Integer.MAX_VALUE entries, and the Jedis client method for counting items on a Redis server returns a long for each server, which must additionally be totalled across servers when using a Redis cluster topology. To get around this I am checking for a total over Integer.MAX_VALUE, logging a warning, and then returning Integer.MAX_VALUE (a sketch follows at the end of this message).

2. Redis handles expiration. I am using lifespan to immediately set the expiration of the cache entry in Redis, and when that lifespan is reached the item is immediately purged by Redis itself. This means there is no idle time, and there is no purge method implementation.

3. A few unit tests around expiration had to be disabled, as they require manipulating the clock. Since expiration is handled by Redis, I would have to change the system time to force Redis to expire entries. For now, they are just disabled.

I have built it against the Jedis client. I also tried two other clients, Lettuce and Redisson, but felt that Jedis gave the best implementation because a) it didn't try to do too much (by this I mean running background monitoring threads that try to detect failure and perform automatic failover of Redis slaves) and b) it had all the API features I needed to make the implementation work efficiently.

Jedis supports 3 main modes of operation: single server, Redis Sentinel and Redis Cluster. The Redis versions that should be supported are 2.8+ and 3.0+.

I haven't tested this beyond the unit tests distributed with Infinispan, which start full Redis servers in single-server, Sentinel and Cluster configurations to run the tests, but I am hoping to start working on integration into WildFly 10, which I can test with a cache container for web sessions and a simple counter web app.

On 31 July 2015 at 09:01, Tristan Tarrant wrote:
> Let's start with a separate repo to begin with. As for third-party clients, choose the one you feel is the best.
>
> Thanks for looking into this
>
> Tristan
>
> On 29/07/2015 20:31, Simon Paulger wrote:
> > Hi Tristan,
> >
> > With regards to project repositories, should I add the code to a fork of the main Infinispan project or create a standalone repository as per hbase, jdbm, etc?
> >
> > And I presume there are no objections to using a third-party Redis client? I was thinking Jedis (https://github.com/xetorthio/jedis - MIT license, currently maintained).
> >
> > Thanks,
> > Simon
> >
> > On 29 July 2015 at 10:47, Tristan Tarrant wrote:
> >
> >     Yes, we would be very interested. Check out the Infinispan cachestore archetype [1] to get things started, and ask here or on IRC on #infinispan for help, if you need more information.
> >
> >     Tristan
> >
> >     [1] https://github.com/infinispan/infinispan-cachestore-archetype
> >
> >     On 28/07/2015 22:43, Simon Paulger wrote:
> >     > Hi,
> >     >
> >     > I'm interested in developing Infinispan integration with Redis for use in JBoss. Before working on JBoss, I first need to add the capability to Infinispan itself.
> >     >
> >     > Is this an enhancement that the Infinispan community would be interested in?
> >     >
> >     > Regards,
> >     > Simon

Regards,
Simon
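To make points 1 and 2 concrete, here is a minimal sketch of the size clamping and the lifespan-to-TTL mapping. It assumes Jedis-style calls (dbSize() returning a Long per server, psetex() setting a millisecond TTL); the class and method names are illustrative, not the actual store code:

    import java.util.logging.Logger;
    import redis.clients.jedis.Jedis;

    // Illustrative sketch of two of the points discussed above, not the
    // real infinispan-cachestore-redis implementation.
    public class RedisStoreSketch {
        private static final Logger log =
                Logger.getLogger(RedisStoreSketch.class.getName());

        // Point 1: total the per-server long counts, then clamp to int
        // because the cache loader SPI's size() returns an int.
        public int size(Iterable<Jedis> servers) {
            long total = 0;
            for (Jedis server : servers) {
                total += server.dbSize();  // each server reports a long
            }
            if (total > Integer.MAX_VALUE) {
                log.warning("store holds more than Integer.MAX_VALUE entries; size() is clamped");
                return Integer.MAX_VALUE;
            }
            return (int) total;
        }

        // Point 2: map the entry's lifespan onto a Redis TTL; Redis then
        // purges the entry itself, so no purge() implementation is needed.
        public void write(Jedis server, String key, String value, long lifespanMillis) {
            if (lifespanMillis > 0) {
                server.psetex(key, lifespanMillis, value);
            } else {
                server.set(key, value);  // no expiration
            }
        }
    }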
_______________________________________________
infinispan-dev mailing list
infinispan-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

From ttarrant at redhat.com Thu Aug 27 07:21:24 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Thu, 27 Aug 2015 13:21:24 +0200
Subject: [infinispan-dev] Redis infinispan cache store
In-Reply-To: 
References: <55B8A148.1090709@redhat.com> <55BB2B48.5080802@redhat.com>
Message-ID: <55DEF2B4.80506@redhat.com>

Thank you Simon, this is excellent news !

On 27/08/2015 12:49, Simon Paulger wrote:
> Hi,
>
> This is now done and is available to see here: https://github.com/spaulg/infinispan-cachestore-redis. I used the remote store within the Infinispan repo as a reference.

I see there is no license associated with that repo; I think you should add one. Would you like the repo to become officially owned by the Infinispan organization?

> There are some points that may be worth further discussion. They are:
> 1. The cache loader size method return type is limited to int. Redis servers can hold far more than Integer.MAX_VALUE entries, and the Jedis client method for counting items on a Redis server returns a long for each server, which must additionally be totalled across servers when using a Redis cluster topology. To get around this I am checking for a total over Integer.MAX_VALUE, logging a warning, and then returning Integer.MAX_VALUE.

This is a last-minute change we could do in Infinispan 8's AdvancedCacheLoader.

> 2. Redis handles expiration. I am using lifespan to immediately set the expiration of the cache entry in Redis, and when that lifespan is reached the item is immediately purged by Redis itself. This means there is no idle time, and there is no purge method implementation.

Good :)

> 3. A few unit tests around expiration had to be disabled, as they require manipulating the clock. Since expiration is handled by Redis, I would have to change the system time to force Redis to expire entries. For now, they are just disabled.

Absolutely reasonable.

> I have built it against the Jedis client. I also tried two other clients, Lettuce and Redisson, but felt that Jedis gave the best implementation because a) it didn't try to do too much (by this I mean running background monitoring threads that try to detect failure and perform automatic failover of Redis slaves) and b) it had all the API features I needed to make the implementation work efficiently.
>
> Jedis supports 3 main modes of operation: single server, Redis Sentinel and Redis Cluster. The Redis versions that should be supported are 2.8+ and 3.0+.
>
> I haven't tested this beyond the unit tests distributed with Infinispan, which start full Redis servers in single-server, Sentinel and Cluster configurations to run the tests, but I am hoping to start working on integration into WildFly 10, which I can test with a cache container for web sessions and a simple counter web app.

I will take a look at the code.

Thanks again for this awesome contribution.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From spaulger at codezen.co.uk Thu Aug 27 08:12:19 2015
From: spaulger at codezen.co.uk (Simon Paulger)
Date: Thu, 27 Aug 2015 13:12:19 +0100
Subject: [infinispan-dev] Redis infinispan cache store
In-Reply-To: <55DEF2B4.80506@redhat.com>
References: <55B8A148.1090709@redhat.com> <55BB2B48.5080802@redhat.com> <55DEF2B4.80506@redhat.com>
Message-ID: 

I have added a license as well as configuration snippets showing usage. I think it's probably best for Infinispan if the repo were transferred to the Infinispan org.

Looking forward to your feedback.

Thanks,
Simon

On 27 August 2015 at 12:21, Tristan Tarrant wrote:
> Thank you Simon, this is excellent news !
>
> On 27/08/2015 12:49, Simon Paulger wrote:
> > Hi,
> >
> > This is now done and is available to see here: https://github.com/spaulg/infinispan-cachestore-redis. I used the remote store within the Infinispan repo as a reference.
> [...]
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From rory.odonnell at oracle.com Fri Aug 28 12:50:47 2015
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Fri, 28 Aug 2015 17:50:47 +0100
Subject: [infinispan-dev] Early Access builds for JDK 8u66 b02 and JDK 9 b78 are available on java.net
Message-ID: <55E09167.6060905@oracle.com>

Hi Galder,

Early Access build for JDK 8u66 b02 is available on java.net; a summary of the changes is listed here.

Early Access build for JDK 9 b78 is available on java.net; a summary of the changes is listed here.

With respect to ongoing JDK 9 development, I'd like to draw your attention to the following requests for feedback on the relevant mailing lists.

*OpenJDK JarSigner API*
JDK 9 places more restrictions on calling sun.* public methods, but we know there are users calling sun.security.tools.jarsigner.Main to sign jar files. A new API is proposed for this very purpose in OpenJDK. Feedback on this API should be provided on the security-dev mailing list.

*RFC JEP: NIST SP 800-90A SecureRandom implementations*
Feedback on this draft JEP should be provided on the security-dev mailing list.

*Public API for internal Swing classes*
According to JEP 200: The Modular JDK, we expect that classes from internal packages (like sun.swing) will not be accessible. If you are using the internal Swing API and it is not possible to replace it with a public API, please provide feedback on the swing-dev mailing list.

If you haven't already subscribed to a list then please do so first, otherwise your message will be discarded as spam.

Finally, videos of presentations from the JVM Language Summit have been published at http://www.oracle.com/technetwork/java/javase/community/jlssessions-2015-2633029.html

Rgds, Rory

-- 
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland