From vjuranek at redhat.com Mon Nov 3 08:27:00 2014 From: vjuranek at redhat.com (Vojtech Juranek) Date: Mon, 03 Nov 2014 14:27 +0100 Subject: [infinispan-dev] Docker images now available for Infinispan Server In-Reply-To: References: Message-ID: <1993244.x590xSs3p8@localhost> Now there's also Docker image for library mode (WildFly + ISPN modules): https://registry.hub.docker.com/u/jboss/infinispan-modules/ On Sunday 26 October 2014 10:27:10 Sanne Grinovero wrote: > https://twitter.com/marekgoldmann/status/526060068945817601 > > Thanks Marek! > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141103/d0b8e934/attachment.bin From ttarrant at redhat.com Mon Nov 3 11:08:57 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 03 Nov 2014 17:08:57 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-11-03 Message-ID: <5457A899.6070600@redhat.com> Read the logs here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-11-03-15.04.log.html Tristan From galder at redhat.com Mon Nov 3 11:34:47 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 3 Nov 2014 17:34:47 +0100 Subject: [infinispan-dev] PHP hot rod client In-Reply-To: References: Message-ID: <7BA62086-A294-4DA4-8BE5-E30167D001BA@redhat.com> Hi Albert, Thanks a lot for writing that PHP client! I?m not a PHP expert, but hopefully someone in the Infinispan team can have a look at. Radargun [1] is a benchmark framework we use to run performance tests of Infinispan. We also run tests for the Hot Rod client, but it?s very JVM centric, so not sure how you could add PHP there. In terms of Hot Rod/memcached performance perspective, Hot Rod should be slightly faster, but the big gains come when you deploy a cluster of Infinispan Servers since Hot Rod has more clever topology routing logic and it can update topology at runtime. Cheers, [1] https://github.com/radargun/radargun On 30 Oct 2014, at 11:42, Albert Bertram wrote: > Hi, > > A couple of years ago there were a few messages on this list about a potential PHP Hot Rod client. I haven't seen any further discussion of it, but I find myself in the same situation described before: I want to have a Drupal installation write cache data to Infinispan, and I'd prefer if it could do it via the Hot Rod protocol rather than the memcached protocol. > > I haven't seen any further evidence of the existence of a Hot Rod client native to PHP out on the open web, so I wrote a small wrapper around the Hot Rod C++ client which works for my purposes so far. The code is at https://github.com/bertrama/php-hotrod > > I wanted to send a note to the list to ask a couple questions: > > Would anyone else be interested in this php extension? > > Are there client-oriented benchmarks I should run? I looked around for some, but didn't find any. Specifically, I want to compare performance of this PHP Hot Rod client to the PHP memcached client when talking to the same Infinispan server. > > Thanks! > > Albert Bertram > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From rory.odonnell at oracle.com Mon Nov 3 14:28:21 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Mon, 03 Nov 2014 19:28:21 +0000 Subject: [infinispan-dev] JDK 9 Early Access with Project Jigsaw build b36 is available on java.net Message-ID: <5457D755.5090700@oracle.com> Hi Galder, JDK 9 Early Access with Project Jigsaw build b36 is available on java.net [1] The goal of Project Jigsaw [2] is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK. As described in JEP 220 [3], this build provides a new runtime image structure. For example, this new runtime image does not install an rt.jar file or a tools.jar file. Please refer to Project Jigsaw's updated project pages [2] & [4] and Mark Reinhold's announcement email [5] for further details. We are very interested in your experiences testing this build. Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list or else submit bug reports via bugs.java.com. Note: If you haven?t already subscribed to that mailing list then please do so first, otherwise your message will be discarded as spam. [1] https://jdk9.java.net/jigsaw/ [2] http://openjdk.java.net/projects/jigsaw/ [3] http://openjdk.java.net/jeps/220 [4] http://openjdk.java.net/projects/jigsaw/ea [5] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2014-November/003878.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland From rvansa at redhat.com Tue Nov 4 02:20:01 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 04 Nov 2014 08:20:01 +0100 Subject: [infinispan-dev] PHP hot rod client In-Reply-To: <7BA62086-A294-4DA4-8BE5-E30167D001BA@redhat.com> References: <7BA62086-A294-4DA4-8BE5-E30167D001BA@redhat.com> Message-ID: <54587E21.2060307@redhat.com> RadarGun can't benchmark anything outside JVM. You could create wrapper to PHP, but then you'd benchmark this layer as well. That's the reason we don't do C++ or .NET client benchmarks through RadarGun. Radim On 11/03/2014 05:34 PM, Galder Zamarre?o wrote: > Hi Albert, > > Thanks a lot for writing that PHP client! I?m not a PHP expert, but hopefully someone in the Infinispan team can have a look at. > > Radargun [1] is a benchmark framework we use to run performance tests of Infinispan. We also run tests for the Hot Rod client, but it?s very JVM centric, so not sure how you could add PHP there. > > In terms of Hot Rod/memcached performance perspective, Hot Rod should be slightly faster, but the big gains come when you deploy a cluster of Infinispan Servers since Hot Rod has more clever topology routing logic and it can update topology at runtime. > > Cheers, > > [1] https://github.com/radargun/radargun > > On 30 Oct 2014, at 11:42, Albert Bertram wrote: > >> Hi, >> >> A couple of years ago there were a few messages on this list about a potential PHP Hot Rod client. I haven't seen any further discussion of it, but I find myself in the same situation described before: I want to have a Drupal installation write cache data to Infinispan, and I'd prefer if it could do it via the Hot Rod protocol rather than the memcached protocol. >> >> I haven't seen any further evidence of the existence of a Hot Rod client native to PHP out on the open web, so I wrote a small wrapper around the Hot Rod C++ client which works for my purposes so far. The code is at https://github.com/bertrama/php-hotrod >> >> I wanted to send a note to the list to ask a couple questions: >> >> Would anyone else be interested in this php extension? >> >> Are there client-oriented benchmarks I should run? I looked around for some, but didn't find any. Specifically, I want to compare performance of this PHP Hot Rod client to the PHP memcached client when talking to the same Infinispan server. >> >> Thanks! >> >> Albert Bertram >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From bertrama at umich.edu Tue Nov 4 06:55:50 2014 From: bertrama at umich.edu (Albert Bertram) Date: Tue, 4 Nov 2014 06:55:50 -0500 Subject: [infinispan-dev] PHP hot rod client In-Reply-To: <54587E21.2060307@redhat.com> References: <7BA62086-A294-4DA4-8BE5-E30167D001BA@redhat.com> <54587E21.2060307@redhat.com> Message-ID: Thanks Galder, Radim, I've been browsing the RadarGun code this morning and I agree that it'll be difficult to use it directly for my benchmarks. That said, it's given me a good direction for which scenarios to run, and that's a fantastic place to be. Thanks again! Albert On Tue, Nov 4, 2014 at 2:20 AM, Radim Vansa wrote: > RadarGun can't benchmark anything outside JVM. You could create wrapper > to PHP, but then you'd benchmark this layer as well. That's the reason > we don't do C++ or .NET client benchmarks through RadarGun. > > Radim > > On 11/03/2014 05:34 PM, Galder Zamarre?o wrote: > > Hi Albert, > > > > Thanks a lot for writing that PHP client! I?m not a PHP expert, but > hopefully someone in the Infinispan team can have a look at. > > > > Radargun [1] is a benchmark framework we use to run performance tests of > Infinispan. We also run tests for the Hot Rod client, but it?s very JVM > centric, so not sure how you could add PHP there. > > > > In terms of Hot Rod/memcached performance perspective, Hot Rod should be > slightly faster, but the big gains come when you deploy a cluster of > Infinispan Servers since Hot Rod has more clever topology routing logic and > it can update topology at runtime. > > > > Cheers, > > > > [1] https://github.com/radargun/radargun > > > > On 30 Oct 2014, at 11:42, Albert Bertram wrote: > > > >> Hi, > >> > >> A couple of years ago there were a few messages on this list about a > potential PHP Hot Rod client. I haven't seen any further discussion of it, > but I find myself in the same situation described before: I want to have a > Drupal installation write cache data to Infinispan, and I'd prefer if it > could do it via the Hot Rod protocol rather than the memcached protocol. > >> > >> I haven't seen any further evidence of the existence of a Hot Rod > client native to PHP out on the open web, so I wrote a small wrapper around > the Hot Rod C++ client which works for my purposes so far. The code is at > https://github.com/bertrama/php-hotrod > >> > >> I wanted to send a note to the list to ask a couple questions: > >> > >> Would anyone else be interested in this php extension? > >> > >> Are there client-oriented benchmarks I should run? I looked around for > some, but didn't find any. Specifically, I want to compare performance of > this PHP Hot Rod client to the PHP memcached client when talking to the > same Infinispan server. > >> > >> Thanks! > >> > >> Albert Bertram > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141104/b293bffd/attachment-0001.html From galder at redhat.com Tue Nov 4 07:13:56 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 4 Nov 2014 13:13:56 +0100 Subject: [infinispan-dev] Infinispan 7.0.0.Final is out!! Message-ID: <56C0212E-59E2-4DFC-9738-94A9D19746BB@redhat.com> Hi all, We?ve just released Infinispan 7.0.0.Final with a lot of goodies in it :) Check the blog post for acknoweldgements and link to release notes: http://blog.infinispan.org/2014/11/infinispan-700final-is-out.html Thanks to all! Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Tue Nov 4 09:23:03 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 4 Nov 2014 09:23:03 -0500 Subject: [infinispan-dev] About size() In-Reply-To: References: <542E5E92.7060504@redhat.com> <3EA0122E-8293-49EB-8CB7-F67FA2E58532@redhat.com> <593A31AE-2C9C-4B90-9048-0EDBADCA1ADF@redhat.com> <8BAD2C28-ADC3-4D1E-8B5D-F0D17B35C83C@redhat.com> <543CC8F5.9000403@redhat.com> Message-ID: The various bulk operations for the Map interface are now implemented to utilize the entire cluster in the new Infinispan 7.0 Release [1]. Please find more information about the bulk operations at [2] [1] http://blog.infinispan.org/2014/11/infinispan-700final-is-out.html [2] http://blog.infinispan.org/2014/11/why-doesnt-mapsize-return-size-of.html On Tue, Oct 14, 2014 at 8:11 AM, William Burns wrote: > On Tue, Oct 14, 2014 at 3:33 AM, Dan Berindei wrote: >> >> >> On Tue, Oct 14, 2014 at 9:55 AM, Radim Vansa wrote: >>> >>> On 10/13/2014 05:55 PM, Mircea Markus wrote: >>> > On Oct 13, 2014, at 14:06, Dan Berindei wrote: >>> > >>> >> >>> >> On Fri, Oct 10, 2014 at 9:01 PM, Mircea Markus >>> >> wrote: >>> >> >>> >> On Oct 10, 2014, at 17:30, William Burns wrote: >>> >> >>> >>>>>>> Also we didn't really talk about the fact that these methods would >>> >>>>>>> ignore ongoing transactions and if that is a concern or not. >>> >>>>>>> >>> >>>>>> It might be a concern for the Hibernate 2LC impl, it was their TCK >>> >>>>>> that >>> >>>>>> prompted the last round of discussions about clear(). >>> >>>>> Although I wonder how much these methods are even used since they >>> >>>>> only >>> >>>>> work for Local, Replication or Invalidation caches in their current >>> >>>>> state (and didn't even use loaders until 6.0). >>> >>>> >>> >>>> There is some more information about the test in the mailing list >>> >>>> discussion >>> >>>> [1] >>> >>>> There's also a JIRA for clear() [2] >>> >>>> >>> >>>> I think 2LC almost never uses distribution, so size() being >>> >>>> local-only >>> >>>> didn't matter, but making it non-tx could cause problems - at least >>> >>>> for that >>> >>>> particular test. >>> >>> I had toyed around with the following idea before, but I never thought >>> >>> of it in the scope of the size method solely, but I have a solution >>> >>> that would work mostly for transactional caches. Essentially the size >>> >>> method would always operate in a READ_COMMITTED like state, using >>> >>> REPEATABLE_READ doesn't seem feasible since we can't keep all the >>> >>> contents in memory. Essentially the iterator would be ran and for >>> >>> each key that is found it checks the context to see if it is there. >>> >>> If the context entry is marked as removed it doesn't count the key, if >>> >>> the key is there it marks the key as found and counts it, and if it is >>> >>> not found it counts it. Then after iteration it finds all the keys in >>> >>> the context that were not found and also adds them to the count. This >>> >>> way it doesn't need to store additional memory (besides iteration >>> >>> costs) as all the context information is in memory. >>> >> sounds good to me. >>> >> >>> >> Mircea, you have to decide whether you want the precise estimation >>> >> using the entry iterator or the loose estimation using dataContainer.size() >>> >> :) >>> >> >>> >> I guess we can't make size() read everything into the invocation >>> >> context, so READ_COMMITTED is all we can provide if we want to keep size() >>> >> transactional. Maybe we don't really need it though... Will, could you >>> >> investigate the failing test that started the clear() thread [1] to see if >>> >> it really needs size() to be transactional? >>> > I'm okay with both approaches TBH, both are much better than what we >>> > currently have. The accurate one is more costly but seems to be the solution >>> > of choice so let's go for it. >>> > >>> >> >>> >>> My original thought was to also make the EntryIterator transactional >>> >>> in the same way which also means the keySet, entrySet and values >>> >>> methods could do the same things. The main reason stumbling block I >>> >>> had was the fact that the iterator and various collections returned >>> >>> could be used outside of the ongoing transaction which didn't seem to >>> >>> make much sense to me. But maybe these should be changed to be more >>> >>> like backing maps which HashMap, ConcurrentHashMap etc use for their >>> >>> methods, where instead it would pick up the transaction if there is >>> >>> one in the current thread and if there is no transaction just start an >>> >>> implicit one. >>> >> or if they are outside of a transaction to deny progress >>> >> >>> >> I don't think it's fair to require an explicit transaction for every >>> >> entrySet(). It should be possible to start an iteration without a >>> >> transaction, and only to invalidate an iteration started from an explicit >>> >> transaction the moment the transaction is committed/rolled back (although it >>> >> would complicate rules a bit). >>> >> >>> >> And what happens if the user writes to the cache while it's iterating >>> >> through the cache-backed collection? Should the user see the new entry in >>> >> the iteration, or not? I don't think you can figure out at the end of the >>> >> iteration which keys were included without keeping all the keys on the >>> >> originator. >>> > If the modification is done outside the iterator one might expect an >>> > ConcurrentModificationException, as it is the case with some JDK iterators. >>> >>> -1 We're aiming at high performance cache with a lot of changes while >>> the operation is executed. This way, the iteration would never complete, >>> unless you explicitly switch the cache to read only mode (either through >>> Infinispan operation or in application). >> >> >> I was referring only to changes made in the same transaction, not changes >> made by other transactions. But you make a good point, we can't throw a >> ConcurrentModificationException if the user for writes in the same >> transaction and ignore other transactions. >> >>> >>> >>> I think that adding isCacheModified() or isTopologyChanged() to the >>> iterator would make sense, if that's not too complicated to implement. >>> Though, if we want non-disturbed iteration, snapshot isolation is the >>> only answer. >> >> >> isCacheModified() is probably too costly to implement. >> isTopologyChanged() could be done, but I'm not sure what's the use case, as >> the entry iterator abstracts topology changes from the user. >> >> I don't think we want undisturbed iteration, at least not at this point. >> Personally, I just want to have a good story on why the iteration behaves in >> a certain way. By my standards, explaining that changes made by other >> transactions may completely/partially/not at all be visible in the iteration >> is fine, explaining that changes made by the same transaction may or may not >> be visible is not. > > Sorry I didn't respond earlier. But these commands would check the > transaction context before returning the value to the user. This > requires a user interaction for this to occur, so we can guarantee > they will always see their updated value if they have one in the > transaction (even if one is ran in between the iteration). The big > thing is whether or not another transaction's update is seen when we > don't have an update for that key (that will be occur if the segment > is completed before the update or not). > > There should be no need to tell if the cache modified or the topology > changed (the former would require a very high impact for performance > to tell with a DIST cache). > > To be honest the wrapper classes would just be delegating to the Cache > for the vast majority of operations (get, remove, contains etc.). It > would only be when someone specifically uses the iterator on the > various collections that the distributed iterator would even be used. > This way the various collections would be backing maps, like HashMap > and ConcurrentHashMap have, just they have to check the transaction as > well. The values collection would be extremely limited in its > supported methods though, pretty much only to iteration and size. > >> >> >> Cheers >> Dan >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Wed Nov 5 02:34:13 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 5 Nov 2014 08:34:13 +0100 Subject: [infinispan-dev] Infinispan tutorial In-Reply-To: <5450EFF4.6050103@redhat.com> References: <5450EFF4.6050103@redhat.com> Message-ID: <1627E280-0E25-461F-A967-BD3EFAF56E5C@redhat.com> Hi Tristan, +1 to having a more step-by-step tutorial :) I?ve tried the tutorial locally and made some notes: - step-0 is a bit confusing since nothing is logged. However, no logging is not due to not enabling it, but the fact that nothing kicks in until getCache() is called, and that only happens in step-1. - How do you enable logging? Also, not sure what I need to change in logging.properties to see some logging of Infinispan. For example: how do you enable debug/trace logging? I?ve tried FINER/FINEST too but did not make a difference. Maybe I need a org.infinispan specific level/formatter combination? - step-4 tag missing. Great work!! Cheers, On 29 Oct 2014, at 14:47, Tristan Tarrant wrote: > Hi guys, > > I've been working on how to spruce up our website, docs and code samples. > While quickstarts are ok, they come as monolithic blobs which tell you > nothing about how you got there. For this reason I believe a > step-by-step tutorial approach is better and I've been looking at the > AngularJS tutorials [0] as good examples on how to achieve this. > I have created a repo [1] on my GitHub user where each commit is a step > in the tutorial. I have tagged the commits using 'step-n' so that you > can checkout any of the steps and run them: > > git checkout step-1 > mvn clean package exec:java > > The GitHub web interface can be used to show the diff between steps, so > that it can be linked from the docs [2]. > > Currently I'm not aiming to build a real application (although > suggestions are welcome in this sense), but just going through the > basics, adding features one by one, etc. > > Comments are welcome. > > Tristan > > --- > [0] https://docs.angularjs.org/tutorial/step_00 > [1] https://github.com/tristantarrant/infinispan-embedded-tutorial > [2] > https://github.com/tristantarrant/infinispan-embedded-tutorial/compare/step-0...step-1?diff=unified > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From tsykora at redhat.com Thu Nov 6 05:01:23 2014 From: tsykora at redhat.com (Tomas Sykora) Date: Thu, 6 Nov 2014 05:01:23 -0500 (EST) Subject: [infinispan-dev] Infinispan 7 documentation In-Reply-To: <1151447816.2432147.1414664816470.JavaMail.zimbra@redhat.com> References: <1151447816.2432147.1414664816470.JavaMail.zimbra@redhat.com> Message-ID: <476679884.6357066.1415268083600.JavaMail.zimbra@redhat.com> My big +1 here. This is something we really need to address, see e.g.: http://stackoverflow.com/questions/26753263/how-to-setup-infinispan-cache-in-a-two-node-cluster We need to somehow change new users' sentences from (citation ^): "The documentation found in Internet is so vague and doesn't suit a beginner." to "Wow, ISPN doc is awesome and I only need a bit of help with this little detail to achieve what I want to do." Tom ----- Original Message ----- > From: "Jiri Holusa" > To: "infinispan -Dev List" > Sent: Thursday, October 30, 2014 11:26:56 AM > Subject: [infinispan-dev] Infinispan 7 documentation > > Hi guys, > > I wanted to share one user experience feedback with you. At university, I had > a lecture about NoSQL datastores and Infinispan was also mentioned. The > lecturer also showed some code examples. To my surprise, he used Infinispan > 6. So after the lecture I asked him why version 6, not 7, and his answer was > quite surprising. > > He told me that he got angry on Infinispan 7 documentation, because many code > snippet examples were from old 6 version and that he was basically unable to > configure it in a reasonable time. So he threw it away and switched back to > Infinispan 6. I justed wanted to make a little discussion about this, > because I think this is quite a big issue. > > I noticed that part of this issue was fixed just recently (18 hours ago, nice > coincidence :)) by [1] (+10000 Gustavo), but there are still some > out-of-date examples. > > But the message I want to say, we should pay attention to this (I know, > boring) stuff, because we're basically discouraging users/community from > using the newest version. Every customer/user will start playing with the > community version and if he's not able to set it up in a few moments, he > will move on to another product. And we don't want that, right? :) > > I also have clap the effort of Tristan with step-by-step tutorial, that's > exactly what user wants and I would be happy to help you in anyway > (verifying, keeping up-to-date, whatever) with it. > > Conclusion: let's pay more attention to documentation, it's the entering > point for every newcomer and we want to make as best first impression as > possible :) > > Thanks, > Jirka > > P.S.: I don't see the changes from [1] in Infinispan User Guide [2], am I > missing something or will it appear there later? > > > > [1] https://github.com/infinispan/infinispan/pull/3011/ > [2] http://infinispan.org/docs/7.0.x/user_guide/user_guide.html > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Thu Nov 6 06:38:33 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 06 Nov 2014 12:38:33 +0100 Subject: [infinispan-dev] Infinispan tutorial In-Reply-To: <1627E280-0E25-461F-A967-BD3EFAF56E5C@redhat.com> References: <5450EFF4.6050103@redhat.com> <1627E280-0E25-461F-A967-BD3EFAF56E5C@redhat.com> Message-ID: <545B5DB9.3050405@redhat.com> Thanks Galder, - no logging in step-0: that is expected (and why it's called step '0'), and I will say so in the actual tutorial text - logging is happening for me, haven't tried with the lower settings I have added one more step which makes the cache clustered and I have updated the tags. Obviously all of this is done via horrible git force pushing :) Tristan On 05/11/14 08:34, Galder Zamarre?o wrote: > Hi Tristan, > > +1 to having a more step-by-step tutorial :) > > I?ve tried the tutorial locally and made some notes: > > - step-0 is a bit confusing since nothing is logged. However, no logging is not due to not enabling it, but the fact that nothing kicks in until getCache() is called, and that only happens in step-1. > > - How do you enable logging? Also, not sure what I need to change in logging.properties to see some logging of Infinispan. For example: how do you enable debug/trace logging? I?ve tried FINER/FINEST too but did not make a difference. Maybe I need a org.infinispan specific level/formatter combination? > > - step-4 tag missing. > > Great work!! > > Cheers, > > On 29 Oct 2014, at 14:47, Tristan Tarrant wrote: > >> Hi guys, >> >> I've been working on how to spruce up our website, docs and code samples. >> While quickstarts are ok, they come as monolithic blobs which tell you >> nothing about how you got there. For this reason I believe a >> step-by-step tutorial approach is better and I've been looking at the >> AngularJS tutorials [0] as good examples on how to achieve this. >> I have created a repo [1] on my GitHub user where each commit is a step >> in the tutorial. I have tagged the commits using 'step-n' so that you >> can checkout any of the steps and run them: >> >> git checkout step-1 >> mvn clean package exec:java >> >> The GitHub web interface can be used to show the diff between steps, so >> that it can be linked from the docs [2]. >> >> Currently I'm not aiming to build a real application (although >> suggestions are welcome in this sense), but just going through the >> basics, adding features one by one, etc. >> >> Comments are welcome. >> >> Tristan >> >> --- >> [0] https://docs.angularjs.org/tutorial/step_00 >> [1] https://github.com/tristantarrant/infinispan-embedded-tutorial >> [2] >> https://github.com/tristantarrant/infinispan-embedded-tutorial/compare/step-0...step-1?diff=unified >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From pedro at infinispan.org Thu Nov 6 09:36:43 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 06 Nov 2014 14:36:43 +0000 Subject: [infinispan-dev] Remoting package refactor Message-ID: <545B877B.9050105@infinispan.org> Hello, As many of you should know, I'm refactoring the remoting package. I was trying and testing what can be done and it's time for an update. Finally, I got a working version (needs some cleanups) that can be found here [1]. The goal is to reduce the complexity from CommandAwareRpcDispatcher and InboundInvocationHandler. This classes are getting full of "if command is type T, then do this" and it will get worse when ISPN-2849 [2] and ISPN-4610 [3] is done. Also, it decouples the Transport from the processing remote commands logic. The main idea is to have one global inbound invocations handler and a per cache inbound invocations handler. The first one will handle the non-cache rpc commands and the later the cache rpc commands. Since each cache has a different configuration, multiple per cache inbound invocation handlers will be implemented. Currently, I have the non-total order and a total order implementation. After ISPN-2849 and ISPN-4610, I'll probably add more, for example: TO tx and TO non-tx, pessimistic, optimistic and default non-tx implementations. change details: * remove total order remote executor service. The remote executor service can be used and it has exactly the same goal. * added a single thread remote executor service. This will handle the FIFO deliver commands. Previously, they were handled by JGroups incoming threads and with a new executor service, each cache can process their own FIFO commands concurrently. * currently the non-blocking cache rpc commands are processed directly in JGroups threads. Not sure if it is ok, but I think we can add another remote executor service for these commands. The advantage is that the JGroups threads no longer will execute remote commands. * the Transport is decoupled. May be useful for the test suite. * possibly remove the TotalOrder*Command (TotalOrderPrepareCommand and similar) (needs to double check). Since we have a total order inbound invocation handler for total order transactions, it is not necessary a special command to identify when the transaction is in total order or not. Comments, ideas, feedback are welcome. Cheers, Pedro [1] https://github.com/pruivo/infinispan/compare/remoting_refactor [2] https://issues.jboss.org/browse/ISPN-2849 => don't keep thread blocked waiting for locks [3] https://issues.jboss.org/browse/ISPN-4610 => non transactional total order From bban at redhat.com Thu Nov 6 10:01:05 2014 From: bban at redhat.com (Bela Ban) Date: Thu, 06 Nov 2014 16:01:05 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545B877B.9050105@infinispan.org> References: <545B877B.9050105@infinispan.org> Message-ID: <545B8D31.1020404@redhat.com> On 06/11/14 15:36, Pedro Ruivo wrote: > Hello, > > As many of you should know, I'm refactoring the remoting package. I was > trying and testing what can be done and it's time for an update. > Finally, I got a working version (needs some cleanups) that can be found > here [1]. > > The goal is to reduce the complexity from CommandAwareRpcDispatcher and > InboundInvocationHandler. This classes are getting full of "if command > is type T, then do this" and it will get worse when ISPN-2849 [2] and > ISPN-4610 [3] is done. Also, it decouples the Transport from the > processing remote commands logic. > > The main idea is to have one global inbound invocations handler and a > per cache inbound invocations handler. The first one will handle the > non-cache rpc commands and the later the cache rpc commands. Since each > cache has a different configuration, multiple per cache inbound > invocation handlers will be implemented. Currently, I have the non-total > order and a total order implementation. After ISPN-2849 and ISPN-4610, > I'll probably add more, for example: TO tx and TO non-tx, pessimistic, > optimistic and default non-tx implementations. > > change details: > > * remove total order remote executor service. The remote executor > service can be used and it has exactly the same goal. > > * added a single thread remote executor service. This will handle the > FIFO deliver commands. Previously, they were handled by JGroups incoming > threads and with a new executor service, each cache can process their > own FIFO commands concurrently. +1000. This allows multiple updates from the same sender but to different caches to be executed in parallel, and will speed thing up. Do you intend to share a thread pool between the invocations handlers of the various caches, or do they each have their own thread pool ? Or is this configurable ? > * currently the non-blocking cache rpc commands are processed directly > in JGroups threads. Not sure if it is ok, but I think we can add another > remote executor service for these commands. The advantage is that the > JGroups threads no longer will execute remote commands. > > * the Transport is decoupled. May be useful for the test suite. > > * possibly remove the TotalOrder*Command (TotalOrderPrepareCommand and > similar) (needs to double check). Since we have a total order inbound > invocation handler for total order transactions, it is not necessary a > special command to identify when the transaction is in total order or not. > > Comments, ideas, feedback are welcome. > > Cheers, > Pedro > > [1] https://github.com/pruivo/infinispan/compare/remoting_refactor > [2] https://issues.jboss.org/browse/ISPN-2849 => don't keep thread > blocked waiting for locks > [3] https://issues.jboss.org/browse/ISPN-4610 => non transactional total > order > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From pedro at infinispan.org Thu Nov 6 10:23:56 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 06 Nov 2014 15:23:56 +0000 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545B8D31.1020404@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> Message-ID: <545B928C.3000307@infinispan.org> On 11/06/2014 03:01 PM, Bela Ban wrote: > > > On 06/11/14 15:36, Pedro Ruivo wrote: >> >> * added a single thread remote executor service. This will handle the >> FIFO deliver commands. Previously, they were handled by JGroups incoming >> threads and with a new executor service, each cache can process their >> own FIFO commands concurrently. > > +1000. This allows multiple updates from the same sender but to > different caches to be executed in parallel, and will speed thing up. > > Do you intend to share a thread pool between the invocations handlers of > the various caches, or do they each have their own thread pool ? Or is > this configurable ? > That is question that cross my mind and I don't have any idea what would be the best. So, for now, I will leave the thread pool shared between the handlers. Never thought to make it configurable, but maybe that is the best option. And maybe, it should be possible to have different max-thread size per cache. For example: * all caches using this remote executor will share the same instance * all caches using this remote executor will create their own thread pool with max-threads equals to 1 * all caches using this remote executor will create their own thread pool with max-threads equals to 1000 is this what you have in mind? comments? Cheers, Pedro From bban at redhat.com Thu Nov 6 10:37:22 2014 From: bban at redhat.com (Bela Ban) Date: Thu, 06 Nov 2014 16:37:22 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545B928C.3000307@infinispan.org> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> Message-ID: <545B95B2.4070506@redhat.com> #1 I would by default have 1 thread pool shared by all caches #2 This global thread pool should be configurable, perhaps in the section ? #3 Each cache by default uses the gobal thread pool #4 A cache can define its own thread pool, then it would use this one and not the global thread pool I think this gives you a mixture between ease of use and flexibility in configuring pool per cache if needed On 06/11/14 16:23, Pedro Ruivo wrote: > > > On 11/06/2014 03:01 PM, Bela Ban wrote: >> >> >> On 06/11/14 15:36, Pedro Ruivo wrote: >>> >>> * added a single thread remote executor service. This will handle the >>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>> threads and with a new executor service, each cache can process their >>> own FIFO commands concurrently. >> >> +1000. This allows multiple updates from the same sender but to >> different caches to be executed in parallel, and will speed thing up. >> >> Do you intend to share a thread pool between the invocations handlers of >> the various caches, or do they each have their own thread pool ? Or is >> this configurable ? >> > > That is question that cross my mind and I don't have any idea what would > be the best. So, for now, I will leave the thread pool shared between > the handlers. > > Never thought to make it configurable, but maybe that is the best > option. And maybe, it should be possible to have different max-thread > size per cache. For example: > > * all caches using this remote executor will share the same instance > > > * all caches using this remote executor will create their own thread > pool with max-threads equals to 1 > max-threads=1 .../> > > * all caches using this remote executor will create their own thread > pool with max-threads equals to 1000 > max-thread=1000 .../> > > is this what you have in mind? comments? > > Cheers, > Pedro > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Thu Nov 6 13:40:46 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 06 Nov 2014 19:40:46 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545B95B2.4070506@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> Message-ID: <545BC0AE.90002@redhat.com> I second the opinion that any threadpools should be shared by default. There are users who have hundreds or thousands of caches and having separate threadpool for each of them could easily drain resources. And sharing resources is the purpose of threadpools, right? Radim On 11/06/2014 04:37 PM, Bela Ban wrote: > #1 I would by default have 1 thread pool shared by all caches > #2 This global thread pool should be configurable, perhaps in the > section ? > #3 Each cache by default uses the gobal thread pool > #4 A cache can define its own thread pool, then it would use this one > and not the global thread pool > > I think this gives you a mixture between ease of use and flexibility in > configuring pool per cache if needed > > On 06/11/14 16:23, Pedro Ruivo wrote: >> >> On 11/06/2014 03:01 PM, Bela Ban wrote: >>> >>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>> * added a single thread remote executor service. This will handle the >>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>> threads and with a new executor service, each cache can process their >>>> own FIFO commands concurrently. >>> +1000. This allows multiple updates from the same sender but to >>> different caches to be executed in parallel, and will speed thing up. >>> >>> Do you intend to share a thread pool between the invocations handlers of >>> the various caches, or do they each have their own thread pool ? Or is >>> this configurable ? >>> >> That is question that cross my mind and I don't have any idea what would >> be the best. So, for now, I will leave the thread pool shared between >> the handlers. >> >> Never thought to make it configurable, but maybe that is the best >> option. And maybe, it should be possible to have different max-thread >> size per cache. For example: >> >> * all caches using this remote executor will share the same instance >> >> >> * all caches using this remote executor will create their own thread >> pool with max-threads equals to 1 >> > max-threads=1 .../> >> >> * all caches using this remote executor will create their own thread >> pool with max-threads equals to 1000 >> > max-thread=1000 .../> >> >> is this what you have in mind? comments? >> >> Cheers, >> Pedro >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- Radim Vansa JBoss DataGrid QA From gustavonalle at gmail.com Thu Nov 6 14:29:48 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Thu, 6 Nov 2014 19:29:48 +0000 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545BC0AE.90002@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> Message-ID: > I second the opinion that any threadpools should be shared by default. > There are users who have hundreds or thousands of caches and having > separate threadpool for each of them could easily drain resources. And > sharing resources is the purpose of threadpools, right? Provided that no interdependent tasks are executed in the bounded shared thread pool, leading to starvation deadlock Gustavo From ttarrant at redhat.com Thu Nov 6 14:31:31 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 06 Nov 2014 20:31:31 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545BC0AE.90002@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> Message-ID: <545BCC93.4010205@redhat.com> My opinion is that we should aim for less configuration, i.e. threadpools should mostly have sensible defaults and be shared by default unless there are extremely good reasons for not doing so. Tristan On 06/11/14 19:40, Radim Vansa wrote: > I second the opinion that any threadpools should be shared by default. > There are users who have hundreds or thousands of caches and having > separate threadpool for each of them could easily drain resources. And > sharing resources is the purpose of threadpools, right? > > Radim > > On 11/06/2014 04:37 PM, Bela Ban wrote: >> #1 I would by default have 1 thread pool shared by all caches >> #2 This global thread pool should be configurable, perhaps in the >> section ? >> #3 Each cache by default uses the gobal thread pool >> #4 A cache can define its own thread pool, then it would use this one >> and not the global thread pool >> >> I think this gives you a mixture between ease of use and flexibility in >> configuring pool per cache if needed >> >> On 06/11/14 16:23, Pedro Ruivo wrote: >>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>> * added a single thread remote executor service. This will handle the >>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>> threads and with a new executor service, each cache can process their >>>>> own FIFO commands concurrently. >>>> +1000. This allows multiple updates from the same sender but to >>>> different caches to be executed in parallel, and will speed thing up. >>>> >>>> Do you intend to share a thread pool between the invocations handlers of >>>> the various caches, or do they each have their own thread pool ? Or is >>>> this configurable ? >>>> >>> That is question that cross my mind and I don't have any idea what would >>> be the best. So, for now, I will leave the thread pool shared between >>> the handlers. >>> >>> Never thought to make it configurable, but maybe that is the best >>> option. And maybe, it should be possible to have different max-thread >>> size per cache. For example: >>> >>> * all caches using this remote executor will share the same instance >>> >>> >>> * all caches using this remote executor will create their own thread >>> pool with max-threads equals to 1 >>> >> max-threads=1 .../> >>> >>> * all caches using this remote executor will create their own thread >>> pool with max-threads equals to 1000 >>> >> max-thread=1000 .../> >>> >>> is this what you have in mind? comments? >>> >>> Cheers, >>> Pedro >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > From bban at redhat.com Fri Nov 7 02:35:37 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 07 Nov 2014 08:35:37 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545BCC93.4010205@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> Message-ID: <545C7649.6090309@redhat.com> That's exactly what I suggested. No config gives you a shared global thread pool for all caches. Those caches which need a separate pool can do that via configuration (and of course also programmatically) On 06/11/14 20:31, Tristan Tarrant wrote: > My opinion is that we should aim for less configuration, i.e. > threadpools should mostly have sensible defaults and be shared by > default unless there are extremely good reasons for not doing so. > > Tristan > > On 06/11/14 19:40, Radim Vansa wrote: >> I second the opinion that any threadpools should be shared by default. >> There are users who have hundreds or thousands of caches and having >> separate threadpool for each of them could easily drain resources. And >> sharing resources is the purpose of threadpools, right? >> >> Radim >> >> On 11/06/2014 04:37 PM, Bela Ban wrote: >>> #1 I would by default have 1 thread pool shared by all caches >>> #2 This global thread pool should be configurable, perhaps in the >>> section ? >>> #3 Each cache by default uses the gobal thread pool >>> #4 A cache can define its own thread pool, then it would use this one >>> and not the global thread pool >>> >>> I think this gives you a mixture between ease of use and flexibility in >>> configuring pool per cache if needed >>> >>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>> * added a single thread remote executor service. This will handle the >>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>> threads and with a new executor service, each cache can process their >>>>>> own FIFO commands concurrently. >>>>> +1000. This allows multiple updates from the same sender but to >>>>> different caches to be executed in parallel, and will speed thing up. >>>>> >>>>> Do you intend to share a thread pool between the invocations handlers of >>>>> the various caches, or do they each have their own thread pool ? Or is >>>>> this configurable ? >>>>> >>>> That is question that cross my mind and I don't have any idea what would >>>> be the best. So, for now, I will leave the thread pool shared between >>>> the handlers. >>>> >>>> Never thought to make it configurable, but maybe that is the best >>>> option. And maybe, it should be possible to have different max-thread >>>> size per cache. For example: >>>> >>>> * all caches using this remote executor will share the same instance >>>> >>>> >>>> * all caches using this remote executor will create their own thread >>>> pool with max-threads equals to 1 >>>> >>> max-threads=1 .../> >>>> >>>> * all caches using this remote executor will create their own thread >>>> pool with max-threads equals to 1000 >>>> >>> max-thread=1000 .../> >>>> >>>> is this what you have in mind? comments? >>>> >>>> Cheers, >>>> Pedro >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Fri Nov 7 03:31:24 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 07 Nov 2014 09:31:24 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545C7649.6090309@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> Message-ID: <545C835C.3020301@redhat.com> Btw., have you ever considered checks if a thread returns to pool reasonably often? Some of the other datagrids use this, though there's not much how to react upon that beyond printing out stack traces (but you can at least report to management that some node seems to be broken). Radim On 11/07/2014 08:35 AM, Bela Ban wrote: > That's exactly what I suggested. No config gives you a shared global > thread pool for all caches. > > Those caches which need a separate pool can do that via configuration > (and of course also programmatically) > > On 06/11/14 20:31, Tristan Tarrant wrote: >> My opinion is that we should aim for less configuration, i.e. >> threadpools should mostly have sensible defaults and be shared by >> default unless there are extremely good reasons for not doing so. >> >> Tristan >> >> On 06/11/14 19:40, Radim Vansa wrote: >>> I second the opinion that any threadpools should be shared by default. >>> There are users who have hundreds or thousands of caches and having >>> separate threadpool for each of them could easily drain resources. And >>> sharing resources is the purpose of threadpools, right? >>> >>> Radim >>> >>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>> #1 I would by default have 1 thread pool shared by all caches >>>> #2 This global thread pool should be configurable, perhaps in the >>>> section ? >>>> #3 Each cache by default uses the gobal thread pool >>>> #4 A cache can define its own thread pool, then it would use this one >>>> and not the global thread pool >>>> >>>> I think this gives you a mixture between ease of use and flexibility in >>>> configuring pool per cache if needed >>>> >>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>> * added a single thread remote executor service. This will handle the >>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>> threads and with a new executor service, each cache can process their >>>>>>> own FIFO commands concurrently. >>>>>> +1000. This allows multiple updates from the same sender but to >>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>> >>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>> this configurable ? >>>>>> >>>>> That is question that cross my mind and I don't have any idea what would >>>>> be the best. So, for now, I will leave the thread pool shared between >>>>> the handlers. >>>>> >>>>> Never thought to make it configurable, but maybe that is the best >>>>> option. And maybe, it should be possible to have different max-thread >>>>> size per cache. For example: >>>>> >>>>> * all caches using this remote executor will share the same instance >>>>> >>>>> >>>>> * all caches using this remote executor will create their own thread >>>>> pool with max-threads equals to 1 >>>>> >>>> max-threads=1 .../> >>>>> >>>>> * all caches using this remote executor will create their own thread >>>>> pool with max-threads equals to 1000 >>>>> >>>> max-thread=1000 .../> >>>>> >>>>> is this what you have in mind? comments? >>>>> >>>>> Cheers, >>>>> Pedro >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Fri Nov 7 07:21:46 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 07 Nov 2014 13:21:46 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545C835C.3020301@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> Message-ID: <545CB95A.4020909@redhat.com> Hi Radim, no I haven't. However, you can replace the thread pools used by JGroups and use custom pools. I like another idea better: inject Byteman code at runtime that keeps track of this, and *other useful stats as well*. It would be very useful to support if we could ship a package to a customer that is injected into their running system and grabs all the vital stats we need for a few minutes, then removes itself again and those stats are then sent to use as a ZIP file. The good thing about byteman is that it can remove itself without a trace; ie. there's no overhead before / after running byteman. On 07/11/14 09:31, Radim Vansa wrote: > Btw., have you ever considered checks if a thread returns to pool > reasonably often? Some of the other datagrids use this, though there's > not much how to react upon that beyond printing out stack traces (but > you can at least report to management that some node seems to be broken). > > Radim > > On 11/07/2014 08:35 AM, Bela Ban wrote: >> That's exactly what I suggested. No config gives you a shared global >> thread pool for all caches. >> >> Those caches which need a separate pool can do that via configuration >> (and of course also programmatically) >> >> On 06/11/14 20:31, Tristan Tarrant wrote: >>> My opinion is that we should aim for less configuration, i.e. >>> threadpools should mostly have sensible defaults and be shared by >>> default unless there are extremely good reasons for not doing so. >>> >>> Tristan >>> >>> On 06/11/14 19:40, Radim Vansa wrote: >>>> I second the opinion that any threadpools should be shared by default. >>>> There are users who have hundreds or thousands of caches and having >>>> separate threadpool for each of them could easily drain resources. And >>>> sharing resources is the purpose of threadpools, right? >>>> >>>> Radim >>>> >>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>> #1 I would by default have 1 thread pool shared by all caches >>>>> #2 This global thread pool should be configurable, perhaps in the >>>>> section ? >>>>> #3 Each cache by default uses the gobal thread pool >>>>> #4 A cache can define its own thread pool, then it would use this one >>>>> and not the global thread pool >>>>> >>>>> I think this gives you a mixture between ease of use and flexibility in >>>>> configuring pool per cache if needed >>>>> >>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>> own FIFO commands concurrently. >>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>> >>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>> this configurable ? >>>>>>> >>>>>> That is question that cross my mind and I don't have any idea what would >>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>> the handlers. >>>>>> >>>>>> Never thought to make it configurable, but maybe that is the best >>>>>> option. And maybe, it should be possible to have different max-thread >>>>>> size per cache. For example: >>>>>> >>>>>> * all caches using this remote executor will share the same instance >>>>>> >>>>>> >>>>>> * all caches using this remote executor will create their own thread >>>>>> pool with max-threads equals to 1 >>>>>> >>>>> max-threads=1 .../> >>>>>> >>>>>> * all caches using this remote executor will create their own thread >>>>>> pool with max-threads equals to 1000 >>>>>> >>>>> max-thread=1000 .../> >>>>>> >>>>>> is this what you have in mind? comments? >>>>>> >>>>>> Cheers, >>>>>> Pedro >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > > -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Fri Nov 7 07:43:08 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 7 Nov 2014 12:43:08 +0000 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545CB95A.4020909@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> Message-ID: I think our priority should be to get rid of the need for threadpools - not their configuration options. If there is a real need for threadpools, then you have to provide full configuration options as you simply don't know how it's going to be used nor on what kind of hardware it's going to be run. Sounds like yet another reason to see if we should split the configuration in two areas: - high level simple configuration (what you need to set to get started) - expert tuning (what you'll need when production time comes) Also some of most recent users on Hibernate forums are puzzled on how to do tuning for Infinispan, when they're deploying several applications using it in the same container. I'm educating them on FORK, but we should be able to go beyond that: allow containers and platform developers to share threadpools across CacheManagers, so in such a case you'd want Infinispan to use a service for threadpool management, and allow people to inject a custom component for it. Sanne On 7 November 2014 12:21, Bela Ban wrote: > Hi Radim, > > no I haven't. However, you can replace the thread pools used by JGroups > and use custom pools. > > I like another idea better: inject Byteman code at runtime that keeps > track of this, and *other useful stats as well*. > > It would be very useful to support if we could ship a package to a > customer that is injected into their running system and grabs all the > vital stats we need for a few minutes, then removes itself again and > those stats are then sent to use as a ZIP file. > The good thing about byteman is that it can remove itself without a > trace; ie. there's no overhead before / after running byteman. > > > On 07/11/14 09:31, Radim Vansa wrote: >> Btw., have you ever considered checks if a thread returns to pool >> reasonably often? Some of the other datagrids use this, though there's >> not much how to react upon that beyond printing out stack traces (but >> you can at least report to management that some node seems to be broken). >> >> Radim >> >> On 11/07/2014 08:35 AM, Bela Ban wrote: >>> That's exactly what I suggested. No config gives you a shared global >>> thread pool for all caches. >>> >>> Those caches which need a separate pool can do that via configuration >>> (and of course also programmatically) >>> >>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>> My opinion is that we should aim for less configuration, i.e. >>>> threadpools should mostly have sensible defaults and be shared by >>>> default unless there are extremely good reasons for not doing so. >>>> >>>> Tristan >>>> >>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>> I second the opinion that any threadpools should be shared by default. >>>>> There are users who have hundreds or thousands of caches and having >>>>> separate threadpool for each of them could easily drain resources. And >>>>> sharing resources is the purpose of threadpools, right? >>>>> >>>>> Radim >>>>> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>> section ? >>>>>> #3 Each cache by default uses the gobal thread pool >>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>> and not the global thread pool >>>>>> >>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>> configuring pool per cache if needed >>>>>> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>> own FIFO commands concurrently. >>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>> >>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>> this configurable ? >>>>>>>> >>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>> the handlers. >>>>>>> >>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>> size per cache. For example: >>>>>>> >>>>>>> * all caches using this remote executor will share the same instance >>>>>>> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1 >>>>>>> >>>>>> max-threads=1 .../> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1000 >>>>>>> >>>>>> max-thread=1000 .../> >>>>>>> >>>>>>> is this what you have in mind? comments? >>>>>>> >>>>>>> Cheers, >>>>>>> Pedro >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >> >> > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Fri Nov 7 07:45:38 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 07 Nov 2014 13:45:38 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <545CB95A.4020909@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> Message-ID: <545CBEF2.503@redhat.com> Hijacking thread 'Remoting package refactor' as the discussion has shifted. Sure, AOP is another approach. However, besided another limitations, Byteman rules are quite fragile with respect to different versions: if you're injecting code based on internal implementation method, when the name/signature changes, the rule is broken. Sometimes you even have to use AT LINE to formulate the injection point. Would you accept a compile-time dependency to some annotations package in JGroups that could 'tag' the injection points? The idea is that anyone changing the source code would move the injection point annotations as well. I was already thinking about this in relation with Message Flow Tracer [1] (not working right now as the JGroups have changed since I was writing that)? Roman Macor is right now updating the rules and I was hoping that we could insert annotations into JGroups that would be used instead of the rules (I was already considering different AOP framework as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). Radim [1] https://github.com/rvansa/message-flow-tracer [2] https://issues.jboss.org/browse/BYTEMAN-237 On 11/07/2014 01:21 PM, Bela Ban wrote: > Hi Radim, > > no I haven't. However, you can replace the thread pools used by JGroups > and use custom pools. > > I like another idea better: inject Byteman code at runtime that keeps > track of this, and *other useful stats as well*. > > It would be very useful to support if we could ship a package to a > customer that is injected into their running system and grabs all the > vital stats we need for a few minutes, then removes itself again and > those stats are then sent to use as a ZIP file. > The good thing about byteman is that it can remove itself without a > trace; ie. there's no overhead before / after running byteman. > > > On 07/11/14 09:31, Radim Vansa wrote: >> Btw., have you ever considered checks if a thread returns to pool >> reasonably often? Some of the other datagrids use this, though there's >> not much how to react upon that beyond printing out stack traces (but >> you can at least report to management that some node seems to be broken). >> >> Radim >> >> On 11/07/2014 08:35 AM, Bela Ban wrote: >>> That's exactly what I suggested. No config gives you a shared global >>> thread pool for all caches. >>> >>> Those caches which need a separate pool can do that via configuration >>> (and of course also programmatically) >>> >>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>> My opinion is that we should aim for less configuration, i.e. >>>> threadpools should mostly have sensible defaults and be shared by >>>> default unless there are extremely good reasons for not doing so. >>>> >>>> Tristan >>>> >>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>> I second the opinion that any threadpools should be shared by default. >>>>> There are users who have hundreds or thousands of caches and having >>>>> separate threadpool for each of them could easily drain resources. And >>>>> sharing resources is the purpose of threadpools, right? >>>>> >>>>> Radim >>>>> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>> section ? >>>>>> #3 Each cache by default uses the gobal thread pool >>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>> and not the global thread pool >>>>>> >>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>> configuring pool per cache if needed >>>>>> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>> own FIFO commands concurrently. >>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>> >>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>> this configurable ? >>>>>>>> >>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>> the handlers. >>>>>>> >>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>> size per cache. For example: >>>>>>> >>>>>>> * all caches using this remote executor will share the same instance >>>>>>> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1 >>>>>>> >>>>>> max-threads=1 .../> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1000 >>>>>>> >>>>>> max-thread=1000 .../> >>>>>>> >>>>>>> is this what you have in mind? comments? >>>>>>> >>>>>>> Cheers, >>>>>>> Pedro >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >> -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Fri Nov 7 08:27:31 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 07 Nov 2014 14:27:31 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <545CBEF2.503@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> Message-ID: <545CC8C3.40507@redhat.com> On 07/11/14 13:45, Radim Vansa wrote: > Hijacking thread 'Remoting package refactor' as the discussion has shifted. > > Sure, AOP is another approach. However, besided another limitations, > Byteman rules are quite fragile with respect to different versions: if > you're injecting code based on internal implementation method, when the > name/signature changes, the rule is broken. Sometimes you even have to > use AT LINE to formulate the injection point. Right. This is the same problem though as when support needs to create a (e.f. one-off) patch to be applied by a customer: they need to grab the exact same version the customer is running. So each diagnosis package would have to be dependent on the version (of JGroups or JDG) used. Regardless of whether custom rules are added by a support engineer, this has to be tested anyway before sending it off to the customer. > Would you accept a compile-time dependency to some annotations package > in JGroups that could 'tag' the injection points? The idea is that > anyone changing the source code would move the injection point > annotations as well. You mean something like this ? @InjectionPoint("down") public void down(Event e) or @InjectingPoint ("num_msgs_sent") protected int num_msgs_sent; No, this won't work... how would you do that ? I don't really like this, on a general principle: AOP should *not* have to change the src code in order to work. And the fact of the matter is that you won't be able to identify *all* injection points beforehand... unless you want to sprinkle your code with annotations. > I was already thinking about this in relation with Message Flow Tracer > [1] (not working right now as the JGroups have changed since I was > writing that)? I took a quick look: nice ! This is exactly what I meant. Should be some sort of rule base in a VCS, to which support engineers add rules when they have a case which requires it and they deem it to be generally useful. Re API changes: doesn't Byteman have functionality which can check a rule set against a code base (offline), to find out incompatibilities ? Something like a static rule checker ? > Roman Macor is right now updating the rules and I was > hoping that we could insert annotations into JGroups that would be used > instead of the rules (I was already considering different AOP framework > as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). Yes, I've also run into this before, not really nice. > Radim > > [1] https://github.com/rvansa/message-flow-tracer > [2] https://issues.jboss.org/browse/BYTEMAN-237 > > On 11/07/2014 01:21 PM, Bela Ban wrote: >> Hi Radim, >> >> no I haven't. However, you can replace the thread pools used by JGroups >> and use custom pools. >> >> I like another idea better: inject Byteman code at runtime that keeps >> track of this, and *other useful stats as well*. >> >> It would be very useful to support if we could ship a package to a >> customer that is injected into their running system and grabs all the >> vital stats we need for a few minutes, then removes itself again and >> those stats are then sent to use as a ZIP file. >> The good thing about byteman is that it can remove itself without a >> trace; ie. there's no overhead before / after running byteman. >> >> >> On 07/11/14 09:31, Radim Vansa wrote: >>> Btw., have you ever considered checks if a thread returns to pool >>> reasonably often? Some of the other datagrids use this, though there's >>> not much how to react upon that beyond printing out stack traces (but >>> you can at least report to management that some node seems to be broken). >>> >>> Radim >>> >>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>> That's exactly what I suggested. No config gives you a shared global >>>> thread pool for all caches. >>>> >>>> Those caches which need a separate pool can do that via configuration >>>> (and of course also programmatically) >>>> >>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>> My opinion is that we should aim for less configuration, i.e. >>>>> threadpools should mostly have sensible defaults and be shared by >>>>> default unless there are extremely good reasons for not doing so. >>>>> >>>>> Tristan >>>>> >>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>> I second the opinion that any threadpools should be shared by default. >>>>>> There are users who have hundreds or thousands of caches and having >>>>>> separate threadpool for each of them could easily drain resources. And >>>>>> sharing resources is the purpose of threadpools, right? >>>>>> >>>>>> Radim >>>>>> >>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>> section ? >>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>> and not the global thread pool >>>>>>> >>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>> configuring pool per cache if needed >>>>>>> >>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>> own FIFO commands concurrently. >>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>> >>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>> this configurable ? >>>>>>>>> >>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>> the handlers. >>>>>>>> >>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>> size per cache. For example: >>>>>>>> >>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>> >>>>>>>> >>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>> pool with max-threads equals to 1 >>>>>>>> >>>>>>> max-threads=1 .../> >>>>>>>> >>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>> pool with max-threads equals to 1000 >>>>>>>> >>>>>>> max-thread=1000 .../> >>>>>>>> >>>>>>>> is this what you have in mind? comments? >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Pedro >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>> > > -- Bela Ban, JGroups lead (http://www.jgroups.org) From dan.berindei at gmail.com Fri Nov 7 10:24:20 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 7 Nov 2014 17:24:20 +0200 Subject: [infinispan-dev] Rebalancing flag as part of the CacheStatusResponse In-Reply-To: References: Message-ID: Hi Erik This makes a lot of sense. In fact, I was really close to implementing it while I was replacing RebalancePolicy with AvailabilityStrategy. Unfortunately I hit some problems and I had to postpone it (mostly because I was also trying to make the flag per-cache). The only question is what happens after a merge, if one partition has rebalancing enabled, and the other has rebalancing disabled. I think I would prefer to keep it disabled if at least one partition had it disabled. E.g. if you start a new node and it doesn't join properly, you wouldn't want it to trigger a rebalance when it finally finds the cluster, only after you enable rebalancing yourself. Cheers Dan On Tue, Oct 28, 2014 at 12:00 AM, Erik Salter wrote: > Hi all, > > This topic came up in a separate discussion with Mircea, and he suggested > I post something on the mailing list for a wider audience. > > I have a business case where I need the value of the rebalancing flag read > by the joining nodes. Let's say we have a TACH where we want our keys > striped across machines, racks, etc. Due to how NBST works, if we start a > bunch of nodes on one side of the topology marker, we'rewill end up with > the case where all keys will dog-pile on the first node that joins before > being disseminated to the other nodes. In other words, the first joining > node on the other side of the topology acts as a "pivot." That's bad, > especially if the key is marked as DELTA_WRITE, where the receiving node > must pull the key from the readCH before applying the changelog. > > So not only do we have a single choke-point, but it's made worse by the > initial burst of every write requiring numOwner threads for remote reads. > > If we disable rebalancing and start up the nodes on the other side of the > topology, we can process this in a single view change. But there's a > catch -- and this is the reason I added the state of the flag. We've run > into a case where the current coordinator changed (crash or a MERGE) as > the other nodes are starting up. And the new coordinator was elected from > the new side of the topology. So we had two separate but balanced CHs on > both sides of the topology. And data integrity went out the window. > > Hence the flag. Note also that this deployment requires the > awaitInitialTransfer flag to be false. > > In a real production environment, this has saved me more times than I can > count. Node failover/failback is now reasonably deterministic with a > simple operational procedure for our customer(s) to follow. > > > The question is whether this feature would be useful for the community. > Even with the new partition handling, I think this implementation is still > viable and may warrant inclusion into 7.0 (or 7.1). What does the team > think? I welcome any and all feedback. > > Regards, > > Erik Salter > Cisco Systems, SPVTG > (404) 317-0693 > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141107/3afe4c8a/attachment.html From sanne at infinispan.org Fri Nov 7 10:47:13 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 7 Nov 2014 15:47:13 +0000 Subject: [infinispan-dev] Feature request: manage and share a CacheManager across deployments on WildFly Message-ID: I'm witnessing users of Hibernate Search who say they deploy several dozens of JPA applications using Hibernate Search in a single container, and when evaluating usage of Infnispan for index storage they would like them all to share the CacheManager, rather than starting a new CacheManager for each and then have to worry about things like JGroups isolation or rather reuse via FORK. This is easy to achieve by configuring the CacheManager in the WildFly configuration, and then looking it up by JNDI name.. but is not easy at all to achieve if you want to use the custom modules which we deliver to allow using a different Infinispan version of what's included in WildFly. That's nasty, because we ultimately want people to use our modules and leave the ones in WildFly for its internal usage. It would be nice if the team could include in the modules.zip a way to pre-start configured caches, and instructions to mark their deployments as depending on this service. Would be useful to then connect this to monitoring too.. Sanne From dan.berindei at gmail.com Sat Nov 8 05:12:35 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Sat, 8 Nov 2014 12:12:35 +0200 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> Message-ID: I don't think we'll ever get to the point where we don't need *any* thread pools in Infinispan :) OTOH I also want to reduce the number of thread pools and thread pool configurations, so I'd rather not add per-cache thread pools until we see a clear need for it. In particular, I don't think we need a single-thread pool per cache for non-OOB commands, they can be executed on the global remote executor thread pool just like total order messages. The same way we maintain FIFO order per cache + key for total order commands, we can maintain FIFO order per cache for non-OOB commands. I'm not sure it will lead to better performance than executing the commands on the JGroups threads directly, as we're gaining the execution in parallel of commands for different caches and we're losing the execution in parallel of commands from different senders. But I guess it's worth trying. Further comments inline... On Fri, Nov 7, 2014 at 2:43 PM, Sanne Grinovero wrote: > I think our priority should be to get rid of the need for threadpools > - not their configuration options. > > If there is a real need for threadpools, then you have to provide full > configuration options as you simply don't know how it's going to be > used nor on what kind of hardware it's going to be run. > Sounds like yet another reason to see if we should split the > configuration in two areas: > - high level simple configuration (what you need to set to get started) > - expert tuning (what you'll need when production time comes) > I hope you don't mean to say that most Infinispan users will give up before production time comes, so they will never need to learn the expert tuning configuration :) I'd rather remove a configuration option than tell the users that they're not smart enough to use it. > > Also some of most recent users on Hibernate forums are puzzled on how > to do tuning for Infinispan, when they're deploying several > applications using it in the same container. I'm educating them on > FORK, but we should be able to go beyond that: allow containers and > platform developers to share threadpools across CacheManagers, so in > such a case you'd want Infinispan to use a service for threadpool > management, and allow people to inject a custom component for it. > I'm not sure what kind of service you have in mind here, we allow the injection of each executor in the programmatic configuration, so you can already share thread pools between cache managers. The server XML configuration also allows you to reuse a thread pool for all the cache-containers. > > Sanne > > > On 7 November 2014 12:21, Bela Ban wrote: > > Hi Radim, > > > > no I haven't. However, you can replace the thread pools used by JGroups > > and use custom pools. > > > > I like another idea better: inject Byteman code at runtime that keeps > > track of this, and *other useful stats as well*. > > > > It would be very useful to support if we could ship a package to a > > customer that is injected into their running system and grabs all the > > vital stats we need for a few minutes, then removes itself again and > > those stats are then sent to use as a ZIP file. > > The good thing about byteman is that it can remove itself without a > > trace; ie. there's no overhead before / after running byteman. > > > > > > On 07/11/14 09:31, Radim Vansa wrote: > >> Btw., have you ever considered checks if a thread returns to pool > >> reasonably often? Some of the other datagrids use this, though there's > >> not much how to react upon that beyond printing out stack traces (but > >> you can at least report to management that some node seems to be > broken). > >> > >> Radim > >> > >> On 11/07/2014 08:35 AM, Bela Ban wrote: > >>> That's exactly what I suggested. No config gives you a shared global > >>> thread pool for all caches. > >>> > >>> Those caches which need a separate pool can do that via configuration > >>> (and of course also programmatically) > >>> > >>> On 06/11/14 20:31, Tristan Tarrant wrote: > >>>> My opinion is that we should aim for less configuration, i.e. > >>>> threadpools should mostly have sensible defaults and be shared by > >>>> default unless there are extremely good reasons for not doing so. > >>>> > >>>> Tristan > >>>> > >>>> On 06/11/14 19:40, Radim Vansa wrote: > >>>>> I second the opinion that any threadpools should be shared by > default. > >>>>> There are users who have hundreds or thousands of caches and having > >>>>> separate threadpool for each of them could easily drain resources. > And > >>>>> sharing resources is the purpose of threadpools, right? > >>>>> > >>>>> Radim > >>>>> > >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: > >>>>>> #1 I would by default have 1 thread pool shared by all caches > >>>>>> #2 This global thread pool should be configurable, perhaps in the > >>>>>> section ? > >>>>>> #3 Each cache by default uses the gobal thread pool > >>>>>> #4 A cache can define its own thread pool, then it would use this > one > >>>>>> and not the global thread pool > >>>>>> > >>>>>> I think this gives you a mixture between ease of use and > flexibility in > >>>>>> configuring pool per cache if needed > >>>>>> > >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: > >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: > >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: > >>>>>>>>> * added a single thread remote executor service. This will > handle the > >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups > incoming > >>>>>>>>> threads and with a new executor service, each cache can process > their > >>>>>>>>> own FIFO commands concurrently. > >>>>>>>> +1000. This allows multiple updates from the same sender but to > >>>>>>>> different caches to be executed in parallel, and will speed thing > up. > >>>>>>>> > >>>>>>>> Do you intend to share a thread pool between the invocations > handlers of > >>>>>>>> the various caches, or do they each have their own thread pool ? > Or is > >>>>>>>> this configurable ? > >>>>>>>> > >>>>>>> That is question that cross my mind and I don't have any idea what > would > >>>>>>> be the best. So, for now, I will leave the thread pool shared > between > >>>>>>> the handlers. > >>>>>>> > >>>>>>> Never thought to make it configurable, but maybe that is the best > >>>>>>> option. And maybe, it should be possible to have different > max-thread > >>>>>>> size per cache. For example: > >>>>>>> > >>>>>>> * all caches using this remote executor will share the same > instance > >>>>>>> > >>>>>>> > >>>>>>> * all caches using this remote executor will create their own > thread > >>>>>>> pool with max-threads equals to 1 > >>>>>>> >>>>>>> max-threads=1 .../> > >>>>>>> > >>>>>>> * all caches using this remote executor will create their own > thread > >>>>>>> pool with max-threads equals to 1000 > >>>>>>> >>>>>>> max-thread=1000 .../> > >>>>>>> > >>>>>>> is this what you have in mind? comments? > >>>>>>> > >>>>>>> Cheers, > >>>>>>> Pedro > >>>>>>> _______________________________________________ > >>>>>>> infinispan-dev mailing list > >>>>>>> infinispan-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >> > >> > > > > -- > > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141108/ce6884fb/attachment-0001.html From rvansa at redhat.com Mon Nov 10 04:22:05 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 10 Nov 2014 10:22:05 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <545CC8C3.40507@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> Message-ID: <546083BD.2080303@redhat.com> On 11/07/2014 02:27 PM, Bela Ban wrote: > On 07/11/14 13:45, Radim Vansa wrote: >> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >> >> Sure, AOP is another approach. However, besided another limitations, >> Byteman rules are quite fragile with respect to different versions: if >> you're injecting code based on internal implementation method, when the >> name/signature changes, the rule is broken. Sometimes you even have to >> use AT LINE to formulate the injection point. > Right. This is the same problem though as when support needs to create a > (e.f. one-off) patch to be applied by a customer: they need to grab the > exact same version the customer is running. > > So each diagnosis package would have to be dependent on the version (of > JGroups or JDG) used. Regardless of whether custom rules are added by a > support engineer, this has to be tested anyway before sending it off to > the customer. > >> Would you accept a compile-time dependency to some annotations package >> in JGroups that could 'tag' the injection points? The idea is that >> anyone changing the source code would move the injection point >> annotations as well. > You mean something like this ? > > @InjectionPoint("down") public void down(Event e) > > or > > @InjectingPoint ("num_msgs_sent") > protected int num_msgs_sent; > > No, this won't work... how would you do that ? Yes, this is the annotation syntax I had in mind, though, I was thinking about more high-level abstraction what's happening than just marking down injection points. Such as @ReceivedData public void receive(@From Address sender, byte[] data, int offset, @Size int length) {...} @ProcessingMessage protected void passMessageUp(@Message msg, ...) { ... } @ProcessingBatch protected void deliverBatch(@Batch MessageBatch batch) { ... } > > I don't really like this, on a general principle: AOP should *not* have > to change the src code in order to work. And the fact of the matter is > that you won't be able to identify *all* injection points beforehand... > unless you want to sprinkle your code with annotations. I have to agree with the fact that AOP should not have to change source. I had a special case in mind, that is tied to JGroups inspection and offers a way the monitoring with zero overhead when the monitoring is not in place. There, you'd just conceptually describe what JGroups does. > > >> I was already thinking about this in relation with Message Flow Tracer >> [1] (not working right now as the JGroups have changed since I was >> writing that)? > I took a quick look: nice ! > > This is exactly what I meant. Should be some sort of rule base in a VCS, > to which support engineers add rules when they have a case which > requires it and they deem it to be generally useful. > > Re API changes: doesn't Byteman have functionality which can check a > rule set against a code base (offline), to find out incompatibilities ? > Something like a static rule checker ? Right, this is possible - but you won't find if you've added another place that should be checked (e.g. MFT has to determine whether now you're processing a whole batch, or message alone - when you add a functionality to grab some stored messages and start processing them, as you do in UNICASTx, you won't spot that automatically). Beyond that, there are many false positives. E.g. if you have a never terminating loop in Runnable.run(), there is no place to inject the AT EXIT code and Byteman complains. In the end, human intervention is always required. Radim > >> Roman Macor is right now updating the rules and I was >> hoping that we could insert annotations into JGroups that would be used >> instead of the rules (I was already considering different AOP framework >> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). > Yes, I've also run into this before, not really nice. > >> Radim >> >> [1] https://github.com/rvansa/message-flow-tracer >> [2] https://issues.jboss.org/browse/BYTEMAN-237 >> >> On 11/07/2014 01:21 PM, Bela Ban wrote: >>> Hi Radim, >>> >>> no I haven't. However, you can replace the thread pools used by JGroups >>> and use custom pools. >>> >>> I like another idea better: inject Byteman code at runtime that keeps >>> track of this, and *other useful stats as well*. >>> >>> It would be very useful to support if we could ship a package to a >>> customer that is injected into their running system and grabs all the >>> vital stats we need for a few minutes, then removes itself again and >>> those stats are then sent to use as a ZIP file. >>> The good thing about byteman is that it can remove itself without a >>> trace; ie. there's no overhead before / after running byteman. >>> >>> >>> On 07/11/14 09:31, Radim Vansa wrote: >>>> Btw., have you ever considered checks if a thread returns to pool >>>> reasonably often? Some of the other datagrids use this, though there's >>>> not much how to react upon that beyond printing out stack traces (but >>>> you can at least report to management that some node seems to be broken). >>>> >>>> Radim >>>> >>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>> That's exactly what I suggested. No config gives you a shared global >>>>> thread pool for all caches. >>>>> >>>>> Those caches which need a separate pool can do that via configuration >>>>> (and of course also programmatically) >>>>> >>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>> default unless there are extremely good reasons for not doing so. >>>>>> >>>>>> Tristan >>>>>> >>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>> >>>>>>> Radim >>>>>>> >>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>> section ? >>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>> and not the global thread pool >>>>>>>> >>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>> configuring pool per cache if needed >>>>>>>> >>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>> >>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>> this configurable ? >>>>>>>>>> >>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>> the handlers. >>>>>>>>> >>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>> size per cache. For example: >>>>>>>>> >>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>> >>>>>>>>> >>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>> pool with max-threads equals to 1 >>>>>>>>> >>>>>>>> max-threads=1 .../> >>>>>>>>> >>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>> >>>>>>>> max-thread=1000 .../> >>>>>>>>> >>>>>>>>> is this what you have in mind? comments? >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Pedro >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >> -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Mon Nov 10 05:05:49 2014 From: bban at redhat.com (Bela Ban) Date: Mon, 10 Nov 2014 11:05:49 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <546083BD.2080303@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> Message-ID: <54608DFD.7070502@redhat.com> Does Byteman allow you to use annotations as injection points ? Didn't know that. Can you show a sample RULE ? On 10/11/14 10:22, Radim Vansa wrote: > On 11/07/2014 02:27 PM, Bela Ban wrote: >> On 07/11/14 13:45, Radim Vansa wrote: >>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>> >>> Sure, AOP is another approach. However, besided another limitations, >>> Byteman rules are quite fragile with respect to different versions: if >>> you're injecting code based on internal implementation method, when the >>> name/signature changes, the rule is broken. Sometimes you even have to >>> use AT LINE to formulate the injection point. >> Right. This is the same problem though as when support needs to create a >> (e.f. one-off) patch to be applied by a customer: they need to grab the >> exact same version the customer is running. >> >> So each diagnosis package would have to be dependent on the version (of >> JGroups or JDG) used. Regardless of whether custom rules are added by a >> support engineer, this has to be tested anyway before sending it off to >> the customer. >> >>> Would you accept a compile-time dependency to some annotations package >>> in JGroups that could 'tag' the injection points? The idea is that >>> anyone changing the source code would move the injection point >>> annotations as well. >> You mean something like this ? >> >> @InjectionPoint("down") public void down(Event e) >> >> or >> >> @InjectingPoint ("num_msgs_sent") >> protected int num_msgs_sent; >> >> No, this won't work... how would you do that ? > > Yes, this is the annotation syntax I had in mind, though, I was thinking > about more high-level abstraction what's happening than just marking > down injection points. > Such as > > @ReceivedData > public void receive(@From Address sender, byte[] data, int offset, @Size > int length) {...} > > @ProcessingMessage > protected void passMessageUp(@Message msg, ...) { ... } > > @ProcessingBatch > protected void deliverBatch(@Batch MessageBatch batch) { ... } > > >> >> I don't really like this, on a general principle: AOP should *not* have >> to change the src code in order to work. And the fact of the matter is >> that you won't be able to identify *all* injection points beforehand... >> unless you want to sprinkle your code with annotations. > > I have to agree with the fact that AOP should not have to change source. > I had a special case in mind, that is tied to JGroups inspection and > offers a way the monitoring with zero overhead when the monitoring is > not in place. There, you'd just conceptually describe what JGroups does. > >> >> >>> I was already thinking about this in relation with Message Flow Tracer >>> [1] (not working right now as the JGroups have changed since I was >>> writing that)? >> I took a quick look: nice ! >> >> This is exactly what I meant. Should be some sort of rule base in a VCS, >> to which support engineers add rules when they have a case which >> requires it and they deem it to be generally useful. >> >> Re API changes: doesn't Byteman have functionality which can check a >> rule set against a code base (offline), to find out incompatibilities ? >> Something like a static rule checker ? > > Right, this is possible - but you won't find if you've added another > place that should be checked (e.g. MFT has to determine whether now > you're processing a whole batch, or message alone - when you add a > functionality to grab some stored messages and start processing them, as > you do in UNICASTx, you won't spot that automatically). > > Beyond that, there are many false positives. E.g. if you have a never > terminating loop in Runnable.run(), there is no place to inject the AT > EXIT code and Byteman complains. > > In the end, human intervention is always required. > > Radim > >> >>> Roman Macor is right now updating the rules and I was >>> hoping that we could insert annotations into JGroups that would be used >>> instead of the rules (I was already considering different AOP framework >>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >> Yes, I've also run into this before, not really nice. >> >>> Radim >>> >>> [1] https://github.com/rvansa/message-flow-tracer >>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>> >>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>> Hi Radim, >>>> >>>> no I haven't. However, you can replace the thread pools used by JGroups >>>> and use custom pools. >>>> >>>> I like another idea better: inject Byteman code at runtime that keeps >>>> track of this, and *other useful stats as well*. >>>> >>>> It would be very useful to support if we could ship a package to a >>>> customer that is injected into their running system and grabs all the >>>> vital stats we need for a few minutes, then removes itself again and >>>> those stats are then sent to use as a ZIP file. >>>> The good thing about byteman is that it can remove itself without a >>>> trace; ie. there's no overhead before / after running byteman. >>>> >>>> >>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>> Btw., have you ever considered checks if a thread returns to pool >>>>> reasonably often? Some of the other datagrids use this, though there's >>>>> not much how to react upon that beyond printing out stack traces (but >>>>> you can at least report to management that some node seems to be broken). >>>>> >>>>> Radim >>>>> >>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>> thread pool for all caches. >>>>>> >>>>>> Those caches which need a separate pool can do that via configuration >>>>>> (and of course also programmatically) >>>>>> >>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>> >>>>>>> Tristan >>>>>>> >>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>> >>>>>>>> Radim >>>>>>>> >>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>> section ? >>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>> and not the global thread pool >>>>>>>>> >>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>> configuring pool per cache if needed >>>>>>>>> >>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>> >>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>> this configurable ? >>>>>>>>>>> >>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>> the handlers. >>>>>>>>>> >>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>> size per cache. For example: >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>> >>>>>>>>> max-threads=1 .../> >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>> >>>>>>>>> max-thread=1000 .../> >>>>>>>>>> >>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> Pedro >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>> > > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Mon Nov 10 05:33:29 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 10 Nov 2014 11:33:29 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <54608DFD.7070502@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> Message-ID: <54609479.4060501@redhat.com> No way I'd be aware of (you can specify the rule directly in annotation, but that's not what I'd like to do). Though, I don't think it would be too complicated to implement. But as I've said, I was inclining towards another AOP frameworks, or more low-level solutions such as Javassist. For example similar tool Kamon [1] uses AspectJ Weaver. Roman, do you have the document describing pros and cons of those other AOP frameworks? [1] http://kamon.io/ On 11/10/2014 11:05 AM, Bela Ban wrote: > Does Byteman allow you to use annotations as injection points ? Didn't > know that. Can you show a sample RULE ? > > On 10/11/14 10:22, Radim Vansa wrote: >> On 11/07/2014 02:27 PM, Bela Ban wrote: >>> On 07/11/14 13:45, Radim Vansa wrote: >>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>> >>>> Sure, AOP is another approach. However, besided another limitations, >>>> Byteman rules are quite fragile with respect to different versions: if >>>> you're injecting code based on internal implementation method, when the >>>> name/signature changes, the rule is broken. Sometimes you even have to >>>> use AT LINE to formulate the injection point. >>> Right. This is the same problem though as when support needs to create a >>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>> exact same version the customer is running. >>> >>> So each diagnosis package would have to be dependent on the version (of >>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>> support engineer, this has to be tested anyway before sending it off to >>> the customer. >>> >>>> Would you accept a compile-time dependency to some annotations package >>>> in JGroups that could 'tag' the injection points? The idea is that >>>> anyone changing the source code would move the injection point >>>> annotations as well. >>> You mean something like this ? >>> >>> @InjectionPoint("down") public void down(Event e) >>> >>> or >>> >>> @InjectingPoint ("num_msgs_sent") >>> protected int num_msgs_sent; >>> >>> No, this won't work... how would you do that ? >> Yes, this is the annotation syntax I had in mind, though, I was thinking >> about more high-level abstraction what's happening than just marking >> down injection points. >> Such as >> >> @ReceivedData >> public void receive(@From Address sender, byte[] data, int offset, @Size >> int length) {...} >> >> @ProcessingMessage >> protected void passMessageUp(@Message msg, ...) { ... } >> >> @ProcessingBatch >> protected void deliverBatch(@Batch MessageBatch batch) { ... } >> >> >>> I don't really like this, on a general principle: AOP should *not* have >>> to change the src code in order to work. And the fact of the matter is >>> that you won't be able to identify *all* injection points beforehand... >>> unless you want to sprinkle your code with annotations. >> I have to agree with the fact that AOP should not have to change source. >> I had a special case in mind, that is tied to JGroups inspection and >> offers a way the monitoring with zero overhead when the monitoring is >> not in place. There, you'd just conceptually describe what JGroups does. >> >>> >>>> I was already thinking about this in relation with Message Flow Tracer >>>> [1] (not working right now as the JGroups have changed since I was >>>> writing that)? >>> I took a quick look: nice ! >>> >>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>> to which support engineers add rules when they have a case which >>> requires it and they deem it to be generally useful. >>> >>> Re API changes: doesn't Byteman have functionality which can check a >>> rule set against a code base (offline), to find out incompatibilities ? >>> Something like a static rule checker ? >> Right, this is possible - but you won't find if you've added another >> place that should be checked (e.g. MFT has to determine whether now >> you're processing a whole batch, or message alone - when you add a >> functionality to grab some stored messages and start processing them, as >> you do in UNICASTx, you won't spot that automatically). >> >> Beyond that, there are many false positives. E.g. if you have a never >> terminating loop in Runnable.run(), there is no place to inject the AT >> EXIT code and Byteman complains. >> >> In the end, human intervention is always required. >> >> Radim >> >>>> Roman Macor is right now updating the rules and I was >>>> hoping that we could insert annotations into JGroups that would be used >>>> instead of the rules (I was already considering different AOP framework >>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>> Yes, I've also run into this before, not really nice. >>> >>>> Radim >>>> >>>> [1] https://github.com/rvansa/message-flow-tracer >>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>> >>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>> Hi Radim, >>>>> >>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>> and use custom pools. >>>>> >>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>> track of this, and *other useful stats as well*. >>>>> >>>>> It would be very useful to support if we could ship a package to a >>>>> customer that is injected into their running system and grabs all the >>>>> vital stats we need for a few minutes, then removes itself again and >>>>> those stats are then sent to use as a ZIP file. >>>>> The good thing about byteman is that it can remove itself without a >>>>> trace; ie. there's no overhead before / after running byteman. >>>>> >>>>> >>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>> you can at least report to management that some node seems to be broken). >>>>>> >>>>>> Radim >>>>>> >>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>> thread pool for all caches. >>>>>>> >>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>> (and of course also programmatically) >>>>>>> >>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>> >>>>>>>> Tristan >>>>>>>> >>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>> >>>>>>>>> Radim >>>>>>>>> >>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>> section ? >>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>> and not the global thread pool >>>>>>>>>> >>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>> configuring pool per cache if needed >>>>>>>>>> >>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>> >>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>> this configurable ? >>>>>>>>>>>> >>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>> the handlers. >>>>>>>>>>> >>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>> size per cache. For example: >>>>>>>>>>> >>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>> >>>>>>>>>> max-threads=1 .../> >>>>>>>>>>> >>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>> >>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>> >>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>> >>>>>>>>>>> Cheers, >>>>>>>>>>> Pedro >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >> -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Mon Nov 10 05:41:47 2014 From: bban at redhat.com (Bela Ban) Date: Mon, 10 Nov 2014 11:41:47 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <54609479.4060501@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> <54609479.4060501@redhat.com> Message-ID: <5460966B.6010305@redhat.com> On 10/11/14 11:33, Radim Vansa wrote: > No way I'd be aware of (you can specify the rule directly in annotation, > but that's not what I'd like to do). Though, I don't think it would be > too complicated to implement. > But as I've said, I was inclining towards another AOP frameworks, or > more low-level solutions such as Javassist. What's the benefit of this ? I don't think you could define the joinpoint in a strongly-typed fashion, so refactoring would not work either if you for example change a method name. Or would it ? > For example similar tool > Kamon [1] uses AspectJ Weaver. > > Roman, do you have the document describing pros and cons of those other > AOP frameworks? > > [1] http://kamon.io/ > > On 11/10/2014 11:05 AM, Bela Ban wrote: >> Does Byteman allow you to use annotations as injection points ? Didn't >> know that. Can you show a sample RULE ? >> >> On 10/11/14 10:22, Radim Vansa wrote: >>> On 11/07/2014 02:27 PM, Bela Ban wrote: >>>> On 07/11/14 13:45, Radim Vansa wrote: >>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>>> >>>>> Sure, AOP is another approach. However, besided another limitations, >>>>> Byteman rules are quite fragile with respect to different versions: if >>>>> you're injecting code based on internal implementation method, when the >>>>> name/signature changes, the rule is broken. Sometimes you even have to >>>>> use AT LINE to formulate the injection point. >>>> Right. This is the same problem though as when support needs to create a >>>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>>> exact same version the customer is running. >>>> >>>> So each diagnosis package would have to be dependent on the version (of >>>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>>> support engineer, this has to be tested anyway before sending it off to >>>> the customer. >>>> >>>>> Would you accept a compile-time dependency to some annotations package >>>>> in JGroups that could 'tag' the injection points? The idea is that >>>>> anyone changing the source code would move the injection point >>>>> annotations as well. >>>> You mean something like this ? >>>> >>>> @InjectionPoint("down") public void down(Event e) >>>> >>>> or >>>> >>>> @InjectingPoint ("num_msgs_sent") >>>> protected int num_msgs_sent; >>>> >>>> No, this won't work... how would you do that ? >>> Yes, this is the annotation syntax I had in mind, though, I was thinking >>> about more high-level abstraction what's happening than just marking >>> down injection points. >>> Such as >>> >>> @ReceivedData >>> public void receive(@From Address sender, byte[] data, int offset, @Size >>> int length) {...} >>> >>> @ProcessingMessage >>> protected void passMessageUp(@Message msg, ...) { ... } >>> >>> @ProcessingBatch >>> protected void deliverBatch(@Batch MessageBatch batch) { ... } >>> >>> >>>> I don't really like this, on a general principle: AOP should *not* have >>>> to change the src code in order to work. And the fact of the matter is >>>> that you won't be able to identify *all* injection points beforehand... >>>> unless you want to sprinkle your code with annotations. >>> I have to agree with the fact that AOP should not have to change source. >>> I had a special case in mind, that is tied to JGroups inspection and >>> offers a way the monitoring with zero overhead when the monitoring is >>> not in place. There, you'd just conceptually describe what JGroups does. >>> >>>> >>>>> I was already thinking about this in relation with Message Flow Tracer >>>>> [1] (not working right now as the JGroups have changed since I was >>>>> writing that)? >>>> I took a quick look: nice ! >>>> >>>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>>> to which support engineers add rules when they have a case which >>>> requires it and they deem it to be generally useful. >>>> >>>> Re API changes: doesn't Byteman have functionality which can check a >>>> rule set against a code base (offline), to find out incompatibilities ? >>>> Something like a static rule checker ? >>> Right, this is possible - but you won't find if you've added another >>> place that should be checked (e.g. MFT has to determine whether now >>> you're processing a whole batch, or message alone - when you add a >>> functionality to grab some stored messages and start processing them, as >>> you do in UNICASTx, you won't spot that automatically). >>> >>> Beyond that, there are many false positives. E.g. if you have a never >>> terminating loop in Runnable.run(), there is no place to inject the AT >>> EXIT code and Byteman complains. >>> >>> In the end, human intervention is always required. >>> >>> Radim >>> >>>>> Roman Macor is right now updating the rules and I was >>>>> hoping that we could insert annotations into JGroups that would be used >>>>> instead of the rules (I was already considering different AOP framework >>>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>>> Yes, I've also run into this before, not really nice. >>>> >>>>> Radim >>>>> >>>>> [1] https://github.com/rvansa/message-flow-tracer >>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>>> >>>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>>> Hi Radim, >>>>>> >>>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>>> and use custom pools. >>>>>> >>>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>>> track of this, and *other useful stats as well*. >>>>>> >>>>>> It would be very useful to support if we could ship a package to a >>>>>> customer that is injected into their running system and grabs all the >>>>>> vital stats we need for a few minutes, then removes itself again and >>>>>> those stats are then sent to use as a ZIP file. >>>>>> The good thing about byteman is that it can remove itself without a >>>>>> trace; ie. there's no overhead before / after running byteman. >>>>>> >>>>>> >>>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>>> you can at least report to management that some node seems to be broken). >>>>>>> >>>>>>> Radim >>>>>>> >>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>>> thread pool for all caches. >>>>>>>> >>>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>>> (and of course also programmatically) >>>>>>>> >>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>>> >>>>>>>>> Tristan >>>>>>>>> >>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>>> >>>>>>>>>> Radim >>>>>>>>>> >>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>>> section ? >>>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>>> and not the global thread pool >>>>>>>>>>> >>>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>>> configuring pool per cache if needed >>>>>>>>>>> >>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>>> >>>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>>> this configurable ? >>>>>>>>>>>>> >>>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>>> the handlers. >>>>>>>>>>>> >>>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>>> size per cache. For example: >>>>>>>>>>>> >>>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>>> >>>>>>>>>>> max-threads=1 .../> >>>>>>>>>>>> >>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>>> >>>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>>> >>>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>>> >>>>>>>>>>>> Cheers, >>>>>>>>>>>> Pedro >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> >>> > > -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Mon Nov 10 05:49:02 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 10 Nov 2014 10:49:02 +0000 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <5460966B.6010305@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> <54609479.4060501@redhat.com> <5460966B.6010305@redhat.com> Message-ID: Could we just have a listener interface rather than playing with AOP? You don't want to support those libraries ;-) For example Hibernate is deprecating javassist. Sanne On 10 November 2014 10:41, Bela Ban wrote: > > > On 10/11/14 11:33, Radim Vansa wrote: >> No way I'd be aware of (you can specify the rule directly in annotation, >> but that's not what I'd like to do). Though, I don't think it would be >> too complicated to implement. >> But as I've said, I was inclining towards another AOP frameworks, or >> more low-level solutions such as Javassist. > > What's the benefit of this ? I don't think you could define the > joinpoint in a strongly-typed fashion, so refactoring would not work > either if you for example change a method name. Or would it ? > >> For example similar tool >> Kamon [1] uses AspectJ Weaver. >> >> Roman, do you have the document describing pros and cons of those other >> AOP frameworks? >> >> [1] http://kamon.io/ >> >> On 11/10/2014 11:05 AM, Bela Ban wrote: >>> Does Byteman allow you to use annotations as injection points ? Didn't >>> know that. Can you show a sample RULE ? >>> >>> On 10/11/14 10:22, Radim Vansa wrote: >>>> On 11/07/2014 02:27 PM, Bela Ban wrote: >>>>> On 07/11/14 13:45, Radim Vansa wrote: >>>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>>>> >>>>>> Sure, AOP is another approach. However, besided another limitations, >>>>>> Byteman rules are quite fragile with respect to different versions: if >>>>>> you're injecting code based on internal implementation method, when the >>>>>> name/signature changes, the rule is broken. Sometimes you even have to >>>>>> use AT LINE to formulate the injection point. >>>>> Right. This is the same problem though as when support needs to create a >>>>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>>>> exact same version the customer is running. >>>>> >>>>> So each diagnosis package would have to be dependent on the version (of >>>>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>>>> support engineer, this has to be tested anyway before sending it off to >>>>> the customer. >>>>> >>>>>> Would you accept a compile-time dependency to some annotations package >>>>>> in JGroups that could 'tag' the injection points? The idea is that >>>>>> anyone changing the source code would move the injection point >>>>>> annotations as well. >>>>> You mean something like this ? >>>>> >>>>> @InjectionPoint("down") public void down(Event e) >>>>> >>>>> or >>>>> >>>>> @InjectingPoint ("num_msgs_sent") >>>>> protected int num_msgs_sent; >>>>> >>>>> No, this won't work... how would you do that ? >>>> Yes, this is the annotation syntax I had in mind, though, I was thinking >>>> about more high-level abstraction what's happening than just marking >>>> down injection points. >>>> Such as >>>> >>>> @ReceivedData >>>> public void receive(@From Address sender, byte[] data, int offset, @Size >>>> int length) {...} >>>> >>>> @ProcessingMessage >>>> protected void passMessageUp(@Message msg, ...) { ... } >>>> >>>> @ProcessingBatch >>>> protected void deliverBatch(@Batch MessageBatch batch) { ... } >>>> >>>> >>>>> I don't really like this, on a general principle: AOP should *not* have >>>>> to change the src code in order to work. And the fact of the matter is >>>>> that you won't be able to identify *all* injection points beforehand... >>>>> unless you want to sprinkle your code with annotations. >>>> I have to agree with the fact that AOP should not have to change source. >>>> I had a special case in mind, that is tied to JGroups inspection and >>>> offers a way the monitoring with zero overhead when the monitoring is >>>> not in place. There, you'd just conceptually describe what JGroups does. >>>> >>>>> >>>>>> I was already thinking about this in relation with Message Flow Tracer >>>>>> [1] (not working right now as the JGroups have changed since I was >>>>>> writing that)? >>>>> I took a quick look: nice ! >>>>> >>>>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>>>> to which support engineers add rules when they have a case which >>>>> requires it and they deem it to be generally useful. >>>>> >>>>> Re API changes: doesn't Byteman have functionality which can check a >>>>> rule set against a code base (offline), to find out incompatibilities ? >>>>> Something like a static rule checker ? >>>> Right, this is possible - but you won't find if you've added another >>>> place that should be checked (e.g. MFT has to determine whether now >>>> you're processing a whole batch, or message alone - when you add a >>>> functionality to grab some stored messages and start processing them, as >>>> you do in UNICASTx, you won't spot that automatically). >>>> >>>> Beyond that, there are many false positives. E.g. if you have a never >>>> terminating loop in Runnable.run(), there is no place to inject the AT >>>> EXIT code and Byteman complains. >>>> >>>> In the end, human intervention is always required. >>>> >>>> Radim >>>> >>>>>> Roman Macor is right now updating the rules and I was >>>>>> hoping that we could insert annotations into JGroups that would be used >>>>>> instead of the rules (I was already considering different AOP framework >>>>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>>>> Yes, I've also run into this before, not really nice. >>>>> >>>>>> Radim >>>>>> >>>>>> [1] https://github.com/rvansa/message-flow-tracer >>>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>>>> >>>>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>>>> Hi Radim, >>>>>>> >>>>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>>>> and use custom pools. >>>>>>> >>>>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>>>> track of this, and *other useful stats as well*. >>>>>>> >>>>>>> It would be very useful to support if we could ship a package to a >>>>>>> customer that is injected into their running system and grabs all the >>>>>>> vital stats we need for a few minutes, then removes itself again and >>>>>>> those stats are then sent to use as a ZIP file. >>>>>>> The good thing about byteman is that it can remove itself without a >>>>>>> trace; ie. there's no overhead before / after running byteman. >>>>>>> >>>>>>> >>>>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>>>> you can at least report to management that some node seems to be broken). >>>>>>>> >>>>>>>> Radim >>>>>>>> >>>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>>>> thread pool for all caches. >>>>>>>>> >>>>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>>>> (and of course also programmatically) >>>>>>>>> >>>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>>>> >>>>>>>>>> Tristan >>>>>>>>>> >>>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>>>> >>>>>>>>>>> Radim >>>>>>>>>>> >>>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>>>> section ? >>>>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>>>> and not the global thread pool >>>>>>>>>>>> >>>>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>>>> configuring pool per cache if needed >>>>>>>>>>>> >>>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>>>> this configurable ? >>>>>>>>>>>>>> >>>>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>>>> the handlers. >>>>>>>>>>>>> >>>>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>>>> size per cache. For example: >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>>>> >>>>>>>>>>>> max-threads=1 .../> >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>>>> >>>>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>>>> >>>>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>>>> >>>>>>>>>>>>> Cheers, >>>>>>>>>>>>> Pedro >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>> >>>> >> >> > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From bban at redhat.com Mon Nov 10 06:08:37 2014 From: bban at redhat.com (Bela Ban) Date: Mon, 10 Nov 2014 12:08:37 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> <54609479.4060501@redhat.com> <5460966B.6010305@redhat.com> Message-ID: <54609CB5.9030905@redhat.com> The drawbacks of a listener interface are: - You don't know up-front which methods or attributes you want to listen on - You need code in your application to provide listener registration and instrument your code to notify listeners These add code to the app which has nothing to do with operational semantics and IMO eliminating this code is exactly one of the strong points of AOP. On 10/11/14 11:49, Sanne Grinovero wrote: > Could we just have a listener interface rather than playing with AOP? > > You don't want to support those libraries ;-) For example Hibernate is > deprecating javassist. > > Sanne > > On 10 November 2014 10:41, Bela Ban wrote: >> >> >> On 10/11/14 11:33, Radim Vansa wrote: >>> No way I'd be aware of (you can specify the rule directly in annotation, >>> but that's not what I'd like to do). Though, I don't think it would be >>> too complicated to implement. >>> But as I've said, I was inclining towards another AOP frameworks, or >>> more low-level solutions such as Javassist. >> >> What's the benefit of this ? I don't think you could define the >> joinpoint in a strongly-typed fashion, so refactoring would not work >> either if you for example change a method name. Or would it ? >> >>> For example similar tool >>> Kamon [1] uses AspectJ Weaver. >>> >>> Roman, do you have the document describing pros and cons of those other >>> AOP frameworks? >>> >>> [1] http://kamon.io/ >>> >>> On 11/10/2014 11:05 AM, Bela Ban wrote: >>>> Does Byteman allow you to use annotations as injection points ? Didn't >>>> know that. Can you show a sample RULE ? >>>> >>>> On 10/11/14 10:22, Radim Vansa wrote: >>>>> On 11/07/2014 02:27 PM, Bela Ban wrote: >>>>>> On 07/11/14 13:45, Radim Vansa wrote: >>>>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>>>>> >>>>>>> Sure, AOP is another approach. However, besided another limitations, >>>>>>> Byteman rules are quite fragile with respect to different versions: if >>>>>>> you're injecting code based on internal implementation method, when the >>>>>>> name/signature changes, the rule is broken. Sometimes you even have to >>>>>>> use AT LINE to formulate the injection point. >>>>>> Right. This is the same problem though as when support needs to create a >>>>>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>>>>> exact same version the customer is running. >>>>>> >>>>>> So each diagnosis package would have to be dependent on the version (of >>>>>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>>>>> support engineer, this has to be tested anyway before sending it off to >>>>>> the customer. >>>>>> >>>>>>> Would you accept a compile-time dependency to some annotations package >>>>>>> in JGroups that could 'tag' the injection points? The idea is that >>>>>>> anyone changing the source code would move the injection point >>>>>>> annotations as well. >>>>>> You mean something like this ? >>>>>> >>>>>> @InjectionPoint("down") public void down(Event e) >>>>>> >>>>>> or >>>>>> >>>>>> @InjectingPoint ("num_msgs_sent") >>>>>> protected int num_msgs_sent; >>>>>> >>>>>> No, this won't work... how would you do that ? >>>>> Yes, this is the annotation syntax I had in mind, though, I was thinking >>>>> about more high-level abstraction what's happening than just marking >>>>> down injection points. >>>>> Such as >>>>> >>>>> @ReceivedData >>>>> public void receive(@From Address sender, byte[] data, int offset, @Size >>>>> int length) {...} >>>>> >>>>> @ProcessingMessage >>>>> protected void passMessageUp(@Message msg, ...) { ... } >>>>> >>>>> @ProcessingBatch >>>>> protected void deliverBatch(@Batch MessageBatch batch) { ... } >>>>> >>>>> >>>>>> I don't really like this, on a general principle: AOP should *not* have >>>>>> to change the src code in order to work. And the fact of the matter is >>>>>> that you won't be able to identify *all* injection points beforehand... >>>>>> unless you want to sprinkle your code with annotations. >>>>> I have to agree with the fact that AOP should not have to change source. >>>>> I had a special case in mind, that is tied to JGroups inspection and >>>>> offers a way the monitoring with zero overhead when the monitoring is >>>>> not in place. There, you'd just conceptually describe what JGroups does. >>>>> >>>>>> >>>>>>> I was already thinking about this in relation with Message Flow Tracer >>>>>>> [1] (not working right now as the JGroups have changed since I was >>>>>>> writing that)? >>>>>> I took a quick look: nice ! >>>>>> >>>>>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>>>>> to which support engineers add rules when they have a case which >>>>>> requires it and they deem it to be generally useful. >>>>>> >>>>>> Re API changes: doesn't Byteman have functionality which can check a >>>>>> rule set against a code base (offline), to find out incompatibilities ? >>>>>> Something like a static rule checker ? >>>>> Right, this is possible - but you won't find if you've added another >>>>> place that should be checked (e.g. MFT has to determine whether now >>>>> you're processing a whole batch, or message alone - when you add a >>>>> functionality to grab some stored messages and start processing them, as >>>>> you do in UNICASTx, you won't spot that automatically). >>>>> >>>>> Beyond that, there are many false positives. E.g. if you have a never >>>>> terminating loop in Runnable.run(), there is no place to inject the AT >>>>> EXIT code and Byteman complains. >>>>> >>>>> In the end, human intervention is always required. >>>>> >>>>> Radim >>>>> >>>>>>> Roman Macor is right now updating the rules and I was >>>>>>> hoping that we could insert annotations into JGroups that would be used >>>>>>> instead of the rules (I was already considering different AOP framework >>>>>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>>>>> Yes, I've also run into this before, not really nice. >>>>>> >>>>>>> Radim >>>>>>> >>>>>>> [1] https://github.com/rvansa/message-flow-tracer >>>>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>>>>> >>>>>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>>>>> Hi Radim, >>>>>>>> >>>>>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>>>>> and use custom pools. >>>>>>>> >>>>>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>>>>> track of this, and *other useful stats as well*. >>>>>>>> >>>>>>>> It would be very useful to support if we could ship a package to a >>>>>>>> customer that is injected into their running system and grabs all the >>>>>>>> vital stats we need for a few minutes, then removes itself again and >>>>>>>> those stats are then sent to use as a ZIP file. >>>>>>>> The good thing about byteman is that it can remove itself without a >>>>>>>> trace; ie. there's no overhead before / after running byteman. >>>>>>>> >>>>>>>> >>>>>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>>>>> you can at least report to management that some node seems to be broken). >>>>>>>>> >>>>>>>>> Radim >>>>>>>>> >>>>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>>>>> thread pool for all caches. >>>>>>>>>> >>>>>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>>>>> (and of course also programmatically) >>>>>>>>>> >>>>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>>>>> >>>>>>>>>>> Tristan >>>>>>>>>>> >>>>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>>>>> >>>>>>>>>>>> Radim >>>>>>>>>>>> >>>>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>>>>> section ? >>>>>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>>>>> and not the global thread pool >>>>>>>>>>>>> >>>>>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>>>>> configuring pool per cache if needed >>>>>>>>>>>>> >>>>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>>>>> this configurable ? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>>>>> the handlers. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>>>>> size per cache. For example: >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>>>>> >>>>>>>>>>>>> max-threads=1 .../> >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>>>>> >>>>>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>>>>> >>>>>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Cheers, >>>>>>>>>>>>>> Pedro >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> >>>>> >>> >>> >> >> -- >> Bela Ban, JGroups lead (http://www.jgroups.org) >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Mon Nov 10 06:17:34 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 10 Nov 2014 12:17:34 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> <54609479.4060501@redhat.com> <5460966B.6010305@redhat.com> Message-ID: <54609ECE.3060409@redhat.com> Could you tell (or link) us why? R On 11/10/2014 11:49 AM, Sanne Grinovero wrote: > Could we just have a listener interface rather than playing with AOP? > > You don't want to support those libraries ;-) For example Hibernate is > deprecating javassist. > > Sanne > > On 10 November 2014 10:41, Bela Ban wrote: >> >> On 10/11/14 11:33, Radim Vansa wrote: >>> No way I'd be aware of (you can specify the rule directly in annotation, >>> but that's not what I'd like to do). Though, I don't think it would be >>> too complicated to implement. >>> But as I've said, I was inclining towards another AOP frameworks, or >>> more low-level solutions such as Javassist. >> What's the benefit of this ? I don't think you could define the >> joinpoint in a strongly-typed fashion, so refactoring would not work >> either if you for example change a method name. Or would it ? >> >>> For example similar tool >>> Kamon [1] uses AspectJ Weaver. >>> >>> Roman, do you have the document describing pros and cons of those other >>> AOP frameworks? >>> >>> [1] http://kamon.io/ >>> >>> On 11/10/2014 11:05 AM, Bela Ban wrote: >>>> Does Byteman allow you to use annotations as injection points ? Didn't >>>> know that. Can you show a sample RULE ? >>>> >>>> On 10/11/14 10:22, Radim Vansa wrote: >>>>> On 11/07/2014 02:27 PM, Bela Ban wrote: >>>>>> On 07/11/14 13:45, Radim Vansa wrote: >>>>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>>>>> >>>>>>> Sure, AOP is another approach. However, besided another limitations, >>>>>>> Byteman rules are quite fragile with respect to different versions: if >>>>>>> you're injecting code based on internal implementation method, when the >>>>>>> name/signature changes, the rule is broken. Sometimes you even have to >>>>>>> use AT LINE to formulate the injection point. >>>>>> Right. This is the same problem though as when support needs to create a >>>>>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>>>>> exact same version the customer is running. >>>>>> >>>>>> So each diagnosis package would have to be dependent on the version (of >>>>>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>>>>> support engineer, this has to be tested anyway before sending it off to >>>>>> the customer. >>>>>> >>>>>>> Would you accept a compile-time dependency to some annotations package >>>>>>> in JGroups that could 'tag' the injection points? The idea is that >>>>>>> anyone changing the source code would move the injection point >>>>>>> annotations as well. >>>>>> You mean something like this ? >>>>>> >>>>>> @InjectionPoint("down") public void down(Event e) >>>>>> >>>>>> or >>>>>> >>>>>> @InjectingPoint ("num_msgs_sent") >>>>>> protected int num_msgs_sent; >>>>>> >>>>>> No, this won't work... how would you do that ? >>>>> Yes, this is the annotation syntax I had in mind, though, I was thinking >>>>> about more high-level abstraction what's happening than just marking >>>>> down injection points. >>>>> Such as >>>>> >>>>> @ReceivedData >>>>> public void receive(@From Address sender, byte[] data, int offset, @Size >>>>> int length) {...} >>>>> >>>>> @ProcessingMessage >>>>> protected void passMessageUp(@Message msg, ...) { ... } >>>>> >>>>> @ProcessingBatch >>>>> protected void deliverBatch(@Batch MessageBatch batch) { ... } >>>>> >>>>> >>>>>> I don't really like this, on a general principle: AOP should *not* have >>>>>> to change the src code in order to work. And the fact of the matter is >>>>>> that you won't be able to identify *all* injection points beforehand... >>>>>> unless you want to sprinkle your code with annotations. >>>>> I have to agree with the fact that AOP should not have to change source. >>>>> I had a special case in mind, that is tied to JGroups inspection and >>>>> offers a way the monitoring with zero overhead when the monitoring is >>>>> not in place. There, you'd just conceptually describe what JGroups does. >>>>> >>>>>>> I was already thinking about this in relation with Message Flow Tracer >>>>>>> [1] (not working right now as the JGroups have changed since I was >>>>>>> writing that)? >>>>>> I took a quick look: nice ! >>>>>> >>>>>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>>>>> to which support engineers add rules when they have a case which >>>>>> requires it and they deem it to be generally useful. >>>>>> >>>>>> Re API changes: doesn't Byteman have functionality which can check a >>>>>> rule set against a code base (offline), to find out incompatibilities ? >>>>>> Something like a static rule checker ? >>>>> Right, this is possible - but you won't find if you've added another >>>>> place that should be checked (e.g. MFT has to determine whether now >>>>> you're processing a whole batch, or message alone - when you add a >>>>> functionality to grab some stored messages and start processing them, as >>>>> you do in UNICASTx, you won't spot that automatically). >>>>> >>>>> Beyond that, there are many false positives. E.g. if you have a never >>>>> terminating loop in Runnable.run(), there is no place to inject the AT >>>>> EXIT code and Byteman complains. >>>>> >>>>> In the end, human intervention is always required. >>>>> >>>>> Radim >>>>> >>>>>>> Roman Macor is right now updating the rules and I was >>>>>>> hoping that we could insert annotations into JGroups that would be used >>>>>>> instead of the rules (I was already considering different AOP framework >>>>>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>>>>> Yes, I've also run into this before, not really nice. >>>>>> >>>>>>> Radim >>>>>>> >>>>>>> [1] https://github.com/rvansa/message-flow-tracer >>>>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>>>>> >>>>>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>>>>> Hi Radim, >>>>>>>> >>>>>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>>>>> and use custom pools. >>>>>>>> >>>>>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>>>>> track of this, and *other useful stats as well*. >>>>>>>> >>>>>>>> It would be very useful to support if we could ship a package to a >>>>>>>> customer that is injected into their running system and grabs all the >>>>>>>> vital stats we need for a few minutes, then removes itself again and >>>>>>>> those stats are then sent to use as a ZIP file. >>>>>>>> The good thing about byteman is that it can remove itself without a >>>>>>>> trace; ie. there's no overhead before / after running byteman. >>>>>>>> >>>>>>>> >>>>>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>>>>> you can at least report to management that some node seems to be broken). >>>>>>>>> >>>>>>>>> Radim >>>>>>>>> >>>>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>>>>> thread pool for all caches. >>>>>>>>>> >>>>>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>>>>> (and of course also programmatically) >>>>>>>>>> >>>>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>>>>> >>>>>>>>>>> Tristan >>>>>>>>>>> >>>>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>>>>> >>>>>>>>>>>> Radim >>>>>>>>>>>> >>>>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>>>>> section ? >>>>>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>>>>> and not the global thread pool >>>>>>>>>>>>> >>>>>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>>>>> configuring pool per cache if needed >>>>>>>>>>>>> >>>>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>>>>> this configurable ? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>>>>> the handlers. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>>>>> size per cache. For example: >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>>>>> >>>>>>>>>>>>> max-threads=1 .../> >>>>>>>>>>>>>> >>>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>>>>> >>>>>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>>>>> >>>>>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Cheers, >>>>>>>>>>>>>> Pedro >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>> >>> >> -- >> Bela Ban, JGroups lead (http://www.jgroups.org) >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From rvansa at redhat.com Mon Nov 10 06:25:53 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 10 Nov 2014 12:25:53 +0100 Subject: [infinispan-dev] Thread pools monitoring In-Reply-To: <5460966B.6010305@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <545CB95A.4020909@redhat.com> <545CBEF2.503@redhat.com> <545CC8C3.40507@redhat.com> <546083BD.2080303@redhat.com> <54608DFD.7070502@redhat.com> <54609479.4060501@redhat.com> <5460966B.6010305@redhat.com> Message-ID: <5460A0C1.9070806@redhat.com> On 11/10/2014 11:41 AM, Bela Ban wrote: > > On 10/11/14 11:33, Radim Vansa wrote: >> No way I'd be aware of (you can specify the rule directly in annotation, >> but that's not what I'd like to do). Though, I don't think it would be >> too complicated to implement. >> But as I've said, I was inclining towards another AOP frameworks, or >> more low-level solutions such as Javassist. > What's the benefit of this ? I don't think you could define the > joinpoint in a strongly-typed fashion, so refactoring would not work > either if you for example change a method name. Or would it ? I am not sure if I understand the objections. It's not strongly typed, but the annotations should describe what is happening inside. When you change the behaviour of code around annotated method, you stop for a while and think whether you should describe the new code somehow. I think you can't perceive what I mean, but I can't blame you - I can't describe it well (and I could be wrong, too!). So, we'll try to code some POC and show it to you - and in case you won't accept it fallback to external description (because support for this will be needed anyway, for runtime classes etc. - annotations in JGroups and Infinispan just make this more maintainable) Radim > >> For example similar tool >> Kamon [1] uses AspectJ Weaver. >> >> Roman, do you have the document describing pros and cons of those other >> AOP frameworks? >> >> [1] http://kamon.io/ >> >> On 11/10/2014 11:05 AM, Bela Ban wrote: >>> Does Byteman allow you to use annotations as injection points ? Didn't >>> know that. Can you show a sample RULE ? >>> >>> On 10/11/14 10:22, Radim Vansa wrote: >>>> On 11/07/2014 02:27 PM, Bela Ban wrote: >>>>> On 07/11/14 13:45, Radim Vansa wrote: >>>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted. >>>>>> >>>>>> Sure, AOP is another approach. However, besided another limitations, >>>>>> Byteman rules are quite fragile with respect to different versions: if >>>>>> you're injecting code based on internal implementation method, when the >>>>>> name/signature changes, the rule is broken. Sometimes you even have to >>>>>> use AT LINE to formulate the injection point. >>>>> Right. This is the same problem though as when support needs to create a >>>>> (e.f. one-off) patch to be applied by a customer: they need to grab the >>>>> exact same version the customer is running. >>>>> >>>>> So each diagnosis package would have to be dependent on the version (of >>>>> JGroups or JDG) used. Regardless of whether custom rules are added by a >>>>> support engineer, this has to be tested anyway before sending it off to >>>>> the customer. >>>>> >>>>>> Would you accept a compile-time dependency to some annotations package >>>>>> in JGroups that could 'tag' the injection points? The idea is that >>>>>> anyone changing the source code would move the injection point >>>>>> annotations as well. >>>>> You mean something like this ? >>>>> >>>>> @InjectionPoint("down") public void down(Event e) >>>>> >>>>> or >>>>> >>>>> @InjectingPoint ("num_msgs_sent") >>>>> protected int num_msgs_sent; >>>>> >>>>> No, this won't work... how would you do that ? >>>> Yes, this is the annotation syntax I had in mind, though, I was thinking >>>> about more high-level abstraction what's happening than just marking >>>> down injection points. >>>> Such as >>>> >>>> @ReceivedData >>>> public void receive(@From Address sender, byte[] data, int offset, @Size >>>> int length) {...} >>>> >>>> @ProcessingMessage >>>> protected void passMessageUp(@Message msg, ...) { ... } >>>> >>>> @ProcessingBatch >>>> protected void deliverBatch(@Batch MessageBatch batch) { ... } >>>> >>>> >>>>> I don't really like this, on a general principle: AOP should *not* have >>>>> to change the src code in order to work. And the fact of the matter is >>>>> that you won't be able to identify *all* injection points beforehand... >>>>> unless you want to sprinkle your code with annotations. >>>> I have to agree with the fact that AOP should not have to change source. >>>> I had a special case in mind, that is tied to JGroups inspection and >>>> offers a way the monitoring with zero overhead when the monitoring is >>>> not in place. There, you'd just conceptually describe what JGroups does. >>>> >>>>>> I was already thinking about this in relation with Message Flow Tracer >>>>>> [1] (not working right now as the JGroups have changed since I was >>>>>> writing that)? >>>>> I took a quick look: nice ! >>>>> >>>>> This is exactly what I meant. Should be some sort of rule base in a VCS, >>>>> to which support engineers add rules when they have a case which >>>>> requires it and they deem it to be generally useful. >>>>> >>>>> Re API changes: doesn't Byteman have functionality which can check a >>>>> rule set against a code base (offline), to find out incompatibilities ? >>>>> Something like a static rule checker ? >>>> Right, this is possible - but you won't find if you've added another >>>> place that should be checked (e.g. MFT has to determine whether now >>>> you're processing a whole batch, or message alone - when you add a >>>> functionality to grab some stored messages and start processing them, as >>>> you do in UNICASTx, you won't spot that automatically). >>>> >>>> Beyond that, there are many false positives. E.g. if you have a never >>>> terminating loop in Runnable.run(), there is no place to inject the AT >>>> EXIT code and Byteman complains. >>>> >>>> In the end, human intervention is always required. >>>> >>>> Radim >>>> >>>>>> Roman Macor is right now updating the rules and I was >>>>>> hoping that we could insert annotations into JGroups that would be used >>>>>> instead of the rules (I was already considering different AOP framework >>>>>> as Byteman does not allow AT EXIT to catch on leaving exceptions [2]). >>>>> Yes, I've also run into this before, not really nice. >>>>> >>>>>> Radim >>>>>> >>>>>> [1] https://github.com/rvansa/message-flow-tracer >>>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237 >>>>>> >>>>>> On 11/07/2014 01:21 PM, Bela Ban wrote: >>>>>>> Hi Radim, >>>>>>> >>>>>>> no I haven't. However, you can replace the thread pools used by JGroups >>>>>>> and use custom pools. >>>>>>> >>>>>>> I like another idea better: inject Byteman code at runtime that keeps >>>>>>> track of this, and *other useful stats as well*. >>>>>>> >>>>>>> It would be very useful to support if we could ship a package to a >>>>>>> customer that is injected into their running system and grabs all the >>>>>>> vital stats we need for a few minutes, then removes itself again and >>>>>>> those stats are then sent to use as a ZIP file. >>>>>>> The good thing about byteman is that it can remove itself without a >>>>>>> trace; ie. there's no overhead before / after running byteman. >>>>>>> >>>>>>> >>>>>>> On 07/11/14 09:31, Radim Vansa wrote: >>>>>>>> Btw., have you ever considered checks if a thread returns to pool >>>>>>>> reasonably often? Some of the other datagrids use this, though there's >>>>>>>> not much how to react upon that beyond printing out stack traces (but >>>>>>>> you can at least report to management that some node seems to be broken). >>>>>>>> >>>>>>>> Radim >>>>>>>> >>>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>>>>> thread pool for all caches. >>>>>>>>> >>>>>>>>> Those caches which need a separate pool can do that via configuration >>>>>>>>> (and of course also programmatically) >>>>>>>>> >>>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>>>>> >>>>>>>>>> Tristan >>>>>>>>>> >>>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>>>>> I second the opinion that any threadpools should be shared by default. >>>>>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>>>>> separate threadpool for each of them could easily drain resources. And >>>>>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>>>>> >>>>>>>>>>> Radim >>>>>>>>>>> >>>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>>>>> section ? >>>>>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>>>>>>>> and not the global thread pool >>>>>>>>>>>> >>>>>>>>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>>>>>>>> configuring pool per cache if needed >>>>>>>>>>>> >>>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>>>>>>>> this configurable ? >>>>>>>>>>>>>> >>>>>>>>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>>>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>>>>>>>> the handlers. >>>>>>>>>>>>> >>>>>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>>>>>>>> size per cache. For example: >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will share the same instance >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>>>>> >>>>>>>>>>>> max-threads=1 .../> >>>>>>>>>>>>> >>>>>>>>>>>>> * all caches using this remote executor will create their own thread >>>>>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>>>>> >>>>>>>>>>>> max-thread=1000 .../> >>>>>>>>>>>>> >>>>>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>>>>> >>>>>>>>>>>>> Cheers, >>>>>>>>>>>>> Pedro >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> infinispan-dev mailing list >>>>>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>> >> -- Radim Vansa JBoss DataGrid QA From pedro at infinispan.org Mon Nov 10 09:49:07 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 10 Nov 2014 14:49:07 +0000 Subject: [infinispan-dev] Total Order non-transactional cache Message-ID: <5460D063.3020806@infinispan.org> Hi, FYI, I've just created a design page: https://github.com/infinispan/infinispan/wiki/Total-Order-non-Transactional-Cache My plan is to implement it in 7.1 release. Feel free to comment. Cheers, Pedro From galder at redhat.com Mon Nov 10 07:50:30 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 10 Nov 2014 13:50:30 +0100 Subject: [infinispan-dev] Feature request: manage and share a CacheManager across deployments on WildFly In-Reply-To: References: Message-ID: @Paul, your input would be appreciated. My reply is below. On 07 Nov 2014, at 16:47, Sanne Grinovero wrote: > I'm witnessing users of Hibernate Search who say they deploy several > dozens of JPA applications using Hibernate Search in a single > container, and when evaluating usage of Infnispan for index storage > they would like them all to share the CacheManager, rather than > starting a new CacheManager for each and then have to worry about > things like JGroups isolation or rather reuse via FORK. > > This is easy to achieve by configuring the CacheManager in the WildFly > configuration, and then looking it up by JNDI name.. but is not easy > at all to achieve if you want to use the custom modules which we > deliver to allow using a different Infinispan version of what's > included in WildFly. > > That's nasty, because we ultimately want people to use our modules and > leave the ones in WildFly for its internal usage. > > It would be nice if the team could include in the modules.zip a way to > pre-start configured caches, and instructions to mark their > deployments as depending on this service. Would be useful to then > connect this to monitoring too.. If all Hibernate Search apps are using the same cache manager, won?t they have cache conflicts? Or are these caches named in such way that they can run within a single cache manager? The simplest thing I can think of to achieve this would be for an optional service to start a cache manager with a given configuration, and bind that to JNDI. That would be something we can potentially provide, with JMX monitoring at best. However, if you want these cache manager to be registered into Wildfly?s domain model for better monitoring, I don?t really know if this would be something we can just provide without any serious hooks into Wildfly :|, and then it all gets quite complicated IMO because you need to start maintaining yet another integration layer with Wildfly. TBH, it?d be good to hear from Paul et al since they know WF best and see what ideas they might have. Cheers, > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Tue Nov 11 08:57:43 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 11 Nov 2014 13:57:43 +0000 Subject: [infinispan-dev] Feature request: manage and share a CacheManager across deployments on WildFly In-Reply-To: References: Message-ID: On 10 November 2014 12:50, Galder Zamarre?o wrote: > @Paul, your input would be appreciated. My reply is below. > > On 07 Nov 2014, at 16:47, Sanne Grinovero wrote: > >> I'm witnessing users of Hibernate Search who say they deploy several >> dozens of JPA applications using Hibernate Search in a single >> container, and when evaluating usage of Infnispan for index storage >> they would like them all to share the CacheManager, rather than >> starting a new CacheManager for each and then have to worry about >> things like JGroups isolation or rather reuse via FORK. >> >> This is easy to achieve by configuring the CacheManager in the WildFly >> configuration, and then looking it up by JNDI name.. but is not easy >> at all to achieve if you want to use the custom modules which we >> deliver to allow using a different Infinispan version of what's >> included in WildFly. >> >> That's nasty, because we ultimately want people to use our modules and >> leave the ones in WildFly for its internal usage. >> >> It would be nice if the team could include in the modules.zip a way to >> pre-start configured caches, and instructions to mark their >> deployments as depending on this service. Would be useful to then >> connect this to monitoring too.. > > If all Hibernate Search apps are using the same cache manager, won?t they have cache conflicts? Or are these caches named in such way that they can run within a single cache manager? Like different deployment might want to share the same database, sometimes people want to share the index. It should be up to configuration to be able to isolate vs share an index.. and we're flexible about that as you can use different cache names, or different index names sharing the same caches. > The simplest thing I can think of to achieve this would be for an optional service to start a cache manager with a given configuration, and bind that to JNDI. That would be something we can potentially provide, with JMX monitoring at best. +1 That's what I think we're missing. Sounds like very useful and not a big effort to provide. > However, if you want these cache manager to be registered into Wildfly?s domain model for better monitoring, I don?t really know if this would be something we can just provide without any serious hooks into Wildfly :|, and then it all gets quite complicated IMO because you need to start maintaining yet another integration layer with Wildfly. Keep in mind that we want people to use the Infinispan jars, not the WildFly version of Infinispan when it comes to custom/direct usage. For example for EAP users, it's unsupported to use the included Infinispan version so going via the standard configuration files is not an option. So I'm not sure which integration options we have: I'm expecting this to be provided purely as an add-on strongly coupled to the Infinispan release. So I agree it should be highly decoupled from WildFly code. > TBH, it?d be good to hear from Paul et al since they know WF best and see what ideas they might have. +1 Looking forward. Sanne From ttarrant at redhat.com Tue Nov 11 10:12:54 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 11 Nov 2014 16:12:54 +0100 Subject: [infinispan-dev] Feature request: manage and share a CacheManager across deployments on WildFly In-Reply-To: References: Message-ID: <54622776.6020009@redhat.com> My proposal: make the Infinispan + JGroups subsystems which we develop for Infinispan Server installable in any instance of WildFly. They would obviously use a different namespace & slot to avoid conflict. Bonus points if we can also make WildFly use our versions for its clustering stuff. Tristan On 11/11/14 14:57, Sanne Grinovero wrote: > On 10 November 2014 12:50, Galder Zamarre?o wrote: >> @Paul, your input would be appreciated. My reply is below. >> >> On 07 Nov 2014, at 16:47, Sanne Grinovero wrote: >> >>> I'm witnessing users of Hibernate Search who say they deploy several >>> dozens of JPA applications using Hibernate Search in a single >>> container, and when evaluating usage of Infnispan for index storage >>> they would like them all to share the CacheManager, rather than >>> starting a new CacheManager for each and then have to worry about >>> things like JGroups isolation or rather reuse via FORK. >>> >>> This is easy to achieve by configuring the CacheManager in the WildFly >>> configuration, and then looking it up by JNDI name.. but is not easy >>> at all to achieve if you want to use the custom modules which we >>> deliver to allow using a different Infinispan version of what's >>> included in WildFly. >>> >>> That's nasty, because we ultimately want people to use our modules and >>> leave the ones in WildFly for its internal usage. >>> >>> It would be nice if the team could include in the modules.zip a way to >>> pre-start configured caches, and instructions to mark their >>> deployments as depending on this service. Would be useful to then >>> connect this to monitoring too.. >> If all Hibernate Search apps are using the same cache manager, won?t they have cache conflicts? Or are these caches named in such way that they can run within a single cache manager? > Like different deployment might want to share the same database, > sometimes people want to share the index. It should be up to > configuration to be able to isolate vs share an index.. and we're > flexible about that as you can use different cache names, or different > index names sharing the same caches. > >> The simplest thing I can think of to achieve this would be for an optional service to start a cache manager with a given configuration, and bind that to JNDI. That would be something we can potentially provide, with JMX monitoring at best. > +1 > That's what I think we're missing. Sounds like very useful and not a > big effort to provide. > >> However, if you want these cache manager to be registered into Wildfly?s domain model for better monitoring, I don?t really know if this would be something we can just provide without any serious hooks into Wildfly :|, and then it all gets quite complicated IMO because you need to start maintaining yet another integration layer with Wildfly. > Keep in mind that we want people to use the Infinispan jars, not the > WildFly version of Infinispan when it comes to custom/direct usage. > For example for EAP users, it's unsupported to use the included > Infinispan version so going via the standard configuration files is > not an option. So I'm not sure which integration options we have: I'm > expecting this to be provided purely as an add-on strongly coupled to > the Infinispan release. So I agree it should be highly decoupled from > WildFly code. > >> TBH, it?d be good to hear from Paul et al since they know WF best and see what ideas they might have. > +1 Looking forward. > > Sanne > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Thu Nov 13 02:28:11 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 13 Nov 2014 08:28:11 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <545C835C.3020301@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> Message-ID: <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> @Pedro, did you consider using a ForkJoinPool instead? Traditional JDK pools are known to be very hard to configure and get it ?right?. Fork join pools are being used as default thread pools in other libraries, vastly reducing configuration. Jessitron has published some interesting blog posts on the advantages of traditional ExecutorService vs Fork/Join pools and viceversa. See [1] and [3]. She also did a talk on it, see [4]. Cheers, p.s. I?ve not studied your use case in depth to decide whether F/J would suite better, but it?s certainly worth a look now that we?re on Java 7. [1] https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html [3] http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html [4] https://www.youtube.com/watch?v=yhguOt863nw On 07 Nov 2014, at 09:31, Radim Vansa wrote: > Btw., have you ever considered checks if a thread returns to pool > reasonably often? Some of the other datagrids use this, though there's > not much how to react upon that beyond printing out stack traces (but > you can at least report to management that some node seems to be broken). > > Radim > > On 11/07/2014 08:35 AM, Bela Ban wrote: >> That's exactly what I suggested. No config gives you a shared global >> thread pool for all caches. >> >> Those caches which need a separate pool can do that via configuration >> (and of course also programmatically) >> >> On 06/11/14 20:31, Tristan Tarrant wrote: >>> My opinion is that we should aim for less configuration, i.e. >>> threadpools should mostly have sensible defaults and be shared by >>> default unless there are extremely good reasons for not doing so. >>> >>> Tristan >>> >>> On 06/11/14 19:40, Radim Vansa wrote: >>>> I second the opinion that any threadpools should be shared by default. >>>> There are users who have hundreds or thousands of caches and having >>>> separate threadpool for each of them could easily drain resources. And >>>> sharing resources is the purpose of threadpools, right? >>>> >>>> Radim >>>> >>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>> #1 I would by default have 1 thread pool shared by all caches >>>>> #2 This global thread pool should be configurable, perhaps in the >>>>> section ? >>>>> #3 Each cache by default uses the gobal thread pool >>>>> #4 A cache can define its own thread pool, then it would use this one >>>>> and not the global thread pool >>>>> >>>>> I think this gives you a mixture between ease of use and flexibility in >>>>> configuring pool per cache if needed >>>>> >>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>> own FIFO commands concurrently. >>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>> >>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>> this configurable ? >>>>>>> >>>>>> That is question that cross my mind and I don't have any idea what would >>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>> the handlers. >>>>>> >>>>>> Never thought to make it configurable, but maybe that is the best >>>>>> option. And maybe, it should be possible to have different max-thread >>>>>> size per cache. For example: >>>>>> >>>>>> * all caches using this remote executor will share the same instance >>>>>> >>>>>> >>>>>> * all caches using this remote executor will create their own thread >>>>>> pool with max-threads equals to 1 >>>>>> >>>>> max-threads=1 .../> >>>>>> >>>>>> * all caches using this remote executor will create their own thread >>>>>> pool with max-threads equals to 1000 >>>>>> >>>>> max-thread=1000 .../> >>>>>> >>>>>> is this what you have in mind? comments? >>>>>> >>>>>> Cheers, >>>>>> Pedro >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From rvansa at redhat.com Thu Nov 13 02:57:49 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 13 Nov 2014 08:57:49 +0100 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> Message-ID: <5464647D.4020707@redhat.com> F/J tasks should not acquire any locks (or, generally, block) during their execution. At least according to JavaDocs. Are we ready for that? Btw., I really don't like the fact that the commonPool() cannot be properly shutdown. This leads to threadlocal variables leaking when the component using F/J pool is undeployed (the classloader cannot be GCed and you end up with OOME in PermGen space). Radim On 11/13/2014 08:28 AM, Galder Zamarre?o wrote: > @Pedro, did you consider using a ForkJoinPool instead? > > Traditional JDK pools are known to be very hard to configure and get it ?right?. Fork join pools are being used as default thread pools in other libraries, vastly reducing configuration. > > Jessitron has published some interesting blog posts on the advantages of traditional ExecutorService vs Fork/Join pools and viceversa. See [1] and [3]. She also did a talk on it, see [4]. > > Cheers, > > p.s. I?ve not studied your use case in depth to decide whether F/J would suite better, but it?s certainly worth a look now that we?re on Java 7. > > [1] https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html > [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html > [3] http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html > [4] https://www.youtube.com/watch?v=yhguOt863nw > > On 07 Nov 2014, at 09:31, Radim Vansa wrote: > >> Btw., have you ever considered checks if a thread returns to pool >> reasonably often? Some of the other datagrids use this, though there's >> not much how to react upon that beyond printing out stack traces (but >> you can at least report to management that some node seems to be broken). >> >> Radim >> >> On 11/07/2014 08:35 AM, Bela Ban wrote: >>> That's exactly what I suggested. No config gives you a shared global >>> thread pool for all caches. >>> >>> Those caches which need a separate pool can do that via configuration >>> (and of course also programmatically) >>> >>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>> My opinion is that we should aim for less configuration, i.e. >>>> threadpools should mostly have sensible defaults and be shared by >>>> default unless there are extremely good reasons for not doing so. >>>> >>>> Tristan >>>> >>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>> I second the opinion that any threadpools should be shared by default. >>>>> There are users who have hundreds or thousands of caches and having >>>>> separate threadpool for each of them could easily drain resources. And >>>>> sharing resources is the purpose of threadpools, right? >>>>> >>>>> Radim >>>>> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>> section ? >>>>>> #3 Each cache by default uses the gobal thread pool >>>>>> #4 A cache can define its own thread pool, then it would use this one >>>>>> and not the global thread pool >>>>>> >>>>>> I think this gives you a mixture between ease of use and flexibility in >>>>>> configuring pool per cache if needed >>>>>> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>> * added a single thread remote executor service. This will handle the >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups incoming >>>>>>>>> threads and with a new executor service, each cache can process their >>>>>>>>> own FIFO commands concurrently. >>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>> different caches to be executed in parallel, and will speed thing up. >>>>>>>> >>>>>>>> Do you intend to share a thread pool between the invocations handlers of >>>>>>>> the various caches, or do they each have their own thread pool ? Or is >>>>>>>> this configurable ? >>>>>>>> >>>>>>> That is question that cross my mind and I don't have any idea what would >>>>>>> be the best. So, for now, I will leave the thread pool shared between >>>>>>> the handlers. >>>>>>> >>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>> option. And maybe, it should be possible to have different max-thread >>>>>>> size per cache. For example: >>>>>>> >>>>>>> * all caches using this remote executor will share the same instance >>>>>>> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1 >>>>>>> >>>>>> max-threads=1 .../> >>>>>>> >>>>>>> * all caches using this remote executor will create their own thread >>>>>>> pool with max-threads equals to 1000 >>>>>>> >>>>>> max-thread=1000 .../> >>>>>>> >>>>>>> is this what you have in mind? comments? >>>>>>> >>>>>>> Cheers, >>>>>>> Pedro >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From ttarrant at redhat.com Thu Nov 13 03:42:53 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 13 Nov 2014 09:42:53 +0100 Subject: [infinispan-dev] 7.0.1.Final tomorrow Message-ID: <54646F0D.4070807@redhat.com> Hi all, tomorrow we'll be cutting 7.0.1.Final so please make sure that all the issues you want fixed have PRs by end-of-day today. Thanks Tristan From ttarrant at redhat.com Thu Nov 13 06:06:04 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 13 Nov 2014 12:06:04 +0100 Subject: [infinispan-dev] Schema versions Message-ID: <5464909C.2050309@redhat.com> Hi all, I have issued a PR [1] which bumps the core parser and schema to 7.1, since we're going to introduce schema changes there. Do you think we should bump all schemas in sync (including the cachestores) or shall we only do it when there are changes ? Tristan [1] https://github.com/infinispan/infinispan/pull/3069 From sanne at infinispan.org Thu Nov 13 06:23:42 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 13 Nov 2014 11:23:42 +0000 Subject: [infinispan-dev] Schema versions In-Reply-To: <5464909C.2050309@redhat.com> References: <5464909C.2050309@redhat.com> Message-ID: Please stop changing the configuration schema ! Sanne On 13 November 2014 11:06, Tristan Tarrant wrote: > Hi all, > > I have issued a PR [1] which bumps the core parser and schema to 7.1, > since we're going to introduce schema changes there. > Do you think we should bump all schemas in sync (including the > cachestores) or shall we only do it when there are changes ? > > Tristan > > [1] https://github.com/infinispan/infinispan/pull/3069 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Thu Nov 13 07:33:53 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 13 Nov 2014 13:33:53 +0100 Subject: [infinispan-dev] Schema versions In-Reply-To: References: <5464909C.2050309@redhat.com> Message-ID: <5464A531.8060208@redhat.com> The changes are additive, so we're only modifying the minor version of the schema and we will parse old versions. Not that the schema-less mess that are HS's properties is any better :) Tristan On 13/11/14 12:23, Sanne Grinovero wrote: > Please stop changing the configuration schema ! > Sanne > > On 13 November 2014 11:06, Tristan Tarrant wrote: >> Hi all, >> >> I have issued a PR [1] which bumps the core parser and schema to 7.1, >> since we're going to introduce schema changes there. >> Do you think we should bump all schemas in sync (including the >> cachestores) or shall we only do it when there are changes ? >> >> Tristan >> >> [1] https://github.com/infinispan/infinispan/pull/3069 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From ttarrant at redhat.com Thu Nov 13 07:38:25 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 13 Nov 2014 13:38:25 +0100 Subject: [infinispan-dev] Schema versions In-Reply-To: <5464A531.8060208@redhat.com> References: <5464909C.2050309@redhat.com> <5464A531.8060208@redhat.com> Message-ID: <5464A641.5020906@redhat.com> On 13/11/14 13:33, Tristan Tarrant wrote: > The changes are additive, so we're only modifying the minor version of > the schema and we will parse old versions. > > Not that the schema-less mess that are HS's properties is any better :) A mess like my grammar. > > Tristan > > On 13/11/14 12:23, Sanne Grinovero wrote: >> Please stop changing the configuration schema ! >> Sanne >> >> On 13 November 2014 11:06, Tristan Tarrant wrote: >>> Hi all, >>> >>> I have issued a PR [1] which bumps the core parser and schema to 7.1, >>> since we're going to introduce schema changes there. >>> Do you think we should bump all schemas in sync (including the >>> cachestores) or shall we only do it when there are changes ? >>> >>> Tristan >>> >>> [1] https://github.com/infinispan/infinispan/pull/3069 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From sanne at infinispan.org Thu Nov 13 07:43:48 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 13 Nov 2014 12:43:48 +0000 Subject: [infinispan-dev] Schema versions In-Reply-To: <5464A531.8060208@redhat.com> References: <5464909C.2050309@redhat.com> <5464A531.8060208@redhat.com> Message-ID: On 13 November 2014 12:33, Tristan Tarrant wrote: > The changes are additive, so we're only modifying the minor version of > the schema and we will parse old versions. +1 for backwards compatibility.. always. Remember the average user isn't going to exercise his configuration-editing skills every morning, people want to copy-paste a configuration they've seen working on their previous project, which started some ~6 months ago. > Not that the schema-less mess that are HS's properties is any better :) People love schema-less.. always backwards compatible! :-P Sanne > > Tristan > > On 13/11/14 12:23, Sanne Grinovero wrote: >> Please stop changing the configuration schema ! >> Sanne >> >> On 13 November 2014 11:06, Tristan Tarrant wrote: >>> Hi all, >>> >>> I have issued a PR [1] which bumps the core parser and schema to 7.1, >>> since we're going to introduce schema changes there. >>> Do you think we should bump all schemas in sync (including the >>> cachestores) or shall we only do it when there are changes ? >>> >>> Tristan >>> >>> [1] https://github.com/infinispan/infinispan/pull/3069 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Thu Nov 13 07:51:49 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 13 Nov 2014 13:51:49 +0100 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 Message-ID: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> Hi all, Re: https://issues.jboss.org/browse/ISPN-4972 Embedded cache provides atomicity of a replace() call passing in the previous value. This limitation might be lifted when we adopt Java 8 and we can pass in a lambda or similar, which can be executed right when the value is compared now, and if it returns true it?s applied. The lambda could compare both value and metadata for example. Anyway, given the current status, I?m considering whether it?s worth fixing this particular issue. Fixing the issue would require adding some kind of locking in the Hot Rod server so that the version retrieval, comparison and replace call, can all happen atomically. This is not ideal, and on top of that, as Radim said, the chances of this happening in real life are limited, or more precisely it?s effects are minimal. In other words, if two concurrent threads call replace with the same value, the end result is that the new value would be stored, but as a result of the code, both replaces would return true which is not strictly right. I?d rather document this than add unnecessary locking in the Hot Rod server where it deals with the versioned replace call. Thoughts? -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Thu Nov 13 07:59:08 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 13 Nov 2014 14:59:08 +0200 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: <5464647D.4020707@redhat.com> References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> <5464647D.4020707@redhat.com> Message-ID: Radim, I also knew the 1.7 ForkJoinPool isn't really optimized for blocking tasks, but the ManagedBlocker interface mentioned in [3] seems to be intended just for that. Re: commonPool(), we can (and should) still create our own ForkJoinPool instead of using the global one. Cheers Dan On Thu, Nov 13, 2014 at 9:57 AM, Radim Vansa wrote: > F/J tasks should not acquire any locks (or, generally, block) during > their execution. At least according to JavaDocs. Are we ready for that? > > Btw., I really don't like the fact that the commonPool() cannot be > properly shutdown. This leads to threadlocal variables leaking when the > component using F/J pool is undeployed (the classloader cannot be GCed > and you end up with OOME in PermGen space). > > Radim > > On 11/13/2014 08:28 AM, Galder Zamarre?o wrote: > > @Pedro, did you consider using a ForkJoinPool instead? > > > > Traditional JDK pools are known to be very hard to configure and get it > ?right?. Fork join pools are being used as default thread pools in other > libraries, vastly reducing configuration. > > > > Jessitron has published some interesting blog posts on the advantages of > traditional ExecutorService vs Fork/Join pools and viceversa. See [1] and > [3]. She also did a talk on it, see [4]. > > > > Cheers, > > > > p.s. I?ve not studied your use case in depth to decide whether F/J would > suite better, but it?s certainly worth a look now that we?re on Java 7. > > > > [1] > https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html > > [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html > > [3] > http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html > > [4] https://www.youtube.com/watch?v=yhguOt863nw > > > > On 07 Nov 2014, at 09:31, Radim Vansa wrote: > > > >> Btw., have you ever considered checks if a thread returns to pool > >> reasonably often? Some of the other datagrids use this, though there's > >> not much how to react upon that beyond printing out stack traces (but > >> you can at least report to management that some node seems to be > broken). > >> > >> Radim > >> > >> On 11/07/2014 08:35 AM, Bela Ban wrote: > >>> That's exactly what I suggested. No config gives you a shared global > >>> thread pool for all caches. > >>> > >>> Those caches which need a separate pool can do that via configuration > >>> (and of course also programmatically) > >>> > >>> On 06/11/14 20:31, Tristan Tarrant wrote: > >>>> My opinion is that we should aim for less configuration, i.e. > >>>> threadpools should mostly have sensible defaults and be shared by > >>>> default unless there are extremely good reasons for not doing so. > >>>> > >>>> Tristan > >>>> > >>>> On 06/11/14 19:40, Radim Vansa wrote: > >>>>> I second the opinion that any threadpools should be shared by > default. > >>>>> There are users who have hundreds or thousands of caches and having > >>>>> separate threadpool for each of them could easily drain resources. > And > >>>>> sharing resources is the purpose of threadpools, right? > >>>>> > >>>>> Radim > >>>>> > >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: > >>>>>> #1 I would by default have 1 thread pool shared by all caches > >>>>>> #2 This global thread pool should be configurable, perhaps in the > >>>>>> section ? > >>>>>> #3 Each cache by default uses the gobal thread pool > >>>>>> #4 A cache can define its own thread pool, then it would use this > one > >>>>>> and not the global thread pool > >>>>>> > >>>>>> I think this gives you a mixture between ease of use and > flexibility in > >>>>>> configuring pool per cache if needed > >>>>>> > >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: > >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: > >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: > >>>>>>>>> * added a single thread remote executor service. This will > handle the > >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups > incoming > >>>>>>>>> threads and with a new executor service, each cache can process > their > >>>>>>>>> own FIFO commands concurrently. > >>>>>>>> +1000. This allows multiple updates from the same sender but to > >>>>>>>> different caches to be executed in parallel, and will speed thing > up. > >>>>>>>> > >>>>>>>> Do you intend to share a thread pool between the invocations > handlers of > >>>>>>>> the various caches, or do they each have their own thread pool ? > Or is > >>>>>>>> this configurable ? > >>>>>>>> > >>>>>>> That is question that cross my mind and I don't have any idea what > would > >>>>>>> be the best. So, for now, I will leave the thread pool shared > between > >>>>>>> the handlers. > >>>>>>> > >>>>>>> Never thought to make it configurable, but maybe that is the best > >>>>>>> option. And maybe, it should be possible to have different > max-thread > >>>>>>> size per cache. For example: > >>>>>>> > >>>>>>> * all caches using this remote executor will share the same > instance > >>>>>>> > >>>>>>> > >>>>>>> * all caches using this remote executor will create their own > thread > >>>>>>> pool with max-threads equals to 1 > >>>>>>> >>>>>>> max-threads=1 .../> > >>>>>>> > >>>>>>> * all caches using this remote executor will create their own > thread > >>>>>>> pool with max-threads equals to 1000 > >>>>>>> >>>>>>> max-thread=1000 .../> > >>>>>>> > >>>>>>> is this what you have in mind? comments? > >>>>>>> > >>>>>>> Cheers, > >>>>>>> Pedro > >>>>>>> _______________________________________________ > >>>>>>> infinispan-dev mailing list > >>>>>>> infinispan-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >> > >> -- > >> Radim Vansa > >> JBoss DataGrid QA > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141113/e85464b7/attachment-0001.html From rvansa at redhat.com Thu Nov 13 08:08:15 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 13 Nov 2014 14:08:15 +0100 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> Message-ID: <5464AD3F.5000805@redhat.com> I agree with Galder, fixing it is not worth the cost. Actually, there are often bugs that I'd call rather 'quirks', not honoring the ConcurrentMap contract (recently we have discussed with Dan [1] and [2]) which are quite complex to fix. Another one that's considered not a bug is that a read does not have transactional semantics. Galder, where will you document that? I think that special page in documentation should accumulate such cases, linked to JIRAs for case that eventually we'll resolve them (with that glorious MVCC). And of course, link from javadoc to this document (though I am not sure whether we can correctly keep that in sync with latest release. Could we have a redirection from http://infinispan.org/docs/latest to http://infinispan.org/docs/7.0.x/ ? Radim [1] https://issues.jboss.org/browse/ISPN-3918 [2] https://issues.jboss.org/browse/ISPN-4286 On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: > Hi all, > > Re: https://issues.jboss.org/browse/ISPN-4972 > > Embedded cache provides atomicity of a replace() call passing in the previous value. This limitation might be lifted when we adopt Java 8 and we can pass in a lambda or similar, which can be executed right when the value is compared now, and if it returns true it?s applied. The lambda could compare both value and metadata for example. > > Anyway, given the current status, I?m considering whether it?s worth fixing this particular issue. Fixing the issue would require adding some kind of locking in the Hot Rod server so that the version retrieval, comparison and replace call, can all happen atomically. > > This is not ideal, and on top of that, as Radim said, the chances of this happening in real life are limited, or more precisely it?s effects are minimal. In other words, if two concurrent threads call replace with the same value, the end result is that the new value would be stored, but as a result of the code, both replaces would return true which is not strictly right. > > I?d rather document this than add unnecessary locking in the Hot Rod server where it deals with the versioned replace call. > > Thoughts? > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From mudokonman at gmail.com Thu Nov 13 08:08:48 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 13 Nov 2014 08:08:48 -0500 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> <5464647D.4020707@redhat.com> Message-ID: On Thu, Nov 13, 2014 at 7:59 AM, Dan Berindei wrote: > Radim, I also knew the 1.7 ForkJoinPool isn't really optimized for blocking > tasks, but the ManagedBlocker interface mentioned in [3] seems to be > intended just for that. I was actually writing this same thing about ManagedBlocker instance in the fork join thread for operations that will block (acquiring locks - RPC etc.) The cool thing is that it then creates another thread to try to keep CPU utilization higher and then after the blocking is done it removes the next idle thread. Also it is in Java 7 as well, your comment seemed a bit ambiguous as to whether or not you were saying it wasn't. > > Re: commonPool(), we can (and should) still create our own ForkJoinPool > instead of using the global one. I am not familiar with this issue, but if true sounds like an issue with ForkJoinPool implementation that we should log to the JDK. I haven't been able to dig into the code underneath but it sounded like from the Javadoc that the common pool would eventually shut down the worker threads after inactivity which should free the additional references. "Using the common pool normally reduces resource usage (its threads are slowly reclaimed during periods of non-use, and reinstated upon subsequent use)." > > Cheers > Dan > > > On Thu, Nov 13, 2014 at 9:57 AM, Radim Vansa wrote: >> >> F/J tasks should not acquire any locks (or, generally, block) during >> their execution. At least according to JavaDocs. Are we ready for that? >> >> Btw., I really don't like the fact that the commonPool() cannot be >> properly shutdown. This leads to threadlocal variables leaking when the >> component using F/J pool is undeployed (the classloader cannot be GCed >> and you end up with OOME in PermGen space). >> >> Radim >> >> On 11/13/2014 08:28 AM, Galder Zamarre?o wrote: >> > @Pedro, did you consider using a ForkJoinPool instead? >> > >> > Traditional JDK pools are known to be very hard to configure and get it >> > ?right?. Fork join pools are being used as default thread pools in other >> > libraries, vastly reducing configuration. >> > >> > Jessitron has published some interesting blog posts on the advantages of >> > traditional ExecutorService vs Fork/Join pools and viceversa. See [1] and >> > [3]. She also did a talk on it, see [4]. >> > >> > Cheers, >> > >> > p.s. I?ve not studied your use case in depth to decide whether F/J would >> > suite better, but it?s certainly worth a look now that we?re on Java 7. >> > >> > [1] >> > https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html >> > [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html >> > [3] >> > http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html >> > [4] https://www.youtube.com/watch?v=yhguOt863nw >> > >> > On 07 Nov 2014, at 09:31, Radim Vansa wrote: >> > >> >> Btw., have you ever considered checks if a thread returns to pool >> >> reasonably often? Some of the other datagrids use this, though there's >> >> not much how to react upon that beyond printing out stack traces (but >> >> you can at least report to management that some node seems to be >> >> broken). >> >> >> >> Radim >> >> >> >> On 11/07/2014 08:35 AM, Bela Ban wrote: >> >>> That's exactly what I suggested. No config gives you a shared global >> >>> thread pool for all caches. >> >>> >> >>> Those caches which need a separate pool can do that via configuration >> >>> (and of course also programmatically) >> >>> >> >>> On 06/11/14 20:31, Tristan Tarrant wrote: >> >>>> My opinion is that we should aim for less configuration, i.e. >> >>>> threadpools should mostly have sensible defaults and be shared by >> >>>> default unless there are extremely good reasons for not doing so. >> >>>> >> >>>> Tristan >> >>>> >> >>>> On 06/11/14 19:40, Radim Vansa wrote: >> >>>>> I second the opinion that any threadpools should be shared by >> >>>>> default. >> >>>>> There are users who have hundreds or thousands of caches and having >> >>>>> separate threadpool for each of them could easily drain resources. >> >>>>> And >> >>>>> sharing resources is the purpose of threadpools, right? >> >>>>> >> >>>>> Radim >> >>>>> >> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >> >>>>>> #1 I would by default have 1 thread pool shared by all caches >> >>>>>> #2 This global thread pool should be configurable, perhaps in the >> >>>>>> section ? >> >>>>>> #3 Each cache by default uses the gobal thread pool >> >>>>>> #4 A cache can define its own thread pool, then it would use this >> >>>>>> one >> >>>>>> and not the global thread pool >> >>>>>> >> >>>>>> I think this gives you a mixture between ease of use and >> >>>>>> flexibility in >> >>>>>> configuring pool per cache if needed >> >>>>>> >> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >> >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >> >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >> >>>>>>>>> * added a single thread remote executor service. This will >> >>>>>>>>> handle the >> >>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups >> >>>>>>>>> incoming >> >>>>>>>>> threads and with a new executor service, each cache can process >> >>>>>>>>> their >> >>>>>>>>> own FIFO commands concurrently. >> >>>>>>>> +1000. This allows multiple updates from the same sender but to >> >>>>>>>> different caches to be executed in parallel, and will speed thing >> >>>>>>>> up. >> >>>>>>>> >> >>>>>>>> Do you intend to share a thread pool between the invocations >> >>>>>>>> handlers of >> >>>>>>>> the various caches, or do they each have their own thread pool ? >> >>>>>>>> Or is >> >>>>>>>> this configurable ? >> >>>>>>>> >> >>>>>>> That is question that cross my mind and I don't have any idea what >> >>>>>>> would >> >>>>>>> be the best. So, for now, I will leave the thread pool shared >> >>>>>>> between >> >>>>>>> the handlers. >> >>>>>>> >> >>>>>>> Never thought to make it configurable, but maybe that is the best >> >>>>>>> option. And maybe, it should be possible to have different >> >>>>>>> max-thread >> >>>>>>> size per cache. For example: >> >>>>>>> >> >>>>>>> * all caches using this remote executor will share the same >> >>>>>>> instance >> >>>>>>> >> >>>>>>> >> >>>>>>> * all caches using this remote executor will create their own >> >>>>>>> thread >> >>>>>>> pool with max-threads equals to 1 >> >>>>>>> > >>>>>>> max-threads=1 .../> >> >>>>>>> >> >>>>>>> * all caches using this remote executor will create their own >> >>>>>>> thread >> >>>>>>> pool with max-threads equals to 1000 >> >>>>>>> > >>>>>>> max-thread=1000 .../> >> >>>>>>> >> >>>>>>> is this what you have in mind? comments? >> >>>>>>> >> >>>>>>> Cheers, >> >>>>>>> Pedro >> >>>>>>> _______________________________________________ >> >>>>>>> infinispan-dev mailing list >> >>>>>>> infinispan-dev at lists.jboss.org >> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >> >> >> -- >> >> Radim Vansa >> >> JBoss DataGrid QA >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > -- >> > Galder Zamarre?o >> > galder at redhat.com >> > twitter.com/galderz >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From vagvaz at gmail.com Fri Nov 14 00:51:49 2014 From: vagvaz at gmail.com (Evangelos Vazaios) Date: Fri, 14 Nov 2014 07:51:49 +0200 Subject: [infinispan-dev] TopologySafe Map / Reduce In-Reply-To: <7A961E49-612C-47FF-ACC4-64F0B4821022@hibernate.org> References: <5436FB16.3000003@infinispan.org> <20141010154948.GD5052@hibernate.org> <7A961E49-612C-47FF-ACC4-64F0B4821022@hibernate.org> Message-ID: I am really sorry for the ridiculously late response. I will describe briefly our 1st year and our current approach. 1st year approach. During the first year, we used infinispan MR to implement our operators. Most of our operators were Map-only (for example project,filter) and for these we did not use the intermediate cache. For all the other operators (join,group by) we used the collector interface. Our reducers always returned null and the actual output was written to another cache, because we had a workflow of operators. Current approach At the moment we do not use replaced MR, with two dist calls one for the map and another for the reduce phase. The intermediate data are stored in a cache (Cache>). At some point we would like to change to a delta aware cache. We changed from the MR to dist calls, because we want to run MR tasks across multiple micro-clouds and the synchronization of Mappers and reducers it would be more complicated than monitoring the execution of independent dist calls ( 1 for each micro-cloud). The intermediate data are written to a ensemble cache ( a LEADS cache), which spans multiple micro-clouds. In general, I find it quite useful to be able to "consistently" (without missing data that are already inside) iterate over the values of a cache. On Wed, Oct 15, 2014 at 7:41 PM, Emmanuel Bernard wrote: > > On 13 Oct 2014, at 10:45, Dan Berindei wrote: > > > On Fri, Oct 10, 2014 at 6:49 PM, Emmanuel Bernard > wrote: > >> When wrestling with the subject, here is what I had in mind. >> >> The M/R coordinator node sends the M task per segment on the node where >> the segment is primary. >> > > What's M? Is it just a shorthand for "map", or is it a new parameter that > controls the number of map/combine tasks sent at once? > > > M is short for Map. Sorry. > > > >> Each "per-segment" M task is executed and is offered the way to push >> intermediary results in a temp cache. >> > > Just to be clear, the user-provided mapper and combiner don't know > anything about the intermediary cache (which doesn't have to be temporary, > if it's shared by all M/R tasks). They only interact with the Collector > interface. > The map/combine task on the other hand is our code, and it deals with the > intermediary cache directly. > > > Interesting, Evangelos, do you actually use the collector interface or > actual explicit intermediary caches in your approach. > If that?s the collector interface, I guess that?s easier to hide that > sharding business. > We use explicit caches, but should that functionality become available, we could possibly revert back to Infinspan MR. > > > >> The intermediary results are stored with a composite key [imtermKey-i, >> seg-j]. >> The M/R coordinator waits for all M tasks to return. If one does not >> (timeout, rehash), the following happens: >> > > We can't allow time out map tasks, or they will keep writing to the > intermediate cache in parallel with the retried tasks. So the originator > has to wait for a response from each node to which it sent a map task. > > > OK. I guess the originator can see that a node is out of the cluster > though and act accordingly. > > > >> - delete [intermKey-i, seg-i] (that operation could be handled by the >> new per-segment M before the map task is effectively started) >> - ship the M task for that segment-i to the new primary owner of >> segment-i >> >> When all M tasks are received the Reduce phase will read all >> [intermKey-i, *] >> keys and reduce them. >> > Note that if the reduction phase is itself distributed, we could apply >> the same key per segment and shipping split for these. >> > > Sure, we have to retry reduce tasks when the primary owner changes, and it > makes sense to retry as little as possible. > > >> >> Again the tricky part is to expose the ability to write to intermediary >> caches per segment without exposing segments per se as well as let >> someone see a concatenated view if intermKey-i from all segments subkeys >> during reduction. >> > > Writing to and reading from the intermediate cache is already abstracted > from user code (in the Mapper and Reducer interfaces). So we don't need to > worry about exposing extra details to the user. > > >> >> Thoughts? >> >> Dan, I did not quite get what alternative approach you wanted to >> propose. Care to respin it for a slow brain? :) >> > > I think where we differ is that I don't think user code needs to know > about how we store the intermediate values and what we retry, as long as > their mappers/combiners/reducers don't have side effects. > > > Right but my understanding from the LEADS guys was that they had side > effects on their M/Rs. Waiting for Evangelos to speak up. > > Should that be available for MapReduce, and the underlying ensemble cache can correctly handle one of the strategies described above, we might be able to change back to Infinispan MR. > > Otherwise I was thinking on the same lines: send 1 map/combine task for > each segment (maybe with a cap on the number of segments being processed at > the same time on each node), split the intermediate values per input > segment, cancel+retry each map task if the topology changes and the > executing node is no longer an owner. If the reduce phase is distributed, > run 1 reduce task per segment as well, and cancel+retry the reduce task if > the executing node is no longer an owner. > > I had some ideas about assigning each map/combine phase a UUID and making > the intermediate keys [intermKey, seg, mctask] to allow the originator to > retry a map/combine task without waiting for the previous one to finish, > but I don't think I mentioned that before :) > > > Nice touch, that fixes the rogue node / timeout problem. > > There are also some details that I'm worried about: > > 1) If the reduce phase is distributed, and the intermediate cache is > non-transactional, any topology change in the intermediate cache will > require us to retry all the map/combine tasks that were running at the time > on any node (even if some nodes did not detect the topology change yet). So > it would make sense to limit the number of map/combine tasks that are > processed at one time, in order to limit the amount of tasks we retry (OR > require the intermediate cache to be transactional). > > > I am not fully following that. What matters in the end it seems is for the > originator to detect a topology change and discard things accordingly, no? > If the other nodes are slaves of that originator for the purpose of that > M/R, we are good. > > > 2) Running a separate map/combine task for each segment is not really an > option until we implement the the segment-aware data container and cache > stores. Without that change, it will make everything much slower, because > of all the extra iterations for each segment. > > > See my other email about physically merging down the per segment work into > a per node work when you ship that work. > > 3) And finally, all this will be overkill when the input cache is small, > and the time needed to process the data is comparable to the time needed to > send all those extra RPCs. > > > So I'm thinking it might be better to adopt Vladimir's suggestion to retry > everything if we detect a topology change in the input and/or intermediate > cache at the end of the M/R task, at least in the first phase. > > > It would also be an overkill to restart everything MR task if the volume of data is large. I would propose a solution using the distributed iterator and that it would not miss data whenever a topology change happens. > You half lost but I think that with my proposal to physically merge the > RPC calls per node instead of per segment, that problem would be alleviated. > > Emmanuel > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > Cheers, Evangelos -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141114/b5f4d6f8/attachment-0001.html From ttarrant at redhat.com Fri Nov 14 04:39:08 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 14 Nov 2014 10:39:08 +0100 Subject: [infinispan-dev] Schema versions In-Reply-To: <5464909C.2050309@redhat.com> References: <5464909C.2050309@redhat.com> Message-ID: <5465CDBC.7000508@redhat.com> I have unilaterally decided to bump all schema versions to 7.1. Less confusing. Tristan On 13/11/14 12:06, Tristan Tarrant wrote: > Hi all, > > I have issued a PR [1] which bumps the core parser and schema to 7.1, > since we're going to introduce schema changes there. > Do you think we should bump all schemas in sync (including the > cachestores) or shall we only do it when there are changes ? > > Tristan > > [1] https://github.com/infinispan/infinispan/pull/3069 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From pedro at infinispan.org Fri Nov 14 04:46:57 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 14 Nov 2014 09:46:57 +0000 Subject: [infinispan-dev] Remoting package refactor In-Reply-To: References: <545B877B.9050105@infinispan.org> <545B8D31.1020404@redhat.com> <545B928C.3000307@infinispan.org> <545B95B2.4070506@redhat.com> <545BC0AE.90002@redhat.com> <545BCC93.4010205@redhat.com> <545C7649.6090309@redhat.com> <545C835C.3020301@redhat.com> <3FA7F47B-1410-4223-8576-2E54F67C7B29@redhat.com> <5464647D.4020707@redhat.com> Message-ID: <5465CF91.6080208@infinispan.org> Hi, @Galder, no I didn't. I will take a look. I was not aware of the ManagedBlocker interface but it fits on our problem. The best thing would be if we execute the local and remote operations on the F/J pool so I could wrap all the blocking invocations in ManagedBlocker without having to check if the operation is local or not... but that is subject for another topic :) Thanks for pointing it. Pedro On 11/13/2014 01:08 PM, William Burns wrote: > On Thu, Nov 13, 2014 at 7:59 AM, Dan Berindei wrote: >> Radim, I also knew the 1.7 ForkJoinPool isn't really optimized for blocking >> tasks, but the ManagedBlocker interface mentioned in [3] seems to be >> intended just for that. > > I was actually writing this same thing about ManagedBlocker instance > in the fork join thread for operations that will block (acquiring > locks - RPC etc.) The cool thing is that it then creates another > thread to try to keep CPU utilization higher and then after the > blocking is done it removes the next idle thread. Also it is in Java > 7 as well, your comment seemed a bit ambiguous as to whether or not > you were saying it wasn't. > >> >> Re: commonPool(), we can (and should) still create our own ForkJoinPool >> instead of using the global one. > > I am not familiar with this issue, but if true sounds like an issue > with ForkJoinPool implementation that we should log to the JDK. > > I haven't been able to dig into the code underneath but it sounded > like from the Javadoc that the common pool would eventually shut down > the worker threads after inactivity which should free the additional > references. "Using the common pool normally reduces resource usage > (its threads are slowly reclaimed during periods of non-use, and > reinstated upon subsequent use)." > >> >> Cheers >> Dan >> >> >> On Thu, Nov 13, 2014 at 9:57 AM, Radim Vansa wrote: >>> >>> F/J tasks should not acquire any locks (or, generally, block) during >>> their execution. At least according to JavaDocs. Are we ready for that? >>> >>> Btw., I really don't like the fact that the commonPool() cannot be >>> properly shutdown. This leads to threadlocal variables leaking when the >>> component using F/J pool is undeployed (the classloader cannot be GCed >>> and you end up with OOME in PermGen space). >>> >>> Radim >>> >>> On 11/13/2014 08:28 AM, Galder Zamarre?o wrote: >>>> @Pedro, did you consider using a ForkJoinPool instead? >>>> >>>> Traditional JDK pools are known to be very hard to configure and get it >>>> ?right?. Fork join pools are being used as default thread pools in other >>>> libraries, vastly reducing configuration. >>>> >>>> Jessitron has published some interesting blog posts on the advantages of >>>> traditional ExecutorService vs Fork/Join pools and viceversa. See [1] and >>>> [3]. She also did a talk on it, see [4]. >>>> >>>> Cheers, >>>> >>>> p.s. I?ve not studied your use case in depth to decide whether F/J would >>>> suite better, but it?s certainly worth a look now that we?re on Java 7. >>>> >>>> [1] >>>> https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html >>>> [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html >>>> [3] >>>> http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html >>>> [4] https://www.youtube.com/watch?v=yhguOt863nw >>>> >>>> On 07 Nov 2014, at 09:31, Radim Vansa wrote: >>>> >>>>> Btw., have you ever considered checks if a thread returns to pool >>>>> reasonably often? Some of the other datagrids use this, though there's >>>>> not much how to react upon that beyond printing out stack traces (but >>>>> you can at least report to management that some node seems to be >>>>> broken). >>>>> >>>>> Radim >>>>> >>>>> On 11/07/2014 08:35 AM, Bela Ban wrote: >>>>>> That's exactly what I suggested. No config gives you a shared global >>>>>> thread pool for all caches. >>>>>> >>>>>> Those caches which need a separate pool can do that via configuration >>>>>> (and of course also programmatically) >>>>>> >>>>>> On 06/11/14 20:31, Tristan Tarrant wrote: >>>>>>> My opinion is that we should aim for less configuration, i.e. >>>>>>> threadpools should mostly have sensible defaults and be shared by >>>>>>> default unless there are extremely good reasons for not doing so. >>>>>>> >>>>>>> Tristan >>>>>>> >>>>>>> On 06/11/14 19:40, Radim Vansa wrote: >>>>>>>> I second the opinion that any threadpools should be shared by >>>>>>>> default. >>>>>>>> There are users who have hundreds or thousands of caches and having >>>>>>>> separate threadpool for each of them could easily drain resources. >>>>>>>> And >>>>>>>> sharing resources is the purpose of threadpools, right? >>>>>>>> >>>>>>>> Radim >>>>>>>> >>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote: >>>>>>>>> #1 I would by default have 1 thread pool shared by all caches >>>>>>>>> #2 This global thread pool should be configurable, perhaps in the >>>>>>>>> section ? >>>>>>>>> #3 Each cache by default uses the gobal thread pool >>>>>>>>> #4 A cache can define its own thread pool, then it would use this >>>>>>>>> one >>>>>>>>> and not the global thread pool >>>>>>>>> >>>>>>>>> I think this gives you a mixture between ease of use and >>>>>>>>> flexibility in >>>>>>>>> configuring pool per cache if needed >>>>>>>>> >>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote: >>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote: >>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote: >>>>>>>>>>>> * added a single thread remote executor service. This will >>>>>>>>>>>> handle the >>>>>>>>>>>> FIFO deliver commands. Previously, they were handled by JGroups >>>>>>>>>>>> incoming >>>>>>>>>>>> threads and with a new executor service, each cache can process >>>>>>>>>>>> their >>>>>>>>>>>> own FIFO commands concurrently. >>>>>>>>>>> +1000. This allows multiple updates from the same sender but to >>>>>>>>>>> different caches to be executed in parallel, and will speed thing >>>>>>>>>>> up. >>>>>>>>>>> >>>>>>>>>>> Do you intend to share a thread pool between the invocations >>>>>>>>>>> handlers of >>>>>>>>>>> the various caches, or do they each have their own thread pool ? >>>>>>>>>>> Or is >>>>>>>>>>> this configurable ? >>>>>>>>>>> >>>>>>>>>> That is question that cross my mind and I don't have any idea what >>>>>>>>>> would >>>>>>>>>> be the best. So, for now, I will leave the thread pool shared >>>>>>>>>> between >>>>>>>>>> the handlers. >>>>>>>>>> >>>>>>>>>> Never thought to make it configurable, but maybe that is the best >>>>>>>>>> option. And maybe, it should be possible to have different >>>>>>>>>> max-thread >>>>>>>>>> size per cache. For example: >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will share the same >>>>>>>>>> instance >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will create their own >>>>>>>>>> thread >>>>>>>>>> pool with max-threads equals to 1 >>>>>>>>>> >>>>>>>>> max-threads=1 .../> >>>>>>>>>> >>>>>>>>>> * all caches using this remote executor will create their own >>>>>>>>>> thread >>>>>>>>>> pool with max-threads equals to 1000 >>>>>>>>>> >>>>>>>>> max-thread=1000 .../> >>>>>>>>>> >>>>>>>>>> is this what you have in mind? comments? >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> Pedro >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss DataGrid QA >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From pedro at infinispan.org Fri Nov 14 04:49:36 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 14 Nov 2014 09:49:36 +0000 Subject: [infinispan-dev] Schema versions In-Reply-To: <5465CDBC.7000508@redhat.com> References: <5464909C.2050309@redhat.com> <5465CDBC.7000508@redhat.com> Message-ID: <5465D030.7020600@infinispan.org> On 11/14/2014 09:39 AM, Tristan Tarrant wrote: > I have unilaterally decided to bump all schema versions to 7.1. Less > confusing. > +1 > Tristan > > On 13/11/14 12:06, Tristan Tarrant wrote: >> Hi all, >> >> I have issued a PR [1] which bumps the core parser and schema to 7.1, >> since we're going to introduce schema changes there. >> Do you think we should bump all schemas in sync (including the >> cachestores) or shall we only do it when there are changes ? >> >> Tristan >> >> [1] https://github.com/infinispan/infinispan/pull/3069 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Fri Nov 14 05:41:21 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 14 Nov 2014 12:41:21 +0200 Subject: [infinispan-dev] Schema versions In-Reply-To: <5465D030.7020600@infinispan.org> References: <5464909C.2050309@redhat.com> <5465CDBC.7000508@redhat.com> <5465D030.7020600@infinispan.org> Message-ID: +1 On Fri, Nov 14, 2014 at 11:49 AM, Pedro Ruivo wrote: > > > On 11/14/2014 09:39 AM, Tristan Tarrant wrote: > > I have unilaterally decided to bump all schema versions to 7.1. Less > > confusing. > > > +1 > > > Tristan > > > > On 13/11/14 12:06, Tristan Tarrant wrote: > >> Hi all, > >> > >> I have issued a PR [1] which bumps the core parser and schema to 7.1, > >> since we're going to introduce schema changes there. > >> Do you think we should bump all schemas in sync (including the > >> cachestores) or shall we only do it when there are changes ? > >> > >> Tristan > >> > >> [1] https://github.com/infinispan/infinispan/pull/3069 > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141114/7a00a5ff/attachment.html From pierre.sutra at unine.ch Mon Nov 17 07:22:39 2014 From: pierre.sutra at unine.ch (Pierre Sutra) Date: Mon, 17 Nov 2014 13:22:39 +0100 Subject: [infinispan-dev] Total Order non-transactional cache In-Reply-To: <5460D063.3020806@infinispan.org> References: <5460D063.3020806@infinispan.org> Message-ID: <5469E88F.4070305@unine.ch> Hello Pedro, I read your design page with interest, and formulated a few remarks/questions below. Despite I do not know in details the internals, I Hope that they might be useful. Cheers, Pierre - It is unclear to me how the protocol execute reads, in particular regarding causality. If a reader waits for a single replica to answer, in case every write requires all replicas to answer, this is fine. However, it seems that a writer can return as soon as a single replica returns an acknowledgement. In such a case, it might be the case that a reader do not see its own modifications, if it retrieves data from a replica that did not apply the modifications yet. - Do you ensure impotency of commands inside ISPN ? In my understanding, it is necessary when switching from a view v1 to a view v2, as commands delivered at the end of v1 might be already executed. - I would call your replication protocol "virtual synchrony based" instead, as it is relying on the virtual synchrony abstraction provided by JGroups. On 10. 11. 14 15:49, Pedro Ruivo wrote: > Hi, > > FYI, I've just created a design page: > https://github.com/infinispan/infinispan/wiki/Total-Order-non-Transactional-Cache > > My plan is to implement it in 7.1 release. > > Feel free to comment. > > Cheers, > Pedro > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rory.odonnell at oracle.com Mon Nov 17 09:05:27 2014 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 17 Nov 2014 14:05:27 +0000 Subject: [infinispan-dev] Jigsaw early-access builds updated (build 38) Message-ID: <546A00A7.1000903@oracle.com> Hi Galder, JDK 9 Early Access with Project Jigsaw build b38 is available on java.net [1] The goal of Project Jigsaw [2] is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK. The early-access builds implement the changes described in JEP 220 [3] . The jrt file-system provider is not yet implemented. As of build 38, the extension mechanism has been removed. Please refer to Project Jigsaw's updated project pages [2] & [4] and Mark Reinhold's update [5] for further details. We are very interested in your experiences testing this build. Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list or through bug reports via bugs.java.com . Note: If you haven?t already subscribed to that mailing list then please do so first, otherwise your message will be discarded as spam. Rgds, Rory [1] https://jdk9.java.net/jigsaw/ [2] http://openjdk.java.net/projects/jigsaw/ [3] http://openjdk.java.net/jeps/220 [4] http://openjdk.java.net/projects/jigsaw/ea [5] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2014-November/003946.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141117/5ed94583/attachment-0001.html From galder at redhat.com Mon Nov 17 10:11:19 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 17 Nov 2014 16:11:19 +0100 Subject: [infinispan-dev] Feature request: manage and share a CacheManager across deployments on WildFly In-Reply-To: <54622776.6020009@redhat.com> References: <54622776.6020009@redhat.com> Message-ID: <8DB0D236-24B5-439E-87C6-850717754F54@redhat.com> On 11 Nov 2014, at 16:12, Tristan Tarrant wrote: > My proposal: > > make the Infinispan + JGroups subsystems which we develop for Infinispan > Server installable in any instance of WildFly. > They would obviously use a different namespace & slot to avoid conflict. ^ +1 > Bonus points if we can also make WildFly use our versions for its > clustering stuff. If we can find an easy way to test this, then yeah :) > > Tristan > > On 11/11/14 14:57, Sanne Grinovero wrote: >> On 10 November 2014 12:50, Galder Zamarre?o wrote: >>> @Paul, your input would be appreciated. My reply is below. >>> >>> On 07 Nov 2014, at 16:47, Sanne Grinovero wrote: >>> >>>> I'm witnessing users of Hibernate Search who say they deploy several >>>> dozens of JPA applications using Hibernate Search in a single >>>> container, and when evaluating usage of Infnispan for index storage >>>> they would like them all to share the CacheManager, rather than >>>> starting a new CacheManager for each and then have to worry about >>>> things like JGroups isolation or rather reuse via FORK. >>>> >>>> This is easy to achieve by configuring the CacheManager in the WildFly >>>> configuration, and then looking it up by JNDI name.. but is not easy >>>> at all to achieve if you want to use the custom modules which we >>>> deliver to allow using a different Infinispan version of what's >>>> included in WildFly. >>>> >>>> That's nasty, because we ultimately want people to use our modules and >>>> leave the ones in WildFly for its internal usage. >>>> >>>> It would be nice if the team could include in the modules.zip a way to >>>> pre-start configured caches, and instructions to mark their >>>> deployments as depending on this service. Would be useful to then >>>> connect this to monitoring too.. >>> If all Hibernate Search apps are using the same cache manager, won?t they have cache conflicts? Or are these caches named in such way that they can run within a single cache manager? >> Like different deployment might want to share the same database, >> sometimes people want to share the index. It should be up to >> configuration to be able to isolate vs share an index.. and we're >> flexible about that as you can use different cache names, or different >> index names sharing the same caches. >> >>> The simplest thing I can think of to achieve this would be for an optional service to start a cache manager with a given configuration, and bind that to JNDI. That would be something we can potentially provide, with JMX monitoring at best. >> +1 >> That's what I think we're missing. Sounds like very useful and not a >> big effort to provide. >> >>> However, if you want these cache manager to be registered into Wildfly?s domain model for better monitoring, I don?t really know if this would be something we can just provide without any serious hooks into Wildfly :|, and then it all gets quite complicated IMO because you need to start maintaining yet another integration layer with Wildfly. >> Keep in mind that we want people to use the Infinispan jars, not the >> WildFly version of Infinispan when it comes to custom/direct usage. >> For example for EAP users, it's unsupported to use the included >> Infinispan version so going via the standard configuration files is >> not an option. So I'm not sure which integration options we have: I'm >> expecting this to be provided purely as an add-on strongly coupled to >> the Infinispan release. So I agree it should be highly decoupled from >> WildFly code. >> >>> TBH, it?d be good to hear from Paul et al since they know WF best and see what ideas they might have. >> +1 Looking forward. >> >> Sanne >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From isavin at redhat.com Tue Nov 18 02:05:19 2014 From: isavin at redhat.com (Ion Savin) Date: Tue, 18 Nov 2014 09:05:19 +0200 Subject: [infinispan-dev] Infinispan 7.0.1.Final is now available! Message-ID: <546AEFAF.3060905@redhat.com> Hi all, Infinispan 7.0.1.Final is now available! For details please consult: http://blog.infinispan.org/2014/11/infinispan-701final-released.html Thanks to everyone involved in this release! From pedro at infinispan.org Tue Nov 18 05:53:12 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Tue, 18 Nov 2014 10:53:12 +0000 Subject: [infinispan-dev] Total Order non-transactional cache In-Reply-To: <5469E88F.4070305@unine.ch> References: <5460D063.3020806@infinispan.org> <5469E88F.4070305@unine.ch> Message-ID: <546B2518.9080205@infinispan.org> Hi Pierre, Thanks for the feedback. My comments are inline. Cheers, Pedro On 11/17/2014 12:22 PM, Pierre Sutra wrote: > Hello Pedro, > > I read your design page with interest, and formulated a few > remarks/questions below. Despite I do not know in details the internals, > I Hope that they might be useful. > > Cheers, > Pierre > > - It is unclear to me how the protocol execute reads, in particular > regarding causality. If a reader waits for a single replica to answer, > in case every write requires all replicas to answer, this is fine. > However, it seems that a writer can return as soon as a single replica > returns an acknowledgement. In such a case, it might be the case that a > reader do not see its own modifications, if it retrieves data from a > replica that did not apply the modifications yet. You're right. The writer must wait for all the replies except if the cache is full replicated (in this case, it can wait for the self-deliver). > > - Do you ensure impotency of commands inside ISPN ? In my understanding, > it is necessary when switching from a view v1 to a view v2, as commands > delivered at the end of v1 might be already executed. I'm lost here. can you be clear? are you talking about JGroups view or Infinispan cache topology? On the latter matters and it will delivered the cache topology changes in total order. So, everybody receives the same order of events. > > - I would call your replication protocol "virtual synchrony based" > instead, as it is relying on the virtual synchrony abstraction provided > by JGroups. I don't think so. If I recall correctly, virtual synchrony ensures that if a message is sent in view *v* then it is delivered in view *v*. First, that cases is not necessary since we retry the commands received in different topologies. Second, the protocol relies on the order in which the operations are delivered. > > > On 10. 11. 14 15:49, Pedro Ruivo wrote: >> Hi, >> >> FYI, I've just created a design page: >> https://github.com/infinispan/infinispan/wiki/Total-Order-non-Transactional-Cache >> >> My plan is to implement it in 7.1 release. >> >> Feel free to comment. >> >> Cheers, >> Pedro >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From pierre.sutra at unine.ch Tue Nov 18 06:47:05 2014 From: pierre.sutra at unine.ch (Pierre Sutra) Date: Tue, 18 Nov 2014 12:47:05 +0100 Subject: [infinispan-dev] Total Order non-transactional cache In-Reply-To: <546B2518.9080205@infinispan.org> References: <5460D063.3020806@infinispan.org> <5469E88F.4070305@unine.ch> <546B2518.9080205@infinispan.org> Message-ID: <546B31B9.6000505@unine.ch> Hi Pedro, I added a few comments inline to your responses. Cheers, Pierre > You're right. The writer must wait for all the replies except if the > cache is full replicated (in this case, it can wait for the self-deliver). Indeed, in the case of full replication, returning after the local replica applies the update is fine. Some precautions must be taken however if, for instance, the client is accessing the system via HotRod in round-robin mode. > - Do you ensure impotency of commands inside ISPN ? In my understanding, > it is necessary when switching from a view v1 to a view v2, as commands > delivered at the end of v1 might be already executed. > I'm lost here. can you be clear? are you talking about JGroups view or > Infinispan cache topology? > > On the latter matters and it will delivered the cache topology changes > in total order. So, everybody receives the same order of events. Sorry if I was not very clear. I was asking if an Infinispan replica knows whether it has applied some command or not. In the case where FLUSH is not at the top of the stack this might be necessary between two view changes, no ? >> - I would call your replication protocol "virtual synchrony based" >> instead, as it is relying on the virtual synchrony abstraction provided >> by JGroups. > I don't think so. If I recall correctly, virtual synchrony ensures that > if a message is sent in view *v* then it is delivered in view *v*. I believe that this is not the case, as processes have to simply agree on the set of messages they receive in the view they are currently leaving. > First, that cases is not necessary since we retry the commands received > in different topologies. Second, the protocol relies on the order in > which the operations are delivered. I see, but it seems to me in such a case you would need either idempotent commands, or that processes agree on the message to deliver before the view changes using FLUSH. In the latter case, I was thinking that the abstraction implemented by JGroups is virtual synchrony [1]. |1] http://www.jgroups.org/manual/html/user-advanced.html#d0e3025 (Section 5.7) From ttarrant at redhat.com Tue Nov 18 07:54:51 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 18 Nov 2014 13:54:51 +0100 Subject: [infinispan-dev] 7.0.x branched, master is now 7.1.0-SNAPSHOT Message-ID: <546B419B.7090506@redhat.com> Hi all, just to let you know I have branched 7.0.x and made master 7.1.0-SNAPSHOT. Remember, keep these branches as CI-clean as possible. Tristan From ttarrant at redhat.com Tue Nov 18 10:10:06 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 18 Nov 2014 16:10:06 +0100 Subject: [infinispan-dev] Infinispan 7.1.x: codename proposals Message-ID: <546B614E.1090902@redhat.com> It's that time of the development cycle when we get to decide the most important feature that will be part of our next release: the codename. As usual, it must be chosen among your favourite fermented combination of barley, hops, yeast and water aka beer. A bit of history for reference: 4.0 Starobrno 4.1 Radegast 4.2 Ursus 5.0 Pagoa 5.1 Brahma 5.2 Delirium 5.3 Tactical Nuclear Penguin 6.0 Infinium 7.0 Guinness And a selection of nominees from the past: Drake's Hopocalypse http://www.beeradvocate.com/beer/profile/3835/46649/?ba=kaseydad Aventinus http://www.beeradvocate.com/beer/profile/72/224/ Dragonstooth http://www.beeradvocate.com/beer/profile/700/2023/ Chocolate Rain http://www.beeradvocate.com/beer/profile/16866/53728/ The above are just suggestions, so feel free to add your own. Nominee selection will end next week at which point we'll have the poll open for another week. Thanks Tristan From isavin at redhat.com Wed Nov 19 08:05:23 2014 From: isavin at redhat.com (Ion Savin) Date: Wed, 19 Nov 2014 15:05:23 +0200 Subject: [infinispan-dev] Infinispan 7.0.2.Final is now available! Message-ID: <546C9593.2070001@redhat.com> Hi all, Infinispan 7.0.2.Final is now available! For details please consult: http://blog.infinispan.org/2014/11/infinispan-702final-released.html Thanks to everyone involved in this release! From bban at redhat.com Wed Nov 19 11:17:31 2014 From: bban at redhat.com (Bela Ban) Date: Wed, 19 Nov 2014 17:17:31 +0100 Subject: [infinispan-dev] Total Order non-transactional cache In-Reply-To: <546B2518.9080205@infinispan.org> References: <5460D063.3020806@infinispan.org> <5469E88F.4070305@unine.ch> <546B2518.9080205@infinispan.org> Message-ID: <546CC29B.6030201@redhat.com> On 11/18/2014 11:53 AM, Pedro Ruivo wrote: > Hi Pierre, > > Thanks for the feedback. > > My comments are inline. > > Cheers, > Pedro > > On 11/17/2014 12:22 PM, Pierre Sutra wrote: >> Hello Pedro, >> >> I read your design page with interest, and formulated a few >> remarks/questions below. Despite I do not know in details the internals, >> I Hope that they might be useful. >> >> Cheers, >> Pierre >> >> - It is unclear to me how the protocol execute reads, in particular >> regarding causality. If a reader waits for a single replica to answer, >> in case every write requires all replicas to answer, this is fine. >> However, it seems that a writer can return as soon as a single replica >> returns an acknowledgement. In such a case, it might be the case that a >> reader do not see its own modifications, if it retrieves data from a >> replica that did not apply the modifications yet. > > You're right. The writer must wait for all the replies except if the > cache is full replicated (in this case, it can wait for the self-deliver). > >> >> - Do you ensure impotency of commands inside ISPN ? In my understanding, >> it is necessary when switching from a view v1 to a view v2, as commands >> delivered at the end of v1 might be already executed. > > I'm lost here. can you be clear? are you talking about JGroups view or > Infinispan cache topology? > > On the latter matters and it will delivered the cache topology changes > in total order. So, everybody receives the same order of events. > >> >> - I would call your replication protocol "virtual synchrony based" >> instead, as it is relying on the virtual synchrony abstraction provided >> by JGroups. > > I don't think so. If I recall correctly, virtual synchrony ensures that > if a message is sent in view *v* then it is delivered in view *v*. Are you using virtual synchrony at all ? IIRC there's no FLUSH protocol in the default configs. TBH, I don't recommend this anyway. > First, that cases is not necessary since we retry the commands received > in different topologies. Second, the protocol relies on the order in > which the operations are delivered. > >> >> >> On 10. 11. 14 15:49, Pedro Ruivo wrote: >>> Hi, >>> >>> FYI, I've just created a design page: >>> https://github.com/infinispan/infinispan/wiki/Total-Order-non-Transactional-Cache >>> >>> My plan is to implement it in 7.1 release. >>> >>> Feel free to comment. >>> >>> Cheers, >>> Pedro >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban Lead JGroups / Clustering Team JBoss From dan.berindei at gmail.com Thu Nov 20 11:35:33 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 20 Nov 2014 18:35:33 +0200 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: <5464AD3F.5000805@redhat.com> References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: I guess you could say this is a regression, this wouldn't have been possible when the version was part of the value :) But I agree an application is very unlikely call replaceWithVersion with the same value as before, so +1 to document it for now and implement replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. Cheers Dan On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: > I agree with Galder, fixing it is not worth the cost. > > Actually, there are often bugs that I'd call rather 'quirks', not > honoring the ConcurrentMap contract (recently we have discussed with Dan > [1] and [2]) which are quite complex to fix. Another one that's > considered not a bug is that a read does not have transactional semantics. > Galder, where will you document that? I think that special page in > documentation should accumulate such cases, linked to JIRAs for case > that eventually we'll resolve them (with that glorious MVCC). And of > course, link from javadoc to this document (though I am not sure whether > we can correctly keep that in sync with latest release. Could we have a > redirection from http://infinispan.org/docs/latest to > http://infinispan.org/docs/7.0.x/ ? > > Radim > > [1] https://issues.jboss.org/browse/ISPN-3918 > [2] https://issues.jboss.org/browse/ISPN-4286 > > On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: > > Hi all, > > > > Re: https://issues.jboss.org/browse/ISPN-4972 > > > > Embedded cache provides atomicity of a replace() call passing in the > previous value. This limitation might be lifted when we adopt Java 8 and we > can pass in a lambda or similar, which can be executed right when the value > is compared now, and if it returns true it?s applied. The lambda could > compare both value and metadata for example. > > > > Anyway, given the current status, I?m considering whether it?s worth > fixing this particular issue. Fixing the issue would require adding some > kind of locking in the Hot Rod server so that the version retrieval, > comparison and replace call, can all happen atomically. > > > > This is not ideal, and on top of that, as Radim said, the chances of > this happening in real life are limited, or more precisely it?s effects are > minimal. In other words, if two concurrent threads call replace with the > same value, the end result is that the new value would be stored, but as a > result of the code, both replaces would return true which is not strictly > right. > > > > I?d rather document this than add unnecessary locking in the Hot Rod > server where it deals with the versioned replace call. > > > > Thoughts? > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141120/3e2eb39e/attachment.html From tuomas.kiviaho at iki.fi Fri Nov 21 04:50:04 2014 From: tuomas.kiviaho at iki.fi (Tuomas Kiviaho) Date: Fri, 21 Nov 2014 02:50:04 -0700 (MST) Subject: [infinispan-dev] Example of TUNNEL protocol Message-ID: <1416563404416-4029986.post@n3.nabble.com> Hi, I'm trying to use TUNNEL protocol. JGroupsTransport seems to hang while waiting forever because viewAccepted is never triggered. I can see from JGossipRouter JMX that I've reached it, but it's unclear why there is no VIEW_CHANGE response. Is there an example of using TUNNEL protocol with Infinispan. I only found out about TCPGOSSIP but I've replaced it with JDBC_PING. I guess for clarity there should be some kind of await(timeout) instead of await() and exception message could clarify the situation a bit for beginners like me. -- Tuomas -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Example-of-TUNNEL-protocol-tp4029986.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From bban at redhat.com Fri Nov 21 05:22:49 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 21 Nov 2014 11:22:49 +0100 Subject: [infinispan-dev] Example of TUNNEL protocol In-Reply-To: <1416563404416-4029986.post@n3.nabble.com> References: <1416563404416-4029986.post@n3.nabble.com> Message-ID: <546F1279.1030906@redhat.com> You're not posting details about what went wrong, a stack trace, your configuration or the version of Infinispan/JGroups you're using. I suggest try a JGroups standalone app like Chat or Draw with tunnel.xml (both are shipped with JGroups), and continue on the JGroups mailing list. Once that works, you can translate your config to Infinispan and it should work there, too. On 21/11/14 10:50, Tuomas Kiviaho wrote: > Hi, > > I'm trying to use TUNNEL protocol. JGroupsTransport seems to hang while > waiting forever because viewAccepted is never triggered. I can see from > JGossipRouter JMX that I've reached it, but it's unclear why there is no > VIEW_CHANGE response. Is there an example of using TUNNEL protocol with > Infinispan. I only found out about TCPGOSSIP but I've replaced it with > JDBC_PING. > > I guess for clarity there should be some kind of await(timeout) instead of > await() and exception message could clarify the situation a bit for > beginners like me. > > -- > Tuomas -- Bela Ban, JGroups lead (http://www.jgroups.org) From andreas.kruthoff at nexustelecom.com Fri Nov 21 05:28:20 2014 From: andreas.kruthoff at nexustelecom.com (Andreas Kruthoff) Date: Fri, 21 Nov 2014 11:28:20 +0100 Subject: [infinispan-dev] AsyncCacheWriter is dead Message-ID: <546F13C4.80700@nexustelecom.com> Hi dev I'm running infinispan-7.0.1 (will soon upgrade to 7.0.2). I've configured 2 distributed caches, both are similar. Example cache1: ... ... Today, I've seen that one of the two caches suddenly doesn't exist anymore somehow (JMX access to numberOfEntries returns nothing?!). After looking into the application logfile, I've found this: 2014-11-19 13:45:47,310 ERROR [AsyncStoreCoordinator-infinicache-lbd-imei] AsyncCacheWriter.java:267 ISPN000055: Unexpected error in AsyncStoreCoordinator thread. AsyncCacheWriter is dead! org.infinispan.util.concurrent.TimeoutException: ISPN000233: Waiting on work threads latch failed: java.util.concurrent.CountDownLatch at 6e30928d[Count = 1] at org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreCoordinator.workerThreadsAwait(AsyncCacheWriter.java:297) ~[infinispan-embedded.jar:7.0.1.Final] at org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreCoordinator.run(AsyncCacheWriter.java:254) ~[infinispan-embedded.jar:7.0.1.Final] at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_72] No error before nor after this entry. It looks like it's related to to me, so I'll remove that setting. But I'd like to write asynchronously for best performance. Any help would be appreciated! -andreas This email and any attachment may contain confidential information which is intended for use only by the addressee(s) named above. If you received this email by mistake, please notify the sender immediately, and delete the email from your system. You are prohibited from copying, disseminating or otherwise using the email or any attachment. From rvansa at redhat.com Fri Nov 21 05:38:53 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 21 Nov 2014 11:38:53 +0100 Subject: [infinispan-dev] Functionality based on configuration Message-ID: <546F163D.2080300@redhat.com> Hi, when thinking about strong/eventual consistency and ease of configuration, I was considering whether cache configuration should affect results of operations at all (one example could be read committed/repeatable read, or write skew check). It would seem to me that the configuration would be simpler, and user options more rich if those options that change the result of operation would be purely API-wise (based on flags or method arguments) and the configuration could only change the performance (defining cache store will slow down some operations) or availability of these operations (you cannot start a transaction when the manager is not defined), not the outcome. E.g. is there really a point to be able to change sync/async configuration of the cache when the code expects strong consistency? If it can handle that, it should grab cache.withFlags(FORCE_ASYNCHRONOUS) and work on that. Another example is in the strong/eventual consistency - if I want to see the cache as strongly consistent, I can't read from backup owners [1]. Currently there is no option to force reading from primary owner, therefore, I was wondering whether it should be configurable (together with staggered gets policy - not that this would be implemented) or whether that should be specified as a flag - and it seems to me that it should not be configurable as the administrator could remove the flag from the config (and see increased performance) but eventually a race could occur where this flag matters and the application will behave incorrectly. WDYT? This question is obviously rather for changes on the roadmap (I'd say along with leaving ConcurrentMap interface) than any immediate actions in versions 7.x or 8.x. Radim [1] https://issues.jboss.org/browse/ISPN-4995 -- Radim Vansa JBoss DataGrid QA From tuomas.kiviaho at iki.fi Fri Nov 21 06:31:49 2014 From: tuomas.kiviaho at iki.fi (Tuomas Kiviaho) Date: Fri, 21 Nov 2014 04:31:49 -0700 (MST) Subject: [infinispan-dev] Example of TUNNEL protocol In-Reply-To: <546F1279.1030906@redhat.com> References: <1416563404416-4029986.post@n3.nabble.com> <546F1279.1030906@redhat.com> Message-ID: <1416569509757-4029990.post@n3.nabble.com> Hi, Bela Ban wrote > You're not posting details about what went wrong, a stack trace, your > configuration or the version of Infinispan/JGroups you're using. Infinispan 7.0.0.Final and JGroups 3.6.0.Final. Stack doesn't tell much because the rest is just the usual infinispan config. Bela Ban wrote > I suggest try a JGroups standalone app like Chat or Draw with tunnel.xml > (both are shipped with JGroups), and continue on the JGroups mailing list. > > Once that works, you can translate your config to Infinispan and it > should work there, too. Thanks for pointing me out (to the obvious). I didn't have these... .. along with my config that was previously just TUNNEL/JDBC_PING which where set as per documentation. I can see that the pbcast.GMS finally woke up to it's task. -- Tuomas -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Example-of-TUNNEL-protocol-tp4029986p4029990.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From dan.berindei at gmail.com Mon Nov 24 06:44:22 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 24 Nov 2014 13:44:22 +0200 Subject: [infinispan-dev] Functionality based on configuration In-Reply-To: <546F163D.2080300@redhat.com> References: <546F163D.2080300@redhat.com> Message-ID: Hi Radim First of all, I don't think this is feasible. For example, read-committed vs repeatable read changes how the entries are stored in the transaction context, so you can't have a repeatable-read get() in the same transaction after a read-committed get. Write skew check also requires versions, so you couldn't skip updating the version in any optimistic cache just in case some transaction might need it in the future. We also can't mix non-transactional, transactional asynchronous, and transactional synchronous operations on the same cache, as they would break each other's consistency. In fact, Infinispan 4.x allowed both transactional and non-transactional operations on the same cache, but at some point we realized that there's no way to ensure the consistency of transactions if there are overlapping with non-transactional operations. I agree that the configuration is very tightly coupled with the code that uses it, so settings that can break the application should be more obvious. We should discuss how we can improve this at the clustering meeting in Berlin. But I think forgetting to add a flag in some part of the application is just as likely as the administrator making a mistake in the configuration, and having different consistency models in the same cache can also make code harder to understand. So instead of allowing flags to control consistency, I would rather add methods for the user to assert that the cache has certain properties. Cheers Dan On Fri, Nov 21, 2014 at 12:38 PM, Radim Vansa wrote: > Hi, > > when thinking about strong/eventual consistency and ease of > configuration, I was considering whether cache configuration should > affect results of operations at all (one example could be read > committed/repeatable read, or write skew check). > > It would seem to me that the configuration would be simpler, and user > options more rich if those options that change the result of operation > would be purely API-wise (based on flags or method arguments) and the > configuration could only change the performance (defining cache store > will slow down some operations) or availability of these operations (you > cannot start a transaction when the manager is not defined), not the > outcome. > > E.g. is there really a point to be able to change sync/async > configuration of the cache when the code expects strong consistency? If > it can handle that, it should grab cache.withFlags(FORCE_ASYNCHRONOUS) > and work on that. > Another example is in the strong/eventual consistency - if I want to see > the cache as strongly consistent, I can't read from backup owners [1]. > Currently there is no option to force reading from primary owner, > therefore, I was wondering whether it should be configurable (together > with staggered gets policy - not that this would be implemented) or > whether that should be specified as a flag - and it seems to me that it > should not be configurable as the administrator could remove the flag > from the config (and see increased performance) but eventually a race > could occur where this flag matters and the application will behave > incorrectly. > > WDYT? This question is obviously rather for changes on the roadmap (I'd > say along with leaving ConcurrentMap interface) than any immediate > actions in versions 7.x or 8.x. > > Radim > > [1] https://issues.jboss.org/browse/ISPN-4995 > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141124/efdf940f/attachment-0001.html From rvansa at redhat.com Mon Nov 24 08:07:55 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 24 Nov 2014 14:07:55 +0100 Subject: [infinispan-dev] Functionality based on configuration In-Reply-To: References: <546F163D.2080300@redhat.com> Message-ID: <54732DAB.6040101@redhat.com> On 11/24/2014 12:44 PM, Dan Berindei wrote: > Hi Radim > > First of all, I don't think this is feasible. For example, > read-committed vs repeatable read changes how the entries are stored > in the transaction context, so you can't have a repeatable-read get() > in the same transaction after a read-committed get. Write skew check > also requires versions, so you couldn't skip updating the version in > any optimistic cache just in case some transaction might need it in > the future. The isolation level is a property of transaction, not single operation: you should specify this ahead in the transactional context before doing any operations (I would imagine API like AdvancedCache.getTxCache(LockingMode.OPTIMISTIC, IsolationLevel.REPEATABLE_READ)). > > We also can't mix non-transactional, transactional asynchronous, and > transactional synchronous operations on the same cache, as they would > break each other's consistency. In fact, Infinispan 4.x allowed both > transactional and non-transactional operations on the same cache, but > at some point we realized that there's no way to ensure the > consistency of transactions if there are overlapping with > non-transactional operations. Just out of curiosity - Hazelcast allows mixing transactional and non-transactional code, do you know how they do it? Coherence has also all transactions API-wise (but I was not able to get them working). But I agree that allowing both tx and non-tx operations could complicate things a lot (the number of cases that need to be designed and tested grows exponentially with each option). > > I agree that the configuration is very tightly coupled with the code > that uses it, so settings that can break the application should be > more obvious. We should discuss how we can improve this at the > clustering meeting in Berlin. > > But I think forgetting to add a flag in some part of the application > is just as likely as the administrator making a mistake in the > configuration, and having different consistency models in the same > cache can also make code harder to understand. So instead of allowing > flags to control consistency, I would rather add methods for the user > to assert that the cache has certain properties. IMO the probability that two people (programmer who did not write documentation and administrator who did not read the code) make a mistake because of configuration is still larger than the one of single person. Thanks for comments Radim > > Cheers > Dan > > > On Fri, Nov 21, 2014 at 12:38 PM, Radim Vansa > wrote: > > Hi, > > when thinking about strong/eventual consistency and ease of > configuration, I was considering whether cache configuration should > affect results of operations at all (one example could be read > committed/repeatable read, or write skew check). > > It would seem to me that the configuration would be simpler, and user > options more rich if those options that change the result of operation > would be purely API-wise (based on flags or method arguments) and the > configuration could only change the performance (defining cache store > will slow down some operations) or availability of these > operations (you > cannot start a transaction when the manager is not defined), not the > outcome. > > E.g. is there really a point to be able to change sync/async > configuration of the cache when the code expects strong > consistency? If > it can handle that, it should grab cache.withFlags(FORCE_ASYNCHRONOUS) > and work on that. > Another example is in the strong/eventual consistency - if I want > to see > the cache as strongly consistent, I can't read from backup owners [1]. > Currently there is no option to force reading from primary owner, > therefore, I was wondering whether it should be configurable (together > with staggered gets policy - not that this would be implemented) or > whether that should be specified as a flag - and it seems to me > that it > should not be configurable as the administrator could remove the flag > from the config (and see increased performance) but eventually a race > could occur where this flag matters and the application will behave > incorrectly. > > WDYT? This question is obviously rather for changes on the roadmap > (I'd > say along with leaving ConcurrentMap interface) than any immediate > actions in versions 7.x or 8.x. > > Radim > > [1] https://issues.jboss.org/browse/ISPN-4995 > > -- > Radim Vansa > > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141124/2d77419d/attachment.html From rory.odonnell at oracle.com Mon Nov 24 09:11:34 2014 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 24 Nov 2014 14:11:34 +0000 Subject: [infinispan-dev] Jigsaw early-access builds updated (JDK 9 build 40) Message-ID: <54733C96.6050808@oracle.com> Hi Galder, JDK 9 Early Access with Project Jigsaw build b40 is available for download at : https://jdk9.java.net/jigsaw/ The goal of Project Jigsaw [2] is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK. The main change in this build is that it includes the jrt: file-system provider, so it now implements all of the changes described in JEP 220. Please refer to Project Jigsaw's updated project pages [2] & [4] and Mark Reinhold's update [5] for further details. We are very interested in your experiences testing this build. Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list or through bug reports via bugs.java.com . Note: If you haven?t already subscribed to that mailing list then please do so first, otherwise your message will be discarded as spam. Rgds, Rory [1] https://jdk9.java.net/jigsaw/ [2] http://openjdk.java.net/projects/jigsaw/ [3] http://openjdk.java.net/jeps/220 [4] http://openjdk.java.net/projects/jigsaw/ea [5] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2014-November/004014.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141124/7dc62c2a/attachment.html From dan.berindei at gmail.com Mon Nov 24 09:46:19 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 24 Nov 2014 16:46:19 +0200 Subject: [infinispan-dev] AsyncCacheWriter is dead In-Reply-To: <546F13C4.80700@nexustelecom.com> References: <546F13C4.80700@nexustelecom.com> Message-ID: Hi Andreas Have you tried to see much worse your performance is without write-behind? I wouldn't expect write-behind to help a lot with SingleFileStore (file-store): since it never calls fsync, writes only go to the OS write-behind buffers before returning. Regarding the error per se, I'm afraid the error message doesn't say much. You should enable DEBUG logging for org.infinispan.persistence and see if the async writer threads log any "Failed to process async modifications" messages and create a bug in JIRA. It might also help to get a thread dump after you've seen the error. Cheers Dan On Fri, Nov 21, 2014 at 12:28 PM, Andreas Kruthoff < andreas.kruthoff at nexustelecom.com> wrote: > Hi dev > > > I'm running infinispan-7.0.1 (will soon upgrade to 7.0.2). > > I've configured 2 distributed caches, both are similar. > > Example cache1: > ... > async-marshalling="false" owners="2" > l1-lifespan="600000" l1-cleanup-interval="60000" statistics="false"> > > > > path="/data1/infini-lbd" > preload="true" shared="true" purge="false"> > > > > > ... > > Today, I've seen that one of the two caches suddenly doesn't exist > anymore somehow (JMX access to numberOfEntries returns nothing?!). > > After looking into the application logfile, I've found this: > > 2014-11-19 13:45:47,310 ERROR > [AsyncStoreCoordinator-infinicache-lbd-imei] AsyncCacheWriter.java:267 > ISPN000055: Unexpected error in AsyncStoreCoordinator thread. > AsyncCacheWriter is dead! > org.infinispan.util.concurrent.TimeoutException: ISPN000233: Waiting on > work threads latch failed: > java.util.concurrent.CountDownLatch at 6e30928d[Count = 1] > at > > org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreCoordinator.workerThreadsAwait(AsyncCacheWriter.java:297) > ~[infinispan-embedded.jar:7.0.1.Final] > at > > org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreCoordinator.run(AsyncCacheWriter.java:254) > ~[infinispan-embedded.jar:7.0.1.Final] > at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_72] > > > > No error before nor after this entry. It looks like it's related to > to me, so I'll remove that setting. > > But I'd like to write asynchronously for best performance. > > Any help would be appreciated! > > -andreas > > This email and any attachment may contain confidential information which > is intended for use only by the addressee(s) named above. If you received > this email by mistake, please notify the sender immediately, and delete the > email from your system. You are prohibited from copying, disseminating or > otherwise using the email or any attachment. > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141124/8962f2fb/attachment-0001.html From dan.berindei at gmail.com Mon Nov 24 10:54:43 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 24 Nov 2014 17:54:43 +0200 Subject: [infinispan-dev] Functionality based on configuration In-Reply-To: <54732DAB.6040101@redhat.com> References: <546F163D.2080300@redhat.com> <54732DAB.6040101@redhat.com> Message-ID: Hi Radim Please make sure you reply in plain text mode, the replies got a bit mixed up. On Mon, Nov 24, 2014 at 3:07 PM, Radim Vansa wrote: > > On 11/24/2014 12:44 PM, Dan Berindei wrote: > > Hi Radim > >> First of all, I don't think this is feasible. For example, read-committed vs repeatable read changes how the entries are stored in the transaction context, so you can't have a repeatable-read get() in the same transaction after a read-committed get. Write skew check also requires versions, so you couldn't skip updating the version in any optimistic cache just in case some transaction might need it in the future. > > > The isolation level is a property of transaction, not single operation: you should specify this ahead in the transactional context before doing any operations (I would imagine API like AdvancedCache.getTxCache(LockingMode.OPTIMISTIC, IsolationLevel.REPEATABLE_READ)). > How would you handle something like this? public void someMethod() { tm.begin() txCache = manager.getCache().getTxCache(LockingMode.OPTIMISTIC, IsolationLevel.REPEATABLE_READ); txCache.put("k1", "v1"); anotherMethod(); tm.commit() } public void anotherMethod() { nontxCache = manager.getCache().getNonTxCache(); nontxCache.put("k2", "v2"); } > >> We also can't mix non-transactional, transactional asynchronous, and transactional synchronous operations on the same cache, as they would break each other's consistency. In fact, Infinispan 4.x allowed both transactional and non-transactional operations on the same cache, but at some point we realized that there's no way to ensure the consistency of transactions if there are overlapping with non-transactional operations. > > > Just out of curiosity - Hazelcast allows mixing transactional and non-transactional code, do you know how they do it? Coherence has also all transactions API-wise (but I was not able to get them working). But I agree that allowing both tx and non-tx operations could complicate things a lot (the number of cases that need to be designed and tested grows exponentially with each option). > I suspect they use the same locking + replication strategy for non-tx caches as they use for tx caches, just without the link to an external transaction. I wish we would do the same, but I'm not sure we could keep the performance as good as the actual non-tx performance. > >> I agree that the configuration is very tightly coupled with the code that uses it, so settings that can break the application should be more obvious. We should discuss how we can improve this at the clustering meeting in Berlin. >> >> But I think forgetting to add a flag in some part of the application is just as likely as the administrator making a mistake in the configuration, and having different consistency models in the same cache can also make code harder to understand. So instead of allowing flags to control consistency, I would rather add methods for the user to assert that the cache has certain properties. > > > IMO the probability that two people (programmer who did not write documentation and administrator who did not read the code) make a mistake because of configuration is still larger than the one of single person. > Who said there's only one programmer? :) Even if there is a single person writing (or reading) the code, I think it's better to have a single place where you can look and see how a cache is expected to behave instead of having to check all the places where that cache is used. And a paranoid programmer can protect himself from the administrator by configuring the cache programmatically... > Thanks for comments > > Radim > > > > Cheers > Dan > > > On Fri, Nov 21, 2014 at 12:38 PM, Radim Vansa wrote: >> >> Hi, >> >> when thinking about strong/eventual consistency and ease of >> configuration, I was considering whether cache configuration should >> affect results of operations at all (one example could be read >> committed/repeatable read, or write skew check). >> >> It would seem to me that the configuration would be simpler, and user >> options more rich if those options that change the result of operation >> would be purely API-wise (based on flags or method arguments) and the >> configuration could only change the performance (defining cache store >> will slow down some operations) or availability of these operations (you >> cannot start a transaction when the manager is not defined), not the >> outcome. >> >> E.g. is there really a point to be able to change sync/async >> configuration of the cache when the code expects strong consistency? If >> it can handle that, it should grab cache.withFlags(FORCE_ASYNCHRONOUS) >> and work on that. >> Another example is in the strong/eventual consistency - if I want to see >> the cache as strongly consistent, I can't read from backup owners [1]. >> Currently there is no option to force reading from primary owner, >> therefore, I was wondering whether it should be configurable (together >> with staggered gets policy - not that this would be implemented) or >> whether that should be specified as a flag - and it seems to me that it >> should not be configurable as the administrator could remove the flag >> from the config (and see increased performance) but eventually a race >> could occur where this flag matters and the application will behave >> incorrectly. >> >> WDYT? This question is obviously rather for changes on the roadmap (I'd >> say along with leaving ConcurrentMap interface) than any immediate >> actions in versions 7.x or 8.x. >> >> Radim >> >> [1] https://issues.jboss.org/browse/ISPN-4995 >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Radim Vansa > JBoss DataGrid QA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon Nov 24 10:59:49 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 24 Nov 2014 17:59:49 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-11-24 Message-ID: For people who couldn't attend, the minutes are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-11-24-15.02.log.html Cheers Dan From ttarrant at redhat.com Mon Nov 24 11:20:14 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 24 Nov 2014 17:20:14 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-11-24 In-Reply-To: References: Message-ID: <54735ABE.9030101@redhat.com> Hi all, my update: worked on a bunch of PRs: - ISPN-5009 Rebase server to WildFly 8.2 - ISPN-4863 Include domain mode in server - ISPN-4961 Bump parsers and schemas to 7.1 I'm also nearly done on: - ISPN-4919 Cache templates And I'm also playing with: - ISPN-5012 ClusterRegistry as a service cache provider - ISPN-5013 Server-side scripting using JSR-223 (javax.script) Last week I was in Udine at the NoSQLDay [1] where I presented Infinispan 7's partition handling to a varied audience which was well received and I got a lot of questions. I also met up with Bela, Ugo Landini and Fabio Marinelli where we discussed RAFT, its implementation in JGroups and the advantages this could bring to Infinispan. Tristan [1] http://2014.nosqlday.it On 24/11/14 16:59, Dan Berindei wrote: > For people who couldn't attend, the minutes are here: > > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-11-24-15.02.log.html > > Cheers > Dan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From bban at redhat.com Mon Nov 24 14:57:20 2014 From: bban at redhat.com (Bela Ban) Date: Mon, 24 Nov 2014 20:57:20 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-11-24 In-Reply-To: <54735ABE.9030101@redhat.com> References: <54735ABE.9030101@redhat.com> Message-ID: <54738DA0.5000402@redhat.com> Do you have the slides of your talk available ? On 24/11/14 17:20, Tristan Tarrant wrote: > Hi all, > > my update: > worked on a bunch of PRs: > - ISPN-5009 Rebase server to WildFly 8.2 > - ISPN-4863 Include domain mode in server > - ISPN-4961 Bump parsers and schemas to 7.1 > > I'm also nearly done on: > > - ISPN-4919 Cache templates > > And I'm also playing with: > - ISPN-5012 ClusterRegistry as a service cache provider > - ISPN-5013 Server-side scripting using JSR-223 (javax.script) > > Last week I was in Udine at the NoSQLDay [1] where I presented > Infinispan 7's partition handling to a varied audience which was well > received and I got a lot of questions. > I also met up with Bela, Ugo Landini and Fabio Marinelli where we > discussed RAFT, its implementation in JGroups and the advantages this > could bring to Infinispan. > > Tristan > > [1] http://2014.nosqlday.it > > On 24/11/14 16:59, Dan Berindei wrote: >> For people who couldn't attend, the minutes are here: >> >> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-11-24-15.02.log.html >> >> Cheers >> Dan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > -- Bela Ban, JGroups lead (http://www.jgroups.org) From ttarrant at redhat.com Mon Nov 24 15:10:11 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 24 Nov 2014 21:10:11 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-11-24 In-Reply-To: <54738DA0.5000402@redhat.com> References: <54735ABE.9030101@redhat.com> <54738DA0.5000402@redhat.com> Message-ID: <547390A3.2000007@redhat.com> https://github.com/tristantarrant/infinispan-presentation-splitbrain Tristan On 24/11/14 20:57, Bela Ban wrote: > Do you have the slides of your talk available ? > > On 24/11/14 17:20, Tristan Tarrant wrote: >> Hi all, >> >> my update: >> worked on a bunch of PRs: >> - ISPN-5009 Rebase server to WildFly 8.2 >> - ISPN-4863 Include domain mode in server >> - ISPN-4961 Bump parsers and schemas to 7.1 >> >> I'm also nearly done on: >> >> - ISPN-4919 Cache templates >> >> And I'm also playing with: >> - ISPN-5012 ClusterRegistry as a service cache provider >> - ISPN-5013 Server-side scripting using JSR-223 (javax.script) >> >> Last week I was in Udine at the NoSQLDay [1] where I presented >> Infinispan 7's partition handling to a varied audience which was well >> received and I got a lot of questions. >> I also met up with Bela, Ugo Landini and Fabio Marinelli where we >> discussed RAFT, its implementation in JGroups and the advantages this >> could bring to Infinispan. >> >> Tristan >> >> [1] http://2014.nosqlday.it >> >> On 24/11/14 16:59, Dan Berindei wrote: >>> For people who couldn't attend, the minutes are here: >>> >>> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-11-24-15.02.log.html >>> >>> Cheers >>> Dan >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >> -- -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ryan.tom at viasat.com Tue Nov 25 18:33:52 2014 From: ryan.tom at viasat.com (rtom) Date: Tue, 25 Nov 2014 16:33:52 -0700 (MST) Subject: [infinispan-dev] Failover Implementation with RANDOM_NODE_FAILOVER not working Message-ID: <1416958432987-4030000.post@n3.nabble.com> I'm trying to implement a basic failover policy for a cluster of 2 nodes. The task that I want to run is a DistributedCallable object and I create a DistributedTask for it. Based on the output files, the task is being run correctly on the cluster (I would sometimes see it run on server 1 and other times on server 2 and it completes). I decided to go with the random node failover policy that is provided and when the task is running and I kill the server that is running the task, I don't see the other server picking up the task and running it. I'm not too sure if I'm missing anything when I'm creating and executing the distributedtask: DistributedTaskBuilder taskBuilder = execService.createDistributedTaskBuilder(usageReportingProcess); taskBuilder = taskBuilder.failoverPolicy(DefaultExecutorService.RANDOM_NODE_FAILOVER); DistributedTask distTask = taskBuilder.build(); Future future = execService.submit(distTask); Any insight or tips would be very helpful Thanks! -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Failover-Implementation-with-RANDOM-NODE-FAILOVER-not-working-tp4030000.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From sanne at infinispan.org Wed Nov 26 07:33:48 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 26 Nov 2014 12:33:48 +0000 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: That's not Atomic. How can I implement a counter on this? Say the current version is 5, I read it, and then issue a "replace 5 with 6" command. If I send a couple of such commands in parallel I need a guarantee that only one succeeds, so that the other one can retry and get the counter up to 7. Over Hot Rod I have no locking so I have no alternatives other than atomic replacement commands, that's not unlikely to happen: that's a critical showstopper for users. Sanne On 20 November 2014 at 16:35, Dan Berindei wrote: > I guess you could say this is a regression, this wouldn't have been possible > when the version was part of the value :) > > But I agree an application is very unlikely call replaceWithVersion with the > same value as before, so +1 to document it for now and implement > replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. > > Cheers > Dan > > > On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >> >> I agree with Galder, fixing it is not worth the cost. >> >> Actually, there are often bugs that I'd call rather 'quirks', not >> honoring the ConcurrentMap contract (recently we have discussed with Dan >> [1] and [2]) which are quite complex to fix. Another one that's >> considered not a bug is that a read does not have transactional semantics. >> Galder, where will you document that? I think that special page in >> documentation should accumulate such cases, linked to JIRAs for case >> that eventually we'll resolve them (with that glorious MVCC). And of >> course, link from javadoc to this document (though I am not sure whether >> we can correctly keep that in sync with latest release. Could we have a >> redirection from http://infinispan.org/docs/latest to >> http://infinispan.org/docs/7.0.x/ ? >> >> Radim >> >> [1] https://issues.jboss.org/browse/ISPN-3918 >> [2] https://issues.jboss.org/browse/ISPN-4286 >> >> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >> > Hi all, >> > >> > Re: https://issues.jboss.org/browse/ISPN-4972 >> > >> > Embedded cache provides atomicity of a replace() call passing in the >> > previous value. This limitation might be lifted when we adopt Java 8 and we >> > can pass in a lambda or similar, which can be executed right when the value >> > is compared now, and if it returns true it?s applied. The lambda could >> > compare both value and metadata for example. >> > >> > Anyway, given the current status, I?m considering whether it?s worth >> > fixing this particular issue. Fixing the issue would require adding some >> > kind of locking in the Hot Rod server so that the version retrieval, >> > comparison and replace call, can all happen atomically. >> > >> > This is not ideal, and on top of that, as Radim said, the chances of >> > this happening in real life are limited, or more precisely it?s effects are >> > minimal. In other words, if two concurrent threads call replace with the >> > same value, the end result is that the new value would be stored, but as a >> > result of the code, both replaces would return true which is not strictly >> > right. >> > >> > I?d rather document this than add unnecessary locking in the Hot Rod >> > server where it deals with the versioned replace call. >> > >> > Thoughts? >> > -- >> > Galder Zamarre?o >> > galder at redhat.com >> > twitter.com/galderz >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Nov 26 09:17:57 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 26 Nov 2014 16:17:57 +0200 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: Sanne, it will work as long as the previous value is not the same as the new value. If multiple threads read value 5 with version 5, and all of them want to replace it with value 6, only one of them will succeed. But if multiple threads read value 5 with version 5, and want to replace it with value *5*, all of them might succeed. Indeed, it's not atomic, but a basic counter will work. And it's all we can do with the actual core cache API (unless we want to go back to including the HotRod version in the value). Cheers Dan On Wed, Nov 26, 2014 at 2:33 PM, Sanne Grinovero wrote: > That's not Atomic. How can I implement a counter on this? > > Say the current version is 5, I read it, and then issue a "replace 5 > with 6" command. > If I send a couple of such commands in parallel I need a guarantee > that only one succeeds, so that the other one can retry and get the > counter up to 7. > > Over Hot Rod I have no locking so I have no alternatives other than > atomic replacement commands, that's not unlikely to happen: that's a > critical showstopper for users. > > Sanne > > > On 20 November 2014 at 16:35, Dan Berindei wrote: >> I guess you could say this is a regression, this wouldn't have been possible >> when the version was part of the value :) >> >> But I agree an application is very unlikely call replaceWithVersion with the >> same value as before, so +1 to document it for now and implement >> replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. >> >> Cheers >> Dan >> >> >> On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >>> >>> I agree with Galder, fixing it is not worth the cost. >>> >>> Actually, there are often bugs that I'd call rather 'quirks', not >>> honoring the ConcurrentMap contract (recently we have discussed with Dan >>> [1] and [2]) which are quite complex to fix. Another one that's >>> considered not a bug is that a read does not have transactional semantics. >>> Galder, where will you document that? I think that special page in >>> documentation should accumulate such cases, linked to JIRAs for case >>> that eventually we'll resolve them (with that glorious MVCC). And of >>> course, link from javadoc to this document (though I am not sure whether >>> we can correctly keep that in sync with latest release. Could we have a >>> redirection from http://infinispan.org/docs/latest to >>> http://infinispan.org/docs/7.0.x/ ? >>> >>> Radim >>> >>> [1] https://issues.jboss.org/browse/ISPN-3918 >>> [2] https://issues.jboss.org/browse/ISPN-4286 >>> >>> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >>> > Hi all, >>> > >>> > Re: https://issues.jboss.org/browse/ISPN-4972 >>> > >>> > Embedded cache provides atomicity of a replace() call passing in the >>> > previous value. This limitation might be lifted when we adopt Java 8 and we >>> > can pass in a lambda or similar, which can be executed right when the value >>> > is compared now, and if it returns true it?s applied. The lambda could >>> > compare both value and metadata for example. >>> > >>> > Anyway, given the current status, I?m considering whether it?s worth >>> > fixing this particular issue. Fixing the issue would require adding some >>> > kind of locking in the Hot Rod server so that the version retrieval, >>> > comparison and replace call, can all happen atomically. >>> > >>> > This is not ideal, and on top of that, as Radim said, the chances of >>> > this happening in real life are limited, or more precisely it?s effects are >>> > minimal. In other words, if two concurrent threads call replace with the >>> > same value, the end result is that the new value would be stored, but as a >>> > result of the code, both replaces would return true which is not strictly >>> > right. >>> > >>> > I?d rather document this than add unnecessary locking in the Hot Rod >>> > server where it deals with the versioned replace call. >>> > >>> > Thoughts? >>> > -- >>> > Galder Zamarre?o >>> > galder at redhat.com >>> > twitter.com/galderz >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Wed Nov 26 10:43:11 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 26 Nov 2014 15:43:11 +0000 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: On 26 November 2014 at 14:17, Dan Berindei wrote: > Sanne, it will work as long as the previous value is not the same as > the new value. > > If multiple threads read value 5 with version 5, and all of them want > to replace it with value 6, only one of them will succeed. Ok I see I might be confusing value and versions. I hope :) > But if multiple threads read value 5 with version 5, and want to > replace it with value *5*, all of them might succeed. This paragraph is confusing me more. What "value" are you referring to at the third "5"? Is it even legal to replace an entry with a new value but not incrementing its version? Thanks! Sanne > > Indeed, it's not atomic, but a basic counter will work. And it's all > we can do with the actual core cache API (unless we want to go back to > including the HotRod version in the value). > > Cheers > Dan > > > On Wed, Nov 26, 2014 at 2:33 PM, Sanne Grinovero wrote: >> That's not Atomic. How can I implement a counter on this? >> >> Say the current version is 5, I read it, and then issue a "replace 5 >> with 6" command. >> If I send a couple of such commands in parallel I need a guarantee >> that only one succeeds, so that the other one can retry and get the >> counter up to 7. >> >> Over Hot Rod I have no locking so I have no alternatives other than >> atomic replacement commands, that's not unlikely to happen: that's a >> critical showstopper for users. >> >> Sanne >> >> >> On 20 November 2014 at 16:35, Dan Berindei wrote: >>> I guess you could say this is a regression, this wouldn't have been possible >>> when the version was part of the value :) >>> >>> But I agree an application is very unlikely call replaceWithVersion with the >>> same value as before, so +1 to document it for now and implement >>> replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. >>> >>> Cheers >>> Dan >>> >>> >>> On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >>>> >>>> I agree with Galder, fixing it is not worth the cost. >>>> >>>> Actually, there are often bugs that I'd call rather 'quirks', not >>>> honoring the ConcurrentMap contract (recently we have discussed with Dan >>>> [1] and [2]) which are quite complex to fix. Another one that's >>>> considered not a bug is that a read does not have transactional semantics. >>>> Galder, where will you document that? I think that special page in >>>> documentation should accumulate such cases, linked to JIRAs for case >>>> that eventually we'll resolve them (with that glorious MVCC). And of >>>> course, link from javadoc to this document (though I am not sure whether >>>> we can correctly keep that in sync with latest release. Could we have a >>>> redirection from http://infinispan.org/docs/latest to >>>> http://infinispan.org/docs/7.0.x/ ? >>>> >>>> Radim >>>> >>>> [1] https://issues.jboss.org/browse/ISPN-3918 >>>> [2] https://issues.jboss.org/browse/ISPN-4286 >>>> >>>> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >>>> > Hi all, >>>> > >>>> > Re: https://issues.jboss.org/browse/ISPN-4972 >>>> > >>>> > Embedded cache provides atomicity of a replace() call passing in the >>>> > previous value. This limitation might be lifted when we adopt Java 8 and we >>>> > can pass in a lambda or similar, which can be executed right when the value >>>> > is compared now, and if it returns true it?s applied. The lambda could >>>> > compare both value and metadata for example. >>>> > >>>> > Anyway, given the current status, I?m considering whether it?s worth >>>> > fixing this particular issue. Fixing the issue would require adding some >>>> > kind of locking in the Hot Rod server so that the version retrieval, >>>> > comparison and replace call, can all happen atomically. >>>> > >>>> > This is not ideal, and on top of that, as Radim said, the chances of >>>> > this happening in real life are limited, or more precisely it?s effects are >>>> > minimal. In other words, if two concurrent threads call replace with the >>>> > same value, the end result is that the new value would be stored, but as a >>>> > result of the code, both replaces would return true which is not strictly >>>> > right. >>>> > >>>> > I?d rather document this than add unnecessary locking in the Hot Rod >>>> > server where it deals with the versioned replace call. >>>> > >>>> > Thoughts? >>>> > -- >>>> > Galder Zamarre?o >>>> > galder at redhat.com >>>> > twitter.com/galderz >>>> > >>>> > >>>> > _______________________________________________ >>>> > infinispan-dev mailing list >>>> > infinispan-dev at lists.jboss.org >>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> -- >>>> Radim Vansa >>>> JBoss DataGrid QA >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Wed Nov 26 10:54:40 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 26 Nov 2014 16:54:40 +0100 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: <5475F7C0.9070806@redhat.com> It depends on the side-effects that replacing something with the same value has: listeners, cachestores, state transfer, etc. In general I'd say: no that's not what I want. Tristan On 26/11/14 16:43, Sanne Grinovero wrote: > On 26 November 2014 at 14:17, Dan Berindei wrote: >> Sanne, it will work as long as the previous value is not the same as >> the new value. >> >> If multiple threads read value 5 with version 5, and all of them want >> to replace it with value 6, only one of them will succeed. > Ok I see I might be confusing value and versions. I hope :) > >> But if multiple threads read value 5 with version 5, and want to >> replace it with value *5*, all of them might succeed. > This paragraph is confusing me more. What "value" are you referring to > at the third "5"? Is it even legal to replace an entry with a new > value but not incrementing its version? > > Thanks! > Sanne > >> Indeed, it's not atomic, but a basic counter will work. And it's all >> we can do with the actual core cache API (unless we want to go back to >> including the HotRod version in the value). >> >> Cheers >> Dan >> >> >> On Wed, Nov 26, 2014 at 2:33 PM, Sanne Grinovero wrote: >>> That's not Atomic. How can I implement a counter on this? >>> >>> Say the current version is 5, I read it, and then issue a "replace 5 >>> with 6" command. >>> If I send a couple of such commands in parallel I need a guarantee >>> that only one succeeds, so that the other one can retry and get the >>> counter up to 7. >>> >>> Over Hot Rod I have no locking so I have no alternatives other than >>> atomic replacement commands, that's not unlikely to happen: that's a >>> critical showstopper for users. >>> >>> Sanne >>> >>> >>> On 20 November 2014 at 16:35, Dan Berindei wrote: >>>> I guess you could say this is a regression, this wouldn't have been possible >>>> when the version was part of the value :) >>>> >>>> But I agree an application is very unlikely call replaceWithVersion with the >>>> same value as before, so +1 to document it for now and implement >>>> replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. >>>> >>>> Cheers >>>> Dan >>>> >>>> >>>> On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >>>>> I agree with Galder, fixing it is not worth the cost. >>>>> >>>>> Actually, there are often bugs that I'd call rather 'quirks', not >>>>> honoring the ConcurrentMap contract (recently we have discussed with Dan >>>>> [1] and [2]) which are quite complex to fix. Another one that's >>>>> considered not a bug is that a read does not have transactional semantics. >>>>> Galder, where will you document that? I think that special page in >>>>> documentation should accumulate such cases, linked to JIRAs for case >>>>> that eventually we'll resolve them (with that glorious MVCC). And of >>>>> course, link from javadoc to this document (though I am not sure whether >>>>> we can correctly keep that in sync with latest release. Could we have a >>>>> redirection from http://infinispan.org/docs/latest to >>>>> http://infinispan.org/docs/7.0.x/ ? >>>>> >>>>> Radim >>>>> >>>>> [1] https://issues.jboss.org/browse/ISPN-3918 >>>>> [2] https://issues.jboss.org/browse/ISPN-4286 >>>>> >>>>> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >>>>>> Hi all, >>>>>> >>>>>> Re: https://issues.jboss.org/browse/ISPN-4972 >>>>>> >>>>>> Embedded cache provides atomicity of a replace() call passing in the >>>>>> previous value. This limitation might be lifted when we adopt Java 8 and we >>>>>> can pass in a lambda or similar, which can be executed right when the value >>>>>> is compared now, and if it returns true it?s applied. The lambda could >>>>>> compare both value and metadata for example. >>>>>> >>>>>> Anyway, given the current status, I?m considering whether it?s worth >>>>>> fixing this particular issue. Fixing the issue would require adding some >>>>>> kind of locking in the Hot Rod server so that the version retrieval, >>>>>> comparison and replace call, can all happen atomically. >>>>>> >>>>>> This is not ideal, and on top of that, as Radim said, the chances of >>>>>> this happening in real life are limited, or more precisely it?s effects are >>>>>> minimal. In other words, if two concurrent threads call replace with the >>>>>> same value, the end result is that the new value would be stored, but as a >>>>>> result of the code, both replaces would return true which is not strictly >>>>>> right. >>>>>> >>>>>> I?d rather document this than add unnecessary locking in the Hot Rod >>>>>> server where it deals with the versioned replace call. >>>>>> >>>>>> Thoughts? >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> galder at redhat.com >>>>>> twitter.com/galderz >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss DataGrid QA >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From andreas.kruthoff at nexustelecom.com Wed Nov 26 11:04:17 2014 From: andreas.kruthoff at nexustelecom.com (Andreas Kruthoff) Date: Wed, 26 Nov 2014 17:04:17 +0100 Subject: [infinispan-dev] Caused by: java.lang.OutOfMemoryError: unable to create new native thread Message-ID: <5475FA01.9090604@nexustelecom.com> Hi infinispan-dev I'm running 2 processes with 2 distributed caches each, standard jgroups-tcp configutation. Both caches have a local dat file which is loaded during startup, passivation is true. Each cache contains ~20Mio. entries. I'm writing with async put, peak is over 10'000 entries per second. It performs well. cache.getAdvancedCache() .withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD) .putIfAbsentAsync(Long.valueOf(a), Long.valueOf(b)); As soon as I launch a 3rd process to join the 2 caches, I'm getting the following exception (see below). Does anyone know what I need to tune. It looks like the OS doesn't offer enough resources, or am I wrong? The server has plenty of RAM and CPU's. I'm launching without -Xmx, but with -XX:+UseG1GC. Any help is much appreciated -andreas Exception in thread "main" org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() on object of type JGroupsTransport at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:243) at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:573) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539) at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416) at ch.nexustelecom.lbd.engine.ImsiCache.init(ImsiCache.java:49) at ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120) at ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169) at ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196) at ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83) Caused by: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() on object of type JGroupsTransport at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170) at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638) at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627) at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530) at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:221) ... 8 more Caused by: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at org.jgroups.protocols.FD_SOCK$ServerSocketHandler.start(FD_SOCK.java:1006) at org.jgroups.protocols.FD_SOCK$ServerSocketHandler.(FD_SOCK.java:999) at org.jgroups.protocols.FD_SOCK.init(FD_SOCK.java:188) at org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:860) at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:481) at org.jgroups.JChannel.init(JChannel.java:848) at org.jgroups.JChannel.(JChannel.java:159) at org.jgroups.JChannel.(JChannel.java:129) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:381) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:286) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannelAndRPCDispatcher(JGroupsTransport.java:330) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:189) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) ... 13 more This email and any attachment may contain confidential information which is intended for use only by the addressee(s) named above. If you received this email by mistake, please notify the sender immediately, and delete the email from your system. You are prohibited from copying, disseminating or otherwise using the email or any attachment. From sanne at infinispan.org Wed Nov 26 11:47:00 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 26 Nov 2014 16:47:00 +0000 Subject: [infinispan-dev] Caused by: java.lang.OutOfMemoryError: unable to create new native thread In-Reply-To: <5475FA01.9090604@nexustelecom.com> References: <5475FA01.9090604@nexustelecom.com> Message-ID: Hi, my guess is that you're running Linux or OSX? You might need to reconfigure your OS to allow running more threads, we have a note about that here: http://infinispan.org/docs/7.0.x/contributing/contributing.html#_running_the_tests Sanne On 26 November 2014 at 16:04, Andreas Kruthoff wrote: > Hi infinispan-dev > > > I'm running 2 processes with 2 distributed caches each, standard > jgroups-tcp configutation. Both caches have a local dat file which is > loaded during startup, passivation is true. Each cache contains ~20Mio. > entries. > > I'm writing with async put, peak is over 10'000 entries per second. It > performs well. > > cache.getAdvancedCache() > .withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD) > .putIfAbsentAsync(Long.valueOf(a), Long.valueOf(b)); > > As soon as I launch a 3rd process to join the 2 caches, I'm getting the > following exception (see below). > > Does anyone know what I need to tune. It looks like the OS doesn't offer > enough resources, or am I wrong? The server has plenty of RAM and CPU's. > I'm launching without -Xmx, but with -XX:+UseG1GC. > > > Any help is much appreciated > > -andreas > > > Exception in thread "main" > org.infinispan.manager.EmbeddedCacheManagerStartupException: > org.infinispan.commons.CacheException: Unable to invoke method public > void org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() > on object of type JGroupsTransport > at > org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:243) > at > org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:573) > at > org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539) > at > org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416) > at ch.nexustelecom.lbd.engine.ImsiCache.init(ImsiCache.java:49) > at > ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120) > at > ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169) > at > ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196) > at > ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83) > Caused by: org.infinispan.commons.CacheException: Unable to invoke > method public void > org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() on > object of type JGroupsTransport > at > org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170) > at > org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) > at > org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638) > at > org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627) > at > org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530) > at > org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:221) > ... 8 more > Caused by: java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at > org.jgroups.protocols.FD_SOCK$ServerSocketHandler.start(FD_SOCK.java:1006) > at > org.jgroups.protocols.FD_SOCK$ServerSocketHandler.(FD_SOCK.java:999) > at org.jgroups.protocols.FD_SOCK.init(FD_SOCK.java:188) > at > org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:860) > at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:481) > at org.jgroups.JChannel.init(JChannel.java:848) > at org.jgroups.JChannel.(JChannel.java:159) > at org.jgroups.JChannel.(JChannel.java:129) > at > org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:381) > at > org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:286) > at > org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannelAndRPCDispatcher(JGroupsTransport.java:330) > at > org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:189) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) > ... 13 more > > This email and any attachment may contain confidential information which is intended for use only by the addressee(s) named above. If you received this email by mistake, please notify the sender immediately, and delete the email from your system. You are prohibited from copying, disseminating or otherwise using the email or any attachment. > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Nov 26 12:28:22 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 26 Nov 2014 19:28:22 +0200 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: <5475F7C0.9070806@redhat.com> References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> <5475F7C0.9070806@redhat.com> Message-ID: On Wed, Nov 26, 2014 at 5:54 PM, Tristan Tarrant wrote: > It depends on the side-effects that replacing something with the same > value has: listeners, cachestores, state transfer, etc. > In general I'd say: no that's not what I want. Listeners work with the embedded API, which can only replace based on values. So I don't think that's a problem. > > Tristan > > On 26/11/14 16:43, Sanne Grinovero wrote: >> On 26 November 2014 at 14:17, Dan Berindei wrote: >>> Sanne, it will work as long as the previous value is not the same as >>> the new value. >>> >>> If multiple threads read value 5 with version 5, and all of them want >>> to replace it with value 6, only one of them will succeed. >> Ok I see I might be confusing value and versions. I hope :) >> >>> But if multiple threads read value 5 with version 5, and want to >>> replace it with value *5*, all of them might succeed. >> This paragraph is confusing me more. What "value" are you referring to >> at the third "5"? Is it even legal to replace an entry with a new >> value but not incrementing its version? I see you haven't been reading the HotRod API docs recently :D The HotRod server is the one who increments the version, the client can only supply the *expected* version. So "version 5" is the initial version, "value 5" is the initial value, and the last "value 5" is the new value. >> >> Thanks! >> Sanne >> >>> Indeed, it's not atomic, but a basic counter will work. And it's all >>> we can do with the actual core cache API (unless we want to go back to >>> including the HotRod version in the value). >>> >>> Cheers >>> Dan >>> >>> >>> On Wed, Nov 26, 2014 at 2:33 PM, Sanne Grinovero wrote: >>>> That's not Atomic. How can I implement a counter on this? >>>> >>>> Say the current version is 5, I read it, and then issue a "replace 5 >>>> with 6" command. >>>> If I send a couple of such commands in parallel I need a guarantee >>>> that only one succeeds, so that the other one can retry and get the >>>> counter up to 7. >>>> >>>> Over Hot Rod I have no locking so I have no alternatives other than >>>> atomic replacement commands, that's not unlikely to happen: that's a >>>> critical showstopper for users. >>>> >>>> Sanne >>>> >>>> >>>> On 20 November 2014 at 16:35, Dan Berindei wrote: >>>>> I guess you could say this is a regression, this wouldn't have been possible >>>>> when the version was part of the value :) >>>>> >>>>> But I agree an application is very unlikely call replaceWithVersion with the >>>>> same value as before, so +1 to document it for now and implement >>>>> replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. >>>>> >>>>> Cheers >>>>> Dan >>>>> >>>>> >>>>> On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >>>>>> I agree with Galder, fixing it is not worth the cost. >>>>>> >>>>>> Actually, there are often bugs that I'd call rather 'quirks', not >>>>>> honoring the ConcurrentMap contract (recently we have discussed with Dan >>>>>> [1] and [2]) which are quite complex to fix. Another one that's >>>>>> considered not a bug is that a read does not have transactional semantics. >>>>>> Galder, where will you document that? I think that special page in >>>>>> documentation should accumulate such cases, linked to JIRAs for case >>>>>> that eventually we'll resolve them (with that glorious MVCC). And of >>>>>> course, link from javadoc to this document (though I am not sure whether >>>>>> we can correctly keep that in sync with latest release. Could we have a >>>>>> redirection from http://infinispan.org/docs/latest to >>>>>> http://infinispan.org/docs/7.0.x/ ? >>>>>> >>>>>> Radim >>>>>> >>>>>> [1] https://issues.jboss.org/browse/ISPN-3918 >>>>>> [2] https://issues.jboss.org/browse/ISPN-4286 >>>>>> >>>>>> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >>>>>>> Hi all, >>>>>>> >>>>>>> Re: https://issues.jboss.org/browse/ISPN-4972 >>>>>>> >>>>>>> Embedded cache provides atomicity of a replace() call passing in the >>>>>>> previous value. This limitation might be lifted when we adopt Java 8 and we >>>>>>> can pass in a lambda or similar, which can be executed right when the value >>>>>>> is compared now, and if it returns true it?s applied. The lambda could >>>>>>> compare both value and metadata for example. >>>>>>> >>>>>>> Anyway, given the current status, I?m considering whether it?s worth >>>>>>> fixing this particular issue. Fixing the issue would require adding some >>>>>>> kind of locking in the Hot Rod server so that the version retrieval, >>>>>>> comparison and replace call, can all happen atomically. >>>>>>> >>>>>>> This is not ideal, and on top of that, as Radim said, the chances of >>>>>>> this happening in real life are limited, or more precisely it?s effects are >>>>>>> minimal. In other words, if two concurrent threads call replace with the >>>>>>> same value, the end result is that the new value would be stored, but as a >>>>>>> result of the code, both replaces would return true which is not strictly >>>>>>> right. >>>>>>> >>>>>>> I?d rather document this than add unnecessary locking in the Hot Rod >>>>>>> server where it deals with the versioned replace call. >>>>>>> >>>>>>> Thoughts? >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> galder at redhat.com >>>>>>> twitter.com/galderz >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> -- >>>>>> Radim Vansa >>>>>> JBoss DataGrid QA >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From andreas.kruthoff at nexustelecom.com Thu Nov 27 05:42:29 2014 From: andreas.kruthoff at nexustelecom.com (Andreas Kruthoff) Date: Thu, 27 Nov 2014 11:42:29 +0100 Subject: [infinispan-dev] Caused by: java.lang.OutOfMemoryError: unable to create new native thread In-Reply-To: References: <5475FA01.9090604@nexustelecom.com> Message-ID: <54770015.5050608@nexustelecom.com> RedHat Linux. I've changed the parameters as suggested in the article. And it helped, thank you! I've to observe the system a bit longer, but it looks good so far. I _needed_ to run the caches with mode="ASYNC", as I got timeout problems with jgroups mode=GET_ALL... when launching all 3 processes. I didn't see how to change that within the jgroups configuration, but I switched to ASYNC in Infinispan, and the timeout messages were gone. thx! -andreas On 11/26/2014 05:47 PM, Sanne Grinovero wrote: > Hi, > my guess is that you're running Linux or OSX? > > You might need to reconfigure your OS to allow running more threads, > we have a note about that here: > http://infinispan.org/docs/7.0.x/contributing/contributing.html#_running_the_tests > > Sanne > > > On 26 November 2014 at 16:04, Andreas Kruthoff > wrote: >> Hi infinispan-dev >> >> >> I'm running 2 processes with 2 distributed caches each, standard >> jgroups-tcp configutation. Both caches have a local dat file which is >> loaded during startup, passivation is true. Each cache contains ~20Mio. >> entries. >> >> I'm writing with async put, peak is over 10'000 entries per second. It >> performs well. >> >> cache.getAdvancedCache() >> .withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD) >> .putIfAbsentAsync(Long.valueOf(a), Long.valueOf(b)); >> >> As soon as I launch a 3rd process to join the 2 caches, I'm getting the >> following exception (see below). >> >> Does anyone know what I need to tune. It looks like the OS doesn't offer >> enough resources, or am I wrong? The server has plenty of RAM and CPU's. >> I'm launching without -Xmx, but with -XX:+UseG1GC. >> >> >> Any help is much appreciated >> >> -andreas >> >> >> Exception in thread "main" >> org.infinispan.manager.EmbeddedCacheManagerStartupException: >> org.infinispan.commons.CacheException: Unable to invoke method public >> void org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() >> on object of type JGroupsTransport >> at >> org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:243) >> at >> org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:573) >> at >> org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539) >> at >> org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416) >> at ch.nexustelecom.lbd.engine.ImsiCache.init(ImsiCache.java:49) >> at >> ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120) >> at >> ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169) >> at >> ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196) >> at >> ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83) >> Caused by: org.infinispan.commons.CacheException: Unable to invoke >> method public void >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.start() on >> object of type JGroupsTransport >> at >> org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170) >> at >> org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) >> at >> org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638) >> at >> org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627) >> at >> org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530) >> at >> org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:221) >> ... 8 more >> Caused by: java.lang.OutOfMemoryError: unable to create new native thread >> at java.lang.Thread.start0(Native Method) >> at java.lang.Thread.start(Thread.java:714) >> at >> org.jgroups.protocols.FD_SOCK$ServerSocketHandler.start(FD_SOCK.java:1006) >> at >> org.jgroups.protocols.FD_SOCK$ServerSocketHandler.(FD_SOCK.java:999) >> at org.jgroups.protocols.FD_SOCK.init(FD_SOCK.java:188) >> at >> org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:860) >> at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:481) >> at org.jgroups.JChannel.init(JChannel.java:848) >> at org.jgroups.JChannel.(JChannel.java:159) >> at org.jgroups.JChannel.(JChannel.java:129) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:381) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:286) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannelAndRPCDispatcher(JGroupsTransport.java:330) >> at >> org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:189) >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >> at >> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >> at >> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >> at java.lang.reflect.Method.invoke(Method.java:606) >> at >> org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) >> ... 13 more >> >> This email and any attachment may contain confidential information which is intended for use only by the addressee(s) named above. If you received this email by mistake, please notify the sender immediately, and delete the email from your system. You are prohibited from copying, disseminating or otherwise using the email or any attachment. >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > This email and any attachment may contain confidential information which is intended for use only by the addressee(s) named above. If you received this email by mistake, please notify the sender immediately, and delete the email from your system. You are prohibited from copying, disseminating or otherwise using the email or any attachment. From galder at redhat.com Thu Nov 27 09:31:10 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 27 Nov 2014 15:31:10 +0100 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: <63D97613-AC14-4FBE-8ADD-58BFF981E4A2@redhat.com> On 26 Nov 2014, at 13:33, Sanne Grinovero wrote: > That's not Atomic. How can I implement a counter on this? > > Say the current version is 5, I read it, and then issue a "replace 5 > with 6" command. > If I send a couple of such commands in parallel I need a guarantee > that only one succeeds, so that the other one can retry and get the > counter up to 7. ^ We support this and it works: https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/test/java/org/infinispan/client/hotrod/ReplaceWithVersionConcurrencyTest.java > Over Hot Rod I have no locking so I have no alternatives other than > atomic replacement commands, that's not unlikely to happen: that's a > critical showstopper for users. > > Sanne > > > On 20 November 2014 at 16:35, Dan Berindei wrote: >> I guess you could say this is a regression, this wouldn't have been possible >> when the version was part of the value :) >> >> But I agree an application is very unlikely call replaceWithVersion with the >> same value as before, so +1 to document it for now and implement >> replaceWithVersion/replaceWithPredicate in the embedded cache for 8.0. >> >> Cheers >> Dan >> >> >> On Thu, Nov 13, 2014 at 3:08 PM, Radim Vansa wrote: >>> >>> I agree with Galder, fixing it is not worth the cost. >>> >>> Actually, there are often bugs that I'd call rather 'quirks', not >>> honoring the ConcurrentMap contract (recently we have discussed with Dan >>> [1] and [2]) which are quite complex to fix. Another one that's >>> considered not a bug is that a read does not have transactional semantics. >>> Galder, where will you document that? I think that special page in >>> documentation should accumulate such cases, linked to JIRAs for case >>> that eventually we'll resolve them (with that glorious MVCC). And of >>> course, link from javadoc to this document (though I am not sure whether >>> we can correctly keep that in sync with latest release. Could we have a >>> redirection from http://infinispan.org/docs/latest to >>> http://infinispan.org/docs/7.0.x/ ? >>> >>> Radim >>> >>> [1] https://issues.jboss.org/browse/ISPN-3918 >>> [2] https://issues.jboss.org/browse/ISPN-4286 >>> >>> On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >>>> Hi all, >>>> >>>> Re: https://issues.jboss.org/browse/ISPN-4972 >>>> >>>> Embedded cache provides atomicity of a replace() call passing in the >>>> previous value. This limitation might be lifted when we adopt Java 8 and we >>>> can pass in a lambda or similar, which can be executed right when the value >>>> is compared now, and if it returns true it?s applied. The lambda could >>>> compare both value and metadata for example. >>>> >>>> Anyway, given the current status, I?m considering whether it?s worth >>>> fixing this particular issue. Fixing the issue would require adding some >>>> kind of locking in the Hot Rod server so that the version retrieval, >>>> comparison and replace call, can all happen atomically. >>>> >>>> This is not ideal, and on top of that, as Radim said, the chances of >>>> this happening in real life are limited, or more precisely it?s effects are >>>> minimal. In other words, if two concurrent threads call replace with the >>>> same value, the end result is that the new value would be stored, but as a >>>> result of the code, both replaces would return true which is not strictly >>>> right. >>>> >>>> I?d rather document this than add unnecessary locking in the Hot Rod >>>> server where it deals with the versioned replace call. >>>> >>>> Thoughts? >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Thu Nov 27 09:58:34 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 27 Nov 2014 15:58:34 +0100 Subject: [infinispan-dev] Fix or document? Concurrent replaceWithVersion w/ same value might all return true - ISPN-4972 In-Reply-To: <5464AD3F.5000805@redhat.com> References: <47BCDD79-2E37-40AD-B994-AFB33AA8322F@redhat.com> <5464AD3F.5000805@redhat.com> Message-ID: For those who were wondering, I?ve not forgotten about this thread, but in the last few days I saw something that made my reconsider this... To be more precise, I saw some code of a user that was trying to keep a counter using Hot Rod. When inspected closely, Will and I realised that the user code had a very subtle bug that meant that ISPN-4972 could happen when the counter was first updated. I?ve just tried the code in place of the `incrementCounter` method in the counter stress test we have in the testsuite [1] and the test fails as expected. You end up with incorrect counter number, e.g. should count to 4000 but ends up with 4001. This is quite dangerous and spotting these kind of issues in the user code can take up a lot of time, so I?m rethinking again what to do about this. I?m not sure yet. I?ll reply back. Cheers, [1] https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/test/java/org/infinispan/client/hotrod/ReplaceWithVersionConcurrencyTest.java On 13 Nov 2014, at 14:08, Radim Vansa wrote: > I agree with Galder, fixing it is not worth the cost. > > Actually, there are often bugs that I'd call rather 'quirks', not > honoring the ConcurrentMap contract (recently we have discussed with Dan > [1] and [2]) which are quite complex to fix. Another one that's > considered not a bug is that a read does not have transactional semantics. > Galder, where will you document that? I think that special page in > documentation should accumulate such cases, linked to JIRAs for case > that eventually we'll resolve them (with that glorious MVCC). And of > course, link from javadoc to this document (though I am not sure whether > we can correctly keep that in sync with latest release. Could we have a > redirection from http://infinispan.org/docs/latest to > http://infinispan.org/docs/7.0.x/ ? > > Radim > > [1] https://issues.jboss.org/browse/ISPN-3918 > [2] https://issues.jboss.org/browse/ISPN-4286 > > On 11/13/2014 01:51 PM, Galder Zamarre?o wrote: >> Hi all, >> >> Re: https://issues.jboss.org/browse/ISPN-4972 >> >> Embedded cache provides atomicity of a replace() call passing in the previous value. This limitation might be lifted when we adopt Java 8 and we can pass in a lambda or similar, which can be executed right when the value is compared now, and if it returns true it?s applied. The lambda could compare both value and metadata for example. >> >> Anyway, given the current status, I?m considering whether it?s worth fixing this particular issue. Fixing the issue would require adding some kind of locking in the Hot Rod server so that the version retrieval, comparison and replace call, can all happen atomically. >> >> This is not ideal, and on top of that, as Radim said, the chances of this happening in real life are limited, or more precisely it?s effects are minimal. In other words, if two concurrent threads call replace with the same value, the end result is that the new value would be stored, but as a result of the code, both replaces would return true which is not strictly right. >> >> I?d rather document this than add unnecessary locking in the Hot Rod server where it deals with the versioned replace call. >> >> Thoughts? >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From pedro at infinispan.org Thu Nov 27 18:35:20 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 27 Nov 2014 23:35:20 +0000 Subject: [infinispan-dev] Infinispan 7.1.0.Alpha1 is out! Message-ID: <5477B538.6030201@infinispan.org> Dear Community, FYI: http://blog.infinispan.org/2014/11/infinispan-710-alpha1-is-out.html Cheers, Pedro Ruivo From rvansa at redhat.com Fri Nov 28 03:57:42 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 28 Nov 2014 09:57:42 +0100 Subject: [infinispan-dev] Infinispan 7.1.0.Alpha1 is out! In-Reply-To: <5477B538.6030201@infinispan.org> References: <5477B538.6030201@infinispan.org> Message-ID: <54783906.8070407@redhat.com> Thanks, Pedro Every time we do a release, lot of issues in JIRA have their Fix Versions just shifted to the next release. Why do we set this field before the issue is actually fixed? The fix version does not denote neither a plan to fix it before certain version, when it can be that easily shifted. Radim On 11/28/2014 12:35 AM, Pedro Ruivo wrote: > Dear Community, > > FYI: http://blog.infinispan.org/2014/11/infinispan-710-alpha1-is-out.html > > Cheers, > Pedro Ruivo > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From pedro at infinispan.org Fri Nov 28 05:10:16 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 28 Nov 2014 10:10:16 +0000 Subject: [infinispan-dev] Infinispan 7.1.0.Alpha1 is out! In-Reply-To: <54783906.8070407@redhat.com> References: <5477B538.6030201@infinispan.org> <54783906.8070407@redhat.com> Message-ID: <54784A08.4050609@infinispan.org> Hi Radim, I use it as a plan to myself (i.e. to prioritize my open JIRA). But I think you are right. Except when we have blocker issues, setting the fix version does not mean anything. Cheers, Pedro On 11/28/2014 08:57 AM, Radim Vansa wrote: > Thanks, Pedro > > Every time we do a release, lot of issues in JIRA have their Fix > Versions just shifted to the next release. Why do we set this field > before the issue is actually fixed? The fix version does not denote > neither a plan to fix it before certain version, when it can be that > easily shifted. > > Radim > > On 11/28/2014 12:35 AM, Pedro Ruivo wrote: >> Dear Community, >> >> FYI: http://blog.infinispan.org/2014/11/infinispan-710-alpha1-is-out.html >> >> Cheers, >> Pedro Ruivo >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From ttarrant at redhat.com Fri Nov 28 10:18:39 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 28 Nov 2014 16:18:39 +0100 Subject: [infinispan-dev] Infinispan 7.1.x: codename proposals In-Reply-To: <546B614E.1090902@redhat.com> References: <546B614E.1090902@redhat.com> Message-ID: <5478924F.6030406@redhat.com> Infinispan users and beer lovers, you can now choose the codename for Infinispan's next release. Head over to: http://goo.gl/forms/pdERBnVwHD You have until Friday, 5th December 2014 at 12:00 GMT to cast your vote for your favourite. Tristan From dan.berindei at gmail.com Fri Nov 28 10:46:24 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 28 Nov 2014 17:46:24 +0200 Subject: [infinispan-dev] Infinispan 7.1.x: codename proposals In-Reply-To: <5478924F.6030406@redhat.com> References: <546B614E.1090902@redhat.com> <5478924F.6030406@redhat.com> Message-ID: Apparently Hoptimus Prime is no longer brewed... Cheers Dan On Fri, Nov 28, 2014 at 5:18 PM, Tristan Tarrant wrote: > Infinispan users and beer lovers, > you can now choose the codename for Infinispan's next release. Head over to: > > http://goo.gl/forms/pdERBnVwHD > > You have until Friday, 5th December 2014 at 12:00 GMT to cast your vote > for your favourite. > > Tristan > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavonalle at gmail.com Fri Nov 28 10:49:52 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Fri, 28 Nov 2014 15:49:52 +0000 Subject: [infinispan-dev] Infinispan 7.1.x: codename proposals In-Reply-To: References: <546B614E.1090902@redhat.com> <5478924F.6030406@redhat.com> Message-ID: In case a substitute is needed, I suggest one nice pale lager from Austria [1] [1] http://en.wikipedia.org/wiki/Fucking_Hell Gustavo On Fri, Nov 28, 2014 at 3:46 PM, Dan Berindei wrote: > Apparently Hoptimus Prime is no longer brewed... > > Cheers > Dan > > On Fri, Nov 28, 2014 at 5:18 PM, Tristan Tarrant wrote: >> Infinispan users and beer lovers, >> you can now choose the codename for Infinispan's next release. Head over to: >> >> http://goo.gl/forms/pdERBnVwHD >> >> You have until Friday, 5th December 2014 at 12:00 GMT to cast your vote >> for your favourite. >> >> Tristan >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev