From ttarrant at redhat.com Mon Jan 5 10:42:15 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 05 Jan 2015 16:42:15 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2015/01/05 Message-ID: <54AAB0D7.8060001@redhat.com> Hi all, first meeting of the year. Minutes: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-01-05-15.05.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Mon Jan 5 11:00:19 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 05 Jan 2015 17:00:19 +0100 Subject: [infinispan-dev] Infinispan 7.0.3.Final released Message-ID: <54AAB513.3020100@redhat.com> Dear all, Infinispan 7.0.3.Final is available. Read all about it here: http://blog.infinispan.org/2015/01/infinispan-703final-released.html -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Tue Jan 6 03:17:54 2015 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 6 Jan 2015 09:17:54 +0100 Subject: [infinispan-dev] Infinispan tutorial In-Reply-To: <545B5DB9.3050405@redhat.com> References: <5450EFF4.6050103@redhat.com> <1627E280-0E25-461F-A967-BD3EFAF56E5C@redhat.com> <545B5DB9.3050405@redhat.com> Message-ID: <025CF13A-2E09-4C40-8FED-13F7DD9565E1@redhat.com> On 06 Nov 2014, at 12:38, Tristan Tarrant wrote: > Thanks Galder, > > - no logging in step-0: that is expected (and why it's called step '0'), > and I will say so in the actual tutorial text > - logging is happening for me, haven't tried with the lower settings > > I have added one more step which makes the cache clustered and I have > updated the tags. > Obviously all of this is done via horrible git force pushing :) ^ I was just thinking about that. If you want to make any changes to each step, say step-0, you have to change that commit and subsequent ones, right? e.g. I wanted to add IntelliJ files to step-0?s .gitignore Cheers, > > Tristan > > On 05/11/14 08:34, Galder Zamarre?o wrote: >> Hi Tristan, >> >> +1 to having a more step-by-step tutorial :) >> >> I?ve tried the tutorial locally and made some notes: >> >> - step-0 is a bit confusing since nothing is logged. However, no logging is not due to not enabling it, but the fact that nothing kicks in until getCache() is called, and that only happens in step-1. >> >> - How do you enable logging? Also, not sure what I need to change in logging.properties to see some logging of Infinispan. For example: how do you enable debug/trace logging? I?ve tried FINER/FINEST too but did not make a difference. Maybe I need a org.infinispan specific level/formatter combination? >> >> - step-4 tag missing. >> >> Great work!! >> >> Cheers, >> >> On 29 Oct 2014, at 14:47, Tristan Tarrant wrote: >> >>> Hi guys, >>> >>> I've been working on how to spruce up our website, docs and code samples. >>> While quickstarts are ok, they come as monolithic blobs which tell you >>> nothing about how you got there. For this reason I believe a >>> step-by-step tutorial approach is better and I've been looking at the >>> AngularJS tutorials [0] as good examples on how to achieve this. >>> I have created a repo [1] on my GitHub user where each commit is a step >>> in the tutorial. I have tagged the commits using 'step-n' so that you >>> can checkout any of the steps and run them: >>> >>> git checkout step-1 >>> mvn clean package exec:java >>> >>> The GitHub web interface can be used to show the diff between steps, so >>> that it can be linked from the docs [2]. 
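(On the force-push point above: reworking an early step and moving the tags would look roughly like this; the step-n tags come from the tutorial repo, everything else is stock git:)

    git rebase -i --root              # mark the step-0 commit as 'edit'
    # amend the commit (e.g. extend .gitignore), then continue the rebase
    git commit --amend
    git rebase --continue
    git tag -f step-0 <rewritten-sha> # repeat for each step-n tag
    git push --force origin master --tags

Every later step-n commit gets rewritten by the rebase, which is why any change to step-0 forces a history rewrite of the whole tutorial.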
>>> >>> Currently I'm not aiming to build a real application (although >>> suggestions are welcome in this sense), but just going through the >>> basics, adding features one by one, etc. >>> >>> Comments are welcome. >>> >>> Tristan >>> >>> --- >>> [0] https://docs.angularjs.org/tutorial/step_00 >>> [1] https://github.com/tristantarrant/infinispan-embedded-tutorial >>> [2] >>> https://github.com/tristantarrant/infinispan-embedded-tutorial/compare/step-0...step-1?diff=unified >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Tue Jan 6 03:21:33 2015 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 6 Jan 2015 09:21:33 +0100 Subject: [infinispan-dev] Infinispan tutorial In-Reply-To: <025CF13A-2E09-4C40-8FED-13F7DD9565E1@redhat.com> References: <5450EFF4.6050103@redhat.com> <1627E280-0E25-461F-A967-BD3EFAF56E5C@redhat.com> <545B5DB9.3050405@redhat.com> <025CF13A-2E09-4C40-8FED-13F7DD9565E1@redhat.com> Message-ID: <69EEF8A2-349D-418A-9840-2EE10DB15D06@redhat.com> On 06 Jan 2015, at 09:17, Galder Zamarre?o wrote: > > On 06 Nov 2014, at 12:38, Tristan Tarrant wrote: > >> Thanks Galder, >> >> - no logging in step-0: that is expected (and why it's called step '0'), >> and I will say so in the actual tutorial text >> - logging is happening for me, haven't tried with the lower settings >> >> I have added one more step which makes the cache clustered and I have >> updated the tags. >> Obviously all of this is done via horrible git force pushing :) > > ^ I was just thinking about that. If you want to make any changes to each step, say step-0, you have to change that commit and subsequent ones, right? > > e.g. I wanted to add IntelliJ files to step-0?s .gitignore Or other adjustments, such as updating dependencies, update README information...etc. > > Cheers, > >> >> Tristan >> >> On 05/11/14 08:34, Galder Zamarre?o wrote: >>> Hi Tristan, >>> >>> +1 to having a more step-by-step tutorial :) >>> >>> I?ve tried the tutorial locally and made some notes: >>> >>> - step-0 is a bit confusing since nothing is logged. However, no logging is not due to not enabling it, but the fact that nothing kicks in until getCache() is called, and that only happens in step-1. >>> >>> - How do you enable logging? Also, not sure what I need to change in logging.properties to see some logging of Infinispan. For example: how do you enable debug/trace logging? I?ve tried FINER/FINEST too but did not make a difference. Maybe I need a org.infinispan specific level/formatter combination? >>> >>> - step-4 tag missing. >>> >>> Great work!! >>> >>> Cheers, >>> >>> On 29 Oct 2014, at 14:47, Tristan Tarrant wrote: >>> >>>> Hi guys, >>>> >>>> I've been working on how to spruce up our website, docs and code samples. >>>> While quickstarts are ok, they come as monolithic blobs which tell you >>>> nothing about how you got there. 
For this reason I believe a >>>> step-by-step tutorial approach is better and I've been looking at the >>>> AngularJS tutorials [0] as good examples on how to achieve this. >>>> I have created a repo [1] on my GitHub user where each commit is a step >>>> in the tutorial. I have tagged the commits using 'step-n' so that you >>>> can checkout any of the steps and run them: >>>> >>>> git checkout step-1 >>>> mvn clean package exec:java >>>> >>>> The GitHub web interface can be used to show the diff between steps, so >>>> that it can be linked from the docs [2]. >>>> >>>> Currently I'm not aiming to build a real application (although >>>> suggestions are welcome in this sense), but just going through the >>>> basics, adding features one by one, etc. >>>> >>>> Comments are welcome. >>>> >>>> Tristan >>>> >>>> --- >>>> [0] https://docs.angularjs.org/tutorial/step_00 >>>> [1] https://github.com/tristantarrant/infinispan-embedded-tutorial >>>> [2] >>>> https://github.com/tristantarrant/infinispan-embedded-tutorial/compare/step-0...step-1?diff=unified >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Tue Jan 6 07:40:45 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 6 Jan 2015 12:40:45 +0000 Subject: [infinispan-dev] Failed Hot Rod tests.. since several weeks Message-ID: Hi all, these tests are failing me regularly since at least November, is someone looking at them? As usual, you might have noticed I stopped sending pull requests since the build fails here. thanks, Sanne Results : Failed tests: MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testAttributeQuery:124 expected:<1> but was:<0> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testEmbeddedAttributeQuery:137 expected:<1> but was:<0> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testProjections:167 expected:<1> but was:<0> Tests run: 865, Failures: 3, Errors: 0, Skipped: 0 [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Infinispan BOM ..................................... SUCCESS [ 0.091 s] [INFO] Infinispan Common Parent ........................... SUCCESS [ 1.019 s] [INFO] Infinispan Checkstyle Rules ........................ SUCCESS [ 2.012 s] [INFO] Infinispan Commons ................................. SUCCESS [ 5.641 s] [INFO] Infinispan Core .................................... SUCCESS [06:59 min] [INFO] Infinispan Extended Statistics ..................... SUCCESS [ 34.332 s] [INFO] Parent pom for server modules ...................... SUCCESS [ 0.075 s] [INFO] Infinispan Server - Core Components ................ SUCCESS [ 12.236 s] [INFO] Infinispan Query DSL API ........................... 
SUCCESS [ 0.735 s] [INFO] Infinispan Object Filtering API .................... SUCCESS [ 1.610 s] [INFO] Parent pom for cachestore modules .................. SUCCESS [ 0.123 s] [INFO] Infinispan JDBC CacheStore ......................... SUCCESS [ 19.649 s] [INFO] Parent pom for the Lucene integration modules ...... SUCCESS [ 0.068 s] [INFO] Infinispan Lucene Directory Implementation ......... SUCCESS [ 9.066 s] [INFO] Infinispan Query API ............................... SUCCESS [ 45.772 s] [INFO] Infinispan Tools ................................... SUCCESS [ 1.343 s] [INFO] Infinispan Remote Query Client ..................... SUCCESS [ 0.457 s] [INFO] Infinispan Remote Query Server ..................... SUCCESS [ 7.949 s] [INFO] Infinispan Tree API ................................ SUCCESS [ 7.558 s] [INFO] Infinispan JPA CacheStore .......................... SUCCESS [ 16.348 s] [INFO] Infinispan Hot Rod Server .......................... SUCCESS [01:14 min] [INFO] Infinispan Hot Rod Client .......................... FAILURE [ 58.719 s] From galder at redhat.com Tue Jan 6 14:25:27 2015 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Tue, 6 Jan 2015 20:25:27 +0100 Subject: [infinispan-dev] Failed Hot Rod tests.. since several weeks In-Reply-To: References: Message-ID: According to Adrian in https://github.com/infinispan/infinispan/pull/3114, WIP... On 06 Jan 2015, at 13:40, Sanne Grinovero wrote: > Hi all, > these tests are failing me regularly since at least November, is > someone looking at them? > As usual, you might have noticed I stopped sending pull requests since > the build fails here. > > thanks, > Sanne > > Results : > > Failed tests: > MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testAttributeQuery:124 > expected:<1> but was:<0> > MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testEmbeddedAttributeQuery:137 > expected:<1> but was:<0> > MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testProjections:167 > expected:<1> but was:<0> > > Tests run: 865, Failures: 3, Errors: 0, Skipped: 0 > > [INFO] ------------------------------------------------------------------------ > [INFO] Reactor Summary: > [INFO] > [INFO] Infinispan BOM ..................................... SUCCESS [ 0.091 s] > [INFO] Infinispan Common Parent ........................... SUCCESS [ 1.019 s] > [INFO] Infinispan Checkstyle Rules ........................ SUCCESS [ 2.012 s] > [INFO] Infinispan Commons ................................. SUCCESS [ 5.641 s] > [INFO] Infinispan Core .................................... SUCCESS [06:59 min] > [INFO] Infinispan Extended Statistics ..................... SUCCESS [ 34.332 s] > [INFO] Parent pom for server modules ...................... SUCCESS [ 0.075 s] > [INFO] Infinispan Server - Core Components ................ SUCCESS [ 12.236 s] > [INFO] Infinispan Query DSL API ........................... SUCCESS [ 0.735 s] > [INFO] Infinispan Object Filtering API .................... SUCCESS [ 1.610 s] > [INFO] Parent pom for cachestore modules .................. SUCCESS [ 0.123 s] > [INFO] Infinispan JDBC CacheStore ......................... SUCCESS [ 19.649 s] > [INFO] Parent pom for the Lucene integration modules ...... SUCCESS [ 0.068 s] > [INFO] Infinispan Lucene Directory Implementation ......... SUCCESS [ 9.066 s] > [INFO] Infinispan Query API ............................... SUCCESS [ 45.772 s] > [INFO] Infinispan Tools ................................... 
SUCCESS [ 1.343 s] > [INFO] Infinispan Remote Query Client ..................... SUCCESS [ 0.457 s] > [INFO] Infinispan Remote Query Server ..................... SUCCESS [ 7.949 s] > [INFO] Infinispan Tree API ................................ SUCCESS [ 7.558 s] > [INFO] Infinispan JPA CacheStore .......................... SUCCESS [ 16.348 s] > [INFO] Infinispan Hot Rod Server .......................... SUCCESS [01:14 min] > [INFO] Infinispan Hot Rod Client .......................... FAILURE [ 58.719 s] > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From isavin at redhat.com Wed Jan 7 05:17:39 2015 From: isavin at redhat.com (Ion Savin) Date: Wed, 07 Jan 2015 12:17:39 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2015/01/05 In-Reply-To: <54AAB0D7.8060001@redhat.com> References: <54AAB0D7.8060001@redhat.com> Message-ID: <54AD07C3.4000607@redhat.com> Hi all, My status for last week: * 1,2 PTO * updated antrun to last to improve the build time ISPN-5110 * uberjar osgi fixes ISPN-5116 * worked on a bundle for the C# client which includes the C++ and C# runtimes WIP HRCPP-186 https://github.com/isavin/dotnet-client/tree/bundle_vc_runtime On 01/05/2015 05:42 PM, Tristan Tarrant wrote: > Hi all, > > first meeting of the year. Minutes: > > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-01-05-15.05.html > > Tristan > From mudokonman at gmail.com Wed Jan 7 08:45:46 2015 From: mudokonman at gmail.com (William Burns) Date: Wed, 7 Jan 2015 08:45:46 -0500 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2015/01/05 In-Reply-To: <54AD07C3.4000607@redhat.com> References: <54AAB0D7.8060001@redhat.com> <54AD07C3.4000607@redhat.com> Message-ID: Hello everyone, I was on PTO and holiday the past couple weeks. On the 22nd and 23rd I worked on: ISPN-5104 ISPN-5088 both of which are in 7.0.3 now too. Also had to do some porting of prod work This week I will start working on: ISPN-5095 ISPN-3023 On Wed, Jan 7, 2015 at 5:17 AM, Ion Savin wrote: > Hi all, > > My status for last week: > * 1,2 PTO > * updated antrun to last to improve the build time ISPN-5110 > * uberjar osgi fixes ISPN-5116 > * worked on a bundle for the C# client which includes the C++ and C# > runtimes WIP HRCPP-186 > https://github.com/isavin/dotnet-client/tree/bundle_vc_runtime > > On 01/05/2015 05:42 PM, Tristan Tarrant wrote: >> Hi all, >> >> first meeting of the year. Minutes: >> >> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-01-05-15.05.html >> >> Tristan >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From tsykora at redhat.com Thu Jan 8 04:45:34 2015 From: tsykora at redhat.com (Tomas Sykora) Date: Thu, 8 Jan 2015 04:45:34 -0500 (EST) Subject: [infinispan-dev] Infinispan Management Console project - JIRA/gh issues? In-Reply-To: <1782346067.4560681.1420709947983.JavaMail.zimbra@redhat.com> Message-ID: <22082817.4574538.1420710334234.JavaMail.zimbra@redhat.com> Greetings all! I know that the team puts hands on more important stuff recently but I want to find out (decide) what tool do we want to use for driving issues for Infinispan Management Console sub-project. 
Currently, I am struggling with the fact that I have some ideas in my mind and I don't have a good place for raising an issue and further discussion. Also I am not aware of other contributors' intentions and I am not sure whether my effort is duplication of someone's work. Can we please decide what tracking tool do we use for Infinispan Management Console so we can start raising feature requests, discussions, issues, etc.? I personally vote for GitHub issues. Thank you very much for any input :) Tomas From ttarrant at redhat.com Thu Jan 8 04:49:42 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 08 Jan 2015 10:49:42 +0100 Subject: [infinispan-dev] Infinispan Management Console project - JIRA/gh issues? In-Reply-To: <22082817.4574538.1420710334234.JavaMail.zimbra@redhat.com> References: <22082817.4574538.1420710334234.JavaMail.zimbra@redhat.com> Message-ID: <54AE52B6.2020600@redhat.com> Jira. The management console will need to follow Infinispan's lifecycle, cross-link to internal issues, etc. We really don't want another tool. Tristan On 08/01/2015 10:45, Tomas Sykora wrote: > Greetings all! > > I know that the team puts hands on more important stuff recently but I want to find out (decide) what tool do we want to use for driving issues for Infinispan Management Console sub-project. > Currently, I am struggling with the fact that I have some ideas in my mind and I don't have a good place for raising an issue and further discussion. Also I am not aware of other contributors' intentions and I am not sure whether my effort is duplication of someone's work. > > Can we please decide what tracking tool do we use for Infinispan Management Console so we can start raising feature requests, discussions, issues, etc.? > > I personally vote for GitHub issues. > > Thank you very much for any input :) > Tomas > > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From tsykora at redhat.com Thu Jan 8 04:56:56 2015 From: tsykora at redhat.com (Tomas Sykora) Date: Thu, 8 Jan 2015 04:56:56 -0500 (EST) Subject: [infinispan-dev] Infinispan Management Console project - JIRA/gh issues? In-Reply-To: <54AE52B6.2020600@redhat.com> References: <22082817.4574538.1420710334234.JavaMail.zimbra@redhat.com> <54AE52B6.2020600@redhat.com> Message-ID: <509509468.4592550.1420711016093.JavaMail.zimbra@redhat.com> Thanks Tristan! Crystal clear now. In that case it has been decided: JIRA will be the tracking tool. Thank you guys. Tom ----- Original Message ----- > From: "Tristan Tarrant" > To: "Tomas Sykora" , "infinispan -Dev List" > Cc: "Vladimir Blagojevic" , "sosic martin" , "matija sosic" > > Sent: Thursday, January 8, 2015 10:49:42 AM > Subject: Re: Infinispan Management Console project - JIRA/gh issues? > > Jira. The management console will need to follow Infinispan's lifecycle, > cross-link to internal issues, etc. We really don't want another tool. > > Tristan > > On 08/01/2015 10:45, Tomas Sykora wrote: > > Greetings all! > > > > I know that the team puts hands on more important stuff recently but I want > > to find out (decide) what tool do we want to use for driving issues for > > Infinispan Management Console sub-project. > > Currently, I am struggling with the fact that I have some ideas in my mind > > and I don't have a good place for raising an issue and further discussion. > > Also I am not aware of other contributors' intentions and I am not sure > > whether my effort is duplication of someone's work. 
> > > > Can we please decide what tracking tool do we use for Infinispan Management > > Console so we can start raising feature requests, discussions, issues, > > etc.? > > > > I personally vote for GitHub issues. > > > > Thank you very much for any input :) > > Tomas > > > > > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > > From sanne at infinispan.org Thu Jan 8 07:35:19 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 8 Jan 2015 12:35:19 +0000 Subject: [infinispan-dev] Failed Hot Rod tests.. since several weeks In-Reply-To: References: Message-ID: Thanks! What about we disable the failing tests if there is no immediate solution? On 6 January 2015 at 19:25, Galder Zamarre?o wrote: > According to Adrian in https://github.com/infinispan/infinispan/pull/3114, WIP... > > On 06 Jan 2015, at 13:40, Sanne Grinovero wrote: > >> Hi all, >> these tests are failing me regularly since at least November, is >> someone looking at them? >> As usual, you might have noticed I stopped sending pull requests since >> the build fails here. >> >> thanks, >> Sanne >> >> Results : >> >> Failed tests: >> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testAttributeQuery:124 >> expected:<1> but was:<0> >> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testEmbeddedAttributeQuery:137 >> expected:<1> but was:<0> >> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testProjections:167 >> expected:<1> but was:<0> >> >> Tests run: 865, Failures: 3, Errors: 0, Skipped: 0 >> >> [INFO] ------------------------------------------------------------------------ >> [INFO] Reactor Summary: >> [INFO] >> [INFO] Infinispan BOM ..................................... SUCCESS [ 0.091 s] >> [INFO] Infinispan Common Parent ........................... SUCCESS [ 1.019 s] >> [INFO] Infinispan Checkstyle Rules ........................ SUCCESS [ 2.012 s] >> [INFO] Infinispan Commons ................................. SUCCESS [ 5.641 s] >> [INFO] Infinispan Core .................................... SUCCESS [06:59 min] >> [INFO] Infinispan Extended Statistics ..................... SUCCESS [ 34.332 s] >> [INFO] Parent pom for server modules ...................... SUCCESS [ 0.075 s] >> [INFO] Infinispan Server - Core Components ................ SUCCESS [ 12.236 s] >> [INFO] Infinispan Query DSL API ........................... SUCCESS [ 0.735 s] >> [INFO] Infinispan Object Filtering API .................... SUCCESS [ 1.610 s] >> [INFO] Parent pom for cachestore modules .................. SUCCESS [ 0.123 s] >> [INFO] Infinispan JDBC CacheStore ......................... SUCCESS [ 19.649 s] >> [INFO] Parent pom for the Lucene integration modules ...... SUCCESS [ 0.068 s] >> [INFO] Infinispan Lucene Directory Implementation ......... SUCCESS [ 9.066 s] >> [INFO] Infinispan Query API ............................... SUCCESS [ 45.772 s] >> [INFO] Infinispan Tools ................................... SUCCESS [ 1.343 s] >> [INFO] Infinispan Remote Query Client ..................... SUCCESS [ 0.457 s] >> [INFO] Infinispan Remote Query Server ..................... SUCCESS [ 7.949 s] >> [INFO] Infinispan Tree API ................................ SUCCESS [ 7.558 s] >> [INFO] Infinispan JPA CacheStore .......................... SUCCESS [ 16.348 s] >> [INFO] Infinispan Hot Rod Server .......................... SUCCESS [01:14 min] >> [INFO] Infinispan Hot Rod Client .......................... 
FAILURE [ 58.719 s] >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From anistor at redhat.com Fri Jan 9 05:54:42 2015 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 09 Jan 2015 12:54:42 +0200 Subject: [infinispan-dev] Failed Hot Rod tests.. since several weeks In-Reply-To: References: Message-ID: <54AFB372.3040803@redhat.com> Hi Sanne, The failure is avoided by modifying the setup to wait until all lucene index related caches are started on all nodes and initial state transfer was performed (as in https://github.com/infinispan/infinispan/pull/3114/files). But this 'fix' may indicate a problem in infinispan-lucene-directory. Maybe Gustavo can have a look? Adrian On 01/08/2015 02:35 PM, Sanne Grinovero wrote: > Thanks! > What about we disable the failing tests if there is no immediate solution? > > On 6 January 2015 at 19:25, Galder Zamarre?o wrote: >> According to Adrian in https://github.com/infinispan/infinispan/pull/3114, WIP... >> >> On 06 Jan 2015, at 13:40, Sanne Grinovero wrote: >> >>> Hi all, >>> these tests are failing me regularly since at least November, is >>> someone looking at them? >>> As usual, you might have noticed I stopped sending pull requests since >>> the build fails here. >>> >>> thanks, >>> Sanne >>> >>> Results : >>> >>> Failed tests: >>> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testAttributeQuery:124 >>> expected:<1> but was:<0> >>> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testEmbeddedAttributeQuery:137 >>> expected:<1> but was:<0> >>> MultiHotRodServerIspnDirReplQueryTest>MultiHotRodServerQueryTest.testProjections:167 >>> expected:<1> but was:<0> >>> >>> Tests run: 865, Failures: 3, Errors: 0, Skipped: 0 >>> >>> [INFO] ------------------------------------------------------------------------ >>> [INFO] Reactor Summary: >>> [INFO] >>> [INFO] Infinispan BOM ..................................... SUCCESS [ 0.091 s] >>> [INFO] Infinispan Common Parent ........................... SUCCESS [ 1.019 s] >>> [INFO] Infinispan Checkstyle Rules ........................ SUCCESS [ 2.012 s] >>> [INFO] Infinispan Commons ................................. SUCCESS [ 5.641 s] >>> [INFO] Infinispan Core .................................... SUCCESS [06:59 min] >>> [INFO] Infinispan Extended Statistics ..................... SUCCESS [ 34.332 s] >>> [INFO] Parent pom for server modules ...................... SUCCESS [ 0.075 s] >>> [INFO] Infinispan Server - Core Components ................ SUCCESS [ 12.236 s] >>> [INFO] Infinispan Query DSL API ........................... SUCCESS [ 0.735 s] >>> [INFO] Infinispan Object Filtering API .................... SUCCESS [ 1.610 s] >>> [INFO] Parent pom for cachestore modules .................. SUCCESS [ 0.123 s] >>> [INFO] Infinispan JDBC CacheStore ......................... SUCCESS [ 19.649 s] >>> [INFO] Parent pom for the Lucene integration modules ...... SUCCESS [ 0.068 s] >>> [INFO] Infinispan Lucene Directory Implementation ......... SUCCESS [ 9.066 s] >>> [INFO] Infinispan Query API ............................... SUCCESS [ 45.772 s] >>> [INFO] Infinispan Tools ................................... 
SUCCESS [ 1.343 s]
>>> [INFO] Infinispan Remote Query Client ..................... SUCCESS [ 0.457 s]
>>> [INFO] Infinispan Remote Query Server ..................... SUCCESS [ 7.949 s]
>>> [INFO] Infinispan Tree API ................................ SUCCESS [ 7.558 s]
>>> [INFO] Infinispan JPA CacheStore .......................... SUCCESS [ 16.348 s]
>>> [INFO] Infinispan Hot Rod Server .......................... SUCCESS [01:14 min]
>>> [INFO] Infinispan Hot Rod Client .......................... FAILURE [ 58.719 s]
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> --
>> Galder Zamarreño
>> galder at redhat.com
>> twitter.com/galderz
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From ttarrant at redhat.com Fri Jan 9 18:29:58 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Sat, 10 Jan 2015 00:29:58 +0100
Subject: [infinispan-dev] Infinispan 7.1.0.Beta1 released
Message-ID: <54B06476.7060905@redhat.com>

Dear Infinispan community,

Infinispan 7.1.0.Beta1 is available. Read more at:
http://blog.infinispan.org/2015/01/infinispan-710beta1.html

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From sanne at infinispan.org Thu Jan 15 12:08:56 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Thu, 15 Jan 2015 17:08:56 +0000
Subject: [infinispan-dev] Indexing deadlock (solution suggestion)
In-Reply-To: <5491C450.80208@redhat.com>
References: <5491C450.80208@redhat.com>
Message-ID:

Thanks Radim,
so the problem is that the master node is exhausting the OOB threads
because they are stuck waiting for the index writes? Assuming I
understood, I agree: we should do as you suggested.

Sorry for asking the obvious, but I've missed the problem description;
I only heard that you've found a deadlock. Is there a JIRA related to
this conversation?

Sanne

On 17 December 2014 at 17:58, Radim Vansa wrote:
> Hi,
>
> what I was suggesting in the call in order to get rid of the indexing:
> Currently we're doing this:
>
> 1. thread on primary owner executes the write and sends indexing request
> (synchronous RPC) to index master, waits for the response
> 2. remote/OOB thread on indexing master enqueues the indexing request
> and waits
> 3. indexing thread (on indexing master) retrieves the request, processes
> it and wakes up the waiting remote/OOB thread
> 4. remote/OOB thread sends RPC response
> 5. primary owner receives the RPC response (in OOB thread, inside
> JGroups) and wakes up the thread sending the RPC
>
> What I suggest is that:
> 1. thread on primary owner executes the write and sends indexing request
> as asynchronous RPC (single message) to index master, and waits on a
> custom synchronization primitive
> 2. remote/OOB thread on indexing master enqueues the indexing request
> and returns back to the threadpool
> 3. indexing thread (on indexing master) retrieves the request, processes
> it and sends asynchronous RPC (again single message) to the primary owner
> 4. primary owner (in OOB thread) receives the message and wakes up
> thread waiting on the custom synchronization primitive (in Infinispan)
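(A rough sketch of steps 1-4 above in plain Java, using a CompletableFuture as the "custom synchronization primitive". Transport, Address, IndexRequest and IndexAck are illustrative stand-ins, not actual Infinispan or JGroups API; the point is only that no thread stays blocked inside an RPC:)

    import java.util.concurrent.*;

    class AsyncIndexingSketch {
       // stand-ins for the messaging layer and message types
       interface Address {}
       interface Transport { void sendAsync(Address target, Object message); }
       static class IndexRequest {
          final long id; final Address origin;
          IndexRequest(long id, Address origin) { this.id = id; this.origin = origin; }
       }
       static class IndexAck {
          final long id;
          IndexAck(long id) { this.id = id; }
       }

       final ConcurrentMap<Long, CompletableFuture<Void>> pendingAcks =
             new ConcurrentHashMap<Long, CompletableFuture<Void>>();
       final BlockingQueue<IndexRequest> indexQueue =
             new LinkedBlockingQueue<IndexRequest>();
       Transport transport;

       // step 1: primary owner sends a single async message and parks the
       // writing thread on a future, not inside a synchronous RPC
       void updateIndex(Address indexMaster, IndexRequest req) throws Exception {
          CompletableFuture<Void> ack = new CompletableFuture<Void>();
          pendingAcks.put(req.id, ack);
          transport.sendAsync(indexMaster, req);
          ack.get(10, TimeUnit.SECONDS);
       }

       // step 2: on the index master the OOB thread only enqueues,
       // then immediately returns to the thread pool
       void onIndexRequest(IndexRequest req) {
          indexQueue.add(req);
       }

       // step 3: the indexing thread drains the queue, applies the change
       // to the index, then acks with another single async message
       void indexingLoop() throws InterruptedException {
          while (true) {
             IndexRequest req = indexQueue.take();
             applyToIndex(req);
             transport.sendAsync(req.origin, new IndexAck(req.id));
          }
       }

       // step 4: back on the primary owner, the OOB thread receiving the
       // ack completes the future and wakes the writer (a real version
       // would also handle missing/duplicate acks and timeouts)
       void onIndexAck(IndexAck ack) {
          pendingAcks.remove(ack.id).complete(null);
       }

       void applyToIndex(IndexRequest req) { /* Lucene write */ }
    }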
>
> My 2c
>
> Radim
>
> --
> Radim Vansa
> JBoss DataGrid QA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From rory.odonnell at oracle.com Fri Jan 16 08:28:45 2015
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Fri, 16 Jan 2015 13:28:45 +0000
Subject: [infinispan-dev] Early Access builds for JDK 9 b45, JDK 8u40 b21 & JDK 7u80 b04 are available on java.net
Message-ID: <54B9120D.7030808@oracle.com>

Hi Galder,

Now that JDK 9 Early Access build images are modular [1], there is a fresh Early Access build for JDK 9 b45 available on java.net. The summary of changes is listed here.

In addition, there are new Early Access builds for the ongoing update releases. The Early Access build for JDK 8u40 b21 is available on java.net, with the summary of changes listed here. Finally, the Early Access build for JDK 7u80 b04 is available on java.net, with the summary of changes listed here.

As we enter the later phases of development for JDK 7u80 & JDK 8u40, please log any show stoppers as soon as possible.

Rgds, Rory

[1] http://mreinhold.org/blog/jigsaw-modular-images

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland

From manik at infinispan.org Fri Jan 16 20:43:45 2015
From: manik at infinispan.org (Manik Surtani)
Date: Fri, 16 Jan 2015 17:43:45 -0800
Subject: [infinispan-dev] Distribution-aware ClusterLoader
Message-ID:

Greetings. :-)

I chatted with a few of you offline about this earlier; does anyone have thoughts on a ClusterLoader implementation that, instead of broadcasting to the entire cluster, unicasts to the owners of a given key by inspecting the DistributionManager? Thinking of using this as a lazy/on-demand form of state transfer in a distributed cluster, so joiners don't trigger big chunks of data moving around eagerly.

- M

From ttarrant at redhat.com Mon Jan 19 11:24:13 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 19 Jan 2015 17:24:13 +0100
Subject: [infinispan-dev] Weekly Infinispan IRC Meeting 2015/01/19
Message-ID: <54BD2FAD.6060202@redhat.com>

Hi all,

the logs of this week's meeting:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-01-19-15.03.log.html

Cheers!

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From sanne at infinispan.org Mon Jan 19 19:48:46 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Tue, 20 Jan 2015 00:48:46 +0000
Subject: [infinispan-dev] Experiment: Affinity Tagging
Message-ID:

Hi all,

I'm playing with an idea for some internal components to be able to "tag" the key for an entry to be stored into Infinispan in a very specific segment of the CH.
Conceptually the plan is easy to understand by looking at this patch: https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f Hacking the change into ReplicatedConsistentHash is quite barbaric, please bear with me as I couldn't figure a better way to be able to experiment with this. I'll probably want to extend this class, but then I'm not sure how to plug it in? What would you all think of such a "tagging" mechanism? # Why I didn't use the KeyAffinityService - I need to use my own keys, not the meaningless stuff produced by the service - the extensive usage of Random in there doesn't seem suited for a performance critical path # Why I didn't use the Grouping API - I need to pick the specific storage segment, not just co-locate with a different key The general goal is to make it possible to "tag" all entries of an index, and have an independent index for each segment of the CH. So the resulting effect would be, that when a primary owner for any key K is making an update, and this triggers an index update, that update is A) going to happen on the same node -> no need to forwarding to a "master indexing node" B) each such writes on the index happen on the same node which is primary owner for all the written entries of the index. There are two additional nice consequences: - there would be no need to perform a reliable "master election": ownership singleton is already guaranteed by Infinispan's essential logic, so it would reuse that - the propagation of writes on the index from the primary owner (which is the local node by definition) to backup owners could use REPL_ASYNC for most practical use cases. So net result is that the overhead for indexing is reduced to 0 (ZERO) blocking RPCs if the async repl is acceptable, or to only one blocking roundtrip if very strict consistency is required. Thanks, Sanne From anistor at redhat.com Mon Jan 19 21:08:12 2015 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 20 Jan 2015 04:08:12 +0200 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: References: Message-ID: <54BDB88C.8030709@redhat.com> Hi Sanne, An alternative approach would be to implement an org.infinispan.commons.hash.Hash which delegates to the stock implementation for all keys except those that need to be assigned to a specific segment. It should return the desired segment for those. Adrian On 01/20/2015 02:48 AM, Sanne Grinovero wrote: > Hi all, > > I'm playing with an idea for some internal components to be able to > "tag" the key for an entry to be stored into Infinispan in a very > specific segment of the CH. > > Conceptually the plan is easy to understand by looking at this patch: > > https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f > > Hacking the change into ReplicatedConsistentHash is quite barbaric, > please bear with me as I couldn't figure a better way to be able to > experiment with this. I'll probably want to extend this class, but > then I'm not sure how to plug it in? > > What would you all think of such a "tagging" mechanism? 
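(A sketch of the delegating Hash suggested above, against the 7.x commons API from memory: MurmurHash3 is the stock implementation, and AffinityTaggedKey is a hypothetical marker interface. Note the catch Dan raises in his reply: Hash returns a hash code, not a segment, so "returning the desired segment" really means forging a hash code that the ConsistentHash will happen to map there.)

    import org.infinispan.commons.hash.Hash;
    import org.infinispan.commons.hash.MurmurHash3;

    // Sketch only: FORGED_HASH presumes knowledge of the hash-to-segment
    // mapping, which the Hash contract does not actually expose.
    public final class TaggingHash implements Hash {
       private static final int FORGED_HASH = 0; // would land in the target segment
       private final Hash delegate = MurmurHash3.getInstance();

       @Override
       public int hash(byte[] payload) { return delegate.hash(payload); }

       @Override
       public int hash(int hashcode) { return delegate.hash(hashcode); }

       @Override
       public int hash(Object o) {
          return o instanceof AffinityTaggedKey ? FORGED_HASH : delegate.hash(o);
       }

       interface AffinityTaggedKey {} // hypothetical marker for tagged keys
    }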
> > # Why I didn't use the KeyAffinityService > - I need to use my own keys, not the meaningless stuff produced by the service > - the extensive usage of Random in there doesn't seem suited for a > performance critical path > > # Why I didn't use the Grouping API > - I need to pick the specific storage segment, not just co-locate with > a different key > > > The general goal is to make it possible to "tag" all entries of an > index, and have an independent index for each segment of the CH. So > the resulting effect would be, that when a primary owner for any key K > is making an update, and this triggers an index update, that update is > A) going to happen on the same node -> no need to forwarding to a > "master indexing node" > B) each such writes on the index happen on the same node which is > primary owner for all the written entries of the index. > > There are two additional nice consequences: > - there would be no need to perform a reliable "master election": > ownership singleton is already guaranteed by Infinispan's essential > logic, so it would reuse that > - the propagation of writes on the index from the primary owner > (which is the local node by definition) to backup owners could use > REPL_ASYNC for most practical use cases. > > So net result is that the overhead for indexing is reduced to 0 (ZERO) > blocking RPCs if the async repl is acceptable, or to only one blocking > roundtrip if very strict consistency is required. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Tue Jan 20 08:32:37 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 20 Jan 2015 15:32:37 +0200 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: <54BDB88C.8030709@redhat.com> References: <54BDB88C.8030709@redhat.com> Message-ID: Adrian, I don't think that will work. The Hash doesn't know the number of segments so it can't tell where a particular key will land - even assuming knowledge about how the ConsistentHash will map hash codes to segments. However, I'm all for replacing the current Hash interface with another interface that maps keys directly to segments. Cheers Dan On Tue, Jan 20, 2015 at 4:08 AM, Adrian Nistor wrote: > Hi Sanne, > > An alternative approach would be to implement an > org.infinispan.commons.hash.Hash which delegates to the stock > implementation for all keys except those that need to be assigned to a > specific segment. It should return the desired segment for those. > > Adrian > > > On 01/20/2015 02:48 AM, Sanne Grinovero wrote: > > Hi all, > > > > I'm playing with an idea for some internal components to be able to > > "tag" the key for an entry to be stored into Infinispan in a very > > specific segment of the CH. > > > > Conceptually the plan is easy to understand by looking at this patch: > > > > > https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f > > > > Hacking the change into ReplicatedConsistentHash is quite barbaric, > > please bear with me as I couldn't figure a better way to be able to > > experiment with this. I'll probably want to extend this class, but > > then I'm not sure how to plug it in? > You would need to create your own ConsistentHashFactory, possibly extending ReplicatedConsistentHashFactory. 
You can then plug the factory in with

configurationBuilder.clustering().hash().consistentHashFactory(yourFactory)

However, this isn't a really good idea, because then you need a different implementation for distributed mode, and then another implementation for topology-aware clusters (with rack/machine/site ids). And your users would also need to select the proper factory for each cache.

> > What would you all think of such a "tagging" mechanism?
> >
> > # Why I didn't use the KeyAffinityService
> > - I need to use my own keys, not the meaningless stuff produced by the service
> > - the extensive usage of Random in there doesn't seem suited for a
> > performance critical path

You can plug in your own KeyGenerator to generate keys, and maybe replace the Random with a static/thread-local counter.

> >
> > # Why I didn't use the Grouping API
> > - I need to pick the specific storage segment, not just co-locate with
> > a different key

This is actually a drawback of the KeyAffinityService more than Grouping. With grouping, you can actually follow the KeyAffinityService strategy and generate random strings until you get one in the proper segment, and then tag all your keys with that exact string.

> > The general goal is to make it possible to "tag" all entries of an
> > index, and have an independent index for each segment of the CH. So
> > the resulting effect would be, that when a primary owner for any key K
> > is making an update, and this triggers an index update, that update is
> > A) going to happen on the same node -> no need to forwarding to a
> > "master indexing node"
> > B) each such writes on the index happen on the same node which is
> > primary owner for all the written entries of the index.
> >
> > There are two additional nice consequences:
> > - there would be no need to perform a reliable "master election":
> > ownership singleton is already guaranteed by Infinispan's essential
> > logic, so it would reuse that
> > - the propagation of writes on the index from the primary owner
> > (which is the local node by definition) to backup owners could use
> > REPL_ASYNC for most practical use cases.
> >
> > So net result is that the overhead for indexing is reduced to 0 (ZERO)
> > blocking RPCs if the async repl is acceptable, or to only one blocking
> > roundtrip if very strict consistency is required.

Sounds very interesting, but I think there may be a problem with your strategy: Infinispan doesn't guarantee you that one of the nodes executing the CommitCommand is the primary owner at the time the CommitCommand is executed. You could have something like this:

Cluster [A, B, C, D], key k, owners(k) = [A, B] (A is primary)
C initiates a tx that executes put(k, v)
Tx prepare succeeds on A and B
A crashes, but the other nodes don't detect the crash yet
Tx commit succeeds on B, who still thinks it is a backup owner
B detects the crash, installs a new cluster view consistent hash with owners(k) = [B]

> >
> > Thanks,
> > Sanne
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
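(The probing loop Dan describes, for either the KeyAffinityService or the grouping trick above, condenses to a few lines; getSegment(Object) is on the 7.x ConsistentHash interface, the rest is illustrative:)

    import java.util.concurrent.ThreadLocalRandom;
    import org.infinispan.distribution.ch.ConsistentHash;

    class SegmentGroups {
       // Probe random candidates until one hashes to the desired segment;
       // the winning string is then reused as the group for every key of
       // that index, so the probing cost is paid once per index, not per key.
       static String groupForSegment(ConsistentHash ch, int targetSegment) {
          ThreadLocalRandom rnd = ThreadLocalRandom.current();
          String candidate;
          do {
             candidate = "g" + Long.toHexString(rnd.nextLong());
          } while (ch.getSegment(candidate) != targetSegment);
          return candidate;
       }
    }

On average the loop needs on the order of numSegments attempts before it hits the target segment, which is also the nondeterminism Sanne pushes back on later in the thread.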
From anistor at redhat.com Tue Jan 20 09:33:36 2015
From: anistor at redhat.com (Adrian Nistor)
Date: Tue, 20 Jan 2015 16:33:36 +0200
Subject: [infinispan-dev] Experiment: Affinity Tagging
In-Reply-To:
References: <54BDB88C.8030709@redhat.com>
Message-ID: <54BE6740.4090108@redhat.com>

None of the existing Hash implementations can, but this new one will be special. It could have access to the config (and CH) of the user's cache so it will know the number of segments. The index cache will have to use the same type of CH as the data cache in order to keep ownership in sync and the Hash implementation will be the special delegating Hash.

There is a twist though: the above only works with SyncConsistentHash. Because when two caches with identical topology use DefaultConsistentHash they could still not be in sync in terms of key ownership. Only SyncConsistentHash ensures that.

Knowledge of how CH currently maps hashcodes to segments is assumed already. I've spotted at least 3 places in code where it happens, so it is time to document it or move this responsibility to the Hash interface as you suggest to make it really pluggable.

Adrian

On 01/20/2015 03:32 PM, Dan Berindei wrote:
> Adrian, I don't think that will work. The Hash doesn't know the number
> of segments so it can't tell where a particular key will land - even
> assuming knowledge about how the ConsistentHash will map hash codes to
> segments.
>
> However, I'm all for replacing the current Hash interface with another
> interface that maps keys directly to segments.
>
> Cheers
> Dan
>
> On Tue, Jan 20, 2015 at 4:08 AM, Adrian Nistor wrote:
>
> Hi Sanne,
>
> An alternative approach would be to implement an
> org.infinispan.commons.hash.Hash which delegates to the stock
> implementation for all keys except those that need to be assigned to a
> specific segment. It should return the desired segment for those.
>
> Adrian
>
> On 01/20/2015 02:48 AM, Sanne Grinovero wrote:
> > Hi all,
> >
> > I'm playing with an idea for some internal components to be able to
> > "tag" the key for an entry to be stored into Infinispan in a very
> > specific segment of the CH.
> >
> > Conceptually the plan is easy to understand by looking at this patch:
> >
> > https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f
> >
> > Hacking the change into ReplicatedConsistentHash is quite barbaric,
> > please bear with me as I couldn't figure a better way to be able to
> > experiment with this. I'll probably want to extend this class, but
> > then I'm not sure how to plug it in?
>
> You would need to create your own ConsistentHashFactory, possibly
> extending ReplicatedConsistentHashFactory. You can then plug the
> factory in with
>
> configurationBuilder.clustering().hash().consistentHashFactory(yourFactory)
>
> However, this isn't a really good idea, because then you need a
> different implementation for distributed mode, and then another
> implementation for topology-aware clusters (with rack/machine/site
> ids). And your users would also need to select the proper factory for
> each cache.
>
> > What would you all think of such a "tagging" mechanism?
> > # Why I didn't use the KeyAffinityService
> > - I need to use my own keys, not the meaningless stuff produced by the service
> > - the extensive usage of Random in there doesn't seem suited for a
> > performance critical path
>
> You can plug in your own KeyGenerator to generate keys, and maybe
> replace the Random with a static/thread-local counter.
>
> > # Why I didn't use the Grouping API
> > - I need to pick the specific storage segment, not just co-locate with
> > a different key
>
> This is actually a drawback of the KeyAffinityService more than
> Grouping. With grouping, you can actually follow the
> KeyAffinityService strategy and generate random strings until you get
> one in the proper segment, and then tag all your keys with that exact
> string.
>
> > The general goal is to make it possible to "tag" all entries of an
> > index, and have an independent index for each segment of the CH. So
> > the resulting effect would be, that when a primary owner for any key K
> > is making an update, and this triggers an index update, that update is
> > A) going to happen on the same node -> no need to forwarding to a
> > "master indexing node"
> > B) each such writes on the index happen on the same node which is
> > primary owner for all the written entries of the index.
> >
> > There are two additional nice consequences:
> > - there would be no need to perform a reliable "master election":
> > ownership singleton is already guaranteed by Infinispan's essential
> > logic, so it would reuse that
> > - the propagation of writes on the index from the primary owner
> > (which is the local node by definition) to backup owners could use
> > REPL_ASYNC for most practical use cases.
> >
> > So net result is that the overhead for indexing is reduced to 0 (ZERO)
> > blocking RPCs if the async repl is acceptable, or to only one blocking
> > roundtrip if very strict consistency is required.
>
> Sounds very interesting, but I think there may be a problem with your
> strategy: Infinispan doesn't guarantee you that one of the nodes
> executing the CommitCommand is the primary owner at the time the
> CommitCommand is executed. You could have something like this:
>
> Cluster [A, B, C, D], key k, owners(k) = [A, B] (A is primary)
> C initiates a tx that executes put(k, v)
> Tx prepare succeeds on A and B
> A crashes, but the other nodes don't detect the crash yet
> Tx commit succeeds on B, who still thinks it is a backup owner
> B detects the crash, installs a new cluster view consistent hash with
> owners(k) = [B]
>
> > Thanks,
> > Sanne
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

_______________________________________________
infinispan-dev mailing list
infinispan-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
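(To make the ownership-alignment requirement from Adrian's message concrete: a sketch of giving the data cache and the index cache the same factory, using the builder call Dan quoted earlier; the package of SyncConsistentHashFactory is recalled as org.infinispan.distribution.ch.impl in 7.x, so treat that as an assumption:)

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.distribution.ch.impl.SyncConsistentHashFactory;
    import org.infinispan.manager.DefaultCacheManager;

    ConfigurationBuilder cfg = new ConfigurationBuilder();
    cfg.clustering().cacheMode(CacheMode.DIST_SYNC)
       .hash().numSegments(60)
       .consistentHashFactory(new SyncConsistentHashFactory());
    Configuration shared = cfg.build();

    DefaultCacheManager manager = new DefaultCacheManager();
    // identical topology + SyncConsistentHashFactory => identical ownership,
    // so segment N of the index cache lives with segment N of the data cache
    manager.defineConfiguration("data", shared);
    manager.defineConfiguration("index", shared);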
From sanne at infinispan.org Tue Jan 20 19:54:34 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 21 Jan 2015 00:54:34 +0000
Subject: [infinispan-dev] Experiment: Affinity Tagging
In-Reply-To: <54BDB88C.8030709@redhat.com>
References: <54BDB88C.8030709@redhat.com>
Message-ID:

Thanks Adrian,
right I initially expected doing something like that, but the Hash
contract doesn't expose/leak details about segments. I guess I could
forge a specific hash result but that seems fragile, while my needs
are very simple as I already know the segment id: for a given indexing
back-end it's a constant.

On 20 January 2015 at 02:08, Adrian Nistor wrote:
> Hi Sanne,
>
> An alternative approach would be to implement an
> org.infinispan.commons.hash.Hash which delegates to the stock
> implementation for all keys except those that need to be assigned to a
> specific segment. It should return the desired segment for those.
>
> Adrian
>
> On 01/20/2015 02:48 AM, Sanne Grinovero wrote:
>> Hi all,
>>
>> I'm playing with an idea for some internal components to be able to
>> "tag" the key for an entry to be stored into Infinispan in a very
>> specific segment of the CH.
>>
>> Conceptually the plan is easy to understand by looking at this patch:
>>
>> https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f
>>
>> Hacking the change into ReplicatedConsistentHash is quite barbaric,
>> please bear with me as I couldn't figure a better way to be able to
>> experiment with this. I'll probably want to extend this class, but
>> then I'm not sure how to plug it in?
>>
>> What would you all think of such a "tagging" mechanism?
>>
>> # Why I didn't use the KeyAffinityService
>> - I need to use my own keys, not the meaningless stuff produced by the service
>> - the extensive usage of Random in there doesn't seem suited for a
>> performance critical path
>>
>> # Why I didn't use the Grouping API
>> - I need to pick the specific storage segment, not just co-locate with
>> a different key
>>
>> The general goal is to make it possible to "tag" all entries of an
>> index, and have an independent index for each segment of the CH. So
>> the resulting effect would be, that when a primary owner for any key K
>> is making an update, and this triggers an index update, that update is
>> A) going to happen on the same node -> no need to forwarding to a
>> "master indexing node"
>> B) each such writes on the index happen on the same node which is
>> primary owner for all the written entries of the index.
>>
>> There are two additional nice consequences:
>> - there would be no need to perform a reliable "master election":
>> ownership singleton is already guaranteed by Infinispan's essential
>> logic, so it would reuse that
>> - the propagation of writes on the index from the primary owner
>> (which is the local node by definition) to backup owners could use
>> REPL_ASYNC for most practical use cases.
>>
>> So net result is that the overhead for indexing is reduced to 0 (ZERO)
>> blocking RPCs if the async repl is acceptable, or to only one blocking
>> roundtrip if very strict consistency is required.
>> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue Jan 20 20:12:14 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 21 Jan 2015 01:12:14 +0000 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: References: <54BDB88C.8030709@redhat.com> Message-ID: On 20 January 2015 at 13:32, Dan Berindei wrote: > Adrian, I don't think that will work. The Hash doesn't know the number of > segments so it can't tell where a particular key will land - even assuming > knowledge about how the ConsistentHash will map hash codes to segments. > > However, I'm all for replacing the current Hash interface with another > interface that maps keys directly to segments. Right, I'll eventually need a different abstraction, or a change to the Hash interface. However my need seems highly specialistic, I'm not sure if there would be a general interest into such a capability for other Hash implementors? > > Cheers > Dan Never ever sign if you have more interesting comments below, I only saw them by chance ;) > On Tue, Jan 20, 2015 at 4:08 AM, Adrian Nistor wrote: >> >> Hi Sanne, >> >> An alternative approach would be to implement an >> org.infinispan.commons.hash.Hash which delegates to the stock >> implementation for all keys except those that need to be assigned to a >> specific segment. It should return the desired segment for those. >> >> Adrian >> >> >> On 01/20/2015 02:48 AM, Sanne Grinovero wrote: >> > Hi all, >> > >> > I'm playing with an idea for some internal components to be able to >> > "tag" the key for an entry to be stored into Infinispan in a very >> > specific segment of the CH. >> > >> > Conceptually the plan is easy to understand by looking at this patch: >> > >> > >> > https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f >> > >> > Hacking the change into ReplicatedConsistentHash is quite barbaric, >> > please bear with me as I couldn't figure a better way to be able to >> > experiment with this. I'll probably want to extend this class, but >> > then I'm not sure how to plug it in? > > > You would need to create your own ConsistentHashFactory, possibly extending > ReplicatedConsistentHashFactory. You can then plug the factory in with > > configurationBuilder.clustering().hash().consistentHashFactory(yourFactory) > > However, this isn't a really good idea, because then you need a different > implementation for distributed mode, and then another implementation for > topology-aware clusters (with rack/machine/site ids). And your users would > also need to select the proper factory for each cache. Right, this is the complexity I was facing. I'll stick to my hack solution for our little POC... but ultimately I'll need to plug this in if none of the solutions below work out. >> > What would you all think of such a "tagging" mechanism? 
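(Ahead of the KeyGenerator exchange just below: plugging a custom generator into the affinity service looks roughly like this; the factory method matches the 7.x org.infinispan.affinity API as far as recalled, so double-check the signature. The counter replaces the stock Random-based generation:)

    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicLong;
    import org.infinispan.affinity.KeyAffinityService;
    import org.infinispan.affinity.KeyAffinityServiceFactory;
    import org.infinispan.affinity.KeyGenerator;

    // 'cache' is an assumed, already-running clustered Cache<String, ?>
    KeyGenerator<String> generator = new KeyGenerator<String>() {
       private final AtomicLong counter = new AtomicLong();
       @Override
       public String getKey() {
          return "k" + counter.incrementAndGet();
       }
    };
    KeyAffinityService<String> service = KeyAffinityServiceFactory.newKeyAffinityService(
          cache, Executors.newSingleThreadExecutor(), generator, 100);
    // a key guaranteed to be owned by the local node
    String local = service.getKeyForAddress(cache.getCacheManager().getAddress());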
>> > >> > # Why I didn't use the KeyAffinityService >> > - I need to use my own keys, not the meaningless stuff produced by the >> > service >> > - the extensive usage of Random in there doesn't seem suited for a >> > performance critical path > > You can plug in your own KeyGenerator to generate keys, and maybe replace > the Random with a static/thread-local counter. Thanks for the tip on KeyGenerator, I'll investigate that :) But I'll never add a static/threadlocal.. I'd rather commit my ugly code from the commit linked above. > >> >> >> >> > >> > # Why I didn't use the Grouping API >> > - I need to pick the specific storage segment, not just co-locate with >> > a different key >> > > > > This is actually a drawback of the KeyAffinityService more than Grouping. > With grouping, you can actually follow the KeyAffinityService strategy and > generate random strings until you get one in the proper segment, and then > tag all your keys with that exact string. Interesting! A bit convoluted, but it could spare me from plugging in the HashFactory. BTW I really dislike this idea of the KeyAffinityService generating random keys until it works out.. I guess it might not be too bad if you want to pick a node out of ten, but I'm working at segment granularity and with bad luck it could take a long time. It would be nice to have a function like this which would return in a deterministic amount of time, like simply an inverse Hash. >> > The general goal is to make it possible to "tag" all entries of an >> > index, and have an independent index for each segment of the CH. So >> > the resulting effect would be, that when a primary owner for any key K >> > is making an update, and this triggers an index update, that update is >> > A) going to happen on the same node -> no need to forward to a >> > "master indexing node" >> > B) each such write on the index happens on the same node, which is >> > the primary owner for all the written entries of the index. >> > >> > There are two additional nice consequences: >> > - there would be no need to perform a reliable "master election": >> > ownership singleton is already guaranteed by Infinispan's essential >> > logic, so it would reuse that >> > - the propagation of writes on the index from the primary owner >> > (which is the local node by definition) to backup owners could use >> > REPL_ASYNC for most practical use cases. >> > >> > So net result is that the overhead for indexing is reduced to 0 (ZERO) >> > blocking RPCs if the async repl is acceptable, or to only one blocking >> > roundtrip if very strict consistency is required. > > Sounds very interesting, but I think there may be a problem with your > strategy: Infinispan doesn't guarantee you that one of the nodes executing > the CommitCommand is the primary owner at the time the CommitCommand is > executed. You could have something like this: Index storage is generally used without transactions, but even assuming we had transactions enabled, or that the "vanilla" put operation suffered from a similar timing issue (as we'd determine this node to be owner higher up in the search stack, before the actual put reaches the Infinispan core API), it's not a problem as we'd simply lose locality of the write: it would be slightly less efficient, but still write the "right thing". The intention is to maximise locality with the hints, but failing locality on writes I just expect it to be handled as any other put operation on Infinispan..
with a couple more RPCs, with any race condition regarding topology changes being handled at a lower level. Which is exactly why I'm now working on top of container segments: they are a stable building block and allow us not to worry about how you'll actually distribute data or re-route update commands. Thanks all for the suggestions! Sanne > > Cluster [A, B, C, D], key k, owners(k) = [A, B] (A is primary) > C initiates a tx that executes put(k, v) > Tx prepare succeeds on A and B > A crashes, but the other nodes don't detect the crash yet > Tx commit succeeds on B, who still thinks it is a backup owner > B detects the crash, installs a new cluster view consistent hash with > owners(k) = [B] > > >> >> > >> > Thanks, >> > Sanne From sanne at infinispan.org Tue Jan 20 20:28:43 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 21 Jan 2015 01:28:43 +0000 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: <54BE6740.4090108@redhat.com> References: <54BDB88C.8030709@redhat.com> <54BE6740.4090108@redhat.com> Message-ID: On 20 January 2015 at 14:33, Adrian Nistor wrote: > None of the existing Hash implementations can, but this new one will be > special. It could have access to the config (and CH) of the user's cache so > it will know the number of segments. The index cache will have to use the > same type of CH as the data cache in order to keep ownership in sync, and the > Hash implementation will be the special delegating Hash. > > There is a twist though: the above only works with SyncConsistentHash. > Because when two caches with identical topology use DefaultConsistentHash > they could still not be in sync in terms of key ownership. Only > SyncConsistentHash ensures that. Many thanks for pointing out the need for a SyncConsistentHashFactory, I was not aware of the limitations described in the javadoc. Side note: I'm surprised by the limitation of the normal ConsistentHashFactory as described in the javadoc of SyncConsistentHashFactory.. is it because our normal implementation is actually not "Consistent"? Or is it referring to additional properties of our Hash function? Cheers, Sanne > > Knowledge of how the CH currently maps hash codes to segments is assumed already. > I've spotted at least 3 places in code where it happens, so it is time to > document it or move this responsibility to the Hash interface as you suggest > to make it really pluggable. > > Adrian > > [...] _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
From rvansa at redhat.com Wed Jan 21 03:48:37 2015 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 21 Jan 2015 09:48:37 +0100 Subject: [infinispan-dev] Consistency guarantees after merge *without* partition handling Message-ID: <54BF67E5.9090706@redhat.com> Hi, one question on the forum [1] led me to wonder whether we offer any guarantees at all after a merge *without* partition handling. Common sense would suggest that we could have inconsistency on entries overwritten in one of the partitions, but an entry should not be lost completely. Do we have at least some unit tests trying to confirm this? I have to admit that I was not testing this scenario with RadarGun. Radim [1] https://developer.jboss.org/message/916484?et=watches.email.thread#916484 -- Radim Vansa JBoss DataGrid QA From dan.berindei at gmail.com Wed Jan 21 10:27:02 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 21 Jan 2015 17:27:02 +0200 Subject: [infinispan-dev] Consistency guarantees after merge *without* partition handling In-Reply-To: <54BF67E5.9090706@redhat.com> References: <54BF67E5.9090706@redhat.com> Message-ID: No, we do not guarantee that an entry will not be lost. When the split happens, either partition could end up with 0 owners for a particular segment, and it will then allocate new owners for that segment. When the merge happens, that partition's consistent hash may be chosen as the merge consistent hash, and keys on the other partition's owners (which could still have the value) will be ignored. Cheers Dan On Wed, Jan 21, 2015 at 10:48 AM, Radim Vansa wrote: > Hi, > > one question on the forum [1] led me to wonder whether we offer any > guarantees at all after a merge *without* partition handling. Common > sense would suggest that we could have inconsistency on entries > overwritten in one of the partitions, but an entry should not be lost > completely. > > Do we have at least some unit tests trying to confirm this? I have to > admit that I was not testing this scenario with RadarGun.
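For reference, the partition handling support added in 7.0 is the switch that trades availability for exactly this guarantee. A minimal sketch of turning it on, assuming the 7.x programmatic API (the XML equivalent is the partition-handling element):

   import org.infinispan.configuration.cache.CacheMode;
   import org.infinispan.configuration.cache.ConfigurationBuilder;

   ConfigurationBuilder builder = new ConfigurationBuilder();
   builder.clustering()
          .cacheMode(CacheMode.DIST_SYNC)
          // minority partitions enter degraded mode instead of
          // re-allocating owners, so entries cannot be silently
          // dropped when the partitions merge
          .partitionHandling().enabled(true);

With it disabled (the default), both partitions stay writable and the merge picks one partition's consistent hash, which is the lost-entry scenario described above.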
> > Radim > > [1] > https://developer.jboss.org/message/916484?et=watches.email.thread#916484 > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150121/739b1550/attachment.html From slaskawi at redhat.com Thu Jan 22 03:52:31 2015 From: slaskawi at redhat.com (Sebastian Łaskawiec) Date: Thu, 22 Jan 2015 09:52:31 +0100 Subject: [infinispan-dev] allowDuplicateDomains set to true for CDI? Message-ID: <54C0BA4F.2040508@redhat.com> Hey! When I was moving the CDI quickstart to a new repository (from infinispan-quickstart to jboss-jdg-quickstarts), I noticed that some of our users will probably try to put the Infinispan library inside WAR/lib and run it locally with the CDI Extension. This will end up with a JmxDomainConflictException on WildFly (because the domain for "DefaultCacheManager" will probably already be registered). The workaround is simple: the user has to provide his own EmbeddedCacheManager producer with the allowDuplicateDomains option turned on. In my opinion this option should be enabled by default for the CDI Extension. If you agree with me, I'll make the necessary changes. Any thoughts? Best regards Sebastian From dan.berindei at gmail.com Thu Jan 22 04:41:07 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 22 Jan 2015 11:41:07 +0200 Subject: [infinispan-dev] allowDuplicateDomains set to true for CDI? In-Reply-To: <54C0BA4F.2040508@redhat.com> References: <54C0BA4F.2040508@redhat.com> Message-ID: Couldn't WildFly and the CDI provider both set a different globalJmxStatistics().cacheManagerName() for the cache managers they create? Cheers Dan On Thu, Jan 22, 2015 at 10:52 AM, Sebastian Łaskawiec wrote: > Hey! > > When I was moving the CDI quickstart to a new repository (from > infinispan-quickstart to jboss-jdg-quickstarts), I noticed that some of our > users will probably try to put the Infinispan library inside WAR/lib and run > it locally with the CDI Extension. > > This will end up with a JmxDomainConflictException on WildFly (because the > domain for "DefaultCacheManager" will probably already be registered). > > The workaround is simple: the user has to provide his own > EmbeddedCacheManager producer with the allowDuplicateDomains option turned on. > > In my opinion this option should be enabled by default for the CDI Extension. > If you agree with me, I'll make the necessary changes. > > Any thoughts? > > Best regards > Sebastian > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
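For completeness, the producer-based workaround looks roughly like this; the class and manager names below are made up, but the builder methods are the ones mentioned in this thread:

   import javax.enterprise.context.ApplicationScoped;
   import javax.enterprise.inject.Produces;
   import org.infinispan.configuration.global.GlobalConfigurationBuilder;
   import org.infinispan.manager.DefaultCacheManager;
   import org.infinispan.manager.EmbeddedCacheManager;

   public class CustomCacheManagerProducer {
      @Produces
      @ApplicationScoped
      public EmbeddedCacheManager defaultCacheManager() {
         GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
         global.globalJmxStatistics()
               .cacheManagerName("MyAppCacheManager")  // made-up, per-application name
               .allowDuplicateDomains(true);           // tolerate a second deployment
         return new DefaultCacheManager(global.build());
      }
   }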
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150122/ae2fc0b8/attachment.html From dan.berindei at gmail.com Thu Jan 22 10:04:12 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 22 Jan 2015 17:04:12 +0200 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: References: <54BDB88C.8030709@redhat.com> Message-ID: On Wed, Jan 21, 2015 at 3:12 AM, Sanne Grinovero wrote: > On 20 January 2015 at 13:32, Dan Berindei wrote: > > Adrian, I don't think that will work. The Hash doesn't know the number of > > segments so it can't tell where a particular key will land - even > assuming > > knowledge about how the ConsistentHash will map hash codes to segments. > > > > However, I'm all for replacing the current Hash interface with another > > interface that maps keys directly to segments. > > Right, I'll eventually need a different abstraction, or a change to > the Hash interface. However my need seems highly specialistic, I'm not > sure if there would be a general interest into such a capability for > other Hash implementors? > If it's better than KeyAffinityService and/or grouping, I'm pretty sure there will be other takers. > Never ever sign if you have more interesting comments below, I only > saw them by chance ;) > Oops, I only intended to reply to Adrian, and I forgot to remove the signature when I went further... I removed it now :) > > > On Tue, Jan 20, 2015 at 4:08 AM, Adrian Nistor > wrote: > >> > >> Hi Sanne, > >> > >> An alternative approach would be to implement an > >> org.infinispan.commons.hash.Hash which delegates to the stock > >> implementation for all keys except those that need to be assigned to a > >> specific segment. It should return the desired segment for those. > >> > >> Adrian > >> > >> > >> On 01/20/2015 02:48 AM, Sanne Grinovero wrote: > >> > Hi all, > >> > > >> > I'm playing with an idea for some internal components to be able to > >> > "tag" the key for an entry to be stored into Infinispan in a very > >> > specific segment of the CH. > >> > > >> > Conceptually the plan is easy to understand by looking at this patch: > >> > > >> > > >> > > https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f > >> > > >> > Hacking the change into ReplicatedConsistentHash is quite barbaric, > >> > please bear with me as I couldn't figure a better way to be able to > >> > experiment with this. I'll probably want to extend this class, but > >> > then I'm not sure how to plug it in? > > > > > > You would need to create your own ConsistentHashFactory, possibly > extending > > ReplicatedConsistentHashFactory. You can then plug the factory in with > > > > > configurationBuilder.clustering().hash().consistentHashFactory(yourFactory) > > > > However, this isn't a really good idea, because then you need a different > > implementation for distributed mode, and then another implementation for > > topology-aware clusters (with rack/machine/site ids). And your users > would > > also need to select the proper factory for each cache. > > Right, this is the complexity I was facing. I'll stick to my hack > solution for our little POC... but ultimately I'll need to plug this > in if none of the solutions below work out. > In the meantime Adrian clarified that you're creating the configuration yourself, so there's no problem picking the proper consistent hash factory - as long as the user didn't plug in his own for the indexed cache. > > >> > What would you all think of such a "tagging" mechanism? 
> >> > > >> > # Why I didn't use the KeyAffinityService > >> > - I need to use my own keys, not the meaningless stuff produced by the > >> > service > >> > - the extensive usage of Random in there doesn't seem suited for a > >> > performance critical path > > > > > > You can plug in your own KeyGenerator to generate keys, and maybe replace > > the Random with a static/thread-local counter. > > Thanks for the tip on KeyGenerator, I'll investigate on that :) > But I'll never add a static/threadlocal.. I'd rather commit my ugly > code from the commit linked above. > Indeed, the fact that you need to generate a different key every time makes KeyAffinityService harder to work with. > > > >> > >> > >> > >> > > >> > # Why I didn't use the Grouping API > >> > - I need to pick the specific storage segment, not just co-locate with > >> > a different key > >> > > > > > > > This is actually a drawback of the KeyAffinityService more than Grouping. > > With grouping, you can actually follow the KeyAffinityService strategy > and > > generate random strings until you get one in the proper segment, and then > > tag all your keys with that exact string. > > Interesting! A bit convoluted but could spare me to plug in the > HashFactory. > BTW I really dislike this idea of the KeyAffinityService to generate > random keys until it works out.. I guess it might not be too bad if > you want to pick a node out of ten, but I'm working at segment > granularity level and with the right luck it could take a long time. > It would be nice to have a function like this which would return in a > deterministic amount of time, like simply an inverse Hash. > Weird, I was 100% sure that there's no way to reverse MurmurHash3, but it seems there is [1]. Our implementation is a bit different, but it shouldn't be very hard to adopt. On the other hand, AbstractTopologyAwareEncoder1x.denormalizeSegmentHashIds brute-forces MurmurHash3 on each topology update to get a "denormalized" start value for each segment, which has to map to 0.2% of the segment. I don't remember how much it took, but it wasn't that bad. The good part about using grouping is that you can have lots of keys with the same group key, so you would only have to find the inverse once. [1] https://131002.net/siphash/#at > > >> > The general goal is to make it possible to "tag" all entries of an > >> > index, and have an independent index for each segment of the CH. So > >> > the resulting effect would be, that when a primary owner for any key K > >> > is making an update, and this triggers an index update, that update is > >> > A) going to happen on the same node -> no need to forwarding to a > >> > "master indexing node" > >> > B) each such writes on the index happen on the same node which is > >> > primary owner for all the written entries of the index. > >> > > >> > There are two additional nice consequences: > >> > - there would be no need to perform a reliable "master election": > >> > ownership singleton is already guaranteed by Infinispan's essential > >> > logic, so it would reuse that > >> > - the propagation of writes on the index from the primary owner > >> > (which is the local node by definition) to backup owners could use > >> > REPL_ASYNC for most practical use cases. > >> > > >> > So net result is that the overhead for indexing is reduced to 0 (ZERO) > >> > blocking RPCs if the async repl is acceptable, or to only one blocking > >> > roundtrip if very strict consistency is required. 
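To make the grouping trick concrete, the one-off search for a suitable group string could look roughly like this (assuming the ConsistentHash.getSegment() method from 7.x; groupForSegment is a made-up helper):

   import org.infinispan.distribution.ch.ConsistentHash;

   // Probe deterministic candidates until one maps to the target
   // segment; each probe hits the target with probability
   // 1/numSegments, so with the default segment counts this loop
   // terminates quickly in practice.
   static String groupForSegment(ConsistentHash ch, int targetSegment) {
      for (int i = 0; ; i++) {
         String candidate = "group-" + i;
         if (ch.getSegment(candidate) == targetSegment)
            return candidate;
      }
   }

The returned string would then be used as the group of every key of that index (e.g. from a Grouper), so the search only ever runs once per segment.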
> > > > > > Sounds very interesting, but I think there may be a problem with your > > strategy: Infinispan doesn't guarantee you that one of the nodes > executing > > the CommitCommand is the primary owner at the time the CommitCommand is > > executed. You could have something like this: > > Index storage is generally used without transactions, but even > assuming we had transactions enabled or the "vanilla" put operation > suffered had a similar timing issue (as we'd determine this node to be > owner higher up in the search stack, before the actual put reaches the > Infinispan core API) it's not a problem as we'd simply lose locality > of the write: it would be slightly less efficient, but still write the > "right thing". > I was thinking about the indexed cache, not the storage cache. In the storage cache I think the only serious problem is when the originator dies, especially if it's also the primary owner, neither tx nor non-tx deal with that properly ATM. However, if I understood your idea correctly, only the primary owner in the indexed cache will ever write to the index, and the indexed cache may well use transactions. So I think my scenario is relevant, and the index won't be updated by B or anyone else. Non-transactional caches do not have the issue, the index update can only disappear if you use async replication in the index storage cache. Instead you can have a single write triggering multiple index updates, but I'm guessing you have that covered. > The intention is to maximise locality with the hints, but failing > locality on writes I just expect it to be handled as any other put > operation on Infinispan.. with a couple more RPCs, with any race > condition regarding topology changes being handled at lower level. > Which is exactly why I'm now working on top of container segments: > they are a stable building block and allow us to not worry on how > you'll actually distribute data or re-route update commands. > > Thanks all for the suggestions! > Sanne > > > > > Cluster [A, B, C, D], key k, owners(k) = [A, B] (A is primary) > > C initiates a tx that executes put(k, v) > > Tx prepare succeeds on A and B > > A crashes, but the other nodes don't detect the crash yet > > Tx commit succeeds on B, who still thinks is a backup owner > > B detects the crash, installs a new cluster view consistent hash with > > owners(k) = [B] > > > Cheers Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20150122/2d6498c8/attachment-0001.html From dan.berindei at gmail.com Thu Jan 22 10:44:20 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 22 Jan 2015 17:44:20 +0200 Subject: [infinispan-dev] Experiment: Affinity Tagging In-Reply-To: References: <54BDB88C.8030709@redhat.com> <54BE6740.4090108@redhat.com> Message-ID: On Wed, Jan 21, 2015 at 3:28 AM, Sanne Grinovero wrote: > On 20 January 2015 at 14:33, Adrian Nistor wrote: > > None of the existing Hash implementations can, but this new one will be > > special. It could have access to the config (and CH) of the user's cache > so > > it will know the number of segments. The index cache will have to use the > > same type of CH as the data cache in order to keep ownership in sync and > the > > Hash implementation will be the special delegating Hash. > > > > There is a twist though, the above only works with SyncConsistentHash. 
> > Because when two caches with identical topology use DefaultConsistentHash > > they could still not be in sync in terms of key ownership. Only > > SyncConsistentHash ensures that. > > Many thanks for pointing out the need for a SyncConsistentHashFactory, > I was not aware of the limitations described in the javadoc. > > Side note: I'm surprised by the limitation of the normal > ConsistentHashFactory as described in the javadoc of > SyncConsistentHashFactory.. is it because our normal implementation > is actually not "Consistent"? Or is it referring to additional > properties of our Hash function? > > Yes, it's because our DefaultConsistentHash isn't really consistent - i.e. the mapping of segments to nodes depends on more than just the addresses of the nodes. It seemed like the best way to fix the load distribution problems we had at the time, but it is starting to feel a little painful now. Another property of a "real" consistent hash is that a key will only move from an existing owner to a joiner, and there are no unnecessary moves between the existing nodes. It's a nice property, but I'm afraid we never really had it in Infinispan because of the way we handled multiple nodes with the same hash code. I didn't manage to get this working in SyncConsistentHashFactory while also keeping a nice mostly-even distribution of segments, but I haven't completely given up on it yet... > Cheers, > Sanne > > [...] _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev
From slaskawi at redhat.com Fri Jan 23 05:02:07 2015 From: slaskawi at redhat.com (Sebastian Łaskawiec) Date: Fri, 23 Jan 2015 11:02:07 +0100 Subject: [infinispan-dev] allowDuplicateDomains set to true for CDI? In-Reply-To: References: <54C0BA4F.2040508@redhat.com> Message-ID: <54C21C1F.5000602@redhat.com> Hey Dan! Regarding the CDI Extension - yes, that's possible. We can set it to "DefaultCDIManager" or something similar. On the other hand, it will cause the same exception if a user deploys 2 applications and each uses the default CDI producer (unfortunately CDI does not expose any deployment information, so we can't use the deployment name for that purpose). Nevertheless I think that situation would be perfectly acceptable. Thanks Sebastian On 01/22/2015 10:41 AM, Dan Berindei wrote: > Couldn't WildFly and the CDI provider both set a different > globalJmxStatistics().cacheManagerName() for the cache managers they > create? From ttarrant at redhat.com Fri Jan 23 08:01:10 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 23 Jan 2015 14:01:10 +0100 Subject: [infinispan-dev] Infinispan 7.1.0.CR2 released Message-ID: <54C24616.1050801@redhat.com> Dear Infinispan community, Infinispan 7.1.0.CR1^H2 is available (well, problems can happen ;) Read more at: http://blog.infinispan.org/2015/01/infinispan-710cr2-released.html -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From dan.berindei at gmail.com Fri Jan 23 08:57:37 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 23 Jan 2015 15:57:37 +0200 Subject: [infinispan-dev] allowDuplicateDomains set to true for CDI? In-Reply-To: <54C21C1F.5000602@redhat.com> References: <54C0BA4F.2040508@redhat.com> <54C21C1F.5000602@redhat.com> Message-ID: Indeed, I wouldn't consider it a problem if the warning appears with two applications deployed.
I would suggest also changing the warning message to point to cacheManagerName, not just allowDuplicateDomains. Cheers Dan On Fri, Jan 23, 2015 at 12:02 PM, Sebastian Łaskawiec wrote: > Hey Dan! > > Regarding the CDI Extension - yes, that's possible. > We can set it to "DefaultCDIManager" or something similar. On the other > hand, it will cause the same exception if a user deploys 2 applications and > each uses the default CDI producer (unfortunately CDI does not expose > any deployment information, so we can't use the deployment name for that > purpose). Nevertheless I think that situation would be perfectly > acceptable. > > Thanks > Sebastian > > On 01/22/2015 10:41 AM, Dan Berindei wrote: > > Couldn't WildFly and the CDI provider both set a different > > globalJmxStatistics().cacheManagerName() for the cache managers they > > create? > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Fri Jan 23 09:36:26 2015 From: slaskawi at redhat.com (Sebastian Łaskawiec) Date: Fri, 23 Jan 2015 15:36:26 +0100 Subject: [infinispan-dev] allowDuplicateDomains set to true for CDI? In-Reply-To: References: <54C0BA4F.2040508@redhat.com> <54C21C1F.5000602@redhat.com> Message-ID: <54C25C6A.50006@redhat.com> That's a good point. I'll include that in my Pull Request. Thanks for the hint! Sebastian On 01/23/2015 02:57 PM, Dan Berindei wrote: > Indeed, I wouldn't consider it a problem if the warning appears with > two applications deployed. I would suggest also changing the warning > message to point to cacheManagerName, not just allowDuplicateDomains. From galder at redhat.com Fri Jan 23 11:10:33 2015 From: galder at redhat.com (Galder Zamarreño) Date: Fri, 23 Jan 2015 17:10:33 +0100 Subject: [infinispan-dev] Distribution-aware ClusterLoader In-Reply-To: References: Message-ID: <0B824CD9-4F95-4200-8A61-1EADF469BEED@redhat.com> Hey Manik, I think I remember some JIRA about triggering state transfer manually, upon a management operation or similar, in order to avoid state transfer mayhem when bringing up a lot of nodes at the same time. I don't know what's happened to that, but would it work? Cheers, On 17 Jan 2015, at 02:43, Manik Surtani wrote: > Greetings. :-) > > I chatted with a few of you offline about this earlier; anyone has any thoughts around a ClusterLoader implementation that, instead of broadcasting to the entire cluster, unicasts to the owners of a given key by inspecting the DistributionManager. Thinking of using this as a lazy/on-demand form of state transfer in a distributed cluster, so joiners don't trigger big chunks of data moving around eagerly. > > –
M > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From anistor at redhat.com Fri Jan 23 11:35:21 2015 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 23 Jan 2015 18:35:21 +0200 Subject: [infinispan-dev] Distribution-aware ClusterLoader In-Reply-To: <0B824CD9-4F95-4200-8A61-1EADF469BEED@redhat.com> References: <0B824CD9-4F95-4200-8A61-1EADF469BEED@redhat.com> Message-ID: <54C27849.4000807@redhat.com> Galder, Manik, the jira you mention is ISPN-3140 (JMX operation to suppress state transfer) [1], implemented quite a long time ago. This should solve the problem of many simultaneous joiners. Does this fit your needs? [1] https://issues.jboss.org/browse/ISPN-3140 On 01/23/2015 06:10 PM, Galder Zamarreño wrote: > Hey Manik, I think I remember some JIRA about triggering state transfer manually, upon a management operation or similar, in order to avoid state transfer mayhem when bringing up a lot of nodes at the same time. I don't know what's happened to that, but would it work? > > Cheers, > > On 17 Jan 2015, at 02:43, Manik Surtani wrote: > >> Greetings. :-) >> >> I chatted with a few of you offline about this earlier; anyone has any thoughts around a ClusterLoader implementation that, instead of broadcasting to the entire cluster, unicasts to the owners of a given key by inspecting the DistributionManager. Thinking of using this as a lazy/on-demand form of state transfer in a distributed cluster, so joiners don't trigger big chunks of data moving around eagerly. >> >> – M >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From manik at infinispan.org Fri Jan 23 17:47:35 2015 From: manik at infinispan.org (Manik Surtani) Date: Fri, 23 Jan 2015 14:47:35 -0800 Subject: [infinispan-dev] Distribution-aware ClusterLoader In-Reply-To: <54C27849.4000807@redhat.com> References: <0B824CD9-4F95-4200-8A61-1EADF469BEED@redhat.com> <54C27849.4000807@redhat.com> Message-ID: No, it doesn't. That's quite a different problem. I don't want manual intervention. On 23 January 2015 at 08:35, Adrian Nistor wrote: > Galder, Manik, the jira you mention is ISPN-3140 (JMX operation to > suppress state transfer) [1], implemented quite a long time ago. This > should solve the problem of many simultaneous joiners. Does this fit > your needs? > > [1] https://issues.jboss.org/browse/ISPN-3140 > > On 01/23/2015 06:10 PM, Galder Zamarreño wrote: > > Hey Manik, I think I remember some JIRA about triggering state transfer > manually, upon a management operation or similar, in order to avoid state > transfer mayhem when bringing up a lot of nodes at the same time. I don't know > what's happened to that, but would it work? > > > > Cheers, > > > > On 17 Jan 2015, at 02:43, Manik Surtani wrote: > > > >> Greetings. :-) > >> > >> I chatted with a few of you offline about this earlier; anyone has any > thoughts around a ClusterLoader implementation that, instead of > broadcasting to the entire cluster, unicasts to the owners of a given key > by inspecting the DistributionManager.
Thinking of using this as a > lazy/on-demand form of state transfer in a distributed cluster, so joiners > don't trigger big chunks of data moving around eagerly. > >> > >> – M > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > > Galder Zamarreño > > galder at redhat.com > > twitter.com/galderz > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Wed Jan 28 04:38:58 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 28 Jan 2015 10:38:58 +0100 Subject: [infinispan-dev] Student / Contributor projects Message-ID: <54C8AE32.90200@redhat.com> Hi all, I was told that our student/contributor project page is awfully out-of-date, so we're in need of a big refresh. We should also move that page to the website. Here are some ideas I have collected: - ISPN-5185 Add topology headers to the RESTful server - ISPN-5186 intelligent (L2/L3) Java REST client - ISPN-5187 Node.js HotRod client (either pure JavaScript or based on the C++ client) - ISPN-5188 Support for JSON as indexable/queryable objects using the ProtoBuf schema definitions (this could be extended to XML too) - ISPN-5189 Allow setting a "computing" function (using JDK 8's lambdas) on a cache so that entries can be computed on-demand when they are missing/expired More ideas please Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From tsykora at redhat.com Wed Jan 28 23:55:03 2015 From: tsykora at redhat.com (Tomas Sykora) Date: Wed, 28 Jan 2015 23:55:03 -0500 (EST) Subject: [infinispan-dev] Student / Contributor projects In-Reply-To: <54C8AE32.90200@redhat.com> References: <54C8AE32.90200@redhat.com> Message-ID: <1238144212.2567122.1422507303008.JavaMail.zimbra@redhat.com> Hello :) As one of those who successfully used an Infinispan-related topic for a diploma thesis, I am a big fan of this initiative. Radim had an idea about capturing, visualizing and storing a history of inter-node communication. I personally feel that the Infinispan Management Console could possibly be the right "platform" in which to gather, store and visualize this kind of information. This could also be used for demonstration purposes (but not only!). Maybe we would need "a hook" from JGroups (or other components?) to gather the needed data more easily; see the sketch below. Radim's intention was mainly driven by the idea of being able to see inter-node communication clearly, in order to find corrupted message/data flows and spot complicated bugs without having to grep through gigabytes of text logs. I can definitely do some research on what is possible and what would be needed, provide more information, or even define the topic of the diploma thesis itself in the future. Then I can ask students at the faculty whether anyone would be interested. Thanks!
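To give a feel for the kind of JGroups hook this would need, here is a rough sketch of a capture protocol that could sit in the stack and mirror traffic to a collector (the class name is made up; JGroups 3.x API assumed):

   import org.jgroups.Event;
   import org.jgroups.Message;
   import org.jgroups.stack.Protocol;

   public class MessageCaptureProtocol extends Protocol {
      @Override
      public Object up(Event evt) {
         if (evt.getType() == Event.MSG)
            record((Message) evt.getArg(), true);
         return up_prot.up(evt);
      }

      @Override
      public Object down(Event evt) {
         if (evt.getType() == Event.MSG)
            record((Message) evt.getArg(), false);
         return down_prot.down(evt);
      }

      private void record(Message msg, boolean incoming) {
         // a real implementation would feed the management console's
         // store instead of stdout
         System.out.printf("%s %s -> %s (%d bytes)%n",
               incoming ? "IN " : "OUT", msg.getSrc(), msg.getDest(), msg.getLength());
      }
   }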
Tomas ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" > Sent: Wednesday, January 28, 2015 10:38:58 AM > Subject: [infinispan-dev] Student / Contributor projects > > [...] From rvansa at redhat.com Thu Jan 29 03:12:23 2015 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 29 Jan 2015 09:12:23 +0100 Subject: [infinispan-dev] Student / Contributor projects In-Reply-To: <54C8AE32.90200@redhat.com> References: <54C8AE32.90200@redhat.com> Message-ID: <54C9EB67.2070209@redhat.com> Worth noting that Tristan has put this information on the wiki [1], too. I've added my comments there. Radim [1] https://developer.jboss.org/wiki/StudentContributorProjectsWithInfinispan On 01/28/2015 10:38 AM, Tristan Tarrant wrote: > Hi all, > > [...] -- Radim Vansa JBoss DataGrid QA From andreas.kruthoff at nexustelecom.com Thu Jan 29 12:25:22 2015 From: andreas.kruthoff at nexustelecom.com (Andreas Kruthoff) Date: Thu, 29 Jan 2015 18:25:22 +0100 Subject: [infinispan-dev] state transfer timed out, where to configure? Message-ID: <54CA6D02.1030009@nexustelecom.com> Hi dev, I'm running into the following exception on a 3rd node joining an existing 2-node cluster: a distributed cluster, file store with a few million entries. The 3rd node times out during startup, I think ("Initial state transfer timed out"). How can I configure/increase the timeout in my infinispan.xml? Is it within <...>?
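For reference, in the 7.x configuration schema the knob is the timeout attribute of the state-transfer element, in milliseconds (the default is 240000, i.e. 4 minutes, if I remember correctly). A sketch, with the cache name taken from the stack trace below:

   <distributed-cache name="infinicache-lbd-imei">
      <!-- wait up to 15 minutes for the initial state transfer -->
      <state-transfer enabled="true" timeout="900000"/>
   </distributed-cache>

The programmatic equivalent would be builder.clustering().stateTransfer().timeout(15, TimeUnit.MINUTES).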
thanks for help -andreas Exception in thread "main" org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170) at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638) at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627) at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530) at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:216) at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:813) at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:584) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539) at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416) at ch.nexustelecom.lbd.engine.ImeiCache.init(ImeiCache.java:49) at ch.nexustelecom.dexclient.engine.DefaultDexClientEngine.init(DefaultDexClientEngine.java:120) at ch.nexustelecom.dexclient.DexClient.initClient(DexClient.java:169) at ch.nexustelecom.dexclient.tool.DexClientManager.startup(DexClientManager.java:196) at ch.nexustelecom.dexclient.tool.DexClientManager.main(DexClientManager.java:83) Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache infinicache-lbd-imei on m4sxhpsrm672-11986 at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) ... 14 more From galder at redhat.com Fri Jan 30 09:45:03 2015 From: galder at redhat.com (Galder Zamarreño) Date: Fri, 30 Jan 2015 15:45:03 +0100 Subject: [infinispan-dev] Distribution-aware ClusterLoader In-Reply-To: References: <0B824CD9-4F95-4200-8A61-1EADF469BEED@redhat.com> <54C27849.4000807@redhat.com> Message-ID: <4D9545F4-5136-4C1A-BF3A-04CE46F3254F@redhat.com> On 23 Jan 2015, at 23:47, Manik Surtani wrote: > No, it doesn't. That's quite a different problem. I don't want manual intervention. You said: > Thinking of using this as a lazy/on-demand form of state transfer in a distributed cluster, so joiners don't trigger big chunks of data moving around eagerly. You can still call JMX operations from code, when you want, and hence cause state transfer to happen "on-demand" or lazily, without any manual intervention...
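Something along these lines, for instance (the ObjectName pattern and the RebalancingEnabled attribute are from ISPN-3140 as far as I remember, so double-check them against the jmxDomain and cache manager name in use):

   import java.lang.management.ManagementFactory;
   import javax.management.Attribute;
   import javax.management.MBeanServer;
   import javax.management.ObjectName;

   MBeanServer mbeans = ManagementFactory.getPlatformMBeanServer();
   ObjectName ltm = new ObjectName(
         "org.infinispan:type=CacheManager,name=\"DefaultCacheManager\",component=LocalTopologyManager");
   // suspend rebalancing while a batch of joiners starts...
   mbeans.setAttribute(ltm, new Attribute("RebalancingEnabled", false));
   // ...start the new nodes, then let state transfer happen in one go
   mbeans.setAttribute(ltm, new Attribute("RebalancingEnabled", true));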
Cheers, > On 23 January 2015 at 08:35, Adrian Nistor wrote: > Galder, Manik, the jira you mention is ISPN-3140 (JMX operation to > suppress state transfer) [1], implemented quite a long time ago. This > should solve the problem of many simultaneous joiners. Does this fit > your needs? > > [1] https://issues.jboss.org/browse/ISPN-3140 > > [...] -- Galder Zamarreño galder at redhat.com twitter.com/galderz _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Fri Jan 30 18:10:51 2015 From: mudokonman at gmail.com (William Burns) Date: Fri, 30 Jan 2015 18:10:51 -0500 Subject: [infinispan-dev] Infinispan 7.1.0 Final has been released ! Message-ID: Hello everyone, The final release is now available for 7.1.0, providing some additional features and enhancements. You can find out all about it at: http://blog.infinispan.org/2015/01/infinispan-710-final-released.html Thanks ! - Will Burns