From galder at redhat.com Mon Sep 1 03:56:47 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 1 Sep 2014 09:56:47 +0200 Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <53F1FF94.5070409@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> <53F1FF94.5070409@redhat.com> Message-ID: <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> Hi guys, Thanks a lot for your feedback on this. Having looked closer, the log message says: > [echo] Killing Infinispan server with PID - 3658 29739 And the pattern of how that log message gets computed is: > > We could also try maven-exec-plugin and call the Unix "kill" command from it, instead of using the InfinispanServerKillProcessor. > > Martin > > > On 12.8.2014 10:15, Jakub Markos wrote: >> Hi, >> >> I looked at it and I don't think using InfinispanServerKillProcessor would be any better, >> since it still just calls 'kill -9'. The only difference is that it doesn't kill all >> java processes starting from jboss-modules.jar, but just the one configured for the test. >> >> Is it maybe possible that the kill happened, but the port was still hanging? >> >> Jakub >> >> ----- Original Message ----- >>> From: "Galder Zamarre?o" >>> To: "Jakub Markos" , "Martin Gencur" >>> Cc: "infinispan -Dev List" >>> Sent: Monday, August 4, 2014 12:35:50 PM >>> Subject: Ant based kill not fully working? Re: ISPN-4567 >>> >>> Hi, >>> >>> Dan has reported [1]. It appears as if the last server started in >>> infinispan-as-module-client-integrationtests did not really get killed. From >>> what I see, this kill was done via the specific Ant target present in that >>> Maven module. >>> >>> I also remembered recently [2] was added. Maybe we need to get >>> as-modules/client to be configured with it so that it properly kills >>> servers? >>> >>> What I?m not sure is where we?d put it so that it can be consumed both by >>> server/integration/testsuite and as-modules/client? The problem is that the >>> class, as is, brings in arquillian dependency. If we can separate the >>> arquillian stuff from the actual code, the class itself could maybe go in >>> commons test source directory? >>> >>> @Tristan, thoughts? >>> >>> @Jakub, can I assign this to you? >>> >>> [1] https://issues.jboss.org/browse/ISPN-4567 >>> [2] >>> https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> > -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Mon Sep 1 05:25:27 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 1 Sep 2014 11:25:27 +0200 Subject: [infinispan-dev] Infinispan Jira workflow In-Reply-To: <53FAF405.3070404@redhat.com> References: <53FAEC27.1010902@redhat.com> <53FAEFD1.3060302@redhat.com> <53FAF03A.2090004@redhat.com> <53FAF405.3070404@redhat.com> Message-ID: <897DB91D-2380-482B-BF22-DEEFE5AF26D0@redhat.com> Sounds good to me. On 25 Aug 2014, at 10:29, Tristan Tarrant wrote: > Yes, we need to bring sanity to all of that, and that can be done only > if we all do it together :) > > And "New" is probably a bad choice. "Unassigned" is also wrong since we > always have a default assignee. That's why I suggested an "Unverified" > or "Untriaged" state instead. 
> > Tristan > > On 25/08/14 10:13, Radim Vansa wrote: >> ... marking those issues as "New" would sound somewhat funny :) >> >> Radim >> >> On 08/25/2014 10:12 AM, Radim Vansa wrote: >>> And are there any recommendations about the 767 currently open issues >>> [1]? It seems to me that after 5 years any issue [2] should be resolved >>> or rejected. >>> >>> [1] >>> https://issues.jboss.org/browse/ISPN/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel >>> [2] https://issues.jboss.org/browse/ISPN-3 >>> https://issues.jboss.org/browse/ISPN-19 etc... >>> >>> On 08/25/2014 09:56 AM, Tristan Tarrant wrote: >>>> I was just looking at the Jira workflow for Infinispan and noticed that >>>> all issues start off in the "Open" state and assigned to the default >>>> owner for the component. Unfortunately this does not mean that the >>>> actual "assignee" has taken ownership, or that he intends to work on it >>>> in the near future, or that he has even looked at it. I would therefore >>>> like to introduce a state for fresh issues which is just before "Open". >>>> This can be "New" or "Unverified/Untriaged" and will make it easier to >>>> find all those "lurker" issues which are lost in the noise. >>>> >>>> What do you think ? >>>> >>>> Tristan >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From slaskawi at redhat.com Mon Sep 1 06:50:17 2014 From: slaskawi at redhat.com (Sebastian =?UTF-8?Q?=C5=81askawiec?=) Date: Mon, 01 Sep 2014 12:50:17 +0200 Subject: [infinispan-dev] Test groups Message-ID: <1409568617.3940.63.camel@slaskawiec> Hi! Recently I've been working with some tests and I noticed that we have pretty large number of Maven profiles responsible for selecting which test suites should be executed: * test-CI * test-functional * test-jgroups * test-transaction * test-unit * test-unstable * test-xsite Perhaps some of them might be removed and we could create some more useful hierarchy. Here is my proposition: * test-smoke - all unit test with some basic functional tests based on Arquillian. This profile would be invoked by default and with our CI server. There is already a ticket to implement such profile in Jira [1]. I think it would be a good practice to reject all Pull Requests which fail against this profile. * test-acceptance - Full test suite without performance test. This should be executed once a day (nightly profile?) * test-performance - Reserved for performance/stress tests. * test-code-quality - test-acceptance with Sonar, Firebug, Jacoco or any other tool for measuring code quality. Executed once a day. In my opinion above hierarchy will help to identify serious problems faster and will help us in productisation stream. What do you think? [1] https://issues.jboss.org/browse/ISPN-4665 Best regards Sebastian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140901/9e79357f/attachment.html

From ttarrant at redhat.com Mon Sep 1 10:50:24 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 01 Sep 2014 16:50:24 +0200
Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-09-01
Message-ID: <540487B0.5080409@redhat.com>

Get the minutes from here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-09-01-14.03.html

From galder at redhat.com Mon Sep 1 11:08:45 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Mon, 1 Sep 2014 17:08:45 +0200
Subject: [infinispan-dev] Asynchronous cache's "void put()" call expectations changed from 6.0.0 to 6.0.1/7.0
Message-ID: 

Hi all,

@Paul, this might be important for WF if using async repl caches (the same I think applies to distributed async caches too).

Today I've been trying to upgrade the Infinispan version in Hibernate master from 6.0.0.Final to 7.0.0.Beta1. Overall, it's all worked fine, but there's one test that has started failing.

Essentially, this is a clustered test for a repl async cache (w/ cluster cache loader) where a non-owner cache node does a put() and immediately, on the same cache, calls a get(). The test is failing because the get() does not see the effects of the put(), even if both operations are called on the same cache instance.

According to Dan, this should have been happening since [1] was implemented, but it's really started happening since [2], when lock delegation was enabled for replicated caches (EntryWrappingInterceptor.isUsingLockDelegation is now true whereas in 6.0.0 it was false).

Not sure we set expectations in this regard, but clearly it's a big change in terms of expectations on when "void put()" completes for async repl caches. I'm not sure how we should handle this, but it definitely needs some discussion and adjusting documentation/javadoc if needed. Can we do something differently?

Independent of how we resolve this, this is once again the result of trying to shoehorn async behaviour into sync APIs. Any async caches (DIST, INV, REPL) should really be accessed exclusively via the AsyncCache API, where you can return quickly, use the future, and attach any listener to it (a bit à la Java 8's CompletableFuture.map lambda calls) as a way to signal that the operation has completed. Then you have an API and cache mode that make sense and are consistent with how async APIs work.

Right now, when a repl async cache's "void put()" call returns is not very well defined. Does it return when the message has been put on the network? What impact does it have on the local cache contents?

Also, a very big problem of the change of behaviour is that, if left like that, you are forcing users to code differently, using the same "void put()" API, depending on the configuration (whether async/sync). As clearly shown by the issue above, this is very confusing. It's a lot more logical IMO, and I've already sent an email on this very same topic [3] back in January, that whether a cache is sync or async should be based purely on the API used, forgetting about the static configuration flag on whether the cache is async or sync.
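For illustration, a rough sketch of what coding explicitly against the async API might look like (a sketch only, against the 7.x public API; the cache variable and String key/value types are assumed, and this is not the failing Hibernate test):

import java.util.concurrent.Future;

import org.infinispan.Cache;

public class AsyncPutSketch {

   // Write through the async API and only read once the returned future
   // has completed, instead of assuming a sync-looking "void put()".
   static String putThenGet(Cache<String, String> cache, String key, String value) throws Exception {
      Future<String> write = cache.putAsync(key, value); // returns immediately
      write.get();                                       // wait until the put has been applied
      return cache.get(key);                             // the read should now observe the value
   }
}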
Cheers, [1] https://issues.jboss.org/browse/ISPN-2772 [2] https://issues.jboss.org/browse/ISPN-3354 [3] http://lists.jboss.org/pipermail/infinispan-dev/2014-January/014448.html -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From vjuranek at redhat.com Mon Sep 1 11:12:52 2014 From: vjuranek at redhat.com (Vojtech Juranek) Date: Mon, 01 Sep 2014 17:12:52 +0200 Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> <53F1FF94.5070409@redhat.com> <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> Message-ID: <1486283.K2g3J7Zd59@localhost> Hi, > So, according to that, ${pid} is ?3658 29739? which looks wrong. > > Not sure what that means, whether there are two processes running and both > should be killed, or the way the PID is computed is buggy. IMHO it means that there are 2 processes running, both need to be killed. If the PID was wrong you'd have seen something like this in the build log: [exec] kill: sending signal to XYZ failed: No such process > InfinispanServerKillProcessor has a slightly different way to compute the > PID, maybe it does it correctly? WDYT? that being said, I don't believe InfinispanServerKillProcessor would help here - as Jakub already wrote, it also calls kill -9. IMHO the question is why the process is not killed by kill -9 (probably some zombie process, but how it can get to this state?). Vojta -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140901/8609b779/attachment-0001.bin From mgencur at redhat.com Tue Sep 2 02:09:07 2014 From: mgencur at redhat.com (Martin Gencur) Date: Tue, 02 Sep 2014 08:09:07 +0200 Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> <53F1FF94.5070409@redhat.com> <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> Message-ID: <54055F03.2030508@redhat.com> On 1.9.2014 09:56, Galder Zamarre?o wrote: > Hi guys, > > Thanks a lot for your feedback on this. > > Having looked closer, the log message says: > >> [echo] Killing Infinispan server with PID - 3658 29739 > And the pattern of how that log message gets computed is: > >> > Cheers, > > On 18 Aug 2014, at 15:28, Martin Gencur wrote: > >> Hi Galder, >> I haven't seen this before. I thought the ant-based "kill" command was safe and reliable. It's hard to say what went wrong without further logs. Whether the kill command failed or whether there were other processes that were not found by the jps command. >> >> We could also try maven-exec-plugin and call the Unix "kill" command from it, instead of using the InfinispanServerKillProcessor. >> >> Martin >> >> >> On 12.8.2014 10:15, Jakub Markos wrote: >>> Hi, >>> >>> I looked at it and I don't think using InfinispanServerKillProcessor would be any better, >>> since it still just calls 'kill -9'. The only difference is that it doesn't kill all >>> java processes starting from jboss-modules.jar, but just the one configured for the test. >>> >>> Is it maybe possible that the kill happened, but the port was still hanging? 
>>> >>> Jakub >>> >>> ----- Original Message ----- >>>> From: "Galder Zamarre?o" >>>> To: "Jakub Markos" , "Martin Gencur" >>>> Cc: "infinispan -Dev List" >>>> Sent: Monday, August 4, 2014 12:35:50 PM >>>> Subject: Ant based kill not fully working? Re: ISPN-4567 >>>> >>>> Hi, >>>> >>>> Dan has reported [1]. It appears as if the last server started in >>>> infinispan-as-module-client-integrationtests did not really get killed. From >>>> what I see, this kill was done via the specific Ant target present in that >>>> Maven module. >>>> >>>> I also remembered recently [2] was added. Maybe we need to get >>>> as-modules/client to be configured with it so that it properly kills >>>> servers? >>>> >>>> What I?m not sure is where we?d put it so that it can be consumed both by >>>> server/integration/testsuite and as-modules/client? The problem is that the >>>> class, as is, brings in arquillian dependency. If we can separate the >>>> arquillian stuff from the actual code, the class itself could maybe go in >>>> commons test source directory? >>>> >>>> @Tristan, thoughts? >>>> >>>> @Jakub, can I assign this to you? >>>> >>>> [1] https://issues.jboss.org/browse/ISPN-4567 >>>> [2] >>>> https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > From dan.berindei at gmail.com Tue Sep 2 05:46:51 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 2 Sep 2014 12:46:51 +0300 Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <54055F03.2030508@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> <53F1FF94.5070409@redhat.com> <9BED700F-F106-4541-8309-C4C41BA63956@redhat.com> <54055F03.2030508@redhat.com> Message-ID: Galder, I think kill is working properly, but the server socket cannot be bound because a client connection has not finished closing: http://stackoverflow.com/questions/14388706/socket-options-so-reuseaddr-and-so-reuseport-how-do-they-differ-do-they-mean-t On Tue, Sep 2, 2014 at 9:09 AM, Martin Gencur wrote: > On 1.9.2014 09:56, Galder Zamarre?o wrote: > > Hi guys, > > > > Thanks a lot for your feedback on this. > > > > Having looked closer, the log message says: > > > >> [echo] Killing Infinispan server with PID - 3658 29739 > > And the pattern of how that log message gets computed is: > > > >> once. And I think this has mostly been working for us. I could not > reproduce the issue on my localhost. > But maybe it makes sense to change the way the processes are searched. > Otherwise I don't know what to do about that:) > > Martin > > > > > > Cheers, > > > > On 18 Aug 2014, at 15:28, Martin Gencur wrote: > > > >> Hi Galder, > >> I haven't seen this before. I thought the ant-based "kill" command was > safe and reliable. It's hard to say what went wrong without further logs. > Whether the kill command failed or whether there were other processes that > were not found by the jps command. > >> > >> We could also try maven-exec-plugin and call the Unix "kill" command > from it, instead of using the InfinispanServerKillProcessor. 
> >> > >> Martin > >> > >> > >> On 12.8.2014 10:15, Jakub Markos wrote: > >>> Hi, > >>> > >>> I looked at it and I don't think using InfinispanServerKillProcessor > would be any better, > >>> since it still just calls 'kill -9'. The only difference is that it > doesn't kill all > >>> java processes starting from jboss-modules.jar, but just the one > configured for the test. > >>> > >>> Is it maybe possible that the kill happened, but the port was still > hanging? > >>> > >>> Jakub > >>> > >>> ----- Original Message ----- > >>>> From: "Galder Zamarre?o" > >>>> To: "Jakub Markos" , "Martin Gencur" < > mgencur at redhat.com> > >>>> Cc: "infinispan -Dev List" > >>>> Sent: Monday, August 4, 2014 12:35:50 PM > >>>> Subject: Ant based kill not fully working? Re: ISPN-4567 > >>>> > >>>> Hi, > >>>> > >>>> Dan has reported [1]. It appears as if the last server started in > >>>> infinispan-as-module-client-integrationtests did not really get > killed. From > >>>> what I see, this kill was done via the specific Ant target present in > that > >>>> Maven module. > >>>> > >>>> I also remembered recently [2] was added. Maybe we need to get > >>>> as-modules/client to be configured with it so that it properly kills > >>>> servers? > >>>> > >>>> What I?m not sure is where we?d put it so that it can be consumed > both by > >>>> server/integration/testsuite and as-modules/client? The problem is > that the > >>>> class, as is, brings in arquillian dependency. If we can separate the > >>>> arquillian stuff from the actual code, the class itself could maybe > go in > >>>> commons test source directory? > >>>> > >>>> @Tristan, thoughts? > >>>> > >>>> @Jakub, can I assign this to you? > >>>> > >>>> [1] https://issues.jboss.org/browse/ISPN-4567 > >>>> [2] > >>>> > https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java > >>>> -- > >>>> Galder Zamarre?o > >>>> galder at redhat.com > >>>> twitter.com/galderz > >>>> > >>>> > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140902/ce488c1f/attachment.html From paul.ferraro at redhat.com Tue Sep 2 08:19:14 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 2 Sep 2014 08:19:14 -0400 (EDT) Subject: [infinispan-dev] Asynchronous cache's "void put()" call expectations changed from 6.0.0 to 6.0.1/7.0 In-Reply-To: References: Message-ID: <128708287.14780333.1409660354923.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Galder Zamarre?o" > To: "infinispan -Dev List" , "Paul Ferraro" > Sent: Monday, September 1, 2014 11:08:45 AM > Subject: Asynchronous cache's "void put()" call expectations changed from 6.0.0 to 6.0.1/7.0 > > Hi all, > > @Paul, this might be important for WF if using async repl caches (the same I > think applies to distributed async caches too) Luckily, Dan warned me that of this behavioral change well in advance. Any time we need a reliable return value from a Cache.put(...), we use Flag.FORCE_SYNCHRONOUS so that the same code will work for sync and async caches alike. 
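For reference, a minimal sketch of that pattern (just an illustration against the public API; the cache variable is assumed and this isn't the actual WildFly code):

import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class ForceSyncSketch {

   // Force this particular put() to behave synchronously, even when the
   // cache itself is configured as async, so its outcome can be relied on.
   static void reliablePut(Cache<String, String> cache, String key, String value) {
      cache.getAdvancedCache().withFlags(Flag.FORCE_SYNCHRONOUS).put(key, value);
   }
}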
> Today I?ve been trying to upgrade Infinispan version in Hibernate master from > 6.0.0.Final to 7.0.0.Beta1. Overall, it?s all worked fine but there?s one > test that has started failing. > > Essentialy, this is a clustered test for a repl async cache (w/ cluster cache > loader) where a non-owner cache node does put() and immediately, on the same > cache, it calls a get(). The test is failing because the get() does not see > the effects of the put(), even if both operations are called on the same > cache instance. > > According to Dan, this should have been happening since [1] was implemented, > but it?s really started happening since [2] when lock delegation was enabled > for replicated caches (EntryWrappingInterceptor.isUsingLockDelegation is now > true whereas in 6.0.0 it was false). > > Not sure we set expectations in this regard, but clearly it?s big change in > terms of expectations on when ?void put()? completes for async repl caches. > I?m not sure how we should handle this, but it definitely needs some > discussion and adjuts documentation/javadoc if needed. Can we do something > differently? > > Indepent of how we resolve this, this is the result of once again of trying > to shoehole async behaviour into sync APIs. Any async caches (DIST, INV, > REPL) should really be accessed exclusively via the AsyncCache API, where > you can return quickly and use the future, and any listener to attach to it > (a bit ala Java8?s CompletableFuture.map lambda calls) as a way to signal > that the operation has completed, and then you have an API and cache mode > that make sense and is consistent with how async APIs work. > > Right now, when a repl async cache?s ?void put()? call is not very well > defined. Does it return when message has been put on the network? What > impact does it have in the local cache contents? > > Also, a very big problem of the change of behaviour is that if left like > that, you are forcing users to code differently, using the same ?void put()? > API depending on the configuration (whether async/sync). As clearly shown by > the issue above, this is very confusing. It?s a lot more logical IMO, and > I?ve already sent an email on this very same topic [3] back in January, that > whether a cache is sync or async should be based purely on the API used and > forget about the static configuration flag on whether the cache is async or > sync. I would agree would this last statement. Consistent semantics are a good thing. If you do change this, however, just let me know well in advance. > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-2772 > [2] https://issues.jboss.org/browse/ISPN-3354 > [3] http://lists.jboss.org/pipermail/infinispan-dev/2014-January/014448.html > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > From renatorro at comp.ufla.br Tue Sep 2 17:06:17 2014 From: renatorro at comp.ufla.br (Renato Resende Ribeiro de Oliveira) Date: Tue, 2 Sep 2014 18:06:17 -0300 Subject: [infinispan-dev] Infinispan within JBoss EAP 6.2 Message-ID: Hello there, I am not sure if this list is the right place to ask that, but i am getting out of options. I am trying to deploy an application in JBoss EAP 6.2 in cluster mode and i need the feature of shared user sessions across cluster nodes. 
If i deploy a test application in the EAP the Infinispan subsystem starts normally, printing the following messages on the logs: [Host Controller] 17:50:16,558 INFO [org.jboss.as.repository] (management-handler-thread - 1) JBAS014900: Conte?do adicionado na localiza??o /home/renato/jbdevstudio/runtimes/jboss-eap/domain/data/content/10/880e56bde8806be9fc5736829dd5733d717974/content [Server:server-two] 17:50:16,790 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) JBAS015876: Iniciando a implanta??o do "cluster.war" (runtime-name: "cluster.war") [Server:server-one] 17:50:16,790 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Iniciando a implanta??o do "cluster.war" (runtime-name: "cluster.war") [Server:server-one] 17:50:18,481 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 59) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,481 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 63) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,490 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 59) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,492 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 63) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,494 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-2) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,497 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-2) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-one] 17:50:18,558 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000078: Starting JGroups Channel [Server:server-one] 17:50:18,571 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux) [Server:server-one] 17:50:18,572 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux) [Server:server-one] 17:50:18,572 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. 
net.core.wmem_max on Linux) [Server:server-one] 17:50:18,573 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux) [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool -- 50) [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool -- 50) ------------------------------------------------------------------- [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool -- 50) GMS: address=master:server-one/web, cluster=web, physical address= 127.0.0.1:55200 [Server:server-one] 17:50:18,577 INFO [stdout] (ServerService Thread Pool -- 50) ------------------------------------------------------------------- [Server:server-two] 17:50:18,654 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 51) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,655 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 53) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,658 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 51) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,660 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 53) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,660 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-6) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,661 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-6) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated. [Server:server-two] 17:50:18,715 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000078: Starting JGroups Channel [Server:server-two] 17:50:18,728 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux) [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux) [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212,99KB. 
This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux) [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212,99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux) [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool -- 50) [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool -- 50) ------------------------------------------------------------------- [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool -- 50) GMS: address=master:server-two/web, cluster=web, physical address= 127.0.0.1:55350 [Server:server-two] 17:50:18,732 INFO [stdout] (ServerService Thread Pool -- 50) ------------------------------------------------------------------- [Server:server-two] 17:50:20,747 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000094: Received new cluster view: [master:server-two/web|0] [master:server-two/web] [Server:server-two] 17:50:20,797 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000079: Cache local address is master:server-two/web, physical addresses are [127.0.0.1:55350] [Server:server-two] 17:50:20,802 INFO [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 50) ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.7.Final [Server:server-two] 17:50:20,812 INFO [org.jboss.as.clustering] (MSC service thread 1-7) JBAS010238: O n?mero de membroos do clusyer: 1 [Server:server-two] 17:50:20,856 INFO [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread Pool -- 53) ISPN000161: Using a batchMode transaction manager [Server:server-two] 17:50:20,856 INFO [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread Pool -- 50) ISPN000161: Using a batchMode transaction manager [Server:server-two] 17:50:20,990 INFO [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 53) ISPN000031: MBeans were successfully registered to the platform MBean server. [Server:server-two] 17:50:20,990 INFO [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 50) ISPN000031: MBeans were successfully registered to the platform MBean server. 
[Server:server-two] 17:50:20,998 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 53) JBAS010281: Cache default-host/cluster inicializado a partir do recipiente web [Server:server-two] 17:50:20,998 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 50) JBAS010281: Cache repl inicializado a partir do recipiente web [Server:server-two] 17:50:21,038 INFO [org.jboss.web] (ServerService Thread Pool -- 50) JBAS018210: Registra o contexto da web: /cluster [Server:server-two] 17:50:21,166 INFO [org.jboss.as.clustering] (Incoming-1,shared=udp) JBAS010225: Nova visualiza??o do cluster para a parti??o web (id: 1, delta: 1, merge: false) : [master:server-two/web, master:server-one/web] [Server:server-two] 17:50:21,167 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [master:server-two/web|1] [master:server-two/web, master:server-one/web] [Server:server-one] 17:50:21,188 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000094: Received new cluster view: [master:server-two/web|1] [master:server-two/web, master:server-one/web] [Server:server-one] 17:50:21,253 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 50) ISPN000079: Cache local address is master:server-one/web, physical addresses are [127.0.0.1:55200] [Server:server-one] 17:50:21,259 INFO [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 50) ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.7.Final [Server:server-one] 17:50:21,272 INFO [org.jboss.as.clustering] (MSC service thread 1-4) JBAS010238: O n?mero de membroos do clusyer: 2 [Server:server-one] 17:50:21,307 INFO [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread Pool -- 59) ISPN000161: Using a batchMode transaction manager [Server:server-one] 17:50:21,308 INFO [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread Pool -- 63) ISPN000161: Using a batchMode transaction manager [Server:server-one] 17:50:21,446 INFO [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 63) ISPN000031: MBeans were successfully registered to the platform MBean server. [Server:server-one] 17:50:21,446 INFO [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 59) ISPN000031: MBeans were successfully registered to the platform MBean server. [Server:server-one] 17:50:21,517 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 63) JBAS010281: Cache default-host/cluster inicializado a partir do recipiente web [Server:server-one] 17:50:21,527 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 59) JBAS010281: Cache repl inicializado a partir do recipiente web [Server:server-one] 17:50:21,540 INFO [org.jboss.web] (ServerService Thread Pool -- 59) JBAS018210: Registra o contexto da web: /cluster [Server:server-two] 17:50:21,762 INFO [org.jboss.as.server] (host-controller-connection-threads - 1) JBAS018559: Implantado "cluster.war" (runtime-name: "cluster.war") [Server:server-one] 17:50:21,762 INFO [org.jboss.as.server] (host-controller-connection-threads - 1) JBAS018559: Implantado "cluster.war" (runtime-name: "cluster.war") So, for cluster.war everything works perfect. This WAR has no libs and just a web.xml with the tag. So i created another WAR just inserting a lib dependency for my web framework VRaptor. 
When i try to deploy it without any further modifications, the Infinispan simply doesn't start. The log of this deploy follows: [Host Controller] 17:56:30,287 INFO [org.jboss.as.repository] (management-handler-thread - 6) JBAS014900: Conte?do adicionado na localiza??o /home/renato/jbdevstudio/runtimes/jboss-eap/domain/data/content/68/3e481ea59173e8225b374da58aada4e7ba7924/content [Server:server-two] 17:56:30,646 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015876: Iniciando a implanta??o do "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") [Server:server-one] 17:56:30,659 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) JBAS015876: Iniciando a implanta??o do "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") [Server:server-one] 17:56:31,693 WARN [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015893: Foi encontrado o nome de classe inv?lido 'org.xmlpull.mxp1.MXParser,org.xmlpull.mxp1_serializer.MXSerializer' para o tipo de servi?o 'org.xmlpull.v1.XmlPullParserFactory' [Server:server-two] 17:56:31,699 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Foi encontrado o nome de classe inv?lido 'org.xmlpull.mxp1.MXParser,org.xmlpull.mxp1_serializer.MXSerializer' para o tipo de servi?o 'org.xmlpull.v1.XmlPullParserFactory' [Server:server-two] 17:56:32,130 WARN [org.jboss.weld.deployer] (MSC service thread 1-4) JBAS016012: A implanta??o deployment "cluster-vraptor.war" cont?m anota??s CDI mas o beans.xml n?o foi encontrado. [Server:server-one] 17:56:32,261 WARN [org.jboss.weld.deployer] (MSC service thread 1-6) JBAS016012: A implanta??o deployment "cluster-vraptor.war" cont?m anota??s CDI mas o beans.xml n?o foi encontrado. [Server:server-two] 17:56:32,297 INFO [org.jboss.web] (ServerService Thread Pool -- 63) JBAS018210: Registra o contexto da web: /cluster-vraptor [Server:server-one] 17:56:32,397 INFO [org.jboss.web] (ServerService Thread Pool -- 81) JBAS018210: Registra o contexto da web: /cluster-vraptor [Server:server-one] 17:56:33,504 INFO [org.jboss.as.server] (host-controller-connection-threads - 2) JBAS018559: Implantado "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") [Server:server-two] 17:56:33,503 INFO [org.jboss.as.server] (host-controller-connection-threads - 2) JBAS018559: Implantado "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") Nothing of the clustering Infinispan module starts. This framework adds some dependecies to the libs, all extra jars included in the second deployment: aopalliance-1.0.jar gson-2.2.4.jar guava-11.0.2.jar guice-3.0.jar guice-multibindings-3.0.jar iogi-1.0.0.jar javassist-3.12.1.GA.jar javax.inject-1.jar jsr305-1.3.9.jar log4j-1.2.16.jar mirror-1.6.1.jar objenesis-1.3.jar paranamer-2.5.2.jar scannotation-1.0.2.jar slf4j-api-1.6.1.jar slf4j-log4j12-1.6.1.jar vraptor-3.5.4.jar xmlpull-1.1.3.1.jar xpp3_min-1.1.4c.jar xstream-1.4.7.jar The VRaptor framework declares a Servlet Filter that is initialized statically, i don't know if this is a relevant information. What i want to know is why this happen and what can i do to fix this issue. There is any further configuration that i can do to make this work? There is any known issue regarding the co-existence of Infinispan 5.2.7 and any of these libs? I completely out of ideas and clues. Thanks for the help. Regards. 
-- *Renato Resende Ribeiro de Oliveira* DIretor de Produ??o e Tecnologia - ProGolden Solu??es Tecnol?gicas MSc - Computer Science - Universidade Federal de Lavras Skype: renatorro.comp.ufla.br ICQ: 669012672 Phone: +55 (31) 9823-9631 Conhe?a o Pr?mioIdeia - Inova??o Colaborativa na sua empresa! http://www.premioideia.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140902/f36ebcad/attachment-0001.html From renatorro at comp.ufla.br Wed Sep 3 18:24:30 2014 From: renatorro at comp.ufla.br (Renato Resende Ribeiro de Oliveira) Date: Wed, 3 Sep 2014 19:24:30 -0300 Subject: [infinispan-dev] Infinispan within JBoss EAP 6.2 In-Reply-To: References: Message-ID: Just to keep registered, I found the reason of the problem: https://issues.jboss.org/browse/JBAS-9402 The framework has a web-fragment, but it isn't declared Regards. 2014-09-02 18:06 GMT-03:00 Renato Resende Ribeiro de Oliveira < renatorro at comp.ufla.br>: > Hello there, > I am not sure if this list is the right place to ask that, but i am > getting out of options. > > I am trying to deploy an application in JBoss EAP 6.2 in cluster mode and > i need the feature of shared user sessions across cluster nodes. > If i deploy a test application in the EAP the Infinispan subsystem starts > normally, printing the following messages on the logs: > > [Host Controller] 17:50:16,558 INFO [org.jboss.as.repository] > (management-handler-thread - 1) JBAS014900: Conte?do adicionado na > localiza??o > /home/renato/jbdevstudio/runtimes/jboss-eap/domain/data/content/10/880e56bde8806be9fc5736829dd5733d717974/content > [Server:server-two] 17:50:16,790 INFO [org.jboss.as.server.deployment] > (MSC service thread 1-6) JBAS015876: Iniciando a implanta??o do > "cluster.war" (runtime-name: "cluster.war") > [Server:server-one] 17:50:16,790 INFO [org.jboss.as.server.deployment] > (MSC service thread 1-1) JBAS015876: Iniciando a implanta??o do > "cluster.war" (runtime-name: "cluster.war") > [Server:server-one] 17:50:18,481 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 59) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-one] 17:50:18,481 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 63) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-one] 17:50:18,490 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 59) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-one] 17:50:18,492 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 63) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-one] 17:50:18,494 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC > service thread 1-2) ISPN000152: Passivation configured without an eviction > policy being selected. Only manually evicted entities will be passivated. 
> [Server:server-one] 17:50:18,497 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC > service thread 1-2) ISPN000152: Passivation configured without an eviction > policy being selected. Only manually evicted entities will be passivated. > [Server:server-one] 17:50:18,558 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000078: Starting JGroups Channel > [Server:server-one] 17:50:18,571 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket > DatagramSocket was set to 640KB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max send buffer in the > OS correctly (e.g. net.core.wmem_max on Linux) > [Server:server-one] 17:50:18,572 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket > DatagramSocket was set to 20MB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max receive buffer in > the OS correctly (e.g. net.core.rmem_max on Linux) > [Server:server-one] 17:50:18,572 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket > MulticastSocket was set to 640KB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max send buffer in the > OS correctly (e.g. net.core.wmem_max on Linux) > [Server:server-one] 17:50:18,573 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket > MulticastSocket was set to 25MB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max receive buffer in > the OS correctly (e.g. net.core.rmem_max on Linux) > [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool > -- 50) > [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool > -- 50) ------------------------------------------------------------------- > [Server:server-one] 17:50:18,576 INFO [stdout] (ServerService Thread Pool > -- 50) GMS: address=master:server-one/web, cluster=web, physical address= > 127.0.0.1:55200 > [Server:server-one] 17:50:18,577 INFO [stdout] (ServerService Thread Pool > -- 50) ------------------------------------------------------------------- > [Server:server-two] 17:50:18,654 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 51) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-two] 17:50:18,655 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 53) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-two] 17:50:18,658 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 51) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. > [Server:server-two] 17:50:18,660 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] > (ServerService Thread Pool -- 53) ISPN000152: Passivation configured > without an eviction policy being selected. Only manually evicted entities > will be passivated. 
> [Server:server-two] 17:50:18,660 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC > service thread 1-6) ISPN000152: Passivation configured without an eviction > policy being selected. Only manually evicted entities will be passivated. > [Server:server-two] 17:50:18,661 INFO > [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC > service thread 1-6) ISPN000152: Passivation configured without an eviction > policy being selected. Only manually evicted entities will be passivated. > [Server:server-two] 17:50:18,715 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000078: Starting JGroups Channel > [Server:server-two] 17:50:18,728 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket > DatagramSocket was set to 640KB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max send buffer in the > OS correctly (e.g. net.core.wmem_max on Linux) > [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket > DatagramSocket was set to 20MB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max receive buffer in > the OS correctly (e.g. net.core.rmem_max on Linux) > [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the send buffer of socket > MulticastSocket was set to 640KB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max send buffer in the > OS correctly (e.g. net.core.wmem_max on Linux) > [Server:server-two] 17:50:18,729 WARN [org.jgroups.protocols.UDP] > (ServerService Thread Pool -- 50) JGRP000014: the receive buffer of socket > MulticastSocket was set to 25MB, but the OS only allocated 212,99KB. This > might lead to performance problems. Please set your max receive buffer in > the OS correctly (e.g. 
net.core.rmem_max on Linux) > [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool > -- 50) > [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool > -- 50) ------------------------------------------------------------------- > [Server:server-two] 17:50:18,731 INFO [stdout] (ServerService Thread Pool > -- 50) GMS: address=master:server-two/web, cluster=web, physical address= > 127.0.0.1:55350 > [Server:server-two] 17:50:18,732 INFO [stdout] (ServerService Thread Pool > -- 50) ------------------------------------------------------------------- > [Server:server-two] 17:50:20,747 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000094: Received new cluster view: > [master:server-two/web|0] [master:server-two/web] > [Server:server-two] 17:50:20,797 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000079: Cache local address is > master:server-two/web, physical addresses are [127.0.0.1:55350] > [Server:server-two] 17:50:20,802 INFO > [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread > Pool -- 50) ISPN000128: Infinispan version: Infinispan 'Delirium' > 5.2.7.Final > [Server:server-two] 17:50:20,812 INFO [org.jboss.as.clustering] (MSC > service thread 1-7) JBAS010238: O n?mero de membroos do clusyer: 1 > [Server:server-two] 17:50:20,856 INFO > [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread > Pool -- 53) ISPN000161: Using a batchMode transaction manager > [Server:server-two] 17:50:20,856 INFO > [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread > Pool -- 50) ISPN000161: Using a batchMode transaction manager > [Server:server-two] 17:50:20,990 INFO > [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 53) > ISPN000031: MBeans were successfully registered to the platform MBean > server. > [Server:server-two] 17:50:20,990 INFO > [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 50) > ISPN000031: MBeans were successfully registered to the platform MBean > server. 
> [Server:server-two] 17:50:20,998 INFO > [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 53) > JBAS010281: Cache default-host/cluster inicializado a partir do recipiente > web > [Server:server-two] 17:50:20,998 INFO > [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 50) > JBAS010281: Cache repl inicializado a partir do recipiente web > [Server:server-two] 17:50:21,038 INFO [org.jboss.web] (ServerService > Thread Pool -- 50) JBAS018210: Registra o contexto da web: /cluster > [Server:server-two] 17:50:21,166 INFO [org.jboss.as.clustering] > (Incoming-1,shared=udp) JBAS010225: Nova visualiza??o do cluster para a > parti??o web (id: 1, delta: 1, merge: false) : [master:server-two/web, > master:server-one/web] > [Server:server-two] 17:50:21,167 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] > (Incoming-1,shared=udp) ISPN000094: Received new cluster view: > [master:server-two/web|1] [master:server-two/web, master:server-one/web] > [Server:server-one] 17:50:21,188 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000094: Received new cluster view: > [master:server-two/web|1] [master:server-two/web, master:server-one/web] > [Server:server-one] 17:50:21,253 INFO > [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService > Thread Pool -- 50) ISPN000079: Cache local address is > master:server-one/web, physical addresses are [127.0.0.1:55200] > [Server:server-one] 17:50:21,259 INFO > [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread > Pool -- 50) ISPN000128: Infinispan version: Infinispan 'Delirium' > 5.2.7.Final > [Server:server-one] 17:50:21,272 INFO [org.jboss.as.clustering] (MSC > service thread 1-4) JBAS010238: O n?mero de membroos do clusyer: 2 > [Server:server-one] 17:50:21,307 INFO > [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread > Pool -- 59) ISPN000161: Using a batchMode transaction manager > [Server:server-one] 17:50:21,308 INFO > [org.infinispan.factories.TransactionManagerFactory] (ServerService Thread > Pool -- 63) ISPN000161: Using a batchMode transaction manager > [Server:server-one] 17:50:21,446 INFO > [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 63) > ISPN000031: MBeans were successfully registered to the platform MBean > server. > [Server:server-one] 17:50:21,446 INFO > [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 59) > ISPN000031: MBeans were successfully registered to the platform MBean > server. > [Server:server-one] 17:50:21,517 INFO > [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 63) > JBAS010281: Cache default-host/cluster inicializado a partir do recipiente > web > [Server:server-one] 17:50:21,527 INFO > [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 59) > JBAS010281: Cache repl inicializado a partir do recipiente web > [Server:server-one] 17:50:21,540 INFO [org.jboss.web] (ServerService > Thread Pool -- 59) JBAS018210: Registra o contexto da web: /cluster > [Server:server-two] 17:50:21,762 INFO [org.jboss.as.server] > (host-controller-connection-threads - 1) JBAS018559: Implantado > "cluster.war" (runtime-name: "cluster.war") > [Server:server-one] 17:50:21,762 INFO [org.jboss.as.server] > (host-controller-connection-threads - 1) JBAS018559: Implantado > "cluster.war" (runtime-name: "cluster.war") > > So, for cluster.war everything works perfect. 
This WAR has no libs and > just a web.xml with the tag. > So i created another WAR just inserting a lib dependency for my web > framework VRaptor. When i try to deploy it without any further > modifications, the Infinispan simply doesn't start. The log of this deploy > follows: > > [Host Controller] 17:56:30,287 INFO [org.jboss.as.repository] > (management-handler-thread - 6) JBAS014900: Conte?do adicionado na > localiza??o > /home/renato/jbdevstudio/runtimes/jboss-eap/domain/data/content/68/3e481ea59173e8225b374da58aada4e7ba7924/content > [Server:server-two] 17:56:30,646 INFO [org.jboss.as.server.deployment] > (MSC service thread 1-4) JBAS015876: Iniciando a implanta??o do > "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") > [Server:server-one] 17:56:30,659 INFO [org.jboss.as.server.deployment] > (MSC service thread 1-6) JBAS015876: Iniciando a implanta??o do > "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") > [Server:server-one] 17:56:31,693 WARN [org.jboss.as.server.deployment] > (MSC service thread 1-5) JBAS015893: Foi encontrado o nome de classe > inv?lido > 'org.xmlpull.mxp1.MXParser,org.xmlpull.mxp1_serializer.MXSerializer' para o > tipo de servi?o 'org.xmlpull.v1.XmlPullParserFactory' > [Server:server-two] 17:56:31,699 WARN [org.jboss.as.server.deployment] > (MSC service thread 1-4) JBAS015893: Foi encontrado o nome de classe > inv?lido > 'org.xmlpull.mxp1.MXParser,org.xmlpull.mxp1_serializer.MXSerializer' para o > tipo de servi?o 'org.xmlpull.v1.XmlPullParserFactory' > [Server:server-two] 17:56:32,130 WARN [org.jboss.weld.deployer] (MSC > service thread 1-4) JBAS016012: A implanta??o deployment > "cluster-vraptor.war" cont?m anota??s CDI mas o beans.xml n?o foi > encontrado. > [Server:server-one] 17:56:32,261 WARN [org.jboss.weld.deployer] (MSC > service thread 1-6) JBAS016012: A implanta??o deployment > "cluster-vraptor.war" cont?m anota??s CDI mas o beans.xml n?o foi > encontrado. > [Server:server-two] 17:56:32,297 INFO [org.jboss.web] (ServerService > Thread Pool -- 63) JBAS018210: Registra o contexto da web: /cluster-vraptor > [Server:server-one] 17:56:32,397 INFO [org.jboss.web] (ServerService > Thread Pool -- 81) JBAS018210: Registra o contexto da web: /cluster-vraptor > [Server:server-one] 17:56:33,504 INFO [org.jboss.as.server] > (host-controller-connection-threads - 2) JBAS018559: Implantado > "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") > [Server:server-two] 17:56:33,503 INFO [org.jboss.as.server] > (host-controller-connection-threads - 2) JBAS018559: Implantado > "cluster-vraptor.war" (runtime-name: "cluster-vraptor.war") > > Nothing of the clustering Infinispan module starts. This framework adds > some dependecies to the libs, all extra jars included in the second > deployment: > > aopalliance-1.0.jar > gson-2.2.4.jar > guava-11.0.2.jar > guice-3.0.jar > guice-multibindings-3.0.jar > iogi-1.0.0.jar > javassist-3.12.1.GA.jar > javax.inject-1.jar > jsr305-1.3.9.jar > log4j-1.2.16.jar > mirror-1.6.1.jar > objenesis-1.3.jar > paranamer-2.5.2.jar > scannotation-1.0.2.jar > slf4j-api-1.6.1.jar > slf4j-log4j12-1.6.1.jar > vraptor-3.5.4.jar > xmlpull-1.1.3.1.jar > xpp3_min-1.1.4c.jar > xstream-1.4.7.jar > > The VRaptor framework declares a Servlet Filter that is initialized > statically, i don't know if this is a relevant information. > > What i want to know is why this happen and what can i do to fix this issue. > There is any further configuration that i can do to make this work? 
> Is there any known issue regarding the co-existence of Infinispan 5.2.7 > and any of these libs? > > I'm completely out of ideas and clues. > Thanks for the help. > Regards. > > -- > *Renato Resende Ribeiro de Oliveira* > Diretor de Produção e Tecnologia - ProGolden Soluções Tecnológicas > MSc - Computer Science - Universidade Federal de Lavras > > Skype: renatorro.comp.ufla.br > ICQ: 669012672 > Phone: +55 (31) 9823-9631 > > Conheça o PrêmioIdeia - Inovação Colaborativa na sua empresa! > http://www.premioideia.com/ > -- *Renato Resende Ribeiro de Oliveira* Diretor de Produção e Tecnologia - ProGolden Soluções Tecnológicas MSc - Computer Science - Universidade Federal de Lavras Skype: renatorro.comp.ufla.br ICQ: 669012672 Phone: +55 (31) 9823-9631 Conheça o PrêmioIdeia - Inovação Colaborativa na sua empresa! http://www.premioideia.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140903/0e8b015e/attachment-0001.html From rory.odonnell at oracle.com Fri Sep 5 04:33:07 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 05 Sep 2014 09:33:07 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b28 and JDK 8u40 b04 are available on java.net Message-ID: <54097543.2040709@oracle.com> Hi Galder, Early Access build for JDK 9 b28 is available on java.net, summary of changes are listed here Early Access build for JDK 8u40 b04 is available on java.net, summary of changes are listed here. Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140905/37e89d94/attachment.html From ttarrant at redhat.com Mon Sep 8 04:07:28 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 08 Sep 2014 10:07:28 +0200 Subject: [infinispan-dev] Jira change: "New" state for Infinispan issues Message-ID: <540D63C0.4070909@redhat.com> Hi all, the Jira workflow for new ISPN issues has been changed to introduce a "New" state in which all newly created issues will start from. There are three transitions from this status: "Hand Over to Development" to switch issue into Open status "Resolve issue" "Close issue" I have attached a diagram of the new workflow. Old issues will still have the old "Git pull request workflow". Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: workflow-design.png Type: image/png Size: 26828 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140908/e2fb4425/attachment-0001.png From rvansa at redhat.com Mon Sep 8 04:34:47 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 08 Sep 2014 10:34:47 +0200 Subject: [infinispan-dev] Forum: Version, JGroups and Infinispan Config Message-ID: <540D6A27.70308@redhat.com> Hi, I have a feeling that most of the first replies on the JBoss forum are asking users about their Infinispan version, configuration and JGroups configuration. Could we add a few fields asking for those along with the question? Radim -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Wed Sep 10 06:05:11 2014 From: bban at redhat.com (Bela Ban) Date: Wed, 10 Sep 2014 12:05:11 +0200 Subject: [infinispan-dev] JGRP-1877 Message-ID: <54102257.5020207@redhat.com> Just a quick heads up. 
I'm currently working on https://issues.jboss.org/browse/JGRP-1877, which is critical as it may: - cause RPCs to return prematurely (possibly with a TimeoutException), or - cause RPCs to block for a long time (pick which one is worse :-)) This is due to my misunderstanding of the semantics of System.nanoTime(). I frequently have code like this, which computes a future deadline for a timeout: long wait_time=TimeUnit.NANOSECONDS.convert(timeout, TimeUnit.MILLISECONDS); final long target_time=System.nanoTime() + wait_time; while(wait_time > 0 && !hasResult) { /* Wait for responses: */ wait_time=target_time - System.nanoTime(); if(wait_time > 0) { try {cond.await(wait_time, TimeUnit.NANOSECONDS);} catch(Exception e) {} } } if(!hasResult && wait_time <= 0) throw new TimeoutException(); Variable target_time can possibly become *negative* if nanoTime() returns a negative value. If so, hasResult is false and wait_time negative, and therefore a TimeoutException would be thrown ! While I'm at it, I'll also fix my uses of System.currentTimeMillis(), and replace it with nanoTime(). Our good friend Erik has run into issues with RPCs (using currentTimeMillis()) hanging forever when their NTP-based servers adjusted the time .... backwards ! Please be aware of nanoTime() in your own code, e.g. long t0=nanoTime(); ... long t1=nanoTime(); It is *not* guaranteed that t1 > t0 because of numeric overflow (t0 might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only way to compare them is t1 - t0 > 0 (t1 is more recent) or < 0 (t0 is more recent). Just thought I wanted to pass this on, in case somebody made the same stupid mistake... Thanks to David Lloyd for pointing this out ! -- Bela Ban, JGroups lead (http://www.jgroups.org) From afield at redhat.com Wed Sep 10 07:58:53 2014 From: afield at redhat.com (Alan Field) Date: Wed, 10 Sep 2014 07:58:53 -0400 (EDT) Subject: [infinispan-dev] JGRP-1877 In-Reply-To: <54102257.5020207@redhat.com> References: <54102257.5020207@redhat.com> Message-ID: <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> Hey Bela, > Just a quick heads up. 
I'm currently working on > https://issues.jboss.org/browse/JGRP-1877, which it critical as it may: > - cause RPCs to return prematurely (possibly with a TimeoutException), or > - cause RPCs to blocks for a long time (pick which one is worse :-)) > > This is due to my misunderstanding of the semantics of > System.nanoTime(), I frequently have code like this, which computes a > future deadline for a timeout: > > long wait_time=TimeUnit.NANOSECONDS.convert(timeout, > TimeUnit.MILLISECONDS); > final long target_time=System.nanoTime() + wait_time; > while(wait_time > 0 && !hasResult) { /* Wait for responses: */ > wait_time=target_time - System.nanoTime(); > if(wait_time > 0) { > try {cond.await(wait_time, TimeUnit.NANOSECONDS);} > catch(Exception e) {} > } > } > if(!hasResult && wait_time <= 0) > throw new TimeoutException(); > > Variable target_time can possibly become *negative* if nanoTime() > returns a negative value. If so, hasResult is false and wait_time > negative, and therefore a TimeoutException would be thrown ! > > While I'm at it, I'll also fix my uses of System.currentTimeMillis(), > and replace it with nanoTime(). Our good friend Erik has run into issues > with RPCs (using currentTimeMillis()) hanging forever when their > NTP-based servers adjusted the time .... backwards ! > > Please be aware of nanoTime() in your own code, e.g. > long t0=nanoTime(); > ... > long t1=nanoTime(); > > It is *not* guaranteed that t1 > t0 because of numeric overflow (t0 > might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only way to > compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is more recent. > > Just thought I wanted to pass this on, in case somebody made the same > stupid mistake... > > Thanks to David Lloyd for pointing this out ! > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From belaran at gmail.com Wed Sep 10 08:45:41 2014 From: belaran at gmail.com (Romain Pelisse) Date: Wed, 10 Sep 2014 14:45:41 +0200 Subject: [infinispan-dev] JGRP-1877 In-Reply-To: <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> References: <54102257.5020207@redhat.com> <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> Message-ID: Given the pattern of it, I might be able to produce a PMD checks for this. This way, you could run it across the code base and check for all occurences of it. On 10 September 2014 13:58, Alan Field wrote: > Hey Bela, > > > Just a quick heads up. I'm currently working on > > https://issues.jboss.org/browse/JGRP-1877, which it critical as it may: > > - cause RPCs to return prematurely (possibly with a TimeoutException), or > > - cause RPCs to blocks for a long time (pick which one is worse :-)) > > How frequently can these errors occur? Is this something that is not very > likely to happen or something that requires an external action to trigger > it? (i.e. changing the time via NTP) Just trying to determine the priority > of this issue. > > Thanks, > Alan > > ----- Original Message ----- > > From: "Bela Ban" > > To: infinispan-dev at lists.jboss.org > > Sent: Wednesday, September 10, 2014 12:05:11 PM > > Subject: [infinispan-dev] JGRP-1877 > > > > Just a quick heads up. 
I'm currently working on > > https://issues.jboss.org/browse/JGRP-1877, which it critical as it may: > > - cause RPCs to return prematurely (possibly with a TimeoutException), or > > - cause RPCs to blocks for a long time (pick which one is worse :-)) > > > > This is due to my misunderstanding of the semantics of > > System.nanoTime(), I frequently have code like this, which computes a > > future deadline for a timeout: > > > > long wait_time=TimeUnit.NANOSECONDS.convert(timeout, > > TimeUnit.MILLISECONDS); > > final long target_time=System.nanoTime() + wait_time; > > while(wait_time > 0 && !hasResult) { /* Wait for responses: > */ > > wait_time=target_time - System.nanoTime(); > > if(wait_time > 0) { > > try {cond.await(wait_time, TimeUnit.NANOSECONDS);} > > catch(Exception e) {} > > } > > } > > if(!hasResult && wait_time <= 0) > > throw new TimeoutException(); > > > > Variable target_time can possibly become *negative* if nanoTime() > > returns a negative value. If so, hasResult is false and wait_time > > negative, and therefore a TimeoutException would be thrown ! > > > > While I'm at it, I'll also fix my uses of System.currentTimeMillis(), > > and replace it with nanoTime(). Our good friend Erik has run into issues > > with RPCs (using currentTimeMillis()) hanging forever when their > > NTP-based servers adjusted the time .... backwards ! > > > > Please be aware of nanoTime() in your own code, e.g. > > long t0=nanoTime(); > > ... > > long t1=nanoTime(); > > > > It is *not* guaranteed that t1 > t0 because of numeric overflow (t0 > > might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only way to > > compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is more recent. > > > > Just thought I wanted to pass this on, in case somebody made the same > > stupid mistake... > > > > Thanks to David Lloyd for pointing this out ! > > > > -- > > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Romain PELISSE, *"The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it" -- Terry Pratchett* Belaran ins Prussia (blog) (... finally up and running !) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140910/f6749c04/attachment.html From bban at redhat.com Wed Sep 10 09:04:32 2014 From: bban at redhat.com (Bela Ban) Date: Wed, 10 Sep 2014 15:04:32 +0200 Subject: [infinispan-dev] JGRP-1877 In-Reply-To: References: <54102257.5020207@redhat.com> <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> Message-ID: <54104C60.3000106@redhat.com> Yep - meanwhile I just search for System.currentTimeMillis() and System.nanoTime(). On 10/09/14 14:45, Romain Pelisse wrote: > Given the pattern of it, I might be able to produce a PMD checks for > this. This way, you could run it across the code base and check for all > occurences of it. > > On 10 September 2014 13:58, Alan Field > wrote: > > Hey Bela, > > > Just a quick heads up. 
I'm currently working on > >https://issues.jboss.org/browse/JGRP-1877, which it critical as it may: > > - cause RPCs to return prematurely (possibly with a TimeoutException), or > > - cause RPCs to blocks for a long time (pick which one is worse :-)) > > How frequently can these errors occur? Is this something that is not > very likely to happen or something that requires an external action > to trigger it? (i.e. changing the time via NTP) Just trying to > determine the priority of this issue. > > Thanks, > Alan > > ----- Original Message ----- > > From: "Bela Ban" > > > To: infinispan-dev at lists.jboss.org > > > Sent: Wednesday, September 10, 2014 12:05:11 PM > > Subject: [infinispan-dev] JGRP-1877 > > > > Just a quick heads up. I'm currently working on > > https://issues.jboss.org/browse/JGRP-1877, which it critical as > it may: > > - cause RPCs to return prematurely (possibly with a > TimeoutException), or > > - cause RPCs to blocks for a long time (pick which one is worse :-)) > > > > This is due to my misunderstanding of the semantics of > > System.nanoTime(), I frequently have code like this, which computes a > > future deadline for a timeout: > > > > long wait_time=TimeUnit.NANOSECONDS.convert(timeout, > > TimeUnit.MILLISECONDS); > > final long target_time=System.nanoTime() + wait_time; > > while(wait_time > 0 && !hasResult) { /* Wait for > responses: */ > > wait_time=target_time - System.nanoTime(); > > if(wait_time > 0) { > > try {cond.await(wait_time, > TimeUnit.NANOSECONDS);} > > catch(Exception e) {} > > } > > } > > if(!hasResult && wait_time <= 0) > > throw new TimeoutException(); > > > > Variable target_time can possibly become *negative* if nanoTime() > > returns a negative value. If so, hasResult is false and wait_time > > negative, and therefore a TimeoutException would be thrown ! > > > > While I'm at it, I'll also fix my uses of System.currentTimeMillis(), > > and replace it with nanoTime(). Our good friend Erik has run into > issues > > with RPCs (using currentTimeMillis()) hanging forever when their > > NTP-based servers adjusted the time .... backwards ! > > > > Please be aware of nanoTime() in your own code, e.g. > > long t0=nanoTime(); > > ... > > long t1=nanoTime(); > > > > It is *not* guaranteed that t1 > t0 because of numeric overflow (t0 > > might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only > way to > > compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is more > recent. > > > > Just thought I wanted to pass this on, in case somebody made the same > > stupid mistake... > > > > Thanks to David Lloyd for pointing this out ! > > > > -- > > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > -- > Romain PELISSE, > /"The trouble with having an open mind, of course, is that people will > insist on coming along and trying to put things in it" -- Terry Pratchett/ > Belaran ins Prussia (blog) (... > finally up and running !) 
> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From bban at redhat.com Wed Sep 10 09:08:30 2014 From: bban at redhat.com (Bela Ban) Date: Wed, 10 Sep 2014 15:08:30 +0200 Subject: [infinispan-dev] JGRP-1877 In-Reply-To: <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> References: <54102257.5020207@redhat.com> <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> Message-ID: <54104D4E.8040205@redhat.com> On 10/09/14 13:58, Alan Field wrote: > Hey Bela, > >> Just a quick heads up. I'm currently working on >> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >> may: - cause RPCs to return prematurely (possibly with a >> TimeoutException), or - cause RPCs to blocks for a long time (pick >> which one is worse :-)) > > How frequently can these errors occur? Is this something that is not > very likely to happen or something that requires an external action > to trigger it? (i.e. changing the time via NTP) Just trying to > determine the priority of this issue. Changing the system time will definitely screw up code that relies on System.currentTimeMillis(). Once I replace this with nanoTime(), this problem should be eliminated. The nanoTime() problem is that an 'origin' chosen by the JVM can be in the future, so all calls to nanoTime() return negative values. Or - if positive - due to numeric overflow, the long can wrap around and become negative. Once this happens, all RPCs (for example) will return immediately, without any response, or throw TimeoutExceptions. This will last for 292 years... :-) > Thanks, Alan > > ----- Original Message ----- >> From: "Bela Ban" To: >> infinispan-dev at lists.jboss.org Sent: Wednesday, September 10, 2014 >> 12:05:11 PM Subject: [infinispan-dev] JGRP-1877 >> >> Just a quick heads up. I'm currently working on >> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >> may: - cause RPCs to return prematurely (possibly with a >> TimeoutException), or - cause RPCs to blocks for a long time (pick >> which one is worse :-)) >> >> This is due to my misunderstanding of the semantics of >> System.nanoTime(), I frequently have code like this, which computes >> a future deadline for a timeout: >> >> long wait_time=TimeUnit.NANOSECONDS.convert(timeout, >> TimeUnit.MILLISECONDS); final long target_time=System.nanoTime() + >> wait_time; while(wait_time > 0 && !hasResult) { /* Wait for >> responses: */ wait_time=target_time - System.nanoTime(); >> if(wait_time > 0) { try {cond.await(wait_time, >> TimeUnit.NANOSECONDS);} catch(Exception e) {} } } if(!hasResult && >> wait_time <= 0) throw new TimeoutException(); >> >> Variable target_time can possibly become *negative* if nanoTime() >> returns a negative value. If so, hasResult is false and wait_time >> negative, and therefore a TimeoutException would be thrown ! >> >> While I'm at it, I'll also fix my uses of >> System.currentTimeMillis(), and replace it with nanoTime(). Our >> good friend Erik has run into issues with RPCs (using >> currentTimeMillis()) hanging forever when their NTP-based servers >> adjusted the time .... backwards ! >> >> Please be aware of nanoTime() in your own code, e.g. long >> t0=nanoTime(); ... long t1=nanoTime(); >> >> It is *not* guaranteed that t1 > t0 because of numeric overflow >> (t0 might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). 
The only >> way to compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is >> more recent. >> >> Just thought I wanted to pass this on, in case somebody made the >> same stupid mistake... >> >> Thanks to David Lloyd for pointing this out ! >> >> -- Bela Ban, JGroups lead (http://www.jgroups.org) >> _______________________________________________ infinispan-dev >> mailing list infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ infinispan-dev > mailing list infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Wed Sep 10 17:03:43 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 10 Sep 2014 22:03:43 +0100 Subject: [infinispan-dev] JGRP-1877 In-Reply-To: <54104D4E.8040205@redhat.com> References: <54102257.5020207@redhat.com> <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> <54104D4E.8040205@redhat.com> Message-ID: Very interesting, I didn't know about the negative value being an option; don't replace all occurrences though as in some cases System.currentTimeMillis() is more appropriate, you can find some interesting discussions here: http://lists.jboss.org/pipermail/infinispan-dev/2011-October/009277.html I'm having a test shutdown "hung" right now; all other nodes have stopped since minutes already, but the following is still hung.. could it be the same problem? I'm extremely surprised I could hit it. java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x000000071b442558> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.jgroups.blocks.Request.responsesComplete(Request.java:197) at org.jgroups.blocks.Request.execute(Request.java:89) at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:406) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:370) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536) at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:324) at org.infinispan.topology.LocalTopologyManagerImpl.leave(LocalTopologyManagerImpl.java:128) at org.infinispan.statetransfer.StateTransferManagerImpl.stop(StateTransferManagerImpl.java:236) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:674) at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:552) - locked <0x00000007195f4dc8> (a 
org.infinispan.factories.ComponentRegistry) at org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:241) at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:782) at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:777) at org.infinispan.query.helper.TestableCluster$Node.kill(TestableCluster.java:270) at org.infinispan.query.helper.TestableCluster$Node.access$2(TestableCluster.java:265) at org.infinispan.query.helper.TestableCluster.killAll(TestableCluster.java:94) -- Sanne On 10 September 2014 14:08, Bela Ban wrote: > > > On 10/09/14 13:58, Alan Field wrote: >> Hey Bela, >> >>> Just a quick heads up. I'm currently working on >>> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >>> may: - cause RPCs to return prematurely (possibly with a >>> TimeoutException), or - cause RPCs to blocks for a long time (pick >>> which one is worse :-)) >> >> How frequently can these errors occur? Is this something that is not >> very likely to happen or something that requires an external action >> to trigger it? (i.e. changing the time via NTP) Just trying to >> determine the priority of this issue. > > > Changing the system time will definitely screw up code that relies on > System.currentTimeMillis(). Once I replace this with nanoTime(), this > problem should be eliminated. > > The nanoTime() problem is that an 'origin' chosen by the JVM can be in > the future, so all calls to nanoTime() return negative values. Or - if > positive - due to numeric overflow, the long can wrap around and become > negative. > > Once this happens, all RPCs (for example) will return immediately, > without any response, or throw TimeoutExceptions. This will last for 292 > years... :-) > > >> Thanks, Alan >> >> ----- Original Message ----- >>> From: "Bela Ban" To: >>> infinispan-dev at lists.jboss.org Sent: Wednesday, September 10, 2014 >>> 12:05:11 PM Subject: [infinispan-dev] JGRP-1877 >>> >>> Just a quick heads up. I'm currently working on >>> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >>> may: - cause RPCs to return prematurely (possibly with a >>> TimeoutException), or - cause RPCs to blocks for a long time (pick >>> which one is worse :-)) >>> >>> This is due to my misunderstanding of the semantics of >>> System.nanoTime(), I frequently have code like this, which computes >>> a future deadline for a timeout: >>> >>> long wait_time=TimeUnit.NANOSECONDS.convert(timeout, >>> TimeUnit.MILLISECONDS); final long target_time=System.nanoTime() + >>> wait_time; while(wait_time > 0 && !hasResult) { /* Wait for >>> responses: */ wait_time=target_time - System.nanoTime(); >>> if(wait_time > 0) { try {cond.await(wait_time, >>> TimeUnit.NANOSECONDS);} catch(Exception e) {} } } if(!hasResult && >>> wait_time <= 0) throw new TimeoutException(); >>> >>> Variable target_time can possibly become *negative* if nanoTime() >>> returns a negative value. If so, hasResult is false and wait_time >>> negative, and therefore a TimeoutException would be thrown ! >>> >>> While I'm at it, I'll also fix my uses of >>> System.currentTimeMillis(), and replace it with nanoTime(). Our >>> good friend Erik has run into issues with RPCs (using >>> currentTimeMillis()) hanging forever when their NTP-based servers >>> adjusted the time .... backwards ! >>> >>> Please be aware of nanoTime() in your own code, e.g. long >>> t0=nanoTime(); ... 
long t1=nanoTime(); >>> >>> It is *not* guaranteed that t1 > t0 because of numeric overflow >>> (t0 might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only >>> way to compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is >>> more recent. >>> >>> Just thought I wanted to pass this on, in case somebody made the >>> same stupid mistake... >>> >>> Thanks to David Lloyd for pointing this out ! >>> >>> -- Bela Ban, JGroups lead (http://www.jgroups.org) >>> _______________________________________________ infinispan-dev >>> mailing list infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ infinispan-dev >> mailing list infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From bban at redhat.com Thu Sep 11 07:44:50 2014 From: bban at redhat.com (Bela Ban) Date: Thu, 11 Sep 2014 13:44:50 +0200 Subject: [infinispan-dev] JGRP-1877 In-Reply-To: References: <54102257.5020207@redhat.com> <270279591.35894349.1410350333150.JavaMail.zimbra@redhat.com> <54104D4E.8040205@redhat.com> Message-ID: <54118B32.9050603@redhat.com> On 10/09/14 23:03, Sanne Grinovero wrote: > Very interesting, I didn't know about the negative value being an > option; don't replace all occurrences though as in some cases > System.currentTimeMillis() is more appropriate, you can find some > interesting discussions here: > http://lists.jboss.org/pipermail/infinispan-dev/2011-October/009277.html I don't see a single case where I should use currentTimeMillis() (except for some simple testing code): nanoTime() is not affected by system clock changes, currentTimeMillis() is. JGroups mostly uses both calls to measure elapsed time, or to determine when something has timed out (e.g. an RPC). > I'm having a test shutdown "hung" right now; all other nodes have > stopped since minutes already, but the following is still hung.. could > it be the same problem? Could be, but I don't know. Chances of this happening are very low. I'm going to see if I should log the time to wait in a TRACE statement, so we can determine extremely high waits (or maybe just log if it exceeds a threshold). This would also show negative wait times > I'm extremely surprised I could hit it. 
> > java.lang.Thread.State: TIMED_WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x000000071b442558> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) > at org.jgroups.blocks.Request.responsesComplete(Request.java:197) > at org.jgroups.blocks.Request.execute(Request.java:89) > at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:406) > at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:370) > at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) > at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536) > at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:324) > at org.infinispan.topology.LocalTopologyManagerImpl.leave(LocalTopologyManagerImpl.java:128) > at org.infinispan.statetransfer.StateTransferManagerImpl.stop(StateTransferManagerImpl.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:483) > at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) > at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869) > at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:674) > at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:552) > - locked <0x00000007195f4dc8> (a org.infinispan.factories.ComponentRegistry) > at org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:241) > at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:782) > at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:777) > at org.infinispan.query.helper.TestableCluster$Node.kill(TestableCluster.java:270) > at org.infinispan.query.helper.TestableCluster$Node.access$2(TestableCluster.java:265) > at org.infinispan.query.helper.TestableCluster.killAll(TestableCluster.java:94) > > -- Sanne > > > On 10 September 2014 14:08, Bela Ban wrote: >> >> >> On 10/09/14 13:58, Alan Field wrote: >>> Hey Bela, >>> >>>> Just a quick heads up. I'm currently working on >>>> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >>>> may: - cause RPCs to return prematurely (possibly with a >>>> TimeoutException), or - cause RPCs to blocks for a long time (pick >>>> which one is worse :-)) >>> >>> How frequently can these errors occur? Is this something that is not >>> very likely to happen or something that requires an external action >>> to trigger it? (i.e. changing the time via NTP) Just trying to >>> determine the priority of this issue. >> >> >> Changing the system time will definitely screw up code that relies on >> System.currentTimeMillis(). Once I replace this with nanoTime(), this >> problem should be eliminated. >> >> The nanoTime() problem is that an 'origin' chosen by the JVM can be in >> the future, so all calls to nanoTime() return negative values. 
Or - if >> positive - due to numeric overflow, the long can wrap around and become >> negative. >> >> Once this happens, all RPCs (for example) will return immediately, >> without any response, or throw TimeoutExceptions. This will last for 292 >> years... :-) >> >> >>> Thanks, Alan >>> >>> ----- Original Message ----- >>>> From: "Bela Ban" To: >>>> infinispan-dev at lists.jboss.org Sent: Wednesday, September 10, 2014 >>>> 12:05:11 PM Subject: [infinispan-dev] JGRP-1877 >>>> >>>> Just a quick heads up. I'm currently working on >>>> https://issues.jboss.org/browse/JGRP-1877, which it critical as it >>>> may: - cause RPCs to return prematurely (possibly with a >>>> TimeoutException), or - cause RPCs to blocks for a long time (pick >>>> which one is worse :-)) >>>> >>>> This is due to my misunderstanding of the semantics of >>>> System.nanoTime(), I frequently have code like this, which computes >>>> a future deadline for a timeout: >>>> >>>> long wait_time=TimeUnit.NANOSECONDS.convert(timeout, >>>> TimeUnit.MILLISECONDS); final long target_time=System.nanoTime() + >>>> wait_time; while(wait_time > 0 && !hasResult) { /* Wait for >>>> responses: */ wait_time=target_time - System.nanoTime(); >>>> if(wait_time > 0) { try {cond.await(wait_time, >>>> TimeUnit.NANOSECONDS);} catch(Exception e) {} } } if(!hasResult && >>>> wait_time <= 0) throw new TimeoutException(); >>>> >>>> Variable target_time can possibly become *negative* if nanoTime() >>>> returns a negative value. If so, hasResult is false and wait_time >>>> negative, and therefore a TimeoutException would be thrown ! >>>> >>>> While I'm at it, I'll also fix my uses of >>>> System.currentTimeMillis(), and replace it with nanoTime(). Our >>>> good friend Erik has run into issues with RPCs (using >>>> currentTimeMillis()) hanging forever when their NTP-based servers >>>> adjusted the time .... backwards ! >>>> >>>> Please be aware of nanoTime() in your own code, e.g. long >>>> t0=nanoTime(); ... long t1=nanoTime(); >>>> >>>> It is *not* guaranteed that t1 > t0 because of numeric overflow >>>> (t0 might be Long.MAX_VALUE-1 and t1 Long.MAX_VALUE +2 !). The only >>>> way to compare them is t1 - t0 > 0 (t1 is more recent) or < 0 t0 is >>>> more recent. >>>> >>>> Just thought I wanted to pass this on, in case somebody made the >>>> same stupid mistake... >>>> >>>> Thanks to David Lloyd for pointing this out ! 
>>>> >>>> -- Bela Ban, JGroups lead (http://www.jgroups.org) >>>> _______________________________________________ infinispan-dev >>>> mailing list infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> _______________________________________________ infinispan-dev >>> mailing list infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> -- >> Bela Ban, JGroups lead (http://www.jgroups.org) >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From ttarrant at redhat.com Thu Sep 11 08:58:48 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 11 Sep 2014 14:58:48 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting minutes 2014-09- Message-ID: <54119C88.5010009@redhat.com> Get the minutes from here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-09-08-14.04.html From galder at redhat.com Thu Sep 11 10:51:02 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 11 Sep 2014 16:51:02 +0200 Subject: [infinispan-dev] Throwing an IllegalStateException subclass when cache/cachemanager stopping/stopped Re: ISPN-4717 Message-ID: <82CAFEEB-1F3A-4F41-AEF8-D2DDD5444C5B@redhat.com> Hi, Re: https://issues.jboss.org/browse/ISPN-4717 While investigating [1], I discovered that when clients send operations to terminated/terminating caches, these are not recovered from. To make this easier to handle, I?d like to change cache/cachemanager from throwing IllegalStateException to throwing a new exception that extends IllegalStateException, e.g. CacheStopping/StoppedException or similar. By making it IllegalStateException, it should create minimal disruption for anyone expecting IllegalStateException, although I don?t think this is documented per se. This, together with a HR error code that accompanies it, should make it easier for clients to deal with it and retry. A new error code will also be added for suspected caches since these are still propagated to clients. Up until know, this has been dealt with by checking the error message, but that could break easily, so again, the later stages of HR 2.0 protocol implementation is good moment for implement these two things. If anyone has any objections, speak up :) Cheers, [1] https://issues.jboss.org/browse/ISPN-4707 -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Thu Sep 11 10:59:29 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 11 Sep 2014 10:59:29 -0400 Subject: [infinispan-dev] Throwing an IllegalStateException subclass when cache/cachemanager stopping/stopped Re: ISPN-4717 In-Reply-To: <82CAFEEB-1F3A-4F41-AEF8-D2DDD5444C5B@redhat.com> References: <82CAFEEB-1F3A-4F41-AEF8-D2DDD5444C5B@redhat.com> Message-ID: +1 Actually while looking at [1] I encountered the same error you were getting (not the same as the JIRA itself) and thought about how we could remedy that issue on a get as well. Being able to detect this new CacheStoppingException would allow for some options. 
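To make the proposal concrete, here is a minimal sketch of the shape being discussed. The names are purely illustrative (CacheStoppedException and the retry helper below are just the names floated in this thread, not a committed API):

    import java.util.concurrent.Callable;

    // Illustrative stand-in for the proposed subclass; extending
    // IllegalStateException keeps existing catch blocks working.
    class CacheStoppedException extends IllegalStateException {
        CacheStoppedException(String msg) {
            super(msg);
        }
    }

    // Illustrative client-side helper: retry a bounded number of times when the
    // target cache reports itself as stopping or stopped.
    class RetryOnStoppedCache {
        static <T> T execute(Callable<T> op, int maxRetries) throws Exception {
            for (int attempt = 0; ; attempt++) {
                try {
                    return op.call();
                } catch (CacheStoppedException e) {
                    if (attempt >= maxRetries) {
                        throw e; // still failing after the allowed attempts, give up
                    }
                    Thread.sleep(100); // brief back-off before retrying (e.g. against another node)
                }
            }
        }
    }

Because the new type would still be an IllegalStateException, callers that already catch IllegalStateException keep working, while smarter clients (or the Hot Rod client, once the matching error code is in place) can key their retry logic on the more specific type.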
[1] https://issues.jboss.org/browse/ISPN-4706 - Will On Thu, Sep 11, 2014 at 10:51 AM, Galder Zamarre?o wrote: > Hi, > > Re: https://issues.jboss.org/browse/ISPN-4717 > > While investigating [1], I discovered that when clients send operations to terminated/terminating caches, these are not recovered from. To make this easier to handle, I?d like to change cache/cachemanager from throwing IllegalStateException to throwing a new exception that extends IllegalStateException, e.g. CacheStopping/StoppedException or similar. By making it IllegalStateException, it should create minimal disruption for anyone expecting IllegalStateException, although I don?t think this is documented per se. This, together with a HR error code that accompanies it, should make it easier for clients to deal with it and retry. > > A new error code will also be added for suspected caches since these are still propagated to clients. Up until know, this has been dealt with by checking the error message, but that could break easily, so again, the later stages of HR 2.0 protocol implementation is good moment for implement these two things. > > If anyone has any objections, speak up :) > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-4707 > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Fri Sep 12 04:03:33 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Fri, 12 Sep 2014 10:03:33 +0200 Subject: [infinispan-dev] Jira change: "New" state for Infinispan issues In-Reply-To: <540D63C0.4070909@redhat.com> References: <540D63C0.4070909@redhat.com> Message-ID: Looks good, thanks Tristan! On 08 Sep 2014, at 10:07, Tristan Tarrant wrote: > Hi all, > > the Jira workflow for new ISPN issues has been changed to introduce a "New" state in which all newly created issues will start from. There are three transitions from this status: > > "Hand Over to Development" to switch issue into Open status > "Resolve issue" > "Close issue" > > I have attached a diagram of the new workflow. > > Old issues will still have the old "Git pull request workflow". > > Tristan > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From rory.odonnell at oracle.com Fri Sep 12 04:28:55 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 12 Sep 2014 09:28:55 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b29 and JDK 8u40 b05 are available on java.net Message-ID: <5412AEC7.6040302@oracle.com> Hi Galder, Early Access build for JDK 9 b29 is available on java.net, summary of changes are listed here Early Access build for JDK 8u40 b05 is available on java.net, summary of changes are listed here. Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140912/fb8f4683/attachment.html From ttarrant at redhat.com Fri Sep 12 05:52:21 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 12 Sep 2014 11:52:21 +0200 Subject: [infinispan-dev] Beta2 pending PRs Message-ID: <5412C255.4030606@redhat.com> Hi guys, there are two PRs that are "blocking" Beta2: ISPN-4574 PartitionHandling: consider less than numOwners partitions https://github.com/infinispan/infinispan/pull/2860 ISPN-4333 Uber Jars https://github.com/infinispan/infinispan/pull/2589 And obviously any others you feel are ready to integrate. Can you please dedicate some time to reviewing these ? Tristan From mmarkus at redhat.com Fri Sep 12 08:29:34 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 12 Sep 2014 15:29:34 +0300 Subject: [infinispan-dev] Beta2 pending PRs In-Reply-To: <5412C255.4030606@redhat.com> References: <5412C255.4030606@redhat.com> Message-ID: <772666D2-FD86-451E-AAEB-07CA1F523AFE@redhat.com> On Sep 12, 2014, at 12:52, Tristan Tarrant wrote: > ISPN-4574 PartitionHandling: consider less than numOwners partitions > https://github.com/infinispan/infinispan/pull/2860 I'm looking at this one. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From pierre.sutra at unine.ch Fri Sep 12 08:41:03 2014 From: pierre.sutra at unine.ch (Pierre Sutra) Date: Fri, 12 Sep 2014 14:41:03 +0200 Subject: [infinispan-dev] Data versioning Message-ID: <5412E9DF.3070904@unine.ch> Hello, In the context of the LEADS project, we recently wrote a paper |1] regarding data versioning in key-value stores, and using Infinispan as a basis to explore various implementations. It will be presented at the IEEE SRDS'14 conference this October [2]. We hope that it might interest you. Do not hesitate to address us comments and/or questions. Regards, Pierre [1] http://tinyurl.com/srds14versioning [2] www-nishio.ist.osaka-u.ac.jp/conf/srds2014/ From emmanuel at hibernate.org Fri Sep 12 12:14:02 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 12 Sep 2014 18:14:02 +0200 Subject: [infinispan-dev] Data versioning In-Reply-To: <5412E9DF.3070904@unine.ch> References: <5412E9DF.3070904@unine.ch> Message-ID: <20140912161402.GG24677@hibernate.org> Mircea, Tristan, I proposed to Pierre to be an invited writer on blog.infinispan.org to talk about it and do a blog sized version of it. Any reason not to ? Emmanuel On Fri 2014-09-12 14:41, Pierre Sutra wrote: > Hello, > > In the context of the LEADS project, we recently wrote a paper |1] > regarding data versioning in key-value stores, and using Infinispan as a > basis to explore various implementations. It will be presented at the > IEEE SRDS'14 conference this October [2]. We hope that it might interest > you. Do not hesitate to address us comments and/or questions. > > Regards, > Pierre > > [1] http://tinyurl.com/srds14versioning > [2] www-nishio.ist.osaka-u.ac.jp/conf/srds2014/ > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mohan.dhawan at gmail.com Sat Sep 13 01:56:58 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Sat, 13 Sep 2014 11:26:58 +0530 Subject: [infinispan-dev] origin of cache events in Infinispan Message-ID: <5413DCAA.70505@gmail.com> Hi All, Apologies for posting to the dev-list, but no one on the support forum replied. :( How does Infinispan determine the origin of the cache events ? 
Specifically, when a CacheEntryModified or other notifications are thrown, then how does Infinispan compute the origin of the event ? In other words, if one uses ctx.getOrigin() within an interceptor, how is the origin calculated ? Is is determined using TCP connections at the receiver node or is it passed along with the message from the sender ? In case it is the latter, then it is possible for a node to masquerade as another node. Also, is event.getGlobalTransaction().getAddress() equivalent to ctx.getOrigin() ? Thanks for your help. Regards, mohan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: OpenPGP digital signature Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140913/3e9cb146/attachment.bin From ttarrant at redhat.com Mon Sep 15 03:59:23 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 15 Sep 2014 09:59:23 +0200 Subject: [infinispan-dev] origin of cache events in Infinispan In-Reply-To: <5413DCAA.70505@gmail.com> References: <5413DCAA.70505@gmail.com> Message-ID: <54169C5B.1080307@redhat.com> Hi Mowan, I replied on the forum at https://developer.jboss.org/thread/248783 Also: you posted your request on mid-day Friday and you expected a quick reply with a weekend in the middle. We do our best to help our users, but we also have our own private lives which do not involve Infinispan :) Tristan On 13/09/14 07:56, Mohan Dhawan wrote: > Hi All, > > Apologies for posting to the dev-list, but no one on the support forum > replied. :( > > How does Infinispan determine the origin of the cache events ? > Specifically, when a CacheEntryModified or other notifications are > thrown, then how does Infinispan compute the origin of the event ? > > In other words, if one uses ctx.getOrigin() within an interceptor, how > is the origin calculated ? Is is determined using TCP connections at the > receiver node or is it passed along with the message from the sender ? > In case it is the latter, then it is possible for a node to masquerade > as another node. > > Also, is event.getGlobalTransaction().getAddress() equivalent to > ctx.getOrigin() ? > > Thanks for your help. > > Regards, > mohan > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mohan.dhawan at gmail.com Mon Sep 15 04:26:49 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Mon, 15 Sep 2014 13:56:49 +0530 Subject: [infinispan-dev] origin of cache events in Infinispan In-Reply-To: <54169C5B.1080307@redhat.com> References: <5413DCAA.70505@gmail.com> <54169C5B.1080307@redhat.com> Message-ID: <5416A2C9.8070203@gmail.com> Hi Tristan, Thanks for the prompt reply. I apologize for the hasty post on the dev-list. :( . Regards, mohan On Monday 15 September 2014 01:29 PM, Tristan Tarrant wrote: > Hi Mowan, > > I replied on the forum at https://developer.jboss.org/thread/248783 > > Also: you posted your request on mid-day Friday and you expected a quick > reply with a weekend in the middle. We do our best to help our users, > but we also have our own private lives which do not involve Infinispan :) > > Tristan > > > On 13/09/14 07:56, Mohan Dhawan wrote: >> Hi All, >> >> Apologies for posting to the dev-list, but no one on the support forum >> replied. :( >> >> How does Infinispan determine the origin of the cache events ? 
>> Specifically, when a CacheEntryModified or other notifications are >> thrown, then how does Infinispan compute the origin of the event ? >> >> In other words, if one uses ctx.getOrigin() within an interceptor, how >> is the origin calculated ? Is is determined using TCP connections at the >> receiver node or is it passed along with the message from the sender ? >> In case it is the latter, then it is possible for a node to masquerade >> as another node. >> >> Also, is event.getGlobalTransaction().getAddress() equivalent to >> ctx.getOrigin() ? >> >> Thanks for your help. >> >> Regards, >> mohan >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: OpenPGP digital signature Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140915/0c360a6e/attachment.bin From afield at redhat.com Tue Sep 16 06:04:08 2014 From: afield at redhat.com (Alan Field) Date: Tue, 16 Sep 2014 06:04:08 -0400 (EDT) Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> Message-ID: <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> Hey, I have been looking at the differences between default values in the XSD vs the default values in the configuration builders. [1] I created a list of differences and talked to Dan about his suggestion for the defaults. The numbers in parentheses are Dan's suggestions, but he also asked me to post here to get a wider set of opinions on these values. This list is based on the code used in infinispan-core, so I still need to go through the server code to check the default values there. 1) For locking, the code has concurrency level set to 32, and the XSD has 1000 (32) 2) For eviction: a) the code has max entries set to -1, and the XSD has 10000 (-1) b) the code has interval set to 60000, and the XSD has 5000 (60000) 3) For async configuration: a) the code has queue size set to 1000, and the XSD has 0 (0) b) the code has queue flush interval set to 5000, and the XSD has 10 (10) c) the code has remote timeout set to 15000, and the XSD has 17500 (15000) 4) For hash, the code has number of segments set to 60, and the XSD has 80 (60) 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has 60000 (60000) Please let me know if you have any opinions on these default values, and also if you have any ideas for avoiding these differences in the future. It seems like there are two possibilities at this point: 1) Generating the XSD from the source code 2) Creating a test case that parses the XSD, creates a cache, and verifies the default values against the parsed values 3) ??? 
Thanks, Alan [1] https://issues.jboss.org/browse/ISPN-4645 From ttarrant at redhat.com Tue Sep 16 06:11:50 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 16 Sep 2014 12:11:50 +0200 Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> Message-ID: <54180CE6.90908@redhat.com> On 16/09/14 12:04, Alan Field wrote: > Hey, > > I have been looking at the differences between default values in the XSD vs the default values in the configuration builders. [1] I created a list of differences and talked to Dan about his suggestion for the defaults. The numbers in parentheses are Dan's suggestions, but he also asked me to post here to get a wider set of opinions on these values. This list is based on the code used in infinispan-core, so I still need to go through the server code to check the default values there. > > 1) For locking, the code has concurrency level set to 32, and the XSD has 1000 (32) > 2) For eviction: > a) the code has max entries set to -1, and the XSD has 10000 (-1) > b) the code has interval set to 60000, and the XSD has 5000 (60000) > 3) For async configuration: > a) the code has queue size set to 1000, and the XSD has 0 (0) > b) the code has queue flush interval set to 5000, and the XSD has 10 (10) > c) the code has remote timeout set to 15000, and the XSD has 17500 (15000) > 4) For hash, the code has number of segments set to 60, and the XSD has 80 (60) > 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has 60000 (60000) > > Please let me know if you have any opinions on these default values, and also if you have any ideas for avoiding these differences in the future. It seems like there are two possibilities at this point: > > 1) Generating the XSD from the source code Impractical without a ton of annotations, since the builder structure is very different from the XSD structure. > 2) Creating a test case that parses the XSD, creates a cache, and verifies the default values against the parsed values Server has a subsystem writer which recreates the configuration from the in-memory model, maybe it's worth adapting that. Tristan From afield at redhat.com Tue Sep 16 07:06:16 2014 From: afield at redhat.com (Alan Field) Date: Tue, 16 Sep 2014 07:06:16 -0400 (EDT) Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <54180CE6.90908@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> <54180CE6.90908@redhat.com> Message-ID: <1419083869.38812236.1410865576436.JavaMail.zimbra@redhat.com> Hey Tristan, ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" > Cc: "Dan Berindei" > Sent: Tuesday, September 16, 2014 12:11:50 PM > Subject: Re: [infinispan-dev] Differences between default values in the XSD and the code...Part One > > On 16/09/14 12:04, Alan Field wrote: > > Hey, > > > > I have been looking at the differences between default values in the XSD vs > > the default values in the configuration builders. 
[1] I created a list of > > differences and talked to Dan about his suggestion for the defaults. The > > numbers in parentheses are Dan's suggestions, but he also asked me to post > > here to get a wider set of opinions on these values. This list is based on > > the code used in infinispan-core, so I still need to go through the server > > code to check the default values there. > > > > 1) For locking, the code has concurrency level set to 32, and the XSD has > > 1000 (32) > > 2) For eviction: > > a) the code has max entries set to -1, and the XSD has 10000 (-1) > > b) the code has interval set to 60000, and the XSD has 5000 (60000) > > 3) For async configuration: > > a) the code has queue size set to 1000, and the XSD has 0 (0) > > b) the code has queue flush interval set to 5000, and the XSD has 10 > > (10) > > c) the code has remote timeout set to 15000, and the XSD has 17500 > > (15000) > > 4) For hash, the code has number of segments set to 60, and the XSD has 80 > > (60) > > 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has > > 60000 (60000) > > > > Please let me know if you have any opinions on these default values, and > > also if you have any ideas for avoiding these differences in the future. > > It seems like there are two possibilities at this point: > > > > 1) Generating the XSD from the source code > Impractical without a ton of annotations, since the builder structure is > very different from the XSD structure. I think it would also require a lot of renaming variables in the code to match the names in XSD. > > 2) Creating a test case that parses the XSD, creates a cache, and verifies > > the default values against the parsed values > Server has a subsystem writer which recreates the configuration from the > in-memory model, maybe it's worth adapting that. This sounds interesting. Can you point me to this code? Thanks, Alan > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From mmarkus at redhat.com Tue Sep 16 08:21:42 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 16 Sep 2014 15:21:42 +0300 Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <54180CE6.90908@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> <54180CE6.90908@redhat.com> Message-ID: <586E6531-A5E5-46E8-B126-4DFF1105143C@redhat.com> On Sep 16, 2014, at 13:11, Tristan Tarrant wrote: >> Hey, >> >> I have been looking at the differences between default values in the XSD vs the default values in the configuration builders. [1] I created a list of differences and talked to Dan about his suggestion for the defaults. The numbers in parentheses are Dan's suggestions, but he also asked me to post here to get a wider set of opinions on these values. This list is based on the code used in infinispan-core, so I still need to go through the server code to check the default values there. 
>> >> 1) For locking, the code has concurrency level set to 32, and the XSD has 1000 (32) >> 2) For eviction: >> a) the code has max entries set to -1, and the XSD has 10000 (-1) >> b) the code has interval set to 60000, and the XSD has 5000 (60000) >> 3) For async configuration: >> a) the code has queue size set to 1000, and the XSD has 0 (0) >> b) the code has queue flush interval set to 5000, and the XSD has 10 (10) >> c) the code has remote timeout set to 15000, and the XSD has 17500 (15000) >> 4) For hash, the code has number of segments set to 60, and the XSD has 80 (60) >> 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has 60000 (60000) >> >> Please let me know if you have any opinions on these default values, and also if you have any ideas for avoiding these differences in the future. It seems like there are two possibilities at this point: >> >> 1) Generating the XSD from the source code > Impractical without a ton of annotations, since the builder structure is > very different from the XSD structure. In past, schema used to be generated from annotations on the configuration objects. I don't know why we stopped doing that, though - Vladimir might comment more. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From ttarrant at redhat.com Tue Sep 16 10:35:18 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 16 Sep 2014 16:35:18 +0200 Subject: [infinispan-dev] Distribution package restructuring Message-ID: <54184AA6.9050300@redhat.com> Hi all, now that the uberjars have been folded into 7.0, we really need to restructure our zip distributions to accommodate this. First, the naming. We currently have -bin, -all and -src distributions. I would like to rename the "-bin" distribution to "-minimal" which would only include: - infinispan-embedded - infinispan-embedded-query - Javadocs and XSDs for the above - Example configurations - Demos for any of the above Then we would have an -all distribution which would include - everything from "-minimal" - additional cachestores (leveldb, remote, rest) - extra modules (cdi, jcache, tree, spring) - embedded CLI - RHQ plugin I'd also filter the dependencies, e.g. why package the spring deps when Spring users will be using theirs anyway. The following is an example layout: infinispan-X.Y.Z.Final-[minimal|all] - infinispan-embedded.jar - infinispan-embedded-query.jar - README.txt - README-modules.txt + bin - functions.sh - ispn-cli.sh - ispn-cli.bat + config - distributed-udp.xml - ... + schema - infinispan-config-7.0.xsd - infinispan-cachestore-jdbc-config-7.0.xsd - infinispan-cachestore-jpa-config-7.0.xsd - infinispan-cachestore-leveldb-config-7.0.xsd (all) - infinispan-cachestore-remote-config-7.0.xsd (all) - infinispan-cachestore-rest-config-7.0.xsd (all) + demos + ... + doc + api - ... + licenses - ... + management + rhq-plugin - infinispan-rhq-plugin.jar + modules (all) + cli - infinispan-cli-interpreter.jar + jcache - infinispan-cdi.jar - infinispan-jcache.jar - cache-api.jar + persistence + leveldb + remote + rest + spring - infinispan-spring.jar + tree - infinispan-tree.jar From anistor at redhat.com Tue Sep 16 15:19:02 2014 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 16 Sep 2014 22:19:02 +0300 Subject: [infinispan-dev] Infinispan 7.0.0.Beta2 is available! Message-ID: <54188D26.2040408@redhat.com> Dear Infinispan community, We are proud to announce the second beta release for Infinispan 7.0.0. 
More info at http://blog.infinispan.org/2014/09/infinispan-700beta2-is-out.html Thanks to everyone for their involvement and contributions! From sanne at infinispan.org Tue Sep 16 19:44:10 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 17 Sep 2014 00:44:10 +0100 Subject: [infinispan-dev] JBoss Modules: Unable to upgrade Hibernate Search to Infinispan 7.0.0.Beta2 Message-ID: Hi all, during bootstrap of tests of the Hibernate Search modules on WildFly 8.1 (via Arquillian), when using Infinispan 7.0.0.Beta1 everything works fine. When upgrading to latest Beta2 - and no other changes - I get: Caused by: java.lang.IllegalAccessError: tried to access class org.hibernate.search.util.impl.ConcurrentReferenceHashMap from class org.hibernate.search.util.impl.Maps at org.hibernate.search.util.impl.Maps.createIdentityWeakKeyConcurrentMap(Maps.java:39) [hibernate-search-engine-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] at org.hibernate.search.event.impl.FullTextIndexEventListener.(FullTextIndexEventListener.java:81) [hibernate-search-orm-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] The code in Maps.java:39 is simply invoking the constructor of the ConcurrentReferenceHashMap, which is public and located in the same jar, in the same package. It seems that the problem is that the infinispan module is now depending on the infinispan-query module, which is depending on the hibernate-search module distributed by the Infinispan project. In other words, I'm having a duplicate of the Hibernate Search jars on classpath, specifically an older version of what I'm aiming to test. Ideas? A workaround I could apply is to not use the modules published by the Infinispan project and assemble my own modules, removing infinispan-query and all other stuff I don't need, but I hope for a better solution. Ideally like Infinispan uses slot "ispn-7.0", which we download and use, I think Infinispan should depend (and download) an Hibernate Search specific slot, rather then re-bundling a specific micro version without our permission :-P Modules released by Hibernate Search are currently released using a slot which matches exactly the release version (so slot="5.0.0-SNAPSHOT" as built in this test), but I'd be happy to change that to say "5.0". Could we try that please? -- Sanne From mmarkus at redhat.com Wed Sep 17 05:00:32 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 17 Sep 2014 12:00:32 +0300 Subject: [infinispan-dev] Distribution package restructuring In-Reply-To: <54184AA6.9050300@redhat.com> References: <54184AA6.9050300@redhat.com> Message-ID: <390BB0A5-6B63-440A-A3FC-27A6E9AAF224@redhat.com> On Sep 16, 2014, at 17:35, Tristan Tarrant wrote: > Hi all, > > now that the uberjars have been folded into 7.0, we really need to > restructure our zip distributions to accommodate this. > > First, the naming. We currently have -bin, -all and -src distributions. > > I would like to rename the "-bin" distribution to "-minimal" +1, we already use "Minimal" for the "-bin" on the website: http://infinispan.org/download/ > which would > only include: > > - infinispan-embedded > - infinispan-embedded-query > - Javadocs and XSDs for the above > - Example configurations > - Demos for any of the above > > Then we would have an -all distribution which would include > > - everything from "-minimal" > - additional cachestores (leveldb, remote, rest) > - extra modules (cdi, jcache, tree, spring) > - embedded CLI > - RHQ plugin > > I'd also filter the dependencies, e.g. 
why package the spring deps when > Spring users will be using theirs anyway. +1 > > The following is an example layout: > > infinispan-X.Y.Z.Final-[minimal|all] > - infinispan-embedded.jar > - infinispan-embedded-query.jar > - README.txt > - README-modules.txt > + bin > - functions.sh > - ispn-cli.sh > - ispn-cli.bat > + config > - distributed-udp.xml > - ... > + schema not totally sure about it, moving schema one level up would increase its visibility. > - infinispan-config-7.0.xsd > - infinispan-cachestore-jdbc-config-7.0.xsd > - infinispan-cachestore-jpa-config-7.0.xsd > - infinispan-cachestore-leveldb-config-7.0.xsd (all) > - infinispan-cachestore-remote-config-7.0.xsd (all) > - infinispan-cachestore-rest-config-7.0.xsd (all) > + demos > + ... > + doc > + api > - ... > + licenses > - ... > + management > + rhq-plugin > - infinispan-rhq-plugin.jar > + modules (all) > + cli > - infinispan-cli-interpreter.jar > + jcache > - infinispan-cdi.jar > - infinispan-jcache.jar > - cache-api.jar > + persistence > + leveldb > + remote > + rest > + spring > - infinispan-spring.jar > + tree > - infinispan-tree.jar > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From radhamohanmaheshwari at gmail.com Wed Sep 17 05:55:22 2014 From: radhamohanmaheshwari at gmail.com (Radha Mohan Maheshwari) Date: Wed, 17 Sep 2014 15:25:22 +0530 Subject: [infinispan-dev] Configure named cache in remote infinispan 6.0.2 cluster Message-ID: Hi all, how to pass custom named cache config to infinispan 6.0.2 server as -c option is only taking server configuration not cache config getting this exception while passing custom cache config xml 15:23:26,978 ERROR [org.jboss.as.server] (Controller Boot Thread) JBAS015956: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: JBAS014676: Failed to parse configuration at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:141) [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] at org.jboss.as.server.ServerService.boot(ServerService.java:308) [jboss-as-server-7.2.0.Final.jar:7.2.0.Final] at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:188) [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51] Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[2,1] Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:108) [staxmapper-1.1.0.Final.jar:1.1.0.Final] at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final] at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:133) [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] ... 3 more PFA custom config file -- Radha Mohan Maheshwari -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140917/ae86359d/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: cache.xml Type: text/xml Size: 1471 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140917/ae86359d/attachment-0001.xml From ttarrant at redhat.com Wed Sep 17 06:28:07 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 17 Sep 2014 12:28:07 +0200 Subject: [infinispan-dev] Configure named cache in remote infinispan 6.0.2 cluster In-Reply-To: References: Message-ID: <54196237.2060906@redhat.com> Hi Radha, this is a mailing list devoted to Infinispan development. Questions like your should be asked on the user forum [1] Unfortunately there is no solution for you: embedded and server use different configuration in Infinispan 6.x. We have rectified this in 7.x and now everything uses server-style configuration (albeit with minor differences in root elements and such). Tristan [1] https://developer.jboss.org/en/infinispan On 17/09/14 11:55, Radha Mohan Maheshwari wrote: > Hi all, > > how to pass custom named cache config to infinispan 6.0.2 server > as -c option is only taking server configuration not cache config > > > getting this exception while passing custom cache config xml > > 15:23:26,978 ERROR [org.jboss.as.server] (Controller Boot Thread) > JBAS015956: Caught exception during boot: > org.jboss.as.controller.persistence.ConfigurationPersistenceException: > JBAS014676: Failed to parse configuration > at > org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:141) > [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] > at > org.jboss.as.server.ServerService.boot(ServerService.java:308) > [jboss-as-server-7.2.0.Final.jar:7.2.0.Final] > at > org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:188) > [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] > at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51] > Caused by: javax.xml.stream.XMLStreamException: ParseError at > [row,col]:[2,1] > Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > at > org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:108) > [staxmapper-1.1.0.Final.jar:1.1.0.Final] > at > org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final] > at > org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:133) > [jboss-as-controller-7.2.0.Final.jar:7.2.0.Final] > ... 3 more > > > PFA custom config file > > > > -- > Radha Mohan Maheshwari > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Wed Sep 17 12:08:07 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 17 Sep 2014 17:08:07 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands Message-ID: <5419B1E7.9000408@infinispan.org> Hi, I've just wrote on the wiki a new algorithm to better handle the remote commands. You can find it in [1]. If you have questions, suggestion or just want to discuss some aspect, please do in thread. I'll update the wiki page based on this discussion Thanks. Cheers, Pedro [1]https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...) 
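(For readers following this thread without the wiki page: the rough shape of the idea being discussed is sketched below. Commands that may block, for example commands that acquire locks on the remote node or commands shipped with a topology id the local node has not installed yet, are queued in the remote executor, while everything else is served directly on the thread that delivered it. All class and method names here are illustrative assumptions, not the actual Infinispan classes or the algorithm as written up on the wiki.)

import java.util.concurrent.Executor;

// Illustrative sketch only: run non-blocking remote commands directly on the
// delivering (OOB) thread and queue potentially blocking ones in the remote
// executor, so JGroups threads are never parked waiting for locks or for a
// topology that has not been installed locally yet.
public class RemoteCommandDispatchSketch {

   interface RemoteCommand {
      void perform();
      int topologyId();
   }

   // hypothetical marker for commands that acquire locks on the remote node
   interface LockingCommand extends RemoteCommand {
   }

   private final Executor remoteExecutor;      // bounded pool plus queue
   private volatile int installedTopologyId;

   public RemoteCommandDispatchSketch(Executor remoteExecutor) {
      this.remoteExecutor = remoteExecutor;
   }

   public void onRemoteCommand(final RemoteCommand command) {
      boolean acquiresLocks = command instanceof LockingCommand;
      boolean aheadOfLocalTopology = command.topologyId() > installedTopologyId;
      if (acquiresLocks || aheadOfLocalTopology) {
         // may block waiting for locks or for the newer topology: queue it
         remoteExecutor.execute(new Runnable() {
            @Override
            public void run() {
               command.perform();
            }
         });
      } else {
         // cheap and non-blocking: serve it on the thread that delivered it
         command.perform();
      }
   }

   public void topologyInstalled(int topologyId) {
      installedTopologyId = topologyId;
   }
}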
From pedro at infinispan.org Wed Sep 17 12:17:28 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 17 Sep 2014 17:17:28 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <5419B1E7.9000408@infinispan.org> References: <5419B1E7.9000408@infinispan.org> Message-ID: <5419B418.5030506@infinispan.org> new link: https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > Hi, > > I've just wrote on the wiki a new algorithm to better handle the remote > commands. You can find it in [1]. > > If you have questions, suggestion or just want to discuss some aspect, > please do in thread. I'll update the wiki page based on this discussion > > Thanks. > > Cheers, > Pedro > > [1]https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...) > From gustavonalle at gmail.com Thu Sep 18 04:00:58 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Thu, 18 Sep 2014 09:00:58 +0100 Subject: [infinispan-dev] JBoss Modules: Unable to upgrade Hibernate Search to Infinispan 7.0.0.Beta2 In-Reply-To: References: Message-ID: Using "5.0" will likely solve the issue for now, but what happens if search starts using slot "6.0" because of a version change? The test would suddenly fail again, because ispn would still drag "5.0" into the classpath Gustavo On Wed, Sep 17, 2014 at 12:44 AM, Sanne Grinovero wrote: > Hi all, > during bootstrap of tests of the Hibernate Search modules on WildFly > 8.1 (via Arquillian), when using Infinispan 7.0.0.Beta1 everything > works fine. > > When upgrading to latest Beta2 - and no other changes - I get: > > Caused by: java.lang.IllegalAccessError: tried to access class > org.hibernate.search.util.impl.ConcurrentReferenceHashMap from class > org.hibernate.search.util.impl.Maps > at > org.hibernate.search.util.impl.Maps.createIdentityWeakKeyConcurrentMap(Maps.java:39) > [hibernate-search-engine-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] > at > org.hibernate.search.event.impl.FullTextIndexEventListener.(FullTextIndexEventListener.java:81) > [hibernate-search-orm-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] > > The code in Maps.java:39 is simply invoking the constructor of the > ConcurrentReferenceHashMap, which is public and located in the same > jar, in the same package. > > It seems that the problem is that the infinispan module is now > depending on the infinispan-query module, which is depending on the > hibernate-search module distributed by the Infinispan project. > In other words, I'm having a duplicate of the Hibernate Search jars on > classpath, specifically an older version of what I'm aiming to test. > > Ideas? > > A workaround I could apply is to not use the modules published by the > Infinispan project and assemble my own modules, removing > infinispan-query and all other stuff I don't need, but I hope for a > better solution. > > Ideally like Infinispan uses slot "ispn-7.0", which we download and > use, I think Infinispan should depend (and download) an Hibernate > Search specific slot, rather then re-bundling a specific micro version > without our permission :-P > Modules released by Hibernate Search are currently released using a > slot which matches exactly the release version (so > slot="5.0.0-SNAPSHOT" as built in this test), but I'd be happy to > change that to say "5.0". > > Could we try that please? 
> > -- Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140918/ec2ccbd6/attachment.html From galder at redhat.com Thu Sep 18 04:11:28 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 18 Sep 2014 10:11:28 +0200 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: <541A78BD.3030709@redhat.com> References: <541A78BD.3030709@redhat.com> Message-ID: Radim, adding -dev list since others might have the same qs: @Will, some important information below: On 18 Sep 2014, at 08:16, Radim Vansa wrote: > Hi Galder, > > re: to your last blogpost $SUBJ: I miss two information there: > > 1) You say that the filter/converter factories are deployed as JAR - do you need to update infinispan modules' dependencies on the server, or can you do that in any other way (via configuration)? There?s nothing to be updated. The jars are deployed in the deployments/ folder or via CLI or whatever other standard deployment method is used. We have purpousefully built a deployment processor that processes these jars and does all the hard work for the user. For more info, see the filter/converter tests in the Infinispan Server integration testsuite. > This is more general question (I've ran into that with compatibility mode as well), could you provide a link how custom JARs that Infinispan should use are deployed? There?s no generic solution at the moment. The current solution is limited to filter/converter jars for remote eventing because we depend on service definitions in the jar to find the SPIs that we need to plugin to the Infinispan Server. > 2) Let's say that I want to use the converter to produce diffs, therefore the converter needs the previous (overwritten) value as well. Would injecting the cache through CDI work, or is the cache already updated when the converter runs? Can this be reliable at all? Initially when I started working on remote events stuff, I considered the need of previous value in both converter and filter interfaces. I think they can be useful, but here I?m relying on Will?s core filter/converter instances to provide them to the Hot Rod remote events and at the moment they don't. @Will, are you considering adding this? Since it affects API, it might be a good time to do this now. In terms of how to workaround it, a relatively heavy weight solution would be for the converter to track key/values as it gets events and them compare event contents with its cache. Values should be refs, so should not take too much space? I doubt injecting a CDI cache would work. Cheers, > > Thanks > > Radim > > -- > Radim Vansa > JBoss DataGrid QA > -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Thu Sep 18 04:24:04 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 18 Sep 2014 09:24:04 +0100 Subject: [infinispan-dev] JBoss Modules: Unable to upgrade Hibernate Search to Infinispan 7.0.0.Beta2 In-Reply-To: References: Message-ID: On 18 September 2014 09:00, Gustavo Fernandes wrote: > Using "5.0" will likely solve the issue for now, but what happens if search > starts using slot "6.0" because of a version change? 
> The test would suddenly fail again, because ispn would still drag "5.0" into > the classpath I don't expect Search "6.0" to be drop-in compatible with "5.0", so if someone is using an Infinispan version which requires "5.0" these shouldn't be mixed up. Sanne > > On Wed, Sep 17, 2014 at 12:44 AM, Sanne Grinovero > wrote: >> >> Hi all, >> during bootstrap of tests of the Hibernate Search modules on WildFly >> 8.1 (via Arquillian), when using Infinispan 7.0.0.Beta1 everything >> works fine. >> >> When upgrading to latest Beta2 - and no other changes - I get: >> >> Caused by: java.lang.IllegalAccessError: tried to access class >> org.hibernate.search.util.impl.ConcurrentReferenceHashMap from class >> org.hibernate.search.util.impl.Maps >> at >> org.hibernate.search.util.impl.Maps.createIdentityWeakKeyConcurrentMap(Maps.java:39) >> [hibernate-search-engine-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] >> at >> org.hibernate.search.event.impl.FullTextIndexEventListener.(FullTextIndexEventListener.java:81) >> [hibernate-search-orm-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] >> >> The code in Maps.java:39 is simply invoking the constructor of the >> ConcurrentReferenceHashMap, which is public and located in the same >> jar, in the same package. >> >> It seems that the problem is that the infinispan module is now >> depending on the infinispan-query module, which is depending on the >> hibernate-search module distributed by the Infinispan project. >> In other words, I'm having a duplicate of the Hibernate Search jars on >> classpath, specifically an older version of what I'm aiming to test. >> >> Ideas? >> >> A workaround I could apply is to not use the modules published by the >> Infinispan project and assemble my own modules, removing >> infinispan-query and all other stuff I don't need, but I hope for a >> better solution. >> >> Ideally like Infinispan uses slot "ispn-7.0", which we download and >> use, I think Infinispan should depend (and download) an Hibernate >> Search specific slot, rather then re-bundling a specific micro version >> without our permission :-P >> Modules released by Hibernate Search are currently released using a >> slot which matches exactly the release version (so >> slot="5.0.0-SNAPSHOT" as built in this test), but I'd be happy to >> change that to say "5.0". >> >> Could we try that please? >> >> -- Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu Sep 18 07:03:04 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 18 Sep 2014 14:03:04 +0300 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <5419B418.5030506@infinispan.org> References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> Message-ID: Thanks Pedro, this looks great. However, I don't think it's ok to treat CommitCommands/Pessimistic PrepareCommands as RemoteLockCommands just because they may send L1 invalidation commands. It's true that those commands will block, but there's no need to wait for any other command before doing the L1 invalidation. In fact, the non-tx writes on backup owners, which you consider to be non-blocking, can also send L1 invalidation commands (see L1NonTxInterceptor.invalidateL1). 
On the other hand, one of the good things that the remote executor did was to allow queueing lots of commands with a higher topology id, when one of the nodes receives the new topology much later than the others. We still have to consider each TopologyAffectedCommand as potentially blocking and put it through the remote executor. And InvalidateL1Commands are also TopologyAffectedCommands, so there's still a potential for deadlock when L1 is enabled and we have maxThreads write commands blocked sending L1 invalidations and those L1 invalidation commands are stuck in the remote executor's queue on another node. And with (very) unlucky timing the remote executor might not even get to create maxThreads threads before the deadlock appears. I wonder if we could write a custom executor that checks what the first task in the queue is every second or so, and creates a bunch of new threads if the first task in the queue hasn't changed. You're right about the remote executor getting full as well, we're lacking any feedback mechanism to tell the sender to slow down, except for blocking the OOB thread. I wonder if we could tell JGroups somehow to discard the message from inside MessageDispatcher.handle (e.g. throw a DiscardMessageException), so the sender has to retransmit it and we don't block the OOB thread. That should allow us to set a size limit on the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT? Cheers Dan On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo wrote: > new link: > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > > Hi, > > > > I've just wrote on the wiki a new algorithm to better handle the remote > > commands. You can find it in [1]. > > > > If you have questions, suggestion or just want to discuss some aspect, > > please do in thread. I'll update the wiki page based on this discussion > > > > Thanks. > > > > Cheers, > > Pedro > > > > [1] > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress.. > .) > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140918/f7fcfe79/attachment-0001.html From bban at redhat.com Thu Sep 18 08:09:17 2014 From: bban at redhat.com (Bela Ban) Date: Thu, 18 Sep 2014 14:09:17 +0200 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> Message-ID: <541ACB6D.3040901@redhat.com> On 18/09/14 13:03, Dan Berindei wrote: > Thanks Pedro, this looks great. > > However, I don't think it's ok to treat CommitCommands/Pessimistic > PrepareCommands as RemoteLockCommands just because they may send L1 > invalidation commands. It's true that those commands will block, but > there's no need to wait for any other command before doing the L1 > invalidation. In fact, the non-tx writes on backup owners, which you > consider to be non-blocking, can also send L1 invalidation commands (see > L1NonTxInterceptor.invalidateL1). > > On the other hand, one of the good things that the remote executor did > was to allow queueing lots of commands with a higher topology id, when > one of the nodes receives the new topology much later than the others. 
> We still have to consider each TopologyAffectedCommand as potentially > blocking and put it through the remote executor. > > And InvalidateL1Commands are also TopologyAffectedCommands, so there's > still a potential for deadlock when L1 is enabled and we have maxThreads > write commands blocked sending L1 invalidations and those L1 > invalidation commands are stuck in the remote executor's queue on > another node. And with (very) unlucky timing the remote executor might > not even get to create maxThreads threads before the deadlock appears. I > wonder if we could write a custom executor that checks what the first > task in the queue is every second or so, and creates a bunch of new > threads if the first task in the queue hasn't changed. > > You're right about the remote executor getting full as well, we're > lacking any feedback mechanism to tell the sender to slow down, except > for blocking the OOB thread. JGroups sends credits back to the sender *after* the message has been delivered into the application. If the application is slow in processing the messages, or blocks for some time, then the sender will not receive enough credits and thus also slow down, or even block. > I wonder if we could tell JGroups somehow > to discard the message from inside MessageDispatcher.handle (e.g. throw > a DiscardMessageException), so the sender has to retransmit it At this point, JGroups considers the message *delivered* (as it has passed the UNICAST or NAKACK protocols), and it won't get resent. You cannot discard it either, as this will be a message loss. However, if you can tolerate loss, all is fine. E.g. if you discard a topo message with a lower ID, I don't think any harm is done in Infinispan. (?). To discard or not, is Infinispan's decision. The other thing is to block, but this can have an impact back into the JGroups thread pools. > and we > don't block the OOB thread. That should allow us to set a size limit on > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT? > > Cheers > Dan > > > On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo > wrote: > > new link: > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > > Hi, > > > > I've just wrote on the wiki a new algorithm to better handle the > remote > > commands. You can find it in [1]. > > > > If you have questions, suggestion or just want to discuss some > aspect, > > please do in thread. I'll update the wiki page based on this > discussion > > > > Thanks. > > > > Cheers, > > Pedro > > > > > [1]https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...) > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From galder at redhat.com Thu Sep 18 08:14:53 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Thu, 18 Sep 2014 14:14:53 +0200 Subject: [infinispan-dev] Distribution package restructuring In-Reply-To: <54184AA6.9050300@redhat.com> References: <54184AA6.9050300@redhat.com> Message-ID: That sounds good to me. Thanks Tristan! 
On 16 Sep 2014, at 16:35, Tristan Tarrant wrote: > Hi all, > > now that the uberjars have been folded into 7.0, we really need to > restructure our zip distributions to accommodate this. > > First, the naming. We currently have -bin, -all and -src distributions. > > I would like to rename the "-bin" distribution to "-minimal" which would > only include: > > - infinispan-embedded > - infinispan-embedded-query > - Javadocs and XSDs for the above > - Example configurations > - Demos for any of the above > > Then we would have an -all distribution which would include > > - everything from "-minimal" > - additional cachestores (leveldb, remote, rest) > - extra modules (cdi, jcache, tree, spring) > - embedded CLI > - RHQ plugin > > I'd also filter the dependencies, e.g. why package the spring deps when > Spring users will be using theirs anyway. > > The following is an example layout: > > infinispan-X.Y.Z.Final-[minimal|all] > - infinispan-embedded.jar > - infinispan-embedded-query.jar > - README.txt > - README-modules.txt > + bin > - functions.sh > - ispn-cli.sh > - ispn-cli.bat > + config > - distributed-udp.xml > - ... > + schema > - infinispan-config-7.0.xsd > - infinispan-cachestore-jdbc-config-7.0.xsd > - infinispan-cachestore-jpa-config-7.0.xsd > - infinispan-cachestore-leveldb-config-7.0.xsd (all) > - infinispan-cachestore-remote-config-7.0.xsd (all) > - infinispan-cachestore-rest-config-7.0.xsd (all) > + demos > + ... > + doc > + api > - ... > + licenses > - ... > + management > + rhq-plugin > - infinispan-rhq-plugin.jar > + modules (all) > + cli > - infinispan-cli-interpreter.jar > + jcache > - infinispan-cdi.jar > - infinispan-jcache.jar > - cache-api.jar > + persistence > + leveldb > + remote > + rest > + spring > - infinispan-spring.jar > + tree > - infinispan-tree.jar > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From pedro at infinispan.org Thu Sep 18 08:29:36 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 18 Sep 2014 13:29:36 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> Message-ID: <541AD030.6030200@infinispan.org> On 09/18/2014 12:03 PM, Dan Berindei wrote: > Thanks Pedro, this looks great. > > However, I don't think it's ok to treat CommitCommands/Pessimistic > PrepareCommands as RemoteLockCommands just because they may send L1 > invalidation commands. It's true that those commands will block, but > there's no need to wait for any other command before doing the L1 > invalidation. In fact, the non-tx writes on backup owners, which you > consider to be non-blocking, can also send L1 invalidation commands (see > L1NonTxInterceptor.invalidateL1). They are not treated as RemoteLockCommands. I just said that they are processed in the remote executor service (need to double check what I wrote in the wiki). Unfortunately, I haven't think about the L1 in that scenario... :( > > On the other hand, one of the good things that the remote executor did > was to allow queueing lots of commands with a higher topology id, when > one of the nodes receives the new topology much later than the others. > We still have to consider each TopologyAffectedCommand as potentially > blocking and put it through the remote executor. 
> > And InvalidateL1Commands are also TopologyAffectedCommands, so there's > still a potential for deadlock when L1 is enabled and we have maxThreads > write commands blocked sending L1 invalidations and those L1 > invalidation commands are stuck in the remote executor's queue on > another node. And with (very) unlucky timing the remote executor might > not even get to create maxThreads threads before the deadlock appears. I > wonder if we could write a custom executor that checks what the first > task in the queue is every second or so, and creates a bunch of new > threads if the first task in the queue hasn't changed. I need to think a little more about it. So, a single put can originate: 1 RPC to the primary owner (to lock) X RPC to invalidate L1 from the primary owner R RPC for the primary owner to the backups owner Y RPC to invalidate L1 from the backup owner is this correct? any suggestions are welcome. > > You're right about the remote executor getting full as well, we're > lacking any feedback mechanism to tell the sender to slow down, except > for blocking the OOB thread. I wonder if we could tell JGroups somehow > to discard the message from inside MessageDispatcher.handle (e.g. throw > a DiscardMessageException), so the sender has to retransmit it and we > don't block the OOB thread. That should allow us to set a size limit on > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT? Even if we have a way to tell the JGroups to resend the message, we have no idea if the executor service is full or not. We allow a user to inject their own implementation of it. > > Cheers > Dan > > > On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo > wrote: > > new link: > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > > Hi, > > > > I've just wrote on the wiki a new algorithm to better handle the > remote > > commands. You can find it in [1]. > > > > If you have questions, suggestion or just want to discuss some > aspect, > > please do in thread. I'll update the wiki page based on this > discussion > > > > Thanks. > > > > Cheers, > > Pedro > > > > > [1]https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...) > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Thu Sep 18 09:28:39 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 18 Sep 2014 16:28:39 +0300 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <541ACB6D.3040901@redhat.com> References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <541ACB6D.3040901@redhat.com> Message-ID: On Thu, Sep 18, 2014 at 3:09 PM, Bela Ban wrote: > > > On 18/09/14 13:03, Dan Berindei wrote: > > Thanks Pedro, this looks great. > > > > However, I don't think it's ok to treat CommitCommands/Pessimistic > > PrepareCommands as RemoteLockCommands just because they may send L1 > > invalidation commands. It's true that those commands will block, but > > there's no need to wait for any other command before doing the L1 > > invalidation. 
In fact, the non-tx writes on backup owners, which you > > consider to be non-blocking, can also send L1 invalidation commands (see > > L1NonTxInterceptor.invalidateL1). > > > > On the other hand, one of the good things that the remote executor did > > was to allow queueing lots of commands with a higher topology id, when > > one of the nodes receives the new topology much later than the others. > > We still have to consider each TopologyAffectedCommand as potentially > > blocking and put it through the remote executor. > > > > And InvalidateL1Commands are also TopologyAffectedCommands, so there's > > still a potential for deadlock when L1 is enabled and we have maxThreads > > write commands blocked sending L1 invalidations and those L1 > > invalidation commands are stuck in the remote executor's queue on > > another node. And with (very) unlucky timing the remote executor might > > not even get to create maxThreads threads before the deadlock appears. I > > wonder if we could write a custom executor that checks what the first > > task in the queue is every second or so, and creates a bunch of new > > threads if the first task in the queue hasn't changed. > > > > You're right about the remote executor getting full as well, we're > > lacking any feedback mechanism to tell the sender to slow down, except > > for blocking the OOB thread. > > JGroups sends credits back to the sender *after* the message has been > delivered into the application. If the application is slow in processing > the messages, or blocks for some time, then the sender will not receive > enough credits and thus also slow down, or even block. > > > I wonder if we could tell JGroups somehow > > to discard the message from inside MessageDispatcher.handle (e.g. throw > > a DiscardMessageException), so the sender has to retransmit it > > At this point, JGroups considers the message *delivered* (as it has > passed the UNICAST or NAKACK protocols), and it won't get resent. You > cannot discard it either, as this will be a message loss. However, if > you can tolerate loss, all is fine. E.g. if you discard a topo message > with a lower ID, I don't think any harm is done in Infinispan. (?). To > discard or not, is Infinispan's decision. > The other thing is to block, but this can have an impact back into the > JGroups thread pools. > Right, I was hoping the message would be marked as delivered only after Infinispan finished processing the message (i.e. when up() returns in UNICAST/NAKACK). Perhaps we could delay sending the credits instead? When process a message on our internal thread pool, it would be nice if we could tell JGroups to send credits back only when we really finished processing the message. > > and we > > don't block the OOB thread. That should allow us to set a size limit on > > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, > WDYT? > > > > Cheers > > Dan > > > > > > On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo > > wrote: > > > > new link: > > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > > > Hi, > > > > > > I've just wrote on the wiki a new algorithm to better handle the > > remote > > > commands. You can find it in [1]. > > > > > > If you have questions, suggestion or just want to discuss some > > aspect, > > > please do in thread. I'll update the wiki page based on this > > discussion > > > > > > Thanks. 
> > > > > > Cheers, > > > Pedro > > > > > > > > [1] > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress.. > .) > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org infinispan-dev at lists.jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140918/87765182/attachment-0001.html From emmanuel at hibernate.org Thu Sep 18 12:24:17 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 18 Sep 2014 18:24:17 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners Message-ID: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> Hi all, I have had a good exchange on how someone would use clustered / remote listeners to do custom continuous query features. I have a few questions and requests to make this fully and easily doable ## Value as bytes or as objects Assuming a Hot Rod based usage and protobuf as the serialization layer. What are KeyValueFilter and Converter seeing? I assume today the bytes are unmarshalled and the Java object is provided to these interfaces. In a protobuf based storage, does that mean that the user must create the Java objects out of a protobuf compiler and deploy these classes in the classpath of each server node? Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with lss problems related to the classloader. Thoughts? ## Synced listeners In a transactional clustered listener marked as sync. Does the transaction commits and then waits for the relevant clustered listeners to proceed before returning the hand to the Tx client? Or is there something else going on? ## oldValue and newValue I understand why the oldValue was not provided in the initial work. It requires to send more data across the network and at least double the number of values unmarshalled. But for continuous queries, being able to compare the old and the new value is critical to reduce the number of events sent to the listener. Imagine the following use case. A listener exposes the average age for a certain type of customer. You would implement it the following way. 1. Add a KeyValueFilter that - upon creation, filter out the customers of the wrong type - upon update, keep customers that - *were* of the right time but no longer are - were not of the right type but now now *are* - remains of the right type and whose age has changed - upon deletion, keep customers that *were* of the right type 2. 
Converter In the converter, one could send the whole customer but it would be more efficient to only send the age of the customer as well as wether it is added to or removed from the matching customers - upon creation, you send the customer age and mark it as addition - upon deletion, you send the customer age and mark it as deletion - upon update - if the customer was of the right type but no longer is, send the age as well as a deletion flag - if the customer was not of the right type but now is, send the age as well as an addition flag - if the customer age has changed, send the difference with a modification flag 3. The listener then needs to keep the total sum of all ages as well as the total number of customers of the right type. Based on the sent events, it can adjust these two counters. That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion). If you keep the existing interfaces and their data, the data send and the memory consumed becomes much much bigger. I leave it as an exercise but I think you need to: - send *all* remove and update events regardless of the value (essentially no KeyValueFilter) - in the listener, keep a list of *all* matching keys so that you know if a new event is about a data that was already matching your criteria or not and act accordingly. BTW, you need the old and new value even if your listener returns actual matching results instead of an aggregation. More or less for the same reasons. Continuous query is about the most important use case for remote and clustered listeners and I think we should address it properly and as efficiently as possible. Adding continuous query to Infinispan will then ?simply? be a matter of agreeing on the query syntax and implement the predicates as smartly as possible. With the use case I describe, I think the best approach is to merge the KVF and Converter into a single Listener like interface that is able to send or silence an event payload. But that?s guestimate. Because oldValue / newValue implies an unmarshalling overhead we might want to make it an annotation based flag on the class that is executed on each node (somewhat similar to the settings hosted on @Listener). ## includeCurrentState and very narrow filtering The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. An alternative approach is to first do a query matching the elements the listener is interested in and queue up the events until the query is fully processed. Can a listener access a cache and do a query? Should we offer such option in a more packaged way? For a listener that is only interested in keys whose value city contains Springfield, Virginia, the gain would be massive. ## Remote listener and non Java HR clients Does the API of non Java HR clients support the enlistements of listeners and attach registered keyValueFilter / Converter? Or is that planned? Just curious. 
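(A minimal sketch of the converter step described above, written against a hypothetical combined filter/converter that receives both the old and the new value, i.e. exactly the API extension this mail argues for. Customer, AgeDelta and the Flag enum are made-up types for illustration, not Infinispan classes.)

// Hypothetical sketch: emit only the age delta plus an ADDED/REMOVED/MODIFIED
// flag, and return null to suppress events the listener does not care about.
public class AverageAgeConverterSketch {

   public interface Customer {
      boolean isRightType();
      int getAge();
   }

   public enum Flag { ADDED, REMOVED, MODIFIED }

   public static final class AgeDelta implements java.io.Serializable {
      public final int age;
      public final Flag flag;
      public AgeDelta(int age, Flag flag) { this.age = age; this.flag = flag; }
   }

   // oldValue is null on creation, newValue is null on removal
   public AgeDelta convert(Object key, Customer oldValue, Customer newValue) {
      boolean wasMatching = oldValue != null && oldValue.isRightType();
      boolean isMatching = newValue != null && newValue.isRightType();
      if (wasMatching && !isMatching) {
         // left the matching set
         return new AgeDelta(oldValue.getAge(), Flag.REMOVED);
      }
      if (!wasMatching && isMatching) {
         // entered the matching set
         return new AgeDelta(newValue.getAge(), Flag.ADDED);
      }
      if (wasMatching && isMatching && oldValue.getAge() != newValue.getAge()) {
         // still matching, only the age changed: send the difference
         return new AgeDelta(newValue.getAge() - oldValue.getAge(), Flag.MODIFIED);
      }
      return null; // no event for the listener
   }
}

With such a payload, the listener from step 3 only keeps two counters: it adds the age to its running sum and increments the count on ADDED, subtracts and decrements on REMOVED, and adds the difference to the sum on MODIFIED.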
Emmanuel From dan.berindei at gmail.com Thu Sep 18 12:32:32 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 18 Sep 2014 19:32:32 +0300 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <541AD030.6030200@infinispan.org> References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <541AD030.6030200@infinispan.org> Message-ID: On Thu, Sep 18, 2014 at 3:29 PM, Pedro Ruivo wrote: > > > On 09/18/2014 12:03 PM, Dan Berindei wrote: > > Thanks Pedro, this looks great. > > > > However, I don't think it's ok to treat CommitCommands/Pessimistic > > PrepareCommands as RemoteLockCommands just because they may send L1 > > invalidation commands. It's true that those commands will block, but > > there's no need to wait for any other command before doing the L1 > > invalidation. In fact, the non-tx writes on backup owners, which you > > consider to be non-blocking, can also send L1 invalidation commands (see > > L1NonTxInterceptor.invalidateL1). > > They are not treated as RemoteLockCommands. I just said that they are > processed in the remote executor service (need to double check what I > wrote in the wiki). Unfortunately, I haven't think about the L1 in that > scenario... :( > Ok, sorry I leapt to conclusions :) > > > > > On the other hand, one of the good things that the remote executor did > > was to allow queueing lots of commands with a higher topology id, when > > one of the nodes receives the new topology much later than the others. > > We still have to consider each TopologyAffectedCommand as potentially > > blocking and put it through the remote executor. > > > > And InvalidateL1Commands are also TopologyAffectedCommands, so there's > > still a potential for deadlock when L1 is enabled and we have maxThreads > > write commands blocked sending L1 invalidations and those L1 > > invalidation commands are stuck in the remote executor's queue on > > another node. And with (very) unlucky timing the remote executor might > > not even get to create maxThreads threads before the deadlock appears. I > > wonder if we could write a custom executor that checks what the first > > task in the queue is every second or so, and creates a bunch of new > > threads if the first task in the queue hasn't changed. > > I need to think a little more about it. > > So, a single put can originate: > 1 RPC to the primary owner (to lock) > X RPC to invalidate L1 from the primary owner > R RPC for the primary owner to the backups owner > Y RPC to invalidate L1 from the backup owner > > is this correct? > That is correct when "smart" L1 invalidation is enabled (l1.invalidationThreshold > 0). But it is disabled by default, so it's more like this: 1 RPC to the primary owner 0 or 1 broadcast RPC to invalidate L1 from the primary owner numOwners - 1 from the primary owner to the backup owners 0 or 1 broadcast RPCs from each backup owners In rare circumstances there might be some more L1 invalidations from the L1LastChanceInterceptor. > any suggestions are welcome. > > > > > You're right about the remote executor getting full as well, we're > > lacking any feedback mechanism to tell the sender to slow down, except > > for blocking the OOB thread. I wonder if we could tell JGroups somehow > > to discard the message from inside MessageDispatcher.handle (e.g. throw > > a DiscardMessageException), so the sender has to retransmit it and we > > don't block the OOB thread. 
That should allow us to set a size limit on > > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, > WDYT? > > Even if we have a way to tell the JGroups to resend the message, we have > no idea if the executor service is full or not. We allow a user to > inject their own implementation of it. > We do allow a custom executor implementation, but it's our SPI. So we can require the custom executor to be configured to throw a RejectedExecutionException when the queue is full instead of blocking the caller thread, if it helps us. > > > > > Cheers > > Dan > > > > > > On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo > > wrote: > > > > new link: > > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: > > > Hi, > > > > > > I've just wrote on the wiki a new algorithm to better handle the > > remote > > > commands. You can find it in [1]. > > > > > > If you have questions, suggestion or just want to discuss some > > aspect, > > > please do in thread. I'll update the wiki page based on this > > discussion > > > > > > Thanks. > > > > > > Cheers, > > > Pedro > > > > > > > > [1] > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress.. > .) > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org infinispan-dev at lists.jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140918/4d4cced7/attachment.html From mudokonman at gmail.com Thu Sep 18 13:21:08 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 18 Sep 2014 13:21:08 -0400 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: References: <541A78BD.3030709@redhat.com> Message-ID: On Thu, Sep 18, 2014 at 4:11 AM, Galder Zamarre?o wrote: > Radim, adding -dev list since others might have the same qs: > > @Will, some important information below: > > On 18 Sep 2014, at 08:16, Radim Vansa wrote: > >> Hi Galder, >> >> re: to your last blogpost $SUBJ: I miss two information there: >> >> 1) You say that the filter/converter factories are deployed as JAR - do you need to update infinispan modules' dependencies on the server, or can you do that in any other way (via configuration)? > > There?s nothing to be updated. The jars are deployed in the deployments/ folder or via CLI or whatever other standard deployment method is used. We have purpousefully built a deployment processor that processes these jars and does all the hard work for the user. For more info, see the filter/converter tests in the Infinispan Server integration testsuite. > >> This is more general question (I've ran into that with compatibility mode as well), could you provide a link how custom JARs that Infinispan should use are deployed? > > There?s no generic solution at the moment. 
The current solution is limited to filter/converter jars for remote eventing because we depend on service definitions in the jar to find the SPIs that we need to plugin to the Infinispan Server. > >> 2) Let's say that I want to use the converter to produce diffs, therefore the converter needs the previous (overwritten) value as well. Would injecting the cache through CDI work, or is the cache already updated when the converter runs? Can this be reliable at all? When the notification is raised it has already been committed into the data container so it is not possible to do a get at this point. > > Initially when I started working on remote events stuff, I considered the need of previous value in both converter and filter interfaces. I think they can be useful, but here I?m relying on Will?s core filter/converter instances to provide them to the Hot Rod remote events and at the moment they don't. @Will, are you considering adding this? Since it affects API, it might be a good time to do this now. I actually was talking to Emmanuel about this yesterday for a bit. It seems that we will need to expose the previous value to at least the KeyValueFilter, but it might be best to also do this for the Converter as well. I as thinking of adding another interface that extends the KeyValueFilter that would be kept in the notification package that passes both the previous value and the new value (the same could be done for Converter). With this change I am also thinking maybe the addListener methods would take the new interface instead of KeyValueFilter as well possibly. What do you guys think? > > In terms of how to workaround it, a relatively heavy weight solution would be for the converter to track key/values as it gets events and them compare event contents with its cache. Values should be refs, so should not take too much space? I doubt injecting a CDI cache would work. > > Cheers, > >> >> Thanks >> >> Radim >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Thu Sep 18 13:37:50 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 18 Sep 2014 19:37:50 +0200 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: References: <541A78BD.3030709@redhat.com> Message-ID: <541B186E.7040308@redhat.com> On 09/18/2014 07:21 PM, William Burns wrote: > On Thu, Sep 18, 2014 at 4:11 AM, Galder Zamarre?o wrote: >> Radim, adding -dev list since others might have the same qs: >> >> @Will, some important information below: >> >> On 18 Sep 2014, at 08:16, Radim Vansa wrote: >> >>> Hi Galder, >>> >>> re: to your last blogpost $SUBJ: I miss two information there: >>> >>> 1) You say that the filter/converter factories are deployed as JAR - do you need to update infinispan modules' dependencies on the server, or can you do that in any other way (via configuration)? >> There?s nothing to be updated. The jars are deployed in the deployments/ folder or via CLI or whatever other standard deployment method is used. We have purpousefully built a deployment processor that processes these jars and does all the hard work for the user. For more info, see the filter/converter tests in the Infinispan Server integration testsuite. 
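(As an aside, the "service definitions" mechanism mentioned above boils down to the standard JDK service-provider pattern: the deployed JAR lists its factory implementations under META-INF/services/, and the server-side deployment processor can look them up through the deployment's class loader. The sketch below is conceptual only; the EventFilterFactory interface is an illustrative assumption, not the actual Infinispan SPI.)

import java.util.ServiceLoader;

// Conceptual sketch of discovering filter/converter factories declared in a
// deployed JAR via META-INF/services entries. The interface is illustrative.
public class FactoryDiscoverySketch {

   public interface EventFilterFactory {
      Object createFilter(Object[] params);
   }

   public static void discover(ClassLoader deploymentClassLoader) {
      // reads META-INF/services/<fully qualified EventFilterFactory name>
      for (EventFilterFactory factory :
            ServiceLoader.load(EventFilterFactory.class, deploymentClassLoader)) {
         System.out.println("found factory: " + factory.getClass().getName());
         // a deployment processor would register it under its declared name here
      }
   }
}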
>> >>> This is more general question (I've ran into that with compatibility mode as well), could you provide a link how custom JARs that Infinispan should use are deployed? >> There?s no generic solution at the moment. The current solution is limited to filter/converter jars for remote eventing because we depend on service definitions in the jar to find the SPIs that we need to plugin to the Infinispan Server. >> >>> 2) Let's say that I want to use the converter to produce diffs, therefore the converter needs the previous (overwritten) value as well. Would injecting the cache through CDI work, or is the cache already updated when the converter runs? Can this be reliable at all? > When the notification is raised it has already been committed into the > data container so it is not possible to do a get at this point. > >> Initially when I started working on remote events stuff, I considered the need of previous value in both converter and filter interfaces. I think they can be useful, but here I?m relying on Will?s core filter/converter instances to provide them to the Hot Rod remote events and at the moment they don't. @Will, are you considering adding this? Since it affects API, it might be a good time to do this now. > I actually was talking to Emmanuel about this yesterday for a bit. It > seems that we will need to expose the previous value to at least the > KeyValueFilter, but it might be best to also do this for the Converter > as well. I as thinking of adding another interface that extends the > KeyValueFilter that would be kept in the notification package that > passes both the previous value and the new value (the same could be > done for Converter). With this change I am also thinking maybe the > addListener methods would take the new interface instead of > KeyValueFilter as well possibly. What do you guys think? Please, consider also the corner cases such as overwriting already updated value, e.g. after OutdatedTopologyException. Sometimes the oldValue might not be correct (we probably can't evade this but I hope we can detect that it might have happened) and the Converter should react to that - e.g. by sending full new value instead of empty diff (because oldValue == newValue). Radim >> In terms of how to workaround it, a relatively heavy weight solution would be for the converter to track key/values as it gets events and them compare event contents with its cache. Values should be refs, so should not take too much space? I doubt injecting a CDI cache would work. >> >> Cheers, >> >>> Thanks >>> >>> Radim >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From bban at redhat.com Fri Sep 19 02:00:09 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 19 Sep 2014 08:00:09 +0200 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <541ACB6D.3040901@redhat.com> Message-ID: <541BC669.4000305@redhat.com> On 18/09/14 15:28, Dan Berindei wrote: > > > On Thu, Sep 18, 2014 at 3:09 PM, Bela Ban > wrote: > > > > On 18/09/14 13:03, Dan Berindei wrote: > > Thanks Pedro, this looks great. 
> > > > However, I don't think it's ok to treat CommitCommands/Pessimistic > > PrepareCommands as RemoteLockCommands just because they may send L1 > > invalidation commands. It's true that those commands will block, but > > there's no need to wait for any other command before doing the L1 > > invalidation. In fact, the non-tx writes on backup owners, which you > > consider to be non-blocking, can also send L1 invalidation > commands (see > > L1NonTxInterceptor.invalidateL1). > > > > On the other hand, one of the good things that the remote > executor did > > was to allow queueing lots of commands with a higher topology id, > when > > one of the nodes receives the new topology much later than the > others. > > We still have to consider each TopologyAffectedCommand as potentially > > blocking and put it through the remote executor. > > > > And InvalidateL1Commands are also TopologyAffectedCommands, so > there's > > still a potential for deadlock when L1 is enabled and we have > maxThreads > > write commands blocked sending L1 invalidations and those L1 > > invalidation commands are stuck in the remote executor's queue on > > another node. And with (very) unlucky timing the remote executor > might > > not even get to create maxThreads threads before the deadlock > appears. I > > wonder if we could write a custom executor that checks what the first > > task in the queue is every second or so, and creates a bunch of new > > threads if the first task in the queue hasn't changed. > > > > You're right about the remote executor getting full as well, we're > > lacking any feedback mechanism to tell the sender to slow down, > except > > for blocking the OOB thread. > > JGroups sends credits back to the sender *after* the message has been > delivered into the application. If the application is slow in processing > the messages, or blocks for some time, then the sender will not receive > enough credits and thus also slow down, or even block. > > > > I wonder if we could tell JGroups somehow > > to discard the message from inside MessageDispatcher.handle (e.g. throw > > a DiscardMessageException), so the sender has to retransmit it > > At this point, JGroups considers the message *delivered* (as it has > passed the UNICAST or NAKACK protocols), and it won't get resent. You > cannot discard it either, as this will be a message loss. However, if > you can tolerate loss, all is fine. E.g. if you discard a topo message > with a lower ID, I don't think any harm is done in Infinispan. (?). To > discard or not, is Infinispan's decision. > The other thing is to block, but this can have an impact back into the > JGroups thread pools. > > > Right, I was hoping the message would be marked as delivered only after > Infinispan finished processing the message (i.e. when up() returns in > UNICAST/NAKACK). No, that's not the case. JGroups is a reliable transport and works similarly to TCP: a message is considered delivered when it leaves the NAKACK or UNICAST protocols. Same for TCP: a read reads bytes available from the socket, and those bytes are considered delivered by TCP. > Perhaps we could delay sending the credits instead? When process a > message on our internal thread pool, it would be nice if we could tell > JGroups to send credits back only when we really finished processing the > message. Not nice, as this break encapsulation. This stuff is supposed to be hidden from you. But what you can do is to block the incoming thread: only when it returns will JGroups send credits back to the sender. 
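To illustrate the two hand-off policies being weighed in this thread -- block the submitting (JGroups OOB/incoming) thread so credits are withheld, or reject when the queue is full -- the same choice can be written down with plain java.util.concurrent primitives. This is only a sketch, not Infinispan's BlockingTaskAwareExecutorService; class and method names are illustrative.

   import java.util.concurrent.ArrayBlockingQueue;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.RejectedExecutionException;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;

   public class HandOffPolicies {

      // Option 1: bounded queue, caller blocks. The thread calling execute()
      // (here: the JGroups thread) waits until the pool has room, so credits
      // only flow back to the sender once there is real capacity.
      static ExecutorService callerBlocks(int threads, int capacity) {
         return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
               new ArrayBlockingQueue<>(capacity),
               (task, pool) -> {
                  try {
                     pool.getQueue().put(task);   // block the submitting thread
                  } catch (InterruptedException e) {
                     Thread.currentThread().interrupt();
                     throw new RejectedExecutionException(e);
                  }
               });
      }

      // Option 2: bounded queue, abort. execute() throws RejectedExecutionException
      // when the queue is full, leaving the drop/retransmit decision to the caller.
      static ExecutorService rejects(int threads, int capacity) {
         return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
               new ArrayBlockingQueue<>(capacity),
               new ThreadPoolExecutor.AbortPolicy());
      }
   }

Which of the two is appropriate is exactly the trade-off under discussion: blocking pushes back on the sender through the credit mechanism, while rejecting keeps the JGroups thread free but forces an explicit decision about what to do with the message.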
If there are a lot of requests, then at one point the internal ISPN thread pool will have to start blocking selected threads, and possibly start discarding selected messages IMO. -- Bela Ban, JGroups lead (http://www.jgroups.org) From pedro at infinispan.org Fri Sep 19 06:08:16 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 19 Sep 2014 11:08:16 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <541AD030.6030200@infinispan.org> Message-ID: <541C0090.1040801@infinispan.org> On 09/18/2014 05:32 PM, Dan Berindei wrote: > > > > > You're right about the remote executor getting full as well, we're > > lacking any feedback mechanism to tell the sender to slow down, except > > for blocking the OOB thread. I wonder if we could tell JGroups somehow > > to discard the message from inside MessageDispatcher.handle (e.g. throw > > a DiscardMessageException), so the sender has to retransmit it and we > > don't block the OOB thread. That should allow us to set a size limit on > > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT? > > Even if we have a way to tell the JGroups to resend the message, we have > no idea if the executor service is full or not. We allow a user to > inject their own implementation of it. > > > We do allow a custom executor implementation, but it's our SPI. So we > can require the custom executor to be configured to throw a > RejectedExecutionException when the queue is full instead of blocking > the caller thread, if it helps us. > And what about jbossas/wildfly? Don't they inject their own executor service? The better approach would be to have a custom rejection policy that puts the task back in the BlockingTaskAwareExecutor's queue. From mmarkus at redhat.com Fri Sep 19 06:57:46 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 19 Sep 2014 13:57:46 +0300 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <5419B418.5030506@infinispan.org> References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> Message-ID: <9FEB74C4-7273-47EB-AE80-C050CB48A14A@redhat.com> "Read Command : No changes are made. When a read command is delivered, it is processed directly in the JGroups executor service." - you might want to check if there's a force write lock flag present as well On Sep 17, 2014, at 19:17, Pedro Ruivo wrote: > new link: > https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler > > On 09/17/2014 05:08 PM, Pedro Ruivo wrote: >> Hi, >> >> I've just wrote on the wiki a new algorithm to better handle the remote >> commands. You can find it in [1]. >> >> If you have questions, suggestion or just want to discuss some aspect, >> please do in thread. I'll update the wiki page based on this discussion >> >> Thanks. >> >> Cheers, >> Pedro >> >> [1]https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...)
>> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mudokonman at gmail.com Fri Sep 19 11:09:51 2014 From: mudokonman at gmail.com (William Burns) Date: Fri, 19 Sep 2014 11:09:51 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> Message-ID: Comments regarding embedded usage are inline. I am not quite sure on the hot rod client ones. On Thu, Sep 18, 2014 at 12:24 PM, Emmanuel Bernard wrote: > Hi all, > > I have had a good exchange on how someone would use clustered / remote listeners to do custom continuous query features. > > I have a few questions and requests to make this fully and easily doable > > ## Value as bytes or as objects > > Assuming a Hot Rod based usage and protobuf as the serialization layer. What are KeyValueFilter and Converter seeing? > I assume today the bytes are unmarshalled and the Java object is provided to these interfaces. > In a protobuf based storage, does that mean that the user must create the Java objects out of a protobuf compiler and deploy these classes in the classpath of each server node? > Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with lss problems related to the classloader. > > Thoughts? > > ## Synced listeners > > In a transactional clustered listener marked as sync. Does the transaction commits and then waits for the relevant clustered listeners to proceed before returning the hand to the Tx client? Or is there something else going on? It commits the transaction and then notifies the listeners. The listener notification is done while still holding all appropriate locks for the given entry to guarantee proper ordering. > > ## oldValue and newValue > > I understand why the oldValue was not provided in the initial work. It requires to send more data across the network and at least double the number of values unmarshalled. > But for continuous queries, being able to compare the old and the new value is critical to reduce the number of events sent to the listener. > > Imagine the following use case. A listener exposes the average age for a certain type of customer. You would implement it the following way. > > 1. Add a KeyValueFilter that > - upon creation, filter out the customers of the wrong type > - upon update, keep customers that > - *were* of the right time but no longer are > - were not of the right type but now now *are* > - remains of the right type and whose age has changed > - upon deletion, keep customers that *were* of the right type > > 2. 
Converter > In the converter, one could send the whole customer but it would be more efficient to only send the age of the customer as well as wether it is added to or removed from the matching customers > - upon creation, you send the customer age and mark it as addition > - upon deletion, you send the customer age and mark it as deletion > - upon update > - if the customer was of the right type but no longer is, send the age as well as a deletion flag > - if the customer was not of the right type but now is, send the age as well as an addition flag > - if the customer age has changed, send the difference with a modification flag > > 3. The listener then needs to keep the total sum of all ages as well as the total number of customers of the right type. Based on the sent events, it can adjust these two counters. > > That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion). I agree the oldValue is required for most efficient usage. From the oldValue though it seems you can infer what operation it is. Create has null oldValue and delete has null newValue I would think. This also came up here http://markmail.org/search/?q=infinispan#query:infinispan%20list%3Aorg.jboss.lists.infinispan-dev%20order%3Adate-backward+page:1+mid:nn6r3uuabq3hyzmd+state:results and I am debating if this interface should be separate or just an extension from KeyValueFilter etc. The thing is the new interface is mostly beneficial only to clustered listeners since non cluster listeners get both the pre and post event which makes the old value accessible. I may have to just try to write it up and see how it goes unless anyone has any suggestions. > > If you keep the existing interfaces and their data, the data send and the memory consumed becomes much much bigger. I leave it as an exercise but I think you need to: > - send *all* remove and update events regardless of the value (essentially no KeyValueFilter) > - in the listener, keep a list of *all* matching keys so that you know if a new event is about a data that was already matching your criteria or not and act accordingly. > > BTW, you need the old and new value even if your listener returns actual matching results instead of an aggregation. More or less for the same reasons. > > Continuous query is about the most important use case for remote and clustered listeners and I think we should address it properly and as efficiently as possible. Adding continuous query to Infinispan will then ?simply? be a matter of agreeing on the query syntax and implement the predicates as smartly as possible. > > With the use case I describe, I think the best approach is to merge the KVF and Converter into a single Listener like interface that is able to send or silence an event payload. But that?s guestimate. > Because oldValue / newValue implies an unmarshalling overhead we might want to make it an annotation based flag on the class that is executed on each node (somewhat similar to the settings hosted on @Listener). We actually have an interface that combines the 2 interfaces, it is called KeyValueFilterConverter. It was added to more efficiently perform indexless querying using entry retriever. This interface is not supported for cluster listeners at this time though. 
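To make the average-age example above concrete, here is a rough sketch of a combined filter/converter in the spirit of KeyValueFilterConverter, written as if it were handed both the old and the new value -- which, as just noted, the current interfaces do not provide. Customer, AgeDelta, the method signature and the "gold" type are all made up for the example.

   public class AverageAgeFilterConverter {

      static final String MATCHING_TYPE = "gold";   // "customers of the right type"

      // Returns null to silence the event, otherwise a small payload the
      // listener applies to its running totals. Hypothetical signature.
      public AgeDelta filterAndConvert(String key, Customer oldValue, Customer newValue) {
         boolean was = oldValue != null && MATCHING_TYPE.equals(oldValue.type);
         boolean is = newValue != null && MATCHING_TYPE.equals(newValue.type);
         if (was && !is) return new AgeDelta(-oldValue.age, -1);            // left the matching set
         if (!was && is) return new AgeDelta(newValue.age, +1);             // joined the matching set
         if (was && is && oldValue.age != newValue.age)
            return new AgeDelta(newValue.age - oldValue.age, 0);            // age changed
         return null;                                                       // nothing the listener cares about
      }

      public static class AgeDelta implements java.io.Serializable {
         public final int ageDiff;    // adjust the total age by this much
         public final int countDiff;  // adjust the customer count by this much
         public AgeDelta(int ageDiff, int countDiff) { this.ageDiff = ageDiff; this.countDiff = countDiff; }
      }

      public static class Customer implements java.io.Serializable {
         public String type;
         public int age;
      }
   }

The listener side then reduces to the two counters described in point 3: it adds ageDiff to the total age and countDiff to the customer count on each event, and can answer the average at any time.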
> > ## includeCurrentState and very narrow filtering > > The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. > But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. The filter and converter are applied while doing the current state so it should be performant in that case. Also to note while the current state operation is ongoing any new notifications are enqueued until the current state is applied. These new events will not cause blocking as you mentioned earlier with sync since they are immediately enqueued. The queueing may be something we have to add blocking though possibly to prevent memory exhaustion in the case when the initial iteration is extremely slow and there are a lot of updates during that period. The code currently has code to release queued events by segment as the segments are completed, I have thought about also releasing events by key instead which should relieve a lot of possible memory usage. > > An alternative approach is to first do a query matching the elements the listener is interested in and queue up the events until the query is fully processed. Can a listener access a cache and do a query? Should we offer such option in a more packaged way? The provided filter be doing this already. But maybe more info on what you are proposing. Either way it seems we have to have the listener installed before we can run the query so we can properly tell what events should be raised in the event of concurrent events while the query is running. > > For a listener that is only interested in keys whose value city contains Springfield, Virginia, the gain would be massive. > > ## Remote listener and non Java HR clients > > Does the API of non Java HR clients support the enlistements of listeners and attach registered keyValueFilter / Converter? Or is that planned? Just curious. > > Emmanuel > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Fri Sep 19 12:39:37 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 19 Sep 2014 18:39:37 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> Message-ID: <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> On 19 Sep 2014, at 17:09, William Burns wrote: > Comments regarding embedded usage are inline. I am not quite sure on > the hot rod client ones. > > On Thu, Sep 18, 2014 at 12:24 PM, Emmanuel Bernard > wrote: >> >> >> That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion). > > I agree the oldValue is required for most efficient usage. From the > oldValue though it seems you can infer what operation it is. Create > has null oldValue and delete has null newValue I would think. well except when I do cache.put(key, null) but that might not matter. The other use case is the includeInitialState where the old value would be either null or the same as the new one? Could a user detect that state based on old == new? 
At any rate the programming model becomes quite awkward and rely on strong understanding, I?d prefer to stick an enum showing the transition explicitly to make things easier. > > This also came up here > http://markmail.org/search/?q=infinispan#query:infinispan%20list%3Aorg.jboss.lists.infinispan-dev%20order%3Adate-backward+page:1+mid:nn6r3uuabq3hyzmd+state:results > and I am debating if this interface should be separate or just an > extension from KeyValueFilter etc. > > The thing is the new interface is mostly beneficial only to clustered > listeners since non cluster listeners get both the pre and post event > which makes the old value accessible. I may have to just try to write > it up and see how it goes unless anyone has any suggestions. +1, feel free to send even gists of your progresses, I?m happy to provide feedback. > >> >> With the use case I describe, I think the best approach is to merge the KVF and Converter into a single Listener like interface that is able to send or silence an event payload. But that?s guestimate. >> Because oldValue / newValue implies an unmarshalling overhead we might want to make it an annotation based flag on the class that is executed on each node (somewhat similar to the settings hosted on @Listener). > > We actually have an interface that combines the 2 interfaces, it is > called KeyValueFilterConverter. It was added to more efficiently > perform indexless querying using entry retriever. This interface is > not supported for cluster listeners at this time though. That interface would do - assuming we get the old / new values and the transition. But then it begs the question, do we really want to keep the KeyFilter, KeyValueFilter and Converter interfaces around. That?s a lot of interface for features quite interrelated. I can see why they can speed things up (esp KeyFilter that does not require to unmarshal the value). > >> >> ## includeCurrentState and very narrow filtering >> >> The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. >> But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. > > The filter and converter are applied while doing the current state so > it should be performant in that case. I don?t understand, the code still has to look all key/value pairs of a given node (at least the primary ones) and send them through the KVF / Converter logic. So you need to unmarshal all of them as well as load from cachestore the passivated ones. Correct? That?s the cost I am describing here. > Also to note while the current > state operation is ongoing any new notifications are enqueued until > the current state is applied. These new events will not cause > blocking as you mentioned earlier with sync since they are immediately > enqueued. The queueing may be something we have to add blocking > though possibly to prevent memory exhaustion in the case when the > initial iteration is extremely slow and there are a lot of updates > during that period. The code currently has code to release queued > events by segment as the segments are completed, I have thought about > also releasing events by key instead which should relieve a lot of > possible memory usage. > >> >> An alternative approach is to first do a query matching the elements the listener is interested in and queue up the events until the query is fully processed. Can a listener access a cache and do a query? 
Should we offer such option in a more packaged way? > > The provided filter be doing this already. > > But maybe more info on what you are proposing. Either way it seems we > have to have the listener installed before we can run the query so we > can properly tell what events should be raised in the event of > concurrent events while the query is running. You lost me here :) From rory.odonnell at oracle.com Mon Sep 22 03:48:05 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Mon, 22 Sep 2014 08:48:05 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b31 and JDK 8u40 b06 are available on java.net Message-ID: <541FD435.80709@oracle.com> Hi Galder, Early Access build for JDK 9 b31 is available on java.net, summary of changes are listed here Early Access build for JDK 8u40 b06 is available on java.net, summary of changes are listed here. Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140922/ea5ab5d2/attachment.html From pedro at infinispan.org Mon Sep 22 05:20:46 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 22 Sep 2014 10:20:46 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: <9FEB74C4-7273-47EB-AE80-C050CB48A14A@redhat.com> References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <9FEB74C4-7273-47EB-AE80-C050CB48A14A@redhat.com> Message-ID: <541FE9EE.70009@infinispan.org> On 09/19/2014 11:57 AM, Mircea Markus wrote: > "Read Command : No changes are made. When a read command is delivered, it is processed directly in the JGroups executor service." > - you might want to check if here's a force write lock flag present as well > No need. I double check and the get sends a LockControlCommand in the originator before the remote get. From sanne at infinispan.org Mon Sep 22 05:26:25 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 22 Sep 2014 10:26:25 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b31 and JDK 8u40 b06 are available on java.net In-Reply-To: <541FD435.80709@oracle.com> References: <541FD435.80709@oracle.com> Message-ID: Build 8u40 might be of particular interest to our performance testing because it includes : http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6642881 -- Sanne On 22 September 2014 08:48, Rory O'Donnell Oracle, Dublin Ireland wrote: > Hi Galder, > > > Early Access build for JDK 9 b31 is available on java.net, summary of > changes are listed here > > Early Access build for JDK 8u40 b06 is available on java.net, summary of > changes are listed here. > > Rgds,Rory > > -- > Rgds,Rory O'Donnell > Quality Engineering Manager > Oracle EMEA , Dublin, Ireland > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon Sep 22 05:38:20 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 22 Sep 2014 11:38:20 +0200 Subject: [infinispan-dev] My weekly status update Message-ID: Hi, I won?t be around for the weekly IRC meeting, so here?s my status updated. Last week: - Sent PRs for: - ISPN-4707 Hot Rod 2.0 should add error codes for suspected nodes and stopping/stopped caches. 
- ISPN-4717 On leave, Hot Rod client ends up with old cluster formation - ISPN-4567 Sent PR to get more logs on why sometimes Arquillian containers are not closed. - ISPN-4563 Race condition in JCache creation for interceptors. In conjunction with Sebastian. - ISPN-4579 SingleNodeJdbcStoreIT.cleanup NPE after test failure. Trivial stuff. - Created new blog post ?Hot Rod Remote Events #3: Customizing Events? This week: - Working on: - ISPN-4736 Add size() operation to Hot Rod. - Closely related, complete ISPN-4470 using the new size operation. - ISPN-4737 Noisy exceptions in Hot Rod client when node goes down - ISPN-4734 Hot Rod marshaller for custom events...etc, needs to be configurable in server - I?ll write up 4th blog post of the remote event series which will focus on receiving events in a clustered environment. Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Mon Sep 22 07:45:21 2014 From: mudokonman at gmail.com (William Burns) Date: Mon, 22 Sep 2014 07:45:21 -0400 Subject: [infinispan-dev] My weekly status update In-Reply-To: References: Message-ID: I also will miss the IRC meeting this week. Last week: - I worked primarily on and sent a PR for ISPN-3402. Unfortunately I got tripped up because of an issue in RHQ which I also submitted a PR for https://github.com/rhq-project/rhq/pull/128 - Also was looking into continuous query discussion for cluster listeners which led me to find ISPN-4745 which is also now in a PR. This week: - Mostly centered around cluster listener changes that will be required to get in. Still need to continue discussion further. - ISPN-4733 Checking for potential issues with System.nanoTime - Any other issues that come up in core that need to be addressed. Also have some product stuff I worked on and will probably need to do this week. Thanks, - Will On Mon, Sep 22, 2014 at 5:38 AM, Galder Zamarre?o wrote: > Hi, > > I won?t be around for the weekly IRC meeting, so here?s my status updated. > > Last week: > - Sent PRs for: > - ISPN-4707 Hot Rod 2.0 should add error codes for suspected nodes and stopping/stopped caches. > - ISPN-4717 On leave, Hot Rod client ends up with old cluster formation > - ISPN-4567 Sent PR to get more logs on why sometimes Arquillian containers are not closed. > - ISPN-4563 Race condition in JCache creation for interceptors. In conjunction with Sebastian. > - ISPN-4579 SingleNodeJdbcStoreIT.cleanup NPE after test failure. Trivial stuff. > - Created new blog post ?Hot Rod Remote Events #3: Customizing Events? > > This week: > - Working on: > - ISPN-4736 Add size() operation to Hot Rod. > - Closely related, complete ISPN-4470 using the new size operation. > - ISPN-4737 Noisy exceptions in Hot Rod client when node goes down > - ISPN-4734 Hot Rod marshaller for custom events...etc, needs to be configurable in server > - I?ll write up 4th blog post of the remote event series which will focus on receiving events in a clustered environment. 
> > Cheers, > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Mon Sep 22 09:51:56 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 22 Sep 2014 14:51:56 +0100 Subject: [infinispan-dev] New algorithm to handle remote commands In-Reply-To: References: <5419B1E7.9000408@infinispan.org> <5419B418.5030506@infinispan.org> <541AD030.6030200@infinispan.org> Message-ID: <5420297C.5070508@infinispan.org> On 09/18/2014 05:32 PM, Dan Berindei wrote: > > > > > And InvalidateL1Commands are also TopologyAffectedCommands, so there's > > still a potential for deadlock when L1 is enabled and we have maxThreads > > write commands blocked sending L1 invalidations and those L1 > > invalidation commands are stuck in the remote executor's queue on > > another node. And with (very) unlucky timing the remote executor might > > not even get to create maxThreads threads before the deadlock appears. I > > wonder if we could write a custom executor that checks what the first > > task in the queue is every second or so, and creates a bunch of new > > threads if the first task in the queue hasn't changed. > I found another potential deadlock with the transaction boundary commands when they are forwarded (state transfer) to the new owners. The forward commands may not have threads available to process them. From mmarkus at redhat.com Mon Sep 22 10:43:36 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Mon, 22 Sep 2014 17:43:36 +0300 Subject: [infinispan-dev] logs for the weekly meeting Message-ID: <2AFD0974-937D-4FB6-9988-19788584839B@redhat.com> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-09-22-14.03.log.html Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mudokonman at gmail.com Mon Sep 22 13:23:36 2014 From: mudokonman at gmail.com (William Burns) Date: Mon, 22 Sep 2014 13:23:36 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> Message-ID: On Fri, Sep 19, 2014 at 12:39 PM, Emmanuel Bernard wrote: > > On 19 Sep 2014, at 17:09, William Burns wrote: > >> Comments regarding embedded usage are inline. I am not quite sure on >> the hot rod client ones. >> >> On Thu, Sep 18, 2014 at 12:24 PM, Emmanuel Bernard >> wrote: >>> >>> >>> That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion). >> >> I agree the oldValue is required for most efficient usage. From the >> oldValue though it seems you can infer what operation it is. Create >> has null oldValue and delete has null newValue I would think. > > well except when I do cache.put(key, null) but that might not matter. We don't allow a null value to be passed to put. > The other use case is the includeInitialState where the old value would be either null or the same as the new one? Could a user detect that state based on old == new? It would have prevValue as null in this case. > At any rate the programming model becomes quite awkward and rely on strong understanding, I?d prefer to stick an enum showing the transition explicitly to make things easier. 
I am not sold on this as it seems pretty trivial to decipher which operation is which and the information would be present on the javadocs as well. > >> >> This also came up here >> http://markmail.org/search/?q=infinispan#query:infinispan%20list%3Aorg.jboss.lists.infinispan-dev%20order%3Adate-backward+page:1+mid:nn6r3uuabq3hyzmd+state:results >> and I am debating if this interface should be separate or just an >> extension from KeyValueFilter etc. >> >> The thing is the new interface is mostly beneficial only to clustered >> listeners since non cluster listeners get both the pre and post event >> which makes the old value accessible. I may have to just try to write >> it up and see how it goes unless anyone has any suggestions. > > +1, feel free to send even gists of your progresses, I?m happy to provide feedback. Sure I am going to update the other posting with what I was going to propose/try out for now. Please feel free to make any suggestions. > >> >>> >>> With the use case I describe, I think the best approach is to merge the KVF and Converter into a single Listener like interface that is able to send or silence an event payload. But that?s guestimate. >>> Because oldValue / newValue implies an unmarshalling overhead we might want to make it an annotation based flag on the class that is executed on each node (somewhat similar to the settings hosted on @Listener). >> >> We actually have an interface that combines the 2 interfaces, it is >> called KeyValueFilterConverter. It was added to more efficiently >> perform indexless querying using entry retriever. This interface is >> not supported for cluster listeners at this time though. > > That interface would do - assuming we get the old / new values and the transition. > But then it begs the question, do we really want to keep the KeyFilter, KeyValueFilter and Converter interfaces around. That?s a lot of interface for features quite interrelated. > I can see why they can speed things up (esp KeyFilter that does not require to unmarshal the value). >> >>> >>> ## includeCurrentState and very narrow filtering >>> >>> The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. >>> But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. >> >> The filter and converter are applied while doing the current state so >> it should be performant in that case. > > I don?t understand, the code still has to look all key/value pairs of a given node (at least the primary ones) and send them through the KVF / Converter logic. So you need to unmarshal all of them as well as load from cachestore the passivated ones. Correct? That?s the cost I am describing here. Sorry I didn't realize you were referring to an indexed query. Yes that could improve performance of the initial retrieval. I am not as familiar with indexed query, but I don't know if it lends itself well to the individual filtering that is done as each event is fired. I think this needs to be discussed/investigated further. > >> Also to note while the current >> state operation is ongoing any new notifications are enqueued until >> the current state is applied. These new events will not cause >> blocking as you mentioned earlier with sync since they are immediately >> enqueued. 
The queueing may be something we have to add blocking >> though possibly to prevent memory exhaustion in the case when the >> initial iteration is extremely slow and there are a lot of updates >> during that period. The code currently has code to release queued >> events by segment as the segments are completed, I have thought about >> also releasing events by key instead which should relieve a lot of >> possible memory usage. >> >>> >>> An alternative approach is to first do a query matching the elements the listener is interested in and queue up the events until the query is fully processed. Can a listener access a cache and do a query? Should we offer such option in a more packaged way? >> >> The provided filter be doing this already. >> >> But maybe more info on what you are proposing. Either way it seems we >> have to have the listener installed before we can run the query so we >> can properly tell what events should be raised in the event of >> concurrent events while the query is running. > > You lost me here :) This is just details with how the current state doesn't lose events in the middle. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Mon Sep 22 13:38:18 2014 From: mudokonman at gmail.com (William Burns) Date: Mon, 22 Sep 2014 13:38:18 -0400 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: <541B186E.7040308@redhat.com> References: <541A78BD.3030709@redhat.com> <541B186E.7040308@redhat.com> Message-ID: On Thu, Sep 18, 2014 at 1:37 PM, Radim Vansa wrote: > On 09/18/2014 07:21 PM, William Burns wrote: >> >> On Thu, Sep 18, 2014 at 4:11 AM, Galder Zamarre?o >> wrote: >>> >>> Radim, adding -dev list since others might have the same qs: >>> >>> @Will, some important information below: >>> >>> On 18 Sep 2014, at 08:16, Radim Vansa wrote: >>> >>>> Hi Galder, >>>> >>>> re: to your last blogpost $SUBJ: I miss two information there: >>>> >>>> 1) You say that the filter/converter factories are deployed as JAR - do >>>> you need to update infinispan modules' dependencies on the server, or can >>>> you do that in any other way (via configuration)? >>> >>> There?s nothing to be updated. The jars are deployed in the deployments/ >>> folder or via CLI or whatever other standard deployment method is used. We >>> have purpousefully built a deployment processor that processes these jars >>> and does all the hard work for the user. For more info, see the >>> filter/converter tests in the Infinispan Server integration testsuite. >>> >>>> This is more general question (I've ran into that with compatibility >>>> mode as well), could you provide a link how custom JARs that Infinispan >>>> should use are deployed? >>> >>> There?s no generic solution at the moment. The current solution is >>> limited to filter/converter jars for remote eventing because we depend on >>> service definitions in the jar to find the SPIs that we need to plugin to >>> the Infinispan Server. >>> >>>> 2) Let's say that I want to use the converter to produce diffs, >>>> therefore the converter needs the previous (overwritten) value as well. >>>> Would injecting the cache through CDI work, or is the cache already updated >>>> when the converter runs? Can this be reliable at all? >> >> When the notification is raised it has already been committed into the >> data container so it is not possible to do a get at this point. 
>> >>> Initially when I started working on remote events stuff, I considered the >>> need of previous value in both converter and filter interfaces. I think they >>> can be useful, but here I?m relying on Will?s core filter/converter >>> instances to provide them to the Hot Rod remote events and at the moment >>> they don't. @Will, are you considering adding this? Since it affects API, it >>> might be a good time to do this now. >> >> I actually was talking to Emmanuel about this yesterday for a bit. It >> seems that we will need to expose the previous value to at least the >> KeyValueFilter, but it might be best to also do this for the Converter >> as well. I as thinking of adding another interface that extends the >> KeyValueFilter that would be kept in the notification package that >> passes both the previous value and the new value (the same could be >> done for Converter). With this change I am also thinking maybe the >> addListener methods would take the new interface instead of >> KeyValueFilter as well possibly. What do you guys think? Talking with Mircea we are thinking that the least confusing way of implementing this is to instead just change the KeyValueFilter and Converter interfaces to instead have an additional parameter of oldValue passed along in addition to the others. Unfortunately some uses of these interfaces does not fully make sense, especially outside of events, but in those cases we will always pass a null oldValue and I will update the other methods to reflect this. Such cases are cluster entry iterator and data container iteration. Any objections or thoughts? > > > Please, consider also the corner cases such as overwriting already updated > value, e.g. after OutdatedTopologyException. Sometimes the oldValue might > not be correct (we probably can't evade this but I hope we can detect that > it might have happened) and the Converter should react to that - e.g. by > sending full new value instead of empty diff (because oldValue == newValue). Unfortunately it is too late to retrieve the old value by the time we do the retry if it was already replicated to a backup owner. We do detect this and provide that info the Listener event, but talking with some others I am unsure if providing this information to the Filter/Converter is fully needed. > > Radim > > >>> In terms of how to workaround it, a relatively heavy weight solution >>> would be for the converter to track key/values as it gets events and them >>> compare event contents with its cache. Values should be refs, so should not >>> take too much space? I doubt injecting a CDI cache would work. 
>>> >>> Cheers, >>> >>>> Thanks >>>> >>>> Radim >>>> >>>> -- >>>> Radim Vansa >>>> JBoss DataGrid QA >>>> >>> >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Radim Vansa > JBoss DataGrid QA > From rvansa at redhat.com Tue Sep 23 03:15:49 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 23 Sep 2014 09:15:49 +0200 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: References: <541A78BD.3030709@redhat.com> <541B186E.7040308@redhat.com> Message-ID: <54211E25.9090904@redhat.com> On 09/22/2014 07:38 PM, William Burns wrote: > On Thu, Sep 18, 2014 at 1:37 PM, Radim Vansa wrote: >> On 09/18/2014 07:21 PM, William Burns wrote: >>> On Thu, Sep 18, 2014 at 4:11 AM, Galder Zamarre?o >>> wrote: >>>> Radim, adding -dev list since others might have the same qs: >>>> >>>> @Will, some important information below: >>>> >>>> On 18 Sep 2014, at 08:16, Radim Vansa wrote: >>>> >>>>> Hi Galder, >>>>> >>>>> re: to your last blogpost $SUBJ: I miss two information there: >>>>> >>>>> 1) You say that the filter/converter factories are deployed as JAR - do >>>>> you need to update infinispan modules' dependencies on the server, or can >>>>> you do that in any other way (via configuration)? >>>> There?s nothing to be updated. The jars are deployed in the deployments/ >>>> folder or via CLI or whatever other standard deployment method is used. We >>>> have purpousefully built a deployment processor that processes these jars >>>> and does all the hard work for the user. For more info, see the >>>> filter/converter tests in the Infinispan Server integration testsuite. >>>> >>>>> This is more general question (I've ran into that with compatibility >>>>> mode as well), could you provide a link how custom JARs that Infinispan >>>>> should use are deployed? >>>> There?s no generic solution at the moment. The current solution is >>>> limited to filter/converter jars for remote eventing because we depend on >>>> service definitions in the jar to find the SPIs that we need to plugin to >>>> the Infinispan Server. >>>> >>>>> 2) Let's say that I want to use the converter to produce diffs, >>>>> therefore the converter needs the previous (overwritten) value as well. >>>>> Would injecting the cache through CDI work, or is the cache already updated >>>>> when the converter runs? Can this be reliable at all? >>> When the notification is raised it has already been committed into the >>> data container so it is not possible to do a get at this point. >>> >>>> Initially when I started working on remote events stuff, I considered the >>>> need of previous value in both converter and filter interfaces. I think they >>>> can be useful, but here I?m relying on Will?s core filter/converter >>>> instances to provide them to the Hot Rod remote events and at the moment >>>> they don't. @Will, are you considering adding this? Since it affects API, it >>>> might be a good time to do this now. >>> I actually was talking to Emmanuel about this yesterday for a bit. It >>> seems that we will need to expose the previous value to at least the >>> KeyValueFilter, but it might be best to also do this for the Converter >>> as well. 
I as thinking of adding another interface that extends the >>> KeyValueFilter that would be kept in the notification package that >>> passes both the previous value and the new value (the same could be >>> done for Converter). With this change I am also thinking maybe the >>> addListener methods would take the new interface instead of >>> KeyValueFilter as well possibly. What do you guys think? > Talking with Mircea we are thinking that the least confusing way of > implementing this is to instead just change the KeyValueFilter and > Converter interfaces to instead have an additional parameter of > oldValue passed along in addition to the others. Unfortunately some > uses of these interfaces does not fully make sense, especially outside > of events, but in those cases we will always pass a null oldValue and > I will update the other methods to reflect this. Such cases are > cluster entry iterator and data container iteration. > > Any objections or thoughts? I understand that you don't want zillions of interfaces, but in my opinion, the interface should always fit its purpose. I would rather have UpdateFilter.accept(key, oldValue, oldMetadata, newValue, newMetadata) and similar UpdateConverter than reusing the interface with dummy arguments elsewhere. > >> >> Please, consider also the corner cases such as overwriting already updated >> value, e.g. after OutdatedTopologyException. Sometimes the oldValue might >> not be correct (we probably can't evade this but I hope we can detect that >> it might have happened) and the Converter should react to that - e.g. by >> sending full new value instead of empty diff (because oldValue == newValue). > Unfortunately it is too late to retrieve the old value by the time we > do the retry if it was already replicated to a backup owner. We do > detect this and provide that info the Listener event, but talking with > some others I am unsure if providing this information to the > Filter/Converter is fully needed. Not providing that info to Converter limits the use-case of converter producing deltas. In fact it's even worse - users will write that converter (because the won't expect incorrect old values - nobody reads documentation) and it will give them unreliable results. Radim -- Radim Vansa JBoss DataGrid QA From mmarkus at redhat.com Tue Sep 23 06:45:07 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 13:45:07 +0300 Subject: [infinispan-dev] Data versioning In-Reply-To: <20140912161402.GG24677@hibernate.org> References: <5412E9DF.3070904@unine.ch> <20140912161402.GG24677@hibernate.org> Message-ID: <3E072F6C-888B-4427-9A40-A0C9FC917E75@redhat.com> On Sep 12, 2014, at 19:14, Emmanuel Bernard wrote: > Mircea, Tristan, > > I proposed to Pierre to be an invited writer on blog.infinispan.org to talk > about it and do a blog sized version of it. +1 > > Any reason not to ? > > Emmanuel > > On Fri 2014-09-12 14:41, Pierre Sutra wrote: >> Hello, >> >> In the context of the LEADS project, we recently wrote a paper |1] >> regarding data versioning in key-value stores, and using Infinispan as a >> basis to explore various implementations. It will be presented at the >> IEEE SRDS'14 conference this October [2]. We hope that it might interest >> you. Do not hesitate to address us comments and/or questions. 
>> >> Regards, >> Pierre >> >> [1] http://tinyurl.com/srds14versioning >> [2] www-nishio.ist.osaka-u.ac.jp/conf/srds2014/ >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From emmanuel at hibernate.org Tue Sep 23 08:08:45 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 23 Sep 2014 14:08:45 +0200 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: References: <541A78BD.3030709@redhat.com> <541B186E.7040308@redhat.com> Message-ID: <5F1BAF5D-69CF-4B39-83E6-683273CD751D@hibernate.org> I think you should pass the event type as enum as well. It is essentially free in memory and bits on the wire and avoids confusing guesses involving old/new value comparison with null to guess the event and transition happening. > On 22 sept. 2014, at 19:38, William Burns wrote: > > Talking with Mircea we are thinking that the least confusing way of > implementing this is to instead just change the KeyValueFilter and > Converter interfaces to instead have an additional parameter of > oldValue passed along in addition to the others. Unfortunately some > uses of these interfaces does not fully make sense, especially outside > of events, but in those cases we will always pass a null oldValue and > I will update the other methods to reflect this. Such cases are > cluster entry iterator and data container iteration. > > Any objections or thoughts? From galder at redhat.com Tue Sep 23 08:11:30 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Tue, 23 Sep 2014 14:11:30 +0200 Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <586E6531-A5E5-46E8-B126-4DFF1105143C@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> <54180CE6.90908@redhat.com> <586E6531-A5E5-46E8-B126-4DFF1105143C@redhat.com> Message-ID: On 16 Sep 2014, at 14:21, Mircea Markus wrote: > > On Sep 16, 2014, at 13:11, Tristan Tarrant wrote: > >>> Hey, >>> >>> I have been looking at the differences between default values in the XSD vs the default values in the configuration builders. [1] I created a list of differences and talked to Dan about his suggestion for the defaults. The numbers in parentheses are Dan's suggestions, but he also asked me to post here to get a wider set of opinions on these values. This list is based on the code used in infinispan-core, so I still need to go through the server code to check the default values there. 
>>> >>> 1) For locking, the code has concurrency level set to 32, and the XSD has 1000 (32) >>> 2) For eviction: >>> a) the code has max entries set to -1, and the XSD has 10000 (-1) >>> b) the code has interval set to 60000, and the XSD has 5000 (60000) >>> 3) For async configuration: >>> a) the code has queue size set to 1000, and the XSD has 0 (0) >>> b) the code has queue flush interval set to 5000, and the XSD has 10 (10) >>> c) the code has remote timeout set to 15000, and the XSD has 17500 (15000) >>> 4) For hash, the code has number of segments set to 60, and the XSD has 80 (60) >>> 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has 60000 (60000) >>> >>> Please let me know if you have any opinions on these default values, and also if you have any ideas for avoiding these differences in the future. It seems like there are two possibilities at this point: >>> >>> 1) Generating the XSD from the source code >> Impractical without a ton of annotations, since the builder structure is >> very different from the XSD structure. > > In past, schema used to be generated from annotations on the configuration objects. I don't know why we stopped doing that, though - Vladimir might comment more. That happened when we moved away from JAXB. History: https://issues.jboss.org/browse/ISPN-1065 Cheers, > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From emmanuel at hibernate.org Tue Sep 23 08:11:16 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 23 Sep 2014 14:11:16 +0200 Subject: [infinispan-dev] Hot Rod Remote Events #3: Customizing events In-Reply-To: <54211E25.9090904@redhat.com> References: <541A78BD.3030709@redhat.com> <541B186E.7040308@redhat.com> <54211E25.9090904@redhat.com> Message-ID: <3BBDED92-55D3-4401-8420-1167BEF17671@hibernate.org> On 23 sept. 2014, at 09:15, Radim Vansa wrote: >>> Please, consider also the corner cases such as overwriting already updated >>> value, e.g. after OutdatedTopologyException. Sometimes the oldValue might >>> not be correct (we probably can't evade this but I hope we can detect that >>> it might have happened) and the Converter should react to that - e.g. by >>> sending full new value instead of empty diff (because oldValue == newValue). >> Unfortunately it is too late to retrieve the old value by the time we >> do the retry if it was already replicated to a backup owner. We do >> detect this and provide that info the Listener event, but talking with >> some others I am unsure if providing this information to the >> Filter/Converter is fully needed. > > Not providing that info to Converter limits the use-case of converter > producing deltas. In fact it's even worse - users will write that > converter (because the won't expect incorrect old values - nobody reads > documentation) and it will give them unreliable results. Would passing an enum representing the transition - in this case the error state - be sufficient? 
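A sketch of the kind of transition enum being asked about, with all names invented: passed to the filter/converter alongside the old and new values, it removes the null-based guessing and gives the converter an explicit signal when the old value cannot be trusted.

   public enum EventTransition {
      CREATE,          // no previous value existed
      UPDATE,          // previous and new value both available
      REMOVE,          // entry removed, no new value
      INITIAL_STATE,   // replayed for existing entries when includeCurrentState is used
      UPDATE_RETRIED   // the command was retried (e.g. after a topology change), so the
                       // old value may be stale -- send the full new value, not a diff
   }

A delta-producing converter can then fall back to shipping the whole new value whenever it sees UPDATE_RETRIED, which covers exactly the OutdatedTopologyException corner case raised earlier in the thread.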
From emmanuel at hibernate.org Tue Sep 23 08:18:12 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 23 Sep 2014 14:18:12 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> Message-ID: <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> > On 22 sept. 2014, at 19:23, William Burns wrote: > > On Fri, Sep 19, 2014 at 12:39 PM, Emmanuel Bernard > wrote: >> >>> On 19 Sep 2014, at 17:09, William Burns wrote: >>> >>> Comments regarding embedded usage are inline. I am not quite sure on >>> the hot rod client ones. >>> >>> On Thu, Sep 18, 2014 at 12:24 PM, Emmanuel Bernard >>> wrote: >>>> >>>> >>>> That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion). >>> >>> I agree the oldValue is required for most efficient usage. From the >>> oldValue though it seems you can infer what operation it is. Create >>> has null oldValue and delete has null newValue I would think. >> >> well except when I do cache.put(key, null) but that might not matter. > > We don't allow a null value to be passed to put. > >> The other use case is the includeInitialState where the old value would be either null or the same as the new one? Could a user detect that state based on old == new? > > It would have prevValue as null in this case. > >> At any rate the programming model becomes quite awkward and rely on strong understanding, I?d prefer to stick an enum showing the transition explicitly to make things easier. > > I am not sold on this as it seems pretty trivial to decipher which > operation is which and the information would be present on the > javadocs as well. I very strongly disagree. Cf the other thread with Radim 's comment on topology error. And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>> >>>> ## includeCurrentState and very narrow filtering >>>> >>>> The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. >>>> But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. >>> >>> The filter and converter are applied while doing the current state so >>> it should be performant in that case. >> >> I don?t understand, the code still has to look all key/value pairs of a given node (at least the primary ones) and send them through the KVF / Converter logic. So you need to unmarshal all of them as well as load from cachestore the passivated ones. Correct? That?s the cost I am describing here. > > Sorry I didn't realize you were referring to an indexed query. Yes > that could improve performance of the initial retrieval. I am not as > familiar with indexed query, but I don't know if it lends itself well > to the individual filtering that is done as each event is fired. I > think this needs to be discussed/investigated further. Ok. How do we go about this ? JIRA ? Different email thread? 
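For the record, the "prime the listener with a query instead of iterating everything" flow being discussed would look roughly like this on the consuming side. The cache and query calls are left as commented placeholders because the exact API is what still needs to be agreed; the part that matters is the ordering -- register the listener first, apply the query results, then drain the buffered events -- so that no concurrent update is lost (and duplicates must be tolerated).

   // Sketch only: buffering helper for a listener primed by a query.
   public class QueryPrimedListener {

      private final java.util.ArrayDeque<Object> pending = new java.util.ArrayDeque<>();
      private boolean primed = false;

      // Listener callback: buffer until the query results have been applied.
      public synchronized void onEvent(Object event) {
         if (!primed) {
            pending.add(event);
         } else {
            applyToLocalState(event);
         }
      }

      // Called once, after 1) the clustered listener was registered and
      // 2) the (ideally indexed) query results were fed into applyToLocalState().
      public synchronized void markPrimed() {
         while (!pending.isEmpty()) {
            applyToLocalState(pending.poll());
         }
         primed = true;
      }

      private void applyToLocalState(Object event) {
         // update the matching set / aggregation; must be idempotent, since an
         // entry can arrive both via the query and via a buffered event
      }
   }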
From mmarkus at redhat.com Tue Sep 23 08:53:18 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 15:53:18 +0300 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> Message-ID: <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >> I am not sold on this as it seems pretty trivial to decipher which >> operation is which and the information would be present on the >> javadocs as well. > > I very strongly disagree. Cf the other thread with Radim 's comment on topology error. > And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From emmanuel at hibernate.org Tue Sep 23 09:27:39 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 23 Sep 2014 15:27:39 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> Message-ID: <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> On 23 Sep 2014, at 14:53, Mircea Markus wrote: > On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: > >>> I am not sold on this as it seems pretty trivial to decipher which >>> operation is which and the information would be present on the >>> javadocs as well. >> >> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) > > Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. > Will, what would be the overall impact on the A If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. 
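A quick sketch of how the two points above could fit together: one explicit callback per operation, plus an abstract adapter with no-op defaults so that existing implementations keep compiling if a callback is added later. The names are illustrative only, not the actual 7.0 API:

public interface PerOperationFilter<K, V> {
   boolean onCreate(K key, V newValue);
   boolean onModify(K key, V oldValue, V newValue);
   boolean onRemove(K key, V oldValue);
}

// Default no-op (reject everything) adapter; implementations extend it and
// override only the callbacks they care about.
abstract class PerOperationFilterAdapter<K, V> implements PerOperationFilter<K, V> {
   @Override public boolean onCreate(K key, V newValue) { return false; }
   @Override public boolean onModify(K key, V oldValue, V newValue) { return false; }
   @Override public boolean onRemove(K key, V oldValue) { return false; }
}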
From mmarkus at redhat.com Tue Sep 23 09:31:17 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 16:31:17 +0300 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> Message-ID: <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: > > On 23 Sep 2014, at 14:53, Mircea Markus wrote: > >> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >> >>>> I am not sold on this as it seems pretty trivial to decipher which >>>> operation is which and the information would be present on the >>>> javadocs as well. >>> >>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >> >> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. >> Will, what would be the overall impact on the A > > If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. KeyValueFilter was introduced in 7.0, or other backward compatibility problem you have in mind? Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mudokonman at gmail.com Tue Sep 23 09:39:26 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 23 Sep 2014 09:39:26 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 9:31 AM, Mircea Markus wrote: > > On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: > >> >> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >> >>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>> >>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>> operation is which and the information would be present on the >>>>> javadocs as well. >>>> >>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>> >>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. 
I like the name as well :) The only thing that I dislike about the extra methods is the fact that it isn't a Functional interface, which would be nice to have when we ever move to Java 8, but that may be thinking too far into the future :P >>> Will, what would be the overall impact on the A The biggest part is the usage with the cluster iterator. Currently the Listener uses the same filter that it is provided to also do the iteration. If we want to go down the line of having the extra interface(s), which overall I do like, then I am thinking we may want to change the Listener annotation to no longer have an includeCurrentState parameter and instead add a new method to the addListener method of Cache that takes a KeyValueFilter and the new UpdateFilter (as well as the 2 converters). I can then add in 2 bridge implementations so that you don't have to implement the other if your implementation can handle both types. Also from the other post it seems that I should add the retry boolean to all the appropriate methods so that you can have a chance to detect if an update was missed. Unless this seems to cumbersome? >> >> If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. > > KeyValueFilter was introduced in 7.0, or other backward compatibility problem you have in mind? I believe Emmanuel is referring to if we added additional operations to the filter, but I am not sure what other operations we would want to add to it. If anything we would probably make a different type of filter specific to its use case. > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Tue Sep 23 09:42:24 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 23 Sep 2014 09:42:24 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 9:39 AM, William Burns wrote: > On Tue, Sep 23, 2014 at 9:31 AM, Mircea Markus wrote: >> >> On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: >> >>> >>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>> >>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>> >>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>> operation is which and the information would be present on the >>>>>> javadocs as well. >>>>> >>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>> >>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. 
> > I like the name as well :) The only thing that I dislike about the > extra methods is the fact that it isn't a Functional interface, which > would be nice to have when we ever move to Java 8, but that may be > thinking too far into the future :P > >>>> Will, what would be the overall impact on the A > > The biggest part is the usage with the cluster iterator. Currently > the Listener uses the same filter that it is provided to also do the > iteration. If we want to go down the line of having the extra > interface(s), which overall I do like, then I am thinking we may want > to change the Listener annotation to no longer have an > includeCurrentState parameter and instead add a new method to the > addListener method of Cache that takes a KeyValueFilter and the new > UpdateFilter (as well as the 2 converters). I can then add in 2 > bridge implementations so that you don't have to implement the other > if your implementation can handle both types. Also from the other > post it seems that I should add the retry boolean to all the > appropriate methods so that you can have a chance to detect if an > update was missed. Unless this seems to cumbersome? > >>> >>> If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. >> >> KeyValueFilter was introduced in 7.0, or other backward compatibility problem you have in mind? > > I believe Emmanuel is referring to if we added additional operations > to the filter, but I am not sure what other operations we would want > to add to it. If anything we would probably make a different type of > filter specific to its use case. Reread the other email again and actually it could be used to show different permutations like the retry case (eq RETRIED_CREATE), but it seems like the code in that one method would get pretty complex pretty fast having to handle all the various cases. > >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Tue Sep 23 09:56:16 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 23 Sep 2014 15:56:16 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: <9BD6976B-B611-49D3-B975-5C564CF3D21D@hibernate.org> On 23 Sep 2014, at 15:39, William Burns wrote: >>> >>> If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. >> >> KeyValueFilter was introduced in 7.0, or other backward compatibility problem you have in mind? > > I believe Emmanuel is referring to if we added additional operations > to the filter, but I am not sure what other operations we would want > to add to it. If anything we would probably make a different type of > filter specific to its use case. 
Right, say at some point you offer a cluster wide topology change event and send the keys involved. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140923/f0621da1/attachment-0001.html From mmarkus at redhat.com Tue Sep 23 09:57:12 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 16:57:12 +0300 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Sep 23, 2014, at 16:39, William Burns wrote: > On Tue, Sep 23, 2014 at 9:31 AM, Mircea Markus wrote: >> >> On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: >> >>> >>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>> >>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>> >>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>> operation is which and the information would be present on the >>>>>> javadocs as well. >>>>> >>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>> >>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. > > I like the name as well :) The only thing that I dislike about the > extra methods is the fact that it isn't a Functional interface, which > would be nice to have when we ever move to Java 8, but that may be > thinking too f?ar into the future :P Agreed, OTOH having a functional interface implemented with a switch statement around the op type wouldn't be too nice either. > >>>> Will, what would be the overall impact on the A > > The biggest part is the usage with the cluster iterator. Currently > the Listener uses the same filter that it is provided to also do the > iteration. If we want to go down the line of having the extra > interface(s), which overall I do like, then I am thinking we may want > to change the Listener annotation to no longer have an > includeCurrentState parameter and instead add a new method to the > addListener method of Cache that takes a KeyValueFilter and the new > UpdateFilter (as well as the 2 converters). Do we still want to keep the KeyValueFilter method or replace it entirely with the UpdateFilter version? > I can then add in 2 > bridge implementations so that you don't have to implement the other > if your implementation can handle both types. Also from the other > post it seems that I should add the retry boolean to all the > appropriate methods so that you can have a chance to detect if an > update was missed. Unless this seems to cumbersome? 
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From emmanuel at hibernate.org Tue Sep 23 09:59:35 2014
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Tue, 23 Sep 2014 15:59:35 +0200
Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners
In-Reply-To:
References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com>
Message-ID:

On 23 Sep 2014, at 15:42, William Burns wrote:
> Reread the other email again and actually it could be used to show
> different permutations like the retry case (eq RETRIED_CREATE), but it
> seems like the code in that one method would get pretty complex pretty
> fast having to handle all the various cases.

If the combined method becomes complicated for a specific complex filter, nothing prevents the implementor from splitting it into different methods

public boolean filter(Key key, Value oldValue, Value newValue, EventType eventType, Metadata metadata) {
    switch(eventType) {
        case CREATE:
            return filterForCreate(...);
        ...
        default:
            throw new AssertionFailure("oops: " + eventType);
    }
}

private boolean filterForCreate(...) {
    ...
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140923/6834e5ac/attachment.html

From emmanuel at hibernate.org Tue Sep 23 10:06:35 2014
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Tue, 23 Sep 2014 16:06:35 +0200
Subject: [infinispan-dev] putAll, getAll and optimization fruits
Message-ID: <027BBC9B-E1B0-4D35-894B-D51009805F61@hibernate.org>

We have had a short discussion on putAll, getAll. I'm pushing the info here

>>> getAll and putAll are nothing more than a glorified sequential call to get / put in a for loop.
>>> The execution of gets and puts is not done in parallel, so it costs o(n) network trips in latency rather than o(1).
>>> How could we improve it ?

>> Historically getall and putall were not intended as hotrod operations and were actually implemented only to honor the map interface.
>> The most we can do RPC wise to optimize these operations is to group all keys mapping to the same server node into a single request. That would reduce the number of RPCs, but in big O talk it would still be O(numKeys). Executing them in parallel sounds like a good idea to me. Curious to hear other thoughts on this. Galder?

>
> So there are actually three improvements:
>
> * getall and putall as hotrod operations (no matter how that will be implemented by the server itself)
> Galder, is it possible in current HR design to execute requests in parallel, without consuming one thread for each node? That was something my async client should solve, but afaik it's not possible without substantial changes, and we were rather targeting that for future JDK 8 only client.

Doing that is 1/2 of the story, because as I've already explained in Wolf's efforts around putAll, the Netty server implementation just calls to Infinispan synchronous cache operations, which often block. So, using an async client will get you 1/2 of the job done. The way to limit the blocking is by limiting that blocking, e.g. splitting keys and sending gets to nodes that own it would mean the gets get resolved locally, similar thing with puts but as Bela/Pedro found out, there could be some blocking still.
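A client-side sketch of the key-splitting idea described just above: group the requested keys by owning server and issue one batched request per server in parallel. The routing function and the per-server fetch are deliberately left abstract, since they depend on Hot Rod client internals not shown in this thread; nothing below is the real client API.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.function.Function;

public class GroupedGetAll<K, V> {

   // Stand-in for a per-server "native getAll" operation.
   public interface PerServerFetcher<K, V> {
      Map<K, V> fetch(String server, Set<K> keys);
   }

   public Map<K, V> getAll(Set<K> keys,
                           Function<K, String> ownerOf,   // consistent-hash routing
                           PerServerFetcher<K, V> fetcher,
                           ExecutorService pool) throws Exception {
      // 1. Group keys by the server that owns them, so each server is contacted at most once.
      Map<String, Set<K>> byServer = new HashMap<>();
      for (K key : keys) {
         byServer.computeIfAbsent(ownerOf.apply(key), s -> new HashSet<>()).add(key);
      }
      // 2. Fire the per-server batches in parallel: latency is roughly one round trip,
      //    instead of one round trip per key.
      List<Future<Map<K, V>>> futures = new ArrayList<>();
      for (Map.Entry<String, Set<K>> batch : byServer.entrySet()) {
         futures.add(pool.submit((Callable<Map<K, V>>) () ->
               fetcher.fetch(batch.getKey(), batch.getValue())));
      }
      // 3. Merge the partial results.
      Map<K, V> result = new HashMap<>();
      for (Future<Map<K, V>> f : futures) {
         result.putAll(f.get());
      }
      return result;
   }
}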
> Anyway, we could route the HR request to the node with most matching keys.

I don't think that's a good idea. The best option is to take all keys, divide them by server according to hashing and send parallel native Hot Rod getAll operations containing the N keys requested to each server. The same thing for putAll.

I've created a JIRA to get getAll/putAll implemented in the Hot Rod 2.0 timeframe: https://issues.jboss.org/browse/ISPN-4752
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140923/bff88c11/attachment.html

From emmanuel at hibernate.org Tue Sep 23 10:15:54 2014
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Tue, 23 Sep 2014 16:15:54 +0200
Subject: [infinispan-dev] putAll, getAll and optimization fruits
In-Reply-To: <027BBC9B-E1B0-4D35-894B-D51009805F61@hibernate.org>
References: <027BBC9B-E1B0-4D35-894B-D51009805F61@hibernate.org>
Message-ID: <6B46DA8E-A0E8-4D6A-A7D6-7481347E670F@hibernate.org>

On 23 Sep 2014, at 16:06, Emmanuel Bernard wrote:
> Doing that is 1/2 of the story, because as I've already explained in Wolf's efforts around putAll, the Netty server implementation just calls to Infinispan synchronous cache operations, which often block. So, using an async client will get you 1/2 of the job done. The way to limit the blocking is by limiting that blocking, e.g. splitting keys and sending gets to nodes that own it would mean the gets get resolved locally, similar thing with puts but as Bela/Pedro found out, there could be some blocking still.

Galder, are you saying that I cannot execute put operations in parallel on the same node for the same transaction?
I.e. could the netty server use a bounded queue + thread pool to parallelize put operations in case of putAll (the other case does not matter).

Also, even if we keep the server synchronous and split the payload per server to run that in parallel, we will still gain enough I think:
- we avoid i-1 times the latency between the client and a specific node (i is the number of keys going to a specific node)
- with the key evenly distributed, you divide the overall latency by O(m) where m is the number of servers.

Something like that :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140923/713a27fe/attachment.html

From mudokonman at gmail.com Tue Sep 23 10:11:27 2014
From: mudokonman at gmail.com (William Burns)
Date: Tue, 23 Sep 2014 10:11:27 -0400
Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners
In-Reply-To: <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org>
References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org>
Message-ID:

On Tue, Sep 23, 2014 at 8:18 AM, Emmanuel Bernard wrote:
>
>
>
>> On 22 sept. 2014, at 19:23, William Burns wrote:
>>
>> On Fri, Sep 19, 2014 at 12:39 PM, Emmanuel Bernard
>> wrote:
>>>
>>>> On 19 Sep 2014, at 17:09, William Burns wrote:
>>>>
>>>> Comments regarding embedded usage are inline. I am not quite sure on
>>>> the hot rod client ones.
>>>>
>>>> On Thu, Sep 18, 2014 at 12:24 PM, Emmanuel Bernard
>>>> wrote:
>>>>>
>>>>>
>>>>> That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion).
>>>>
>>>> I agree the oldValue is required for most efficient usage.
From the >>>> oldValue though it seems you can infer what operation it is. Create >>>> has null oldValue and delete has null newValue I would think. >>> >>> well except when I do cache.put(key, null) but that might not matter. >> >> We don't allow a null value to be passed to put. >> >>> The other use case is the includeInitialState where the old value would be either null or the same as the new one? Could a user detect that state based on old == new? >> >> It would have prevValue as null in this case. >> >>> At any rate the programming model becomes quite awkward and rely on strong understanding, I?d prefer to stick an enum showing the transition explicitly to make things easier. >> >> I am not sold on this as it seems pretty trivial to decipher which >> operation is which and the information would be present on the >> javadocs as well. > > I very strongly disagree. Cf the other thread with Radim 's comment on topology error. > And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) > >>>>> >>>>> ## includeCurrentState and very narrow filtering >>>>> >>>>> The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. >>>>> But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. >>>> >>>> The filter and converter are applied while doing the current state so >>>> it should be performant in that case. >>> >>> I don?t understand, the code still has to look all key/value pairs of a given node (at least the primary ones) and send them through the KVF / Converter logic. So you need to unmarshal all of them as well as load from cachestore the passivated ones. Correct? That?s the cost I am describing here. >> >> Sorry I didn't realize you were referring to an indexed query. Yes >> that could improve performance of the initial retrieval. I am not as >> familiar with indexed query, but I don't know if it lends itself well >> to the individual filtering that is done as each event is fired. I >> think this needs to be discussed/investigated further. > > Ok. How do we go about this ? JIRA ? Different email thread? I would suggest both. We can probably also get some time to discuss this at the F2F in a few months, unless you think this is more critical? I am just thinking this feature might be a bit too late to get into 7.0 at this point. 
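Related to the point above that the filter and converter are applied while streaming the current state: the payload for each event (initial or live) can also be cut down by projecting only the field the listener needs, along the lines of the "send only the age" suggestion elsewhere in this thread. A minimal, purely illustrative sketch; the convert signature below is simplified and is not the exact Infinispan Converter interface:

// Projects a full Customer value down to the single field the listener cares about,
// so neither the initial-state stream nor live events ship whole objects.
public class AgeProjectionConverter {

   public static final class Customer {
      final String name;
      final int age;
      public Customer(String name, int age) { this.name = name; this.age = age; }
   }

   // Simplified stand-in for a convert(key, value, metadata) callback.
   public Integer convert(String key, Customer value) {
      return value == null ? null : value.age;
   }
}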
> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Tue Sep 23 10:27:51 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 23 Sep 2014 10:27:51 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 9:57 AM, Mircea Markus wrote: > > On Sep 23, 2014, at 16:39, William Burns wrote: > >> On Tue, Sep 23, 2014 at 9:31 AM, Mircea Markus wrote: >>> >>> On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: >>> >>>> >>>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>>> >>>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>>> >>>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>>> operation is which and the information would be present on the >>>>>>> javadocs as well. >>>>>> >>>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>>> >>>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. >> >> I like the name as well :) The only thing that I dislike about the >> extra methods is the fact that it isn't a Functional interface, which >> would be nice to have when we ever move to Java 8, but that may be >> thinking too f?ar into the future :P > > Agreed, OTOH having a functional interface implemented with a switch statement around the op type wouldn't be too nice either. > >> >>>>> Will, what would be the overall impact on the A >> >> The biggest part is the usage with the cluster iterator. Currently >> the Listener uses the same filter that it is provided to also do the >> iteration. If we want to go down the line of having the extra >> interface(s), which overall I do like, then I am thinking we may want >> to change the Listener annotation to no longer have an >> includeCurrentState parameter and instead add a new method to the >> addListener method of Cache that takes a KeyValueFilter and the new >> UpdateFilter (as well as the 2 converters). > > Do we still want to keep the KeyValueFilter method or replace it entirely with the UpdateFilter version? In this case I would assume this new UpdateFilter would be completely separate (doesn't extend) and would not contain the KeyValueFilter method. Also I would think UpdateFilter would live only in the notifications package as it doesn't make much sense outside of this context (the others would stay in filter package). > >> I can then add in 2 >> bridge implementations so that you don't have to implement the other >> if your implementation can handle both types. 
Also from the other >> post it seems that I should add the retry boolean to all the >> appropriate methods so that you can have a chance to detect if an >> update was missed. Unless this seems to cumbersome? > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Tue Sep 23 10:36:56 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 17:36:56 +0300 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> Message-ID: On Sep 23, 2014, at 17:11, William Burns wrote: >>> >>> Sorry I didn't realize you were referring to an indexed query. Yes >>> that could improve performance of the initial retrieval. I am not as >>> familiar with indexed query, but I don't know if it lends itself well >>> to the individual filtering that is done as each event is fired. I >>> think this needs to be discussed/investigated further. >> >> Ok. How do we go about this ? JIRA ? Different email thread? > > I would suggest both. We can probably also get some time to discuss > this at the F2F in a few months, unless you think this is more > critical? I am just thinking this feature might be a bit too late to > get into 7.0 at this point. +1. This is a good performance optimization. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Tue Sep 23 10:38:20 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 23 Sep 2014 17:38:20 +0300 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Sep 23, 2014, at 17:27, William Burns wrote: >>>>>> Will, what would be the overall impact on the A >>> >>> The biggest part is the usage with the cluster iterator. Currently >>> the Listener uses the same filter that it is provided to also do the >>> iteration. If we want to go down the line of having the extra >>> interface(s), which overall I do like, then I am thinking we may want >>> to change the Listener annotation to no longer have an >>> includeCurrentState parameter and instead add a new method to the >>> addListener method of Cache that takes a KeyValueFilter and the new >>> UpdateFilter (as well as the 2 converters). >> >> Do we still want to keep the KeyValueFilter method or replace it entirely with the UpdateFilter version? > > In this case I would assume this new UpdateFilter would be completely > separate (doesn't extend) and would not contain the KeyValueFilter > method. Also I would think UpdateFilter would live only in the > notifications package as it doesn't make much sense outside of this > context (the others would stay in filter package). 
that's how I thought about it as well

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From mmarkus at redhat.com Tue Sep 23 10:42:22 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Tue, 23 Sep 2014 17:42:22 +0300
Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners
In-Reply-To:
References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com>
Message-ID: <6F419DA8-749B-4EFA-91CC-EF11DB16459D@redhat.com>

On Sep 23, 2014, at 16:59, Emmanuel Bernard wrote:
>> Reread the other email again and actually it could be used to show
>> different permutations like the retry case (eq RETRIED_CREATE), but it
>> seems like the code in that one method would get pretty complex pretty
>> fast having to handle all the various cases.
>
> If the combined method becomes complicated for a specific complex filter, nothing prevents the implementor from splitting it into different methods
>
> public boolean filter(Key key, Value oldValue, Value newValue, EventType eventType, Metadata metadata) {
>     switch(eventType) {
>         case CREATE:
>             return filterForCreate(...);
>         ...
>         default:
>             throw new AssertionFailure("oops: " + eventType);
>     }
> }
>
> private boolean filterForCreate(...) {
>     ...
> }

We can provide this as a default abstract impl for the Functional interface

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From galder at redhat.com Tue Sep 23 12:12:36 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Tue, 23 Sep 2014 18:12:36 +0200
Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners
In-Reply-To:
References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org>
Message-ID: <35C618EF-0A4D-4EAB-A7AF-83B3DD0DBBC5@redhat.com>

On 22 Sep 2014, at 19:23, William Burns wrote:
> On Fri, Sep 19, 2014 at 12:39 PM, Emmanuel Bernard
> wrote:
>>
>> On 19 Sep 2014, at 17:09, William Burns wrote:
>>
>>>
>
>> At any rate the programming model becomes quite awkward and rely on strong understanding, I'd prefer to stick an enum showing the transition explicitly to make things easier.
>
> I am not sold on this as it seems pretty trivial to decipher which
> operation is which and the information would be present on the
> javadocs as well.

I'm with Emmanuel on this. I'd much rather avoid relying on whether something is null/not-null and instead rely on a more typesafe solution. In fact, for remote listeners, org.infinispan.client.hotrod.event.ClientEvent has a type that allows the client to detect the type of the event received, independent of what the value(s) contain.
Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Tue Sep 23 12:17:29 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 23 Sep 2014 18:17:29 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> Message-ID: <6912B1A0-36D9-4363-B4FC-05AC0ACD7152@redhat.com> On 18 Sep 2014, at 18:24, Emmanuel Bernard wrote: > Hi all, > > I have had a good exchange on how someone would use clustered / remote listeners to do custom continuous query features. > > I have a few questions and requests to make this fully and easily doable > > ## Value as bytes or as objects > > Assuming a Hot Rod based usage and protobuf as the serialization layer. What are KeyValueFilter and Converter seeing? > I assume today the bytes are unmarshalled and the Java object is provided to these interfaces. > In a protobuf based storage, does that mean that the user must create the Java objects out of a protobuf compiler and deploy these classes in the classpath of each server node? > Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with lss problems related to the classloader. Following on my reply to this, you can kinda achieve this already today with a little hack. If you plug a converter, you?ll get the Java object as parameter and you can re-convert it to binary payload and send it to the client listener which does what it needs to do. Of course, less performant and still has potential classloader issues, but just mention it. Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Tue Sep 23 12:20:35 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 23 Sep 2014 18:20:35 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> Message-ID: <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> On 23 Sep 2014, at 14:53, Mircea Markus wrote: > On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: > >>> I am not sold on this as it seems pretty trivial to decipher which >>> operation is which and the information would be present on the >>> javadocs as well. >> >> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) > > Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. ^ Hmmmmm, not sure I like that. If you look at the remote event blog posts, you?ll see that I use create/modify/remove annotations and then the parameter to the callback varies depending on whether you had converter applied to it or not. IOW, without a converter, a created event parameter is a ClientCacheEntryCreatedEvent, whereas with a converter, the parameter is a ClientCacheEntryCustomEvent. 
Two different types of events for the same event type. If you did it with explicit methods, you'd have to duplicate them for custom events.

> Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners.

I don't like the name actually. I associate update with modifications, and in similar vein, inserts with creation and delete with removals.

> Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator.
>
> Cheers,
> --
> Mircea Markus
> Infinispan lead (www.infinispan.org)
>
>
>
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From galder at redhat.com Tue Sep 23 12:22:11 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Tue, 23 Sep 2014 18:22:11 +0200
Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners
In-Reply-To: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org>
References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org>
Message-ID: <3A3DA273-6136-4258-884C-95467857BBD2@redhat.com>

Hi Emmanuel,

Apologies for the delay getting back on the topic. Replies below from the remote listeners POV...

On 18 Sep 2014, at 18:24, Emmanuel Bernard wrote:
> Hi all,
>
> I have had a good exchange on how someone would use clustered / remote listeners to do custom continuous query features.
>
> I have a few questions and requests to make this fully and easily doable
>
> ## Value as bytes or as objects
>
> Assuming a Hot Rod based usage and protobuf as the serialization layer. What are KeyValueFilter and Converter seeing?
> I assume today the bytes are unmarshalled and the Java object is provided to these interfaces.

Yes. For protobuf serialization to be used, the client would need a custom Marshaller plugged that converts an Object into protobuf bytes, and this same marshaller should be plugged server side so that the server can do the opposite translation from protobuf bytes to Object. Plugging the server with a marshaller for the filter/converter is not yet there but would be once [1] is in place.

[1] https://issues.jboss.org/browse/ISPN-4734

> In a protobuf based storage, does that mean that the user must create the Java objects out of a protobuf compiler and deploy these classes in the classpath of each server node?

Yes, those classes could be part of the filter/converter/marshaller deployment jar.

> Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with less problems related to the classloader.

^ I don't see why not. Bear in mind that filter/converter callbacks happen server side, but as long as implementations can make out what they need from those byte arrays, all good IMO. I'll create a JIRA to track this. Not sure it could be done wo/ a configuration option but I'll try to do so if possible.

> Thoughts?

So far so good :)

> ## Synced listeners
>
> In a transactional clustered listener marked as sync. Does the transaction commit and then wait for the relevant clustered listeners to proceed before returning the hand to the Tx client? Or is there something else going on?
>
> ## oldValue and newValue
>
> I understand why the oldValue was not provided in the initial work.
> It requires to send more data across the network and at least double the number of values unmarshalled.

Yes, but to clarify, the cost is on the clustered listener side to ship the old values to the node where the clustered listener runs, which in turn feeds to the cluster listener delegate and server-side remote filter/converter.

> But for continuous queries, being able to compare the old and the new value is critical to reduce the number of events sent to the listener.
>
> Imagine the following use case. A listener exposes the average age for a certain type of customer. You would implement it the following way.
>
> 1. Add a KeyValueFilter that
> - upon creation, filter out the customers of the wrong type
> - upon update, keep customers that
> - *were* of the right type but no longer are
> - were not of the right type but now *are*
> - remain of the right type and whose age has changed
> - upon deletion, keep customers that *were* of the right type
>
> 2. Converter
> In the converter, one could send the whole customer but it would be more efficient to only send the age of the customer as well as whether it is added to or removed from the matching customers
> - upon creation, you send the customer age and mark it as addition
> - upon deletion, you send the customer age and mark it as deletion
> - upon update
> - if the customer was of the right type but no longer is, send the age as well as a deletion flag
> - if the customer was not of the right type but now is, send the age as well as an addition flag
> - if the customer age has changed, send the difference with a modification flag
>
> 3. The listener then needs to keep the total sum of all ages as well as the total number of customers of the right type. Based on the sent events, it can adjust these two counters.
>
> That requires us to be able to provide the old and new value to the KeyValueFilter and the Converter interface as well as the type of event (creation, update, deletion).
>
> If you keep the existing interfaces and their data, the data sent and the memory consumed becomes much much bigger. I leave it as an exercise but I think you need to:
> - send *all* remove and update events regardless of the value (essentially no KeyValueFilter)
> - in the listener, keep a list of *all* matching keys so that you know if a new event is about data that was already matching your criteria or not and act accordingly.

Yup, that's kinda the workaround I suggested to Radim in an earlier email.

> BTW, you need the old and new value even if your listener returns actual matching results instead of an aggregation. More or less for the same reasons.
>
> Continuous query is about the most important use case for remote and clustered listeners and I think we should address it properly and as efficiently as possible. Adding continuous query to Infinispan will then "simply" be a matter of agreeing on the query syntax and implementing the predicates as smartly as possible.
>
> With the use case I describe, I think the best approach is to merge the KVF and Converter into a single Listener like interface that is able to send or silence an event payload. But that's a guestimate.
> Because oldValue / newValue implies an unmarshalling overhead we might want to make it an annotation based flag on the class that is executed on each node (somewhat similar to the settings hosted on @Listener).

The majority of work here falls on the clustered listener side, to send the old value when it needs to do it.
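To make the average-age use case above a bit more tangible, here is a compact sketch of the listener-side bookkeeping only: the server-side converter is assumed to have already reduced each event to a small delta (added / removed / age changed by some amount), so the listener just maintains a running sum and count. The Delta shape and all names are invented for illustration; they are not an existing Infinispan type.

// Listener-side aggregation for the "average age of matching customers" use case.
// The server-side converter is assumed to emit one Delta per relevant event.
public class AverageAgeAggregator {

   public enum Kind { ADDED, REMOVED, AGE_CHANGED }

   public static final class Delta {
      final Kind kind;
      final int value;   // age for ADDED/REMOVED, age difference for AGE_CHANGED
      public Delta(Kind kind, int value) { this.kind = kind; this.value = value; }
   }

   private long totalAge;
   private long count;

   public synchronized void onDelta(Delta delta) {
      switch (delta.kind) {
         case ADDED:       totalAge += delta.value; count++; break;
         case REMOVED:     totalAge -= delta.value; count--; break;
         case AGE_CHANGED: totalAge += delta.value;          break;
      }
   }

   public synchronized double averageAge() {
      return count == 0 ? 0.0 : (double) totalAge / count;
   }
}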
From a remote eventing perspective, there?s little to be done other than bridge over to what cluster listener provides us. > ## includeCurrentState and very narrow filtering > > The existing approach is fine (send a create event notif for all existing keys and queue changes in the mean time) as long as the listener plans to consume most of these events. > But in case of a big data grid, with a lot of passivated entries, the cost would become non negligible. > > An alternative approach is to first do a query matching the elements the listener is interested in and queue up the events until the query is fully processed. Can a listener access a cache and do a query? Should we offer such option in a more packaged way? > > For a listener that is only interested in keys whose value city contains Springfield, Virginia, the gain would be massive. That sounds like a good idea, though not sure how that would work from a remote query perspective (Adrian?). With the little knowledge I have on that, I?d imagine that the remote client could maybe pass an optional query of some sort when adding the listener, with this being bundle inside the addlistener HR operation, and then somehow have it plugged into clustered listeners. > ## Remote listener and non Java HR clients > > Does the API of non Java HR clients support the enlistements of listeners and attach registered keyValueFilter / Converter? Or is that planned? Just curious. AFAIK, only the Java HR client has those implemented so far. If language experts want to help out on other with other impls, that?d be awesome :) > > Emmanuel > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Tue Sep 23 13:02:28 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 23 Sep 2014 13:02:28 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 12:20 PM, Galder Zamarre?o wrote: > > On 23 Sep 2014, at 14:53, Mircea Markus wrote: > >> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >> >>>> I am not sold on this as it seems pretty trivial to decipher which >>>> operation is which and the information would be present on the >>>> javadocs as well. >>> >>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >> >> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. > > ^ Hmmmmm, not sure I like that. If you look at the remote event blog posts, you?ll see that I use create/modify/remove annotations and then the parameter to the callback varies depending on whether you had converter applied to it or not. 
IOW, without a converter, a created event parameter is a ClientCacheEntryCreatedEvent, whereas with a converter, the parameter is a ClientCacheEntryCustomEvent. Two different types of events for the same event type. If you did it with explicit methods, you?d have to duplicate them for custom events. This is for embedded and is done in the filter and converter (not in the listener). Unless I am missing something this shouldn't directly affect the client methods. > >> Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. > > I don?t like the name actually. I associate update with modifications, and in similar vein, inserts with creation and delete with removals. What would you suggest? A few things I thought of quickly: CacheOperationFilter, CacheEventFilter, CacheWriteFilter - the only reason I preface Cache is because we have CacheManager events as well. > >> Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator. >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Tue Sep 23 13:04:10 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 23 Sep 2014 13:04:10 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <35C618EF-0A4D-4EAB-A7AF-83B3DD0DBBC5@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <35C618EF-0A4D-4EAB-A7AF-83B3DD0DBBC5@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 12:12 PM, Galder Zamarre?o wrote: > > On 22 Sep 2014, at 19:23, William Burns wrote: > >> On Fri, Sep 19, 2014 at 12:39 PM, Emmanuel Bernard >> wrote: >>> >>> On 19 Sep 2014, at 17:09, William Burns wrote: >>> >>>> >> >>> At any rate the programming model becomes quite awkward and rely on strong understanding, I?d prefer to stick an enum showing the transition explicitly to make things easier. >> >> I am not sold on this as it seems pretty trivial to decipher which >> operation is which and the information would be present on the >> javadocs as well. > > I?m with Emmanuel on this. I?d much rather avoid relying on whether something is null/not-null and instead rely on a more typesafe solution. In fact, for remote listeners, org.infinispan.client.hotrod.event.ClientEvent has a type that allows the client to detect the type of the event received, independent of what the value(s) contain. Yeah I think the idea I was going down was to do a hybrid approach where we have the single method (containing the enum type) and then provide an abstract class that does the method separation if an implementer wants some boilerplate code implemented already. 
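The hybrid approach described just above (one enum-carrying callback, plus an optional abstract class that fans it out into per-operation methods) could look roughly like this; again, every name here is made up for illustration and is not the actual API.

public abstract class EventTypeDispatchingFilter<K, V> {

   public enum Op { CREATE, MODIFY, REMOVE }

   // The single "functional style" entry point; this default implementation fans out
   // to the per-operation methods below, which subclasses override selectively.
   public boolean accept(K key, V oldValue, V newValue, Op op, boolean retried) {
      switch (op) {
         case CREATE: return onCreate(key, newValue, retried);
         case MODIFY: return onModify(key, oldValue, newValue, retried);
         case REMOVE: return onRemove(key, oldValue, retried);
         default: throw new IllegalStateException("Unknown op: " + op);
      }
   }

   protected boolean onCreate(K key, V newValue, boolean retried) { return false; }
   protected boolean onModify(K key, V oldValue, V newValue, boolean retried) { return false; }
   protected boolean onRemove(K key, V oldValue, boolean retried) { return false; }
}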
> > Cheers, > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue Sep 23 15:05:36 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 23 Sep 2014 20:05:36 +0100 Subject: [infinispan-dev] Assistance to write a custom CacheStore Message-ID: I noticed that we often have questions about how to implement a custom CacheStore. If someone had some time to write a nice guide for that, we might have some more luck in getting help to upgrade all the CacheStores which have been granted the status of "Abandonware" as defined by a user (!). https://issues.jboss.org/browse/ISPN-4751 From galder at redhat.com Wed Sep 24 02:16:04 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 24 Sep 2014 08:16:04 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> Message-ID: <4910DFC4-3D14-43A4-B3C8-EBD1C34D9E3E@redhat.com> On 23 Sep 2014, at 19:02, William Burns wrote: > On Tue, Sep 23, 2014 at 12:20 PM, Galder Zamarre?o wrote: >> >> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >> >>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>> >>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>> operation is which and the information would be present on the >>>>> javadocs as well. >>>> >>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>> >>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. >> >> ^ Hmmmmm, not sure I like that. If you look at the remote event blog posts, you?ll see that I use create/modify/remove annotations and then the parameter to the callback varies depending on whether you had converter applied to it or not. IOW, without a converter, a created event parameter is a ClientCacheEntryCreatedEvent, whereas with a converter, the parameter is a ClientCacheEntryCustomEvent. Two different types of events for the same event type. If you did it with explicit methods, you?d have to duplicate them for custom events. > > This is for embedded and is done in the filter and converter (not in > the listener). Unless I am missing something this shouldn't directly > affect the client methods. Ah ok. > >> >>> Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. >> >> I don?t like the name actually. I associate update with modifications, and in similar vein, inserts with creation and delete with removals. > > What would you suggest? A few things I thought of quickly: > CacheOperationFilter, CacheEventFilter, CacheWriteFilter - the only > reason I preface Cache is because we have CacheManager events as well. Maybe CacheEventFilter... 
> >> >>> Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator. >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From emmanuel at hibernate.org Wed Sep 24 02:38:23 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 24 Sep 2014 08:38:23 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <6912B1A0-36D9-4363-B4FC-05AC0ACD7152@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <6912B1A0-36D9-4363-B4FC-05AC0ACD7152@redhat.com> Message-ID: <6FD5D7D5-B812-4B51-ABBC-70EE5F45D1EE@hibernate.org> > On 23 sept. 2014, at 18:17, Galder Zamarre?o wrote: > > >> On 18 Sep 2014, at 18:24, Emmanuel Bernard wrote: >> >> Hi all, >> >> I have had a good exchange on how someone would use clustered / remote listeners to do custom continuous query features. >> >> I have a few questions and requests to make this fully and easily doable >> >> ## Value as bytes or as objects >> >> Assuming a Hot Rod based usage and protobuf as the serialization layer. What are KeyValueFilter and Converter seeing? >> I assume today the bytes are unmarshalled and the Java object is provided to these interfaces. >> In a protobuf based storage, does that mean that the user must create the Java objects out of a protobuf compiler and deploy these classes in the classpath of each server node? >> Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with lss problems related to the classloader. > > Following on my reply to this, you can kinda achieve this already today with a little hack. If you plug a converter, you?ll get the Java object as parameter and you can re-convert it to binary payload and send it to the client listener which does what it needs to do. Of course, less performant and still has potential classloader issues, but just mention it. > Right. I was trying to avoid the class loader issue on the server side. I.e. Not have to deploy my app classes on the grid. 
From rvansa at redhat.com Wed Sep 24 03:10:05 2014 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 24 Sep 2014 09:10:05 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <4910DFC4-3D14-43A4-B3C8-EBD1C34D9E3E@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> <4910DFC4-3D14-43A4-B3C8-EBD1C34D9E3E@redhat.com> Message-ID: <54226E4D.4010702@redhat.com> On 09/24/2014 08:16 AM, Galder Zamarre?o wrote: > On 23 Sep 2014, at 19:02, William Burns wrote: > >> On Tue, Sep 23, 2014 at 12:20 PM, Galder Zamarre?o wrote: >>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>> >>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>> >>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>> operation is which and the information would be present on the >>>>>> javadocs as well. >>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. >>> ^ Hmmmmm, not sure I like that. If you look at the remote event blog posts, you?ll see that I use create/modify/remove annotations and then the parameter to the callback varies depending on whether you had converter applied to it or not. IOW, without a converter, a created event parameter is a ClientCacheEntryCreatedEvent, whereas with a converter, the parameter is a ClientCacheEntryCustomEvent. Two different types of events for the same event type. If you did it with explicit methods, you?d have to duplicate them for custom events. >> This is for embedded and is done in the filter and converter (not in >> the listener). Unless I am missing something this shouldn't directly >> affect the client methods. > Ah ok. > >>>> Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. >>> I don?t like the name actually. I associate update with modifications, and in similar vein, inserts with creation and delete with removals. >> What would you suggest? A few things I thought of quickly: >> CacheOperationFilter, CacheEventFilter, CacheWriteFilter - the only >> reason I preface Cache is because we have CacheManager events as well. > Maybe CacheEventFilter... Does expiration trigger clustered listeners? 'Event' sounds quite generic, I would expect the EventFilter to be able to handle expirations, invalidations, evictions etc., maybe even reads as well (whether through separate methods or enums). ModificationFilter could be better (in TX I think we use term 'modifications' for all CUD ops), but we have already used CacheEntryModified for 'update'. > >>>> Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator. 
>>>> >>>> Cheers, >>>> -- >>>> Mircea Markus >>>> Infinispan lead (www.infinispan.org) >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From rory.odonnell at oracle.com Wed Sep 24 03:55:45 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Wed, 24 Sep 2014 08:55:45 +0100 Subject: [infinispan-dev] Analysis of infinispan-6.0.2 dependency on JDK-Internal APIs Message-ID: <54227901.8020509@oracle.com> Hi Galder, As part of the preparations for JDK 9, Oracle?s engineers have been analyzing open source projects like yours to understand usage. One area of concern involves identifying compatibility problems, such as reliance on JDK-internal APIs. Our engineers have already prepared guidance on migrating some of the more common usage patterns of JDK-internal APIs to supported public interfaces. The list is on the OpenJDK wiki [0], along with instructions on how to run the jdeps analysis tool yourself . As part of the ongoing development of JDK 9, I would like to encourage migration from JDK-internal APIs towards the supported Java APIs. I have prepared a report for your project rele ase infinispan-6.0.2 based on the jdeps output. The report is attached to this e-mail. For anything where your migration path is unclear, I would appreciate comments on the JDK-internal API usage patterns in the attached jdeps report - in particular comments elaborating on the rationale for them - either to me or on this mailing list. Finding suitable replacements for unsupported interfaces is not always straightforward, which is why I am reaching out to you early in the JDK 9 development cycle so you can give feedback about new APIs that may be needed to facilitate this exercise. Thank you in advance for any efforts and feedback helping us make JDK 9 better. Rgds,Rory [0] https://wiki.openjdk.java.net/display/JDK8/Java+Dependency+Analysis+Tool -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140924/f2635552/attachment-0002.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140924/f2635552/attachment-0003.html From afield at redhat.com Wed Sep 24 04:09:01 2014 From: afield at redhat.com (Alan Field) Date: Wed, 24 Sep 2014 04:09:01 -0400 (EDT) Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One In-Reply-To: <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> References: <1224160165.38268955.1410794857074.JavaMail.zimbra@redhat.com> <1427044870.49904197.1410798829973.JavaMail.zimbra@redhat.com> <2020170102.38769728.1410861848650.JavaMail.zimbra@redhat.com> Message-ID: <1469221175.43109002.1411546141729.JavaMail.zimbra@redhat.com> Hey, At this point, no one has disagreed with the default values that Dan suggested, so I am going to use these in the coming pull request. I am still checking the server defaults vs the XSD. The discussion about keeping the XSD and code synchronized should continue, and hopefully we'll come to a solution that is more automatic than the current state. Removing JAXB may have been the correct decision, but the replacement seems to have created a bigger maintenance problem. Thanks, Alan ----- Original Message ----- > From: "Alan Field" > To: "infinispan -Dev List" > Cc: "Dan Berindei" > Sent: Tuesday, September 16, 2014 12:04:08 PM > Subject: [infinispan-dev] Differences between default values in the XSD and the code...Part One > > Hey, > > I have been looking at the differences between default values in the XSD vs > the default values in the configuration builders. [1] I created a list of > differences and talked to Dan about his suggestion for the defaults. The > numbers in parentheses are Dan's suggestions, but he also asked me to post > here to get a wider set of opinions on these values. This list is based on > the code used in infinispan-core, so I still need to go through the server > code to check the default values there. > > 1) For locking, the code has concurrency level set to 32, and the XSD has > 1000 (32) > 2) For eviction: > a) the code has max entries set to -1, and the XSD has 10000 (-1) > b) the code has interval set to 60000, and the XSD has 5000 (60000) > 3) For async configuration: > a) the code has queue size set to 1000, and the XSD has 0 (0) > b) the code has queue flush interval set to 5000, and the XSD has 10 (10) > c) the code has remote timeout set to 15000, and the XSD has 17500 (15000) > 4) For hash, the code has number of segments set to 60, and the XSD has 80 > (60) > 5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has > 60000 (60000) > > Please let me know if you have any opinions on these default values, and also > if you have any ideas for avoiding these differences in the future. It seems > like there are two possibilities at this point: > > 1) Generating the XSD from the source code > 2) Creating a test case that parses the XSD, creates a cache, and verifies > the default values against the parsed values > 3) ??? 
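(As a rough illustration of option 2, something along these lines; the ParserRegistry entry points and the "defaults.xml" file name are assumptions from memory, and a real test would have to enumerate every attribute we care about:)

import static org.junit.Assert.assertEquals;

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.parsing.ParserRegistry;
import org.junit.Test;

public class DefaultValuesMatchTest {

   @Test
   public void xmlDefaultsMatchBuilderDefaults() throws Exception {
      // defaults.xml is assumed to declare a bare cache element so that only the XSD defaults apply.
      Configuration fromXml = new ParserRegistry().parseFile("defaults.xml")
            .getNamedConfigurationBuilders().get("default").build();
      Configuration fromBuilder = new ConfigurationBuilder().build();

      assertEquals(fromBuilder.locking().concurrencyLevel(), fromXml.locking().concurrencyLevel());
      assertEquals(fromBuilder.eviction().maxEntries(), fromXml.eviction().maxEntries());
      assertEquals(fromBuilder.clustering().hash().numSegments(), fromXml.clustering().hash().numSegments());
   }
}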
> > Thanks, > Alan > > [1] https://issues.jboss.org/browse/ISPN-4645 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From rory.odonnell at oracle.com Wed Sep 24 04:21:13 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Wed, 24 Sep 2014 09:21:13 +0100 Subject: [infinispan-dev] Analysis of infinispan-6.0.2 dependency on JDK-Internal APIs In-Reply-To: <54227901.8020509@oracle.com> References: <54227901.8020509@oracle.com> Message-ID: <54227EF9.6090403@oracle.com> Below is a text output of the report for infinispan-6.0.2. Rgds,Rory ------------------------------------------------------------------------ JDK Internal API Usage Report for infinispan-6.0.2.Final-all The OpenJDK Quality Outreach campaign has run a compatibility report to identify usage of JDK-internal APIs. Usage of these JDK-internal APIs could pose compatibility issues, as the Java team explained in 1996 . We have created this report to help you identify which JDK-internal APIs your project uses, what to use instead, and where those changes should go. Making these changes will improve your compatibility, and in some cases give better performance. Migrating away from the JDK-internal APIs now will give your team adequate time for testing before the release of JDK 9. If you are unable to migrate away from an internal API, please provide us with an explanation below to help us understand it better. As a reminder, supported APIs are determined by the OpenJDK's Java Community Process and not by Oracle. This report was generated by jdeps through static analysis of artifacts: it does not identify any usage of those APIs through reflection or dynamic bytecode. You may also run jdeps on your own if you would prefer. Summary of the analysis of the jar files within infinispan-6.0.2.Final-all: * Numer of jar files depending on JDK-internal APIs: 10 * Internal APIs that have known replacements: 0 * Internal APIs that have no supported replacements: 73 APIs that have known replacements : ID Replace Usage of With Inside JDK-internal APIs without supported replacements: ID Internal APIs (do not use) Used by 1 com.sun.org.apache.xml.internal.utils.PrefixResolver * lib/freemarker-2.3.11.jar Explanation... 2 com.sun.org.apache.xpath.internal.XPath * lib/freemarker-2.3.11.jar Explanation... 3 com.sun.org.apache.xpath.internal.XPathContext * lib/freemarker-2.3.11.jar Explanation... 4 com.sun.org.apache.xpath.internal.objects.XBoolean * lib/freemarker-2.3.11.jar Explanation... 5 com.sun.org.apache.xpath.internal.objects.XNodeSet * lib/freemarker-2.3.11.jar Explanation... 6 com.sun.org.apache.xpath.internal.objects.XNull * lib/freemarker-2.3.11.jar Explanation... 7 com.sun.org.apache.xpath.internal.objects.XNumber * lib/freemarker-2.3.11.jar Explanation... 8 com.sun.org.apache.xpath.internal.objects.XObject * lib/freemarker-2.3.11.jar Explanation... 9 com.sun.org.apache.xpath.internal.objects.XString * lib/freemarker-2.3.11.jar Explanation... 10 org.w3c.dom.html.HTMLAnchorElement * lib/xercesImpl-2.9.1.jar Explanation... 11 org.w3c.dom.html.HTMLAppletElement * lib/xercesImpl-2.9.1.jar Explanation... 12 org.w3c.dom.html.HTMLAreaElement * lib/xercesImpl-2.9.1.jar Explanation... 13 org.w3c.dom.html.HTMLBRElement * lib/xercesImpl-2.9.1.jar Explanation... 14 org.w3c.dom.html.HTMLBaseElement * lib/xercesImpl-2.9.1.jar Explanation... 
15 org.w3c.dom.html.HTMLBaseFontElement * lib/xercesImpl-2.9.1.jar Explanation... 16 org.w3c.dom.html.HTMLBodyElement * lib/xercesImpl-2.9.1.jar Explanation... 17 org.w3c.dom.html.HTMLButtonElement * lib/xercesImpl-2.9.1.jar Explanation... 18 org.w3c.dom.html.HTMLCollection * lib/xercesImpl-2.9.1.jar Explanation... 19 org.w3c.dom.html.HTMLDListElement * lib/xercesImpl-2.9.1.jar Explanation... 20 org.w3c.dom.html.HTMLDirectoryElement * lib/xercesImpl-2.9.1.jar Explanation... 21 org.w3c.dom.html.HTMLDivElement * lib/xercesImpl-2.9.1.jar Explanation... 22 org.w3c.dom.html.HTMLDocument * lib/xercesImpl-2.9.1.jar Explanation... 23 org.w3c.dom.html.HTMLElement * lib/xercesImpl-2.9.1.jar Explanation... 24 org.w3c.dom.html.HTMLFieldSetElement * lib/xercesImpl-2.9.1.jar Explanation... 25 org.w3c.dom.html.HTMLFontElement * lib/xercesImpl-2.9.1.jar Explanation... 26 org.w3c.dom.html.HTMLFormElement * lib/xercesImpl-2.9.1.jar Explanation... 27 org.w3c.dom.html.HTMLFrameElement * lib/xercesImpl-2.9.1.jar Explanation... 28 org.w3c.dom.html.HTMLFrameSetElement * lib/xercesImpl-2.9.1.jar Explanation... 29 org.w3c.dom.html.HTMLHRElement * lib/xercesImpl-2.9.1.jar Explanation... 30 org.w3c.dom.html.HTMLHeadElement * lib/xercesImpl-2.9.1.jar Explanation... 31 org.w3c.dom.html.HTMLHeadingElement * lib/xercesImpl-2.9.1.jar Explanation... 32 org.w3c.dom.html.HTMLHtmlElement * lib/xercesImpl-2.9.1.jar Explanation... 33 org.w3c.dom.html.HTMLIFrameElement * lib/xercesImpl-2.9.1.jar Explanation... 34 org.w3c.dom.html.HTMLImageElement * lib/xercesImpl-2.9.1.jar Explanation... 35 org.w3c.dom.html.HTMLInputElement * lib/xercesImpl-2.9.1.jar Explanation... 36 org.w3c.dom.html.HTMLIsIndexElement * lib/xercesImpl-2.9.1.jar Explanation... 37 org.w3c.dom.html.HTMLLIElement * lib/xercesImpl-2.9.1.jar Explanation... 38 org.w3c.dom.html.HTMLLabelElement * lib/xercesImpl-2.9.1.jar Explanation... 39 org.w3c.dom.html.HTMLLegendElement * lib/xercesImpl-2.9.1.jar Explanation... 40 org.w3c.dom.html.HTMLLinkElement * lib/xercesImpl-2.9.1.jar Explanation... 41 org.w3c.dom.html.HTMLMapElement * lib/xercesImpl-2.9.1.jar Explanation... 42 org.w3c.dom.html.HTMLMenuElement * lib/xercesImpl-2.9.1.jar Explanation... 43 org.w3c.dom.html.HTMLMetaElement * lib/xercesImpl-2.9.1.jar Explanation... 44 org.w3c.dom.html.HTMLModElement * lib/xercesImpl-2.9.1.jar Explanation... 45 org.w3c.dom.html.HTMLOListElement * lib/xercesImpl-2.9.1.jar Explanation... 46 org.w3c.dom.html.HTMLObjectElement * lib/xercesImpl-2.9.1.jar Explanation... 47 org.w3c.dom.html.HTMLOptGroupElement * lib/xercesImpl-2.9.1.jar Explanation... 48 org.w3c.dom.html.HTMLOptionElement * lib/xercesImpl-2.9.1.jar Explanation... 49 org.w3c.dom.html.HTMLParagraphElement * lib/xercesImpl-2.9.1.jar Explanation... 50 org.w3c.dom.html.HTMLParamElement * lib/xercesImpl-2.9.1.jar Explanation... 51 org.w3c.dom.html.HTMLPreElement * lib/xercesImpl-2.9.1.jar Explanation... 52 org.w3c.dom.html.HTMLQuoteElement * lib/xercesImpl-2.9.1.jar Explanation... 53 org.w3c.dom.html.HTMLScriptElement * lib/xercesImpl-2.9.1.jar Explanation... 54 org.w3c.dom.html.HTMLSelectElement * lib/xercesImpl-2.9.1.jar Explanation... 55 org.w3c.dom.html.HTMLStyleElement * lib/xercesImpl-2.9.1.jar Explanation... 56 org.w3c.dom.html.HTMLTableCaptionElement * lib/xercesImpl-2.9.1.jar Explanation... 57 org.w3c.dom.html.HTMLTableCellElement * lib/xercesImpl-2.9.1.jar Explanation... 58 org.w3c.dom.html.HTMLTableColElement * lib/xercesImpl-2.9.1.jar Explanation... 
59 org.w3c.dom.html.HTMLTableElement * lib/xercesImpl-2.9.1.jar Explanation... 60 org.w3c.dom.html.HTMLTableRowElement * lib/xercesImpl-2.9.1.jar Explanation... 61 org.w3c.dom.html.HTMLTableSectionElement * lib/xercesImpl-2.9.1.jar Explanation... 62 org.w3c.dom.html.HTMLTextAreaElement * lib/xercesImpl-2.9.1.jar Explanation... 63 org.w3c.dom.html.HTMLTitleElement * lib/xercesImpl-2.9.1.jar Explanation... 64 org.w3c.dom.html.HTMLUListElement * lib/xercesImpl-2.9.1.jar Explanation... 65 org.w3c.dom.ranges.DocumentRange * lib/xercesImpl-2.9.1.jar Explanation... 66 org.w3c.dom.ranges.Range * lib/xercesImpl-2.9.1.jar Explanation... 67 org.w3c.dom.ranges.RangeException * lib/xercesImpl-2.9.1.jar Explanation... 68 sun.misc.Signal * lib/aesh-0.33.7.jar Explanation... 69 sun.misc.SignalHandler * lib/aesh-0.33.7.jar Explanation... 70 sun.misc.Unsafe * lib/avro-1.7.5.jar * lib/guava-12.0.jar * lib/infinispan-commons-6.0.2.Final.jar * lib/mvel2-2.0.12.jar * lib/scala-library-2.10.2.jar Explanation... 71 sun.nio.ch.FileChannelImpl * lib/leveldb-0.5.jar Explanation... 72 sun.reflect.ReflectionFactory * lib/jboss-marshalling-1.4.4.Final.jar Explanation... 73 sun.reflect.ReflectionFactory$GetReflectionFactoryAction * lib/jboss-marshalling-1.4.4.Final.jar Explanation... Identify External Replacements You should use a separate third-party library that performs this functionality. ID Internal API (grouped by package) Used By Identify External Replacement ------------------------------------------------------------------------ On 24/09/2014 08:55, Rory O'Donnell Oracle, Dublin Ireland wrote: > Hi Galder, > > As part of the preparations for JDK 9, Oracle?s engineers have been > analyzing open source projects like yours to understand usage. One > area of concern involves identifying compatibility problems, such as > reliance on JDK-internal APIs. > > Our engineers have already prepared guidance on migrating some of the > more common usage patterns of JDK-internal APIs to supported public > interfaces. The list is on the OpenJDK wiki [0], along with > instructions on how to run the jdeps analysis tool yourself . > > As part of the ongoing development of JDK 9, I would like to encourage > migration from JDK-internal APIs towards the supported Java APIs. I > have prepared a report for your project rele ase infinispan-6.0.2 > based on the jdeps output. > > The report is attached to this e-mail. > > For anything where your migration path is unclear, I would appreciate > comments on the JDK-internal API usage patterns in the attached jdeps > report - in particular comments elaborating on the rationale for them > - either to me or on this mailing list. > > Finding suitable replacements for unsupported interfaces is not always > straightforward, which is why I am reaching out to you early in the > JDK 9 development cycle so you can give feedback about new APIs that > may be needed to facilitate this exercise. > > Thank you in advance for any efforts and feedback helping us make JDK > 9 better. > > Rgds,Rory > > [0] > https://wiki.openjdk.java.net/display/JDK8/Java+Dependency+Analysis+Tool > > > -- > Rgds,Rory O'Donnell > Quality Engineering Manager > Oracle EMEA , Dublin, Ireland > > > > -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140924/3726b79e/attachment-0001.html From galder at redhat.com Wed Sep 24 03:13:25 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 24 Sep 2014 09:13:25 +0200 Subject: [infinispan-dev] putAll, getAll and optimization fruits In-Reply-To: <6B46DA8E-A0E8-4D6A-A7D6-7481347E670F@hibernate.org> References: <027BBC9B-E1B0-4D35-894B-D51009805F61@hibernate.org> <6B46DA8E-A0E8-4D6A-A7D6-7481347E670F@hibernate.org> Message-ID: <95E61F5A-70EF-48AF-B856-AA92F52C37B7@redhat.com> On 23 Sep 2014, at 16:15, Emmanuel Bernard wrote: > > On 23 Sep 2014, at 16:06, Emmanuel Bernard wrote: > >> Doing that it?s 1/2 of the story, because as I?ve already explained in Wolf?s efforts around putAll, the Netty server implementation just calls to Infinispan synchronous cache operations, which often block. So, using an async client will get you 1/2 of the job done. The way to limit the blocking is by limiting that blocking, e.g. splitting keys and sending gets to nodes that own it would mean the gets get resolved locally, similar thing with puts but as Bela/Pedro found out, there could be some blocking still. > > Galder, are you saying that I cannot execute put operations in parallel on the same node for the same transaction? Err, not sure what use case you are talking about here but Hot Rod does not have transactions. > I.e. could the netty server use a bounded queue + thead pool to parallelize put operation in case of putAll (the other case does not matter). More than Netty, the Hot Rod server implementation could take a N-key putAll and parallize it if needed. > Also, even if we keep the server synchronous and split the payload per server to run that in parallel, we will still gain enough I think: > > - we avoid i-1 times the latency between the client and a specific node (i is the number of keys going to a specific node) > - with the key evenly distributed, you divide the overall latency by O(m) where m is the number of servers. > > Something like that :) Sure, I agree that there?s benefit of course, but as you rightly pointed out, these puts either need to be putAsyncs and wait for all replies and then send the response to the client, or paralellize sync put calls manually. 
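To illustrate the client-side half of that, a naive sketch that just issues the puts asynchronously and waits for all of them; it deliberately skips the per-owner routing, which is the part that would need topology information, and relies only on BasicCache.putAsync which both the embedded and Hot Rod APIs expose.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Future;

import org.infinispan.commons.api.BasicCache;

public class MultiPut {

   // Fires all puts in flight at once instead of N sequential round trips.
   public static <K, V> void putAll(BasicCache<K, V> cache, Map<K, V> entries) throws Exception {
      List<Future<V>> pending = new ArrayList<>();
      for (Map.Entry<K, V> entry : entries.entrySet()) {
         pending.add(cache.putAsync(entry.getKey(), entry.getValue()));
      }
      for (Future<V> future : pending) {
         future.get(); // surfaces any failure; a timeout could be used here instead
      }
   }
}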
Cheers, > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Wed Sep 24 06:41:07 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 24 Sep 2014 12:41:07 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <54226E4D.4010702@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <962B47C5-F061-497F-8ACC-E8B0541F4917@redhat.com> <4910DFC4-3D14-43A4-B3C8-EBD1C34D9E3E@redhat.com> <54226E4D.4010702@redhat.com> Message-ID: <8A8C1DBA-A69E-4156-8699-88B530C57178@redhat.com> On 24 Sep 2014, at 09:10, Radim Vansa wrote: > On 09/24/2014 08:16 AM, Galder Zamarre?o wrote: >> On 23 Sep 2014, at 19:02, William Burns wrote: >> >>> On Tue, Sep 23, 2014 at 12:20 PM, Galder Zamarre?o wrote: >>>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>>> >>>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>>> >>>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>>> operation is which and the information would be present on the >>>>>>> javadocs as well. >>>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. >>>> ^ Hmmmmm, not sure I like that. If you look at the remote event blog posts, you?ll see that I use create/modify/remove annotations and then the parameter to the callback varies depending on whether you had converter applied to it or not. IOW, without a converter, a created event parameter is a ClientCacheEntryCreatedEvent, whereas with a converter, the parameter is a ClientCacheEntryCustomEvent. Two different types of events for the same event type. If you did it with explicit methods, you?d have to duplicate them for custom events. >>> This is for embedded and is done in the filter and converter (not in >>> the listener). Unless I am missing something this shouldn't directly >>> affect the client methods. >> Ah ok. >> >>>>> Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. >>>> I don?t like the name actually. I associate update with modifications, and in similar vein, inserts with creation and delete with removals. >>> What would you suggest? A few things I thought of quickly: >>> CacheOperationFilter, CacheEventFilter, CacheWriteFilter - the only >>> reason I preface Cache is because we have CacheManager events as well. >> Maybe CacheEventFilter... > > Does expiration trigger clustered listeners? Expiration events are not there yet: https://issues.jboss.org/browse/ISPN-694 It?s assigned to me but I?ve had my hands full with HR 2.0 related stuff... > 'Event' sounds quite > generic, I would expect the EventFilter to be able to handle > expirations, invalidations, evictions etc., maybe even reads as well > (whether through separate methods or enums). 
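For completeness, a minimal local (non-clustered) listener along those lines, using the embedded notification API:

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryVisited;
import org.infinispan.notifications.cachelistener.event.CacheEntryVisitedEvent;

// Reacts to reads; register it with cache.addListener(new VisitLogger()).
@Listener
public class VisitLogger {

   @CacheEntryVisited
   public void entryVisited(CacheEntryVisitedEvent event) {
      if (!event.isPre()) {
         System.out.println("Visited key " + event.getKey());
      }
   }
}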
You can indeed listen for cache entry visited events, at least in local listeners. > > ModificationFilter could be better (in TX I think we use term > 'modifications' for all CUD ops), but we have already used > CacheEntryModified for 'update'. > > >> >>>>> Will, what would be the overall impact on the API as right now the KeyValueFilter is reused between several components, like the cluster iterator. >>>>> >>>>> Cheers, >>>>> -- >>>>> Mircea Markus >>>>> Infinispan lead (www.infinispan.org) >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Wed Sep 24 06:53:11 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 24 Sep 2014 12:53:11 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <3A3DA273-6136-4258-884C-95467857BBD2@redhat.com> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <3A3DA273-6136-4258-884C-95467857BBD2@redhat.com> Message-ID: <27991555-BE6E-4D73-AFE1-B32B00AF6296@redhat.com> On 23 Sep 2014, at 18:22, Galder Zamarre?o wrote: > Hi Emmanuel, > > Apologies for the delay getting back on the topic. Replies below from the remote listeners POV... > > > >> Alternatively, could we pass the raw protobuf data to the KeyValueFilter and Converter? They could read the relevant properties at no deserialization cost and with lss problems related to the classloader. > > ^ I don?t see why not. Bear in mind that filter/converter callbacks happen server side, but as long as implementations can make out what they need from those byte arrays, all good IMO. I?ll create a JIRA to track this. Not sure it could be done wo/ a configuration option but I?ll try to do so if possible. Created https://issues.jboss.org/browse/ISPN-4757 to track this. 
Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Thu Sep 25 09:20:37 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 25 Sep 2014 09:20:37 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: On Tue, Sep 23, 2014 at 9:39 AM, William Burns wrote: > On Tue, Sep 23, 2014 at 9:31 AM, Mircea Markus wrote: >> >> On Sep 23, 2014, at 16:27, Emmanuel Bernard wrote: >> >>> >>> On 23 Sep 2014, at 14:53, Mircea Markus wrote: >>> >>>> On Sep 23, 2014, at 15:18, Emmanuel Bernard wrote: >>>> >>>>>> I am not sold on this as it seems pretty trivial to decipher which >>>>>> operation is which and the information would be present on the >>>>>> javadocs as well. >>>>> >>>>> I very strongly disagree. Cf the other thread with Radim 's comment on topology error. >>>>> And think about *future* evolutions. The enum would make that much safer. In the bin enum world you would have to introduce a new YetAnotherKeyValueFilter interface :) >>>> >>>> Nicer than an enum would be an explicit method, e.g. handlePut/handleDelete/handleCreate/handleUpdate, as these would also receive the appropriate param list. Of course this means moving away from the KeyValueFilter to an UpdateFilter (good name, Radim) used only for cluster listeners. > > I like the name as well :) The only thing that I dislike about the > extra methods is the fact that it isn't a Functional interface, which > would be nice to have when we ever move to Java 8, but that may be > thinking too far into the future :P > >>>> Will, what would be the overall impact on the A > > The biggest part is the usage with the cluster iterator. Currently > the Listener uses the same filter that it is provided to also do the > iteration. If we want to go down the line of having the extra > interface(s), which overall I do like, then I am thinking we may want > to change the Listener annotation to no longer have an > includeCurrentState parameter and instead add a new method to the > addListener method of Cache that takes a KeyValueFilter and the new > UpdateFilter (as well as the 2 converters). I can then add in 2 Actually while working and thinking on this it seems it may be easiest to exclude the usage of KeyValueFilter in the listener pieces completely and instead leave the annotation as it is now. Instead the provided CacheEventFilter would be wrapped by a KeyValueFilter implement that just called the new method as if it was a create event for each value while iterating on them. I am thinking this is the cleanest. Do you guys have any opinions? It would also keep intact a lot of existing code and APIs. > bridge implementations so that you don't have to implement the other > if your implementation can handle both types. Also from the other > post it seems that I should add the retry boolean to all the > appropriate methods so that you can have a chance to detect if an > update was missed. Unless this seems to cumbersome? > >>> >>> If you do that you must also provide an abstract class with default noop operations that filter implementations would extend. Otherwise you are back with backward compatibility problems. 
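If I read that right, roughly this kind of adapter; the interface shapes below are made up, since neither type is final yet, and only illustrate the wrapping:

// Made-up interface shapes, only to illustrate the wrapping idea.
interface EventFilter<K, V> {
   boolean accept(K key, V oldValue, V newValue, boolean created);
}

interface IterationFilter<K, V> {
   boolean accept(K key, V value);
}

// Used only for the initial-state iteration: every existing entry is presented
// to the event filter as if it had just been created.
class CurrentStateAdapter<K, V> implements IterationFilter<K, V> {
   private final EventFilter<K, V> delegate;

   CurrentStateAdapter(EventFilter<K, V> delegate) {
      this.delegate = delegate;
   }

   @Override
   public boolean accept(K key, V value) {
      return delegate.accept(key, null, value, true);
   }
}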
>> >> KeyValueFilter was introduced in 7.0, or other backward compatibility problem you have in mind? > > I believe Emmanuel is referring to if we added additional operations > to the filter, but I am not sure what other operations we would want > to add to it. If anything we would probably make a different type of > filter specific to its use case. > >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Thu Sep 25 11:31:59 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 25 Sep 2014 17:31:59 +0200 Subject: [infinispan-dev] Infinispan 7.0 feature freeze and future planning Message-ID: <5424356F.3010700@redhat.com> Hi all, Infinispan 7.0 has been in development for over 9 months now and we really need to release it into the wild since it contains a lot of juicy stuff :) For this reason I'm calling a feature freeze and all new features need to be reassigned over to 7.1 or 7.2. For the next minor releases I would like to suggest the following strategy: - use a 3 month timebox where we strive to maintain master in an "always releasable" state - complex feature work will need to happen onto dedicated feature branches, using the usual GitHub pull-request workflow - only when a feature is complete (code, tests, docs, reviewed, CI-checked) it will be merged back into master - if a feature is running late it will be postponed to the following minor release so as not to hinder other development Suggestions, amendments to the above are welcome. Thanks ! Tristan From sanne at infinispan.org Thu Sep 25 12:35:16 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 25 Sep 2014 17:35:16 +0100 Subject: [infinispan-dev] Infinispan 7.0 feature freeze and future planning In-Reply-To: <5424356F.3010700@redhat.com> References: <5424356F.3010700@redhat.com> Message-ID: On 25 September 2014 16:31, Tristan Tarrant wrote: > Hi all, > > Infinispan 7.0 has been in development for over 9 months now and we > really need to release it into the wild since it contains a lot of juicy > stuff :) > For this reason I'm calling a feature freeze and all new features need > to be reassigned over to 7.1 or 7.2. > > For the next minor releases I would like to suggest the following strategy: > - use a 3 month timebox where we strive to maintain master in an "always > releasable" state > - complex feature work will need to happen onto dedicated feature > branches, using the usual GitHub pull-request workflow > - only when a feature is complete (code, tests, docs, reviewed, > CI-checked) it will be merged back into master > - if a feature is running late it will be postponed to the following > minor release so as not to hinder other development > > Suggestions, amendments to the above are welcome. +1000 , as the thousand good reasons for which that is the only sustainable development model. Also a suggestion of a model which worked pretty well for me - and is in no way in contrast with the above - is that if you're working on a complex feature which you'd rather "rush in" because rebasing is getting complex, is to extract from your branch the large refactorings which you're needing and propose those already, even if the full feature isn't finished. Needless to say that is only acceptable when - you know for sure that refactoring is going to be needed (i.e. 
your full work isn't finished but is in advanced state enough to have this knowledge) - it doesn't break anything whatsoever - isn't a pain for others, or won't otherwise slow down others This generally works well, as if you don't have a large refactoring which you can somehow extract from your work in progress, it means you're not in trouble maintaining the constant rebase either, and is a good exercise to keep your flow of changes reorganized and under control. Sanne > > Thanks ! > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Thu Sep 25 12:59:22 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 25 Sep 2014 17:59:22 +0100 Subject: [infinispan-dev] Assistance to write a custom CacheStore In-Reply-To: References: Message-ID: <29DF296B-3D45-467D-8266-633114D98C6D@redhat.com> We have a section in docs[1] describing the new API and what should be implemented for a custom store. As an exmaple the zero deps filte store is offered. I guess a maven project to help people boot things up would be handy, though. [1] http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_persistence On Sep 23, 2014, at 22:05, Sanne Grinovero wrote: > I noticed that we often have questions about how to implement a custom > CacheStore. > > If someone had some time to write a nice guide for that, we might have > some more luck in getting help to upgrade all the CacheStores which > have been granted the status of "Abandonware" as defined by a user > (!). > > https://issues.jboss.org/browse/ISPN-4751 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From emmanuel at hibernate.org Fri Sep 26 04:02:33 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 26 Sep 2014 10:02:33 +0200 Subject: [infinispan-dev] Infinispan 7.0 feature freeze and future planning In-Reply-To: <5424356F.3010700@redhat.com> References: <5424356F.3010700@redhat.com> Message-ID: <09E1E7AD-FF4B-4C77-B824-1FF9E339A1F2@hibernate.org> Not so theoretical question. What about features that are being refined (like clustered / remote listener as seen int he recent days). Are these improvements to be removed under the feature freeze hammer. That would possibly impact our ability to do them in 7.x if APIs change. Or are they part of the maturation cycle after feature freezing? > On 25 sept. 2014, at 17:31, Tristan Tarrant wrote: > > Hi all, > > Infinispan 7.0 has been in development for over 9 months now and we > really need to release it into the wild since it contains a lot of juicy > stuff :) > For this reason I'm calling a feature freeze and all new features need > to be reassigned over to 7.1 or 7.2. > > For the next minor releases I would like to suggest the following strategy: > - use a 3 month timebox where we strive to maintain master in an "always > releasable" state > - complex feature work will need to happen onto dedicated feature > branches, using the usual GitHub pull-request workflow > - only when a feature is complete (code, tests, docs, reviewed, > CI-checked) it will be merged back into master > - if a feature is running late it will be postponed to the following > minor release so as not to hinder other development > > Suggestions, amendments to the above are welcome. 
> > Thanks ! > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Fri Sep 26 04:06:49 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 26 Sep 2014 10:06:49 +0200 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> Message-ID: <4FA0A74E-4DA4-42F1-BE0C-C07E401633A0@hibernate.org> You lost me at actually ;) but if you have some code or even a gist showing how a user would use and interact with these changes, I can give you some feedback on the use cases I had in mind and if they fit. > On 25 sept. 2014, at 15:20, William Burns wrote: > > Actually while working and thinking on this it seems it may be easiest > to exclude the usage of KeyValueFilter in the listener pieces > completely and instead leave the annotation as it is now. Instead the > provided CacheEventFilter would be wrapped by a KeyValueFilter > implement that just called the new method as if it was a create event > for each value while iterating on them. I am thinking this is the > cleanest. Do you guys have any opinions? It would also keep intact a > lot of existing code and APIs. From ttarrant at redhat.com Fri Sep 26 04:18:04 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 26 Sep 2014 10:18:04 +0200 Subject: [infinispan-dev] Infinispan 7.0 feature freeze and future planning In-Reply-To: <09E1E7AD-FF4B-4C77-B824-1FF9E339A1F2@hibernate.org> References: <5424356F.3010700@redhat.com> <09E1E7AD-FF4B-4C77-B824-1FF9E339A1F2@hibernate.org> Message-ID: <5425213C.5010900@redhat.com> Obviously this is not a hard freeze: 7.0.0.Final is still a month away, so there is time to refine things that are already in there. To bring a couple of examples: ISPN-4753 Add oldValue, oldMetadata and retry flag to filter and converter for Cluster Listeners This one is obviously a feature refinement, and essential to have in 7.0. ISPN-4752 Implement native getAll/putAll operations in Hot Rod 2.0 This one introduces a modification to the HotRod protocol (probably 2.1), but it can safely be pushed to 7.1, since it is augmentative. In essence, I trust everybody's common sense to focus on the essentials to avoid further meandering :) Tristan On 26/09/14 10:02, Emmanuel Bernard wrote: > Not so theoretical question. > What about features that are being refined (like clustered / remote listener as seen int he recent days). > Are these improvements to be removed under the feature freeze hammer. That would possibly impact our ability to do them in 7.x if APIs change. > Or are they part of the maturation cycle after feature freezing? > > >> On 25 sept. 2014, at 17:31, Tristan Tarrant wrote: >> >> Hi all, >> >> Infinispan 7.0 has been in development for over 9 months now and we >> really need to release it into the wild since it contains a lot of juicy >> stuff :) >> For this reason I'm calling a feature freeze and all new features need >> to be reassigned over to 7.1 or 7.2. 
>> >> For the next minor releases I would like to suggest the following strategy: >> - use a 3 month timebox where we strive to maintain master in an "always >> releasable" state >> - complex feature work will need to happen onto dedicated feature >> branches, using the usual GitHub pull-request workflow >> - only when a feature is complete (code, tests, docs, reviewed, >> CI-checked) it will be merged back into master >> - if a feature is running late it will be postponed to the following >> minor release so as not to hinder other development >> >> Suggestions, amendments to the above are welcome. >> >> Thanks ! >> >> Tristan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From belaran at gmail.com Fri Sep 26 04:39:36 2014 From: belaran at gmail.com (Romain Pelisse) Date: Fri, 26 Sep 2014 10:39:36 +0200 Subject: [infinispan-dev] JIRA notification Message-ID: Hi all, I would like to be notified whenever a new bug/feature request is created in ISPN JIRA, but I utterly failed at finding out how to do that. Is this possible ? (I would guess so) ? If somebody has done that can you give me a hint on how to set it up ? Thanks ! -- Romain PELISSE, *"The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it" -- Terry Pratchett* Belaran ins Prussia (blog) (... finally up and running !) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140926/94cfc75d/attachment.html From ttarrant at redhat.com Fri Sep 26 05:30:28 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 26 Sep 2014 11:30:28 +0200 Subject: [infinispan-dev] JIRA notification In-Reply-To: References: Message-ID: <54253234.3060603@redhat.com> You need to subscribe to https://lists.jboss.org/mailman/listinfo/infinispan-issues On 26/09/14 10:39, Romain Pelisse wrote: > Hi all, > > I would like to be notified whenever a new bug/feature request is > created in ISPN JIRA, but I utterly failed at finding out how to do > that. Is this possible ? (I would guess so) ? If somebody has done > that can you give me a hint on how to set it up ? > > Thanks ! > > -- > Romain PELISSE, > /"The trouble with having an open mind, of course, is that people will > insist on coming along and trying to put things in it" -- Terry Pratchett/ > Belaran ins Prussia (blog) > (... finally up and running !) > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Fri Sep 26 09:03:56 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Fri, 26 Sep 2014 15:03:56 +0200 Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan Message-ID: Hey Paul, In the last couple of days, a couple of people have encountered the exception in [1] when trying to cluster a standalone Infinispan app with its own JGroups configuration file with a AS/WF running Infinispan cache. >From my POV, 3 possible causes: 1. Dependency mismatches between AS/WF and the standalone app. 
Having done some quick study of Kurt?s case, apart from micro version changes, all looks good. 2. Mismatch in the Infinispan and/or JGroups configuration file. 3. AS/WF puts something on the clustered wire that standalone Infinispan does not expect. Are you still doing multiplexing? Could you be adding extra info to the wire? With this email, I?m trying to get some clarification from you if the issue could be due to 3rd option. If it?s either of the first two, it?s a matter of digging and finding the difference, but if it?s 3rd one, it?s more problematic. Any ideas? [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Fri Sep 26 09:35:16 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 26 Sep 2014 14:35:16 +0100 Subject: [infinispan-dev] Lucene compatibility PR Message-ID: All, I will be away next week but I highly need my Lucene compatibility PR [1] to be included in next week's Infinispan release. If it's not good as-is, please take ownership of it and don't wait for me to get back before integrating it. Cheers, Sanne [1] https://github.com/infinispan/infinispan/pull/2904 From rhusar at redhat.com Fri Sep 26 10:47:16 2014 From: rhusar at redhat.com (Radoslav Husar) Date: Fri, 26 Sep 2014 16:47:16 +0200 Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan In-Reply-To: References: Message-ID: <54257C74.2050600@redhat.com> From what Stelios is telling me the question is a little bit other way round: he is using library mode infinispan and jgroups in EAP and connecting to JDG. So the question is what JDG is doing with the stack, not AS/WF as its infinispan/jgroups subsystem is not used. Unfortunately I don't have access to the JDG repo so I don't know what changes have been made there but if you are using the same jgroups logic, IMO the channel needs to be wrapped as org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. Rado On 26/09/14 15:03, Galder Zamarre?o wrote: > Hey Paul, > > In the last couple of days, a couple of people have encountered the exception in [1] when trying to cluster a standalone Infinispan app with its own JGroups configuration file with a AS/WF running Infinispan cache. > > From my POV, 3 possible causes: > > 1. Dependency mismatches between AS/WF and the standalone app. Having done some quick study of Kurt?s case, apart from micro version changes, all looks good. > > 2. Mismatch in the Infinispan and/or JGroups configuration file. > > 3. AS/WF puts something on the clustered wire that standalone Infinispan does not expect. Are you still doing multiplexing? Could you be adding extra info to the wire? > > With this email, I?m trying to get some clarification from you if the issue could be due to 3rd option. If it?s either of the first two, it?s a matter of digging and finding the difference, but if it?s 3rd one, it?s more problematic. > > Any ideas? > > [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > From dereed at redhat.com Fri Sep 26 12:24:02 2014 From: dereed at redhat.com (Dennis Reed) Date: Fri, 26 Sep 2014 11:24:02 -0500 Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan In-Reply-To: References: Message-ID: <54259322.9010403@redhat.com> The error in that link is when reading the class data from the serialized stream. 
EAP (and therefore I assume JDG server, but I haven't confirmed) uses a custom class resolver, which includes the JBoss Modules module that the class came from in the serialized class data. Library mode JDG would not by default. Therefore the data format is not compatible. -Dennis On 09/26/2014 08:03 AM, Galder Zamarre?o wrote: > Hey Paul, > > In the last couple of days, a couple of people have encountered the exception in [1] when trying to cluster a standalone Infinispan app with its own JGroups configuration file with a AS/WF running Infinispan cache. > >>From my POV, 3 possible causes: > > 1. Dependency mismatches between AS/WF and the standalone app. Having done some quick study of Kurt?s case, apart from micro version changes, all looks good. > > 2. Mismatch in the Infinispan and/or JGroups configuration file. > > 3. AS/WF puts something on the clustered wire that standalone Infinispan does not expect. Are you still doing multiplexing? Could you be adding extra info to the wire? > > With this email, I?m trying to get some clarification from you if the issue could be due to 3rd option. If it?s either of the first two, it?s a matter of digging and finding the difference, but if it?s 3rd one, it?s more problematic. > > Any ideas? > > [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From paul.ferraro at redhat.com Mon Sep 29 12:57:20 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Mon, 29 Sep 2014 12:57:20 -0400 (EDT) Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan In-Reply-To: <54297D4F.9060009@jboss.com> References: <54257C74.2050600@redhat.com> <1898996727.37561681.1411744742097.JavaMail.zimbra@redhat.com> <54297D4F.9060009@jboss.com> Message-ID: <133530692.31414088.1412009840434.JavaMail.zimbra@redhat.com> You should not need to use a MuxChannel. This would only be necessary if there are other EAP services sharing the channel. Using a MuxChannel allows your standalone Infinispan instance to filter these irrelevant messages. However, in JDG, there should be no other services other than Infinispan using the channel - hence the MuxChannel stuff is unnecessary. I think Dennis earlier response was spot on. EAP/JDG configures it's cache managers using a ModularClassResolver (which includes a module name along with the class name when marshalling). Your standalone Infinispan instances do not use this and therefore cannot make sense of the message body. Paul ----- Original Message ----- > From: "Kurt T Stam" > To: "Stelios Koussouris" , "Radoslav Husar" > Cc: "Galder Zamarre?o" , "Paul Ferraro" , "Richard Achmatowicz" > , "infinispan -Dev List" > Sent: Monday, September 29, 2014 11:39:59 AM > Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan > > Thanks for following up Stelios, I think Galder is traveling the next 2 > weeks. > > So - do we need fixes on both ends then so that the boot order does not > matter? In which project(s) would we apply > there changes? Or can they be applied in the end-user's code? > > Thx, > > --Kurt > > > > On 9/26/14, 11:19 AM, Stelios Koussouris wrote: > > Hi, > > > > Rado: It is both ways. ie. if I start first the JDG Server I get the issue > > on the library mode side when I start that one. 
If reverse the order of > > startup I get it in the JDG Server side. > > > > Question: > > ----------------------------------------------------------------------------------------------------------------------- > > ...IMO the channel needs to be wrapped as > > org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. > > ... > > ----------------------------------------------------------------------------------------------------------------------- > > For now that this is not being done. If I wanted to do it manually on the > > library side where I can create the protocol programmatically we are > > talking about something like this? > > > > ProtocolStackConfigurator configurator = > > ConfiguratorFactory.getStackConfigurator("jgroups-udp.xml"); > > MuxChannel channel = new MuxChannel(configurator); > > org.infinispan.remoting.transport.Transport transport = new > > org.infinispan.remoting.transport.jgroups.JGroupsTransport(channel); > > > > .... > > then replace the below > > new > > GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport().clusterName("UDM-CLUSTER").addProperty("configurationFile", > > "jgroups-udp.xml") > > > > WITH > > new > > GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport(Transport).clusterName("UDM-CLUSTER") > > > > Btw, someone mentioned that if I follow this method I need to to know the > > assigned mux ids, but that is not quite clear what it means with regards > > to the JGroupsTransport configuration > > > > Thanks, > > > > Stylianos Kousouris > > Red Hat Middleware Consultant > > > > ----- Original Message ----- > > From: "Radoslav Husar" > > To: "Galder Zamarre?o" , "Paul Ferraro" > > > > Cc: "Richard Achmatowicz" , "infinispan -Dev List" > > , "Stelios Koussouris" > > , "Kurt T Stam" > > Sent: Friday, 26 September, 2014 3:47:16 PM > > Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan > > > > From what Stelios is telling me the question is a little bit other way > > round: he is using library mode infinispan and jgroups in EAP and > > connecting to JDG. So the question is what JDG is doing with the stack, > > not AS/WF as its infinispan/jgroups subsystem is not used. > > > > Unfortunately I don't have access to the JDG repo so I don't know what > > changes have been made there but if you are using the same jgroups > > logic, IMO the channel needs to be wrapped as > > org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. > > > > Rado > > > > On 26/09/14 15:03, Galder Zamarre?o wrote: > >> Hey Paul, > >> > >> In the last couple of days, a couple of people have encountered the > >> exception in [1] when trying to cluster a standalone Infinispan app with > >> its own JGroups configuration file with a AS/WF running Infinispan cache. > >> > >> From my POV, 3 possible causes: > >> > >> 1. Dependency mismatches between AS/WF and the standalone app. Having done > >> some quick study of Kurt?s case, apart from micro version changes, all > >> looks good. > >> > >> 2. Mismatch in the Infinispan and/or JGroups configuration file. > >> > >> 3. AS/WF puts something on the clustered wire that standalone Infinispan > >> does not expect. Are you still doing multiplexing? Could you be adding > >> extra info to the wire? 
> >> > >> With this email, I?m trying to get some clarification from you if the > >> issue could be due to 3rd option. If it?s either of the first two, it?s a > >> matter of digging and finding the difference, but if it?s 3rd one, it?s > >> more problematic. > >> > >> Any ideas? > >> > >> [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 > >> -- > >> Galder Zamarre?o > >> galder at redhat.com > >> twitter.com/galderz > >> > > From wfink at redhat.com Mon Sep 29 13:31:28 2014 From: wfink at redhat.com (Wolf-Dieter Fink) Date: Mon, 29 Sep 2014 19:31:28 +0200 Subject: [infinispan-dev] Assistance to write a custom CacheStore In-Reply-To: <29DF296B-3D45-467D-8266-633114D98C6D@redhat.com> References: <29DF296B-3D45-467D-8266-633114D98C6D@redhat.com> Message-ID: <54299770.8070800@redhat.com> Doc did hot help me for all questions. If I have a running example I'll write a document how to do it :) Wolf On 25/09/14 18:59, Mircea Markus wrote: > We have a section in docs[1] describing the new API and what should be implemented for a custom store. As an exmaple the zero deps filte store is offered. > I guess a maven project to help people boot things up would be handy, though. > > [1] http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_persistence > On Sep 23, 2014, at 22:05, Sanne Grinovero wrote: > >> I noticed that we often have questions about how to implement a custom >> CacheStore. >> >> If someone had some time to write a nice guide for that, we might have >> some more luck in getting help to upgrade all the CacheStores which >> have been granted the status of "Abandonware" as defined by a user >> (!). >> >> https://issues.jboss.org/browse/ISPN-4751 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > Cheers, From ttarrant at redhat.com Tue Sep 30 03:02:27 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 30 Sep 2014 09:02:27 +0200 Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan In-Reply-To: <133530692.31414088.1412009840434.JavaMail.zimbra@redhat.com> References: <54257C74.2050600@redhat.com> <1898996727.37561681.1411744742097.JavaMail.zimbra@redhat.com> <54297D4F.9060009@jboss.com> <133530692.31414088.1412009840434.JavaMail.zimbra@redhat.com> Message-ID: <542A5583.4030908@redhat.com> I don't know what Kurt is doing, but Stelios is attempting to cluster an application using embedded Infinispan deployed within WF together with an Infinispan Server instance. The application is managing its own caches, and therefore it is not interacting with the underlying Infinispan and JGroups subsystems in WF. Infinispan Server uses its Infinispan and JGroups subsystems (which are forked from WF's) and therefore are using MuxChannels. I told Stelios to use a MuxChannel-wrapped Channel in his application and it solved part of the issue (he was initially importing the one included in the WF's jgroups subsystem, but now he's using his local copy), but now he has run into further problems and I believe what Paul & Dennis have written might be correct. 
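For the library-mode side, one possible (untested) way to install the same resolver would be something like the sketch below; it assumes the deployment can see the org.jboss.modules and jboss-marshalling APIs, and Module.getCallerModuleLoader() is only my guess at how to obtain the ModuleLoader from within a deployment:

import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.jboss.marshalling.ModularClassResolver;
import org.jboss.modules.Module;

public class ModularResolverConfig {

   // Makes the embedded cache manager marshal class names the same way the server does.
   public static GlobalConfiguration build(String clusterName) {
      GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
      global.clusteredDefault().transport().clusterName(clusterName);
      global.serialization()
            .classResolver(ModularClassResolver.getInstance(Module.getCallerModuleLoader()));
      return global.build();
   }
}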
The code that configures this is in EmbeddedCacheManagerConfigurationService: GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder(); ModuleLoader moduleLoader = this.dependencies.getModuleLoader(); builder.serialization().classResolver(ModularClassResolver.getInstance(moduleLoader)); I don't know how you'd get a ModuleLoader from within a WF deployment, but I'm sure it can be done. Tristan On 29/09/14 18:57, Paul Ferraro wrote: > You should not need to use a MuxChannel. This would only be necessary if there are other EAP services sharing the channel. Using a MuxChannel allows your standalone Infinispan instance to filter these irrelevant messages. However, in JDG, there should be no other services other than Infinispan using the channel - hence the MuxChannel stuff is unnecessary. > > I think Dennis earlier response was spot on. EAP/JDG configures it's cache managers using a ModularClassResolver (which includes a module name along with the class name when marshalling). Your standalone Infinispan instances do not use this and therefore cannot make sense of the message body. > > Paul > > ----- Original Message ----- >> From: "Kurt T Stam" >> To: "Stelios Koussouris" , "Radoslav Husar" >> Cc: "Galder Zamarre?o" , "Paul Ferraro" , "Richard Achmatowicz" >> , "infinispan -Dev List" >> Sent: Monday, September 29, 2014 11:39:59 AM >> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan >> >> Thanks for following up Stelios, I think Galder is traveling the next 2 >> weeks. >> >> So - do we need fixes on both ends then so that the boot order does not >> matter? In which project(s) would we apply >> there changes? Or can they be applied in the end-user's code? >> >> Thx, >> >> --Kurt >> >> >> >> On 9/26/14, 11:19 AM, Stelios Koussouris wrote: >>> Hi, >>> >>> Rado: It is both ways. ie. if I start first the JDG Server I get the issue >>> on the library mode side when I start that one. If reverse the order of >>> startup I get it in the JDG Server side. >>> >>> Question: >>> ----------------------------------------------------------------------------------------------------------------------- >>> ...IMO the channel needs to be wrapped as >>> org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. >>> ... >>> ----------------------------------------------------------------------------------------------------------------------- >>> For now that this is not being done. If I wanted to do it manually on the >>> library side where I can create the protocol programmatically we are >>> talking about something like this? >>> >>> ProtocolStackConfigurator configurator = >>> ConfiguratorFactory.getStackConfigurator("jgroups-udp.xml"); >>> MuxChannel channel = new MuxChannel(configurator); >>> org.infinispan.remoting.transport.Transport transport = new >>> org.infinispan.remoting.transport.jgroups.JGroupsTransport(channel); >>> >>> .... 
>>> then replace the below >>> new >>> GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport().clusterName("UDM-CLUSTER").addProperty("configurationFile", >>> "jgroups-udp.xml") >>> >>> WITH >>> new >>> GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport(Transport).clusterName("UDM-CLUSTER") >>> >>> Btw, someone mentioned that if I follow this method I need to to know the >>> assigned mux ids, but that is not quite clear what it means with regards >>> to the JGroupsTransport configuration >>> >>> Thanks, >>> >>> Stylianos Kousouris >>> Red Hat Middleware Consultant >>> >>> ----- Original Message ----- >>> From: "Radoslav Husar" >>> To: "Galder Zamarre?o" , "Paul Ferraro" >>> >>> Cc: "Richard Achmatowicz" , "infinispan -Dev List" >>> , "Stelios Koussouris" >>> , "Kurt T Stam" >>> Sent: Friday, 26 September, 2014 3:47:16 PM >>> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan >>> >>> From what Stelios is telling me the question is a little bit other way >>> round: he is using library mode infinispan and jgroups in EAP and >>> connecting to JDG. So the question is what JDG is doing with the stack, >>> not AS/WF as its infinispan/jgroups subsystem is not used. >>> >>> Unfortunately I don't have access to the JDG repo so I don't know what >>> changes have been made there but if you are using the same jgroups >>> logic, IMO the channel needs to be wrapped as >>> org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. >>> >>> Rado >>> >>> On 26/09/14 15:03, Galder Zamarre?o wrote: >>>> Hey Paul, >>>> >>>> In the last couple of days, a couple of people have encountered the >>>> exception in [1] when trying to cluster a standalone Infinispan app with >>>> its own JGroups configuration file with a AS/WF running Infinispan cache. >>>> >>>> From my POV, 3 possible causes: >>>> >>>> 1. Dependency mismatches between AS/WF and the standalone app. Having done >>>> some quick study of Kurt?s case, apart from micro version changes, all >>>> looks good. >>>> >>>> 2. Mismatch in the Infinispan and/or JGroups configuration file. >>>> >>>> 3. AS/WF puts something on the clustered wire that standalone Infinispan >>>> does not expect. Are you still doing multiplexing? Could you be adding >>>> extra info to the wire? >>>> >>>> With this email, I?m trying to get some clarification from you if the >>>> issue could be due to 3rd option. If it?s either of the first two, it?s a >>>> matter of digging and finding the difference, but if it?s 3rd one, it?s >>>> more problematic. >>>> >>>> Any ideas? 
>>>> >>>> [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 >>>> -- >>>> Galder Zamarreño >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Sep 30 03:28:34 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 30 Sep 2014 09:28:34 +0200 Subject: [infinispan-dev] Assistance to write a custom CacheStore In-Reply-To: <54299770.8070800@redhat.com> References: <29DF296B-3D45-467D-8266-633114D98C6D@redhat.com> <54299770.8070800@redhat.com> Message-ID: <542A5BA2.6050607@redhat.com> As Mircea said, the simplest reference implementations are SingleFileStore [1] and DummyInMemoryStore [2]. If you have further questions, just ask. Radim [1] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/file/SingleFileStore.java [2] https://github.com/infinispan/infinispan/blob/master/core/src/test/java/org/infinispan/persistence/dummy/DummyInMemoryStore.java On 09/29/2014 07:31 PM, Wolf-Dieter Fink wrote: > Doc did not help me for all questions. > > If I have a running example I'll write a document on how to do it :) > > Wolf > > On 25/09/14 18:59, Mircea Markus wrote: >> We have a section in docs[1] describing the new API and what should be implemented for a custom store. As an example the zero deps file store is offered. >> I guess a maven project to help people boot things up would be handy, though. >> >> [1] http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_persistence >> On Sep 23, 2014, at 22:05, Sanne Grinovero wrote: >> >>> I noticed that we often have questions about how to implement a custom >>> CacheStore. >>> >>> If someone had some time to write a nice guide for that, we might have >>> some more luck in getting help to upgrade all the CacheStores which >>> have been granted the status of "Abandonware" as defined by a user >>> (!). >>> >>> https://issues.jboss.org/browse/ISPN-4751 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> Cheers, > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From rory.odonnell at oracle.com Tue Sep 30 05:58:01 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Tue, 30 Sep 2014 10:58:01 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b32 and JDK 8u40 b07 are available on java.net Message-ID: <542A7EA9.20407@oracle.com> Hi Galder, Early Access build for JDK 9 b32 is available on java.net, a summary of changes is listed here. Early Access build for JDK 8u40 b07 is available on java.net, a summary of changes is listed here. Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140930/77053258/attachment.html From mudokonman at gmail.com Tue Sep 30 12:13:25 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 30 Sep 2014 12:13:25 -0400 Subject: [infinispan-dev] Feedback and requests on clustered and remote listeners In-Reply-To: <4FA0A74E-4DA4-42F1-BE0C-C07E401633A0@hibernate.org> References: <1C064841-B234-4EC8-AEE3-467B1E52ED0B@hibernate.org> <2FD61F5D-67A6-4D6C-A0A0-D924914BAD14@hibernate.org> <21CE8853-8A48-4061-BD6C-55EE93BECE33@hibernate.org> <06B6FC9A-9886-4C02-B821-EC5C0864A948@redhat.com> <6FDDA588-82B0-4DB3-95C9-527CDD8B5219@hibernate.org> <821068D0-D857-4006-AC70-A0798C4756C1@redhat.com> <4FA0A74E-4DA4-42F1-BE0C-C07E401633A0@hibernate.org> Message-ID: I have put it on a branch on github and you can try it out and let me know what you think. I still have a few things I may want to change though: 1. I don't like how pre events are yet as they don't give you the previous value and new value as post events do 2. The enum to tell the type has become a bit more complicated and I think I am going to change it to a class 3. I also have some internal changes that should require less memory allocations I wanted to clean up. https://github.com/wburns/infinispan/tree/ISPN-4753 Thanks, - Will On Fri, Sep 26, 2014 at 4:06 AM, Emmanuel Bernard wrote: > You lost me at actually ;) but if you have some code or even a gist showing how a user would use and interact with these changes, I can give you some feedback on the use cases I had in mind and if they fit. > > >> On 25 sept. 2014, at 15:20, William Burns wrote: >> >> Actually while working and thinking on this it seems it may be easiest >> to exclude the usage of KeyValueFilter in the listener pieces >> completely and instead leave the annotation as it is now. Instead the >> provided CacheEventFilter would be wrapped by a KeyValueFilter >> implement that just called the new method as if it was a create event >> for each value while iterating on them. I am thinking this is the >> cleanest. Do you guys have any opinions? It would also keep intact a >> lot of existing code and APIs. > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From paul.ferraro at redhat.com Tue Sep 30 16:08:09 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 30 Sep 2014 16:08:09 -0400 (EDT) Subject: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan In-Reply-To: <542A5583.4030908@redhat.com> References: <54257C74.2050600@redhat.com> <1898996727.37561681.1411744742097.JavaMail.zimbra@redhat.com> <54297D4F.9060009@jboss.com> <133530692.31414088.1412009840434.JavaMail.zimbra@redhat.com> <542A5583.4030908@redhat.com> Message-ID: <1206312991.259534.1412107689393.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" , "Kurt T Stam" > Cc: "Stelios Koussouris" , "Richard Achmatowicz" > Sent: Tuesday, September 30, 2014 3:02:27 AM > Subject: Re: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan > > I don't know what Kurt is doing, but Stelios is attempting to cluster an > application using embedded Infinispan deployed within WF together with > an Infinispan Server instance. > The application is managing its own caches, and therefore it is not > interacting with the underlying Infinispan and JGroups subsystems in WF. 
> Infinispan Server uses its Infinispan and JGroups subsystems (which are > forked from WF's) and therefore are using MuxChannels. > > I told Stelios to use a MuxChannel-wrapped Channel in his application > and it solved part of the issue (he was initially importing the one > included in the WF's jgroups subsystem, but now he's using his local > copy), but now he has run into further problems and I believe what Paul > & Dennis have written might be correct. > > The code that configures this is in > EmbeddedCacheManagerConfigurationService: > > GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder(); > ModuleLoader moduleLoader = this.dependencies.getModuleLoader(); > builder.serialization().classResolver(ModularClassResolver.getInstance(moduleLoader)); > > I don't know how you'd get a ModuleLoader from within a WF deployment, > but I'm sure it can be done. GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder(); ClassLoader loader = this.getClass().getClassLoader(); if (loader instanceof ModuleClassLoader) { Module module = ((ModuleClassLoader) loader).getModule(); builder.serialization().classResolver(ModularClassResolver.getInstance(module.getModuleLoader())); } Paul > Tristan > > On 29/09/14 18:57, Paul Ferraro wrote: > > You should not need to use a MuxChannel. This would only be necessary if > > there are other EAP services sharing the channel. Using a MuxChannel > > allows your standalone Infinispan instance to filter these irrelevant > > messages. However, in JDG, there should be no other services other than > > Infinispan using the channel - hence the MuxChannel stuff is unnecessary. > > > > I think Dennis' earlier response was spot on. EAP/JDG configures its cache > > managers using a ModularClassResolver (which includes a module name along > > with the class name when marshalling). Your standalone Infinispan > > instances do not use this and therefore cannot make sense of the message > > body. > > > > Paul > > > > ----- Original Message ----- > >> From: "Kurt T Stam" > >> To: "Stelios Koussouris" , "Radoslav Husar" > >> > >> Cc: "Galder Zamarreño" , "Paul Ferraro" > >> , "Richard Achmatowicz" > >> , "infinispan -Dev List" > >> > >> Sent: Monday, September 29, 2014 11:39:59 AM > >> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan > >> > >> Thanks for following up Stelios, I think Galder is traveling the next 2 > >> weeks. > >> > >> So - do we need fixes on both ends then so that the boot order does not > >> matter? In which project(s) would we apply > >> there changes? Or can they be applied in the end-user's code? > >> > >> Thx, > >> > >> --Kurt > >> > >> > >> > >> On 9/26/14, 11:19 AM, Stelios Koussouris wrote: > >>> Hi, > >>> > >>> Rado: It is both ways. ie. if I start first the JDG Server I get the > >>> issue > >>> on the library mode side when I start that one. If reverse the order of > >>> startup I get it in the JDG Server side. > >>> > >>> Question: > >>> ----------------------------------------------------------------------------------------------------------------------- > >>> ...IMO the channel needs to be wrapped as > >>> org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. > >>> ... > >>> ----------------------------------------------------------------------------------------------------------------------- > >>> For now that this is not being done. If I wanted to do it manually on the > >>> library side where I can create the protocol programmatically we are > >>> talking about something like this?
> >>> > >>> ProtocolStackConfigurator configurator = > >>> ConfiguratorFactory.getStackConfigurator("jgroups-udp.xml"); > >>> MuxChannel channel = new MuxChannel(configurator); > >>> org.infinispan.remoting.transport.Transport transport = new > >>> org.infinispan.remoting.transport.jgroups.JGroupsTransport(channel); > >>> > >>> .... > >>> then replace the below > >>> new > >>> GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport().clusterName("UDM-CLUSTER").addProperty("configurationFile", > >>> "jgroups-udp.xml") > >>> > >>> WITH > >>> new > >>> GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics().cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable().transport(Transport).clusterName("UDM-CLUSTER") > >>> > >>> Btw, someone mentioned that if I follow this method I need to know the > >>> assigned mux ids, but that is not quite clear what it means with regards > >>> to the JGroupsTransport configuration > >>> > >>> Thanks, > >>> > >>> Stylianos Kousouris > >>> Red Hat Middleware Consultant > >>> > >>> ----- Original Message ----- > >>> From: "Radoslav Husar" > >>> To: "Galder Zamarreño" , "Paul Ferraro" > >>> > >>> Cc: "Richard Achmatowicz" , "infinispan -Dev List" > >>> , "Stelios Koussouris" > >>> , "Kurt T Stam" > >>> Sent: Friday, 26 September, 2014 3:47:16 PM > >>> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan > >>> > >>> From what Stelios is telling me the question is a little bit other way > >>> round: he is using library mode infinispan and jgroups in EAP and > >>> connecting to JDG. So the question is what JDG is doing with the stack, > >>> not AS/WF as its infinispan/jgroups subsystem is not used. > >>> > >>> Unfortunately I don't have access to the JDG repo so I don't know what > >>> changes have been made there but if you are using the same jgroups > >>> logic, IMO the channel needs to be wrapped as > >>> org.jboss.as.clustering.jgroups.MuxChannel before passing to infinispan. > >>> > >>> Rado > >>> > >>> On 26/09/14 15:03, Galder Zamarreño wrote: > >>>> Hey Paul, > >>>> > >>>> In the last couple of days, a couple of people have encountered the > >>>> exception in [1] when trying to cluster a standalone Infinispan app with > >>>> its own JGroups configuration file with an AS/WF running Infinispan > >>>> cache. > >>>> > >>>> From my POV, 3 possible causes: > >>>> > >>>> 1. Dependency mismatches between AS/WF and the standalone app. Having > >>>> done > >>>> some quick study of Kurt's case, apart from micro version changes, all > >>>> looks good. > >>>> > >>>> 2. Mismatch in the Infinispan and/or JGroups configuration file. > >>>> > >>>> 3. AS/WF puts something on the clustered wire that standalone Infinispan > >>>> does not expect. Are you still doing multiplexing? Could you be adding > >>>> extra info to the wire? > >>>> > >>>> With this email, I'm trying to get some clarification from you if the > >>>> issue could be due to 3rd option. If it's either of the first two, it's > >>>> a > >>>> matter of digging and finding the difference, but if it's 3rd one, it's > >>>> more problematic. > >>>> > >>>> Any ideas?
> >>>> > >>>> [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01 > >>>> -- > >>>> Galder Zamarreño > >>>> galder at redhat.com > >>>> twitter.com/galderz > >>>> > >> > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev >
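Putting the fragments from this thread together, below is a minimal sketch of what the library-mode (deployed) side could look like. It is only an illustration of the approach Paul and Tristan describe, not code taken from either project: it assumes the application is deployed inside WF/EAP so that its classloader is an org.jboss.modules.ModuleClassLoader, it reuses the cache manager and cluster names from Stelios' snippet, and the ModularClassResolver import and the ClusterWithServerExample class name are placeholders (the exact module/package the resolver lives in differs between AS/WF/JDG versions).

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
// Assumed location; adjust to whatever module/package ships ModularClassResolver in your AS/WF/JDG version.
import org.jboss.as.clustering.infinispan.ModularClassResolver;
import org.jboss.modules.Module;
import org.jboss.modules.ModuleClassLoader;

public class ClusterWithServerExample {

    public EmbeddedCacheManager createCacheManager() {
        GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder();

        // Same clustered setup Stelios posted, just split over several lines.
        builder.clusteredDefault()
               .globalJmxStatistics()
                   .cacheManagerName("RDSCacheManager")
                   .allowDuplicateDomains(true)
                   .enable()
               .transport()
                   .clusterName("UDM-CLUSTER")
                   .addProperty("configurationFile", "jgroups-udp.xml");

        // Inside a WF/EAP deployment the classloader is a ModuleClassLoader, so the same
        // ModularClassResolver used by the server-side Infinispan subsystem can be configured
        // here. That makes the module-prefixed class names marshalled by the server side
        // readable on this side (and vice versa), which is the mismatch Paul and Dennis
        // pointed at.
        ClassLoader loader = getClass().getClassLoader();
        if (loader instanceof ModuleClassLoader) {
            Module module = ((ModuleClassLoader) loader).getModule();
            builder.serialization()
                   .classResolver(ModularClassResolver.getInstance(module.getModuleLoader()));
        }

        return new DefaultCacheManager(builder.build());
    }
}

For a JVM that is not running on jboss-modules the instanceof check above never fires and the marshalling mismatch Paul describes remains, so this sketch only covers the deployed-in-WF scenario Tristan outlines; whether MuxChannel wrapping is needed at all is answered by Paul above (it is not, as long as nothing else shares the channel).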