From sanne at infinispan.org Fri Aug 1 15:50:06 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Fri, 1 Aug 2014 20:50:06 +0100
Subject: [infinispan-dev] To make it clear which PRs need attention ...
Message-ID:

.. and from who.

It's sometimes unclear which PRs are there in need for review, or which have been commented on and are waiting for fixes / polishing / rebase / denial.

Hope these labels help:
https://github.com/infinispan/infinispan/pulls

And you can bookmark them!
https://github.com/infinispan/infinispan/pulls?q=is%3Aopen+is%3Apr+label%3A%22Ready+for+Review%22

Cheers,
Sanne

From rvansa at redhat.com Mon Aug 4 04:02:44 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Mon, 04 Aug 2014 10:02:44 +0200
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To:
References:
Message-ID: <53DF3E24.8090301@redhat.com>

Great, thanks, Sanne! I was often in doubt what's the actual status of my PR, now I will check these :)

Btw., "Ready for review" suggests that I think that it could be integrated after a proper review. There are situations (such as my [1]) where I need some advice about the PR - should that be considered "Ready for review", or would some label like "Advice/Review requested" fit better? Of course, having a thousand labels is not desirable, that's why I am asking how coarse grained this should be.

Radim

[1] https://github.com/infinispan/infinispan/pull/2585

On 08/01/2014 09:50 PM, Sanne Grinovero wrote:
> [Sanne's announcement and list footer quoted in full here; elided as a duplicate of the first message in this digest]

--
Radim Vansa
JBoss DataGrid QA

From galder at redhat.com Mon Aug 4 06:35:50 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Mon, 4 Aug 2014 12:35:50 +0200
Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567
Message-ID: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com>

Hi,

Dan has reported [1]. It appears as if the last server started in infinispan-as-module-client-integrationtests did not really get killed. From what I see, this kill was done via the specific Ant target present in that Maven module.

I also remembered recently [2] was added. Maybe we need to get as-modules/client to be configured with it so that it properly kills servers?

What I'm not sure is where we'd put it so that it can be consumed both by server/integration/testsuite and as-modules/client. The problem is that the class, as is, brings in an arquillian dependency. If we can separate the arquillian stuff from the actual code, the class itself could maybe go in the commons test source directory?

@Tristan, thoughts?

@Jakub, can I assign this to you?

[1] https://issues.jboss.org/browse/ISPN-4567
[2] https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz
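To make the proposed split concrete: a minimal sketch, assuming the Arquillian-specific glue can be peeled away, of a plain process-kill helper that could live in shared test sources. The class name, the jps-based lookup and the OS-specific kill commands are illustrative assumptions, not the actual InfinispanServerKillProcessor code.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical helper, NOT the real InfinispanServerKillProcessor: finds a
    // JVM by a marker on its command line (via the JDK's jps tool) and kills it
    // hard, roughly what the module's Ant-based kill amounts to.
    public final class ServerProcessKiller {

        private ServerProcessKiller() {}

        public static void killServer(String commandLineMarker) throws Exception {
            Process jps = new ProcessBuilder("jps", "-lm").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(jps.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains(commandLineMarker)) {
                        String pid = line.split("\\s+")[0];
                        boolean win = System.getProperty("os.name").toLowerCase().contains("win");
                        // forceful kill; taskkill /F on Windows, SIGKILL elsewhere
                        String[] cmd = win ? new String[]{"taskkill", "/F", "/PID", pid}
                                           : new String[]{"kill", "-9", pid};
                        new ProcessBuilder(cmd).start().waitFor();
                    }
                }
            }
        }
    }

Because nothing above depends on Arquillian, an Arquillian extension in the server testsuite could simply delegate to it, which is the separation the message proposes.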
From dan.berindei at gmail.com Mon Aug 4 07:54:58 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Mon, 4 Aug 2014 14:54:58 +0300
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To: <53DF3E24.8090301@redhat.com>
References: <53DF3E24.8090301@redhat.com>
Message-ID:

Nice idea! I am using [1] to monitor the PRs I was involved in, which does a pretty good job, but it's annoying that it misses some updates (like the build status, most of the time).

I have one suggestion: most PRs are ready for review the moment they are issued, so I think that should be the default - no label required.
I would add instead a "Do not integrate yet" label :)

[1] https://prs.paas.allizom.org/infinispan/infinispan

On Mon, Aug 4, 2014 at 11:02 AM, Radim Vansa wrote:
>> Great, thanks, Sanne! I was often in doubt what's the actual status of
>> my PR, now I will check these :)
>>
>> Btw., "Ready for review" suggests that I think that it could be
>> integrated after a proper review. There are situations (such as my [1])
>> where I need some advice about the PR - should that be considered "Ready
>> for review", or would some label like "Advice/Review requested" fit
>> better? Of course, having a thousand labels is not desirable, that's why I
>> am asking how coarse grained this should be.
>>
>> Radim
>>
>> [1] https://github.com/infinispan/infinispan/pull/2585
>>
>> On 08/01/2014 09:50 PM, Sanne Grinovero wrote:
>> > [Sanne's announcement and list footers quoted in full here; elided as a duplicate of the first message in this digest]

From ttarrant at redhat.com Mon Aug 4 10:49:51 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 04 Aug 2014 16:49:51 +0200
Subject: [infinispan-dev] Meeting minutes 2014-08-04
Message-ID: <53DF9D8F.1000504@redhat.com>

Minutes: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-04-14.01.html
Minutes (text): http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-04-14.01.txt
Log: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-04-14.01.log.html

Tristan

From sanne at infinispan.org Mon Aug 4 11:09:16 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Mon, 4 Aug 2014 16:09:16 +0100
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To: <53DF3E24.8090301@redhat.com>
References: <53DF3E24.8090301@redhat.com>
Message-ID:

On 4 August 2014 09:02, Radim Vansa wrote:
> Great, thanks, Sanne! I was often in doubt what's the actual status of
> my PR, now I will check these :)
>
> Btw., "Ready for review" suggests that I think that it could be
> integrated after a proper review. There are situations (such as my [1])
> where I need some advice about the PR - should that be considered "Ready
> for review", or would some label like "Advice/Review requested" fit
> better? Of course, having a thousand labels is not desirable, that's why I
> am asking how coarse grained this should be.

Good point, I've created an intense blue label for that.
+1 to not have many labels but we're free to experiment a bit.

Sanne

> Radim
>
> [1] https://github.com/infinispan/infinispan/pull/2585
>
> On 08/01/2014 09:50 PM, Sanne Grinovero wrote:
>> [Sanne's announcement and list footers quoted in full here; elided as a duplicate of the first message in this digest]
>
> --
> Radim Vansa
> JBoss DataGrid QA

From sanne at infinispan.org Mon Aug 4 11:23:19 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Mon, 4 Aug 2014 16:23:19 +0100
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To:
References: <53DF3E24.8090301@redhat.com>
Message-ID:

On 4 August 2014 12:54, Dan Berindei wrote:
> Nice idea! I am using [1] to monitor the PRs I was involved in, which does a
> pretty good job, but it's annoying that it misses some updates (like the
> build status, most of the time).

Nice thing. Were you keeping that for yourself? :-)
A bit slowish to load though, and doesn't give me the quick overview I need, which is to answer the question: "Any PR I can help merging in short time?"

> I have one suggestion: most PRs are ready for review the moment they are
> issued, so I think that should be the default - no label required.
> I would add instead a "Do not integrate yet" label :)

Yes it would be nice to have them added the "Ready for Review" label by default, still I highly prefer having a bold green label so that we can quickly find one to merge.. it's more about the colour codes personally, I don't usually have time for Infinispan PRs and since you all tend to leave them lingering for a long time, it's often hard to find one which I could merge. Normally by the time I find one, my time slot on Infinispan is over so your opportunity to get a merge done is gone ;-)

Another problem we have is that sometimes after a couple of comments it's unclear who needs to act next. Is the PR crap and needs to be rewritten? Is the reviewer done with comments, or was he distracted? etc.. so better flag things visually for the others to be able to help.
I'd prefer to leave the "no label" case to mean something like "needs to be categorised" (labelled), for example what we'd do for a first high level screening for new contributors' PRs.

Cheers,
Sanne

> [1] https://prs.paas.allizom.org/infinispan/infinispan
>
> On Mon, Aug 4, 2014 at 11:02 AM, Radim Vansa wrote:
>> [Radim's message and the nested quotes elided here; duplicates of the messages above]
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From sanne at infinispan.org Mon Aug 4 19:33:17 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Tue, 5 Aug 2014 00:33:17 +0100
Subject: [infinispan-dev] Weird ClassCastException ..
Message-ID:

I'm (rarely) seeing this exception in one of my stress tests.. any clue about what could be wrong?
I reported a similar one approx a year ago, in that case it was a value type being unmarshalled as an instance of Class (was also never resolved).
2014-08-05 00:22:29,521 WARN  [CommandAwareRpcDispatcher] (OOB-1,main-NodeD-22196) ISPN000220: Problems un-marshalling remote command from byte buffer
java.lang.ClassCastException: java.lang.String cannot be cast to org.infinispan.metadata.Metadata
    at org.infinispan.commands.write.PutKeyValueCommand.setParameters(PutKeyValueCommand.java:114)
    at org.infinispan.commands.RemoteCommandsFactory.fromStream(RemoteCommandsFactory.java:138)
    at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:85)
    at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:1)
    at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:409)
    at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:214)
    at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148)
    at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
    at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
    at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
    at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readParameters(ReplicableCommandExternalizer.java:101)
    at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:153)
    at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:1)
    at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:409)
    at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:214)
    at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148)
    at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
    at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
    at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
    at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:135)
    at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
    at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
    at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:204)
    at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460)
    at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377)
    at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
    at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:674)
    at org.jgroups.JChannel.up(JChannel.java:733)
    at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
    at org.jgroups.protocols.RSVP.up(RSVP.java:190)
    at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
    at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
    at org.jgroups.protocols.tom.TOA.up(TOA.java:121)
    at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1041)
    at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
    at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1034)
    at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:752)
    at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:399)
    at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:610)
    at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:297)
    at org.jgroups.protocols.Discovery.up(Discovery.java:245)
    at org.jgroups.protocols.TP.passMessageUp(TP.java:1551)
    at org.jgroups.protocols.TP$MyHandler.run(TP.java:1770)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

From dereed at redhat.com Tue Aug 5 00:06:35 2014
From: dereed at redhat.com (Dennis Reed)
Date: Mon, 04 Aug 2014 23:06:35 -0500
Subject: [infinispan-dev] Weird ClassCastException ..
In-Reply-To:
References:
Message-ID: <53E0584B.1010803@redhat.com>

It looks like the data was written by a different version of PutKeyValueCommand than is trying to read it.

Make sure you're not mixing ISPN versions in the cluster and/or accidentally clustering with another instance outside your test?

-Dennis

On 08/04/2014 06:33 PM, Sanne Grinovero wrote:
> I'm (rarely) seeing this exception in one of my stress tests.. any
> clue about what could be wrong?
> I reported a similar one approx a year ago, in that case it was a
> value type being unmarshalled as an instance of Class (was also never
> resolved).
>
> [stack trace and list footer quoted in full here; elided as identical to the trace above]
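For context on how such a cast failure can arise at all: a write/read order mismatch between the two sides of a marshaller or externalizer makes readObject() consume the bytes of a neighbouring field. The sketch below reproduces the symptom with plain Java serialization; it is only an illustration of the failure mode (the Metadata class here is a stand-in), not the actual root cause of the trace above, which is never spelled out in this thread.

    import java.io.*;

    // Illustration only: the writer emits (key, value, metadata) but the reader
    // expects (key, metadata, value), so the value String gets cast to Metadata.
    public class OrderMismatchDemo {

        static class Metadata implements Serializable {
            final long version;
            Metadata(long version) { this.version = version; }
        }

        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject("key-1");
                out.writeObject("value-1");       // value written second...
                out.writeObject(new Metadata(1L));
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                Object key = in.readObject();
                // ...but read back as if metadata came second:
                // java.lang.ClassCastException: java.lang.String cannot be cast to Metadata
                Metadata metadata = (Metadata) in.readObject();
                Object value = in.readObject();
            }
        }
    }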
From galder at redhat.com Tue Aug 5 04:27:41 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 5 Aug 2014 10:27:41 +0200
Subject: [infinispan-dev] minutes from the monitoring&management meeting
In-Reply-To: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com>
References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com>
Message-ID: <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com>

Can't comment on the document, so here are my thoughts:

Re: "Get rid of lazy cache starting...all the caches run on all nodes...it should still be possible to start a cache at runtime, but it will be run on all nodes as well"

^ Though I like the idea, it might change a crucial aspect of how default cache configuration works (if we keep the concept of a default cache at all). Say you start a cache named "a" for which there's no config. Up until now we'd use the default cache configuration and create a cache "a" with that config. However, if caches are now started cluster-wide, before you can do that, you'd have to check that there's no cache "a" configuration anywhere in the cluster.
If there is, I guess the configuration would be shipped to the node that starts the cache (if it does not have it) and the cache created with it? Or are you assuming all nodes in the cluster must have all configurations defined?

Re: "Revisiting Configuration elements"

If we're going to do another round of updates in this area, I think we should consider what to do with unconfigured values. Back in the 4.x days, the JAXB XML parsing allowed us to know which configuration elements the user had not configured, which helped us tweak configuration and do validation more easily. Now, when we look at a Configuration builder object, we see default values, but we do not know whether a value is the one it is because the user has specifically defined it, or because it's unconfigured. One way to do so is by separating out the default values, say into an XML file which is referenced (I think WF does something along these lines), and leaving the builder object with all null values. This would make it easy to figure out which elements have been touched and, for those that have not, use default values. This has popped up in the forums before but I can't find a link right now...

Cheers,

On 28 Jul 2014, at 17:04, Mircea Markus wrote:
> Hi,
>
> Tristan, Sanne, Gustavo and I met last week to discuss a) Infinispan usability and b) monitoring and management. Minutes attached.
>
> https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing
>
> Cheers,
> --
> Mircea Markus
> Infinispan lead (www.infinispan.org)
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz
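A minimal sketch of the "know what the user actually set" idea (hypothetical names, not an actual Infinispan API): each builder attribute remembers whether it was explicitly written, which gives validation the same kind of information the old JAXB-based parsing used to provide.

    // Hypothetical sketch: a builder attribute that records whether it was set.
    public final class Attribute<T> {
        private final T defaultValue;
        private T value;
        private boolean modified;

        public Attribute(T defaultValue) {
            this.defaultValue = defaultValue;
            this.value = defaultValue;
        }

        public void set(T newValue) { value = newValue; modified = true; }
        public T get() { return value; }
        public T getDefault() { return defaultValue; }
        public boolean isModified() { return modified; }
    }

A validator can then ask attribute.isModified() instead of guessing from the value itself, and attributes left untouched can fall back to externally defined defaults or inherited templates.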
From sanne at infinispan.org Tue Aug 5 05:51:07 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Tue, 5 Aug 2014 10:51:07 +0100
Subject: [infinispan-dev] Weird ClassCastException ..
In-Reply-To: <53E0584B.1010803@redhat.com>
References: <53E0584B.1010803@redhat.com>
Message-ID:

Hi Dennis, thanks for the ideas! But that's not possible. This is a single unit test, run from a flat classpath, bound to localhost and with no other JVMs running on the same machine.
The only thing making it "special" is that it's a rather long stress test: it loops at high speed for at least 10 minutes before such exceptions are thrown.
I suspect the stream is somehow corrupted under load, getting it to invoke the wrong unmarshaller combination.

On 5 August 2014 05:06, Dennis Reed wrote:
> It looks like the data was written by a different version of
> PutKeyValueCommand than is trying to read it.
>
> Make sure you're not mixing ISPN versions in the cluster and/or
> accidentally clustering with another instance outside your test?
>
> -Dennis
>
> On 08/04/2014 06:33 PM, Sanne Grinovero wrote:
>> I'm (rarely) seeing this exception in one of my stress tests.. any
>> clue about what could be wrong?
>> I reported a similar one approx a year ago, in that case it was a
>> value type being unmarshalled as an instance of Class (was also never
>> resolved).
>>
>> [stack trace and list footers quoted in full here; elided as identical to the trace above]

From galder at redhat.com Tue Aug 5 06:13:26 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 5 Aug 2014 12:13:26 +0200
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To:
References:
Message-ID: <85D8C8D4-AC41-43BA-8032-31E398BA8E6D@redhat.com>

Great use of the new PR labels, thanks a lot Sanne!! :)

On 01 Aug 2014, at 21:50, Sanne Grinovero wrote:
> [Sanne's announcement and list footer quoted in full here; elided as a duplicate of the first message in this digest]

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From galder at redhat.com Tue Aug 5 06:14:17 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 5 Aug 2014 12:14:17 +0200
Subject: [infinispan-dev] To make it clear which PRs need attention ...
In-Reply-To:
References:
Message-ID:

Also great to see that you can finally see at a glance to whom a PR is assigned.

On 01 Aug 2014, at 21:50, Sanne Grinovero wrote:
> [Sanne's announcement and list footer quoted in full here; elided as a duplicate of the first message in this digest]

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From ttarrant at redhat.com Tue Aug 5 07:41:49 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Tue, 05 Aug 2014 13:41:49 +0200
Subject: [infinispan-dev] Infinispan 7.0 schedule
Message-ID: <53E0C2FD.60701@redhat.com>

Hi all,

I have updated the version schedule for Infinispan 7.0 [1].
It is obviously tentative for now, but here's the plan:

7.0.0.Beta1   08/Aug/14
7.0.0.Beta2   05/Sep/14
7.0.0.CR1     19/Sep/14
7.0.0.CR2     03/Oct/14
7.0.0.Final   17/Oct/14

Tristan

[1] https://issues.jboss.org/browse/ISPN/?selectedTab=com.atlassian.jira.jira-projects-plugin:versions-panel

From galder at redhat.com Tue Aug 5 09:01:59 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 5 Aug 2014 15:01:59 +0200
Subject: [infinispan-dev] Weird ClassCastException ..
In-Reply-To:
References:
Message-ID: <570F2383-BCA7-4F8E-8083-05918B274F5D@redhat.com>

On 05 Aug 2014, at 01:33, Sanne Grinovero wrote:
> I'm (rarely) seeing this exception in one of my stress tests.. any
> clue about what could be wrong?

Hmmm, it smells like a concurrency issue, e.g. buffer mixup, in either jboss marshalling, jgroups or the externalizer layer in Infinispan.

> I reported a similar one approx a year ago, in that case it was a
> value type being unmarshalled as an instance of Class (was also never
> resolved).

^ Do you have a JIRA for it?

Please definitely create one for this new CCE.

Cheers,

> [stack trace and list footer quoted in full here; elided as identical to the trace above]

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From sanne at infinispan.org Tue Aug 5 10:02:45 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Tue, 5 Aug 2014 15:02:45 +0100
Subject: [infinispan-dev] Weird ClassCastException ..
In-Reply-To: <570F2383-BCA7-4F8E-8083-05918B274F5D@redhat.com>
References: <570F2383-BCA7-4F8E-8083-05918B274F5D@redhat.com>
Message-ID:

I've figured it out. I'll not explain it yet, it's such a nice puzzler :-P
@Galder you were close: let me say JGroups is the only one not related.

On 5 August 2014 14:01, Galder Zamarreño wrote:
> On 05 Aug 2014, at 01:33, Sanne Grinovero wrote:
>> I'm (rarely) seeing this exception in one of my stress tests.. any
>> clue about what could be wrong?
>
> Hmmm, it smells like a concurrency issue, e.g. buffer mixup, in either jboss marshalling, jgroups or the externalizer layer in Infinispan.
>
>> I reported a similar one approx a year ago, in that case it was a
>> value type being unmarshalled as an instance of Class (was also never
>> resolved).
>
> ^ Do you have a JIRA for it?
>
> Please definitely create one for this new CCE.
> Cheers,
>
>> [stack trace and list footers quoted in full here; elided as identical to the trace above]
>
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From galder at redhat.com Tue Aug 5 11:20:05 2014
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 5 Aug 2014 17:20:05 +0200
Subject: [infinispan-dev] Handling dynamic interdependencies of Hot Rod servers and filter/converters - Re: ISPN-3950
Message-ID: <0C51B8EA-5642-4083-9431-29B57E44DC36@redhat.com>

Hi guys,

Re: https://issues.jboss.org/browse/ISPN-3950

The aim here is to enable deployment of custom Hot Rod filter/converter instances to Hot Rod servers running on top of WF, following a similar model to that used for JDBC drivers.

I came up with an initial solution in [1] but it's incomplete because it can only handle a single Hot Rod server definition [2].

Digging through this issue has uncovered a little can of worms:
- It's possible not only to have N Hot Rod connectors (read: servers) defined in the configuration, but they can be added at runtime too via CLI/RHQ.
- Deployments of filter/converters can happen anytime, so we've enabled Hot Rod servers to be plugged with these at runtime. For the time being, we've agreed that if a filter/converter gets deployed, it'll be applied to all Hot Rod servers.

I had an earlier chat with Emmanuel and he suggested having some kind of deployment processor that deals with the filter/converter deployments and registers them in some service.

The fun begins now: from my POV, it seems to me that Hot Rod connectors need to depend on the filter/converter deployment tracking service, so that if a new Hot Rod connector is added at runtime, it can apply all the filter/converter deployments to itself. However, the opposite is also plausible: that the filter/converter tracking service depends on all Hot Rod connector services, and that when a filter/converter is deployed, it can be applied to all running Hot Rod servers. This seems like a chicken and egg problem :|

What would be the best approach to handle these types of scenarios?

Cheers,

[1] https://github.com/galderz/infinispan/commit/0a3b37fdab05603654ee81d9ff38784e3283a708
[2] https://github.com/infinispan/infinispan/pull/2742#discussion_r15750154

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz
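One common way out of this kind of chicken-and-egg dependency is to have both sides depend on a third service: a registry that records deployed filter/converter factories and replays them to connectors that start later, so neither side needs to exist before the other. A rough sketch under those assumptions (hypothetical names, not the actual WildFly/MSC integration):

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Hypothetical registry: deployments and connectors both register here,
    // in any order; late-starting connectors get existing deployments replayed.
    final class FilterConverterRegistry {

        interface HotRodConnector {
            void installFilterConverter(String name, Object factory);
        }

        private final Map<String, Object> deployed = new LinkedHashMap<>();
        private final List<HotRodConnector> connectors = new CopyOnWriteArrayList<>();

        // Called by the deployment processor when a filter/converter is deployed.
        synchronized void filterConverterDeployed(String name, Object factory) {
            deployed.put(name, factory);
            for (HotRodConnector c : connectors) {
                c.installFilterConverter(name, factory); // push to all running servers
            }
        }

        // Called by each Hot Rod connector when it starts, including ones added
        // at runtime via CLI/RHQ: replay everything deployed so far.
        synchronized void connectorStarted(HotRodConnector connector) {
            connectors.add(connector);
            for (Map.Entry<String, Object> e : deployed.entrySet()) {
                connector.installFilterConverter(e.getKey(), e.getValue());
            }
        }
    }

With this shape, neither the connectors nor the deployment-tracking side has a hard start-order dependency on the other; both only depend on the registry.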
From pedro at infinispan.org Tue Aug 5 13:49:06 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Tue, 05 Aug 2014 18:49:06 +0100
Subject: [infinispan-dev] Testsuite again...
Message-ID: <53E11912.1020008@infinispan.org>

Hi,

OSGi integration tests are blocking the test suite. It is failing with:

java.lang.Exception: Could not start bundle mvn:org.infinispan/infinispan-cachestore-jpa/7.0.0-SNAPSHOT in feature(s) infinispan-cachestore-jpa-7.0.0-SNAPSHOT: Unresolved constraint in bundle org.infinispan.cachestore-jpa [70]: Unable to resolve 70.0: missing requirement [70.0] osgi.wiring.package; (&(osgi.wiring.package=javax.persistence)(version>=2.1.0)(version<=2.1.0))

Also, the security integration test has a failure:

NodeAuthenticationKrbPassIT.testReadItemOnJoiningNode:71->AbstractNodeAuthentication.testReadItemOnJoiningNode:94 expected: but was:

Pedro

From ttarrant at redhat.com Tue Aug 5 14:44:07 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Tue, 05 Aug 2014 20:44:07 +0200
Subject: [infinispan-dev] Testsuite again...
In-Reply-To: <53E11912.1020008@infinispan.org>
References: <53E11912.1020008@infinispan.org>
Message-ID: <53E125F7.1010100@redhat.com>

The security issue is weird: it works fine locally. Is there a way to do builds on CI on a custom branch?

Tristan

On 05/08/14 19:49, Pedro Ruivo wrote:
> [Pedro's message quoted in full here; elided as a duplicate of the message above]

From isavin at redhat.com Wed Aug 6 04:01:37 2014
From: isavin at redhat.com (Ion Savin)
Date: Wed, 06 Aug 2014 11:01:37 +0300
Subject: [infinispan-dev] Testsuite again...
In-Reply-To: <53E11912.1020008@infinispan.org>
References: <53E11912.1020008@infinispan.org>
Message-ID: <53E1E0E1.6000800@redhat.com>

> OSGi integration tests are blocking the test suite. It is failing with:
>
> java.lang.Exception: Could not start bundle
> mvn:org.infinispan/infinispan-cachestore-jpa/7.0.0-SNAPSHOT in
> feature(s) infinispan-cachestore-jpa-7.0.0-SNAPSHOT: Unresolved
> constraint in bundle org.infinispan.cachestore-jpa [70]: Unable to
> resolve 70.0: missing requirement [70.0] osgi.wiring.package;
> (&(osgi.wiring.package=javax.persistence)(version>=2.1.0)(version<=2.1.0))

The failure is triggered by the ORM version update. Opened PR: https://github.com/infinispan/infinispan/pull/2777

(Need a bit of feedback on whether the ISPN dep jpa-api-2.0 should be bumped to jpa-api-2.1 or not.)

--
Ion Savin

From sanne at infinispan.org Wed Aug 6 05:24:05 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 6 Aug 2014 10:24:05 +0100
Subject: [infinispan-dev] Testsuite again...
In-Reply-To: <53E1E0E1.6000800@redhat.com>
References: <53E11912.1020008@infinispan.org> <53E1E0E1.6000800@redhat.com>
Message-ID:

On 6 August 2014 09:01, Ion Savin wrote:
>> OSGi integration tests are blocking the test suite. It is failing with:
>>
>> java.lang.Exception: Could not start bundle
>> mvn:org.infinispan/infinispan-cachestore-jpa/7.0.0-SNAPSHOT in
>> feature(s) infinispan-cachestore-jpa-7.0.0-SNAPSHOT: Unresolved
>> constraint in bundle org.infinispan.cachestore-jpa [70]: Unable to
>> resolve 70.0: missing requirement [70.0] osgi.wiring.package;
>> (&(osgi.wiring.package=javax.persistence)(version>=2.1.0)(version<=2.1.0))
>
> The failure is triggered by the ORM version update. Opened PR:

I'm sorry, my fault. Still puzzled at why this could slip through my test runs; the cause might be that this is actually a rather old changeset which I've recently rebased.

> https://github.com/infinispan/infinispan/pull/2777
>
> (Need a bit of feedback on whether the ISPN dep jpa-api-2.0 should be
> bumped to jpa-api-2.1 or not.)

+1 to update, these APIs are strictly backwards compatible so it just unlocks new features to the users, with no drawbacks.
Also it is the API version used by WildFly, I don't think we want to use a different version.

Sanne

> --
> Ion Savin
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From pedro at infinispan.org Wed Aug 6 05:35:58 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Wed, 06 Aug 2014 10:35:58 +0100
Subject: [infinispan-dev] Testsuite again...
In-Reply-To:
References: <53E11912.1020008@infinispan.org> <53E1E0E1.6000800@redhat.com>
Message-ID: <53E1F6FE.40104@infinispan.org>

On 08/06/2014 10:24 AM, Sanne Grinovero wrote:
> I'm sorry, my fault. Still puzzled at why this could slip through my
> test runs; the cause might be that this is actually a rather old
> changeset which I've recently rebased.

No problem :)

> [remainder of Sanne's message and the nested quotes elided here; duplicates of the messages above]

From pedro at infinispan.org Wed Aug 6 05:38:23 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Wed, 06 Aug 2014 10:38:23 +0100
Subject: [infinispan-dev] Testsuite again...
In-Reply-To: <53E125F7.1010100@redhat.com>
References: <53E11912.1020008@infinispan.org> <53E125F7.1010100@redhat.com>
Message-ID: <53E1F78F.3070005@infinispan.org>

On 08/05/2014 07:44 PM, Tristan Tarrant wrote:
> The security issue is weird: it works fine locally. Is there a way to do
> builds on CI on a custom branch?

It is only failing one test on my machine, but in CI it fails 4 tests: http://ci.infinispan.org/viewLog.html?buildId=10533&tab=buildResultsDiv&buildTypeId=bt8
Do we need to configure something for the security tests?

Yes, I think it is possible. I think, if you have the TeamCity plugin in IntelliJ, you can send that branch to CI...

> Tristan
>
> On 05/08/14 19:49, Pedro Ruivo wrote:
>> [Pedro's original message quoted in full here; elided as a duplicate of the message above]

From bban at redhat.com Wed Aug 6 06:02:08 2014
From: bban at redhat.com (Bela Ban)
Date: Wed, 06 Aug 2014 12:02:08 +0200
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
Message-ID: <53E1FD20.5020504@redhat.com>

Seems like this discussion has died with the general agreement that this is broken and with a few proposals on how to fix it, but without any follow-up action items.

I think we (= someone from the ISPN team) need to create a JIRA, preferably blocking.

WDYT ?

If not, here's what our options are:

#1 I'll create a JIRA
#2 We'll hold the team meeting in Krasnojarsk, Russia
#3 There will be only vodka, no beers in #2
#4 Bela will join the ISPN team

Thoughts ?

--
Bela Ban, JGroups lead (http://www.jgroups.org)

From dan.berindei at gmail.com Wed Aug 6 10:13:16 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Wed, 6 Aug 2014 17:13:16 +0300
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: <53E1FD20.5020504@redhat.com>
References: <53E1FD20.5020504@redhat.com>
Message-ID:

I could create the issue in JIRA, but I wouldn't make it high priority because I think it'd have lots of corner cases with NBST and cause headaches for the maintainers of state transfer ;)

Besides, I'm still not sure I understood your proposals properly, e.g. whether they are meant only for non-tx caches or you want to change something for tx caches as well...

Cheers
Dan

On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban wrote:
> Seems like this discussion has died with the general agreement that this
> is broken and with a few proposals on how to fix it, but without any
> follow-up action items.
>
> I think we (= someone from the ISPN team) need to create a JIRA,
> preferably blocking.
>
> WDYT ?
> > If not, here's what our options are: > > #1 I'll create a JIRA > > #2 We'll hold the team meeting in Krasnojarsk, Russia > > #3 There will be only vodka, no beers in #2 > > #4 Bela will join the ISPN team > > Thoughts ? > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140806/91a263de/attachment.html From bban at redhat.com Wed Aug 6 11:19:50 2014 From: bban at redhat.com (Bela Ban) Date: Wed, 06 Aug 2014 17:19:50 +0200 Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution In-Reply-To: References: <53E1FD20.5020504@redhat.com> Message-ID: <53E24796.20104@redhat.com> Hey Dan, On 06/08/14 16:13, Dan Berindei wrote: > I could create the issue in JIRA, but I wouldn't make it high priority > because I think it have lots of corner cases with NBST and cause > headaches for the maintainers of state transfer ;) I do believe the put-while-holding-the-lock issue *is* a critical issue; anyone banging a cluster of Infinispan nodes with more than 1 thread will run into lock timeouts, with or without transactions. The only workaround for now is to use total order, but at the cost of reduced performance. However, once a system starts hitting the lock timeout issues, performance drops to a crawl, way slower than TO, and work starts to pile up, which compounds the problem. I believe doing a sync RPC while holding the lock on a key is asking for trouble and is (IMO) an anti-pattern. Sorry if this has a negative impact on NBST, but should we not fix this because we don't want to risk a change to NBST ? > Besides, I'm still not sure I understood your proposals properly, e.g. > whether they are meant only for non-tx caches or you want to change > something for tx caches as well... I think this can be used for both cases; however, I think either Sanne's solution of using seqnos *per key* and updating in the order of seqnos or using Pedro's total order impl are probably better solutions. I'm not pretending these solutions are final (e.g. Sanne's solution needs more thought when multiple keys are involved), but we should at least acknowledge the issue exists, create a JIRA to prioritize it and then start discussing solutions. > Cheers > Dan > > > > > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban > wrote: > > Seems like this discussion has died with the general agreement that this > is broken and with a few proposals on how to fix it, but without any > follow-up action items. > > I think we (= someone from the ISPN team) need to create a JIRA, > preferably blocking. > > WDYT ? > > If not, here's what our options are: > > #1 I'll create a JIRA > > #2 We'll hold the team meeting in Krasnojarsk, Russia > > #3 There will be only vodka, no beers in #2 > > #4 Bela will join the ISPN team > > Thoughts ? 
--
Bela Ban, JGroups lead (http://www.jgroups.org)

From pedro at infinispan.org  Wed Aug  6 11:42:23 2014
From: pedro at infinispan.org (Pedro Ruivo)
Date: Wed, 06 Aug 2014 16:42:23 +0100
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: <53E24796.20104@redhat.com>
References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com>
Message-ID: <53E24CDF.3040001@infinispan.org>

On 08/06/2014 04:19 PM, Bela Ban wrote:
>
> I think this can be used for both cases; however, I think either Sanne's
> solution of using seqnos *per key* and updating in the order of seqnos
> or using Pedro's total order impl are probably better solutions.
>
> I'm not pretending these solutions are final (e.g. Sanne's solution
> needs more thought when multiple keys are involved), but we should at
> least acknowledge the issue exists, create a JIRA to prioritize it and
> then start discussing solutions.
>

I'm not sure if Sanne's suggestion will work. Imagine the following scenario:

* a thread pool with a single thread (for simplicity)
* 3 nodes: A, B and C.
* A is the primary owner of K1 and backup owner of K2
* B is the primary owner of K2 and backup owner of K1

NodeC requests two puts concurrently, to K1 and K2. Both NodeA and NodeB
will process the request, assign a sequence number and send it to the
backup owners. In this case, we have a deadlock again, because the send
to the backup owner is synchronous: the thread pool is exhausted, since
the only thread is blocked waiting for the reply from the backup owner.

Any thoughts on how we can solve this? Also, will state transfer need to
be adapted to the new behaviour?

Pedro

From dan.berindei at gmail.com  Wed Aug  6 13:49:20 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Wed, 6 Aug 2014 20:49:20 +0300
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: <53E24796.20104@redhat.com>
References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com>
Message-ID: 

On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban wrote:
> Hey Dan,
>
> On 06/08/14 16:13, Dan Berindei wrote:
> > I could create the issue in JIRA, but I wouldn't make it high priority
> > because I think it has lots of corner cases with NBST and will cause
> > headaches for the maintainers of state transfer ;)
>
> I do believe the put-while-holding-the-lock issue *is* a critical issue;
> anyone banging a cluster of Infinispan nodes with more than 1 thread
> will run into lock timeouts, with or without transactions. The only
> workaround for now is to use total order, but at the cost of reduced
> performance. However, once a system starts hitting the lock timeout
> issues, performance drops to a crawl, way slower than TO, and work
> starts to pile up, which compounds the problem.

I wouldn't call it critical because you can always increase the number of
threads. It won't be pretty, but it will work around the thread exhaustion
issue.

> I believe doing a sync RPC while holding the lock on a key is asking for
> trouble and is (IMO) an anti-pattern.

We also hold a lock on a key between the LockControlCommand and the
TxCompletionNotificationCommand in pessimistic-locking caches, and there's
at least one sync PrepareCommand RPC between them...

So I don't see it as an anti-pattern; the only problem is that we should be
able to do that without blocking internal threads in addition to the user
thread (which is how tx caches do it).

> Sorry if this has a negative impact on NBST, but should we not fix this
> because we don't want to risk a change to NBST ?
> I'm not saying it will have a negative impact on NBST, I'm just saying I don't want to start implementing an incomplete proposal for the basic flow and leave the state transfer/topology change issues for "later". When happens when a node leaves, when a backup owner is added, or when the primary owner changes should be part of the initial discussion, not an afterthought. E.g. with your proposal, any updates in the replication queue on the primary owner will be lost when that primary owner dies, even though we told the user that we successfully updated the key. To quote from my first email on this thread: "OTOH, if the primary owner dies, we have to ask a backup, and we can lose the modifications not yet replicated by the primary." With Sanne's proposal, we wouldn't report to the user that we stored the value until all the backups confirmed the update, so we wouldn't have that problem. But I don't see how we could keep the sequence of versions monotonous when the primary owner of the key changes without some extra sync RPCs (also done while holding the key lock). IIRC TOA also needs some sync RPCs to generate its sequence numbers. > > Besides, I'm still not sure I understood your proposals properly, e.g. > > whether they are meant only for non-tx caches or you want to change > > something for tx caches as well... > > I think this can be used for both cases; however, I think either Sanne's > solution of using seqnos *per key* and updating in the order of seqnos > or using Pedro's total order impl are probably better solutions. > > I'm not pretending these solutions are final (e.g. Sanne's solution > needs more thought when multiple keys are involved), but we should at > least acknowledge the issue exists, create a JIRA to prioritize it and > then start discussing solutions. > > We've been discussing solutions without a JIRA just fine :) My feeling so far is that the thread exhaustion problem would be better served by porting TO to non-tx caches and/or changing non-tx locking to not require a thread. I have created an issue for TO [1], but IMO the locking rework [2] should be higher priority, as it can help both tx and non-tx caches. [1] https://issues.jboss.org/browse/ISPN-4610 [2] https://issues.jboss.org/browse/ISPN-2849 > > > > > > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban > > wrote: > > > > Seems like this discussion has died with the general agreement that > this > > is broken and with a few proposals on how to fix it, but without any > > follow-up action items. > > > > I think we (= someone from the ISPN team) need to create a JIRA, > > preferably blocking. > > > > WDYT ? > > > > If not, here's what our options are: > > > > #1 I'll create a JIRA > > > > #2 We'll hold the team meeting in Krasnojarsk, Russia > > > > #3 There will be only vodka, no beers in #2 > > > > #4 Bela will join the ISPN team > > > > Thoughts ? > > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140806/213201e6/attachment.html From dan.berindei at gmail.com Wed Aug 6 14:04:57 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 6 Aug 2014 21:04:57 +0300 Subject: [infinispan-dev] Weird ClassCastException .. 
In-Reply-To: References: <570F2383-BCA7-4F8E-8083-05918B274F5D@redhat.com> Message-ID: Sanne, you really need to say more, I was hoping to see more information in the PR/JIRA but both descriptions are very terse :) If I understand correctly, the problem is that you're keeping a reference to a cache value and modifying it without a put(), which could break marshalling for a concurrent put()? Cheers Dan On Tue, Aug 5, 2014 at 5:02 PM, Sanne Grinovero wrote: > I've figured it out. > I'll not explain it yet, it's such a nice puzzler :-P > @Galder you were close: let me say JGroups is the only one not related. > > On 5 August 2014 14:01, Galder Zamarre?o wrote: > > > > On 05 Aug 2014, at 01:33, Sanne Grinovero wrote: > > > >> I'm (rarely) seeing this exception in one of my stress tests.. any > >> clue about what could be wrong? > > > > Hmmm, it smells like a concurrency issue, e.g. buffer mixup, in either > jboss marshalling, jgroups or the externalizer layer in Infinispan. > > > >> I reported a similar one approx a year ago, in that case it was a > >> value type being unmarshalled as an instance of Class (was also never > >> resolved). > > > > ^ Do you have a JIRA for it? > > > > Please definitely create one for this new CCE. > > > > Cheers, > > > >> > >> 2014-08-05 00:22:29,521 WARN [CommandAwareRpcDispatcher] > >> (OOB-1,main-NodeD-22196) ISPN000220: Problems un-marshalling remote > >> command from byte buffer > >> java.lang.ClassCastException: java.lang.String cannot be cast to > >> org.infinispan.metadata.Metadata > >> at > org.infinispan.commands.write.PutKeyValueCommand.setParameters(PutKeyValueCommand.java:114) > >> at > org.infinispan.commands.RemoteCommandsFactory.fromStream(RemoteCommandsFactory.java:138) > >> at > org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:85) > >> at > org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:1) > >> at > org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:409) > >> at > org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:214) > >> at > org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148) > >> at > org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351) > >> at > org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209) > >> at > org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41) > >> at > org.infinispan.marshall.exts.ReplicableCommandExternalizer.readParameters(ReplicableCommandExternalizer.java:101) > >> at > org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:153) > >> at > org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:1) > >> at > org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:409) > >> at > org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:214) > >> at > org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:148) > >> at > org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351) > >> at > org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209) > >> at > 
org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41) > >> at > org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:135) > >> at > org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101) > >> at > org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80) > >> at > org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28) > >> at > org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:204) > >> at > org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) > >> at > org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) > >> at > org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250) > >> at > org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:674) > >> at org.jgroups.JChannel.up(JChannel.java:733) > >> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030) > >> at org.jgroups.protocols.RSVP.up(RSVP.java:190) > >> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165) > >> at org.jgroups.protocols.FlowControl.up(FlowControl.java:390) > >> at org.jgroups.protocols.tom.TOA.up(TOA.java:121) > >> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1041) > >> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234) > >> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1034) > >> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:752) > >> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:399) > >> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:610) > >> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:297) > >> at org.jgroups.protocols.Discovery.up(Discovery.java:245) > >> at org.jgroups.protocols.TP.passMessageUp(TP.java:1551) > >> at org.jgroups.protocols.TP$MyHandler.run(TP.java:1770) > >> at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > >> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > >> at java.lang.Thread.run(Thread.java:745) > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140806/d0e1efae/attachment-0001.html From sanne at infinispan.org Wed Aug 6 14:26:00 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 6 Aug 2014 19:26:00 +0100 Subject: [infinispan-dev] Weird ClassCastException .. 
In-Reply-To: 
References: <570F2383-BCA7-4F8E-8083-05918B274F5D@redhat.com>
Message-ID: 

On 6 August 2014 19:04, Dan Berindei wrote:
> Sanne, you really need to say more, I was hoping to see more information in
> the PR/JIRA but both descriptions are very terse :)

Hum, OK, if I need to spoil the fun :)

> If I understand correctly, the problem is that you're keeping a reference to
> a cache value and modifying it without a put(), which could break
> marshalling for a concurrent put()?

Exactly. My problem was that I would write a concurrent structure, but
without preventing this structure from being manipulated from different
threads. The marshaller would first write the size, then iterate on the
elements and write each of them, so there could be a mismatch between
the written size and the actually written entries, causing such weird
exceptions.

I guess the idea of writing a concurrent map while keeping a reference
to it might be a dumb mistake, but the exception and error happening in
such a different area makes it "interesting" to correlate to the actual
cause.

Cheers,
Sanne
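[For context, the failure mode Sanne describes is easy to reproduce outside
Infinispan. The following self-contained sketch is not Infinispan's
marshaller -- all names are invented -- but it uses the same
size-prefix-then-iterate pattern, and shows how mutating a ConcurrentHashMap
that is concurrently being "serialized" yields a size prefix that disagrees
with the entries actually written:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SizePrefixRace {

    // Mimics "write the size, then write each element": if the map is
    // mutated between the two steps, the prefix and the payload disagree.
    static String write(Map<String, String> map) {
        StringBuilder out = new StringBuilder();
        out.append(map.size()).append('|');   // declared size
        map.forEach((k, v) -> out.append(k).append('=').append(v).append(';'));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> shared = new ConcurrentHashMap<>();
        shared.put("k0", "v0");

        // Another thread keeps mutating the same instance we "stored".
        Thread mutator = new Thread(() -> {
            for (int i = 1; i < 1_000_000; i++) {
                shared.put("k" + i, "v" + i);
            }
        });
        mutator.start();

        for (int i = 0; i < 10_000; i++) {
            String buf = write(shared);
            int declared = Integer.parseInt(buf.substring(0, buf.indexOf('|')));
            long actual = buf.chars().filter(c -> c == ';').count();
            if (declared != actual) {          // a corrupt "stream"
                System.out.println("mismatch: declared=" + declared + ", written=" + actual);
                break;
            }
        }
        mutator.join();
    }
}

A reader of such a corrupt stream fails far away from the writing thread,
which is presumably why the ClassCastException surfaced at the Metadata
field rather than anywhere near the map.]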
From sanne at infinispan.org  Wed Aug  6 14:50:43 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 6 Aug 2014 19:50:43 +0100
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: 
References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com>
Message-ID: 

On 6 August 2014 18:49, Dan Berindei wrote:
>
> On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban wrote:
>>
>> Hey Dan,
>>
>> On 06/08/14 16:13, Dan Berindei wrote:
>> > I could create the issue in JIRA, but I wouldn't make it high priority
>> > because I think it has lots of corner cases with NBST and will cause
>> > headaches for the maintainers of state transfer ;)
>>
>> I do believe the put-while-holding-the-lock issue *is* a critical issue;
>> anyone banging a cluster of Infinispan nodes with more than 1 thread
>> will run into lock timeouts, with or without transactions. The only
>> workaround for now is to use total order, but at the cost of reduced
>> performance. However, once a system starts hitting the lock timeout
>> issues, performance drops to a crawl, way slower than TO, and work
>> starts to pile up, which compounds the problem.
>
> I wouldn't call it critical because you can always increase the number of
> threads. It won't be pretty, but it will work around the thread exhaustion
> issue.

If Infinispan doesn't do it automatically, I wouldn't count that as a
solution. Consider that the project goal is to make it easy to scale
up/down dynamically: if it requires experts to be alert all the time
for such manual interventions, it's a failure.
Besides, I can count the people able to figure such trouble out on a
single hand.. so I agree this is critical.

>> I believe doing a sync RPC while holding the lock on a key is asking for
>> trouble and is (IMO) an anti-pattern.
>
> We also hold a lock on a key between the LockControlCommand and the
> TxCompletionNotificationCommand in pessimistic-locking caches, and there's
> at least one sync PrepareCommand RPC between them...
>
> So I don't see it as an anti-pattern; the only problem is that we should be
> able to do that without blocking internal threads in addition to the user
> thread (which is how tx caches do it).
>
>> Sorry if this has a negative impact on NBST, but should we not fix this
>> because we don't want to risk a change to NBST ?
>
> I'm not saying it will have a negative impact on NBST, I'm just saying I
> don't want to start implementing an incomplete proposal for the basic flow
> and leave the state transfer/topology change issues for "later". What
> happens when a node leaves, when a backup owner is added, or when the
> primary owner changes should be part of the initial discussion, not an
> afterthought.

Absolutely!
No change should be done leaving questions open, and I don't presume I
suggested a solution; I was just trying to start a conversation on
using "such a pattern".
But also, I believe we already had such conversations in past meetings,
so my words were terse and short because I just wanted to remind everyone
about those.

> E.g. with your proposal, any updates in the replication queue on the primary
> owner will be lost when that primary owner dies, even though we told the
> user that we successfully updated the key. To quote from my first email on
> this thread: "OTOH, if the primary owner dies, we have to ask a backup, and
> we can lose the modifications not yet replicated by the primary."
>
> With Sanne's proposal, we wouldn't report to the user that we stored the
> value until all the backups confirmed the update, so we wouldn't have that
> problem. But I don't see how we could keep the sequence of versions
> monotonous when the primary owner of the key changes without some extra sync
> RPCs (also done while holding the key lock). IIRC TOA also needs some sync
> RPCs to generate its sequence numbers.

I don't know how NBST v.21 is working today, but I trust it doesn't
lose writes, and we should break the problem down into smaller
problems; in this case I hope to build on the solid foundations of
NBST.

When the key is re-possessed by a new node (and this starts to
generate "reference" write commands), you could restart the sequences:
you don't need a universal monotonic number, all that backup owners
need is an ordering rule and to understand that the commands coming
from the new owner are more recent than the old owner's. AFAIK you
already have the notion of a view generation id?
Essentially we'd need to store together with the entry not only its
sequence but also the viewid. It's a very simplified (compact) vector
clock, because in practice from this viewId we can extrapolate
addresses and owners.. but it is simpler than the full pattern, as you
only need the last one, as the longer tail of events is handled by
NBST I think?

One catch is I think you need tombstones, but those are already needed
for so many things that we can't avoid them :)

Cheers,
Sanne

>
>> > Besides, I'm still not sure I understood your proposals properly, e.g.
>> > whether they are meant only for non-tx caches or you want to change
>> > something for tx caches as well...
>>
>> I think this can be used for both cases; however, I think either Sanne's
>> solution of using seqnos *per key* and updating in the order of seqnos
>> or using Pedro's total order impl are probably better solutions.
>>
>> I'm not pretending these solutions are final (e.g. Sanne's solution
>> needs more thought when multiple keys are involved), but we should at
>> least acknowledge the issue exists, create a JIRA to prioritize it and
>> then start discussing solutions.
>
> We've been discussing solutions without a JIRA just fine :)
>
> My feeling so far is that the thread exhaustion problem would be better
> served by porting TO to non-tx caches and/or changing non-tx locking to not
> require a thread. I have created an issue for TO [1], but IMO the locking
> rework [2] should be higher priority, as it can help both tx and non-tx
> caches.
>
> [1] https://issues.jboss.org/browse/ISPN-4610
> [2] https://issues.jboss.org/browse/ISPN-2849
>
>> > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban wrote:
>> >
>> > Seems like this discussion has died with the general agreement that
>> > this
>> > is broken and with a few proposals on how to fix it, but without any
>> > follow-up action items.
>> >
>> > I think we (= someone from the ISPN team) need to create a JIRA,
>> > preferably blocking.
>> >
>> > WDYT ?
>> >
>> > If not, here's what our options are:
>> >
>> > #1 I'll create a JIRA
>> >
>> > #2 We'll hold the team meeting in Krasnojarsk, Russia
>> >
>> > #3 There will be only vodka, no beers in #2
>> >
>> > #4 Bela will join the ISPN team
>> >
>> > Thoughts ?
>>
>> --
>> Bela Ban, JGroups lead (http://www.jgroups.org)
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
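[To make the per-key sequencing idea above concrete, here is a minimal
sketch of what a backup owner could do, assuming the primary stamps every
write with a (viewId, seqNo) pair as Sanne describes. All class and method
names are invented for illustration -- this is not Infinispan code, and it
deliberately ignores the hard parts raised in the thread (primary fail-over,
state transfer, tombstones):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerKeyBackupState {

    static final class Stamp implements Comparable<Stamp> {
        final long viewId; // topology generation; seqNo restarts when ownership changes
        final long seqNo;  // monotonic per key, assigned by the primary owner
        Stamp(long viewId, long seqNo) { this.viewId = viewId; this.seqNo = seqNo; }
        public int compareTo(Stamp o) {
            int c = Long.compare(viewId, o.viewId);
            return c != 0 ? c : Long.compare(seqNo, o.seqNo);
        }
    }

    static final class Entry {
        Stamp stamp = new Stamp(0, 0);
        Object value;
    }

    private final Map<Object, Entry> store = new ConcurrentHashMap<>();

    // Invoked for every replicated write received from the primary owner.
    public void onBackupWrite(Object key, Stamp stamp, Object value) {
        Entry e = store.computeIfAbsent(key, k -> new Entry());
        synchronized (e) {
            if (stamp.compareTo(e.stamp) > 0) { // strictly newer stamp wins
                e.stamp = stamp;
                e.value = value;
            }
            // Older stamp: a late or reordered replication; just drop it,
            // the newer value has already won. No lock is held across RPCs.
        }
    }
}

The point of the ordering rule is that a backup never has to block: a stale
replication is simply discarded, which is what allows the primary to release
the key lock before all backups have acknowledged.]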
From an1310 at hotmail.com  Wed Aug  6 16:32:21 2014
From: an1310 at hotmail.com (Erik Salter)
Date: Wed, 6 Aug 2014 16:32:21 -0400
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: 
References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com>
Message-ID: 

Hi Sanne,

So I guess I'm one of the five. This particular issue has been happening
over and over again in my production environment. It is a performance and
availability killer, since any sort of network blip will result in these
lock scenarios. And network blips in my data center environment are very
real and surprisingly common. In the past 5-6 weeks, we've had 3
redundant routers fail. The best thing that could happen is lock
contention.

Almost any sort of MERGE event will result in "stale" locks -- locks that
are never released by the system. Now consider that I can't throw more
threads at the problem. I have 1100 OOB + ISPN threads available per
cluster.

So what I've hacked in production is something that basically examines the
LockManager (and TxTable) and manages locks that are > 5x lock acquisition
time old. If I don't, then I start getting user threads and OOB threads
backed up, and it's surprising how quickly these pools can exhaust.
Nowhere near a scalable and elastic solution, but desperate times and all
that.

I can't agree more that any solution must take NBST into consideration,
especially WRT pessimistic locks.

Regards,

Erik

On 8/6/14, 2:50 PM, "Sanne Grinovero" wrote:

>On 6 August 2014 18:49, Dan Berindei wrote:
>>
>> On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban wrote:
>>>
>>> Hey Dan,
>>>
>>> On 06/08/14 16:13, Dan Berindei wrote:
>>> > I could create the issue in JIRA, but I wouldn't make it high
>>>priority
>>> > because I think it has lots of corner cases with NBST and will cause
>>> > headaches for the maintainers of state transfer ;)
>>>
>>> I do believe the put-while-holding-the-lock issue *is* a critical
>>>issue;
>>> anyone banging a cluster of Infinispan nodes with more than 1 thread
>>> will run into lock timeouts, with or without transactions. The only
>>> workaround for now is to use total order, but at the cost of reduced
>>> performance.
However, once a system starts hitting the lock timeout >>> issues, performance drops to a crawl, way slower than TO, and work >>> starts to pile up, which compounds the problem. >> >> >> I wouldn't call it critical because you can always increase the number >>of >> threads. It won't be pretty, but it will work around the thread >>exhaustion >> issue. > >If Infinispan doesn't do it automatically, I wouldn't count that as a >solution. >Consider the project goal is to make it easy to scale up/down >dynamically.. if it requires experts to be alert all the time to for >such manual interventions it's a failure. >Besides, I can count the names of people able to figure such trouble >out on a single hand.. so I agree this is critical. > > >>> I believe doing a sync RPC while holding the lock on a key is asking >>>for >>> trouble and is (IMO) an anti-pattern. >> >> >> We also hold a lock on a key between the LockControlCommand and the >> TxCompletionNotificationCommand in pessimistic-locking caches, and >>there's >> at least one sync PrepareCommand RPC between them... >> >> So I don't see it as an anti-pattern, the only problem is that we >>should be >> able to do that without blocking internal threads in addition to the >>user >> thread (which is how tx caches do it). >> >>> >>> Sorry if this has a negative impact on NBST, but should we not fix this >>> because we don't want to risk a change to NBST ? >> >> >> I'm not saying it will have a negative impact on NBST, I'm just saying I >> don't want to start implementing an incomplete proposal for the basic >>flow >> and leave the state transfer/topology change issues for "later". When >> happens when a node leaves, when a backup owner is added, or when the >> primary owner changes should be part of the initial discussion, not an >> afterthought. > >Absolutely! >No change should be done leaving questions open, and I don't presume I >suggested a solution I was just trying to start a conversation on >using "such a pattern". >But also I believe we already had such conversations in past meetings, >so my words were terse and short because I just wanted to remind about >those. > > >> E.g. with your proposal, any updates in the replication queue on the >>primary >> owner will be lost when that primary owner dies, even though we told the >> user that we successfully updated the key. To quote from my first email >>on >> this thread: "OTOH, if the primary owner dies, we have to ask a backup, >>and >> we can lose the modifications not yet replicated by the primary." >> >> With Sanne's proposal, we wouldn't report to the user that we stored the >> value until all the backups confirmed the update, so we wouldn't have >>that >> problem. But I don't see how we could keep the sequence of versions >> monotonous when the primary owner of the key changes without some extra >>sync >> RPCs (also done while holding the key lock). IIRC TOA also needs some >>sync >> RPCs to generate its sequence numbers. > >I don't know how NBST v.21 is working today, but I trust it doesn't >lose writes and that we should break down the problems in smaller >problems, in this case I hope to build on the solid foundations of >NBST. > >When the key is re-possessed by a new node (and this starts to >generate "reference" write commands), you could restart the sequences: >you don't need an universal monotonic number, all what backup owners >need is an ordering rule and understand that the commands coming from >the new owner are more recent than the old owner. 
AFAIK you already >have the notion of view generation id? >Essentially we'd need to store together with the entry not only its >sequence but also the viewid. It's a very simplified (compact) vector >clock, because in practice from this viewId we can extrapolate >addresses and owners.. but is simpler than the full pattern, as you >only need the last one, as the longer tail of events is handled by >NBST I think? > >One catch is I think you need tombstones, but those are already needed >for so many things that we can't avoid them :) > >Cheers, >Sanne > > >> >>> >>> > Besides, I'm still not sure I understood your proposals properly, >>>e.g. >>> > whether they are meant only for non-tx caches or you want to change >>> > something for tx caches as well... >>> >>> I think this can be used for both cases; however, I think either >>>Sanne's >>> solution of using seqnos *per key* and updating in the order of seqnos >>> or using Pedro's total order impl are probably better solutions. >>> >>> I'm not pretending these solutions are final (e.g. Sanne's solution >>> needs more thought when multiple keys are involved), but we should at >>> least acknowledge the issue exists, create a JIRA to prioritize it and >>> then start discussing solutions. >>> >> >> We've been discussing solutions without a JIRA just fine :) >> >> My feeling so far is that the thread exhaustion problem would be better >> served by porting TO to non-tx caches and/or changing non-tx locking to >>not >> require a thread. I have created an issue for TO [1], but IMO the >>locking >> rework [2] should be higher priority, as it can help both tx and non-tx >> caches. >> >> [1] https://issues.jboss.org/browse/ISPN-4610 >> [2] https://issues.jboss.org/browse/ISPN-2849 >> >>> >>> > >>> > >>> > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban >> > > wrote: >>> > >>> > Seems like this discussion has died with the general agreement >>>that >>> > this >>> > is broken and with a few proposals on how to fix it, but without >>>any >>> > follow-up action items. >>> > >>> > I think we (= someone from the ISPN team) need to create a JIRA, >>> > preferably blocking. >>> > >>> > WDYT ? >>> > >>> > If not, here's what our options are: >>> > >>> > #1 I'll create a JIRA >>> > >>> > #2 We'll hold the team meeting in Krasnojarsk, Russia >>> > >>> > #3 There will be only vodka, no beers in #2 >>> > >>> > #4 Bela will join the ISPN team >>> > >>> > Thoughts ? >>> >>> >>> -- >>> Bela Ban, JGroups lead (http://www.jgroups.org) >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >_______________________________________________ >infinispan-dev mailing list >infinispan-dev at lists.jboss.org >https://lists.jboss.org/mailman/listinfo/infinispan-dev From a.a.olenev at gmail.com Thu Aug 7 04:11:46 2014 From: a.a.olenev at gmail.com (=?UTF-8?B?0JAg0J7Qu9C10L3QtdCy?=) Date: Thu, 7 Aug 2014 12:11:46 +0400 Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution Message-ID: Guys, Am I right that you trying to break CAP theorem? Seems like that because you want to have consistent backups (end of cache.put() means that primary node and all backups have the same value) and non-blocking behaviour (from user's POV). Sorry if I'm wrong. 
Cheers
Alexey
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140807/491dfc7e/attachment.html
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140807/c9a187d6/attachment.html From sanne at hibernate.org Thu Aug 7 13:56:56 2014 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 7 Aug 2014 18:56:56 +0100 Subject: [infinispan-dev] [Search] @Transformable vs @ProvidedId Message-ID: There are two annotations clashing for same responsibilities: - org.infinispan.query.Transformable - org.hibernate.search.annotations.ProvidedId as documented at the following link, these two different ways to apply "Id indexing options" in Infinispan Query, IMHO quite unclear when a user should use one vs. the other. - http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_requirements_for_the_key_transformable_and_providedid The benefit of @Transformable is that Infinispan provides one out of the box which will work for any user case: it will serialize the whole object representing the id, then hex-encode the buffer into a String: horribly inefficient but works on any serializable type. @ProvidedId originally marked the indexed entry in such a way that the indexing engine would consider the id "provided externally", i.e. given at runtime. It would also assume that its type would be static for a specific type - which is I think a reasonable expectation but doesn't really hold as an absolute truth in the case of Infinispan: nothing prevents me to store an indexed entry of type "Person" for index "personindex" with an Integer typed key in the cache, and also duplicate the same information under a say String typed key. So there's an expectation mismatch: in ORM world the key type is strongly related to the value type, but when indexing Infinispan entries the reality is that we're indexing two independent "modules". I was hoping to drop @ProvidedId today as the original "marker" functionality is no longer needed: since we have org.hibernate.search.cfg.spi.SearchConfiguration.isIdProvidedImplicit() the option can be implicitly applied to all indexed entries, and the annotation is mostly redundant in Infinispan since we added this. But actually it turns out it's a bit more complex as it servers a second function as well: it's the only way for users to be able to specify a FieldBridge for the ID.. so the functionality of this annotation is not consumed yet. So my proposal is to get rid of both @Transformable and @ProvidedId. There needs to be a single way in Infinispan to define both the indexing options and transformation; ideally this should be left to the Search Engine and its provided collection of FieldBridge implementations. Since the id type and the value type in Infinispan are not necessarily strongly related (still the id is unique of course), I think this option doesn't even belong on the @Indexed value but should be specified on the key type. Problem is that to define a class-level annotation to be used on the Infinispan keys doesn't really belong in the collection of annotations of Hibernate Search; I'm tempted to require that the key used for the type must be one of those for which an out-of-the-box FieldBridge is provided: the good thing is that now the set is extensible. In a second phase Infinispan could opt to create a custom annotation like @Transformable to register these options in a simplified way. Even more, I've witnessed cases in which in Infinispan it makes sense to encode some more information in the key than what's strictly necessary to identify the key (like having attributes which are not included in the hashcode and equals definitions). 
It sounds like the user should be allowed to annotate the Key types, to
allow such additional properties to contribute to the index definition.

Comments welcome, but I feel strongly that these two annotations need
to be removed to make room for better solutions: we have an opportunity
now as I'm rewriting the mapping engine.

Sanne
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140807/fb3a0b0b/attachment.html

From sanne at hibernate.org  Thu Aug  7 17:42:58 2014
From: sanne at hibernate.org (Sanne Grinovero)
Date: Thu, 7 Aug 2014 22:42:58 +0100
Subject: [infinispan-dev] [hibernate-dev] [Search] @Transformable vs @ProvidedId
In-Reply-To: <98A842C3-7F45-4F20-8542-40A592F56C76@hibernate.org>
References: <98A842C3-7F45-4F20-8542-40A592F56C76@hibernate.org>
Message-ID: 

On 7 August 2014 22:37, Hardy Ferentschik wrote:
>
> On 7 Aug 2014, at 19:56, Sanne Grinovero wrote:
>
> > I was hoping to drop @ProvidedId today as the original "marker"
> > functionality is no longer needed: since we have
> >
> > org.hibernate.search.cfg.spi.SearchConfiguration.isIdProvidedImplicit()
> >
> > the option can be implicitly applied to all indexed entries, and the
> > annotation is mostly redundant in Infinispan since we added this.
> >
> > But actually it turns out it's a bit more complex, as it serves a second
> > function as well: it's the only way for users to be able to specify a
> > FieldBridge for the ID.. so the functionality of this annotation is not
> > covered elsewhere yet.
>
> Wouldn't an additional explicit @FieldBridge annotation work as well?

Yes! But we'd need to apply it to the key type.
This implies changing it to allow @Target(TYPE...), which doesn't make
much sense for our ORM users, but also the name "FieldBridge" is rather
odd to be applied to a type and not a field.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140807/b4dfb5fc/attachment.html
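[For readers following along: the @Transformable/Transformer pair under
discussion looks roughly like this in the Infinispan Query API of this era
(check the 7.0 documentation linked earlier in the thread for the exact
contract; the key class and field names below are made up for
illustration). A custom Transformer avoids the default
serialize-then-hex-encode behaviour Sanne criticizes by producing a
compact, readable string form of the key:

import org.infinispan.query.Transformable;
import org.infinispan.query.Transformer;

// A cache key type with a custom transformer registered via the annotation.
@Transformable(transformer = TaxCodeKey.TaxCodeTransformer.class)
public final class TaxCodeKey {

    private final String taxCode;

    public TaxCodeKey(String taxCode) { this.taxCode = taxCode; }

    @Override public boolean equals(Object o) {
        return o instanceof TaxCodeKey && taxCode.equals(((TaxCodeKey) o).taxCode);
    }
    @Override public int hashCode() { return taxCode.hashCode(); }

    // Round-trips the key to and from the String form stored in the index.
    public static final class TaxCodeTransformer implements Transformer {
        @Override public Object fromString(String s) { return new TaxCodeKey(s); }
        @Override public String toString(Object key) { return ((TaxCodeKey) key).taxCode; }
    }
}

The index only ever stores the transformed String, and fromString() must
reconstruct an equal key -- which is exactly the contract the thread argues
should be owned by the Search engine's extensible FieldBridge set instead.]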
From sanne at hibernate.org  Thu Aug  7 18:40:24 2014
From: sanne at hibernate.org (Sanne Grinovero)
Date: Thu, 7 Aug 2014 23:40:24 +0100
Subject: [infinispan-dev] [hibernate-dev] [Search] @Transformable vs @ProvidedId
In-Reply-To: 
References: <98A842C3-7F45-4F20-8542-40A592F56C76@hibernate.org>
Message-ID: 

There is an additional complex choice to make.
Considering that Infinispan has this separate notion of Key vs Value,
and both have to contribute to building the final indexed Document, why
is it that we allow the decision of which index is being targeted to be
made by *the type of the value*?

I think the index definition belongs as a responsibility to the *type
of the identifier*; the value should at most help to identify a shard
among the ones identified by its key.

!!! -> We might want to consider imposing a hard limitation of not
allowing a single index to be shared across multiple key types. This
implies the @Indexed annotation and its other key options should be
defined on the keys, not the values. If we did that, it wouldn't matter
if the index is defined on the key or on the value, as there would be a
1:1 possible combination.
Does anyone see this as a strong limitation or usability concern?
This would also resolve a couple of performance problems.

Beyond this, considering it's valid (and sometimes useful) to store

PersonFile p = ...
cache.put( p.taxcode, p );
cache.put( p.uniquename, p );

As a user I think I might even want to define an alternative index
mapping for PersonFile, depending on whether it's being stored by
uniquename or by taxcode. That's totally doable with the Search engine,
but how do you envision the user defining this mapping? He can't use
annotations on PersonFile, so the user needs to be able to register
some form of programmatic mapping linked to the different key types.

There is an additional flaw, which is that I'm implying that taxcode
and uniquename are of a different type: otherwise we couldn't
distinguish the two different meanings of the two put operations. This
is generally a fair assumption, as you wouldn't want to have key
collisions if you're storing in such a fashion, but there might be a
known business rule for which such a collision is impossible (i.e. the
two codes having a different format). So while you probably shouldn't
do this in a strong domain, it's a legal usage of the Cache API.

Considering these pitfalls, I think I have successfully convinced
myself that we should not allow a different mapping for the same type
of key. The question remains whether it's more correct to bind the
index identification (the name) to the key type.

@Hardy yes, I will need the Infinispan team's thoughts too, but don't
feel excluded, there aren't many smart engineers around knowing about
the Infinispan/Query usage :)

Cheers,
Sanne

On 7 August 2014 22:50, Hardy Ferentschik wrote:
>
> On 7 Aug 2014, at 23:42, Sanne Grinovero wrote:
>
> > On 7 August 2014 22:37, Hardy Ferentschik wrote:
> >
> > On 7 Aug 2014, at 19:56, Sanne Grinovero wrote:
> >
> > > I was hoping to drop @ProvidedId today as the original "marker"
> > > functionality is no longer needed: since we have
> > >
> > > org.hibernate.search.cfg.spi.SearchConfiguration.isIdProvidedImplicit()
> > >
> > > the option can be implicitly applied to all indexed entries, and the
> > > annotation is mostly redundant in Infinispan since we added this.
> > >
> > > But actually it turns out it's a bit more complex, as it serves a second
> > > function as well: it's the only way for users to be able to specify a
> > > FieldBridge for the ID.. so the functionality of this annotation is not
> > > covered elsewhere yet.
> >
> > Wouldn't an additional explicit @FieldBridge annotation work as well?
> >
> > Yes! But we'd need to apply it to the key type.
> > This implies changing it to allow @Target(TYPE...), which doesn't make
> > much sense for our ORM users, but also the name "FieldBridge" is rather
> > odd to be applied to a type and not a field.
>
> Fair enough. I also know too little about the Infinispan usage of Search
> in this case.
> Either way, @ProvidedId should go, at least from a pure Search point of
> view.
>
> -- Hardy

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140807/13727f72/attachment-0001.html

From bban at redhat.com  Mon Aug 11 03:42:11 2014
From: bban at redhat.com (Bela Ban)
Date: Mon, 11 Aug 2014 09:42:11 +0200
Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution
In-Reply-To: 
References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com>
Message-ID: <53E873D3.6000801@redhat.com>

On 06/08/14 22:32, Erik Salter wrote:
> Hi Sanne,
>
> So I guess I'm one of the five. This particular issue has been happening
It is a performance and > availability killer, since any sort of network blip will result in these > lock scenarios. And network blips in my data center environment are very > real and surprisingly common. In the past 5-6 weeks, we've had 3 > redundant routers fail. The best thing that could happen is lock > contention. > > Almost any sort of MERGE event will result in "stale" locks -- locks that > are never released by the system. This is seperate from the put-while-holding-the-lock issue. Have you created a JIRA to keep track of this ? > Now consider that I can't throw more > threads at the problem. I have 1100 OOB + ISPN threads available per > cluster. Understood. > So what I've hacked in production is something that basically examines the > LockManager (and TxTable) and manages locks that are > 5x lock acquisition > time old. If I don't, then I start getting user threads and OOB threads > backed up, and it's surprising how quickly these pools can exhaust. > Nowhere near a scalable and elastic solution, but desperate times and all > that. > > > I can't agree more that any solution must take NBST into consideration, > especially WRT pessimistic locks. > > Regards, > > Erik > > > On 8/6/14, 2:50 PM, "Sanne Grinovero" wrote: > >> On 6 August 2014 18:49, Dan Berindei wrote: >>> >>> >>> >>> On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban wrote: >>>> >>>> Hey Dan, >>>> >>>> On 06/08/14 16:13, Dan Berindei wrote: >>>>> I could create the issue in JIRA, but I wouldn't make it high >>>> priority >>>>> because I think it have lots of corner cases with NBST and cause >>>>> headaches for the maintainers of state transfer ;) >>>> >>>> I do believe the put-while-holding-the-lock issue *is* a critical >>>> issue; >>>> anyone banging a cluster of Infinispan nodes with more than 1 thread >>>> will run into lock timeouts, with or without transactions. The only >>>> workaround for now is to use total order, but at the cost of reduced >>>> performance. However, once a system starts hitting the lock timeout >>>> issues, performance drops to a crawl, way slower than TO, and work >>>> starts to pile up, which compounds the problem. >>> >>> >>> I wouldn't call it critical because you can always increase the number >>> of >>> threads. It won't be pretty, but it will work around the thread >>> exhaustion >>> issue. >> >> If Infinispan doesn't do it automatically, I wouldn't count that as a >> solution. >> Consider the project goal is to make it easy to scale up/down >> dynamically.. if it requires experts to be alert all the time to for >> such manual interventions it's a failure. >> Besides, I can count the names of people able to figure such trouble >> out on a single hand.. so I agree this is critical. >> >> >>>> I believe doing a sync RPC while holding the lock on a key is asking >>>> for >>>> trouble and is (IMO) an anti-pattern. >>> >>> >>> We also hold a lock on a key between the LockControlCommand and the >>> TxCompletionNotificationCommand in pessimistic-locking caches, and >>> there's >>> at least one sync PrepareCommand RPC between them... >>> >>> So I don't see it as an anti-pattern, the only problem is that we >>> should be >>> able to do that without blocking internal threads in addition to the >>> user >>> thread (which is how tx caches do it). >>> >>>> >>>> Sorry if this has a negative impact on NBST, but should we not fix this >>>> because we don't want to risk a change to NBST ? 
>>> >>> >>> I'm not saying it will have a negative impact on NBST, I'm just saying I >>> don't want to start implementing an incomplete proposal for the basic >>> flow >>> and leave the state transfer/topology change issues for "later". When >>> happens when a node leaves, when a backup owner is added, or when the >>> primary owner changes should be part of the initial discussion, not an >>> afterthought. >> >> Absolutely! >> No change should be done leaving questions open, and I don't presume I >> suggested a solution I was just trying to start a conversation on >> using "such a pattern". >> But also I believe we already had such conversations in past meetings, >> so my words were terse and short because I just wanted to remind about >> those. >> >> >>> E.g. with your proposal, any updates in the replication queue on the >>> primary >>> owner will be lost when that primary owner dies, even though we told the >>> user that we successfully updated the key. To quote from my first email >>> on >>> this thread: "OTOH, if the primary owner dies, we have to ask a backup, >>> and >>> we can lose the modifications not yet replicated by the primary." >>> >>> With Sanne's proposal, we wouldn't report to the user that we stored the >>> value until all the backups confirmed the update, so we wouldn't have >>> that >>> problem. But I don't see how we could keep the sequence of versions >>> monotonous when the primary owner of the key changes without some extra >>> sync >>> RPCs (also done while holding the key lock). IIRC TOA also needs some >>> sync >>> RPCs to generate its sequence numbers. >> >> I don't know how NBST v.21 is working today, but I trust it doesn't >> lose writes and that we should break down the problems in smaller >> problems, in this case I hope to build on the solid foundations of >> NBST. >> >> When the key is re-possessed by a new node (and this starts to >> generate "reference" write commands), you could restart the sequences: >> you don't need an universal monotonic number, all what backup owners >> need is an ordering rule and understand that the commands coming from >> the new owner are more recent than the old owner. AFAIK you already >> have the notion of view generation id? >> Essentially we'd need to store together with the entry not only its >> sequence but also the viewid. It's a very simplified (compact) vector >> clock, because in practice from this viewId we can extrapolate >> addresses and owners.. but is simpler than the full pattern, as you >> only need the last one, as the longer tail of events is handled by >> NBST I think? >> >> One catch is I think you need tombstones, but those are already needed >> for so many things that we can't avoid them :) >> >> Cheers, >> Sanne >> >> >>> >>>> >>>>> Besides, I'm still not sure I understood your proposals properly, >>>> e.g. >>>>> whether they are meant only for non-tx caches or you want to change >>>>> something for tx caches as well... >>>> >>>> I think this can be used for both cases; however, I think either >>>> Sanne's >>>> solution of using seqnos *per key* and updating in the order of seqnos >>>> or using Pedro's total order impl are probably better solutions. >>>> >>>> I'm not pretending these solutions are final (e.g. Sanne's solution >>>> needs more thought when multiple keys are involved), but we should at >>>> least acknowledge the issue exists, create a JIRA to prioritize it and >>>> then start discussing solutions. 
>>>> >>> >>> We've been discussing solutions without a JIRA just fine :) >>> >>> My feeling so far is that the thread exhaustion problem would be better >>> served by porting TO to non-tx caches and/or changing non-tx locking to >>> not >>> require a thread. I have created an issue for TO [1], but IMO the >>> locking >>> rework [2] should be higher priority, as it can help both tx and non-tx >>> caches. >>> >>> [1] https://issues.jboss.org/browse/ISPN-4610 >>> [2] https://issues.jboss.org/browse/ISPN-2849 >>> >>>> >>>>> >>>>> >>>>> On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban >>>> > wrote: >>>>> >>>>> Seems like this discussion has died with the general agreement >>>> that >>>>> this >>>>> is broken and with a few proposals on how to fix it, but without >>>> any >>>>> follow-up action items. >>>>> >>>>> I think we (= someone from the ISPN team) need to create a JIRA, >>>>> preferably blocking. >>>>> >>>>> WDYT ? >>>>> >>>>> If not, here's what our options are: >>>>> >>>>> #1 I'll create a JIRA >>>>> >>>>> #2 We'll hold the team meeting in Krasnojarsk, Russia >>>>> >>>>> #3 There will be only vodka, no beers in #2 >>>>> >>>>> #4 Bela will join the ISPN team >>>>> >>>>> Thoughts ? >>>> >>>> >>>> -- >>>> Bela Ban, JGroups lead (http://www.jgroups.org) >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From mohan.dhawan at gmail.com Mon Aug 11 05:17:37 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Mon, 11 Aug 2014 14:47:37 +0530 Subject: [infinispan-dev] can we get replica IP from cache events ? Message-ID: <53E88A31.3010403@gmail.com> Hi All, Is it possible to find out which replica performed the cache operation that generated the cache event ? In other words, can we get the IP of the replica that modified the cache entry ? If yes, then how. Any help is appreciated. Regards, mohan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: OpenPGP digital signature Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140811/bdf8dde4/attachment.bin From rvansa at redhat.com Mon Aug 11 05:32:17 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 11 Aug 2014 11:32:17 +0200 Subject: [infinispan-dev] can we get replica IP from cache events ? In-Reply-To: <53E88A31.3010403@gmail.com> References: <53E88A31.3010403@gmail.com> Message-ID: <53E88DA1.3020701@redhat.com> Hi Mohan, this mailing list is for developer discussions, for support please use forum [1]. As for your question: the events don't contain this information, you have to write your own Interceptor [2] (might be mildly outdated for Infinispan 7), and retrieve the origin using context.getOrigin(). 
Radim [1] https://community.jboss.org/en/infinispan [2] http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_custom_interceptors_chapter On 08/11/2014 11:17 AM, Mohan Dhawan wrote: > Hi All, > > Is it possible to find out which replica performed the cache operation > that generated the cache event ? In other words, can we get the IP of > the replica that modified the cache entry ? If yes, then how. > > Any help is appreciated. > > Regards, > mohan > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140811/a219b039/attachment.html From mohan.dhawan at gmail.com Mon Aug 11 05:37:28 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Mon, 11 Aug 2014 15:07:28 +0530 Subject: [infinispan-dev] can we get replica IP from cache events ? In-Reply-To: <53E88DA1.3020701@redhat.com> References: <53E88A31.3010403@gmail.com> <53E88DA1.3020701@redhat.com> Message-ID: <53E88ED8.40904@gmail.com> Hi Radim, Apologies for posting on the wrong list. Also, thanks for the pointers. Regards, mohan On Monday 11 August 2014 03:02 PM, Radim Vansa wrote: > Hi Mohan, > > this mailing list is for developer discussions, for support please use > forum [1]. > As for your question: the events don't contain this information, you > have to write your own Interceptor [2] (might be mildly outdated for > Infinispan 7), and retrieve the origin using context.getOrigin(). > > Radim > > [1] https://community.jboss.org/en/infinispan > [2] > http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_custom_interceptors_chapter > > On 08/11/2014 11:17 AM, Mohan Dhawan wrote: >> Hi All, >> >> Is it possible to find out which replica performed the cache operation >> that generated the cache event ? In other words, can we get the IP of >> the replica that modified the cache entry ? If yes, then how. >> >> Any help is appreciated. >> >> Regards, >> mohan >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140811/96860874/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: OpenPGP digital signature Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140811/96860874/attachment.bin From ttarrant at redhat.com Mon Aug 11 10:27:04 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 11 Aug 2014 16:27:04 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-08-11 Message-ID: <53E8D2B8.4060101@redhat.com> Get the minutes from here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-11-14.03.html From mudokonman at gmail.com Mon Aug 11 14:07:57 2014 From: mudokonman at gmail.com (William Burns) Date: Mon, 11 Aug 2014 14:07:57 -0400 Subject: [infinispan-dev] Infinispan 7.0.0.Beta1 is available! Message-ID: Dear Infinispan community, We are proud to announce the first beta release for Infinispan 7.0.0. More info at http://blog.infinispan.org/2014/08/infinispan-700beta1-is-out.html Thanks to everyone for their involvement and contributions! - Will From jmarkos at redhat.com Tue Aug 12 04:15:06 2014 From: jmarkos at redhat.com (Jakub Markos) Date: Tue, 12 Aug 2014 04:15:06 -0400 (EDT) Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> Message-ID: <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> Hi, I looked at it and I don't think using InfinispanServerKillProcessor would be any better, since it still just calls 'kill -9'. The only difference is that it doesn't kill all java processes starting from jboss-modules.jar, but just the one configured for the test. Is it maybe possible that the kill happened, but the port was still hanging? Jakub ----- Original Message ----- > From: "Galder Zamarre?o" > To: "Jakub Markos" , "Martin Gencur" > Cc: "infinispan -Dev List" > Sent: Monday, August 4, 2014 12:35:50 PM > Subject: Ant based kill not fully working? Re: ISPN-4567 > > Hi, > > Dan has reported [1]. It appears as if the last server started in > infinispan-as-module-client-integrationtests did not really get killed. From > what I see, this kill was done via the specific Ant target present in that > Maven module. > > I also remembered recently [2] was added. Maybe we need to get > as-modules/client to be configured with it so that it properly kills > servers? > > What I?m not sure is where we?d put it so that it can be consumed both by > server/integration/testsuite and as-modules/client? The problem is that the > class, as is, brings in arquillian dependency. If we can separate the > arquillian stuff from the actual code, the class itself could maybe go in > commons test source directory? > > @Tristan, thoughts? > > @Jakub, can I assign this to you? > > [1] https://issues.jboss.org/browse/ISPN-4567 > [2] > https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > From ttarrant at redhat.com Tue Aug 12 04:35:34 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 12 Aug 2014 10:35:34 +0200 Subject: [infinispan-dev] Log message categories Message-ID: <53E9D1D6.30808@redhat.com> Dear all, currently the Infinispan log messages "fall" in the categories named from the originating class. 
While this is fine for TRACE/DEBUG messages, there are some high-level INFO events which warrant their own specific categories. I think that user-triggered events (such as JMX ops) should also be treated like this. Examples: org.infinispan.CLUSTER (for important view change, state transfer and rebalancing messages) org.infinispan.CACHE (for cache lifecycle events) org.infinispan.PERSISTENCE What do you think ? Any other suggestions ? Tristan From rory.odonnell at oracle.com Tue Aug 12 06:56:04 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Tue, 12 Aug 2014 11:56:04 +0100 Subject: [infinispan-dev] Early Access build for JDK 9 b26 is available on java.net Message-ID: <53E9F2C4.3010709@oracle.com> Hi Galder, Early Access build for JDK 9 b26 is available on java.net. Summary of changes in JDK 9 Build 26 Early Access Build Test Results Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/eb029bd4/attachment.html From afield at redhat.com Tue Aug 12 08:20:00 2014 From: afield at redhat.com (Alan Field) Date: Tue, 12 Aug 2014 08:20:00 -0400 (EDT) Subject: [infinispan-dev] Log message categories In-Reply-To: <53E9D1D6.30808@redhat.com> References: <53E9D1D6.30808@redhat.com> Message-ID: <419255335.22368717.1407846000718.JavaMail.zimbra@redhat.com> I would also propose these categories for log messages: org.infinispan.QUERY org.infinispan.MAPREDUCE org.infinispan.DISTEXEC Thanks, Alan ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" > Sent: Tuesday, August 12, 2014 10:35:34 AM > Subject: [infinispan-dev] Log message categories > > Dear all, > > currently the Infinispan log messages "fall" in the categories named > from the originating class. While this isfine for TRACE/DEBUG messages, > there are some high-level INFO events which warrant their own specific > categories. I think that user-triggered events (such as JMX ops) should > also be treated like this. > Examples: > > org.infinispan.CLUSTER (for important view change, state transfer and > rebalancing messages) > org.infinispan.CACHE (for cache lifecycle events) > org.infinispan.PERSISTENCE > > What do you think ? > Any other suggestions ? > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Tue Aug 12 08:37:45 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 12 Aug 2014 15:37:45 +0300 Subject: [infinispan-dev] Log message categories In-Reply-To: <419255335.22368717.1407846000718.JavaMail.zimbra@redhat.com> References: <53E9D1D6.30808@redhat.com> <419255335.22368717.1407846000718.JavaMail.zimbra@redhat.com> Message-ID: I don't think we have a lot of INFO map/reduce or dist exec, but perhaps some DEBUG messages' level could be increased. Actually, that is also true for state transfer and rebalancing messages... I agree in principle with the idea, my only concern is that we'll spend too much time managing which messages should be moved to these special categories (and sometimes moved from DEBUG to INFO).
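FWIW the mechanical part is cheap: with JBoss Logging you can bind a message logger to a fixed category name instead of the class name, something like this (sketch only, the class name is invented):

import org.infinispan.util.logging.Log;
import org.jboss.logging.Logger;

public final class ClusterLogSketch {
   // A category-scoped logger instead of the usual per-class one;
   // getMessageLogger(Class, String) is plain JBoss Logging API.
   private static final Log CLUSTER = Logger.getMessageLogger(Log.class, "org.infinispan.CLUSTER");

   void example() {
      CLUSTER.debugf("Rebalance started for cache %s", "myCache");
   }
}

The real cost is exactly the part I mention above: deciding, message by message, which category and level each one belongs to.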
Cheers Dan On Tue, Aug 12, 2014 at 3:20 PM, Alan Field wrote: > I would also propose these categories for log messages: > > org.infinispan.QUERY > org.infinispan.MAPREDUCE > org.infinispan.DISTEXEC > > Thanks, > Alan > > ----- Original Message ----- > > From: "Tristan Tarrant" > > To: "infinispan -Dev List" > > Sent: Tuesday, August 12, 2014 10:35:34 AM > > Subject: [infinispan-dev] Log message categories > > > > Dear all, > > > > currently the Infinispan log messages "fall" in the categories named > > from the originating class. While this isfine for TRACE/DEBUG messages, > > there are some high-level INFO events which warrant their own specific > > categories. I think that user-triggered events (such as JMX ops) should > > also be treated like this. > > Examples: > > > > org.infinispan.CLUSTER (for important view change, state transfer and > > rebalancing messages) > > org.infinispan.CACHE (for cache lifecycle events) > > org.infinispan.PERSISTENCE > > > > What do you think ? > > Any other suggestions ? > > > > Tristan > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/1c3ad310/attachment.html From afield at redhat.com Tue Aug 12 10:01:17 2014 From: afield at redhat.com (Alan Field) Date: Tue, 12 Aug 2014 10:01:17 -0400 (EDT) Subject: [infinispan-dev] Log message categories In-Reply-To: References: <53E9D1D6.30808@redhat.com> <419255335.22368717.1407846000718.JavaMail.zimbra@redhat.com> Message-ID: <1471504390.22541935.1407852077146.JavaMail.zimbra@redhat.com> Hey Dan, I know there aren't too many messages today, but I was thinking that status and statistics messages would be good to show. (Map/Reduce task ID ### has started the map phase, Map/Reduce task ID ### has completed the map phase + stats, etc., Query ID ### executed in x.xx secs) Thanks, Alan ----- Original Message ----- > From: "Dan Berindei" > To: "infinispan -Dev List" > Sent: Tuesday, August 12, 2014 2:37:45 PM > Subject: Re: [infinispan-dev] Log message categories > I don't think we have a lot of INFO map/reduce or dist exec, but perhaps some > DEBUG messages' level could be increased. Actually, that is also true for > state transfer and rebalancing messages... > I agree in principle with the idea, my only concern is that we'll spend too > much time managing which messages should be moved to these special > categories (and sometimes moved from DEBUG to INFO). > Cheers > Dan > On Tue, Aug 12, 2014 at 3:20 PM, Alan Field < afield at redhat.com > wrote: > > I would also propose these categories for log messages: > > > org.infinispan.QUERY > > > org.infinispan.MAPREDUCE > > > org.infinispan.DISTEXEC > > > Thanks, > > > Alan > > > ----- Original Message ----- > > > > From: "Tristan Tarrant" < ttarrant at redhat.com > > > > > To: "infinispan -Dev List" < infinispan-dev at lists.jboss.org > > > > > Sent: Tuesday, August 12, 2014 10:35:34 AM > > > > Subject: [infinispan-dev] Log message categories > > > > > > > > Dear all, > > > > > > > > currently the Infinispan log messages "fall" in the categories named > > > > from the originating class. 
While this isfine for TRACE/DEBUG messages, > > > > there are some high-level INFO events which warrant their own specific > > > > categories. I think that user-triggered events (such as JMX ops) should > > > > also be treated like this. > > > > Examples: > > > > > > > > org.infinispan.CLUSTER (for important view change, state transfer and > > > > rebalancing messages) > > > > org.infinispan.CACHE (for cache lifecycle events) > > > > org.infinispan.PERSISTENCE > > > > > > > > What do you think ? > > > > Any other suggestions ? > > > > > > > > Tristan > > > > _______________________________________________ > > > > infinispan-dev mailing list > > > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/c5bae827/attachment.html From dan.berindei at gmail.com Tue Aug 12 10:05:54 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 12 Aug 2014 17:05:54 +0300 Subject: [infinispan-dev] DIST-SYNC, put(), a problem and a solution In-Reply-To: References: <53E1FD20.5020504@redhat.com> <53E24796.20104@redhat.com> Message-ID: Sorry for the long delay, Sanne! On Wed, Aug 6, 2014 at 9:50 PM, Sanne Grinovero wrote: > On 6 August 2014 18:49, Dan Berindei wrote: > > > > > > > > On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban wrote: > >> > >> Hey Dan, > >> > >> On 06/08/14 16:13, Dan Berindei wrote: > >> > I could create the issue in JIRA, but I wouldn't make it high priority > >> > because I think it have lots of corner cases with NBST and cause > >> > headaches for the maintainers of state transfer ;) > >> > >> I do believe the put-while-holding-the-lock issue *is* a critical issue; > >> anyone banging a cluster of Infinispan nodes with more than 1 thread > >> will run into lock timeouts, with or without transactions. The only > >> workaround for now is to use total order, but at the cost of reduced > >> performance. However, once a system starts hitting the lock timeout > >> issues, performance drops to a crawl, way slower than TO, and work > >> starts to pile up, which compounds the problem. > > > > > > I wouldn't call it critical because you can always increase the number of > > threads. It won't be pretty, but it will work around the thread > exhaustion > > issue. > > If Infinispan doesn't do it automatically, I wouldn't count that as a > solution. > Consider the project goal is to make it easy to scale up/down > dynamically.. if it requires experts to be alert all the time to for > such manual interventions it's a failure. > Besides, I can count the names of people able to figure such trouble > out on a single hand.. so I agree this is critical. > > I agree it's not a solution, just a workaround. I also agree that when something (unrelated) goes wrong, the thread pools can fill quite quickly and then it becomes hard (or even impossible) to recover. So the pool sizes have to be big enough to handle a backlog of 10 seconds or more (until FD/FD_ALL suspect a crashed node), not just regular operation. 
> > >> I believe doing a sync RPC while holding the lock on a key is asking for > >> trouble and is (IMO) an anti-pattern. > > > > > > We also hold a lock on a key between the LockControlCommand and the > > TxCompletionNotificationCommand in pessimistic-locking caches, and there's > > at least one sync PrepareCommand RPC between them... > > > > So I don't see it as an anti-pattern, the only problem is that we should be > > able to do that without blocking internal threads in addition to the user > > thread (which is how tx caches do it). > > > >> > >> Sorry if this has a negative impact on NBST, but should we not fix this > >> because we don't want to risk a change to NBST ? > > > > > > I'm not saying it will have a negative impact on NBST, I'm just saying I > > don't want to start implementing an incomplete proposal for the basic flow > > and leave the state transfer/topology change issues for "later". When > > happens when a node leaves, when a backup owner is added, or when the > > primary owner changes should be part of the initial discussion, not an > > afterthought. > > Absolutely! > No change should be done leaving questions open, and I don't presume I > suggested a solution I was just trying to start a conversation on > using "such a pattern". > But also I believe we already had such conversations in past meetings, > so my words were terse and short because I just wanted to remind about > those. > > I don't recall discussing the non-tx locking scheme before, only the state machine approach... > > > E.g. with your proposal, any updates in the replication queue on the primary > > owner will be lost when that primary owner dies, even though we told the > > user that we successfully updated the key. To quote from my first email on > > this thread: "OTOH, if the primary owner dies, we have to ask a backup, and > > we can lose the modifications not yet replicated by the primary." > > > > With Sanne's proposal, we wouldn't report to the user that we stored the > > value until all the backups confirmed the update, so we wouldn't have that > > problem. But I don't see how we could keep the sequence of versions > > monotonous when the primary owner of the key changes without some extra sync > > RPCs (also done while holding the key lock). IIRC TOA also needs some sync > > RPCs to generate its sequence numbers. > > I don't know how NBST v.21 is working today, but I trust it doesn't > lose writes and that we should break down the problems in smaller > problems, in this case I hope to build on the solid foundations of > NBST. > Except NBST preserves written data, but it doesn't care about writes in progress (at least in non-tx caches) ;) The replication algorithm takes care of that: when a backup owner sees a newer topology, it throws an OutdatedTopologyException, the originator receives the exception, it waits to receive the new topology, and it retries the operation on the new (or the same) primary owner. Still, there are times when this is not enough: https://issues.jboss.org/browse/ISPN-3830 https://issues.jboss.org/browse/ISPN-4286 https://issues.jboss.org/browse/ISPN-3918 We were considering using random client-generated version numbers to fix some of the issues: when retrying, the client would use the same version number, and it would be easy to detect whether the update has been applied on a particular node or not.
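To spell out the ordering rule being discussed in this thread as a tiny self-contained sketch (illustrative only, none of this is Infinispan code):

// Order writes by the topology id that installed the current primary owner,
// then by that primary's per-key sequence number - the "compact vector clock"
// idea from earlier in the thread.
final class EntryVersion implements Comparable<EntryVersion> {
   final int topologyId; // increases on every rebalance/ownership change
   final long seqNo;     // monotonic per key on the current primary; may restart in a new topology

   EntryVersion(int topologyId, long seqNo) {
      this.topologyId = topologyId;
      this.seqNo = seqNo;
   }

   @Override
   public int compareTo(EntryVersion other) {
      // A write coordinated in a newer topology wins; within a topology the
      // primary's sequence number totally orders the writes on a key.
      int byTopology = Integer.compare(topologyId, other.topologyId);
      return byTopology != 0 ? byTopology : Long.compare(seqNo, other.seqNo);
   }
}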
There is still a chance to end up with inconsistent data if the client is the same as the primary owner (or they both die at approximately the same time) - I don't think we can ever fix that in non-tx mode. > When the key is re-possessed by a new node (and this starts to > generate "reference" write commands), you could restart the sequences: > you don't need an universal monotonic number, all what backup owners > need is an ordering rule and understand that the commands coming from > the new owner are more recent than the old owner. AFAIK you already > have the notion of view generation id? > "universal monotonic number" and a global "ordering rule" sound exactly the same to me :) We do have the notion of topology id (didn't use "view" to avoid confusion with jgroups views). > Essentially we'd need to store together with the entry not only its > sequence but also the viewid. It's a very simplified (compact) vector > clock, because in practice from this viewId we can extrapolate > addresses and owners.. but is simpler than the full pattern, as you > only need the last one, as the longer tail of events is handled by > NBST I think? > You're right, adding the topology id to a monotone-per-node version number would give you a monotonic sequence (with holes in it). But when we retry a command, the new primary owner would generate a different version number, so we would get a different ordering for the same write operation. We could return the version number to the client and allow it to retry with the same version number, but that would still fail if the primary owner died. > > One catch is I think you need tombstones, but those are already needed > for so many things that we can't avoid them :) > We're not talking about removes yet, so we can postpone the discussion about tombstones for now :) > > Cheers, > Sanne > > > > > >> > >> > Besides, I'm still not sure I understood your proposals properly, e.g. > >> > whether they are meant only for non-tx caches or you want to change > >> > something for tx caches as well... > >> > >> I think this can be used for both cases; however, I think either Sanne's > >> solution of using seqnos *per key* and updating in the order of seqnos > >> or using Pedro's total order impl are probably better solutions. > >> > >> I'm not pretending these solutions are final (e.g. Sanne's solution > >> needs more thought when multiple keys are involved), but we should at > >> least acknowledge the issue exists, create a JIRA to prioritize it and > >> then start discussing solutions. > >> > > > > We've been discussing solutions without a JIRA just fine :) > > > > My feeling so far is that the thread exhaustion problem would be better > > served by porting TO to non-tx caches and/or changing non-tx locking to > not > > require a thread. I have created an issue for TO [1], but IMO the locking > > rework [2] should be higher priority, as it can help both tx and non-tx > > caches. > > > > [1] https://issues.jboss.org/browse/ISPN-4610 > > [2] https://issues.jboss.org/browse/ISPN-2849 > > > >> > >> > > >> > > >> > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban >> > > wrote: > >> > > >> > Seems like this discussion has died with the general agreement > that > >> > this > >> > is broken and with a few proposals on how to fix it, but without > any > >> > follow-up action items. > >> > > >> > I think we (= someone from the ISPN team) need to create a JIRA, > >> > preferably blocking. > >> > > >> > WDYT ? 
> >> > > >> > If not, here's what our options are: > >> > > >> > #1 I'll create a JIRA > >> > > >> > #2 We'll hold the team meeting in Krasnojarsk, Russia > >> > > >> > #3 There will be only vodka, no beers in #2 > >> > > >> > #4 Bela will join the ISPN team > >> > > >> > Thoughts ? > >> > >> > >> -- > >> Bela Ban, JGroups lead (http://www.jgroups.org) > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/78856ece/attachment-0001.html From dan.berindei at gmail.com Tue Aug 12 10:10:35 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 12 Aug 2014 17:10:35 +0300 Subject: [infinispan-dev] Log message categories In-Reply-To: <1471504390.22541935.1407852077146.JavaMail.zimbra@redhat.com> References: <53E9D1D6.30808@redhat.com> <419255335.22368717.1407846000718.JavaMail.zimbra@redhat.com> <1471504390.22541935.1407852077146.JavaMail.zimbra@redhat.com> Message-ID: Maybe it would be appropriate for M/R tasks, which can be expected to be slow anyway. But I'd rather have a progress reporting/statistics interface, so that the application can decide for itself whether it's worth logging something as INFO or as DEBUG. I would expect queries to be not just fast, but also numerous, so I wouldn't log anything above TRACE for individual queries. Cheers Dan On Tue, Aug 12, 2014 at 5:01 PM, Alan Field wrote: > Hey Dan, > > I know there aren't too many messages today, but I was thinking that > status and statistics messages would be good to show. (Map/Reduce task ID > ### has started the map phase, Map/Reduce task ID ### has completed the map > phase + stats, etc., Query ID ### executed in x.xx secs) > > Thanks, > Alan > > ------------------------------ > > *From: *"Dan Berindei" > > *To: *"infinispan -Dev List" > *Sent: *Tuesday, August 12, 2014 2:37:45 PM > *Subject: *Re: [infinispan-dev] Log message categories > > > I don't think we have a lot of INFO map/reduce or dist exec, but perhaps > some DEBUG messages' level could be increased. Actually, that is also true > for state transfer and rebalancing messages... > > I agree in principle with the idea, my only concern is that we'll spend > too much time managing which messages should be moved to these special > categories (and sometimes moved from DEBUG to INFO). > > Cheers > Dan > > > > On Tue, Aug 12, 2014 at 3:20 PM, Alan Field wrote: > >> I would also propose these categories for log messages: >> >> org.infinispan.QUERY >> org.infinispan.MAPREDUCE >> org.infinispan.DISTEXEC >> >> Thanks, >> Alan >> >> ----- Original Message ----- >> > From: "Tristan Tarrant" >> > To: "infinispan -Dev List" >> > Sent: Tuesday, August 12, 2014 10:35:34 AM >> > Subject: [infinispan-dev] Log message categories >> > >> > Dear all, >> > >> > currently the Infinispan log messages "fall" in the categories named >> > from the originating class. 
While this isfine for TRACE/DEBUG messages, >> > there are some high-level INFO events which warrant their own specific >> > categories. I think that user-triggered events (such as JMX ops) should >> > also be treated like this. >> > Examples: >> > >> > org.infinispan.CLUSTER (for important view change, state transfer and >> > rebalancing messages) >> > org.infinispan.CACHE (for cache lifecycle events) >> > org.infinispan.PERSISTENCE >> > >> > What do you think ? >> > Any other suggestions ? >> > >> > Tristan >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/545ae2b6/attachment.html From dan.berindei at gmail.com Tue Aug 12 16:41:29 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 12 Aug 2014 23:41:29 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o wrote: > Can?t comment on the document, so here are my thoughts: > > Re: ?Get rid of lazy cache starting...all the caches run on all nodes...it > should still be possible to start a cache at runtime, but it will be run on > all nodes as well? > > ^ Though I like the idea, it might change a crucial aspect of how default > cache configuration works (if we leave the concept of default cache at > all). Say you start a cache named ?a? for which there?s no config. Up until > now we?d use the default cache configuration and create a cache ?a? with > that config. However, if caches are started cluster wide now, before you > can do that, you?d have to check that there?s no cache ?a? configuration > anywhere in the cluster. If there is, I guess the configuration would be > shipped to the node that starts the cache (if it does not have it) and > create the cache with it? Or are you assuming all nodes in the cluster must > have all configurations defined? > +1 to remove the default cache as a default configuration. I like the idea of shipping the cache configuration to all the nodes. We will have to require any user-provided objects in the configuration to be serializable/externalizable, but I don't see a big problem with that. In fact, it would also allow us to send the entire configuration to the coordinator on join, so we could verify that the configuration on all nodes is compatible (not exactly the same, since things like capacityFactor can be different). And it would remove the need for the CacheJoinInfo class... 
A more limited alternative, not requiring config serialization, would be to disallow getCache(name) when a configuration doesn't exist but add a method createCache(name, configurationName) that only requires configurationName to be defined everywhere. > > > Re: ?Revisiting Configuration elements?" > > If we?re going to do another round of updates in this area, I think we should consider what to do with unconfigured values. Back in the 4.x days, the JAXB XML parsing allowed us to know which configuration elements the user had not configured, which helped us tweak configuration and do validation more easily. Now, when we look at a Configuration builder object, we see default values but we do not that a value is the one it is because the user has specifically defined it, or because it?s unconfigured. One way to do so is by separating the default values, say to an XML file which is reference (I think WF does something along these lines) and leave the builder object with all null values. This would make it easy to figure out which elements have been touched and for that those that have not, use default values. This has popped up in the forums before but can?t find a link right now... > > I was also thinking of doing something like that, but instead of having a separate XML with the defaults I was going to propose creating a layer of indirection: every configuration value would be a ConfigurationProperty, with a default value, an override value, and an actual value. We already do something similar for e.g. StateTransferConfiguration.awaitInitialTransfer and originalAwaitInitialTransfer). I haven't seen the forum post, but I think that would allow us to more properly validate conflicting configuration values. E.g. the checks in Configurations.isVersioningEnabled() could be moved to ConfigurationBuilder.validate()/create(). > > Cheers, > > On 28 Jul 2014, at 17:04, Mircea Markus wrote: > > > Hi, > > > Tristan, Sanne, Gustavo and I meetlast week to discuss a) Infinispan usability and b) monitoring and management. Minutes attached. > > > > https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing > > > > Cheers, > > -- > > Mircea Markus > > Infinispan lead (www.infinispan.org) > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/ed7392b2/attachment.html From cgfulton at gmail.com Tue Aug 12 17:40:53 2014 From: cgfulton at gmail.com (Gerard Fulton) Date: Tue, 12 Aug 2014 14:40:53 -0700 Subject: [infinispan-dev] Hot Rod not starting Message-ID: We are having issues with Hot Rod when we stop and start a member in a four node cluster. The server will go through the start up process but then throw a StateTransfer timeout exception. When this issue happens the client port never binds on the node and we are unable to successfully start the node and have it join the cluster until all nodes in the cluster are restarted. Also note the problem cascades if any of the other nodes are restarted in the cluster.
They too are not able to rejoin the cluster successfully. I have posted my logs, configuration, and tcpdump capture on the forum. Forum Link: https://community.jboss.org/thread/243451 Any help is appreciated. Gerard -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140812/36150e23/attachment.html From paul.ferraro at redhat.com Wed Aug 13 11:29:56 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 13 Aug 2014 11:29:56 -0400 (EDT) Subject: [infinispan-dev] Removal of ConfigurationBuilder.classLoader(...) In-Reply-To: <2035808062.5882244.1407941312919.JavaMail.zimbra@redhat.com> Message-ID: <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> It seems that the ability to associate a cache with a specific classloader has been removed in 7.0 by this commit: https://github.com/infinispan/infinispan/commit/39a21a025db2e0f85019b93d09052b4772abbaa8 I don't fully understand the reason for the removal. WildFly previously relied on this mechanism to define the classloader from which Infinispan should load any classes when building its configuration. In general, WF builds its configuration using object instances instead of class names, so normally this isn't a problem. However, there isn't always such a mechanism (e.g. https://issues.jboss.org/browse/ISPN-3979). However, now that ConfigurationBuilder.classloader(...) is gone, the classloader used to build a Configuration is effectively hardcoded (usually as this.getClass().getClassLoader()). This directly affects the ability for a WildFly user to configure a cache with querying. IndexingConfigurationBuilder.validate(...) previously used the configured classloader to validate that the query module is loadable. https://github.com/infinispan/infinispan/blob/6.0.x/core/src/main/java/org/infinispan/configuration/cache/IndexingConfigurationBuilder.java#L141 However, this is now hardcoded to use the classloader that loaded the IndexingConfigurationBuilder class itself. https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/IndexingConfigurationBuilder.java#L183 The WF distribution uses distinct modules for infinispan-core vs infinispan-query. Consequently, if your caches don't need querying, the query module is not loaded. WF8 let the user configure a cache with query support via . Currently, however, the only way we can satisfy the validation logic in IndexingConfigurationBuilder.validate(...) is to bloat our core "org.infinispan" module with the infinispan-query module and its dependencies. I don't want to do that. Is there some way we can re-enable the ability to configure a cache with a classloader that still satisfies the reasons for its original removal? GlobalConfigurationBuilder still supports the ability to configure a classloader, why remove this from ConfigurationBuilder? That said, however, the IndexingConfigurationBuilder validation itself is wrong. Ultimately, the infinispan-query module will be loaded by the classloader with which the GlobalConfiguration was built (i.e. the default classloader of the cache), so really, at the very least, the validation logic in IndexingConfigurationBuilder.validate(...) should reflect this. I've opened https://issues.jboss.org/browse/ISPN-4639 to track this specific bug. Thoughts?
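P.S. For reference, this is the kind of 6.0.x-era usage I mean (a sketch; the deployment classloader parameter stands in for whatever the container would pass):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class ClassLoaderConfigSketch {
   // Compiles against 6.0.x; classLoader(...) is the method removed by the commit above.
   static Configuration build(ClassLoader deploymentClassLoader) {
      return new ConfigurationBuilder()
            .classLoader(deploymentClassLoader) // gone in 7.0
            .indexing().enable()
            .build();
   }
}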
From hsaid at redhat.com Wed Aug 13 15:39:21 2014 From: hsaid at redhat.com (Hammad Said) Date: Wed, 13 Aug 2014 15:39:21 -0400 (EDT) Subject: [infinispan-dev] Hotrod Server/client cluster with distributed cache question In-Reply-To: <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> References: <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> Message-ID: <2001353693.29016134.1407958761215.JavaMail.zimbra@redhat.com> I have four Infinispan server cluster nodes. The hotrod clients are running on each of the machines where the server resides. Each hotrod client is configured to go to a particular server. The cacheA is a distributed cache with two owners. I want to understand the following: 1) If a particular key1 is saved to a particular server1 and is replicated to server2, and the client1 for server1 tries to get the key, does it always get from server1? 2) When the client2 for which the primary server is server2, tries to get key1, does it get from server2 or does it know to get from the primary owner server1? 3) When client3 for server3 tries to get the key, does it get from server1 (primary owner), or does it get from server3, which then requests it from server1? Thanks! Hammad From galder at redhat.com Fri Aug 15 04:37:09 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Fri, 15 Aug 2014 10:37:09 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> On 12 Aug 2014, at 22:41, Dan Berindei wrote: > > > > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o wrote: > Can?t comment on the document, so here are my thoughts: > > Re: ?Get rid of lazy cache starting...all the caches run on all nodes...it should still be possible to start a cache at runtime, but it will be run on all nodes as well? > > ^ Though I like the idea, it might change a crucial aspect of how default cache configuration works (if we leave the concept of default cache at all). Say you start a cache named ?a? for which there?s no config. Up until now we?d use the default cache configuration and create a cache ?a? with that config. However, if caches are started cluster wide now, before you can do that, you?d have to check that there?s no cache ?a? configuration anywhere in the cluster. If there is, I guess the configuration would be shipped to the node that starts the cache (if it does not have it) and create the cache with it? Or are you assuming all nodes in the cluster must have all configurations defined? > > +1 to remove the default cache as a default configuration. > > I like the idea of shipping the cache configuration to all the nodes. We will have to require any user-provided objects in the configuration to be serializable/externalizable, but I don't see a big problem with that. > > In fact, it would also allow us to send the entire configuration to the coordinator on join, so we could verify that the configuration on all nodes is compatible (not exactly the same, since things like capacityFactor can be different). And it would remove the need for the CacheJoinInfo class... > > A more limited alternative, not requiring config serialization, would be to disallow getCache(name) when a configuration doesn't exist but add a method createCache(name, configurationName) that only requires configurationName to be defined everywhere.
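^ For illustration, I guess usage of such a hypothetical method would look something like the line below - to be clear, neither createCache(name, configurationName) nor the "dist-template" configuration exist today, they're just to make the idea concrete:

// "dist-template" would be a named configuration required to be defined on every node:
Cache<String, byte[]> users = cacheManager.createCache("users", "dist-template");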
> > > Re: ?Revisiting Configuration elements?" > > If we?re going to do another round of updates in this area, I think we should consider what to do with unconfigured values. Back in the 4.x days, the JAXB XML parsing allowed us to know which configuration elements the user had not configured, which helped us tweak configuration and do validation more easily. Now, when we look at a Configuration builder object, we see default values but we do not that a value is the one it is because the user has specifically defined it, or because it?s unconfigured. One way to do so is by separating the default values, say to an XML file which is reference (I think WF does something along these lines) and leave the builder object with all null values. This would make it easy to figure out which elements have been touched and for that those that have not, use default values. This has popped up in the forums before but can?t find a link right now... > > I was also thinking of doing something like that, but instead of having a separate XML with the defaults I was going to propose creating a layer of indirection: every configuration value would be a ConfigurationProperty, with a default value, an override value, and an actual value. We already do something similar for e.g. StateTransferConfiguration.awaitInitialTransfer and originalAwaitInitialTransfer). ^ What?s the problem with a separate XML file? I really like the idea of externalizing default values from a documentation perspective and ease of change down the line, both for us and for users. On top of that, it could be validated and be presented as a reference XML file, getting rid of the sample XML file that we currently have which is half done and no one really updates it. > > I haven't seen the forum post, but I think that would allow us more properly validate conflicting configuration values. E.g. the checks in Configurations.isVersioningEnabled() could be moved to ConfigurationBuilder.validate()/create(). Totally, validation right now it?s quite tricky due to the lack of separation. Cheers, > > > Cheers, > > On 28 Jul 2014, at 17:04, Mircea Markus wrote: > > > Hi, > > > > Tristan, Sanne, Gustavo and I meetlast week to discuss a) Infinispan usability and b) monitoring and management. Minutes attached. 
> > > > https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing > > > > Cheers, > > -- > > Mircea Markus > > Infinispan lead (www.infinispan.org) > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Fri Aug 15 06:41:53 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 15 Aug 2014 13:41:53 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> Message-ID: On Fri, Aug 15, 2014 at 11:37 AM, Galder Zamarre?o wrote: > > On 12 Aug 2014, at 22:41, Dan Berindei wrote: > > > > > > > > > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o > wrote: > > Can?t comment on the document, so here are my thoughts: > > > > Re: ?Get rid of lazy cache starting...all the caches run on all > nodes...it should still be possible to start a cache at runtime, but it > will be run on all nodes as well? > > > > ^ Though I like the idea, it might change a crucial aspect of how > default cache configuration works (if we leave the concept of default cache > at all). Say you start a cache named ?a? for which there?s no config. Up > until now we?d use the default cache configuration and create a cache ?a? > with that config. However, if caches are started cluster wide now, before > you can do that, you?d have to check that there?s no cache ?a? > configuration anywhere in the cluster. If there is, I guess the > configuration would be shipped to the node that starts the cache (if it > does not have it) and create the cache with it? Or are you assuming all > nodes in the cluster must have all configurations defined? > > > > +1 to remove the default cache as a default configuration. > > > > I like the idea of shipping the cache configuration to all the nodes. We > will have to require any user-provided objects in the configuration to be > serializable/externalizable, but I don't see a big problem with that. > > > > In fact, it would also allow us to send the entire configuration to the > coordinator on join, so we could verify that the configuration on all nodes > is compatible (not exactly the same, since things like capacityFactor can > be different). And it would remove the need for the CacheJoinInfo class... > > > > A more limited alternative, not requiring config serialization, would be > to disallow getCache(name) when a configuration doesn't exist but add a > method createCache(name, configurationName) that only requires > configurationName to be defined everywhere. > > > > > > Re: ?Revisiting Configuration elements?" > > > > If we?re going to do another round of updates in this area, I think we > should consider what to do with unconfigured values. 
Back in the 4.x days, > the JAXB XML parsing allowed us to know which configuration elements the > user had not configured, which helped us tweak configuration and do > validation more easily. Now, when we look at a Configuration builder > object, we see default values but we do not that a value is the one it is > because the user has specifically defined it, or because it?s unconfigured. > One way to do so is by separating the default values, say to an XML file > which is reference (I think WF does something along these lines) and leave > the builder object with all null values. This would make it easy to figure > out which elements have been touched and for that those that have not, use > default values. This has popped up in the forums before but can?t find a > link right now... > > > > I was also thinking of doing something like that, but instead of having > a separate XML with the defaults I was going to propose creating a layer of > indirection: every configuration value would be a ConfigurationProperty, > with a default value, an override value, and an actual value. We already do > something similar for e.g. StateTransferConfiguration.awaitInitialTransfer > and originalAwaitInitialTransfer). > > ^ What?s the problem with a separate XML file? > I really like the idea of externalizing default values from a > documentation perspective and ease of change down the line, both for us and > for users. > > On top of that, it could be validated and be presented as a reference XML > file, getting rid of the sample XML file that we currently have which is > half done and no one really updates it. > First of all, how would that XML look? Like a regular configuration file, with one cache of each type? One store of each type? In every cache? How would we handle defaults for custom stores? We already have an XML file with default values: infinispan-config-7.0.xsd. It would be nice if we could parse that and keep the defaults in a single place, but if we need to duplicate the defaults anyway, I'd rather keep them in code. > I also think with a separate XML file, we'd still need to keep some not-quite-defaults in the various builder.build() methods (or Configurations methods). My idea was to keep all these in the *ConfigurationBuilder classes, though I know we'll never get to 100%. > > > > I haven't seen the forum post, but I think that would allow us more > properly validate conflicting configuration values. E.g. the checks in > Configurations.isVersioningEnabled() could be moved to > ConfigurationBuilder.validate()/create(). > > Totally, validation right now it?s quite tricky due to the lack of > separation. > > Cheers, > > > > > > > Cheers, > > > > On 28 Jul 2014, at 17:04, Mircea Markus wrote: > > > > > Hi, > > > > > > Tristan, Sanne, Gustavo and I meetlast week to discuss a) Infinispan > usability and b) monitoring and management. Minutes attached. 
> > > > > > > https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing > > > > > > Cheers, > > > -- > > > Mircea Markus > > > Infinispan lead (www.infinispan.org) > > > > > > > > > > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140815/2c93ece1/attachment.html From sanne at infinispan.org Fri Aug 15 08:29:20 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 15 Aug 2014 13:29:20 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies Message-ID: The goal being to resolve ISPN-4561, I was thinking to expose a very simple reference counter in the AdvancedCache API. As you know the Query module - which triggers on indexed caches - can use the Infinispan Lucene Directory to store its indexes in a (different) Cache. When the CacheManager is stopped, if the index storage caches are stopped first, then the indexed cache is stopped, this might need to flush/close some pending state on the index and this results in an illegal operation as the storate is shut down already. We could either implement a complex dependency graph, or add a method like: boolean incRef(); on AdvancedCache. when the Cache#close() method is invoked, this will do an internal decrement, and only when hitting zero it will really close the cache. A CacheManager shutdown will loop through all caches, and invoke close() on all of them; the close() method should return something so that the CacheManager shutdown loop understand if it really did close all caches or if not, in which case it will loop again through all caches, and loops until all cache instances are really closed. The return type of "close()" doesn't necessarily need to be exposed on public API, it could be an internal only variant. Could we do this? --Sanne From dan.berindei at gmail.com Fri Aug 15 09:55:31 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 15 Aug 2014 16:55:31 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: Message-ID: It looks to me like you actually want a partial order between caches on shutdown, so why not declare an explicit dependency (e.g. manager.stopOrder(before, after)? We could even throw an exception if the user tries to stop a cache manually in the wrong order (e.g. TestingUtil.killCacheManagers). 
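Something along these lines, perhaps (hypothetical API, purely to make the idea concrete):

// Declare that "lucene-index-cache" must outlive "indexed-cache" on shutdown;
// stopping them in the wrong order could then fail fast with an exception.
cacheManager.stopOrder("indexed-cache", "lucene-index-cache");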
Alternatively, we could add an event CacheManagerStopEvent(pre=true) at the cache manager level that is invoked before any cache is stopped, and you could close all the indexes in that listener. The event could even be at the cache level, if it would make things easier. Cheers Dan On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero wrote: > The goal being to resolve ISPN-4561, I was thinking to expose a very > simple reference counter in the AdvancedCache API. > > As you know the Query module - which triggers on indexed caches - can > use the Infinispan Lucene Directory to store its indexes in a > (different) Cache. > When the CacheManager is stopped, if the index storage caches are > stopped first, then the indexed cache is stopped, this might need to > flush/close some pending state on the index and this results in an > illegal operation as the storate is shut down already. > > We could either implement a complex dependency graph, or add a method like: > > > boolean incRef(); > > on AdvancedCache. > > when the Cache#close() method is invoked, this will do an internal > decrement, and only when hitting zero it will really close the cache. > > A CacheManager shutdown will loop through all caches, and invoke > close() on all of them; the close() method should return something so > that the CacheManager shutdown loop understand if it really did close > all caches or if not, in which case it will loop again through all > caches, and loops until all cache instances are really closed. > The return type of "close()" doesn't necessarily need to be exposed on > public API, it could be an internal only variant. > Could we do this? > > --Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140815/5b529916/attachment.html From sanne at infinispan.org Fri Aug 15 10:26:24 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 15 Aug 2014 15:26:24 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: Message-ID: On 15 August 2014 14:55, Dan Berindei wrote: > It looks to me like you actually want a partial order between caches on > shutdown, so why not declare an explicit dependency (e.g. > manager.stopOrder(before, after)? We could even throw an exception if the > user tries to stop a cache manually in the wrong order (e.g. > TestingUtil.killCacheManagers). Because that's much more complex to implement? incRef() seems trivial, effective and can be used by other components for different patterns. > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at the > cache manager level that is invoked before any cache is stopped, and you > could close all the indexes in that listener. The event could even be at the > cache level, if it would make things easier. I like that more than defining explicit dependency links and it would probably be good enough for this specific problem but I feel like it doesn't solve similar problems with a more complex dependency sequence of services. Counters are effectively providing the same semantics, just that you can use the pre-close pattern nesting it "count times". 
Also having ref-counting available makes it easier for users to implement independent components - with an independent lifecycle - which might share the same cache. -- Sanne > > Cheers > Dan > > > > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero > wrote: >> >> The goal being to resolve ISPN-4561, I was thinking to expose a very >> simple reference counter in the AdvancedCache API. >> >> As you know the Query module - which triggers on indexed caches - can >> use the Infinispan Lucene Directory to store its indexes in a >> (different) Cache. >> When the CacheManager is stopped, if the index storage caches are >> stopped first, then the indexed cache is stopped, this might need to >> flush/close some pending state on the index and this results in an >> illegal operation as the storate is shut down already. >> >> We could either implement a complex dependency graph, or add a method >> like: >> >> >> boolean incRef(); >> >> on AdvancedCache. >> >> when the Cache#close() method is invoked, this will do an internal >> decrement, and only when hitting zero it will really close the cache. >> >> A CacheManager shutdown will loop through all caches, and invoke >> close() on all of them; the close() method should return something so >> that the CacheManager shutdown loop understand if it really did close >> all caches or if not, in which case it will loop again through all >> caches, and loops until all cache instances are really closed. >> The return type of "close()" doesn't necessarily need to be exposed on >> public API, it could be an internal only variant. >> >> >> Could we do this? >> >> --Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ales.justin at gmail.com Fri Aug 15 17:40:27 2014 From: ales.justin at gmail.com (Ales Justin) Date: Fri, 15 Aug 2014 23:40:27 +0200 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: Message-ID: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> What about if you add an SPI for this? e.g. this could be nicely implemented on top of WildFly's MSC And by default you would keep this simple incRef, or some similar simple state machine we used in Microcontainer. -Ales On 15 Aug 2014, at 16:26, Sanne Grinovero wrote: > On 15 August 2014 14:55, Dan Berindei wrote: >> It looks to me like you actually want a partial order between caches on >> shutdown, so why not declare an explicit dependency (e.g. >> manager.stopOrder(before, after)? We could even throw an exception if the >> user tries to stop a cache manually in the wrong order (e.g. >> TestingUtil.killCacheManagers). > > Because that's much more complex to implement? > incRef() seems trivial, effective and can be used by other components > for different patterns. > >> Alternatively, we could add an event CacheManagerStopEvent(pre=true) at the >> cache manager level that is invoked before any cache is stopped, and you >> could close all the indexes in that listener. The event could even be at the >> cache level, if it would make things easier. 
> > I like that more than defining explicit dependency links and it would > probably be good enough for this specific problem > but I feel like it doesn't solve similar problems with a more complex > dependency sequence of services. > Counters are effectively providing the same semantics, just that you > can use the pre-close pattern nesting it "count times". > > Also having ref-counting available makes it easier for users to > implement independent components - with an independent lifecycle - > which might share the same cache. > > -- Sanne > >> >> Cheers >> Dan >> >> >> >> On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >> wrote: >>> >>> The goal being to resolve ISPN-4561, I was thinking to expose a very >>> simple reference counter in the AdvancedCache API. >>> >>> As you know the Query module - which triggers on indexed caches - can >>> use the Infinispan Lucene Directory to store its indexes in a >>> (different) Cache. >>> When the CacheManager is stopped, if the index storage caches are >>> stopped first, then the indexed cache is stopped, this might need to >>> flush/close some pending state on the index and this results in an >>> illegal operation as the storate is shut down already. >>> >>> We could either implement a complex dependency graph, or add a method >>> like: >>> >>> >>> boolean incRef(); >>> >>> on AdvancedCache. >>> >>> when the Cache#close() method is invoked, this will do an internal >>> decrement, and only when hitting zero it will really close the cache. >>> >>> A CacheManager shutdown will loop through all caches, and invoke >>> close() on all of them; the close() method should return something so >>> that the CacheManager shutdown loop understand if it really did close >>> all caches or if not, in which case it will loop again through all >>> caches, and loops until all cache instances are really closed. >>> The return type of "close()" doesn't necessarily need to be exposed on >>> public API, it could be an internal only variant. >>> >>> >>> Could we do this? >>> >>> --Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon Aug 18 06:33:59 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 18 Aug 2014 13:33:59 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> Message-ID: Ales, I don't think the implementation matters that much, I was only concerned about the API. BTW, where could I find some documentation on MSC? Sanne, I missed something in your initial email: you mention a Cache.close() method, did you mean Cache.stop(), or did you mean to add a new close() method? Cache doesn't actually define a stop() method, it inherits the stop() method from the Lifecycle interface. So changing its semantics only for caches would be hacky. Adding a different close() method would be better, but it still wouldn't be my first choice... 
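For comparison, the explicit-dependency alternative suggested earlier in the thread could be as small as this; stopOrder() and all the bookkeeping below are hypothetical, not existing API:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of an explicit stop-order registry kept by the
    // cache manager; none of this exists in Infinispan.
    final class StopOrderRegistry {

        // maps a cache name to the caches that must already be stopped
        // before it may stop itself
        private final Map<String, Set<String>> mustStopFirst = new ConcurrentHashMap<>();
        private final Set<String> stopped = ConcurrentHashMap.newKeySet();

        // stopOrder(before, after): 'before' has to stop before 'after'
        void stopOrder(String before, String after) {
            mustStopFirst.computeIfAbsent(after, k -> ConcurrentHashMap.newKeySet())
                         .add(before);
        }

        // invoked from Cache.stop(): fail fast on an out-of-order stop
        void onStop(String cacheName) {
            for (String required : mustStopFirst.getOrDefault(cacheName, Collections.emptySet())) {
                if (!stopped.contains(required)) {
                    throw new IllegalStateException(
                          cacheName + " must not stop before " + required);
                }
            }
            stopped.add(cacheName);
        }
    }

The manager could also use the same information to sort the caches at shutdown, rather than merely failing fast on a wrong manual order.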
On Sat, Aug 16, 2014 at 12:40 AM, Ales Justin wrote: > What about if you add an SPI for this? > > e.g. this could be nicely implemented on top of WildFly's MSC > > And by default you would keep this simple incRef, > or some similar simple state machine we used in Microcontainer. > > -Ales > > On 15 Aug 2014, at 16:26, Sanne Grinovero wrote: > > > On 15 August 2014 14:55, Dan Berindei wrote: > >> It looks to me like you actually want a partial order between caches on > >> shutdown, so why not declare an explicit dependency (e.g. > >> manager.stopOrder(before, after)? We could even throw an exception if > the > >> user tries to stop a cache manually in the wrong order (e.g. > >> TestingUtil.killCacheManagers). > > > > Because that's much more complex to implement? > > incRef() seems trivial, effective and can be used by other components > > for different patterns. > Implementing proper dependencies doesn't need to be difficult either, all we need is to keep a list of dependants in the cache and prune the stopped caches from it before doing the check. incRef might be easier to implement, but instead it seems harder to explain to a user in the Javadoc. > > >> Alternatively, we could add an event CacheManagerStopEvent(pre=true) at > the > >> cache manager level that is invoked before any cache is stopped, and you > >> could close all the indexes in that listener. The event could even be > at the > >> cache level, if it would make things easier. > > > > I like that more than defining explicit dependency links and it would > > probably be good enough for this specific problem > > but I feel like it doesn't solve similar problems with a more complex > > dependency sequence of services. > > Counters are effectively providing the same semantics, just that you > > can use the pre-close pattern nesting it "count times". > > > > Also having ref-counting available makes it easier for users to > > implement independent components - with an independent lifecycle - > > which might share the same cache. > By independent components do you mean global components? That wouldn't work, since we only start stopping global components after we have stopped all the caches - regardless of the order in which we stop caches. A global pre-stop event, instead, would allow global components to do stuff before any of the caches is stopped. > > > -- Sanne > > > >> > >> Cheers > >> Dan > >> > >> > >> > >> On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero > >> wrote: > >>> > >>> The goal being to resolve ISPN-4561, I was thinking to expose a very > >>> simple reference counter in the AdvancedCache API. > >>> > >>> As you know the Query module - which triggers on indexed caches - can > >>> use the Infinispan Lucene Directory to store its indexes in a > >>> (different) Cache. > >>> When the CacheManager is stopped, if the index storage caches are > >>> stopped first, then the indexed cache is stopped, this might need to > >>> flush/close some pending state on the index and this results in an > >>> illegal operation as the storate is shut down already. > >>> > >>> We could either implement a complex dependency graph, or add a method > >>> like: > >>> > >>> > >>> boolean incRef(); > >>> > >>> on AdvancedCache. > >>> > >>> when the Cache#close() method is invoked, this will do an internal > >>> decrement, and only when hitting zero it will really close the cache. 
> >>> > >>> A CacheManager shutdown will loop through all caches, and invoke > >>> close() on all of them; the close() method should return something so > >>> that the CacheManager shutdown loop understand if it really did close > >>> all caches or if not, in which case it will loop again through all > >>> caches, and loops until all cache instances are really closed. > >>> The return type of "close()" doesn't necessarily need to be exposed on > >>> public API, it could be an internal only variant. > >>> > >>> > >>> Could we do this? > >>> > >>> --Sanne > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140818/14bf4907/attachment-0001.html From sanne at infinispan.org Mon Aug 18 06:56:42 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 18 Aug 2014 11:56:42 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> Message-ID: On 18 August 2014 11:33, Dan Berindei wrote: > Ales, I don't think the implementation matters that much, I was only > concerned about the API. BTW, where could I find some documentation on MSC? > > Sanne, I missed something in your initial email: you mention a Cache.close() > method, did you mean Cache.stop(), or did you mean to add a new close() > method? I meant stop(), sorry. > > Cache doesn't actually define a stop() method, it inherits the stop() method > from the Lifecycle interface. So changing its semantics only for caches > would be hacky. Adding a different close() method would be better, but it > still wouldn't be my first choice... > > > On Sat, Aug 16, 2014 at 12:40 AM, Ales Justin wrote: >> >> What about if you add an SPI for this? >> >> e.g. this could be nicely implemented on top of WildFly's MSC >> >> And by default you would keep this simple incRef, >> or some similar simple state machine we used in Microcontainer. >> >> -Ales >> >> On 15 Aug 2014, at 16:26, Sanne Grinovero wrote: >> >> > On 15 August 2014 14:55, Dan Berindei wrote: >> >> It looks to me like you actually want a partial order between caches on >> >> shutdown, so why not declare an explicit dependency (e.g. >> >> manager.stopOrder(before, after)? We could even throw an exception if >> >> the >> >> user tries to stop a cache manually in the wrong order (e.g. >> >> TestingUtil.killCacheManagers). >> > >> > Because that's much more complex to implement? >> > incRef() seems trivial, effective and can be used by other components >> > for different patterns. 
> > > Implementing proper dependencies doesn't need to be difficult either, all we > need is to keep a list of dependants in the cache and prune the stopped > caches from it before doing the check. I expect you or your team to do it, so your choice :-) I would also be careful in how you decide to spend a day(week?) vs 1h to provide a feature which is essentially the same stuff for the user. And if you go for dependency graphs, prepare to do it transactionally and concurrently.. > incRef might be easier to implement, but instead it seems harder to explain > to a user in the Javadoc. I didn't invent incRef myself, it's common in several other projects (Lucene for one), so I expect it to be a commonly understood pattern. Also I suggested to add it only on AdvancedCache, as I agree it's "advanced" users only. >> >> Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >> >> the >> >> cache manager level that is invoked before any cache is stopped, and >> >> you >> >> could close all the indexes in that listener. The event could even be >> >> at the >> >> cache level, if it would make things easier. >> > >> > I like that more than defining explicit dependency links and it would >> > probably be good enough for this specific problem >> > but I feel like it doesn't solve similar problems with a more complex >> > dependency sequence of services. >> > Counters are effectively providing the same semantics, just that you >> > can use the pre-close pattern nesting it "count times". >> > >> > Also having ref-counting available makes it easier for users to >> > implement independent components - with an independent lifecycle - >> > which might share the same cache. > > > By independent components do you mean global components? That wouldn't work, > since we only start stopping global components after we have stopped all the > caches - regardless of the order in which we stop caches. I didn't meant to add this stopping feature to components, but that many other components might need an entangled sequence of shutdown of Caches. > > A global pre-stop event, instead, would allow global components to do stuff > before any of the caches is stopped. I haven't seen any need for such a thing so far. Your call, but I don't think we are in the business of service lifecycle management and dependency injection frameworks. Alesj is right: at best we should make this an SPI, provide a trivial implementation and leave the details to be handled by those who thought about it properly; just that the trivial counter is good enough for my needs. Sanne > >> > >> > -- Sanne >> > >> >> >> >> Cheers >> >> Dan >> >> >> >> >> >> >> >> On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >> >> wrote: >> >>> >> >>> The goal being to resolve ISPN-4561, I was thinking to expose a very >> >>> simple reference counter in the AdvancedCache API. >> >>> >> >>> As you know the Query module - which triggers on indexed caches - can >> >>> use the Infinispan Lucene Directory to store its indexes in a >> >>> (different) Cache. >> >>> When the CacheManager is stopped, if the index storage caches are >> >>> stopped first, then the indexed cache is stopped, this might need to >> >>> flush/close some pending state on the index and this results in an >> >>> illegal operation as the storate is shut down already. >> >>> >> >>> We could either implement a complex dependency graph, or add a method >> >>> like: >> >>> >> >>> >> >>> boolean incRef(); >> >>> >> >>> on AdvancedCache. 
>> >>> >> >>> when the Cache#close() method is invoked, this will do an internal >> >>> decrement, and only when hitting zero it will really close the cache. >> >>> >> >>> A CacheManager shutdown will loop through all caches, and invoke >> >>> close() on all of them; the close() method should return something so >> >>> that the CacheManager shutdown loop understand if it really did close >> >>> all caches or if not, in which case it will loop again through all >> >>> caches, and loops until all cache instances are really closed. >> >>> The return type of "close()" doesn't necessarily need to be exposed on >> >>> public API, it could be an internal only variant. >> >>> >> >>> >> >>> Could we do this? >> >>> >> >>> --Sanne >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon Aug 18 08:46:39 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 18 Aug 2014 15:46:39 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> Message-ID: On Mon, Aug 18, 2014 at 1:56 PM, Sanne Grinovero wrote: > On 18 August 2014 11:33, Dan Berindei wrote: > > Ales, I don't think the implementation matters that much, I was only > > concerned about the API. BTW, where could I find some documentation on > MSC? > > > > Sanne, I missed something in your initial email: you mention a > Cache.close() > > method, did you mean Cache.stop(), or did you mean to add a new close() > > method? > > I meant stop(), sorry. > > > > > Cache doesn't actually define a stop() method, it inherits the stop() > method > > from the Lifecycle interface. So changing its semantics only for caches > > would be hacky. Adding a different close() method would be better, but it > > still wouldn't be my first choice... > > > > > > On Sat, Aug 16, 2014 at 12:40 AM, Ales Justin > wrote: > >> > >> What about if you add an SPI for this? > >> > >> e.g. this could be nicely implemented on top of WildFly's MSC > >> > >> And by default you would keep this simple incRef, > >> or some similar simple state machine we used in Microcontainer. > >> > >> -Ales > >> > >> On 15 Aug 2014, at 16:26, Sanne Grinovero wrote: > >> > >> > On 15 August 2014 14:55, Dan Berindei wrote: > >> >> It looks to me like you actually want a partial order between caches > on > >> >> shutdown, so why not declare an explicit dependency (e.g. > >> >> manager.stopOrder(before, after)? We could even throw an exception if > >> >> the > >> >> user tries to stop a cache manually in the wrong order (e.g. > >> >> TestingUtil.killCacheManagers). 
> >> > > >> > Because that's much more complex to implement? > >> > incRef() seems trivial, effective and can be used by other components > >> > for different patterns. > > > > > > Implementing proper dependencies doesn't need to be difficult either, > all we > > need is to keep a list of dependants in the cache and prune the stopped > > caches from it before doing the check. > > I expect you or your team to do it, so your choice :-) > I would also be careful in how you decide to spend a day(week?) vs 1h > to provide a feature which is essentially the same stuff for the user. > And if you go for dependency graphs, prepare to do it transactionally > and concurrently.. > I don't see why we would need transactions for dependency graphs any more than we would need them for incRef. > > > incRef might be easier to implement, but instead it seems harder to > explain > > to a user in the Javadoc. > > I didn't invent incRef myself, it's common in several other projects > (Lucene for one), > so I expect it to be a commonly understood pattern. > > Also I suggested to add it only on AdvancedCache, as I agree it's > "advanced" users only. > AdvancedCache is still public API, so it still needs to be documented. I'm not sure Lucene is a good model here, I looked at Lucene's IndexReader documentation [1] and it doesn't look encouraging: the close javadoc says it "Closes files associated with this index", while the incRef javadoc says "Note that close() simply calls decRef()". I also didn't find any mention of what the initial reference count is. To be clear, I don't have anything against reference counting in general. But I don't think overloading the Lifecycle.stop() method to have a totally different behaviour in Cache is a good idea. [1] http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/index/IndexReader.html > > >> >> Alternatively, we could add an event CacheManagerStopEvent(pre=true) > at > >> >> the > >> >> cache manager level that is invoked before any cache is stopped, and > >> >> you > >> >> could close all the indexes in that listener. The event could even be > >> >> at the > >> >> cache level, if it would make things easier. > >> > > >> > I like that more than defining explicit dependency links and it would > >> > probably be good enough for this specific problem > >> > but I feel like it doesn't solve similar problems with a more complex > >> > dependency sequence of services. > >> > Counters are effectively providing the same semantics, just that you > >> > can use the pre-close pattern nesting it "count times". > >> > > >> > Also having ref-counting available makes it easier for users to > >> > implement independent components - with an independent lifecycle - > >> > which might share the same cache. > > > > > > By independent components do you mean global components? That wouldn't > work, > > since we only start stopping global components after we have stopped all > the > > caches - regardless of the order in which we stop caches. > > I didn't meant to add this stopping feature to components, but that > many other components might need an entangled sequence of shutdown of > Caches. Ok, fair enough. > > > > A global pre-stop event, instead, would allow global components to do > stuff > > before any of the caches is stopped. > > I haven't seen any need for such a thing so far. Your call, but I > don't think we are in the business of service lifecycle management and > dependency injection frameworks. > I think we are in that business, whether we like it or not. 
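For reference, the pre-stop event mentioned above could be consumed along these lines; every name below is hypothetical (nothing like it exists in Infinispan today), so the annotation and event are declared inline to keep the sketch self-contained:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical API, shown only to illustrate the shape of the idea.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CacheManagerStopping { }

    interface CacheManagerStoppingEvent {
        boolean isPre(); // pre=true: fired before any cache is stopped
    }

    class IndexClosingListener {

        // With such an event, the query module could flush and close its
        // Lucene indexes here, while the index storage caches still run.
        @CacheManagerStopping
        public void beforeAnyCacheStops(CacheManagerStoppingEvent event) {
            if (event.isPre()) {
                closeAllIndexes();
            }
        }

        private void closeAllIndexes() {
            // placeholder: commit pending index writes, close writers/readers
        }
    }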
> Alesj is right: at best we should make this an SPI, provide a trivial > implementation and leave the details to be handled by those who > thought about it properly; just that the trivial counter is good > enough for my needs. > How would that SPI look? And how would someone be able to provide a better implementation than our "trivial" implementation? > > Sanne > > > > >> > > >> > -- Sanne > >> > > >> >> > >> >> Cheers > >> >> Dan > >> >> > >> >> > >> >> > >> >> On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero < > sanne at infinispan.org> > >> >> wrote: > >> >>> > >> >>> The goal being to resolve ISPN-4561, I was thinking to expose a very > >> >>> simple reference counter in the AdvancedCache API. > >> >>> > >> >>> As you know the Query module - which triggers on indexed caches - > can > >> >>> use the Infinispan Lucene Directory to store its indexes in a > >> >>> (different) Cache. > >> >>> When the CacheManager is stopped, if the index storage caches are > >> >>> stopped first, then the indexed cache is stopped, this might need to > >> >>> flush/close some pending state on the index and this results in an > >> >>> illegal operation as the storate is shut down already. > >> >>> > >> >>> We could either implement a complex dependency graph, or add a > method > >> >>> like: > >> >>> > >> >>> > >> >>> boolean incRef(); > >> >>> > >> >>> on AdvancedCache. > >> >>> > >> >>> when the Cache#close() method is invoked, this will do an internal > >> >>> decrement, and only when hitting zero it will really close the > cache. > >> >>> > >> >>> A CacheManager shutdown will loop through all caches, and invoke > >> >>> close() on all of them; the close() method should return something > so > >> >>> that the CacheManager shutdown loop understand if it really did > close > >> >>> all caches or if not, in which case it will loop again through all > >> >>> caches, and loops until all cache instances are really closed. > >> >>> The return type of "close()" doesn't necessarily need to be exposed > on > >> >>> public API, it could be an internal only variant. > >> >>> > >> >>> > >> >>> Could we do this? > >> >>> > >> >>> --Sanne > >> >>> _______________________________________________ > >> >>> infinispan-dev mailing list > >> >>> infinispan-dev at lists.jboss.org > >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> >> > >> >> > >> >> > >> >> _______________________________________________ > >> >> infinispan-dev mailing list > >> >> infinispan-dev at lists.jboss.org > >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > _______________________________________________ > >> > infinispan-dev mailing list > >> > infinispan-dev at lists.jboss.org > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140818/aea8718c/attachment-0001.html From sanne at infinispan.org Mon Aug 18 09:04:53 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 18 Aug 2014 14:04:53 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> Message-ID: as you prefer.. so can we get this soon please ? -- Sanne On 18 August 2014 13:46, Dan Berindei wrote: > > > > On Mon, Aug 18, 2014 at 1:56 PM, Sanne Grinovero > wrote: >> >> On 18 August 2014 11:33, Dan Berindei wrote: >> > Ales, I don't think the implementation matters that much, I was only >> > concerned about the API. BTW, where could I find some documentation on >> > MSC? >> > >> > Sanne, I missed something in your initial email: you mention a >> > Cache.close() >> > method, did you mean Cache.stop(), or did you mean to add a new close() >> > method? >> >> I meant stop(), sorry. >> >> > >> > Cache doesn't actually define a stop() method, it inherits the stop() >> > method >> > from the Lifecycle interface. So changing its semantics only for caches >> > would be hacky. Adding a different close() method would be better, but >> > it >> > still wouldn't be my first choice... >> > >> > >> > On Sat, Aug 16, 2014 at 12:40 AM, Ales Justin >> > wrote: >> >> >> >> What about if you add an SPI for this? >> >> >> >> e.g. this could be nicely implemented on top of WildFly's MSC >> >> >> >> And by default you would keep this simple incRef, >> >> or some similar simple state machine we used in Microcontainer. >> >> >> >> -Ales >> >> >> >> On 15 Aug 2014, at 16:26, Sanne Grinovero wrote: >> >> >> >> > On 15 August 2014 14:55, Dan Berindei wrote: >> >> >> It looks to me like you actually want a partial order between caches >> >> >> on >> >> >> shutdown, so why not declare an explicit dependency (e.g. >> >> >> manager.stopOrder(before, after)? We could even throw an exception >> >> >> if >> >> >> the >> >> >> user tries to stop a cache manually in the wrong order (e.g. >> >> >> TestingUtil.killCacheManagers). >> >> > >> >> > Because that's much more complex to implement? >> >> > incRef() seems trivial, effective and can be used by other components >> >> > for different patterns. >> > >> > >> > Implementing proper dependencies doesn't need to be difficult either, >> > all we >> > need is to keep a list of dependants in the cache and prune the stopped >> > caches from it before doing the check. >> >> I expect you or your team to do it, so your choice :-) >> I would also be careful in how you decide to spend a day(week?) vs 1h >> to provide a feature which is essentially the same stuff for the user. >> And if you go for dependency graphs, prepare to do it transactionally >> and concurrently.. > > > I don't see why we would need transactions for dependency graphs any more > than we would need them for incRef. > >> >> >> > incRef might be easier to implement, but instead it seems harder to >> > explain >> > to a user in the Javadoc. >> >> I didn't invent incRef myself, it's common in several other projects >> (Lucene for one), >> so I expect it to be a commonly understood pattern. >> >> Also I suggested to add it only on AdvancedCache, as I agree it's >> "advanced" users only. > > > AdvancedCache is still public API, so it still needs to be documented. 
I'm > not sure Lucene is a good model here, I looked at Lucene's IndexReader > documentation [1] and it doesn't look encouraging: the close javadoc says it > "Closes files associated with this index", while the incRef javadoc says > "Note that close() simply calls decRef()". I also didn't find any mention of > what the initial reference count is. > > To be clear, I don't have anything against reference counting in general. > But I don't think overloading the Lifecycle.stop() method to have a totally > different behaviour in Cache is a good idea. > > [1] > http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/index/IndexReader.html > >> >> >> >> >> Alternatively, we could add an event CacheManagerStopEvent(pre=true) >> >> >> at >> >> >> the >> >> >> cache manager level that is invoked before any cache is stopped, and >> >> >> you >> >> >> could close all the indexes in that listener. The event could even >> >> >> be >> >> >> at the >> >> >> cache level, if it would make things easier. >> >> > >> >> > I like that more than defining explicit dependency links and it would >> >> > probably be good enough for this specific problem >> >> > but I feel like it doesn't solve similar problems with a more complex >> >> > dependency sequence of services. >> >> > Counters are effectively providing the same semantics, just that you >> >> > can use the pre-close pattern nesting it "count times". >> >> > >> >> > Also having ref-counting available makes it easier for users to >> >> > implement independent components - with an independent lifecycle - >> >> > which might share the same cache. >> > >> > >> > By independent components do you mean global components? That wouldn't >> > work, >> > since we only start stopping global components after we have stopped all >> > the >> > caches - regardless of the order in which we stop caches. >> >> I didn't meant to add this stopping feature to components, but that >> many other components might need an entangled sequence of shutdown of >> Caches. > > > Ok, fair enough. > >> >> > >> > A global pre-stop event, instead, would allow global components to do >> > stuff >> > before any of the caches is stopped. >> >> I haven't seen any need for such a thing so far. Your call, but I >> don't think we are in the business of service lifecycle management and >> dependency injection frameworks. > > > I think we are in that business, whether we like it or not. > >> >> Alesj is right: at best we should make this an SPI, provide a trivial >> implementation and leave the details to be handled by those who >> thought about it properly; just that the trivial counter is good >> enough for my needs. > > > How would that SPI look? And how would someone be able to provide a better > implementation than our "trivial" implementation? > > >> >> >> Sanne >> >> > >> >> > >> >> > -- Sanne >> >> > >> >> >> >> >> >> Cheers >> >> >> Dan >> >> >> >> >> >> >> >> >> >> >> >> On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >> >> >> >> >> >> wrote: >> >> >>> >> >> >>> The goal being to resolve ISPN-4561, I was thinking to expose a >> >> >>> very >> >> >>> simple reference counter in the AdvancedCache API. >> >> >>> >> >> >>> As you know the Query module - which triggers on indexed caches - >> >> >>> can >> >> >>> use the Infinispan Lucene Directory to store its indexes in a >> >> >>> (different) Cache. 
>> >> >>> When the CacheManager is stopped, if the index storage caches are >> >> >>> stopped first, then the indexed cache is stopped, this might need >> >> >>> to >> >> >>> flush/close some pending state on the index and this results in an >> >> >>> illegal operation as the storate is shut down already. >> >> >>> >> >> >>> We could either implement a complex dependency graph, or add a >> >> >>> method >> >> >>> like: >> >> >>> >> >> >>> >> >> >>> boolean incRef(); >> >> >>> >> >> >>> on AdvancedCache. >> >> >>> >> >> >>> when the Cache#close() method is invoked, this will do an internal >> >> >>> decrement, and only when hitting zero it will really close the >> >> >>> cache. >> >> >>> >> >> >>> A CacheManager shutdown will loop through all caches, and invoke >> >> >>> close() on all of them; the close() method should return something >> >> >>> so >> >> >>> that the CacheManager shutdown loop understand if it really did >> >> >>> close >> >> >>> all caches or if not, in which case it will loop again through all >> >> >>> caches, and loops until all cache instances are really closed. >> >> >>> The return type of "close()" doesn't necessarily need to be exposed >> >> >>> on >> >> >>> public API, it could be an internal only variant. >> >> >>> >> >> >>> >> >> >>> Could we do this? >> >> >>> >> >> >>> --Sanne >> >> >>> _______________________________________________ >> >> >>> infinispan-dev mailing list >> >> >>> infinispan-dev at lists.jboss.org >> >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> infinispan-dev mailing list >> >> >> infinispan-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > _______________________________________________ >> >> > infinispan-dev mailing list >> >> > infinispan-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mgencur at redhat.com Mon Aug 18 09:28:52 2014 From: mgencur at redhat.com (Martin Gencur) Date: Mon, 18 Aug 2014 15:28:52 +0200 Subject: [infinispan-dev] Ant based kill not fully working? Re: ISPN-4567 In-Reply-To: <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> References: <3F75DF34-95F4-482C-9F7E-F8D1D9565BA5@redhat.com> <1263089078.28637630.1407831306586.JavaMail.zimbra@redhat.com> Message-ID: <53F1FF94.5070409@redhat.com> Hi Galder, I haven't seen this before. I thought the ant-based "kill" command was safe and reliable. It's hard to say what went wrong without further logs. Whether the kill command failed or whether there were other processes that were not found by the jps command. 
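A more defensive kill could look roughly like this; purely an illustration (not existing test-suite code), using only the standard jps and Unix kill commands:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Illustrative only: find server JVMs via 'jps -l', send them 'kill -9',
    // then re-check that they are actually gone. Unix-only.
    public class ServerKiller {
        public static void killServers() throws IOException, InterruptedException {
            for (int attempt = 0; attempt < 3; attempt++) {
                boolean found = false;
                Process jps = new ProcessBuilder("jps", "-l").start();
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(jps.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // jps -l prints "<pid> <main class or jar>"
                        if (line.contains("jboss-modules.jar")) {
                            found = true;
                            String pid = line.split("\\s+")[0];
                            new ProcessBuilder("kill", "-9", pid).start().waitFor();
                        }
                    }
                }
                jps.waitFor();
                if (!found) {
                    return; // nothing left to kill: servers are really gone
                }
                Thread.sleep(1000); // give the OS time to reap the processes
            }
            throw new IllegalStateException("server processes survived kill -9");
        }
    }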
We could also try maven-exec-plugin and call the Unix "kill" command from it, instead of using the InfinispanServerKillProcessor. Martin On 12.8.2014 10:15, Jakub Markos wrote: > Hi, > > I looked at it and I don't think using InfinispanServerKillProcessor would be any better, > since it still just calls 'kill -9'. The only difference is that it doesn't kill all > Java processes started from jboss-modules.jar, but just the one configured for the test. > > Is it maybe possible that the kill happened, but the port was still hanging? > > Jakub > > ----- Original Message ----- >> From: "Galder Zamarreño" >> To: "Jakub Markos" , "Martin Gencur" >> Cc: "infinispan -Dev List" >> Sent: Monday, August 4, 2014 12:35:50 PM >> Subject: Ant based kill not fully working? Re: ISPN-4567 >> >> Hi, >> >> Dan has reported [1]. It appears as if the last server started in >> infinispan-as-module-client-integrationtests did not really get killed. From >> what I see, this kill was done via the specific Ant target present in that >> Maven module. >> >> I also remembered that recently [2] was added. Maybe we need to get >> as-modules/client to be configured with it so that it properly kills >> servers? >> >> What I'm not sure about is where we'd put it so that it can be consumed both by >> server/integration/testsuite and as-modules/client. The problem is that the >> class, as is, brings in an Arquillian dependency. If we can separate the >> Arquillian stuff from the actual code, the class itself could maybe go in the >> commons test source directory. >> >> @Tristan, thoughts? >> >> @Jakub, can I assign this to you? >> >> [1] https://issues.jboss.org/browse/ISPN-4567 >> [2] https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/util/arquillian/extensions/InfinispanServerKillProcessor.java >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> From ttarrant at redhat.com Mon Aug 18 10:58:46 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 18 Aug 2014 16:58:46 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-08-18 Message-ID: <53F214A6.2050504@redhat.com> Minutes @ http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-18-14.03.html From dan.berindei at gmail.com Tue Aug 19 03:00:05 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 19 Aug 2014 10:00:05 +0300 Subject: [infinispan-dev] Removal of ConfigurationBuilder.classLoader(...) In-Reply-To: <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> References: <2035808062.5882244.1407941312919.JavaMail.zimbra@redhat.com> <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> Message-ID: Hi Paul As we discussed on IRC yesterday, our plan is to support multiple CacheManagers using the same JGroups transport with FORK. If each deployment has its own CacheManager, there's no need for a cache-specific classloader. That being said, ISPN-4639 and ISPN-3979 are indeed bugs and we should fix them for Beta2. Cheers Dan On Wed, Aug 13, 2014 at 6:29 PM, Paul Ferraro wrote: > It seems that the ability to associate a cache with a specific classloader > has been removed in 7.0 by this commit: > https://github.com/infinispan/infinispan/commit/39a21a025db2e0f85019b93d09052b4772abbaa8 > > I don't fully understand the reason for the removal. WildFly previously > relied on this mechanism to define the classloader from which Infinispan > should load any classes when building its configuration.
In general, WF > builds its configuration using object instances instead of class names, so > normally this isn't a problem. However, there isn't always such a > mechanism (e.g. https://issues.jboss.org/browse/ISPN-3979) > > However, now that ConfigurationBuilder.classloader(...) is gone, the > classloader used to build a Configuration is effectively hardcoded (usually > as this.getClass().getClassLoader()). > > This directly affects the ability for a WildFly using to configure a cache > with querying. IndexingConfigurationBuilder.validate(...) previously used > the configured classloader to validate that the query module is loadable. > > > https://github.com/infinispan/infinispan/blob/6.0.x/core/src/main/java/org/infinispan/configuration/cache/IndexingConfigurationBuilder.java#L141 > > However, this is now hardcoded to use the classloader that loaded the > IndexingConfigurationBuilder class itself. > > > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/IndexingConfigurationBuilder.java#L183 > > The WF distribution uses distinct modules for infinispan-core vs > infinispan-query. Consequently, if your cache don't need querying, the > query module is not loaded. WF8 let the user configure a cache with query > support via . > > Currently, however, the only way we can satisfy the validation logic in > IndexingConfigurationBuilder.validate(...) is to bloat our core > "org.infinispan" module with the infinispan-query module and its > dependencies. I don't want to do that. Is there some way we can re-enable > the ability to configure a cache with a classloader that still satisfies > the reasons for its original removal? GlobalConfigurationBuilder still > supports the ability to configure a classloader, why remove this from > ConfigurationBuilder? > > That said, however, the IndexingConfigurationBuilder validation itself is > wrong. Ultimately, the infinispan-query module will be loaded by the > classloader with which the GlobalConfiguration was built (i.e. the default > classloader of the cache), so really, at the very least, the validation > logic in IndexingConfigurationBuilder.validate(...) should reflect this. > I've opened https://issues.jboss.org/browse/ISPN-4639 to track this > specific bug. > > Thoughts? > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140819/fcfcf670/attachment.html From ales.justin at gmail.com Tue Aug 19 04:53:56 2014 From: ales.justin at gmail.com (Ales Justin) Date: Tue, 19 Aug 2014 10:53:56 +0200 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> Message-ID: <79BAC60D-C6DD-4379-8098-96F4DC79D999@gmail.com> > Ales, I don't think the implementation matters that much, I was only concerned about the API. BTW, where could I find some documentation on MSC? Perhaps check this? https://docs.jboss.org/author/display/MSC/Home -Ales -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140819/0733a820/attachment-0001.html From dan.berindei at gmail.com Tue Aug 19 05:54:17 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 19 Aug 2014 12:54:17 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: <79BAC60D-C6DD-4379-8098-96F4DC79D999@gmail.com> References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> <79BAC60D-C6DD-4379-8098-96F4DC79D999@gmail.com> Message-ID: Doesn't look right, all I can see there is a diagram of possible service states. On Tue, Aug 19, 2014 at 11:53 AM, Ales Justin wrote: > > Ales, I don't think the implementation matters that much, I was only > concerned about the API. BTW, where could I find some documentation on MSC? > > > Perhaps check this? > > https://docs.jboss.org/author/display/MSC/Home > > -Ales > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140819/d67869b1/attachment.html From ales.justin at gmail.com Tue Aug 19 06:05:57 2014 From: ales.justin at gmail.com (Ales Justin) Date: Tue, 19 Aug 2014 12:05:57 +0200 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <7828BAEC-3C43-4103-8E4E-00A517E1F512@gmail.com> <79BAC60D-C6DD-4379-8098-96F4DC79D999@gmail.com> Message-ID: Hmmm, couldn't find anything else -- but afaik, the javadoc should be good there. On 19 Aug 2014, at 11:54, Dan Berindei wrote: > Doesn't look right, all I can see there is a diagram of possible service states. > > > On Tue, Aug 19, 2014 at 11:53 AM, Ales Justin wrote: > >> Ales, I don't think the implementation matters that much, I was only concerned about the API. BTW, where could I find some documentation on MSC? > > > Perhaps check this? > > https://docs.jboss.org/author/display/MSC/Home > > -Ales > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140819/835670b5/attachment.html From ttarrant at redhat.com Tue Aug 19 10:02:54 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 19 Aug 2014 16:02:54 +0200 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: Message-ID: <53F3590E.9050602@redhat.com> Thanks, this actually has multiple issues currently: - the default cache is stopped last (why ?) - some "service" caches need to be handled manually: e.g. the registry and the topology cache. A generic ref counting system would be a great improvement Tristan On 15/08/14 14:29, Sanne Grinovero wrote: > The goal being to resolve ISPN-4561, I was thinking to expose a very > simple reference counter in the AdvancedCache API. 
> > As you know the Query module - which triggers on indexed caches - can > use the Infinispan Lucene Directory to store its indexes in a > (different) Cache. > When the CacheManager is stopped, if the index storage caches are > stopped first, then the indexed cache is stopped, this might need to > flush/close some pending state on the index and this results in an > illegal operation as the storate is shut down already. > > We could either implement a complex dependency graph, or add a method like: > > > boolean incRef(); > > on AdvancedCache. > > when the Cache#close() method is invoked, this will do an internal > decrement, and only when hitting zero it will really close the cache. > > A CacheManager shutdown will loop through all caches, and invoke > close() on all of them; the close() method should return something so > that the CacheManager shutdown loop understand if it really did close > all caches or if not, in which case it will loop again through all > caches, and loops until all cache instances are really closed. > The return type of "close()" doesn't necessarily need to be exposed on > public API, it could be an internal only variant. > > Could we do this? > > --Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Wed Aug 20 04:18:03 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 20 Aug 2014 11:18:03 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: <53F3590E.9050602@redhat.com> References: <53F3590E.9050602@redhat.com> Message-ID: I'm not sure about the topology cache, but I don't think this would be useful for the cluster registry. The cluster registry is a global component, so it's only stopped *after* all the caches, and other components are not supposed to know that the cluster registry implementation uses a cache. Cheers Dan On Tue, Aug 19, 2014 at 5:02 PM, Tristan Tarrant wrote: > Thanks, this actually has multiple issues currently: > > - the default cache is stopped last (why ?) > - some "service" caches need to be handled manually: e.g. the registry > and the topology cache. > > A generic ref counting system would be a great improvement > > Tristan > > On 15/08/14 14:29, Sanne Grinovero wrote: > > The goal being to resolve ISPN-4561, I was thinking to expose a very > > simple reference counter in the AdvancedCache API. > > > > As you know the Query module - which triggers on indexed caches - can > > use the Infinispan Lucene Directory to store its indexes in a > > (different) Cache. > > When the CacheManager is stopped, if the index storage caches are > > stopped first, then the indexed cache is stopped, this might need to > > flush/close some pending state on the index and this results in an > > illegal operation as the storate is shut down already. > > > > We could either implement a complex dependency graph, or add a method > like: > > > > > > boolean incRef(); > > > > on AdvancedCache. > > > > when the Cache#close() method is invoked, this will do an internal > > decrement, and only when hitting zero it will really close the cache. 
> > > > A CacheManager shutdown will loop through all caches, and invoke > > close() on all of them; the close() method should return something so > > that the CacheManager shutdown loop understand if it really did close > > all caches or if not, in which case it will loop again through all > > caches, and loops until all cache instances are really closed. > > The return type of "close()" doesn't necessarily need to be exposed on > > public API, it could be an internal only variant. > > > > Could we do this? > > > > --Sanne > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/134f3357/attachment.html From galder at redhat.com Wed Aug 20 04:21:55 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Wed, 20 Aug 2014 10:21:55 +0200 Subject: [infinispan-dev] Hotrod Server/client cluster with distributed cache question In-Reply-To: <2001353693.29016134.1407958761215.JavaMail.zimbra@redhat.com> References: <1574006285.5911220.1407943796712.JavaMail.zimbra@redhat.com> <2001353693.29016134.1407958761215.JavaMail.zimbra@redhat.com> Message-ID: <9FFBEA73-FA28-4DBE-92F6-B5B1C48BAE98@redhat.com> This mailing list is dedicated at discussing the development of Infinispan. For user related questions, please head to: https://community.jboss.org/en/infinispan/content?filterID=contentstatus%5bpublished%5d~objecttype~objecttype%5bthread%5d Cheers, On 13 Aug 2014, at 21:39, Hammad Said wrote: > I have four inifinispan server cluster nodes. The hotrod clients are running on each of the machines where the server resides. Each hotrod client is configured to go to a particular server. The cacheA is a distributed cache with two owners. I want to understand the following: > > 1) If a particular key1 is saved to a particular sever1 and is replicated to server2 , and the client1 for server1 tries to get the key , does it always get from server1? > > 2) When the client2 for which the primary server is sever2, tries to get key1, does it get from server2 or does it know to get from the primary owner server1? > > 3) When client3 for server3 tries to get the key does it get from server1(primary owner), or does it get from server3, which then requests it from server1 > > Thanks! 
> Hammad > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Wed Aug 20 05:27:40 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 20 Aug 2014 11:27:40 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> Message-ID: <77EE699E-6216-41A6-9A99-521A1A4DE232@redhat.com> On 15 Aug 2014, at 12:41, Dan Berindei wrote: > > > > On Fri, Aug 15, 2014 at 11:37 AM, Galder Zamarre?o wrote: > > On 12 Aug 2014, at 22:41, Dan Berindei wrote: > >> >> >> >> On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o wrote: >> Can?t comment on the document, so here are my thoughts: >> >> Re: ?Get rid of lazy cache starting...all the caches run on all nodes...it should still be possible to start a cache at runtime, but it will be run on all nodes as well? >> >> ^ Though I like the idea, it might change a crucial aspect of how default cache configuration works (if we leave the concept of default cache at all). Say you start a cache named ?a? for which there?s no config. Up until now we?d use the default cache configuration and create a cache ?a? with that config. However, if caches are started cluster wide now, before you can do that, you?d have to check that there?s no cache ?a? configuration anywhere in the cluster. If there is, I guess the configuration would be shipped to the node that starts the cache (if it does not have it) and create the cache with it? Or are you assuming all nodes in the cluster must have all configurations defined? >> >> +1 to remove the default cache as a default configuration. >> >> I like the idea of shipping the cache configuration to all the nodes. We will have to require any user-provided objects in the configuration to be serializable/externalizable, but I don't see a big problem with that. >> >> In fact, it would also allow us to send the entire configuration to the coordinator on join, so we could verify that the configuration on all nodes is compatible (not exactly the same, since things like capacityFactor can be different). And it would remove the need for the CacheJoinInfo class... >> >> A more limited alternative, not requiring config serialization, would be to disallow getCache(name) when a configuration doesn't exist but add a method createCache(name, configurationName) that only requires configurationName to be defined everywhere. >> >> >> Re: ?Revisiting Configuration elements?" >> >> If we?re going to do another round of updates in this area, I think we should consider what to do with unconfigured values. Back in the 4.x days, the JAXB XML parsing allowed us to know which configuration elements the user had not configured, which helped us tweak configuration and do validation more easily. Now, when we look at a Configuration builder object, we see default values but we do not that a value is the one it is because the user has specifically defined it, or because it?s unconfigured. One way to do so is by separating the default values, say to an XML file which is reference (I think WF does something along these lines) and leave the builder object with all null values. 
This would make it easy to figure out which elements have been touched and, for those that have not, use default values. This has popped up in the forums before but I can't find a link right now... >> >> I was also thinking of doing something like that, but instead of having a separate XML with the defaults I was going to propose creating a layer of indirection: every configuration value would be a ConfigurationProperty, with a default value, an override value, and an actual value. We already do something similar for e.g. StateTransferConfiguration.awaitInitialTransfer and originalAwaitInitialTransfer. > > ^ What's the problem with a separate XML file? > > I really like the idea of externalizing default values, from a documentation perspective and for ease of change down the line, both for us and for users. > > On top of that, it could be validated and be presented as a reference XML file, getting rid of the sample XML file that we currently have, which is half done and no one really updates. > > First of all, how would that XML look? Like a regular configuration file, with one cache of each type? Yeah, could do. The WildFly guys are already doing it: https://github.com/wildfly/wildfly/blob/master/clustering/infinispan/src/main/resources/infinispan-defaults.xml > One store of each type? In every cache? How would we handle defaults for custom stores? The defaults for custom stores are the same as for any other cache store. The only thing you cannot default is the custom-store-specific stuff, which is specific to the custom store :) You could have a JDBC_CACHE_STORE cache with the defaults for JDBC cache stores, etc. > We already have an XML file with default values: infinispan-config-7.0.xsd. It would be nice if we could parse that and keep the defaults in a single place, but if we need to duplicate the defaults anyway, I'd rather keep them in code. An XSD file is not an XML file. By having the defaults in an XML file, we can validate it and confirm that it's a valid XML file that we can parse. Users don't load Infinispan with XSD files :) To avoid duplication, I'd be tempted to remove all default values from the XSD file and keep them only in the reference XML file. > I also think that with a separate XML file, we'd still need to keep some not-quite-defaults in the various builder.build() methods (or Configurations methods). ^ What defaults are you talking about? Can you provide an example of such default options? With an XML, you could even have different defaults depending on the other attributes of the cache. E.g. say you have an OL cache: you could say that the default value for writeSkew with OL is true, whereas with PL the default value is false. Cheers, > My idea was to keep all these in the *ConfigurationBuilder classes, though I know we'll never get to 100%. > > >> > >> I haven't seen the forum post, but I think that would allow us to more properly validate conflicting configuration values. E.g. the checks in Configurations.isVersioningEnabled() could be moved to ConfigurationBuilder.validate()/create(). > > Totally, validation right now is quite tricky due to the lack of separation. > > Cheers, > >> >> >> Cheers, >> >> On 28 Jul 2014, at 17:04, Mircea Markus wrote: >>> >>> Hi, >>> >>> Tristan, Sanne, Gustavo and I met last week to discuss a) Infinispan usability and b) monitoring and management. Minutes attached.
>>> >>> https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing >>> >>> Cheers, >>> -- >>> Mircea Markus >>> Infinispan lead (www.infinispan.org) >>> >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From sanne at infinispan.org Wed Aug 20 06:08:54 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 20 Aug 2014 11:08:54 +0100 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On 12 August 2014 21:41, Dan Berindei wrote: > > > > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarreño wrote: >> >> Can't comment on the document, so here are my thoughts: >> >> Re: "Get rid of lazy cache starting...all the caches run on all nodes...it >> should still be possible to start a cache at runtime, but it will be run on >> all nodes as well" >> >> ^ Though I like the idea, it might change a crucial aspect of how default >> cache configuration works (if we keep the concept of a default cache at all). >> Say you start a cache named "a" for which there's no config. Up until now >> we'd use the default cache configuration and create a cache "a" with that >> config. However, if caches are started cluster wide now, before you can do >> that, you'd have to check that there's no cache "a" configuration anywhere >> in the cluster. If there is, I guess the configuration would be shipped to >> the node that starts the cache (if it does not have it) and create the cache >> with it? Or are you assuming all nodes in the cluster must have all >> configurations defined? > > > +1 to remove the default cache as a default configuration. > > I like the idea of shipping the cache configuration to all the nodes. We > will have to require any user-provided objects in the configuration to be > serializable/externalizable, but I don't see a big problem with that. That would be nice but needs some care, say for example that I want to inject a custom JDBCCacheStore by instance which has a reference to a datasource (Extremely useful use case). I could make it serializable by changing it from a CacheStore instance to some kind of "CacheStoreLookupStrategy" but you'd need to give me some hook we can react on to restore the references.
Once again (as asked previously) allowing to register custom components by instance in the CacheManager's component Registry would solve this. Cheers From dan.berindei at gmail.com Wed Aug 20 06:16:26 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 20 Aug 2014 13:16:26 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <77EE699E-6216-41A6-9A99-521A1A4DE232@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> <77EE699E-6216-41A6-9A99-521A1A4DE232@redhat.com> Message-ID: On Wed, Aug 20, 2014 at 12:27 PM, Galder Zamarreño wrote: > > On 15 Aug 2014, at 12:41, Dan Berindei wrote: > > > > > > > > > On Fri, Aug 15, 2014 at 11:37 AM, Galder Zamarreño > wrote: > > > > On 12 Aug 2014, at 22:41, Dan Berindei wrote: > > > >> > >> > >> > >> On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarreño > wrote: > >> Can't comment on the document, so here are my thoughts: > >> > >> Re: "Get rid of lazy cache starting...all the caches run on all > nodes...it should still be possible to start a cache at runtime, but it > will be run on all nodes as well" > >> > >> ^ Though I like the idea, it might change a crucial aspect of how > default cache configuration works (if we keep the concept of a default cache > at all). Say you start a cache named "a" for which there's no config. Up > until now we'd use the default cache configuration and create a cache "a" > with that config. However, if caches are started cluster wide now, before > you can do that, you'd have to check that there's no cache "a" > configuration anywhere in the cluster. If there is, I guess the > configuration would be shipped to the node that starts the cache (if it > does not have it) and create the cache with it? Or are you assuming all > nodes in the cluster must have all configurations defined? > >> > >> +1 to remove the default cache as a default configuration. > >> > >> I like the idea of shipping the cache configuration to all the nodes. > We will have to require any user-provided objects in the configuration to > be serializable/externalizable, but I don't see a big problem with that. > >> > >> In fact, it would also allow us to send the entire configuration to the > coordinator on join, so we could verify that the configuration on all nodes > is compatible (not exactly the same, since things like capacityFactor can > be different). And it would remove the need for the CacheJoinInfo class... > >> > >> A more limited alternative, not requiring config serialization, would > be to disallow getCache(name) when a configuration doesn't exist but add a > method createCache(name, configurationName) that only requires > configurationName to be defined everywhere. > >> > >> > >> Re: "Revisiting Configuration elements..." > >> > >> If we're going to do another round of updates in this area, I think we > should consider what to do with unconfigured values. Back in the 4.x days, > the JAXB XML parsing allowed us to know which configuration elements the > user had not configured, which helped us tweak configuration and do > validation more easily. Now, when we look at a Configuration builder > object, we see default values, but we cannot tell whether a value is the one it is > because the user has specifically defined it, or because it's unconfigured.
> One way to do so is by separating the default values, say to an XML file > which is a reference (I think WF does something along these lines) and leave > the builder object with all null values. This would make it easy to figure > out which elements have been touched and, for those that have not, use > default values. This has popped up in the forums before but I can't find a > link right now... > >> > >> I was also thinking of doing something like that, but instead of having > a separate XML with the defaults I was going to propose creating a layer of > indirection: every configuration value would be a ConfigurationProperty, > with a default value, an override value, and an actual value. We already do > something similar for e.g. StateTransferConfiguration.awaitInitialTransfer > and originalAwaitInitialTransfer). > > > > ^ What's the problem with a separate XML file? > > > > I really like the idea of externalizing default values from a > documentation perspective and ease of change down the line, both for us and > for users. > > > > On top of that, it could be validated and be presented as a reference > XML file, getting rid of the sample XML file that we currently have, which > is half done and no one really updates it. > > > > First of all, how would that XML look? Like a regular configuration > file, with one cache of each type? > > Yeah, could do. The Wildfly guys are already doing it: > > https://github.com/wildfly/wildfly/blob/master/clustering/infinispan/src/main/resources/infinispan-defaults.xml > > > One store of each type? In every cache? How would we handle defaults for > custom stores? > > The defaults for custom stores are the same as for any other cache store. > The only thing you cannot default is the custom store specific stuff, which > is specific to the custom store :) > Except you can't include them in the default XML, because the default XML > is in core (I assume) and the custom stores are not. > > You could have a JDBC_CACHE_STORE cache with the defaults for JDBC cache > stores, etc. > In what XML file? > > > We already have an XML file with default values: > infinispan-config-7.0.xsd. It would be nice if we could parse that and keep > the defaults in a single place, but if we need to duplicate the defaults > anyway, I'd rather keep them in code. > > An XSD file is not an XML file. By having the defaults in an XML file, we > can validate it and confirm that it's a valid XML file that we can parse. > Users don't load Infinispan with XSD files :) > I'm pretty sure infinispan-config-7.0.xsd is a valid XML file, it even > starts with a standard XML declaration: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> > > To avoid duplication, I'd be tempted to remove all default values from the > XSD file and keep them only in the reference XML file. > It would definitely be harder to look up the reference XML and check the > defaults, compared to a Ctrl+click on the element/attribute name with the > XSD. Of course, the XSD only allows one default value for each attribute, > and even duplicating the element types for each cache mode sounds pretty > daunting. > > I also think with a separate XML file, we'd still need to keep some > not-quite-defaults in the various builder.build() methods (or > Configurations methods). > > ^ What defaults are you talking about? Can you provide an example of such > default options? > > With an XML, you could even have different defaults depending on the other > attributes of the cache. E.g.
say you have an OL cache, you could say that > the default value for writeSkew with OL is true, whereas with PL, the > default value is false. > Yeah, that would be a good example of what I was thinking about :) But I was thinking we shouldn't just change the default value, we should also throw an exception when the user tries to enable write skew in a PL cache. That check would have to stay in the builder class - not a default, but still related. > > Cheers, > > > My idea was to keep all these in the *ConfigurationBuilder classes, > though I know we'll never get to 100%. > > > > > >> > >> I haven't seen the forum post, but I think that would allow us more > properly validate conflicting configuration values. E.g. the checks in > Configurations.isVersioningEnabled() could be moved to > ConfigurationBuilder.validate()/create(). > > > > Totally, validation right now it?s quite tricky due to the lack of > separation. > > > > Cheers, > > > >> > >> > >> Cheers, > >> > >> On 28 Jul 2014, at 17:04, Mircea Markus wrote: > >> > >>> Hi, > >>> > >>> Tristan, Sanne, Gustavo and I meetlast week to discuss a) Infinispan > usability and b) monitoring and management. Minutes attached. > >>> > >>> > https://docs.google.com/document/d/1dIxH0xTiYBHH6_nkqybc13_zzW9gMIcaF_GX5Y7_PPQ/edit?usp=sharing > >>> > >>> Cheers, > >>> -- > >>> Mircea Markus > >>> Infinispan lead (www.infinispan.org) > >>> > >>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> -- > >> Galder Zamarre?o > >> galder at redhat.com > >> twitter.com/galderz > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > Galder Zamarre?o > > galder at redhat.com > > twitter.com/galderz > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/f3c43ab6/attachment.html From dan.berindei at gmail.com Wed Aug 20 06:19:54 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 20 Aug 2014 13:19:54 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On Wed, Aug 20, 2014 at 1:08 PM, Sanne Grinovero wrote: > On 12 August 2014 21:41, Dan Berindei wrote: > > > > > > > > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o > wrote: > >> > >> Can?t comment on the document, so here are my thoughts: > >> > >> Re: ?Get rid of lazy cache starting...all the caches run on all > nodes...it > >> should still be possible to start a cache at runtime, but it will be > run on > >> all nodes as well? > >> > >> ^ Though I like the idea, it might change a crucial aspect of how > default > >> cache configuration works (if we leave the concept of default cache at > all). > >> Say you start a cache named ?a? for which there?s no config. Up until > now > >> we?d use the default cache configuration and create a cache ?a? with > that > >> config. However, if caches are started cluster wide now, before you can > do > >> that, you?d have to check that there?s no cache ?a? configuration > anywhere > >> in the cluster. If there is, I guess the configuration would be shipped > to > >> the node that starts the cache (if it does not have it) and create the > cache > >> with it? Or are you assuming all nodes in the cluster must have all > >> configurations defined? > > > > > > +1 to remove the default cache as a default configuration. > > > > I like the idea of shipping the cache configuration to all the nodes. We > > will have to require any user-provided objects in the configuration to be > > serializable/externalizable, but I don't see a big problem with that. > > That would be nice but needs some care, say for example that I want to > inject a custom JDBCCacheStore by instance which has a reference to a > datasource (Extremely useful use case). > Shouldn't the datasource be registered in JNDI anyway? If yes, you could serialize the JNDI name. > I could make it serializable by changing it from a CacheStore instance > to some kind of "CacheStoreLookupStrategy" but you'd need to give me > some hook we can react on to restore the references. Once again (as > asked previously) allowing to register custom components by instance > in the CacheManager's component Registry would solve this. > > We already allow this: EmbeddedCacheManager.getGlobalComponentRegistry().registerComponent(instance, name) > Cheers > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/4d75a18e/attachment-0001.html From sanne at infinispan.org Wed Aug 20 07:32:12 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 20 Aug 2014 12:32:12 +0100 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On 20 August 2014 11:19, Dan Berindei wrote: > > > > On Wed, Aug 20, 2014 at 1:08 PM, Sanne Grinovero > wrote: >> >> On 12 August 2014 21:41, Dan Berindei wrote: >> > >> > >> > >> > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o >> > wrote: >> >> >> >> Can?t comment on the document, so here are my thoughts: >> >> >> >> Re: ?Get rid of lazy cache starting...all the caches run on all >> >> nodes...it >> >> should still be possible to start a cache at runtime, but it will be >> >> run on >> >> all nodes as well? >> >> >> >> ^ Though I like the idea, it might change a crucial aspect of how >> >> default >> >> cache configuration works (if we leave the concept of default cache at >> >> all). >> >> Say you start a cache named ?a? for which there?s no config. Up until >> >> now >> >> we?d use the default cache configuration and create a cache ?a? with >> >> that >> >> config. However, if caches are started cluster wide now, before you can >> >> do >> >> that, you?d have to check that there?s no cache ?a? configuration >> >> anywhere >> >> in the cluster. If there is, I guess the configuration would be shipped >> >> to >> >> the node that starts the cache (if it does not have it) and create the >> >> cache >> >> with it? Or are you assuming all nodes in the cluster must have all >> >> configurations defined? >> > >> > >> > +1 to remove the default cache as a default configuration. >> > >> > I like the idea of shipping the cache configuration to all the nodes. We >> > will have to require any user-provided objects in the configuration to >> > be >> > serializable/externalizable, but I don't see a big problem with that. >> >> That would be nice but needs some care, say for example that I want to >> inject a custom JDBCCacheStore by instance which has a reference to a >> datasource (Extremely useful use case). > > > Shouldn't the datasource be registered in JNDI anyway? If yes, you could > serialize the JNDI name. You don't want to require the user to need to match configuration settings in different configuration files of what he considers one platform. And we support many more options beyond JNDI. >> I could make it serializable by changing it from a CacheStore instance >> to some kind of "CacheStoreLookupStrategy" but you'd need to give me >> some hook we can react on to restore the references. Once again (as >> asked previously) allowing to register custom components by instance >> in the CacheManager's component Registry would solve this. >> > > We already allow this: > > EmbeddedCacheManager.getGlobalComponentRegistry().registerComponent(instance, > name) Can I use that before the CacheManager is started? 
-- Sanne From rvansa at redhat.com Wed Aug 20 08:40:41 2014 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 20 Aug 2014 14:40:41 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> <77EE699E-6216-41A6-9A99-521A1A4DE232@redhat.com> Message-ID: <53F49749.2030305@redhat.com> On 08/20/2014 12:16 PM, Dan Berindei wrote: > > > > On Wed, Aug 20, 2014 at 12:27 PM, Galder Zamarre?o > wrote: > > > On 15 Aug 2014, at 12:41, Dan Berindei > wrote: > > > > > > > > > On Fri, Aug 15, 2014 at 11:37 AM, Galder Zamarre?o > > wrote: > > > > On 12 Aug 2014, at 22:41, Dan Berindei > wrote: > > > >> > >> > >> > >> On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o > > wrote: > >> Can't comment on the document, so here are my thoughts: > >> > >> Re: "Get rid of lazy cache starting...all the caches run on all > nodes...it should still be possible to start a cache at runtime, > but it will be run on all nodes as well" > >> > >> ^ Though I like the idea, it might change a crucial aspect of > how default cache configuration works (if we leave the concept of > default cache at all). Say you start a cache named "a" for which > there's no config. Up until now we'd use the default cache > configuration and create a cache "a" with that config. However, if > caches are started cluster wide now, before you can do that, you'd > have to check that there's no cache "a" configuration anywhere in > the cluster. If there is, I guess the configuration would be > shipped to the node that starts the cache (if it does not have it) > and create the cache with it? Or are you assuming all nodes in the > cluster must have all configurations defined? > >> > >> +1 to remove the default cache as a default configuration. > >> > >> I like the idea of shipping the cache configuration to all the > nodes. We will have to require any user-provided objects in the > configuration to be serializable/externalizable, but I don't see a > big problem with that. > >> > >> In fact, it would also allow us to send the entire > configuration to the coordinator on join, so we could verify that > the configuration on all nodes is compatible (not exactly the > same, since things like capacityFactor can be different). And it > would remove the need for the CacheJoinInfo class... > >> > >> A more limited alternative, not requiring config serialization, > would be to disallow getCache(name) when a configuration doesn't > exist but add a method createCache(name, configurationName) that > only requires configurationName to be defined everywhere. > >> > >> > >> Re: "Revisiting Configuration elements..." > >> > >> If we're going to do another round of updates in this area, I > think we should consider what to do with unconfigured values. Back > in the 4.x days, the JAXB XML parsing allowed us to know which > configuration elements the user had not configured, which helped > us tweak configuration and do validation more easily. Now, when we > look at a Configuration builder object, we see default values but > we do not that a value is the one it is because the user has > specifically defined it, or because it's unconfigured. One way to > do so is by separating the default values, say to an XML file > which is reference (I think WF does something along these lines) > and leave the builder object with all null values. 
This would make > it easy to figure out which elements have been touched and, > for those that have not, use default values. This has popped up > in the forums before but I can't find a link right now... > >> > >> I was also thinking of doing something like that, but instead > of having a separate XML with the defaults I was going to propose > creating a layer of indirection: every configuration value would > be a ConfigurationProperty, with a default value, an override > value, and an actual value. We already do something similar for > e.g. StateTransferConfiguration.awaitInitialTransfer and > originalAwaitInitialTransfer). > > > > ^ What's the problem with a separate XML file? > > > > I really like the idea of externalizing default values from a > documentation perspective and ease of change down the line, both > for us and for users. > > > > On top of that, it could be validated and be presented as a > reference XML file, getting rid of the sample XML file that we > currently have, which is half done and no one really updates it. > > > > First of all, how would that XML look? Like a regular > configuration file, with one cache of each type? > > Yeah, could do. The Wildfly guys are already doing it: > https://github.com/wildfly/wildfly/blob/master/clustering/infinispan/src/main/resources/infinispan-defaults.xml > > > One store of each type? In every cache? How would we handle > defaults for custom stores? > > The defaults for custom stores are the same as for any other cache > store. The only thing you cannot default is the custom store > specific stuff, which is specific to the custom store :) > > > Except you can't include them in the default XML, because the default > XML is in core (I assume) and the custom stores are not. Would it be a problem (from an XML tech perspective) to have a default store configuration without all the infinispan shell (<cache-container> and above), just like ... > > You could have a JDBC_CACHE_STORE cache with the defaults for JDBC > cache stores, etc. > > > In what XML file? > > > > We already have an XML file with default values: > infinispan-config-7.0.xsd. It would be nice if we could parse that > and keep the defaults in a single place, but if we need to > duplicate the defaults anyway, I'd rather keep them in code. > > An XSD file is not an XML file. By having the defaults in an XML > file, we can validate it and confirm that it's a valid XML file > that we can parse. Users don't load Infinispan with XSD files :) > > I'm pretty sure infinispan-config-7.0.xsd is a valid XML file, it even > starts with a standard XML declaration: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> > > > To avoid duplication, I'd be tempted to remove all default values > from the XSD file and keep them only in the reference XML file. > > It would definitely be harder to look up the reference XML and check > the defaults, compared to a Ctrl+click on the element/attribute name > with the XSD. > Of course, the XSD only allows one default value for each attribute, > and even duplicating the element types for each cache mode sounds > pretty daunting. Cool apps (read: RadarGun) generate the XSD from source code :) I am generally a fan of having just a single place for the default value, propagated automatically. > > > > I also think with a separate XML file, we'd still need to keep > some not-quite-defaults in the various builder.build() methods (or > Configurations methods). > > ^ What defaults are you talking about? Can you provide an example > of such default options?
> > > With an XML, you could even have different defaults depending on > the other attributes of the cache. E.g. say you have an OL cache, > you could say that the default value for writeSkew with OL is > true, whereas with PL, the default value is false. > > > Yeah, that would be a good example of what I was thinking about :) > > But I was thinking we shouldn't just change the default value, we > should also throw an exception when the user tries to enable write > skew in a PL cache. That check would have to stay in the builder class > - not a default, but still related. Isn't that a mark that the configuration is not designed well? I am not sure how doable it is, but can we have syntactically correct configuration implying semantically correct documentation? In the OL/PL case, if PL implies not enabling WSC, we should make PL element instead of attribute and not include the WSC attribute at all. If want to keep WSC with different default, we could have different attribute for that (with same name) so that user can look it up. My 2c Radim -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/0a847299/attachment-0001.html From dan.berindei at gmail.com Wed Aug 20 11:36:27 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 20 Aug 2014 18:36:27 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On Wed, Aug 20, 2014 at 2:32 PM, Sanne Grinovero wrote: > On 20 August 2014 11:19, Dan Berindei wrote: > > > > > > > > On Wed, Aug 20, 2014 at 1:08 PM, Sanne Grinovero > > wrote: > >> > >> On 12 August 2014 21:41, Dan Berindei wrote: > >> > > >> > > >> > > >> > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o > >> > wrote: > >> >> > >> >> Can?t comment on the document, so here are my thoughts: > >> >> > >> >> Re: ?Get rid of lazy cache starting...all the caches run on all > >> >> nodes...it > >> >> should still be possible to start a cache at runtime, but it will be > >> >> run on > >> >> all nodes as well? > >> >> > >> >> ^ Though I like the idea, it might change a crucial aspect of how > >> >> default > >> >> cache configuration works (if we leave the concept of default cache > at > >> >> all). > >> >> Say you start a cache named ?a? for which there?s no config. Up until > >> >> now > >> >> we?d use the default cache configuration and create a cache ?a? with > >> >> that > >> >> config. However, if caches are started cluster wide now, before you > can > >> >> do > >> >> that, you?d have to check that there?s no cache ?a? configuration > >> >> anywhere > >> >> in the cluster. If there is, I guess the configuration would be > shipped > >> >> to > >> >> the node that starts the cache (if it does not have it) and create > the > >> >> cache > >> >> with it? Or are you assuming all nodes in the cluster must have all > >> >> configurations defined? > >> > > >> > > >> > +1 to remove the default cache as a default configuration. > >> > > >> > I like the idea of shipping the cache configuration to all the nodes. > We > >> > will have to require any user-provided objects in the configuration to > >> > be > >> > serializable/externalizable, but I don't see a big problem with that. 
> >> > >> That would be nice but needs some care, say for example that I want to > >> inject a custom JDBCCacheStore by instance which has a reference to a > >> datasource (Extremely useful use case). > > > > > > Shouldn't the datasource be registered in JNDI anyway? If yes, you could > > serialize the JNDI name. > > You don't want to require the user to need to match configuration > settings in different configuration files of what he considers one > platform. > And we support many more options beyond JNDI. > > Still, usually we want to share datasources for pooling, so the cache store should look up its datasource somewhere instead of creating a new connection pool for each cache. > > >> I could make it serializable by changing it from a CacheStore instance > >> to some kind of "CacheStoreLookupStrategy" but you'd need to give me > >> some hook we can react on to restore the references. Once again (as > >> asked previously) allowing to register custom components by instance > >> in the CacheManager's component Registry would solve this. > >> > > > > We already allow this: > > > > > EmbeddedCacheManager.getGlobalComponentRegistry().registerComponent(instance, > > name) > > Can I use that before the CacheManager is started? > Sure, all DefaultCacheManager.start() does is register some MBeans in JMX. > > -- Sanne > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/8a111bab/attachment.html From dan.berindei at gmail.com Wed Aug 20 11:50:15 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 20 Aug 2014 18:50:15 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <53F49749.2030305@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <26BFF0A6-2094-4B64-A24F-DB67F5927DF9@redhat.com> <77EE699E-6216-41A6-9A99-521A1A4DE232@redhat.com> <53F49749.2030305@redhat.com> Message-ID: On Wed, Aug 20, 2014 at 3:40 PM, Radim Vansa wrote: > On 08/20/2014 12:16 PM, Dan Berindei wrote: > > > > > On Wed, Aug 20, 2014 at 12:27 PM, Galder Zamarre?o > wrote: > >> >> On 15 Aug 2014, at 12:41, Dan Berindei wrote: >> >> > >> > >> > >> > On Fri, Aug 15, 2014 at 11:37 AM, Galder Zamarre?o >> wrote: >> > >> > On 12 Aug 2014, at 22:41, Dan Berindei wrote: >> > >> >> >> >> >> >> >> >> On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o >> wrote: >> >> Can?t comment on the document, so here are my thoughts: >> >> >> >> Re: ?Get rid of lazy cache starting...all the caches run on all >> nodes...it should still be possible to start a cache at runtime, but it >> will be run on all nodes as well? >> >> >> >> ^ Though I like the idea, it might change a crucial aspect of how >> default cache configuration works (if we leave the concept of default cache >> at all). Say you start a cache named ?a? for which there?s no config. Up >> until now we?d use the default cache configuration and create a cache ?a? >> with that config. However, if caches are started cluster wide now, before >> you can do that, you?d have to check that there?s no cache ?a? >> configuration anywhere in the cluster. If there is, I guess the >> configuration would be shipped to the node that starts the cache (if it >> does not have it) and create the cache with it? 
Or are you assuming all >> nodes in the cluster must have all configurations defined? >> >> >> >> +1 to remove the default cache as a default configuration. >> >> >> >> I like the idea of shipping the cache configuration to all the nodes. >> We will have to require any user-provided objects in the configuration to >> be serializable/externalizable, but I don't see a big problem with that. >> >> >> >> In fact, it would also allow us to send the entire configuration to >> the coordinator on join, so we could verify that the configuration on all >> nodes is compatible (not exactly the same, since things like capacityFactor >> can be different). And it would remove the need for the CacheJoinInfo >> class... >> >> >> >> A more limited alternative, not requiring config serialization, would >> be to disallow getCache(name) when a configuration doesn't exist but add a >> method createCache(name, configurationName) that only requires >> configurationName to be defined everywhere. >> >> >> >> >> >> Re: ?Revisiting Configuration elements?" >> >> >> >> If we?re going to do another round of updates in this area, I think we >> should consider what to do with unconfigured values. Back in the 4.x days, >> the JAXB XML parsing allowed us to know which configuration elements the >> user had not configured, which helped us tweak configuration and do >> validation more easily. Now, when we look at a Configuration builder >> object, we see default values but we do not that a value is the one it is >> because the user has specifically defined it, or because it?s unconfigured. >> One way to do so is by separating the default values, say to an XML file >> which is reference (I think WF does something along these lines) and leave >> the builder object with all null values. This would make it easy to figure >> out which elements have been touched and for that those that have not, use >> default values. This has popped up in the forums before but can?t find a >> link right now... >> >> >> >> I was also thinking of doing something like that, but instead of >> having a separate XML with the defaults I was going to propose creating a >> layer of indirection: every configuration value would be a >> ConfigurationProperty, with a default value, an override value, and an >> actual value. We already do something similar for e.g. >> StateTransferConfiguration.awaitInitialTransfer and >> originalAwaitInitialTransfer). >> > >> > ^ What?s the problem with a separate XML file? >> > >> > I really like the idea of externalizing default values from a >> documentation perspective and ease of change down the line, both for us and >> for users. >> > >> > On top of that, it could be validated and be presented as a reference >> XML file, getting rid of the sample XML file that we currently have which >> is half done and no one really updates it. >> > >> > First of all, how would that XML look? Like a regular configuration >> file, with one cache of each type? >> >> Yeah, could do. Wildfly guys already doing it: >> >> https://github.com/wildfly/wildfly/blob/master/clustering/infinispan/src/main/resources/infinispan-defaults.xml >> >> > One store of each type? In every cache? How would we handle defaults >> for custom stores? >> >> The defaults for custom stores are the same as for any other cache >> store. 
The only thing you cannot default is the custom store specific >> stuff, which is specific to the custom store :) >> > > Except you can't include them in the default XML, because the default > XML is in core (I assume) and the custom stores are not. > > > Would it be a problem (from XML tech perspective) to have default store > configuration without all the infinispan shell ( and above), > just like > > > > ... > > > Yes, that can be done, my point is that moving the defaults to XML doesn't necessarily simplify things. > > > >> >> You could have a JDBC_CACHE_STORE cache with the defaults for JDBC cache >> stores?etc. >> > > In what XML file? > > >> >> > We already have an XML file with default values: >> infinispan-config-7.0.xsd. It would be nice if we could parse that and keep >> the defaults in a single place, but if we need to duplicate the defaults >> anyway, I'd rather keep them in code. >> >> An XSD file is not an XML file. By having the defaults in an XML file, >> we can validate it and confirm that it?s a valid XML file that we can parse >> it. Users don?t load Infinispan with XSD files :) >> > > I'm pretty sure infinispan-config-7.0.xsd is a valid XML file, it even > starts with a standard XML declaration: encoding="UTF-8" standalone="yes"?> > > >> >> To avoid duplication, I?d be tempted to remove all default values from >> the XSD file and keep them only in the reference XML file. >> > > It would definitely be harder to look up the reference XML and check the > defaults, compared to a Ctrl+click on the element/attribute name with the > XSD. > Of course, the XSD only allows one default value for each attribute, and > even duplicating the element types for each cache mode sounds pretty > daunting. > > > Cool apps (read: RadarGun) generate XSD from source code :) I am generally > fan of having just single place for the default value, propagated > automatically. > > That sounds interesting, I guess I should track RadarGun changes more closely. > > >> > I also think with a separate XML file, we'd still need to keep some >> not-quite-defaults in the various builder.build() methods (or >> Configurations methods). >> >> ^ What defaults are you talking about? Can you provide an example of >> such default options? >> > >> With an XML, you could even have different defaults depending on the >> other attributes of the cache. E.g. say you have an OL cache, you could say >> that the default value for writeSkew with OL is true, whereas with PL, the >> default value is false. >> > > Yeah, that would be a good example of what I was thinking about :) > > > But I was thinking we shouldn't just change the default value, we should > also throw an exception when the user tries to enable write skew in a PL > cache. That check would have to stay in the builder class - not a default, > but still related. > > > Isn't that a mark that the configuration is not designed well? I am not > sure how doable it is, but can we have syntactically correct configuration > implying semantically correct documentation? In the OL/PL case, if PL > implies not enabling WSC, we should make PL element instead of attribute > and not include the WSC attribute at all. If want to keep WSC with > different default, we could have different attribute for that (with same > name) so that user can look it up. 
Pretty good idea, but I think we want to postpone the next configuration overhaul for 8.0 :) > My 2c > > Radim > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140820/6b1913b9/attachment-0001.html From rvansa at redhat.com Thu Aug 21 03:26:49 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 21 Aug 2014 09:26:49 +0200 Subject: [infinispan-dev] Fwd: Infinispan In-Reply-To: <20cf30363b812fcd1f05011be486@google.com> References: <20cf30363b812fcd1f05011be486@google.com> Message-ID: <53F59F39.1030408@redhat.com> Hi Galder, as the HotRod protocol is portable and this functionality will eventually be implemented in other languages, how is the marshalling of parameters to the factory supposed to work? Radim -------- Original Message -------- Subject: Infinispan Date: Thu, 21 Aug 2014 04:09:47 +0000 From: Infinispan To: rvansa at redhat.com ------------------------------------------------------------------------ Hot Rod Remote Events #2: Filtering events Posted: 20 Aug 2014 08:19 AM PDT This blog post is the second in a series that looks at the forthcoming Hot Rod Remote Events functionality included in Infinispan 7.0. In the first blog post we looked at how to get started receiving remote events from Hot Rod servers. This time we are going to focus on how to filter events directly in the server. Sending events to remote clients has a cost which increases with the number of clients: the more clients register remote listeners, the more events the server has to send. This cost also goes up with the number of modifications executed against the cache: the more cache modifications, the more events need to be sent. A way to reduce this cost is by filtering the events server-side. If custom code at the server level decides that clients are not interested in a particular event, the event does not even need to leave the server, improving the overall performance of the system. Remote event filters are created by implementing an org.infinispan.filter.KeyValueFilterFactory class. Each factory must have a name associated with it via the org.infinispan.filter.NamedFactory annotation. When a listener is added, we can provide the name of a key value filter factory to use with it; the server will then look up the factory and invoke its getKeyValueFilter method to get an org.infinispan.filter.KeyValueFilter instance to filter events server side. Filtering can be done based on key or value information, and even based on cached entry metadata. Here's a sample implementation which will filter key "2" out of the events sent to clients (see the reconstructed sketch below; the embedded code did not survive this text-only archive). Plugging the server with this key value filter requires deploying the filter factory (and associated filter class) within a jar file including a service definition inside the META-INF/services/org.infinispan.filter.KeyValueFilterFactory file, which simply lists the fully-qualified name of each factory class. With the server plugged with the filter, the next step is adding a remote client listener that will use this filter.
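The code snippets embedded in the original post were gists and were stripped from this archive. A rough reconstruction of the two filter factories described above, under stated assumptions: the class names and the factory names "static-filter-factory" and "dynamic-filter-factory" are invented here, and the signatures follow the prose of the post rather than verified Infinispan 7 javadocs.

    import java.io.Serializable;
    import org.infinispan.filter.KeyValueFilter;
    import org.infinispan.filter.KeyValueFilterFactory;
    import org.infinispan.filter.NamedFactory;
    import org.infinispan.metadata.Metadata;

    // Filters events for the statically chosen key 2 out of client notifications.
    @NamedFactory(name = "static-filter-factory")
    public class StaticKeyValueFilterFactory implements KeyValueFilterFactory {
       @Override
       public KeyValueFilter<Integer, String> getKeyValueFilter(Object[] params) {
          return new StaticKeyValueFilter();
       }

       // Serializable is assumed so the server can marshall the filter if needed.
       static class StaticKeyValueFilter
             implements KeyValueFilter<Integer, String>, Serializable {
          @Override
          public boolean accept(Integer key, String value, Metadata metadata) {
             return !Integer.valueOf(2).equals(key); // drop events for key 2 only
          }
       }
    }

    // Revised variant: the key to filter out is supplied by the client as a
    // listener parameter instead of being hard-coded.
    @NamedFactory(name = "dynamic-filter-factory")
    class DynamicKeyValueFilterFactory implements KeyValueFilterFactory {
       @Override
       public KeyValueFilter<Integer, String> getKeyValueFilter(Object[] params) {
          return new DynamicKeyValueFilter(params[0]);
       }

       static class DynamicKeyValueFilter
             implements KeyValueFilter<Integer, String>, Serializable {
          private final Object filteredKey;
          DynamicKeyValueFilter(Object filteredKey) { this.filteredKey = filteredKey; }
          @Override
          public boolean accept(Integer key, String value, Metadata metadata) {
             return !filteredKey.equals(key); // drop events for the client-chosen key
          }
       }
    }

The jar deployed to the server would then list both factory implementations, one fully-qualified class name per line, in META-INF/services/org.infinispan.filter.KeyValueFilterFactory.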
For this example, we'll extend the EventLogListener implementation provided in the first blog post in the series and we override the @ClientListener annotation to indicate the filter factory to use with this listener: Next, we add the listener via the RemoteCache API and we execute some operations against the remote cache: If we checkout the system output we'll see that the client receives events for all keys except those that have been filtered: Finally, with Hot Rod remote events we have tried to provide additional flexibility at the client side, which is why when adding client listeners, users can provide parameters to the filter factory so that filter instances with different behaviours can be generated out of a single filter factory based on client side information. To show this in action, we are going to enhance the filter factory above so that instead of filtering on a statically given key, it can filter dynamically based on the key provided when adding the listener. Here's the revised version: Finally, here's how we can now filter by "3" instead of "2": And the output: To summarise, we've seen how Hot Rod remote events can be filtered providing key/value filter factories that can create instances that filter which events are sent to clients, and how these filters can act on client provided information. In the next blog post, we'll look at how to customize remote events in order to reduce the amount of information sent to the clients, or on the contrary, provide even more information to our clients. Cheers, Galder You are subscribed to email updates from Infinispan To stop receiving these emails, you may unsubscribe now . Email delivery powered by Google Google Inc., 20 West Kinzie, Chicago IL USA 60610 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140821/ef0bdfcb/attachment.html From sanne at infinispan.org Thu Aug 21 08:11:56 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 21 Aug 2014 13:11:56 +0100 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: On 20 August 2014 16:36, Dan Berindei wrote: > > > On Wed, Aug 20, 2014 at 2:32 PM, Sanne Grinovero > wrote: >> >> On 20 August 2014 11:19, Dan Berindei wrote: >> > >> > >> > >> > On Wed, Aug 20, 2014 at 1:08 PM, Sanne Grinovero >> > wrote: >> >> >> >> On 12 August 2014 21:41, Dan Berindei wrote: >> >> > >> >> > >> >> > >> >> > On Tue, Aug 5, 2014 at 11:27 AM, Galder Zamarre?o >> >> > wrote: >> >> >> >> >> >> Can?t comment on the document, so here are my thoughts: >> >> >> >> >> >> Re: ?Get rid of lazy cache starting...all the caches run on all >> >> >> nodes...it >> >> >> should still be possible to start a cache at runtime, but it will be >> >> >> run on >> >> >> all nodes as well? >> >> >> >> >> >> ^ Though I like the idea, it might change a crucial aspect of how >> >> >> default >> >> >> cache configuration works (if we leave the concept of default cache >> >> >> at >> >> >> all). >> >> >> Say you start a cache named ?a? for which there?s no config. Up >> >> >> until >> >> >> now >> >> >> we?d use the default cache configuration and create a cache ?a? with >> >> >> that >> >> >> config. However, if caches are started cluster wide now, before you >> >> >> can >> >> >> do >> >> >> that, you?d have to check that there?s no cache ?a? 
configuration >> >> >> anywhere >> >> >> in the cluster. If there is, I guess the configuration would be >> >> >> shipped >> >> >> to >> >> >> the node that starts the cache (if it does not have it) and create >> >> >> the >> >> >> cache >> >> >> with it? Or are you assuming all nodes in the cluster must have all >> >> >> configurations defined? >> >> > >> >> > >> >> > +1 to remove the default cache as a default configuration. >> >> > >> >> > I like the idea of shipping the cache configuration to all the nodes. >> >> > We >> >> > will have to require any user-provided objects in the configuration >> >> > to >> >> > be >> >> > serializable/externalizable, but I don't see a big problem with that. >> >> >> >> That would be nice but needs some care, say for example that I want to >> >> inject a custom JDBCCacheStore by instance which has a reference to a >> >> datasource (Extremely useful use case). >> > >> > >> > Shouldn't the datasource be registered in JNDI anyway? If yes, you could >> > serialize the JNDI name. >> >> You don't want to require the user to need to match configuration >> settings in different configuration files of what he considers one >> platform. >> And we support many more options beyond JNDI. >> > > Still, usually we want to share datasources for pooling, so the cache store > should look up its datasource somewhere instead of creating a new connection > pool for each cache. Yes that's exactly my point: I want to be able to share a pool I already have with a CacheManager instance I'm creating. >> >> I could make it serializable by changing it from a CacheStore instance >> >> to some kind of "CacheStoreLookupStrategy" but you'd need to give me >> >> some hook we can react on to restore the references. Once again (as >> >> asked previously) allowing to register custom components by instance >> >> in the CacheManager's component Registry would solve this. >> >> >> > >> > We already allow this: >> > >> > >> > EmbeddedCacheManager.getGlobalComponentRegistry().registerComponent(instance, >> > name) >> >> Can I use that before the CacheManager is started? > > > Sure, all DefaultCacheManager.start() does is register some MBeans in JMX. > >> >> >> -- Sanne >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Thu Aug 21 09:42:55 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 21 Aug 2014 15:42:55 +0200 Subject: [infinispan-dev] [hibernate-dev] [Search] @Transformable vs @ProvidedId In-Reply-To: References: Message-ID: <20140821134255.GG93689@hibernate.org> I basically tl;dr; the whole thread for the obvious reason that it is too long :) But skimming through it made me think of the following. Would it make sense to index Map.Entry with @IndexedEmbedded or @FieldBridge on Map.Entry.getKey() / Map.Entry.getValue()? At a conceptual level at least. One more reasons to get free form entities. 
On Thu 2014-08-07 18:56, Sanne Grinovero wrote: > There are two annotations clashing for same responsibilities: > - org.infinispan.query.Transformable > - org.hibernate.search.annotations.ProvidedId > > as documented at the following link, these two different ways to apply "Id > indexing options" in Infinispan Query, IMHO quite unclear when a user > should use one vs. the other. > > - > http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_requirements_for_the_key_transformable_and_providedid > > The benefit of @Transformable is that Infinispan provides one out of the > box which will work for any user case: it will serialize the whole object > representing the id, then hex-encode the buffer into a String: horribly > inefficient but works on any serializable type. > > @ProvidedId originally marked the indexed entry in such a way that the > indexing engine would consider the id "provided externally", i.e. given at > runtime. It would also assume that its type would be static for a specific > type - which is I think a reasonable expectation but doesn't really hold as > an absolute truth in the case of Infinispan: nothing prevents me to store > an indexed entry of type "Person" for index "personindex" with an Integer > typed key in the cache, and also duplicate the same information under a say > String typed key. > > So there's an expectation mismatch: in ORM world the key type is strongly > related to the value type, but when indexing Infinispan entries the reality > is that we're indexing two independent "modules". > > I was hoping to drop @ProvidedId today as the original "marker" > functionality is no longer needed: since we have > > org.hibernate.search.cfg.spi.SearchConfiguration.isIdProvidedImplicit() > > the option can be implicitly applied to all indexed entries, and the > annotation is mostly redundant in Infinispan since we added this. > > But actually it turns out it's a bit more complex as it servers a second > function as well: it's the only way for users to be able to specify a > FieldBridge for the ID.. so the functionality of this annotation is not > consumed yet. > > So my proposal is to get rid of both @Transformable and @ProvidedId. There > needs to be a single way in Infinispan to define both the indexing options > and transformation; ideally this should be left to the Search Engine and > its provided collection of FieldBridge implementations. > > Since the id type and the value type in Infinispan are not necessarily > strongly related (still the id is unique of course), I think this option > doesn't even belong on the @Indexed value but should be specified on the > key type. > > Problem is that to define a class-level annotation to be used on the > Infinispan keys doesn't really belong in the collection of annotations of > Hibernate Search; I'm tempted to require that the key used for the type > must be one of those for which an out-of-the-box FieldBridge is provided: > the good thing is that now the set is extensible. In a second phase > Infinispan could opt to create a custom annotation like @Transformable to > register these options in a simplified way. > > Even more, I've witnessed cases in which in Infinispan it makes sense to > encode some more information in the key than what's strictly necessary to > identify the key (like having attributes which are not included in the > hashcode and equals definitions). It sounds like the user should be allowed > to annotate the Key types, to allow such additional properties to > contribute to the index definition. 
> > Comments welcome, but I feel strongly that these two annotations need to be > removed to make room for better solutions: we have an opportunity now as > I'm rewriting the mapping engine. > > Sanne > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From rory.odonnell at oracle.com Fri Aug 22 05:05:32 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 22 Aug 2014 10:05:32 +0100 Subject: [infinispan-dev] Early Access build for JDK 9 b27 is available on java.net Message-ID: <53F707DC.7030805@oracle.com> Hi Galder, Early Access build for JDK 9 b27 is available on java.net, summary of changes here I'd also like to use this opportunity to point you to ongoing work in OpenJDK on Project Jigsaw. - JDK 9's source code is now modular: http://mail.openjdk.java.net/pipermail/jdk9-dev/2014-August/001220.html - Mark Reinhold's post providing some context is available on his blog: http://mreinhold.org/blog/jigsaw-phase-two - The first two Project Jigsaw JEPs have been posted at http://openjdk.java.net/jeps/200 & http://openjdk.java.net/jeps/201 . You can also track the progress on the JEPs in the JDK Bug System now - the corresponding JBS issue for JEP 201 is https://bugs.openjdk.java.net/browse/JDK-8051619 , for example. Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list. (If you haven?t already subscribed to that list then please do so first, otherwise your message will be discarded as spam.) Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140822/ea178695/attachment.html From galder at redhat.com Mon Aug 25 03:26:25 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 25 Aug 2014 09:26:25 +0200 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: Message-ID: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> On 15 Aug 2014, at 15:55, Dan Berindei wrote: > It looks to me like you actually want a partial order between caches on shutdown, so why not declare an explicit dependency (e.g. manager.stopOrder(before, after)? We could even throw an exception if the user tries to stop a cache manually in the wrong order (e.g. TestingUtil.killCacheManagers). > > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at the cache manager level that is invoked before any cache is stopped, and you could close all the indexes in that listener. The event could even be at the cache level, if it would make things easier. Not sure you need the listener event since we already have lifecycle event callbacks for external modules. IOW, couldn?t you do this cache stop ordering with an implementation of org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe track each started cache and give it a priority, and then on cacheManagerStopping use that priority to close caches. Note: I?ve not tested this and I don?t know if the callbacks happen at the right time to allow this. Just thinking out loud. Cheers, > > Cheers > Dan > > > > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero wrote: > The goal being to resolve ISPN-4561, I was thinking to expose a very > simple reference counter in the AdvancedCache API. 
> > As you know the Query module - which triggers on indexed caches - can > use the Infinispan Lucene Directory to store its indexes in a > (different) Cache. > When the CacheManager is stopped, if the index storage caches are > stopped first, then the indexed cache is stopped, this might need to > flush/close some pending state on the index and this results in an > illegal operation as the storage is shut down already. > > We could either implement a complex dependency graph, or add a method like: > > > boolean incRef(); > > on AdvancedCache. > > when the Cache#close() method is invoked, this will do an internal > decrement, and only when hitting zero it will really close the cache. > > A CacheManager shutdown will loop through all caches, and invoke > close() on all of them; the close() method should return something so > that the CacheManager shutdown loop understands if it really did close > all caches or if not, in which case it will loop again through all > caches, and loops until all cache instances are really closed. > The return type of "close()" doesn't necessarily need to be exposed on > public API, it could be an internal-only variant. > > Could we do this? > > --Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Mon Aug 25 03:46:26 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 25 Aug 2014 10:46:26 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarreño wrote: > > On 15 Aug 2014, at 15:55, Dan Berindei wrote: > > > It looks to me like you actually want a partial order between caches on > shutdown, so why not declare an explicit dependency (e.g. > manager.stopOrder(before, after)? We could even throw an exception if the > user tries to stop a cache manually in the wrong order (e.g. > TestingUtil.killCacheManagers). > > > > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at > the cache manager level that is invoked before any cache is stopped, and > you could close all the indexes in that listener. The event could even be > at the cache level, if it would make things easier. > > Not sure you need the listener event since we already have lifecycle event > callbacks for external modules. > > IOW, couldn't you do this cache stop ordering with an implementation of > org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe > track each started cache and give it a priority, and then on > cacheManagerStopping use that priority to close caches. Note: I've not > tested this and I don't know if the callbacks happen at the right time to > allow this. Just thinking out loud. > > Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ all the caches have been stopped.
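To make Galder's suggestion concrete, the kind of module lifecycle being discussed would look roughly like the sketch below, written against the Infinispan 7.x SPI. The IndexAwareLifecycle class is invented for illustration, the configuration accessor names are assumptions that may differ between versions, and, per Dan's point above, it only helps once cacheManagerStopping is actually invoked before the caches are stopped:

import java.util.ArrayList;
import java.util.List;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.factories.ComponentRegistry;
import org.infinispan.factories.GlobalComponentRegistry;
import org.infinispan.lifecycle.AbstractModuleLifecycle;
import org.infinispan.manager.EmbeddedCacheManager;

// Tracks indexed caches as they start, so they can be stopped before the
// caches that hold their index data.
public class IndexAwareLifecycle extends AbstractModuleLifecycle {

   private final List<String> indexedCaches = new ArrayList<String>();

   @Override
   public void cacheStarting(ComponentRegistry cr, Configuration cfg, String cacheName) {
      // accessor per the 7.x configuration API; adjust for other versions
      if (cfg.indexing().index().isEnabled()) {
         indexedCaches.add(cacheName);
      }
   }

   @Override
   public void cacheManagerStopping(GlobalComponentRegistry gcr) {
      // Would need to run before any cache is stopped; see the callback
      // ordering problem Dan describes above.
      EmbeddedCacheManager manager = gcr.getComponent(EmbeddedCacheManager.class);
      for (String name : indexedCaches) {
         // flush/close indexes while the index storage caches are still up
         manager.getCache(name).stop();
      }
   }
}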
> Cheers, > > > > > Cheers > > Dan > > > > > > > > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero > wrote: > > The goal being to resolve ISPN-4561, I was thinking to expose a very > > simple reference counter in the AdvancedCache API. > > > > As you know the Query module - which triggers on indexed caches - can > > use the Infinispan Lucene Directory to store its indexes in a > > (different) Cache. > > When the CacheManager is stopped, if the index storage caches are > > stopped first, then the indexed cache is stopped, this might need to > > flush/close some pending state on the index and this results in an > > illegal operation as the storate is shut down already. > > > > We could either implement a complex dependency graph, or add a method > like: > > > > > > boolean incRef(); > > > > on AdvancedCache. > > > > when the Cache#close() method is invoked, this will do an internal > > decrement, and only when hitting zero it will really close the cache. > > > > A CacheManager shutdown will loop through all caches, and invoke > > close() on all of them; the close() method should return something so > > that the CacheManager shutdown loop understand if it really did close > > all caches or if not, in which case it will loop again through all > > caches, and loops until all cache instances are really closed. > > The return type of "close()" doesn't necessarily need to be exposed on > > public API, it could be an internal only variant. > > > > Could we do this? > > > > --Sanne > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140825/c7b3a4d3/attachment.html From rvansa at redhat.com Mon Aug 25 03:52:31 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 25 Aug 2014 09:52:31 +0200 Subject: [infinispan-dev] Distributed index? Message-ID: <53FAEB3F.6090108@redhat.com> Hi, as we've discovered some imperfections in current distributed index implementation, I'd like to know whether it could be possible to store on each node only index of those entries that are primary-owned on that node. Then, each query would be broadcast to other nodes and the results would be merged. From what I understood from Coherence documentation, they do that this way - this seems quite reasonable to me, and does not introduce any bottleneck as our index-master node (and also it does not require any synchronization on shared index). It's also different from sharding which introduces multiple indices but shares the index across nodes. I can easily imagine simple ... WHERE x = 'y' queries, ORDER BY or projections wouldn't be complicated either (unless sorting by non-projected field). Effective offsets and limits would require a bit more work, but the simplistic implementation (non-distributed merge) shouldn't be hard either. Could this approach be used with Lucene easily, or are there any caveats? 
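The per-node setup Radim asks about maps onto Infinispan's index = LOCAL indexing mode, together with the clustered-query broadcast that comes up in the replies below. For reference, the configuration side would look roughly like this fragment (a sketch against the 7.x API; the RAM directory provider is just an example choice):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.Index;

// Each node indexes only the entries it holds, so queries must be
// broadcast to all nodes and the partial results merged.
Configuration cfg = new ConfigurationBuilder()
   .indexing()
      .index(Index.LOCAL)
      .addProperty("default.directory_provider", "ram")
   .build();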
Radim -- Radim Vansa JBoss DataGrid QA From ttarrant at redhat.com Mon Aug 25 03:56:23 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Aug 2014 09:56:23 +0200 Subject: [infinispan-dev] Infinispan Jira workflow Message-ID: <53FAEC27.1010902@redhat.com> I was just looking at the Jira workflow for Infinispan and noticed that all issues start off in the "Open" state and assigned to the default owner for the component. Unfortunately this does not mean that the actual "assignee" has taken ownership, or that he intends to work on it in the near future, or that he has even looked at it. I would therefore like to introduce a state for fresh issues which is just before "Open". This can be "New" or "Unverified/Untriaged" and will make it easier to find all those "lurker" issues which are lost in the noise. What do you think ? Tristan From rvansa at redhat.com Mon Aug 25 04:12:01 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 25 Aug 2014 10:12:01 +0200 Subject: [infinispan-dev] Infinispan Jira workflow In-Reply-To: <53FAEC27.1010902@redhat.com> References: <53FAEC27.1010902@redhat.com> Message-ID: <53FAEFD1.3060302@redhat.com> And are there any recommendations about the 767 currently open issues [1]? It seems to me that after 5 years any issue [2] should be resolved or rejected. [1] https://issues.jboss.org/browse/ISPN/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel [2] https://issues.jboss.org/browse/ISPN-3 https://issues.jboss.org/browse/ISPN-19 etc... On 08/25/2014 09:56 AM, Tristan Tarrant wrote: > I was just looking at the Jira workflow for Infinispan and noticed that > all issues start off in the "Open" state and assigned to the default > owner for the component. Unfortunately this does not mean that the > actual "assignee" has taken ownership, or that he intends to work on it > in the near future, or that he has even looked at it. I would therefore > like to introduce a state for fresh issues which is just before "Open". > This can be "New" or "Unverified/Untriaged" and will make it easier to > find all those "lurker" issues which are lost in the noise. > > What do you think ? > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From rvansa at redhat.com Mon Aug 25 04:13:46 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 25 Aug 2014 10:13:46 +0200 Subject: [infinispan-dev] Infinispan Jira workflow In-Reply-To: <53FAEFD1.3060302@redhat.com> References: <53FAEC27.1010902@redhat.com> <53FAEFD1.3060302@redhat.com> Message-ID: <53FAF03A.2090004@redhat.com> ... marking those issues as "New" would sound somewhat funny :) Radim On 08/25/2014 10:12 AM, Radim Vansa wrote: > And are there any recommendations about the 767 currently open issues > [1]? It seems to me that after 5 years any issue [2] should be resolved > or rejected. > > [1] > https://issues.jboss.org/browse/ISPN/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel > [2] https://issues.jboss.org/browse/ISPN-3 > https://issues.jboss.org/browse/ISPN-19 etc... > > On 08/25/2014 09:56 AM, Tristan Tarrant wrote: >> I was just looking at the Jira workflow for Infinispan and noticed that >> all issues start off in the "Open" state and assigned to the default >> owner for the component. 
Unfortunately this does not mean that the >> actual "assignee" has taken ownership, or that he intends to work on it >> in the near future, or that he has even looked at it. I would therefore >> like to introduce a state for fresh issues which is just before "Open". >> This can be "New" or "Unverified/Untriaged" and will make it easier to >> find all those "lurker" issues which are lost in the noise. >> >> What do you think ? >> >> Tristan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Radim Vansa JBoss DataGrid QA From gustavonalle at gmail.com Mon Aug 25 04:28:46 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Mon, 25 Aug 2014 09:28:46 +0100 Subject: [infinispan-dev] Distributed index? In-Reply-To: <53FAEB3F.6090108@redhat.com> References: <53FAEB3F.6090108@redhat.com> Message-ID: On Mon, Aug 25, 2014 at 8:52 AM, Radim Vansa wrote: > Hi, > > as we've discovered some imperfections in current distributed index > implementation, I'd like to know whether it could be possible to store > on each node only index of those entries that are primary-owned on that > node. Then, each query would be broadcast to other nodes and the results > would be merged. > > Hi, Have you tried the ClusteredQuery feature introduced on [1]? If you set index = LOCAL in the cache indexing config, only local entries will be indexed and then using a ClusteredQuery (see example in [2]), the query is executed on all all nodes, results are collected and merged before returning to the caller. Pagination and Sorting should be supported as well. [1] https://issues.jboss.org/browse/ISPN-200 [2] https://github.com/infinispan/infinispan/blob/master/query/src/test/java/org/infinispan/query/searchmanager/ClusteredCacheQueryTimeoutTest.java Gustavo > Radim > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140825/7ccaf27e/attachment.html From ttarrant at redhat.com Mon Aug 25 04:29:57 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Aug 2014 10:29:57 +0200 Subject: [infinispan-dev] Infinispan Jira workflow In-Reply-To: <53FAF03A.2090004@redhat.com> References: <53FAEC27.1010902@redhat.com> <53FAEFD1.3060302@redhat.com> <53FAF03A.2090004@redhat.com> Message-ID: <53FAF405.3070404@redhat.com> Yes, we need to bring sanity to all of that, and that can be done only if we all do it together :) And "New" is probably a bad choice. "Unassigned" is also wrong since we always have a default assignee. That's why I suggested an "Unverified" or "Untriaged" state instead. Tristan On 25/08/14 10:13, Radim Vansa wrote: > ... marking those issues as "New" would sound somewhat funny :) > > Radim > > On 08/25/2014 10:12 AM, Radim Vansa wrote: >> And are there any recommendations about the 767 currently open issues >> [1]? It seems to me that after 5 years any issue [2] should be resolved >> or rejected. >> >> [1] >> https://issues.jboss.org/browse/ISPN/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel >> [2] https://issues.jboss.org/browse/ISPN-3 >> https://issues.jboss.org/browse/ISPN-19 etc... 
>> >> On 08/25/2014 09:56 AM, Tristan Tarrant wrote: >>> I was just looking at the Jira workflow for Infinispan and noticed that >>> all issues start off in the "Open" state and assigned to the default >>> owner for the component. Unfortunately this does not mean that the >>> actual "assignee" has taken ownership, or that he intends to work on it >>> in the near future, or that he has even looked at it. I would therefore >>> like to introduce a state for fresh issues which is just before "Open". >>> This can be "New" or "Unverified/Untriaged" and will make it easier to >>> find all those "lurker" issues which are lost in the noise. >>> >>> What do you think ? >>> >>> Tristan >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From rvansa at redhat.com Mon Aug 25 05:09:29 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 25 Aug 2014 11:09:29 +0200 Subject: [infinispan-dev] Distributed index? In-Reply-To: References: <53FAEB3F.6090108@redhat.com> Message-ID: <53FAFD49.9050400@redhat.com> Hmm, that sounds like it! I wish I could use DSL for this, too (I have benchmarks set up only for DSL queries). According to docs it's experimental feature, although quite old - I wonder why that was not promoted (it's not covered . Thanks for the directions Radim On 08/25/2014 10:28 AM, Gustavo Fernandes wrote: > On Mon, Aug 25, 2014 at 8:52 AM, Radim Vansa > wrote: > > Hi, > > as we've discovered some imperfections in current distributed index > implementation, I'd like to know whether it could be possible to store > on each node only index of those entries that are primary-owned on > that > node. Then, each query would be broadcast to other nodes and the > results > would be merged. > > > Hi, > > Have you tried the ClusteredQuery feature introduced on [1]? > > If you set index = LOCAL in the cache indexing config, only local > entries will be indexed > and then using a ClusteredQuery (see example in [2]), the query is > executed on all all nodes, > results are collected and merged before returning to the caller. > Pagination and Sorting should > be supported as well. > > [1] https://issues.jboss.org/browse/ISPN-200 > [2] > https://github.com/infinispan/infinispan/blob/master/query/src/test/java/org/infinispan/query/searchmanager/ClusteredCacheQueryTimeoutTest.java > > > Gustavo > > > Radim > > -- > Radim Vansa > > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140825/7beb5558/attachment.html From ttarrant at redhat.com Mon Aug 25 10:37:47 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Aug 2014 16:37:47 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2014-08-25 Message-ID: <53FB4A3B.70208@redhat.com> Get the minutes from here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-08-25-14.02.log.html From ttarrant at redhat.com Mon Aug 25 15:21:26 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Aug 2014 21:21:26 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> Message-ID: <53FB8CB6.1000106@redhat.com> On 12/08/14 22:41, Dan Berindei wrote: > > I like the idea of shipping the cache configuration to all the nodes. > We will have to require any user-provided objects in the configuration > to be serializable/externalizable, but I don't see a big problem with > that. > > In fact, it would also allow us to send the entire configuration to > the coordinator on join, so we could verify that the configuration on > all nodes is compatible (not exactly the same, since things like > capacityFactor can be different). And it would remove the need for the > CacheJoinInfo class... Can't we store the configuration defs in the cluster registry ? If a node attempts to overwrite an existing configuration based on the same name, an exception can be thrown. Tristan From rory.odonnell at oracle.com Tue Aug 26 04:17:56 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Tue, 26 Aug 2014 09:17:56 +0100 Subject: [infinispan-dev] Early Access build for JDK 8u40 build 02 is available on java.net Message-ID: <53FC42B4.90800@oracle.com> Hi Galder, Early Access build for JDK 8u40 build 02 is available on java.net. Summary of changes in JDK 8u40 build 02 are listed here. Early Access Build Test Results Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140826/22520ba3/attachment-0001.html From dan.berindei at gmail.com Tue Aug 26 09:50:40 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 26 Aug 2014 16:50:40 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <53FB8CB6.1000106@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <53FB8CB6.1000106@redhat.com> Message-ID: On Mon, Aug 25, 2014 at 10:21 PM, Tristan Tarrant wrote: > On 12/08/14 22:41, Dan Berindei wrote: > > > > I like the idea of shipping the cache configuration to all the nodes. > > We will have to require any user-provided objects in the configuration > > to be serializable/externalizable, but I don't see a big problem with > > that. > > > > In fact, it would also allow us to send the entire configuration to > > the coordinator on join, so we could verify that the configuration on > > all nodes is compatible (not exactly the same, since things like > > capacityFactor can be different). And it would remove the need for the > > CacheJoinInfo class... > Can't we store the configuration defs in the cluster registry ? 
If a > node attempts to overwrite an existing configuration based on the same > name, an exception can be thrown. > The cluster registry also uses a clustered cache, how would we ship the cache configuration around for that cache? The cluster registry is also too limited to do this check ATM, as it doesn't support conditional operations. I'm not sure whether that's because they just weren't needed, or it's an intentional limitation. Cheers Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140826/52c081f8/attachment.html From ttarrant at redhat.com Tue Aug 26 09:58:31 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 26 Aug 2014 15:58:31 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <53FB8CB6.1000106@redhat.com> Message-ID: <53FC9287.2060902@redhat.com> On 26/08/14 15:50, Dan Berindei wrote: > > The cluster registry also uses a clustered cache, how would we ship > the cache configuration around for that cache? Currently the configuration for the cluster registry is static, so there isn't any need to propagate it. My reasoning obviously falls over when we want to add some configuration to it, such as persistence. > > The cluster registry is also too limited to do this check ATM, as it > doesn't support conditional operations. I'm not sure whether that's > because they just weren't needed, or it's an intentional limitation. > I think it was just laziness. Tristan From mudokonman at gmail.com Tue Aug 26 13:38:44 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 26 Aug 2014 13:38:44 -0400 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: > > > > On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o > wrote: >> >> >> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >> >> > It looks to me like you actually want a partial order between caches on >> > shutdown, so why not declare an explicit dependency (e.g. >> > manager.stopOrder(before, after)? We could even throw an exception if the >> > user tries to stop a cache manually in the wrong order (e.g. >> > TestingUtil.killCacheManagers). >> > >> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >> > the cache manager level that is invoked before any cache is stopped, and you >> > could close all the indexes in that listener. The event could even be at the >> > cache level, if it would make things easier. I think something like this would be the simplest for now especially, how this is done though we can still decide. >> >> Not sure you need the listener event since we already have lifecycle event >> callbacks for external modules. >> >> IOW, couldn?t you do this cache stop ordering with an implementation of >> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >> track each started cache and give it a priority, and then on >> cacheManagerStopping use that priority to close caches. Note: I?ve not >> tested this and I don?t know if the callbacks happen at the right time to >> allow this. Just thinking out loud. +1 this is a nice use of what is already in place. 
The only issue I see here is that there is no ordering of the lifecycle callbacks if you had more than 1 callback, which could cause issues if users wanted to reference certain caches. >> > > Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ > all the caches have been stopped. This seems like a bug, not very nice for ordering of callback methods. > > >> >> Cheers, >> >> > >> > Cheers >> > Dan >> > >> > >> > >> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >> > wrote: >> > The goal being to resolve ISPN-4561, I was thinking to expose a very >> > simple reference counter in the AdvancedCache API. >> > >> > As you know the Query module - which triggers on indexed caches - can >> > use the Infinispan Lucene Directory to store its indexes in a >> > (different) Cache. >> > When the CacheManager is stopped, if the index storage caches are >> > stopped first, then the indexed cache is stopped, this might need to >> > flush/close some pending state on the index and this results in an >> > illegal operation as the storate is shut down already. >> > >> > We could either implement a complex dependency graph, or add a method >> > like: >> > >> > >> > boolean incRef(); >> > >> > on AdvancedCache. >> > >> > when the Cache#close() method is invoked, this will do an internal >> > decrement, and only when hitting zero it will really close the cache. Unfortunately this won't work except in a simple dependency case (you depend on a cache, but no cache can depend on you). Say you have 3 caches (C1, C2, C3). The case is C2 depends on C1 and C3 depends on C2. In this case both C1 and C2 would have a ref count value of 1 and C3 would have 0. This would allow for C1 and C2 to both be eligible to be closed during the same iteration. I think if we started doing dependencies we would really need to have some sort of graph to have anything more than the simple case. Do we know of other use cases where we may want a dependency graph explicitly? It seems what you want is solvable with what is in place, it just has a bug :( >> > >> > A CacheManager shutdown will loop through all caches, and invoke >> > close() on all of them; the close() method should return something so >> > that the CacheManager shutdown loop understand if it really did close >> > all caches or if not, in which case it will loop again through all >> > caches, and loops until all cache instances are really closed. >> > The return type of "close()" doesn't necessarily need to be exposed on >> > public API, it could be an internal only variant. >> > >> > Could we do this? 
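To make the trade-off concrete: the counting idea can be made to respect Will's C1 <- C2 <- C3 chain if every dependant takes a reference on its dependency and a cache releases its dependencies only once it really stops; plain per-cache counters without that release step can indeed pick the wrong order, and a dependency cycle would never reach zero. A self-contained illustration follows (plain Java, not Infinispan API; all names invented):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedShutdownDemo {

   static final class CountedCache {
      final String name;
      final List<CountedCache> dependencies;
      // one reference held by the manager, plus one per dependant
      final AtomicInteger refCount = new AtomicInteger(1);

      CountedCache(String name, CountedCache... dependencies) {
         this.name = name;
         this.dependencies = Arrays.asList(dependencies);
         for (CountedCache dep : dependencies) {
            dep.refCount.incrementAndGet(); // the incRef() step
         }
      }

      void close() {
         if (refCount.decrementAndGet() == 0) {
            System.out.println("really stopping " + name);
            // release our references, which may cascade down the chain
            for (CountedCache dep : dependencies) {
               dep.close();
            }
         }
      }
   }

   public static void main(String[] args) {
      CountedCache c1 = new CountedCache("C1");
      CountedCache c2 = new CountedCache("C2", c1);
      CountedCache c3 = new CountedCache("C3", c2);
      // the manager's naive close-all loop still stops C3, then C2, then C1,
      // whatever order it visits the caches in
      for (CountedCache c : Arrays.asList(c1, c2, c3)) {
         c.close();
      }
   }
}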
>> > >> > --Sanne >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue Aug 26 14:23:43 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Aug 2014 19:23:43 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On 26 August 2014 18:38, William Burns wrote: > On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: >> >> >> >> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o >> wrote: >>> >>> >>> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >>> >>> > It looks to me like you actually want a partial order between caches on >>> > shutdown, so why not declare an explicit dependency (e.g. >>> > manager.stopOrder(before, after)? We could even throw an exception if the >>> > user tries to stop a cache manually in the wrong order (e.g. >>> > TestingUtil.killCacheManagers). >>> > >>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >>> > the cache manager level that is invoked before any cache is stopped, and you >>> > could close all the indexes in that listener. The event could even be at the >>> > cache level, if it would make things easier. > > I think something like this would be the simplest for now especially, > how this is done though we can still decide. > >>> >>> Not sure you need the listener event since we already have lifecycle event >>> callbacks for external modules. >>> >>> IOW, couldn?t you do this cache stop ordering with an implementation of >>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >>> track each started cache and give it a priority, and then on >>> cacheManagerStopping use that priority to close caches. Note: I?ve not >>> tested this and I don?t know if the callbacks happen at the right time to >>> allow this. Just thinking out loud. > > +1 this is a nice use of what is already in place. The only issue I > see here is that there is no ordering of the lifecycle callbacks if > you had more than 1 callback, which could cause issues if users wanted > to reference certain caches. > >>> >> >> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ >> all the caches have been stopped. > > This seems like a bug, not very nice for ordering of callback methods. > >> >> >>> >>> Cheers, >>> >>> > >>> > Cheers >>> > Dan >>> > >>> > >>> > >>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >>> > wrote: >>> > The goal being to resolve ISPN-4561, I was thinking to expose a very >>> > simple reference counter in the AdvancedCache API. 
>>> > >>> > As you know the Query module - which triggers on indexed caches - can >>> > use the Infinispan Lucene Directory to store its indexes in a >>> > (different) Cache. >>> > When the CacheManager is stopped, if the index storage caches are >>> > stopped first, then the indexed cache is stopped, this might need to >>> > flush/close some pending state on the index and this results in an >>> > illegal operation as the storate is shut down already. >>> > >>> > We could either implement a complex dependency graph, or add a method >>> > like: >>> > >>> > >>> > boolean incRef(); >>> > >>> > on AdvancedCache. >>> > >>> > when the Cache#close() method is invoked, this will do an internal >>> > decrement, and only when hitting zero it will really close the cache. > > Unfortunately this won't work except in a simple dependency case (you > depend on a cache, but no cache can depend on you). > > Say you have 3 caches (C1, C2, C3). > > The case is C2 depends on C1 and C3 depends on C2. In this case both > C1 and C2 would have a ref count value of 1 and C3 would have 0. This > would allow for C1 and C2 to both be eligible to be closed during the > same iteration. Yea people could use it the wrong way :-D But you can increment in different patterns than what you described to model a full graph: the important point is to allow users to define an order in *some* way. > I think if we started doing dependencies we would really need to have > some sort of graph to have anything more than the simple case. > > Do we know of other use cases where we may want a dependency graph > explicitly? It seems what you want is solvable with what is in place, > it just has a bug :( True, for my case a two-phases would be good enough *generally speaking* as we don't expect people to index stuff in a Cache which is also used to store an index for a different Cache, but that's a "legal" configuration. Applying Muprhy's law, that means someone will try it out and I'd rather be safe about that. It just so happens that the counter proposal is both trivial and also can handle a quite long ordering chain. I don't understand how it's solvable "with what's in place", could you elaborate? -- Sanne > >>> > >>> > A CacheManager shutdown will loop through all caches, and invoke >>> > close() on all of them; the close() method should return something so >>> > that the CacheManager shutdown loop understand if it really did close >>> > all caches or if not, in which case it will loop again through all >>> > caches, and loops until all cache instances are really closed. >>> > The return type of "close()" doesn't necessarily need to be exposed on >>> > public API, it could be an internal only variant. >>> > >>> > Could we do this? >>> > >>> > --Sanne From mudokonman at gmail.com Tue Aug 26 14:38:02 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 26 Aug 2014 14:38:02 -0400 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On Tue, Aug 26, 2014 at 2:23 PM, Sanne Grinovero wrote: > On 26 August 2014 18:38, William Burns wrote: >> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: >>> >>> >>> >>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o >>> wrote: >>>> >>>> >>>> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >>>> >>>> > It looks to me like you actually want a partial order between caches on >>>> > shutdown, so why not declare an explicit dependency (e.g. 
>>>> > manager.stopOrder(before, after)? We could even throw an exception if the >>>> > user tries to stop a cache manually in the wrong order (e.g. >>>> > TestingUtil.killCacheManagers). >>>> > >>>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >>>> > the cache manager level that is invoked before any cache is stopped, and you >>>> > could close all the indexes in that listener. The event could even be at the >>>> > cache level, if it would make things easier. >> >> I think something like this would be the simplest for now especially, >> how this is done though we can still decide. >> >>>> >>>> Not sure you need the listener event since we already have lifecycle event >>>> callbacks for external modules. >>>> >>>> IOW, couldn?t you do this cache stop ordering with an implementation of >>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >>>> track each started cache and give it a priority, and then on >>>> cacheManagerStopping use that priority to close caches. Note: I?ve not >>>> tested this and I don?t know if the callbacks happen at the right time to >>>> allow this. Just thinking out loud. >> >> +1 this is a nice use of what is already in place. The only issue I >> see here is that there is no ordering of the lifecycle callbacks if >> you had more than 1 callback, which could cause issues if users wanted >> to reference certain caches. >> >>>> >>> >>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ >>> all the caches have been stopped. >> >> This seems like a bug, not very nice for ordering of callback methods. >> >>> >>> >>>> >>>> Cheers, >>>> >>>> > >>>> > Cheers >>>> > Dan >>>> > >>>> > >>>> > >>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >>>> > wrote: >>>> > The goal being to resolve ISPN-4561, I was thinking to expose a very >>>> > simple reference counter in the AdvancedCache API. >>>> > >>>> > As you know the Query module - which triggers on indexed caches - can >>>> > use the Infinispan Lucene Directory to store its indexes in a >>>> > (different) Cache. >>>> > When the CacheManager is stopped, if the index storage caches are >>>> > stopped first, then the indexed cache is stopped, this might need to >>>> > flush/close some pending state on the index and this results in an >>>> > illegal operation as the storate is shut down already. >>>> > >>>> > We could either implement a complex dependency graph, or add a method >>>> > like: >>>> > >>>> > >>>> > boolean incRef(); >>>> > >>>> > on AdvancedCache. >>>> > >>>> > when the Cache#close() method is invoked, this will do an internal >>>> > decrement, and only when hitting zero it will really close the cache. >> >> Unfortunately this won't work except in a simple dependency case (you >> depend on a cache, but no cache can depend on you). >> >> Say you have 3 caches (C1, C2, C3). >> >> The case is C2 depends on C1 and C3 depends on C2. In this case both >> C1 and C2 would have a ref count value of 1 and C3 would have 0. This >> would allow for C1 and C2 to both be eligible to be closed during the >> same iteration. > > Yea people could use it the wrong way :-D Oh I agree, but it seems this could be a pretty simple case that a user may not know the consequences of. The problem with this you have to know the dependencies of the cache you are depending on as well, which I don't think most users would want. 
It would be fine if only Infinispan used it internally, or at least it should :) > > But you can increment in different patterns than what you described to > model a full graph: the important point is to allow users to define an > order in *some* way. > > >> I think if we started doing dependencies we would really need to have >> some sort of graph to have anything more than the simple case. >> >> Do we know of other use cases where we may want a dependency graph >> explicitly? It seems what you want is solvable with what is in place, >> it just has a bug :( > > True, for my case a two-phases would be good enough *generally > speaking* as we don't expect people to index stuff in a Cache which is > also used to store an index for a different Cache, but that's a > "legal" configuration. > Applying Muprhy's law, that means someone will try it out and I'd > rather be safe about that. > > It just so happens that the counter proposal is both trivial and also > can handle a quite long ordering chain. > > I don't understand how it's solvable "with what's in place", could you > elaborate? I meant that you could use the ModuleLifecycle callback that Galder mentioned in the query module to close any caches that are needed before the manager starts shutting down others. However until the mentioned bug is fixed it won't quite work. When I said "with what is in place" I meant more that we wouldn't have to design a new implementation to support your use case. > > -- Sanne > >> >>>> > >>>> > A CacheManager shutdown will loop through all caches, and invoke >>>> > close() on all of them; the close() method should return something so >>>> > that the CacheManager shutdown loop understand if it really did close >>>> > all caches or if not, in which case it will loop again through all >>>> > caches, and loops until all cache instances are really closed. >>>> > The return type of "close()" doesn't necessarily need to be exposed on >>>> > public API, it could be an internal only variant. >>>> > >>>> > Could we do this? >>>> > >>>> > --Sanne > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue Aug 26 15:17:52 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Aug 2014 20:17:52 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On 26 August 2014 19:38, William Burns wrote: > On Tue, Aug 26, 2014 at 2:23 PM, Sanne Grinovero wrote: >> On 26 August 2014 18:38, William Burns wrote: >>> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: >>>> >>>> >>>> >>>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o >>>> wrote: >>>>> >>>>> >>>>> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >>>>> >>>>> > It looks to me like you actually want a partial order between caches on >>>>> > shutdown, so why not declare an explicit dependency (e.g. >>>>> > manager.stopOrder(before, after)? We could even throw an exception if the >>>>> > user tries to stop a cache manually in the wrong order (e.g. >>>>> > TestingUtil.killCacheManagers). >>>>> > >>>>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >>>>> > the cache manager level that is invoked before any cache is stopped, and you >>>>> > could close all the indexes in that listener. 
The event could even be at the >>>>> > cache level, if it would make things easier. >>> >>> I think something like this would be the simplest for now especially, >>> how this is done though we can still decide. >>> >>>>> >>>>> Not sure you need the listener event since we already have lifecycle event >>>>> callbacks for external modules. >>>>> >>>>> IOW, couldn?t you do this cache stop ordering with an implementation of >>>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >>>>> track each started cache and give it a priority, and then on >>>>> cacheManagerStopping use that priority to close caches. Note: I?ve not >>>>> tested this and I don?t know if the callbacks happen at the right time to >>>>> allow this. Just thinking out loud. >>> >>> +1 this is a nice use of what is already in place. The only issue I >>> see here is that there is no ordering of the lifecycle callbacks if >>> you had more than 1 callback, which could cause issues if users wanted >>> to reference certain caches. >>> >>>>> >>>> >>>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ >>>> all the caches have been stopped. >>> >>> This seems like a bug, not very nice for ordering of callback methods. >>> >>>> >>>> >>>>> >>>>> Cheers, >>>>> >>>>> > >>>>> > Cheers >>>>> > Dan >>>>> > >>>>> > >>>>> > >>>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >>>>> > wrote: >>>>> > The goal being to resolve ISPN-4561, I was thinking to expose a very >>>>> > simple reference counter in the AdvancedCache API. >>>>> > >>>>> > As you know the Query module - which triggers on indexed caches - can >>>>> > use the Infinispan Lucene Directory to store its indexes in a >>>>> > (different) Cache. >>>>> > When the CacheManager is stopped, if the index storage caches are >>>>> > stopped first, then the indexed cache is stopped, this might need to >>>>> > flush/close some pending state on the index and this results in an >>>>> > illegal operation as the storate is shut down already. >>>>> > >>>>> > We could either implement a complex dependency graph, or add a method >>>>> > like: >>>>> > >>>>> > >>>>> > boolean incRef(); >>>>> > >>>>> > on AdvancedCache. >>>>> > >>>>> > when the Cache#close() method is invoked, this will do an internal >>>>> > decrement, and only when hitting zero it will really close the cache. >>> >>> Unfortunately this won't work except in a simple dependency case (you >>> depend on a cache, but no cache can depend on you). >>> >>> Say you have 3 caches (C1, C2, C3). >>> >>> The case is C2 depends on C1 and C3 depends on C2. In this case both >>> C1 and C2 would have a ref count value of 1 and C3 would have 0. This >>> would allow for C1 and C2 to both be eligible to be closed during the >>> same iteration. >> >> Yea people could use it the wrong way :-D > > Oh I agree, but it seems this could be a pretty simple case that a > user may not know the consequences of. The problem with this you have > to know the dependencies of the cache you are depending on as well, > which I don't think most users would want. It would be fine if only > Infinispan used it internally, or at least it should :) Right, I expect this to be used at SPI level: other frameworks integrating still need a stable contract but that doesn't mean all of the SPI needs to be easy, maybe documented only in an appendix, etc.. >> But you can increment in different patterns than what you described to >> model a full graph: the important point is to allow users to define an >> order in *some* way. 
>> >> >>> I think if we started doing dependencies we would really need to have >>> some sort of graph to have anything more than the simple case. >>> >>> Do we know of other use cases where we may want a dependency graph >>> explicitly? It seems what you want is solvable with what is in place, >>> it just has a bug :( >> >> True, for my case a two-phases would be good enough *generally >> speaking* as we don't expect people to index stuff in a Cache which is >> also used to store an index for a different Cache, but that's a >> "legal" configuration. >> Applying Muprhy's law, that means someone will try it out and I'd >> rather be safe about that. >> >> It just so happens that the counter proposal is both trivial and also >> can handle a quite long ordering chain. >> >> I don't understand how it's solvable "with what's in place", could you >> elaborate? > > I meant that you could use the ModuleLifecycle callback that Galder > mentioned in the query module to close any caches that are needed > before the manager starts shutting down others. However until the > mentioned bug is fixed it won't quite work. When I said "with what is > in place" I meant more that we wouldn't have to design a new > implementation to support your use case. The pattern Galder suggested implies some kind of counting right ;-) just that he suggests I could implement my own. But when you close my module (Query), it might be too late.. or too early, as there might be other users of the index cache. So you're saying I need to build this in the Lucene Directory module? That doesn't work either as the Lucene Directory should not depend on Query, nor it can tell which module is using it, but more importantly this module isn't necessarily used exclusively by the Query module. So we're back to square one, i.e. some kind of general-purpose counting. -- Sanne From mudokonman at gmail.com Tue Aug 26 15:29:14 2014 From: mudokonman at gmail.com (William Burns) Date: Tue, 26 Aug 2014 15:29:14 -0400 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On Tue, Aug 26, 2014 at 3:17 PM, Sanne Grinovero wrote: > On 26 August 2014 19:38, William Burns wrote: >> On Tue, Aug 26, 2014 at 2:23 PM, Sanne Grinovero wrote: >>> On 26 August 2014 18:38, William Burns wrote: >>>> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: >>>>> >>>>> >>>>> >>>>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o >>>>> wrote: >>>>>> >>>>>> >>>>>> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >>>>>> >>>>>> > It looks to me like you actually want a partial order between caches on >>>>>> > shutdown, so why not declare an explicit dependency (e.g. >>>>>> > manager.stopOrder(before, after)? We could even throw an exception if the >>>>>> > user tries to stop a cache manually in the wrong order (e.g. >>>>>> > TestingUtil.killCacheManagers). >>>>>> > >>>>>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >>>>>> > the cache manager level that is invoked before any cache is stopped, and you >>>>>> > could close all the indexes in that listener. The event could even be at the >>>>>> > cache level, if it would make things easier. >>>> >>>> I think something like this would be the simplest for now especially, >>>> how this is done though we can still decide. >>>> >>>>>> >>>>>> Not sure you need the listener event since we already have lifecycle event >>>>>> callbacks for external modules. 
>>>>>> >>>>>> IOW, couldn?t you do this cache stop ordering with an implementation of >>>>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >>>>>> track each started cache and give it a priority, and then on >>>>>> cacheManagerStopping use that priority to close caches. Note: I?ve not >>>>>> tested this and I don?t know if the callbacks happen at the right time to >>>>>> allow this. Just thinking out loud. >>>> >>>> +1 this is a nice use of what is already in place. The only issue I >>>> see here is that there is no ordering of the lifecycle callbacks if >>>> you had more than 1 callback, which could cause issues if users wanted >>>> to reference certain caches. >>>> >>>>>> >>>>> >>>>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ >>>>> all the caches have been stopped. >>>> >>>> This seems like a bug, not very nice for ordering of callback methods. >>>> >>>>> >>>>> >>>>>> >>>>>> Cheers, >>>>>> >>>>>> > >>>>>> > Cheers >>>>>> > Dan >>>>>> > >>>>>> > >>>>>> > >>>>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >>>>>> > wrote: >>>>>> > The goal being to resolve ISPN-4561, I was thinking to expose a very >>>>>> > simple reference counter in the AdvancedCache API. >>>>>> > >>>>>> > As you know the Query module - which triggers on indexed caches - can >>>>>> > use the Infinispan Lucene Directory to store its indexes in a >>>>>> > (different) Cache. >>>>>> > When the CacheManager is stopped, if the index storage caches are >>>>>> > stopped first, then the indexed cache is stopped, this might need to >>>>>> > flush/close some pending state on the index and this results in an >>>>>> > illegal operation as the storate is shut down already. >>>>>> > >>>>>> > We could either implement a complex dependency graph, or add a method >>>>>> > like: >>>>>> > >>>>>> > >>>>>> > boolean incRef(); >>>>>> > >>>>>> > on AdvancedCache. >>>>>> > >>>>>> > when the Cache#close() method is invoked, this will do an internal >>>>>> > decrement, and only when hitting zero it will really close the cache. >>>> >>>> Unfortunately this won't work except in a simple dependency case (you >>>> depend on a cache, but no cache can depend on you). >>>> >>>> Say you have 3 caches (C1, C2, C3). >>>> >>>> The case is C2 depends on C1 and C3 depends on C2. In this case both >>>> C1 and C2 would have a ref count value of 1 and C3 would have 0. This >>>> would allow for C1 and C2 to both be eligible to be closed during the >>>> same iteration. >>> >>> Yea people could use it the wrong way :-D >> >> Oh I agree, but it seems this could be a pretty simple case that a >> user may not know the consequences of. The problem with this you have >> to know the dependencies of the cache you are depending on as well, >> which I don't think most users would want. It would be fine if only >> Infinispan used it internally, or at least it should :) > > Right, I expect this to be used at SPI level: other frameworks > integrating still need a stable contract but that doesn't mean all of > the SPI needs to be easy, maybe documented only in an appendix, etc.. > > >>> But you can increment in different patterns than what you described to >>> model a full graph: the important point is to allow users to define an >>> order in *some* way. >>> >>> >>>> I think if we started doing dependencies we would really need to have >>>> some sort of graph to have anything more than the simple case. >>>> >>>> Do we know of other use cases where we may want a dependency graph >>>> explicitly? 
It seems what you want is solvable with what is in place, >>>> it just has a bug :( >>> >>> True, for my case a two-phases would be good enough *generally >>> speaking* as we don't expect people to index stuff in a Cache which is >>> also used to store an index for a different Cache, but that's a >>> "legal" configuration. >>> Applying Muprhy's law, that means someone will try it out and I'd >>> rather be safe about that. >>> >>> It just so happens that the counter proposal is both trivial and also >>> can handle a quite long ordering chain. >>> >>> I don't understand how it's solvable "with what's in place", could you >>> elaborate? >> >> I meant that you could use the ModuleLifecycle callback that Galder >> mentioned in the query module to close any caches that are needed >> before the manager starts shutting down others. However until the >> mentioned bug is fixed it won't quite work. When I said "with what is >> in place" I meant more that we wouldn't have to design a new >> implementation to support your use case. > > The pattern Galder suggested implies some kind of counting right ;-) > just that he suggests I could implement my own. Not necessarily, could you not just when a cache starts check it's configuration and if it has indexing enabled keep a reference to it. Then when the cache manager is stopping you stop those caches before returning? > > But when you close my module (Query), it might be too late.. or too > early, as there might be other users of the index cache. So you're > saying I need to build this in the Lucene Directory module? > That doesn't work either as the Lucene Directory should not depend on > Query, nor it can tell which module is using it, but more importantly > this module isn't necessarily used exclusively by the Query module. This wouldn't be fired when the module is closed but rather as a notification that the cache manager is about to begin its stop sequence. > > So we're back to square one, i.e. some kind of general-purpose counting. > > -- Sanne > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue Aug 26 18:28:27 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 26 Aug 2014 23:28:27 +0100 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On 26 August 2014 20:29, William Burns wrote: > On Tue, Aug 26, 2014 at 3:17 PM, Sanne Grinovero wrote: >> On 26 August 2014 19:38, William Burns wrote: >>> On Tue, Aug 26, 2014 at 2:23 PM, Sanne Grinovero wrote: >>>> On 26 August 2014 18:38, William Burns wrote: >>>>> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarre?o >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> On 15 Aug 2014, at 15:55, Dan Berindei wrote: >>>>>>> >>>>>>> > It looks to me like you actually want a partial order between caches on >>>>>>> > shutdown, so why not declare an explicit dependency (e.g. >>>>>>> > manager.stopOrder(before, after)? We could even throw an exception if the >>>>>>> > user tries to stop a cache manually in the wrong order (e.g. >>>>>>> > TestingUtil.killCacheManagers). 
>>>>>>> > >>>>>>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at >>>>>>> > the cache manager level that is invoked before any cache is stopped, and you >>>>>>> > could close all the indexes in that listener. The event could even be at the >>>>>>> > cache level, if it would make things easier. >>>>> >>>>> I think something like this would be the simplest for now especially, >>>>> how this is done though we can still decide. >>>>> >>>>>>> >>>>>>> Not sure you need the listener event since we already have lifecycle event >>>>>>> callbacks for external modules. >>>>>>> >>>>>>> IOW, couldn?t you do this cache stop ordering with an implementation of >>>>>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe >>>>>>> track each started cache and give it a priority, and then on >>>>>>> cacheManagerStopping use that priority to close caches. Note: I?ve not >>>>>>> tested this and I don?t know if the callbacks happen at the right time to >>>>>>> allow this. Just thinking out loud. >>>>> >>>>> +1 this is a nice use of what is already in place. The only issue I >>>>> see here is that there is no ordering of the lifecycle callbacks if >>>>> you had more than 1 callback, which could cause issues if users wanted >>>>> to reference certain caches. >>>>> >>>>>>> >>>>>> >>>>>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_ >>>>>> all the caches have been stopped. >>>>> >>>>> This seems like a bug, not very nice for ordering of callback methods. >>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> > >>>>>>> > Cheers >>>>>>> > Dan >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero >>>>>>> > wrote: >>>>>>> > The goal being to resolve ISPN-4561, I was thinking to expose a very >>>>>>> > simple reference counter in the AdvancedCache API. >>>>>>> > >>>>>>> > As you know the Query module - which triggers on indexed caches - can >>>>>>> > use the Infinispan Lucene Directory to store its indexes in a >>>>>>> > (different) Cache. >>>>>>> > When the CacheManager is stopped, if the index storage caches are >>>>>>> > stopped first, then the indexed cache is stopped, this might need to >>>>>>> > flush/close some pending state on the index and this results in an >>>>>>> > illegal operation as the storate is shut down already. >>>>>>> > >>>>>>> > We could either implement a complex dependency graph, or add a method >>>>>>> > like: >>>>>>> > >>>>>>> > >>>>>>> > boolean incRef(); >>>>>>> > >>>>>>> > on AdvancedCache. >>>>>>> > >>>>>>> > when the Cache#close() method is invoked, this will do an internal >>>>>>> > decrement, and only when hitting zero it will really close the cache. >>>>> >>>>> Unfortunately this won't work except in a simple dependency case (you >>>>> depend on a cache, but no cache can depend on you). >>>>> >>>>> Say you have 3 caches (C1, C2, C3). >>>>> >>>>> The case is C2 depends on C1 and C3 depends on C2. In this case both >>>>> C1 and C2 would have a ref count value of 1 and C3 would have 0. This >>>>> would allow for C1 and C2 to both be eligible to be closed during the >>>>> same iteration. >>>> >>>> Yea people could use it the wrong way :-D >>> >>> Oh I agree, but it seems this could be a pretty simple case that a >>> user may not know the consequences of. The problem with this you have >>> to know the dependencies of the cache you are depending on as well, >>> which I don't think most users would want. 
It would be fine if only >>> Infinispan used it internally, or at least it should :) >> >> Right, I expect this to be used at SPI level: other frameworks >> integrating still need a stable contract but that doesn't mean all of >> the SPI needs to be easy, maybe documented only in an appendix, etc. >> >> >>>> But you can increment in different patterns than what you described to >>>> model a full graph: the important point is to allow users to define an >>>> order in *some* way. >>>> >>>> >>>>> I think if we started doing dependencies we would really need to have >>>>> some sort of graph to have anything more than the simple case. >>>>> >>>>> Do we know of other use cases where we may want a dependency graph >>>>> explicitly? It seems what you want is solvable with what is in place, >>>>> it just has a bug :( >>>> >>>> True, for my case a two-phase approach would be good enough *generally >>>> speaking* as we don't expect people to index stuff in a Cache which is >>>> also used to store an index for a different Cache, but that's a >>>> "legal" configuration. >>>> Applying Murphy's law, that means someone will try it out and I'd >>>> rather be safe about that. >>>> >>>> It just so happens that the counter proposal is both trivial and also >>>> can handle a quite long ordering chain. >>>> >>>> I don't understand how it's solvable "with what's in place", could you >>>> elaborate? >>> >>> I meant that you could use the ModuleLifecycle callback that Galder >>> mentioned in the query module to close any caches that are needed >>> before the manager starts shutting down others. However until the >>> mentioned bug is fixed it won't quite work. When I said "with what is >>> in place" I meant more that we wouldn't have to design a new >>> implementation to support your use case. >> >> The pattern Galder suggested implies some kind of counting right ;-) >> just that he suggests I could implement my own. > > Not necessarily, could you not just check its configuration when a cache > starts and, if it has indexing enabled, keep a reference to it? > Then when the cache manager is stopping you stop those caches before > returning? I suspect that if we allow extension points to shut down other components eagerly it's going to be a mess. I can't reliably track whether (and which) other components and user code are still using it. >> But when you close my module (Query), it might be too late... or too >> early, as there might be other users of the index cache. So you're >> saying I need to build this in the Lucene Directory module? >> That doesn't work either as the Lucene Directory should not depend on >> Query, nor can it tell which module is using it, but more importantly >> this module isn't necessarily used exclusively by the Query module. > > This wouldn't be fired when the module is closed but rather as a > notification that the cache manager is about to begin its stop > sequence. So I would eagerly shut down an indexed cache from a module which might have been started as its dependant (or not). What about other "in flight" operations happening on that Cache? Do we let them blow up even though technically we didn't shut down yet? Sorry for playing devil's advocate, but it seems very wrong.
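To make the counter idea concrete, here is a rough sketch of the shape I have in mind. This is hypothetical code: incRef() doesn't exist on AdvancedCache today, and none of these names are meant as a final API.

import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedCacheStop {
   // starts at 1: the reference held by the owning cache manager
   private final AtomicInteger refs = new AtomicInteger(1);
   private final Runnable actualStop;

   public RefCountedCacheStop(Runnable actualStop) {
      this.actualStop = actualStop;
   }

   // a dependant (e.g. an indexed cache using this cache for index
   // storage) increments before it starts relying on this cache
   public void incRef() {
      refs.incrementAndGet();
   }

   // every stop() decrements; only the last one really stops the cache
   public void stop() {
      if (refs.decrementAndGet() == 0) {
         actualStop.run();
      }
   }
}

The indexed cache would call incRef() on its index-storage cache when it starts, and stop() on it after flushing and closing its own indexes; the storage cache only really stops once every dependant has released it, which also works for longer ordering chains.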
BTW I omitted it so far as "execution complexity" shouldn't be an excuse for an inferior solution, but it's worth keeping in mind that some of these resources are managed by Hibernate Search and you really can't introduce hard dependencies on it, so the solution we're aiming at needs to be built and supported by infinispan-core exclusively, which has to expose this as an SPI. That said, I just wanted to explain the problem and propose a solution, but I no longer have time for this, so is there any volunteer to take it? https://issues.jboss.org/browse/ISPN-4561 TiA -- Sanne From dan.berindei at gmail.com Wed Aug 27 03:32:56 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 27 Aug 2014 10:32:56 +0300 Subject: [infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies In-Reply-To: References: <8F6F1260-D5F8-4953-9160-29738438CCCF@redhat.com> Message-ID: On Wed, Aug 27, 2014 at 1:28 AM, Sanne Grinovero wrote: > On 26 August 2014 20:29, William Burns wrote: > > On Tue, Aug 26, 2014 at 3:17 PM, Sanne Grinovero > wrote: > >> On 26 August 2014 19:38, William Burns wrote: > >>> On Tue, Aug 26, 2014 at 2:23 PM, Sanne Grinovero > wrote: > >>>> On 26 August 2014 18:38, William Burns wrote: > >>>>> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei < dan.berindei at gmail.com> wrote: > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarreño < galder at redhat.com> > >>>>>> wrote: > >>>>>>> > >>>>>>> > >>>>>>> On 15 Aug 2014, at 15:55, Dan Berindei > wrote: > >>>>>>> > >>>>>>> > It looks to me like you actually want a partial order between > caches on > >>>>>>> > shutdown, so why not declare an explicit dependency (e.g. > >>>>>>> > manager.stopOrder(before, after))? We could even throw an > exception if the > >>>>>>> > user tries to stop a cache manually in the wrong order (e.g. > >>>>>>> > TestingUtil.killCacheManagers). > >>>>>>> > > >>>>>>> > Alternatively, we could add an event > CacheManagerStopEvent(pre=true) at > >>>>>>> > the cache manager level that is invoked before any cache is > stopped, and you > >>>>>>> > could close all the indexes in that listener. The event could > even be at the > >>>>>>> > cache level, if it would make things easier. > >>>>> > >>>>> I think something like this would be the simplest for now especially, > >>>>> how this is done though we can still decide. > >>>>> > >>>>>>> > >>>>>>> Not sure you need the listener event since we already have > lifecycle event > >>>>>>> callbacks for external modules. > >>>>>>> > >>>>>>> IOW, couldn't you do this cache stop ordering with an > implementation of > >>>>>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you > could maybe > >>>>>>> track each started cache and give it a priority, and then on > >>>>>>> cacheManagerStopping use that priority to close caches. Note: I've > not > >>>>>>> tested this and I don't know if the callbacks happen at the right > time to > >>>>>>> allow this. Just thinking out loud. > >>>>> > >>>>> +1 this is a nice use of what is already in place. The only issue I > >>>>> see here is that there is no ordering of the lifecycle callbacks if > >>>>> you had more than 1 callback, which could cause issues if users > wanted > >>>>> to reference certain caches. > >>>>> > >>>>>>> > >>>>>> > >>>>>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called > _after_ > >>>>>> all the caches have been stopped. > >>>>> > >>>>> This seems like a bug, not very nice for ordering of callback > methods.
> >>>>> > >>>>>> > >>>>>> > >>>>>>> > >>>>>>> Cheers, > >>>>>>> > >>>>>>> > > >>>>>>> > Cheers > >>>>>>> > Dan > >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero < > sanne at infinispan.org> > >>>>>>> > wrote: > >>>>>>> > The goal being to resolve ISPN-4561, I was thinking to expose a > very > >>>>>>> > simple reference counter in the AdvancedCache API. > >>>>>>> > > >>>>>>> > As you know the Query module - which triggers on indexed caches > - can > >>>>>>> > use the Infinispan Lucene Directory to store its indexes in a > >>>>>>> > (different) Cache. > >>>>>>> > When the CacheManager is stopped, if the index storage caches are > >>>>>>> > stopped first, then the indexed cache is stopped, this might > need to > >>>>>>> > flush/close some pending state on the index and this results in > an > >>>>>>> > illegal operation as the storate is shut down already. > >>>>>>> > > >>>>>>> > We could either implement a complex dependency graph, or add a > method > >>>>>>> > like: > >>>>>>> > > >>>>>>> > > >>>>>>> > boolean incRef(); > >>>>>>> > > >>>>>>> > on AdvancedCache. > >>>>>>> > > >>>>>>> > when the Cache#close() method is invoked, this will do an > internal > >>>>>>> > decrement, and only when hitting zero it will really close the > cache. > >>>>> > >>>>> Unfortunately this won't work except in a simple dependency case (you > >>>>> depend on a cache, but no cache can depend on you). > >>>>> > >>>>> Say you have 3 caches (C1, C2, C3). > >>>>> > >>>>> The case is C2 depends on C1 and C3 depends on C2. In this case both > >>>>> C1 and C2 would have a ref count value of 1 and C3 would have 0. > This > >>>>> would allow for C1 and C2 to both be eligible to be closed during the > >>>>> same iteration. > >>>> > >>>> Yea people could use it the wrong way :-D > >>> > >>> Oh I agree, but it seems this could be a pretty simple case that a > >>> user may not know the consequences of. The problem with this you have > >>> to know the dependencies of the cache you are depending on as well, > >>> which I don't think most users would want. It would be fine if only > >>> Infinispan used it internally, or at least it should :) > >> > >> Right, I expect this to be used at SPI level: other frameworks > >> integrating still need a stable contract but that doesn't mean all of > >> the SPI needs to be easy, maybe documented only in an appendix, etc.. > >> > >> > >>>> But you can increment in different patterns than what you described to > >>>> model a full graph: the important point is to allow users to define an > >>>> order in *some* way. > >>>> > >>>> > >>>>> I think if we started doing dependencies we would really need to have > >>>>> some sort of graph to have anything more than the simple case. > >>>>> > >>>>> Do we know of other use cases where we may want a dependency graph > >>>>> explicitly? It seems what you want is solvable with what is in > place, > >>>>> it just has a bug :( > >>>> > >>>> True, for my case a two-phases would be good enough *generally > >>>> speaking* as we don't expect people to index stuff in a Cache which is > >>>> also used to store an index for a different Cache, but that's a > >>>> "legal" configuration. > >>>> Applying Muprhy's law, that means someone will try it out and I'd > >>>> rather be safe about that. > >>>> > >>>> It just so happens that the counter proposal is both trivial and also > >>>> can handle a quite long ordering chain. > >>>> > >>>> I don't understand how it's solvable "with what's in place", could you > >>>> elaborate? 
> >>> > >>> I meant that you could use the ModuleLifecycle callback that Galder > >>> mentioned in the query module to close any caches that are needed > >>> before the manager starts shutting down others. However until the > >>> mentioned bug is fixed it won't quite work. When I said "with what is > >>> in place" I meant more that we wouldn't have to design a new > >>> implementation to support your use case. > >> > >> The pattern Galder suggested implies some kind of counting right ;-) > >> just that he suggests I could implement my own. > > > > Not necessarily, could you not just when a cache starts check it's > > configuration and if it has indexing enabled keep a reference to it. > > Then when the cache manager is stopping you stop those caches before > > returning? > > I suspect that if we allow extension points to shut down other > components eagerly it's going to be a mess. > I can't reliably track if and which other components (and user code) > are still using it? > > We know that user code isn't using it any more because the user called CacheManager.stop(). If there are other threads accessing caches in the cache manager, the user should expect those operations to fail. > > >> But when you close my module (Query), it might be too late.. or too > >> early, as there might be other users of the index cache. So you're > >> saying I need to build this in the Lucene Directory module? > >> That doesn't work either as the Lucene Directory should not depend on > >> Query, nor it can tell which module is using it, but more importantly > >> this module isn't necessarily used exclusively by the Query module. > > > > This wouldn't be fired when the module is closed but rather as a > > notification that the cache manager is about to begin its stop > > sequence. > > So I would eagerly shut down an indexed cache from a module which > might have been started as its dependant (or not). > > What about other "in flight" operations happening on that Cache, we > let them blow up even if technically we didn't shut down yet? > > Sorry for playing devil's advocate, but it seems very wrong. > It wouldn't be any different from now - stopping the cache manager stops all the caches, we'd just change the order. Transactional caches would wait for in-flight transactions to finish, non-transactional caches would not wait. It's true that this approach does not compose very well. But as Will showed in a prior email, your reference counting proposal doesn't compose well either. So we either do a quick fix that supports only one level of dependencies, or we bite the bullet and support a full dependency graph. > > BTW I omitted it so far as "execution complexity" shouldn't be an > excuse for an inferior solution, but it's worth to keep in mind that > some of these resources are managed by Hibernate Search and you really > can't introduce hard dependencies to it so the solution we're aiming > at needs to be built and supported by infinispan-core exclusively, > which has to expose this as an SPI. > I don't see the need for an additional SPI, ModuleLifecycle should be enough. > > That said, I just wanted to explain the problem and propose a solution > but have no longer time for this so any volunteer taking it? 
> https://issues.jboss.org/browse/ISPN-4561 > > TiA > > -- Sanne > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140827/99797d38/attachment-0001.html From dan.berindei at gmail.com Wed Aug 27 09:14:19 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 27 Aug 2014 16:14:19 +0300 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: <53FC9287.2060902@redhat.com> References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <53FB8CB6.1000106@redhat.com> <53FC9287.2060902@redhat.com> Message-ID: On Tue, Aug 26, 2014 at 4:58 PM, Tristan Tarrant wrote: > On 26/08/14 15:50, Dan Berindei wrote: > > > > The cluster registry also uses a clustered cache, how would we ship > > the cache configuration around for that cache? > Currently the configuration for the cluster registry is static, so there > isn't any need to propagate it. My reasoning obviously falls over when > we want to add some configuration to it, such as persistence. > Right, I was certain we already allowed the user to override the cluster registry cache config. Still, even if the configuration is static, I'm not a fan of adding yet another special case for the cluster registry cache. So I'm not completely sold on your idea yet :) > > > > The cluster registry is also too limited to do this check ATM, as it > > doesn't support conditional operations. I'm not sure whether that's > > because they just weren't needed, or it's an intentional limitation. > > > I think it was just laziness. > > Tristan > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140827/2b4a6f84/attachment.html From ttarrant at redhat.com Wed Aug 27 09:46:02 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 27 Aug 2014 15:46:02 +0200 Subject: [infinispan-dev] minutes from the monitoring&management meeting In-Reply-To: References: <912B2F10-1854-494C-AD55-1C91413D4A51@redhat.com> <1B7EF10E-048D-4764-8AF6-4A2712733E40@redhat.com> <53FB8CB6.1000106@redhat.com> <53FC9287.2060902@redhat.com> Message-ID: <53FDE11A.5030201@redhat.com> I don't think there is any way around "special casing" the Cluster Registry configuration, although I'd use our "named configuration" system with a "well known" name. However a node joining an existing cache would need some kind of bootstrap command to obtain the CR config from the coordinator before starting the other caches. Once we have obtained that we can use the CR itself as a configuration propagation method. Tristan On 27/08/14 15:14, Dan Berindei wrote: > > On Tue, Aug 26, 2014 at 4:58 PM, Tristan Tarrant > wrote: > > On 26/08/14 15:50, Dan Berindei wrote: > > > > The cluster registry also uses a clustered cache, how would we ship > > the cache configuration around for that cache? > Currently the configuration for the cluster registry is static, so > there > isn't any need to propagate it. My reasoning obviously falls over when > we want to add some configuration to it, such as persistence. 
> > Right, I was certain we already allowed the user to override the > cluster registry cache config. > > Still, even if the configuration is static, I'm not a fan of adding > yet another special case for the cluster registry cache. So I'm not > completely sold on your idea yet :) > > > > > The cluster registry is also too limited to do this check ATM, as it > > doesn't support conditional operations. I'm not sure whether that's > > because they just weren't needed, or it's an intentional limitation. > > > I think it was just laziness. > > Tristan > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pierre.sutra at unine.ch Thu Aug 28 07:48:10 2014 From: pierre.sutra at unine.ch (Pierre Sutra) Date: Thu, 28 Aug 2014 13:48:10 +0200 Subject: [infinispan-dev] Nutch atop Hadoop+ISPN Message-ID: <53FF16FA.8070309@unine.ch> Hello, As announced previously, we developed a Gora connector for Infinispan (https://github.com/otrack/gora). The code is quite functional now as we are able to run Apache Nutch 2.x on top of Infinispan and Yarn+HDFS (Hadoop 2.x). Nutch is a pipeline of M/R jobs accessing web pages from a data store (in this case Infinispan). Queries to fetch (and store) pages are executed via the Gora connector, which itself relies on an Apache Avro remote query module in Infinispan and Hot Rod. The next step to foster integration would be removing the need for stable storage (distributing jars to the workers), as well as moving to Infinispan native M/R support. I have seen that this is related to https://issues.jboss.org/browse/ISPN-2941. Could someone please give me more details about the next steps in this direction, in particular regarding stable storage? Many thanks. Cheers, Pierre From galder at redhat.com Thu Aug 28 08:20:59 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 28 Aug 2014 14:20:59 +0200 Subject: [infinispan-dev] Hot Rod partial topology update processing with new segment info - Re: ISPN-4674 Message-ID: Hey Dan, Re: https://issues.jboss.org/browse/ISPN-4674 If you remember, the topology updates that we send to clients are sometimes partial. This happens when at the JGroups level we have a new view, but the HR address cache has not yet been updated with the JGroups address to endpoint address. This logic works well with HR protocol 1.x. With HR 2.x, there's a slight problem with this. The problem is that we now write segment information in the topology, and when we have this partial setup, calls to locateOwnersForSegment(), for a partial cluster of 2, can quite possibly return 2 owners. The problem comes when the client reads the number of servers, discovers it's one, but reading the segment, it says that there's two owners. That's where the ArrayIndexOutOfBoundsException comes from. The question is: how shall we deal with this segment information in the event of a partial topology update? From a client perspective, one option might be to just ignore those segment positions for which there's no cluster member. IOW, if the number of owners is bigger than the cluster view, it could just decide to create a smaller segment array, of only cluster view size, and then ignore the index of a node that's not present in the cluster view.
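In code, that client-side pruning could look something like this. Just a sketch to illustrate the idea, with made-up names, not the actual client code:

class TopologyPruning {
   // ownersPerSegment[s] holds indexes into the received server list
   // for the owners of segment s; numServers is the size of that list.
   static int[][] pruneOwners(int[][] ownersPerSegment, int numServers) {
      int[][] pruned = new int[ownersPerSegment.length][];
      for (int s = 0; s < ownersPerSegment.length; s++) {
         int kept = 0;
         for (int owner : ownersPerSegment[s]) {
            if (owner < numServers) kept++; // count owners we actually received
         }
         int[] result = new int[kept];
         int i = 0;
         for (int owner : ownersPerSegment[s]) {
            if (owner < numServers) result[i++] = owner; // keep only those
         }
         pruned[s] = result;
      }
      return pruned;
   }
}

That way no segment ever references a server index beyond the received list, which is what triggers the ArrayIndexOutOfBoundsException.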
Would this be the best way to solve it? Or could we just avoid sending segment information that?s not right? IOW, directly send from the server segment information with all this filtered. Thoughts? Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Thu Aug 28 08:31:03 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 28 Aug 2014 15:31:03 +0300 Subject: [infinispan-dev] Hot Rod partial topology update processing with new segment info - Re: ISPN-4674 In-Reply-To: References: Message-ID: Do we really need to send those partial topology updates? What topology id do they have? When the coordinator sees the leaver, it updates the consistent hashes on all the members and increases the cache topology id. Normally this is immediately followed by a new topology update that starts a rebalance, but if there is just one node left in the cluster there is nothing to rebalance and this will be the last topology sent to the client. If we already sent a partial topology to the client with that id, we'll never update the CH on the client. Cheers Dan On Thu, Aug 28, 2014 at 3:20 PM, Galder Zamarre?o wrote: > Hey Dan, > > Re: https://issues.jboss.org/browse/ISPN-4674 > > If you remember, the topology updates that we send to clients are > sometimes partial. This happens when at the JGroups level we have a new > view, but the HR address cache has not yet been updated with the JGroups > address to endpoint address. This logic works well with HR protocol 1.x. > > With HR 2.x, there?s a slight problem with this. The problem is that we > now write segment information in the topology, and when we have this > partial set up, calls to locateOwnersForSegment(), for a partial cluster of > 2, it can quite possibly return 2. > > The problem comes when the client reads the number of servers, discovers > it?s one, but reading the segment, it says that there?s two owners. That?s > where the ArrayIndexOutOfBoundsException comes from. > > The question is: how shall we deal with this segment information in the > even of a partial topology update? > > >From a client perspective, one option might be to just ignore those > segment positions for which there?s no cluster member. IOW, if the number > of owners is bigger than the cluster view, it could just decide to create a > smaller segment array, of only cluster view size, and then ignore the index > of a node that?s not present in the cluster view. > > Would this be the best way to solve it? Or could we just avoid sending > segment information that?s not right? IOW, directly send from the server > segment information with all this filtered. > > Thoughts? > > Cheers, > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140828/873ef5f9/attachment.html From dan.berindei at gmail.com Thu Aug 28 08:51:45 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 28 Aug 2014 15:51:45 +0300 Subject: [infinispan-dev] Hot Rod partial topology update processing with new segment info - Re: ISPN-4674 In-Reply-To: References: Message-ID: Ok, I've read the code now. (It's been a long time!) 
The partial updates should only be sent if there is a mismatch between the current CH and the topology cache (i.e. one of the owners in the CH doesn't have an endpoint address in the topology cache) and the client has a really old CH (i.e. client topology id + 1 < server topology id, e.g. because this is the client's first request). In this case, we send a topology update to the client, even though we know it will be updated soon, but the server must prune all the owners without a valid endpoint address from the CH sent to the client (as per your second proposal). Cheers Dan On Thu, Aug 28, 2014 at 3:31 PM, Dan Berindei wrote: > Do we really need to send those partial topology updates? What topology id > do they have? > > When the coordinator sees the leaver, it updates the consistent hashes on > all the members and increases the cache topology id. Normally this is > immediately followed by a new topology update that starts a rebalance, but > if there is just one node left in the cluster there is nothing to rebalance > and this will be the last topology sent to the client. If we already sent a > partial topology to the client with that id, we'll never update the CH on > the client. > > Cheers > Dan > > > > On Thu, Aug 28, 2014 at 3:20 PM, Galder Zamarreño > wrote: > >> Hey Dan, >> >> Re: https://issues.jboss.org/browse/ISPN-4674 >> >> If you remember, the topology updates that we send to clients are >> sometimes partial. This happens when at the JGroups level we have a new >> view, but the HR address cache has not yet been updated with the JGroups >> address to endpoint address. This logic works well with HR protocol 1.x. >> >> With HR 2.x, there's a slight problem with this. The problem is that we >> now write segment information in the topology, and when we have this >> partial setup, calls to locateOwnersForSegment(), for a partial cluster of >> 2, can quite possibly return 2 owners. >> >> The problem comes when the client reads the number of servers, discovers >> it's one, but reading the segment, it says that there's two owners. That's >> where the ArrayIndexOutOfBoundsException comes from. >> >> The question is: how shall we deal with this segment information in the >> event of a partial topology update? >> >> From a client perspective, one option might be to just ignore those >> segment positions for which there's no cluster member. IOW, if the number >> of owners is bigger than the cluster view, it could just decide to create a >> smaller segment array, of only cluster view size, and then ignore the index >> of a node that's not present in the cluster view. >> >> Would this be the best way to solve it? Or could we just avoid sending >> segment information that's not right? IOW, directly send from the server >> segment information with all this filtered. >> >> Thoughts? >> >> Cheers, >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140828/b4c3125c/attachment-0001.html From galder at redhat.com Fri Aug 29 03:56:06 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Fri, 29 Aug 2014 08:56:06 +0100 Subject: [infinispan-dev] Hot Rod partial topology update processing with new segment info - Re: ISPN-4674 In-Reply-To: References: Message-ID: <20C0D2FD-0630-4F06-A565-A78FFE321A3B@redhat.com> On 28 Aug 2014, at 13:51, Dan Berindei wrote: > Ok, I've read the code now. (It's been a long time!) :) > > The partial updates should only be sent if there is a mismatch between the current CH and the topology cache (i.e. one of the owners in the CH doesn't have an endpoint address in the topology cache) and the client has a really old CH (i.e. client topology id + 1 < server topology id, e.g. because this is the client's first request). in this case, we send a topology update to the client, even though we know it will be updated soon, but the server must prune all the owners without a valid endpoint address from the CH sent to the client (as per your second proposal). Ok, I?ll give that a go. > > Cheers > Dan > > > > On Thu, Aug 28, 2014 at 3:31 PM, Dan Berindei wrote: > Do we really need to send those partial topology updates? What topology id do they have? > > When the coordinator sees the leaver, it updates the consistent hashes on all the members and increases the cache topology id. Normally this is immediately followed by a new topology update that starts a rebalance, but if there is just one node left in the cluster there is nothing to rebalance and this will be the last topology sent to the client. If we already sent a partial topology to the client with that id, we'll never update the CH on the client. > > Cheers > Dan > > > > On Thu, Aug 28, 2014 at 3:20 PM, Galder Zamarre?o wrote: > Hey Dan, > > Re: https://issues.jboss.org/browse/ISPN-4674 > > If you remember, the topology updates that we send to clients are sometimes partial. This happens when at the JGroups level we have a new view, but the HR address cache has not yet been updated with the JGroups address to endpoint address. This logic works well with HR protocol 1.x. > > With HR 2.x, there?s a slight problem with this. The problem is that we now write segment information in the topology, and when we have this partial set up, calls to locateOwnersForSegment(), for a partial cluster of 2, it can quite possibly return 2. > > The problem comes when the client reads the number of servers, discovers it?s one, but reading the segment, it says that there?s two owners. That?s where the ArrayIndexOutOfBoundsException comes from. > > The question is: how shall we deal with this segment information in the even of a partial topology update? > > >From a client perspective, one option might be to just ignore those segment positions for which there?s no cluster member. IOW, if the number of owners is bigger than the cluster view, it could just decide to create a smaller segment array, of only cluster view size, and then ignore the index of a node that?s not present in the cluster view. > > Would this be the best way to solve it? Or could we just avoid sending segment information that?s not right? IOW, directly send from the server segment information with all this filtered. > > Thoughts? 
> > Cheers, > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From bban at redhat.com Fri Aug 29 05:02:11 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 29 Aug 2014 11:02:11 +0200 Subject: [infinispan-dev] JGroups 3.5.0.Final released Message-ID: <54004193.9030305@redhat.com> http://belaban.blogspot.ch/2014/08/jgroups-350final-released.html -- Bela Ban, JGroups lead (http://www.jgroups.org)