From sanne at infinispan.org Tue Apr  1 19:11:31 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 2 Apr 2014 00:11:31 +0100
Subject: [infinispan-dev] Why no JGroups 3.5.0.Beta1 yet?
In-Reply-To:
References:
Message-ID:

In Hibernate Search we worked around the multicast problem by dropping the need for multicasts: essentially all our tests are now using SHARED_LOOPBACK and SHARED_LOOPBACK_PING.

But before that decision, Hardy had come up with the following interesting solution for our Maven build. Having agreed that this is not a JGroups bug but rather a weird configuration on Apple machines, we'd expect Apple users to have to fix their routing table; this would need some appropriate instructions in the usual places, but the patch below also provides a user-friendly error message for those who might not have set it up:

https://github.com/hferentschik/hibernate-search/commit/d207ba088f8bec5d09bbcf77c9b4fdd6571034ef

We chose the simplicity of the in-JVM loopback tests as we build on top of Infinispan, and trust you guys to deal with the network complexities, but I think Hardy's solution might be a good fit to be applied to Infinispan?

Cheers,
Sanne

On 24 March 2014 23:47, Sanne Grinovero wrote:
> I'm wondering what the plans are around updating JGroups. I'd like to update Search to use the latest JGroups 3.5.0.Beta1, but:
> - no good for us to strive ahead of Infinispan as we need to test them all aligned
> - there's an "interesting" situation around JGRP-1808: it doesn't work on a Mac unless you reconfigure your system for proper multicast routes
>
> I'm hoping someone who cares about working on a Mac will take ownership of it, as it doesn't affect me but it's quite annoying for other contributors.
>
> There are many interesting performance improvements in this release, so I'm surprised it wasn't eagerly adopted.
>
> Sanne

From galder at redhat.com Wed Apr  2 07:14:07 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Wed, 2 Apr 2014 13:14:07 +0200
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
Message-ID: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com>

Hi all,

I've finally managed to get around to updating the remote Hot Rod event design wiki [1].

The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.

Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.

A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.

Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).

I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
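To make the factory callbacks a bit more concrete, here's a very rough sketch of what a server-side converter factory could look like (all names here are illustrative only, nothing is final; the wiki has the actual proposal):

    import org.infinispan.metadata.Metadata;

    // Illustrative sketch only -- none of these names are final.
    // A factory deployed on the server side; it creates one converter
    // instance per client listener registration. The Object[] is where
    // optional client-supplied parameters would arrive, if/when we add
    // them to the protocol.
    interface ConverterFactory {
       <K, V, C> Converter<K, V, C> getConverter(Object[] params);
    }

    interface Converter<K, V, C> {
       // Turns a cache entry into the (possibly smaller) payload that
       // is sent to the remote client over Hot Rod.
       C convert(K key, V value, Metadata metadata);
    }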
Cheers,

[1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org

From mmarkus at redhat.com Wed Apr  2 09:09:53 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Wed, 2 Apr 2014 14:09:53 +0100
Subject: [infinispan-dev] Why no JGroups 3.5.0.Beta1 yet?
In-Reply-To:
References:
Message-ID: <344D4C39-AE42-4DD9-9688-4DE6458BF3CF@redhat.com>

I am looking into this: ISPN-4170

On Apr 2, 2014, at 0:11, Sanne Grinovero wrote:

> In Hibernate Search we worked around the multicast problem by dropping the need for multicasts: essentially all our tests are now using SHARED_LOOPBACK and SHARED_LOOPBACK_PING.
>
> But before that decision, Hardy had come up with the following interesting solution for our Maven build. Having agreed that this is not a JGroups bug but rather a weird configuration on Apple machines, we'd expect Apple users to have to fix their routing table; this would need some appropriate instructions in the usual places, but the patch below also provides a user-friendly error message for those who might not have set it up:
>
> https://github.com/hferentschik/hibernate-search/commit/d207ba088f8bec5d09bbcf77c9b4fdd6571034ef
>
> We chose the simplicity of the in-JVM loopback tests as we build on top of Infinispan, and trust you guys to deal with the network complexities, but I think Hardy's solution might be a good fit to be applied to Infinispan?
>
> Cheers,
> Sanne
>
> On 24 March 2014 23:47, Sanne Grinovero wrote:
>> I'm wondering what the plans are around updating JGroups. I'd like to update Search to use the latest JGroups 3.5.0.Beta1, but:
>> - no good for us to strive ahead of Infinispan as we need to test them all aligned
>> - there's an "interesting" situation around JGRP-1808: it doesn't work on a Mac unless you reconfigure your system for proper multicast routes
>>
>> I'm hoping someone who cares about working on a Mac will take ownership of it, as it doesn't affect me but it's quite annoying for other contributors.
>>
>> There are many interesting performance improvements in this release, so I'm surprised it wasn't eagerly adopted.
>>
>> Sanne
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From dan.berindei at gmail.com Wed Apr  2 16:41:19 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Wed, 02 Apr 2014 20:44:19 +0003
Subject: [infinispan-dev] Feature requests for 7.0
In-Reply-To: <1395842223.4949.12.camel@T520>
References: <1395842223.4949.12.camel@T520>
Message-ID: <1396471279.15674.0@smtp.gmail.com>

Hi Paul

On Wed, Mar 26, 2014 at 3:57 PM, Paul Ferraro wrote:
> Hey guys,
>
> I have created a number of requests for features that I'd like to be able to leverage for WildFly 9/10. Can the appropriate component owners (which I think is Dan in all cases) comment on the following issues?
>
> The following issues prevent WF from leveraging Infinispan expiration:
> * Expiration events from cache store
> https://issues.jboss.org/browse/ISPN-3064

You'd probably need expiration events for in-memory entries first: https://issues.jboss.org/browse/ISPN-694

This has been in discussion for a while, but I'm not sure it will make it into 7.0.
It may require expiration to be coupled with eviction, to avoid duplicate expiration events. (I'm sure Sanne would be happy about this, because we would stop checking whether the entry is expired on every access.)

> * Group-based expiration
> https://issues.jboss.org/browse/ISPN-2916

There is a reasonable workaround for this, or at least there will be once we have expiration events: only make one entry mortal, and use its expiration listener to remove all the other entries in the group.

> Now that Infinispan eviction is safe for use by transactional caches,

Funny, I was just looking at the Cache.evict() javadoc and it seems we haven't removed this line yet:

Important: this method should not be called from within a transaction scope.

> there remain a few issues complicating the ability for WF to fully leverage the eviction manager for passivation:
> * Group-based eviction
> https://issues.jboss.org/browse/ISPN-4132

TBH I think storing the session as a DeltaAware would be a better fit for this requirement. Either way, I don't think it will make it into 7.0.

> * Clustered eviction (this one is really only an inconvenience for those of us using manual eviction since I can't use Infinispan eviction)
> https://issues.jboss.org/browse/ISPN-4134

I don't see this as a priority, you could just invoke a distributed task that calls cache.evict(k) on each node. Internally, we'd probably use a command, but the effect would be the same.

> Optimizations:
> * Enumerate cache keys for group
> https://issues.jboss.org/browse/ISPN-3900

I think Mircea already had a chat with Davide about implementing this in Infinispan. However, I don't see a lot of scope for optimizations over what you have already implemented for sessions.

> * Unloadable Key2StringMapper
> https://issues.jboss.org/browse/ISPN-3979

This sounds like a bug more than a feature request, it should definitely be included in 7.0.

From faseela.k at ericsson.com Thu Apr  3 02:03:10 2014
From: faseela.k at ericsson.com (Faseela K)
Date: Thu, 3 Apr 2014 06:03:10 +0000
Subject: [infinispan-dev] How to disconnect one node from a cluster?
Message-ID:

Hi,

I have Infinispan in clustered mode. Right now I use the JGroups gossip router to set all the cluster node IPs. I want to manually disconnect one node from the cluster. Is there any Infinispan API available for the same?

Thanks,
Faseela

From rvansa at redhat.com Thu Apr  3 02:48:41 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Thu, 03 Apr 2014 08:48:41 +0200
Subject: [infinispan-dev] How to disconnect one node from a cluster?
In-Reply-To:
References:
Message-ID: <533D0449.6060108@redhat.com>

Hi Faseela,

you've correctly asked this question on the forum [1]; please don't cross-post on this mailing list, as its purpose is discussing design decisions etc., not user support.

Radim

[1] https://community.jboss.org/message/866360?et=watches.email.thread#866360

On 04/03/2014 08:03 AM, Faseela K wrote:
> Hi,
> I have Infinispan in clustered mode.
> Right now I use the JGroups gossip router to set all the cluster node IPs.
> I want to manually disconnect one node from the cluster.
> Is there any Infinispan API available for the same?
> Thanks,
> Faseela
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss DataGrid QA

From dan.berindei at gmail.com Thu Apr  3 04:05:40 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Thu, 03 Apr 2014 08:08:40 +0003
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com>
Message-ID: <1396512340.2904.0@smtp.gmail.com>

Don't we want to allow the user to pass some data to the filter factory on registration?

Otherwise we'd force the user to write a separate filter factory class every time they want to track changes to a single key.

Cheers
Dan

On Wed, Apr 2, 2014 at 2:14 PM, Galder Zamarreño wrote:
> Hi all,
>
> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>
> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>
> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>
> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>
> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>
> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From rvansa at redhat.com Thu Apr  3 04:31:11 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Thu, 03 Apr 2014 10:31:11 +0200
Subject: [infinispan-dev] Eviction notification deprecated?
Message-ID: <533D1C4F.7070704@redhat.com>

Hi guys,

I've noticed that CacheEntryEvicted is marked as:

@deprecated Note that this annotation will be removed in Infinispan 6.0

(by Manik). Apparently it was not removed, and what are the prospects, anyway? I was not able to find any related JIRA.
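For reference, this is the callback style I mean (a minimal sketch):

    import org.infinispan.notifications.Listener;
    import org.infinispan.notifications.cachelistener.annotation.CacheEntryEvicted;
    import org.infinispan.notifications.cachelistener.event.CacheEntryEvictedEvent;

    // Minimal sketch of a listener using the deprecated callback.
    @Listener
    public class EvictionLogger {
       // This is the annotation carrying the @deprecated javadoc note.
       @CacheEntryEvicted
       public void entryEvicted(CacheEntryEvictedEvent event) {
          System.out.println("Evicted key: " + event.getKey());
       }
    }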
Radim

--
Radim Vansa
JBoss DataGrid QA

From dan.berindei at gmail.com Thu Apr  3 05:03:31 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Thu, 03 Apr 2014 09:06:31 +0003
Subject: [infinispan-dev] Eviction notification deprecated?
In-Reply-To: <533D1C4F.7070704@redhat.com>
References: <533D1C4F.7070704@redhat.com>
Message-ID: <1396515811.2904.1@smtp.gmail.com>

https://issues.jboss.org/browse/ISPN-720
Replace CacheEntryEvictedEvent with CacheEntriesEvictedEvent (note the plural)

On Thu, Apr 3, 2014 at 11:31 AM, Radim Vansa wrote:
> Hi guys,
>
> I've noticed that CacheEntryEvicted is marked as:
>
> @deprecated Note that this annotation will be removed in Infinispan 6.0
>
> (by Manik). Apparently it was not removed, and what are the prospects, anyway? I was not able to find any related JIRA.
>
> Radim
>
> --
> Radim Vansa
> JBoss DataGrid QA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From rvansa at redhat.com Thu Apr  3 05:38:13 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Thu, 03 Apr 2014 11:38:13 +0200
Subject: [infinispan-dev] New configuration
Message-ID: <533D2C05.9020609@redhat.com>

Hi,

looking at the new configuration parser, I've noticed that you cannot configure the ConsistentHashFactory anymore - is this by purpose?

Another concern of mine is the fact that you enable stuff by parsing the element - for example L1. I expect that omitting the element and setting it with the default value (as presented in the XSD) makes no difference, but this is not how the current configuration works.

My opinion comes probably too late as the PR was already reviewed, discussed and integrated, but at least, please clearly describe the behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 minutes." is not correct - it defaults to L1 being disabled.

Thanks

Radim

--
Radim Vansa
JBoss DataGrid QA

From rvansa at redhat.com Fri Apr  4 03:29:58 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Fri, 04 Apr 2014 09:29:58 +0200
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com>
Message-ID: <533E5F76.4050201@redhat.com>

Hi,

I still don't think that the document properly covers failover.

My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess). There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss it now.

> Listener registrations will survive node failures thanks to the underlying clustered listener implementation.

I am not that much into clustered listeners yet, but I think that the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, others will just forget about it.

> When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
Will that be all entries in the whole cache, or just from some node? I guess that the first is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (OK, filtered). Even if these entries have not changed, because the cluster can't know.

> This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.

> This way of handling failure means that clients will receive at-least-once delivery of cache updates. It might receive multiple events for the cache update as a result of topology changes handling.

So, if there are several modifications before the client reconnects and the new target registers the listener, the clients will get only a notification about the last modification, or rather just the entry content, right?

Radim

On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
> Hi all,
>
> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>
> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>
> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>
> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>
> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>
> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss DataGrid QA

From mudokonman at gmail.com Fri Apr  4 13:11:51 2014
From: mudokonman at gmail.com (William Burns)
Date: Fri, 4 Apr 2014 13:11:51 -0400
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <533E5F76.4050201@redhat.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com>
Message-ID:

On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote:
> Hi,
>
> I still don't think that the document properly covers failover.
>
> My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess).
> There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss it now.

There could be optimizations, but we have to worry about reordering if the primary owner doesn't do the forwarding. You could have the case of multiple writes to the same key from the clients, and let's say they send the message to the listener after they are written to the cache; there is no way to make sure they are delivered in the order they were written to the cache. We could do something with versions for this though.

>> Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
>
> I am not that much into clustered listeners yet, but I think that the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, others will just forget about it.

That is how it is; I assume Galder was referring to failures of nodes other than the one that registered the listener, which is obviously talked about in the next point.

>> When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
>
> Will that be all entries in the whole cache, or just from some node? I guess that the first is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (OK, filtered). Even if these entries have not changed, because the cluster can't know.

The former being that the entire filtered/converted contents will be sent over.

>> This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.
>
>> This way of handling failure means that clients will receive at-least-once delivery of cache updates. It might receive multiple events for the cache update as a result of topology changes handling.
>
> So, if there are several modifications before the client reconnects and the new target registers the listener, the clients will get only a notification about the last modification, or rather just the entry content, right?

This is all handled by the embedded cluster listeners though. But the end goal is you will only receive 1 event if the modification comes before the value was retrieved from the remote node, or 2 if afterwards. Also these modifications are queued by key, and so if you had multiple modifications before it retrieved the value it would only give you the last one.

> Radim
>
> On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
>
>> Hi all,
>>
>> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>>
>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>>
>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>>
>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations.
>> Such implementations need to act purely on the events received.
>>
>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>>
>> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>>
>> Cheers,
>>
>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>> --
>> Galder Zamarreño
>> galder at redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>
> --
> Radim Vansa
> JBoss DataGrid QA
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From galder at redhat.com Mon Apr  7 11:29:35 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Mon, 7 Apr 2014 17:29:35 +0200
Subject: [infinispan-dev] IRC weekly meeting
Message-ID: <995BD055-AF1A-4934-8B12-8FFEA45CEB54@redhat.com>

Hi,

Here's the IRC chat log for the weekly meeting we had earlier today:
http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-04-07-14.12.html

Cheers,
--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org

From rory.odonnell at oracle.com Tue Apr  8 04:42:49 2014
From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland)
Date: Tue, 08 Apr 2014 09:42:49 +0100
Subject: [infinispan-dev] JDK 9 build 06 is available on java.net
Message-ID: <5343B689.6060301@oracle.com>

Hi Galder,

JDK 9 Build 06 Early Access Build is now available for download & test.

Rgds, Rory

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland

From mmarkus at redhat.com Wed Apr  9 04:52:08 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Wed, 9 Apr 2014 09:52:08 +0100
Subject: [infinispan-dev] cutting 7.0.0.Alpha3
Message-ID:

I plan to do that at some point tomorrow.

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From paul.ferraro at redhat.com Wed Apr  9 12:36:20 2014
From: paul.ferraro at redhat.com (Paul Ferraro)
Date: Wed, 09 Apr 2014 12:36:20 -0400
Subject: [infinispan-dev] Infinispan 7.0 to Java 7
In-Reply-To: <04FC5117-C2E3-4187-9E3A-59B2A6915094@redhat.com>
References: <04FC5117-C2E3-4187-9E3A-59B2A6915094@redhat.com>
Message-ID: <1397061380.2547.23.camel@T520>

As an EE7 application server, WF already requires Java SE 7.

On Wed, 2014-04-09 at 17:30 +0100, Mircea Markus wrote:
> Hi guys,
>
> Hibernate Search 5.0 is moving to Java 7 (among other things, because Lucene 4.8 does it). For us it makes a lot of sense to bring in HSearch 5/Lucene 4 rather soon, as it's important for remote querying. How does that sound?
> Paul, how does that fit with the WF integration?
>
> Cheers,

From galder at redhat.com Wed Apr  9 12:37:00 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Wed, 9 Apr 2014 18:37:00 +0200
Subject: [infinispan-dev] New configuration
In-Reply-To: <533D2C05.9020609@redhat.com>
References: <533D2C05.9020609@redhat.com>
Message-ID:

On 03 Apr 2014, at 11:38, Radim Vansa wrote:

> Hi,
>
> looking at the new configuration parser, I've noticed that you cannot configure the ConsistentHashFactory anymore - is this by purpose?

^ Rather than being something the users should be tweaking, it's something that's used internally. So, I applied a bit of if-in-doubt-leave-it-out logic. I don't think we lose any major functionality with this.

> Another concern of mine is the fact that you enable stuff by parsing the element - for example L1. I expect that omitting the element and setting it with the default value (as presented in the XSD) makes no difference, but this is not how the current configuration works.

L1 is disabled by default. You enable it by configuring the L1 lifespan to be bigger than 0. The attribute definition follows the pattern that Paul did for the server side.

> My opinion comes probably too late as the PR was already reviewed, discussed and integrated, but at least, please clearly describe the behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 minutes." is not correct - it defaults to L1 being disabled.

Yeah, I'll update the XSD and documentation accordingly: https://issues.jboss.org/browse/ISPN-4195

Cheers

> Thanks
>
> Radim
>
> --
> Radim Vansa
> JBoss DataGrid QA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From mmarkus at redhat.com Wed Apr  9 12:30:41 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Wed, 9 Apr 2014 17:30:41 +0100
Subject: [infinispan-dev] Infinispan 7.0 to Java 7
Message-ID: <04FC5117-C2E3-4187-9E3A-59B2A6915094@redhat.com>

Hi guys,

Hibernate Search 5.0 is moving to Java 7 (among other things, because Lucene 4.8 does it). For us it makes a lot of sense to bring in HSearch 5/Lucene 4 rather soon, as it's important for remote querying. How does that sound?

Paul, how does that fit with the WF integration?

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From dan.berindei at gmail.com Wed Apr  9 13:38:43 2014
From: dan.berindei at gmail.com (Dan Berindei)
Date: Wed, 09 Apr 2014 17:39:43 +0001
Subject: [infinispan-dev] New configuration
In-Reply-To:
References: <533D2C05.9020609@redhat.com>
Message-ID: <1397065123.5324.2@smtp.gmail.com>

On Wed, Apr 9, 2014 at 5:37 PM, Galder Zamarreño wrote:
>
> On 03 Apr 2014, at 11:38, Radim Vansa wrote:
>
>> Hi,
>>
>> looking at the new configuration parser, I've noticed that you cannot configure the ConsistentHashFactory anymore - is this by purpose?
>
> ^ Rather than being something the users should be tweaking, it's something that's used internally. So, I applied a bit of if-in-doubt-leave-it-out logic. I don't think we lose any major functionality with this.

For now it's the only way for the user to use the SyncConsistentHashFactory, so it's not used just internally.

>> Another concern of mine is the fact that you enable stuff by parsing the element - for example L1.
>> I expect that omitting the element and setting it with the default value (as presented in the XSD) makes no difference, but this is not how the current configuration works.
>
> L1 is disabled by default. You enable it by configuring the L1 lifespan to be bigger than 0. The attribute definition follows the pattern that Paul did for the server side.
>
>> My opinion comes probably too late as the PR was already reviewed, discussed and integrated, but at least, please clearly describe the behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 minutes." is not correct - it defaults to L1 being disabled.
>
> Yeah, I'll update the XSD and documentation accordingly: https://issues.jboss.org/browse/ISPN-4195
>
> Cheers
>
>> Thanks
>>
>> Radim
>>
>> --
>> Radim Vansa
>> JBoss DataGrid QA
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From mmarkus at redhat.com Fri Apr 11 06:37:00 2014
From: mmarkus at redhat.com (Mircea Markus)
Date: Fri, 11 Apr 2014 11:37:00 +0100
Subject: [infinispan-dev] Infinispan 7.0.0.Alpha3 is out!
Message-ID: <290A93DF-6647-4398-9662-9621A435A002@redhat.com>

More about it here: http://blog.infinispan.org/2014/04/infinispan-700alpha3-is-out.html

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)

From ttarrant at redhat.com Fri Apr 11 07:35:18 2014
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Fri, 11 Apr 2014 13:35:18 +0200
Subject: [infinispan-dev] Infinispan Security #1: Authorization
Message-ID: <5347D376.4030307@redhat.com>

More gripping than a gripping spy novel:
http://blog.infinispan.org/2014/04/infinispan-security-1-authorization.html

Fortunately you don't need a valid javax.security.auth.Subject to be able to read it :)

Tristan

From galder at redhat.com Fri Apr 11 08:36:12 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Fri, 11 Apr 2014 14:36:12 +0200
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To:
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com>
Message-ID: <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com>

On 04 Apr 2014, at 19:11, William Burns wrote:

> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote:
>> Hi,
>>
>> I still don't think that the document properly covers failover.
>>
>> My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess). There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss it now.
>
> There could be optimizations, but we have to worry about reordering if the primary owner doesn't do the forwarding.
> You could have the case of multiple writes to the same key from the clients, and let's say they send the message to the listener after they are written to the cache; there is no way to make sure they are delivered in the order they were written to the cache. We could do something with versions for this though.

Versions do not provide global ordering. They are used, at each node, to identify an update, so they're incremented at the node level, mixed with some other node-specific data to make them unique cluster-wide. However, you can't assume global ordering based on those with the current implementation. I agree there's room for optimizations but I think correctness and ordering are more important right now.

>>> Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
>>
>> I am not that much into clustered listeners yet, but I think that the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, others will just forget about it.
>
> That is how it is; I assume Galder was referring to failures of nodes other than the one that registered the listener, which is obviously talked about in the next point.

That's correct.

>>> When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
>>
>> Will that be all entries in the whole cache, or just from some node? I guess that the first is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (OK, filtered). Even if these entries have not changed, because the cluster can't know.
>
> The former being that the entire filtered/converted contents will be sent over.

Indeed the former, but not the entire entry: only keys and latest versions will be sent by default. Converters can be used to send the value side too.

>>> This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.
>>
>>> This way of handling failure means that clients will receive at-least-once delivery of cache updates. It might receive multiple events for the cache update as a result of topology changes handling.
>>
>> So, if there are several modifications before the client reconnects and the new target registers the listener, the clients will get only a notification about the last modification, or rather just the entry content, right?

@Radim, you don't get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional.

> This is all handled by the embedded cluster listeners though. But the end goal is you will only receive 1 event if the modification comes before the value was retrieved from the remote node, or 2 if afterwards. Also these modifications are queued by key, and so if you had multiple modifications before it retrieved the value it would only give you the last one.

>> Radim
>>
>> On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
>>
>> Hi all,
>>
>> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>>
>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>>
>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>>
>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>>
>> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>>
>> Cheers,
>>
>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>> --
>> Galder Zamarreño
>> galder at redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>> --
>> Radim Vansa
>> JBoss DataGrid QA
>>
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

From mudokonman at gmail.com Fri Apr 11 09:24:34 2014
From: mudokonman at gmail.com (William Burns)
Date: Fri, 11 Apr 2014 09:24:34 -0400
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com>
Message-ID:

On Fri, Apr 11, 2014 at 8:36 AM, Galder Zamarreño wrote:
>
> On 04 Apr 2014, at 19:11, William Burns wrote:
>
>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote:
>>> Hi,
>>>
>>> I still don't think that the document properly covers failover.
>>>
>>> My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess). There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss it now.
>>
>> There could be optimizations, but we have to worry about reordering if the primary owner doesn't do the forwarding. You could have the case of multiple writes to the same key from the clients, and let's say they send the message to the listener after they are written to the cache; there is no way to make sure they are delivered in the order they were written to the cache.
>> We could do something with versions for this though.
>
> Versions do not provide global ordering. They are used, at each node, to identify an update, so they're incremented at the node level, mixed with some other node-specific data to make them unique cluster-wide. However, you can't assume global ordering based on those with the current implementation. I agree there's room for optimizations but I think correctness and ordering are more important right now.

Oh I agree that with what we have currently it wouldn't work. I was more thinking along the lines of: when you do an update, the response tells you what version it got when it was committed. That way you know in which order the writes were done. Embedded L1 would be able to use this as well to do some additional optimizations. However I don't know if this is worth it at this point though.

>>>> Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
>>>
>>> I am not that much into clustered listeners yet, but I think that the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, others will just forget about it.
>>
>> That is how it is; I assume Galder was referring to failures of nodes other than the one that registered the listener, which is obviously talked about in the next point.
>
> That's correct.
>
>>>> When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
>>>
>>> Will that be all entries in the whole cache, or just from some node? I guess that the first is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (OK, filtered). Even if these entries have not changed, because the cluster can't know.
>>
>> The former being that the entire filtered/converted contents will be sent over.
>
> Indeed the former, but not the entire entry: only keys and latest versions will be sent by default. Converters can be used to send the value side too.
>
>>>> This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.
>>>
>>>> This way of handling failure means that clients will receive at-least-once delivery of cache updates. It might receive multiple events for the cache update as a result of topology changes handling.
>>>
>>> So, if there are several modifications before the client reconnects and the new target registers the listener, the clients will get only a notification about the last modification, or rather just the entry content, right?
>
> @Radim, you don't get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional.
>
>> This is all handled by the embedded cluster listeners though. But the end goal is you will only receive 1 event if the modification comes before the value was retrieved from the remote node, or 2 if afterwards. Also these modifications are queued by key, and so if you had multiple modifications before it retrieved the value it would only give you the last one.
>>> Radim
>>>
>>> On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
>>>
>>> Hi all,
>>>
>>> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>>>
>>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>>>
>>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>>>
>>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>>>
>>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>>>
>>> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>>>
>>> Cheers,
>>>
>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>> --
>>> Galder Zamarreño
>>> galder at redhat.com
>>> twitter.com/galderz
>>>
>>> Project Lead, Escalante
>>> http://escalante.io
>>>
>>> Engineer, Infinispan
>>> http://infinispan.org
>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>>
>>> --
>>> Radim Vansa
>>> JBoss DataGrid QA
>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From rvansa at redhat.com Fri Apr 11 10:25:07 2014
From: rvansa at redhat.com (Radim Vansa)
Date: Fri, 11 Apr 2014 16:25:07 +0200
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com>
Message-ID: <5347FB43.3060405@redhat.com>

OK, now I get the picture. Every time we register with a node (whether the first time or after a previous node crash), we receive all (filtered) keys from the whole cache, along with versions. Optionally values as well. In case multiple modifications happen in the time window before registering with the new node, we don't get the notifications for them, just again the whole cache content, and it's up to the application to decide whether there was no modification or some modifications.
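So the application ends up doing something like this (just a sketch of what I imagine, there is no real client listener API yet):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: deciding on the client whether an entry really
    // changed, given just (key, version) events after a failover replay.
    public class LastSeenTracker {
       private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

       public void onEvent(String key, long version) {
          Long previous = lastSeen.put(key, version);
          if (previous == null || previous != version) {
             // New or modified entry -- but we cannot tell how many
             // modifications happened in between.
             refresh(key);
          }
          // previous == version: a replay of an entry we already saw, ignore.
       }

       private void refresh(String key) {
          // e.g. fetch the latest value from the remote cache
       }
    }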
As the version for entries is incremented per cache and not per value, there is no way to find out how many times the entry was modified (we can just know it was modified when we remember the previous version and these versions differ).

Thanks for the clarifications, Galder - I was not completely sure about this from the design doc.

Btw., could you address Dan's question: "Don't we want to allow the user to pass some data to the filter factory on registration? Otherwise we'd force the user to write a separate filter factory class every time they want to track changes to a single key."

I know this was already asked several times, but the discussion has always fizzled out. I haven't seen a final "NO".

Radim

On 04/11/2014 02:36 PM, Galder Zamarreño wrote:
> On 04 Apr 2014, at 19:11, William Burns wrote:
>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote:
>>> Hi,
>>>
>>> I still don't think that the document properly covers failover.
>>>
>>> My understanding is that the client registers clustered listeners on one server (the first one it connects to, I guess). There's some space for optimization, as the notification will be sent from the primary owner to this node and only then over Hot Rod to the client, but I don't want to discuss it now.
>> There could be optimizations, but we have to worry about reordering if the primary owner doesn't do the forwarding. You could have the case of multiple writes to the same key from the clients, and let's say they send the message to the listener after they are written to the cache; there is no way to make sure they are delivered in the order they were written to the cache. We could do something with versions for this though.
> Versions do not provide global ordering. They are used, at each node, to identify an update, so they're incremented at the node level, mixed with some other node-specific data to make them unique cluster-wide. However, you can't assume global ordering based on those with the current implementation. I agree there's room for optimizations but I think correctness and ordering are more important right now.
>
>>>> Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
>>> I am not that much into clustered listeners yet, but I think that the mechanism makes sure that when the primary owner changes, the new owner will then send the events. But when the node which registered the clustered listener dies, others will just forget about it.
>> That is how it is; I assume Galder was referring to failures of nodes other than the one that registered the listener, which is obviously talked about in the next point.
> That's correct.
>
>>>> When a client detects that the server which was serving the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whoever receives that request will again loop through its contents and send an event for each entry to the client.
>>> Will that be all entries in the whole cache, or just from some node? I guess that the first is correct. So, as soon as one node dies, all clients will be bombarded with the full cache content (OK, filtered). Even if these entries have not changed, because the cluster can't know.
>> The former being that the entire filtered/converted contents will be sent over.
> Indeed the former, but not the entire entry: only keys and latest versions will be sent by default. Converters can be used to send the value side too.
>
>>>> This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.
>>>>
>>>> This way of handling failure means that clients will receive at-least-once delivery of cache updates. It might receive multiple events for the cache update as a result of topology changes handling.
>>> So, if there are several modifications before the client reconnects and the new target registers the listener, the clients will get only a notification about the last modification, or rather just the entry content, right?
> @Radim, you don't get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional.
>
>> This is all handled by the embedded cluster listeners though. But the end goal is you will only receive 1 event if the modification comes before the value was retrieved from the remote node, or 2 if afterwards. Also these modifications are queued by key, and so if you had multiple modifications before it retrieved the value it would only give you the last one.
>
>>> Radim
>>>
>>> On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
>>>
>>> Hi all,
>>>
>>> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>>>
>>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>>>
>>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>>>
>>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>>>
>>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>>>
>>> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>>>
>>> Cheers,
>>>
>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>> --
>>> Galder Zamarreño
>>> galder at redhat.com
>>> twitter.com/galderz
>>>
>>> Project Lead, Escalante
>>> http://escalante.io
>>>
>>> Engineer, Infinispan
>>> http://infinispan.org
>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>>
>>> --
>>> Radim Vansa
>>> JBoss DataGrid QA
>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss DataGrid QA

From galder at redhat.com Fri Apr 11 08:03:07 2014
From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=)
Date: Fri, 11 Apr 2014 14:03:07 +0200
Subject: [infinispan-dev] Remote Hot Rod events wiki updated
In-Reply-To: <1396512340.2904.0@smtp.gmail.com>
References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <1396512340.2904.0@smtp.gmail.com>
Message-ID:

On 03 Apr 2014, at 10:05, Dan Berindei wrote:

> Don't we want to allow the user to pass some data to the filter factory on registration?
>
> Otherwise we'd force the user to write a separate filter factory class every time they want to track changes to a single key.

Possibly, I did consider passing some data from the client to the filter/converter factory objects, but could not think of a very clean solution. One option would be for the protocol to specify a vInt indicating the number of parameters, and then each parameter as a byte[] with its length prepended. A Java Hot Rod client could marshal the parameters into byte[]. The server-side implementations could receive an Object[] parameter in the callback with the unmarshalled versions.

> Cheers
> Dan
>
> On Wed, Apr 2, 2014 at 2:14 PM, Galder Zamarreño wrote:
>> Hi all,
>>
>> I've finally managed to get around to updating the remote Hot Rod event design wiki [1].
>>
>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>>
>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side.
>>
>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>>
>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created.
>> >> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release. >> >> Cheers, >> >> [1] >> https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> Project Lead, Escalante >> >> http://escalante.io >> >> >> Engineer, Infinispan >> >> http://infinispan.org >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From rvansa at redhat.com Mon Apr 14 04:06:21 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 14 Apr 2014 10:06:21 +0200 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <1396512340.2904.0@smtp.gmail.com> Message-ID: <534B96FD.3080908@redhat.com> On 04/11/2014 02:03 PM, Galder Zamarreño wrote: > On 03 Apr 2014, at 10:05, Dan Berindei wrote: > >> Don't we want to allow the user to pass some data to the filter factory on registration? >> >> Otherwise we'd force the user to write a separate filter factory class every time they want to track changes to a single key. > Possibly. I did consider passing some data from the client to the filter/converter factory objects, but could not think of a very clean solution. One option would be for the protocol to specify a vInt, indicating the number of parameters, and then each parameter as byte[] with its length prepended. A Java Hot Rod client could marshal the parameters into byte[]. Server-side implementations could then receive an Object[] parameter in the callback with the unmarshalled versions. From the protocol perspective, a byte array is IMO the simplest = most elegant option. Server implementations must be able to process any byte array as well (in order to support non-Java clients) - therefore, there has to be an interface accepting raw byte[]. For convenience, we could provide an abstract wrapper implementing the interface, unmarshalling the parameters into an Object[] and passing them to an abstract method. Radim > >> Cheers >> Dan >> >> >> On Wed, Apr 2, 2014 at 2:14 PM, Galder Zamarreño wrote: >>> Hi all, >>> >>> I've finally managed to get around to updating the remote hot rod event design wiki [1]. >>> >>> The biggest changes are related to piggybacking on the cluster listeners functionality in order to handle registration/deregistration of listeners and handling failure scenarios. This should simplify the actual implementation on the Hot Rod side. >>> >>> Based on feedback, I've also changed some of the class names so that it's clearer what's client side and what's server side. >>> >>> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received. >>> >>> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created.
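For illustration, the raw interface plus convenience wrapper could look something like this (all names are hypothetical, and KeyValueFilter stands in for whatever filter type the wiki ends up with):

    // Raw contract: byte[][] so that non-Java clients can pass parameters too.
    public interface ClientFilterFactory {
       KeyValueFilter<?, ?> getFilter(byte[][] params);
    }

    // Java-friendly base class: unmarshals the raw parameters into an Object[]
    // before delegating to the subclass.
    public abstract class AbstractUnmarshallingFilterFactory implements ClientFilterFactory {
       private final Marshaller marshaller;

       protected AbstractUnmarshallingFilterFactory(Marshaller marshaller) {
          this.marshaller = marshaller;
       }

       @Override
       public final KeyValueFilter<?, ?> getFilter(byte[][] params) {
          try {
             Object[] unmarshalled = new Object[params.length];
             for (int i = 0; i < params.length; i++)
                unmarshalled[i] = marshaller.objectFromByteBuffer(params[i]);
             return getFilter(unmarshalled);
          } catch (Exception e) {
             throw new IllegalArgumentException("Cannot unmarshal filter parameters", e);
          }
       }

       protected abstract KeyValueFilter<?, ?> getFilter(Object[] params);
    }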
This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now). >>> >>> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release. >>> >>> Cheers, >>> >>> [1] >>> https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>> >>> -- >>> Galder Zamarreño >>> galder at redhat.com >>> twitter.com/galderz >>> >>> Project Lead, Escalante >>> >>> http://escalante.io >>> >>> >>> Engineer, Infinispan >>> >>> http://infinispan.org >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Tue Apr 15 08:31:57 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 15 Apr 2014 14:31:57 +0200 Subject: [infinispan-dev] New configuration In-Reply-To: <1397065123.5324.2@smtp.gmail.com> References: <533D2C05.9020609@redhat.com> <1397065123.5324.2@smtp.gmail.com> Message-ID: <7F64ED41-638F-43EE-A37A-E62B655A6B16@redhat.com> On 09 Apr 2014, at 19:38, Dan Berindei wrote: > > > On Wed, Apr 9, 2014 at 5:37 PM, Galder Zamarreño wrote: >> >> On 03 Apr 2014, at 11:38, Radim Vansa < >> rvansa at redhat.com >> > wrote: >> >> >> Hi, >> >> looking at the new configuration parser, I've noticed that you cannot >> configure ConsistentHashFactory anymore - is this on purpose? >> >> >> ^ Rather than being something the users should be tweaking, it's something that's used internally. So, I applied a bit of if-in-doubt-leave-it-out logic. I don't think we lose any major functionality with this. >> > > For now it's the only way for the user to use the SyncConsistentHashFactory, so it's not used just internally. What's the use case for that? The javadoc is not very clear on the benefits of using it. Cheers, > >> >> >> >> Another concern of mine is the fact that you enable stuff by parsing the >> element - for example L1. I expect that omitting the element and setting >> it with the default value (as presented in XSD) makes no difference, but >> this is not how current configuration works. >> >> >> L1 is disabled by default. You enable it by configuring the L1 lifespan to be bigger than 0. The attribute definition follows the pattern that Paul did for the server side. >> >> >> My opinion comes probably too late as the PR was already reviewed, >> discussed and integrated, but at least, please clearly describe the >> behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 >> minutes." is not correct - it defaults to L1 being disabled.
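^ In programmatic terms the same rule would read roughly like this (a sketch against the ConfigurationBuilder API; exact chaining may vary slightly between versions):

    // L1 stays disabled unless a lifespan > 0 is configured, mirroring the XSD.
    Configuration withL1 = new ConfigurationBuilder()
       .clustering().cacheMode(CacheMode.DIST_SYNC)
          .l1().lifespan(600000)   // 10 minutes in millis, so L1 is enabled
       .build();

    Configuration withoutL1 = new ConfigurationBuilder()
       .clustering().cacheMode(CacheMode.DIST_SYNC)   // no l1() call, so L1 stays off
       .build();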
>> >> >> Yeah, I'll update the XSD and documentation accordingly: >> >> https://issues.jboss.org/browse/ISPN-4195 >> >> >> Cheers >> >> >> >> Thanks >> >> Radim >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From rvansa at redhat.com Tue Apr 15 10:29:18 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 15 Apr 2014 16:29:18 +0200 Subject: [infinispan-dev] New configuration In-Reply-To: <7F64ED41-638F-43EE-A37A-E62B655A6B16@redhat.com> References: <533D2C05.9020609@redhat.com> <1397065123.5324.2@smtp.gmail.com> <7F64ED41-638F-43EE-A37A-E62B655A6B16@redhat.com> Message-ID: <534D423E.9050001@redhat.com> On 04/15/2014 02:31 PM, Galder Zamarreño wrote: > On 09 Apr 2014, at 19:38, Dan Berindei wrote: > >> >> On Wed, Apr 9, 2014 at 5:37 PM, Galder Zamarreño wrote: >>> On 03 Apr 2014, at 11:38, Radim Vansa < >>> rvansa at redhat.com >>>> wrote: >>> >>> Hi, >>> >>> looking at the new configuration parser, I've noticed that you cannot >>> configure ConsistentHashFactory anymore - is this on purpose? >>> >>> >>> ^ Rather than being something the users should be tweaking, it's something that's used internally. So, I applied a bit of if-in-doubt-leave-it-out logic. I don't think we lose any major functionality with this. >>> >> For now it's the only way for the user to use the SyncConsistentHashFactory, so it's not used just internally. > What's the use case for that? The javadoc is not very clear on the benefits of using it. > One use case I've noticed is having two caches with the same keys, and a modification listener handler that retrieves data from the other cache. In order to execute the listener promptly, you don't want to execute remote gets, and therefore it's useful to have the hashes synchronized. Radim > >>> >>> >>> Another concern of mine is the fact that you enable stuff by parsing the >>> element - for example L1. I expect that omitting the element and setting >>> it with the default value (as presented in XSD) makes no difference, but >>> this is not how current configuration works. >>> >>> >>> L1 is disabled by default. You enable it by configuring the L1 lifespan to be bigger than 0. The attribute definition follows the pattern that Paul did for the server side. >>> >>> >>> My opinion comes probably too late as the PR was already reviewed, >>> discussed and integrated, but at least, please clearly describe the >>> behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 >>> minutes." is not correct - it defaults to L1 being disabled.
>>> >>> Yeah, I'll update the XSD and documentation accordingly: >>> >>> https://issues.jboss.org/browse/ISPN-4195 >>> >>> >>> Cheers >>> >>> >>> >>> Thanks >>> >>> Radim >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> -- >>> Galder Zamarreño >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From dan.berindei at gmail.com Tue Apr 15 10:52:35 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 15 Apr 2014 14:55:35 +0003 Subject: [infinispan-dev] New configuration In-Reply-To: <534D423E.9050001@redhat.com> References: <533D2C05.9020609@redhat.com> <1397065123.5324.2@smtp.gmail.com> <7F64ED41-638F-43EE-A37A-E62B655A6B16@redhat.com> <534D423E.9050001@redhat.com> Message-ID: <1397573555.6281.5@smtp.gmail.com> On Tue, Apr 15, 2014 at 5:29 PM, Radim Vansa wrote: > > On 04/15/2014 02:31 PM, Galder Zamarreño wrote: >> On 09 Apr 2014, at 19:38, Dan Berindei >> wrote: >> >>> >>> On Wed, Apr 9, 2014 at 5:37 PM, Galder Zamarreño >>> wrote: >>>> On 03 Apr 2014, at 11:38, Radim Vansa < >>>> rvansa at redhat.com >>>>> wrote: >>>> >>>> Hi, >>>> >>>> looking at the new configuration parser, I've noticed that you >>>> cannot >>>> configure ConsistentHashFactory anymore - is this on purpose? >>>> >>>> >>>> ^ Rather than being something the users should be tweaking, >>>> it's something that's used internally. So, I applied a bit of >>>> if-in-doubt-leave-it-out logic. I don't think we lose any major >>>> functionality with this. >>>> >>> For now it's the only way for the user to use the >>> SyncConsistentHashFactory, so it's not used just internally. >> What's the use case for that? The javadoc is not very clear on >> the benefits of using it. >> > > One use case I've noticed is having two caches with the same keys, and > a modification listener handler that retrieves data from the other cache. > In order to execute the listener promptly, you don't want to execute remote > gets, and therefore it's useful to have the hashes synchronized. Erik is using it with distributed tasks. Normally, having keys with the same group in multiple caches doesn't guarantee that the keys are all located on the same nodes, which means we can't guarantee that a distributed task that accesses multiple caches has all the keys it needs locally just with grouping. SyncConsistentHashFactory fixes that. > > Radim > >> >>>> >>>> >>>> Another concern of mine is the fact that you enable stuff by parsing >>>> the >>>> element - for example L1. I expect that omitting the element >>>> and setting >>>> it with the default value (as presented in XSD) makes no >>>> difference, but >>>> this is not how current configuration works. >>>> >>>> >>>> L1 is disabled by default.
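Re the SyncConsistentHashFactory point above, for anyone wondering what the wiring looks like: something along these lines (a sketch only; the cache names are made up, cacheManager is an EmbeddedCacheManager, and the class's package has moved between versions):

    ConfigurationBuilder cb = new ConfigurationBuilder();
    cb.clustering().cacheMode(CacheMode.DIST_SYNC)
       .hash().consistentHashFactory(new SyncConsistentHashFactory()).numSegments(60);

    // Same factory and segment count in both caches means a key maps to the
    // same nodes in both, so a distributed task (or listener) can read locally.
    cacheManager.defineConfiguration("orders", cb.build());
    cacheManager.defineConfiguration("order-index", cb.build());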
You enable it by configuring the L1 >>>> lifespan to be bigger than 0. The attribute definition follows the >>>> pattern that Paul did for the server side. >>>> >>>> >>>> My opinion comes probably too late as the PR was already >>>> reviewed, >>>> discussed and integrated, but at least, please clearly describe >>>> the >>>> behaviour in the XSD. The fact that l1-lifespan "Defaults to 10 >>>> minutes." is not correct - it defaults to L1 being disabled. >>>> >>>> >>>> Yeah, I'll update the XSD and documentation accordingly: >>>> >>>> https://issues.jboss.org/browse/ISPN-4195 >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> Thanks >>>> >>>> Radim >>>> >>>> -- >>>> Radim Vansa >>>> JBoss DataGrid QA >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> -- >>>> Galder Zamarreño >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140415/5b183ef6/attachment-0001.html From bibryam at gmail.com Wed Apr 16 03:24:35 2014 From: bibryam at gmail.com (Bilgin Ibryam) Date: Wed, 16 Apr 2014 08:24:35 +0100 Subject: [infinispan-dev] OSGi Message-ID: Hi Emmanuel, I will give it another try these days, but I believe OSGi support in Infinispan is almost there now. As for Camel integration, having a string-based query language would definitely be useful. For the time being I've provided a QueryBuilderStrategy where the user has to manually build a QueryBuilder from a QueryFactory. Not ideal, but it works. Cheers, Brett has offered to help you but I know for sure he won't lead it. He > would be more like a good expert to talk to. > Bilgin has shown a Camel integration prototype but he also seemed to imply > that he had some significant problems that needed Infinispan improvements. > > Also, I'm not quite sure but looking at these Camel routes, they seem to > be very URI driven. If we want to support queries over a Camel route and > express them via a URI, we will need a string based query language. I might > be talking nonsense and somehow the query is written in Java. But better > anticipate.
Bilgin would know more, he has written in his demo > CamelInfinispanOperationQuery after all :) > > Emmanuel > > > ------------------------------ > > Message: 7 > Date: Thu, 27 Mar 2014 11:32:37 +0100 > From: Giovanni Meo > Subject: Re: [infinispan-dev] OSGi > To: infinispan -Dev List , > emmanuel at hibernate.org > Message-ID: <5333FE45.3080909 at cisco.com> > Content-Type: text/plain; charset=windows-1252 > > Hi Emmanuel and infinispan folks, > > we have been using Infinispan in an OSGi environment in a project called > OpenDaylight, if interested you can look at: > > > > https://git.opendaylight.org/gerrit/gitweb?p=controller.git;a=blob;f=opendaylight/clustering/services_implementation/pom.xml;h=d7a3db3841888f3c08e5cf8795aa42cf9cd9b4bc;hb=HEAD > > Granted we are using a tiny part of the Infinispan capabilities, but we > found it > very helpful to first of all define the contract the applications would > have > with Infinispan. For other issues like the classloading, we just made > sure > to provide a ClassResolver that always enforces the lookup in the OSGi > class > loader, and in spite of some initial unreliability things have been doing > OK for us. > > My 2 cents, > Giovanni > > On 27-Mar-14 10:28, Emmanuel Bernard wrote: > > Hey guys, > > > > Sanne and Hardy are working on the OSGi-ification of Hibernate Search > and it > > does not come without trouble. > > > > Who is leading this effort on the Infinispan side? I recommend you start > > early in a release cycle because you will have to butcher APIs and > packages > > to do it properly. Worse, you will suffer from your dependencies. > > > > Brett has offered to help you but I know for sure he won't lead it. He > would > > be more like a good expert to talk to. Bilgin has shown a Camel > integration > > prototype but he also seemed to imply that he had some significant > problems > > that needed Infinispan improvements. > > > > Also, I'm not quite sure but looking at these Camel routes, they seem to > be > > very URI driven. If we want to support queries over a Camel route and > express > > them via a URI, we will need a string based query language. I might be > > talking nonsense and somehow the query is written in Java. But better > > anticipate. Bilgin would know more, he has written in his demo > > CamelInfinispanOperationQuery after all :) > > > > Emmanuel _______________________________________________ infinispan-dev > > mailing list infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Giovanni Meo > Via del Serafico, 200 Telephone: +390651644000 > 00142, Roma Mobile: +393480700958 > Italia Fax: +390651645917 > VOIP: 8-3964000 > "The pessimist complains about the wind; > the optimist expects it to change; > the realist adjusts the sails." -- Wm. Arthur Ward > IETF credo: "Rough consensus and running code" > > > ------------------------------ > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > End of infinispan-dev Digest, Vol 60, Issue 30 > ********************************************** > -- Bilgin Ibryam Apache Camel & Apache OFBiz committer Blog: ofbizian.com Twitter: @bibryam Author of Instant Apache Camel Message Routing http://www.amazon.com/dp/1783283475 -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140416/77b62f56/attachment.html From galder at redhat.com Wed Apr 16 11:14:38 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 16 Apr 2014 16:14:38 +0100 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <5347FB43.3060405@redhat.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> Message-ID: <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> On 11 Apr 2014, at 15:25, Radim Vansa wrote: > OK, now I get the picture. Every time we register to a node (whether the > first time or after previous node crash), we receive all (filtered) keys > from the whole cache, along with versions. Optionally values as well. Exactly. > In case that multiple modifications happen in the time window before > registering to the new cache, we don't get the notification for them, > just again the whole cache and it's up to the application to decide whether > there was no modification or some modifications. I'm yet to decide on the type of event exactly here, whether cache entry created, cache entry modified or a different one, but regardless, you'd get the key and the server side version associated with that key. A user provided client listener implementation could detect which keys' versions have changed and react to that, i.e. lazily fetch new values. One such user provided client listener implementation could be a listener that maintains a near cache for example. > As the version for > entries is incremented per cache and not per value, there is no way to > find out how many times the entry was modified (we can just know it was > modified when we remember the previous version and these versions differ). Exactly, the only assumption you can make is that the version is different, and that it's a newer version than the older one. > Thanks for the clarifications, Galder - I was not completely sure about > this from the design doc. No probs > Btw., could you address Dan's question: > > "Don't we want to allow the user to pass some data to the filter factory > on registration? > Otherwise we'd force the user to write a separate filter factory class > every time they want to track changes to a single key." > > I know this was already asked several times, but the discussion has > always dissolved. I haven't seen the final "NO". > > Radim > > On 04/11/2014 02:36 PM, Galder Zamarreño wrote: >> On 04 Apr 2014, at 19:11, William Burns wrote: >> >>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote: >>>> Hi, >>>> >>>> I still don't think that the document properly covers the description of >>>> failover. >>>> >>>> My understanding is that the client registers clustered listeners on one server >>>> (the first one it connects to, I guess). There's some space for optimization, >>>> as the notification will be sent from the primary owner to this node and only >>>> then over Hot Rod to the client, but I don't want to discuss it now. >>> There could be optimizations, but we have to worry about reordering if >>> the primary owner doesn't do the forwarding. You could have the case >>> of multiple writes to the same key from the clients and let's say they >>> send the message to the listener after they are written to the cache, >>> there is no way to make sure they are done in the order they were >>> written to the cache. We could do something with versions for this >>> though.
>> Versions do not provide global ordering. They are used, at each node, to identify an update, so they?re incrementing at the node level, mixed with some other data that?s node specific to make them unique cluster wide. However, you can?t assume global ordering based on those with the current implementation. I agree there?s room for optimizations but I think correctness and ordering are more important right now. >> >>>>> Listener registrations will survive node failures thanks to the underlying >>>>> clustered listener implementation. >>>> I am not that much into clustered listeners yet, but I think that the >>>> mechanism makes sure that when the primary owner changes, the new owner will >>>> then send the events. But when the node which registered the clustered >>>> listener dies, others will just forgot about it. >>> That is how it is, I assume Galder was referring to node failures not >>> on the one that registered the listener, which is obviously talked >>> about in the next point. >> That?s correct. >> >>>>> When a client detects that the server which was serving the events is >>>>> gone, it needs to resend it's registration to one of the nodes in the >>>>> cluster. Whoever receives that request will again loop through its contents >>>>> and send an event for each entry to the client. >>>> Will that be all entries in the whole cache, or just from some node? I guess >>>> that the first is correct. So, as soon as one node dies, all clients will be >>>> bombarded by the full cache content (ok, filtered). Even if these entries >>>> have not changed, because the cluster can't know. >>> The former being that the entire filtered/converted contents will be sent over. >> Indeed the former, but the entire entry, only keys, and latest versions, will be sent by default. Converters can be used to send value side too. >> >>>>> This way the client avoids loosing events. Once all entries have been >>>>> iterated over, on-going events will be sent to the client. >>>>> This way of handling failure means that clients will receive at-least-once >>>>> delivery of cache updates. It might receive multiple events for the cache >>>>> update as a result of topology changes handling. >>>> So, if there are several modifications before the client reconnects and the >>>> new target registers the listener, the clients will get only notification >>>> about the last modification, or rather just the entry content, right? >> @Radim, you don?t get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional. >> >>> This is all handled by the embedded cluster listeners though. But the >>> end goal is you will only receive 1 event if the modification comes >>> before value was retrieved from the remote node or 2 if afterwards. >>> Also these modifications are queued by key and so if you had multiple >>> modifications before it retrieved the value it would only give you the >>> last one. >>> >>>> Radim >>>> >>>> >>>> On 04/02/2014 01:14 PM, Galder Zamarre?o wrote: >>>> >>>> Hi all, >>>> >>>> I've finally managed to get around to updating the remote hot rod event >>>> design wiki [1]. >>>> >>>> The biggest changes are related to piggybacking on the cluster listeners >>>> functionality in order to for registration/deregistration of listeners and >>>> handling failure scenarios. This should simplify the actual implementation >>>> on the Hot Rod side. 
>>>> >>>> Based on feedback, I've also changed some of the class names so that it's >>>> clearer what's client side and what's server side. >>>> >>>> A very important change is the fact that source id information has gone. >>>> This is primarily because near-cache like implementations cannot make >>>> assumptions on what to store in the near caches when the client invokes >>>> operations. Such implementations need to act purely on the events received. >>>> >>>> Finally, a filter/converter plugging mechanism will be done via factory >>>> implementations, which provide more flexibility on the way filter/converter >>>> instances are created. This opens the possibility for filter/converter >>>> factory parameters to be added to the protocol and passed, after >>>> unmarshalling, to the factory callbacks (this is not included right now). >>>> >>>> I hope to get started on this in the next few days, so feedback at this >>>> point is crucial to get a solid first release. >>>> >>>> Cheers, >>>> >>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> Project Lead, Escalante >>>> http://escalante.io >>>> >>>> Engineer, Infinispan >>>> http://infinispan.org >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> -- >>>> Radim Vansa >>>> JBoss DataGrid QA >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mudokonman at gmail.com Wed Apr 16 11:38:07 2014 From: mudokonman at gmail.com (William Burns) Date: Wed, 16 Apr 2014 11:38:07 -0400 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> Message-ID: On Wed, Apr 16, 2014 at 11:14 AM, Galder Zamarre?o wrote: > > On 11 Apr 2014, at 15:25, Radim Vansa wrote: > >> OK, now I get the picture. Every time we register to a node (whether the >> first time or after previous node crash), we receive all (filtered) keys >> from the whole cache, along with versions. Optionally values as well. > > Exactly. 
> >> In case that multiple modifications happen in the time window before >> registering to the new cache, we don't get the notification for them, >> just again the whole cache and it's up to application to decide whether >> there was no modification or some modifications. > > I?m yet to decide on the type of event exactly here, whether cache entry created, cache entry modified or a different one, but regardless, you?d get the key and the server side version associated with that key. A user provided client listener implementation could detect which keys? versions have changed and react to that, i.e. lazily fetch new values. One such user provided client listener implementation could be a listener that maintains a near cache for example. My current code was planning on raising a CacheEntryCreatedEvent in this case. I didn't see any special reason to require a new event type, unless anyone can think of a use case? > >> As the version for >> entries is incremented per cache and not per value, there is no way to >> find out how many times the entry was modified (we can just know it was >> modified when we remember the previous version and these versions differ). > > Exaclty, the only assumption you can make is that the version it?s different, and that?s it?s a newer version that the older one. > >> Thanks for the clarifications, Galder - I was not completely sure about >> this from the design doc. > > No probs > >> Btw., could you address Dan's question: >> >> "Don't we want to allow the user to pass some data to the filter factory >> on registration? >> Otherwise we'd force the user to write a separate filter factory class >> every time they want to track changes to a single key." >> >> I know this was already asked several times, but the discussion has >> always dissolved. I haven't seen the final "NO?. > >> >> Radim >> >> On 04/11/2014 02:36 PM, Galder Zamarre?o wrote: >>> On 04 Apr 2014, at 19:11, William Burns wrote: >>> >>>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote: >>>>> Hi, >>>>> >>>>> I still don't think that the document covers properly the description of >>>>> failover. >>>>> >>>>> My understanding is that client registers clustered listeners on one server >>>>> (the first one it connects, I guess). There's some space for optimization, >>>>> as the notification will be sent from primary owner to this node and only >>>>> then over hotrod to the client, but I don't want to discuss it now. >>>> There could be optimizations, but we have to worry about reordering if >>>> the primary owner doesn't do the forwarding. You could have the case >>>> of multiple writes to the same key from the clients and lets say they >>>> send the message to the listener after they are written to the cache, >>>> there is no way to make sure they are done in the order they were >>>> written to the cache. We could do something with versions for this >>>> though. >>> Versions do not provide global ordering. They are used, at each node, to identify an update, so they?re incrementing at the node level, mixed with some other data that?s node specific to make them unique cluster wide. However, you can?t assume global ordering based on those with the current implementation. I agree there?s room for optimizations but I think correctness and ordering are more important right now. >>> >>>>>> Listener registrations will survive node failures thanks to the underlying >>>>>> clustered listener implementation. 
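For what it's worth, on the client side that plan would make a listener look roughly like this (a sketch only; the annotation and event names follow the wiki proposal and are not final API):

    // Tracks key -> last seen server-side version.
    @ClientListener
    public class KeyVersionTracker {
       private final ConcurrentMap<Object, Long> versions = new ConcurrentHashMap<>();

       @ClientCacheEntryCreated
       public void created(ClientCacheEntryCreatedEvent event) {
          // Fired for genuine creations and for the key dump replayed after a
          // failover re-registration; only key + version arrive by default.
          versions.put(event.getKey(), event.getVersion());
       }

       @ClientCacheEntryModified
       public void modified(ClientCacheEntryModifiedEvent event) {
          versions.put(event.getKey(), event.getVersion());   // fetch the value lazily if needed
       }
    }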
>>>>> I am not that much into clustered listeners yet, but I think that the >>>>> mechanism makes sure that when the primary owner changes, the new owner will >>>>> then send the events. But when the node which registered the clustered >>>>> listener dies, others will just forgot about it. >>>> That is how it is, I assume Galder was referring to node failures not >>>> on the one that registered the listener, which is obviously talked >>>> about in the next point. >>> That?s correct. >>> >>>>>> When a client detects that the server which was serving the events is >>>>>> gone, it needs to resend it's registration to one of the nodes in the >>>>>> cluster. Whoever receives that request will again loop through its contents >>>>>> and send an event for each entry to the client. >>>>> Will that be all entries in the whole cache, or just from some node? I guess >>>>> that the first is correct. So, as soon as one node dies, all clients will be >>>>> bombarded by the full cache content (ok, filtered). Even if these entries >>>>> have not changed, because the cluster can't know. >>>> The former being that the entire filtered/converted contents will be sent over. >>> Indeed the former, but the entire entry, only keys, and latest versions, will be sent by default. Converters can be used to send value side too. >>> >>>>>> This way the client avoids loosing events. Once all entries have been >>>>>> iterated over, on-going events will be sent to the client. >>>>>> This way of handling failure means that clients will receive at-least-once >>>>>> delivery of cache updates. It might receive multiple events for the cache >>>>>> update as a result of topology changes handling. >>>>> So, if there are several modifications before the client reconnects and the >>>>> new target registers the listener, the clients will get only notification >>>>> about the last modification, or rather just the entry content, right? >>> @Radim, you don?t get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional. >>> >>>> This is all handled by the embedded cluster listeners though. But the >>>> end goal is you will only receive 1 event if the modification comes >>>> before value was retrieved from the remote node or 2 if afterwards. >>>> Also these modifications are queued by key and so if you had multiple >>>> modifications before it retrieved the value it would only give you the >>>> last one. >>>> >>>>> Radim >>>>> >>>>> >>>>> On 04/02/2014 01:14 PM, Galder Zamarre?o wrote: >>>>> >>>>> Hi all, >>>>> >>>>> I've finally managed to get around to updating the remote hot rod event >>>>> design wiki [1]. >>>>> >>>>> The biggest changes are related to piggybacking on the cluster listeners >>>>> functionality in order to for registration/deregistration of listeners and >>>>> handling failure scenarios. This should simplify the actual implementation >>>>> on the Hot Rod side. >>>>> >>>>> Based on feedback, I've also changed some of the class names so that it's >>>>> clearer what's client side and what's server side. >>>>> >>>>> A very important change is the fact that source id information has gone. >>>>> This is primarily because near-cache like implementations cannot make >>>>> assumptions on what to store in the near caches when the client invokes >>>>> operations. Such implementations need to act purely on the events received. 
>>>>> >>>>> Finally, a filter/converter plugging mechanism will be done via factory >>>>> implementations, which provide more flexibility on the way filter/converter >>>>> instances are created. This opens the possibility for filter/converter >>>>> factory parameters to be added to the protocol and passed, after >>>>> unmarshalling, to the factory callbacks (this is not included right now). >>>>> >>>>> I hope to get started on this in the next few days, so feedback at this >>>>> point is crucial to get a solid first release. >>>>> >>>>> Cheers, >>>>> >>>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>>>> -- >>>>> Galder Zamarre?o >>>>> galder at redhat.com >>>>> twitter.com/galderz >>>>> >>>>> Project Lead, Escalante >>>>> http://escalante.io >>>>> >>>>> Engineer, Infinispan >>>>> http://infinispan.org >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss DataGrid QA >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Galder Zamarre?o >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Thu Apr 17 03:03:46 2014 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 17 Apr 2014 09:03:46 +0200 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> Message-ID: <534F7CD2.5090701@redhat.com> On 04/16/2014 05:38 PM, William Burns wrote: > On Wed, Apr 16, 2014 at 11:14 AM, Galder Zamarre?o wrote: >> On 11 Apr 2014, at 15:25, Radim Vansa wrote: >> >>> OK, now I get the picture. Every time we register to a node (whether the >>> first time or after previous node crash), we receive all (filtered) keys >>> from the whole cache, along with versions. Optionally values as well. >> Exactly. >> >>> In case that multiple modifications happen in the time window before >>> registering to the new cache, we don't get the notification for them, >>> just again the whole cache and it's up to application to decide whether >>> there was no modification or some modifications. 
>> I?m yet to decide on the type of event exactly here, whether cache entry created, cache entry modified or a different one, but regardless, you?d get the key and the server side version associated with that key. A user provided client listener implementation could detect which keys? versions have changed and react to that, i.e. lazily fetch new values. One such user provided client listener implementation could be a listener that maintains a near cache for example. > My current code was planning on raising a CacheEntryCreatedEvent in > this case. I didn't see any special reason to require a new event > type, unless anyone can think of a use case? When the code cannot rely on the fact that created = (null -> some) and modified = (some -> some), it seems to me that the user will have to handle the events in the same way. I don't see the reason to differentiate between them in protocol anyway. One problem that has come to my mind: what about removed entries? If you push the keyset to the client, without marking start and end of these events (and expecting the client to fire removed events for all not mentioned keys internally), the client can miss some entry deletion forever. Are the tombstones planned for any particular version of Infinispan? Radim > >>> As the version for >>> entries is incremented per cache and not per value, there is no way to >>> find out how many times the entry was modified (we can just know it was >>> modified when we remember the previous version and these versions differ). >> Exaclty, the only assumption you can make is that the version it?s different, and that?s it?s a newer version that the older one. >> >>> Thanks for the clarifications, Galder - I was not completely sure about >>> this from the design doc. >> No probs >> >>> Btw., could you address Dan's question: >>> >>> "Don't we want to allow the user to pass some data to the filter factory >>> on registration? >>> Otherwise we'd force the user to write a separate filter factory class >>> every time they want to track changes to a single key." >>> >>> I know this was already asked several times, but the discussion has >>> always dissolved. I haven't seen the final "NO?. >>> Radim >>> >>> On 04/11/2014 02:36 PM, Galder Zamarre?o wrote: >>>> On 04 Apr 2014, at 19:11, William Burns wrote: >>>> >>>>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote: >>>>>> Hi, >>>>>> >>>>>> I still don't think that the document covers properly the description of >>>>>> failover. >>>>>> >>>>>> My understanding is that client registers clustered listeners on one server >>>>>> (the first one it connects, I guess). There's some space for optimization, >>>>>> as the notification will be sent from primary owner to this node and only >>>>>> then over hotrod to the client, but I don't want to discuss it now. >>>>> There could be optimizations, but we have to worry about reordering if >>>>> the primary owner doesn't do the forwarding. You could have the case >>>>> of multiple writes to the same key from the clients and lets say they >>>>> send the message to the listener after they are written to the cache, >>>>> there is no way to make sure they are done in the order they were >>>>> written to the cache. We could do something with versions for this >>>>> though. >>>> Versions do not provide global ordering. They are used, at each node, to identify an update, so they?re incrementing at the node level, mixed with some other data that?s node specific to make them unique cluster wide. 
However, you can?t assume global ordering based on those with the current implementation. I agree there?s room for optimizations but I think correctness and ordering are more important right now. >>>> >>>>>>> Listener registrations will survive node failures thanks to the underlying >>>>>>> clustered listener implementation. >>>>>> I am not that much into clustered listeners yet, but I think that the >>>>>> mechanism makes sure that when the primary owner changes, the new owner will >>>>>> then send the events. But when the node which registered the clustered >>>>>> listener dies, others will just forgot about it. >>>>> That is how it is, I assume Galder was referring to node failures not >>>>> on the one that registered the listener, which is obviously talked >>>>> about in the next point. >>>> That?s correct. >>>> >>>>>>> When a client detects that the server which was serving the events is >>>>>>> gone, it needs to resend it's registration to one of the nodes in the >>>>>>> cluster. Whoever receives that request will again loop through its contents >>>>>>> and send an event for each entry to the client. >>>>>> Will that be all entries in the whole cache, or just from some node? I guess >>>>>> that the first is correct. So, as soon as one node dies, all clients will be >>>>>> bombarded by the full cache content (ok, filtered). Even if these entries >>>>>> have not changed, because the cluster can't know. >>>>> The former being that the entire filtered/converted contents will be sent over. >>>> Indeed the former, but the entire entry, only keys, and latest versions, will be sent by default. Converters can be used to send value side too. >>>> >>>>>>> This way the client avoids loosing events. Once all entries have been >>>>>>> iterated over, on-going events will be sent to the client. >>>>>>> This way of handling failure means that clients will receive at-least-once >>>>>>> delivery of cache updates. It might receive multiple events for the cache >>>>>>> update as a result of topology changes handling. >>>>>> So, if there are several modifications before the client reconnects and the >>>>>> new target registers the listener, the clients will get only notification >>>>>> about the last modification, or rather just the entry content, right? >>>> @Radim, you don?t get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional. >>>> >>>>> This is all handled by the embedded cluster listeners though. But the >>>>> end goal is you will only receive 1 event if the modification comes >>>>> before value was retrieved from the remote node or 2 if afterwards. >>>>> Also these modifications are queued by key and so if you had multiple >>>>> modifications before it retrieved the value it would only give you the >>>>> last one. >>>>> >>>>>> Radim >>>>>> >>>>>> >>>>>> On 04/02/2014 01:14 PM, Galder Zamarre?o wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> I've finally managed to get around to updating the remote hot rod event >>>>>> design wiki [1]. >>>>>> >>>>>> The biggest changes are related to piggybacking on the cluster listeners >>>>>> functionality in order to for registration/deregistration of listeners and >>>>>> handling failure scenarios. This should simplify the actual implementation >>>>>> on the Hot Rod side. >>>>>> >>>>>> Based on feedback, I've also changed some of the class names so that it's >>>>>> clearer what's client side and what's server side. 
>>>>>> >>>>>> A very important change is the fact that source id information has gone. >>>>>> This is primarily because near-cache like implementations cannot make >>>>>> assumptions on what to store in the near caches when the client invokes >>>>>> operations. Such implementations need to act purely on the events received. >>>>>> >>>>>> Finally, a filter/converter plugging mechanism will be done via factory >>>>>> implementations, which provide more flexibility on the way filter/converter >>>>>> instances are created. This opens the possibility for filter/converter >>>>>> factory parameters to be added to the protocol and passed, after >>>>>> unmarshalling, to the factory callbacks (this is not included right now). >>>>>> >>>>>> I hope to get started on this in the next few days, so feedback at this >>>>>> point is crucial to get a solid first release. >>>>>> >>>>>> Cheers, >>>>>> >>>>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> galder at redhat.com >>>>>> twitter.com/galderz >>>>>> >>>>>> Project Lead, Escalante >>>>>> http://escalante.io >>>>>> >>>>>> Engineer, Infinispan >>>>>> http://infinispan.org >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Radim Vansa >>>>>> JBoss DataGrid QA >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> -- >>>> Galder Zamarre?o >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Tue Apr 22 07:30:56 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 22 Apr 2014 12:30:56 +0100 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <534F7CD2.5090701@redhat.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> <534F7CD2.5090701@redhat.com> Message-ID: <25BCFD3D-9276-4DB5-8E9F-A551FB316A96@redhat.com> On 17 Apr 2014, at 08:03, Radim Vansa wrote: > On 04/16/2014 05:38 PM, William Burns wrote: >> On Wed, Apr 16, 2014 at 11:14 AM, 
Galder Zamarreño wrote: >>> On 11 Apr 2014, at 15:25, Radim Vansa wrote: >>> >>>> OK, now I get the picture. Every time we register to a node (whether the >>>> first time or after previous node crash), we receive all (filtered) keys >>>> from the whole cache, along with versions. Optionally values as well. >>> Exactly. >>> >>>> In case that multiple modifications happen in the time window before >>>> registering to the new cache, we don't get the notification for them, >>>> just again the whole cache and it's up to the application to decide whether >>>> there was no modification or some modifications. >>> I'm yet to decide on the type of event exactly here, whether cache entry created, cache entry modified or a different one, but regardless, you'd get the key and the server side version associated with that key. A user provided client listener implementation could detect which keys' versions have changed and react to that, i.e. lazily fetch new values. One such user provided client listener implementation could be a listener that maintains a near cache for example. >> My current code was planning on raising a CacheEntryCreatedEvent in >> this case. I didn't see any special reason to require a new event >> type, unless anyone can think of a use case? > > When the code cannot rely on the fact that created = (null -> some) and > modified = (some -> some), it seems to me that the user will have to > handle the events in the same way. I don't see the reason to > differentiate between them in the protocol anyway. > > One problem that has come to my mind: what about removed entries? If you > push the keyset to the client, without marking start and end of these > events (and expecting the client to fire removed events for all not > mentioned keys internally), the client can miss some entry deletion > forever. Are tombstones planned for any particular version of > Infinispan? That's a good reason why a different event type might be useful. By receiving a special cache entry event while keys are being looped over, the client can detect that a keyset is being returned, for example, because the server went down and the Hot Rod client transparently failed over to a different node and re-added the client listener. When the user of the client, say a near cache, receives the first of these special events, it can decide to, say, clear the near cache contents, since it might have missed some events. The different event type gets around the need for a start/end event. The first time the special event is received, that's your start, and when you receive something other than the special event, that's the end, and normal operation is back in place. WDYT? > > Radim > >> >>>> As the version for >>>> entries is incremented per cache and not per value, there is no way to >>>> find out how many times the entry was modified (we can just know it was >>>> modified when we remember the previous version and these versions differ). >>> Exactly, the only assumption you can make is that the version is different, and that it's a newer version than the older one. >>> >>>> Thanks for the clarifications, Galder - I was not completely sure about >>>> this from the design doc. >>> No probs >>> >>>> Btw., could you address Dan's question: >>>> >>>> "Don't we want to allow the user to pass some data to the filter factory >>>> on registration? >>>> Otherwise we'd force the user to write a separate filter factory class >>>> every time they want to track changes to a single key."
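Sketched out, the near-cache reaction described above could be as simple as this (keyDumpEvent stands in for whatever the special event type ends up being called; none of this is final API):

    public class NearCache {
       private final ConcurrentMap<Object, Object> local = new ConcurrentHashMap<>();
       private volatile boolean inDump;

       // Called for every event received from the server.
       public void onEvent(Object key, boolean keyDumpEvent) {
          if (keyDumpEvent) {
             if (!inDump) {        // first special event: a dump has started, so we
                inDump = true;     // may have missed events; drop all local state
                local.clear();
             }
          } else {
             inDump = false;       // first ordinary event: normal operation resumed
          }
          local.remove(key);       // invalidate; the next get() refetches lazily
       }
    }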
>>>> >>>> I know this was already asked several times, but the discussion has >>>> always dissolved. I haven't seen the final "NO?. >>>> Radim >>>> >>>> On 04/11/2014 02:36 PM, Galder Zamarre?o wrote: >>>>> On 04 Apr 2014, at 19:11, William Burns wrote: >>>>> >>>>>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I still don't think that the document covers properly the description of >>>>>>> failover. >>>>>>> >>>>>>> My understanding is that client registers clustered listeners on one server >>>>>>> (the first one it connects, I guess). There's some space for optimization, >>>>>>> as the notification will be sent from primary owner to this node and only >>>>>>> then over hotrod to the client, but I don't want to discuss it now. >>>>>> There could be optimizations, but we have to worry about reordering if >>>>>> the primary owner doesn't do the forwarding. You could have the case >>>>>> of multiple writes to the same key from the clients and lets say they >>>>>> send the message to the listener after they are written to the cache, >>>>>> there is no way to make sure they are done in the order they were >>>>>> written to the cache. We could do something with versions for this >>>>>> though. >>>>> Versions do not provide global ordering. They are used, at each node, to identify an update, so they?re incrementing at the node level, mixed with some other data that?s node specific to make them unique cluster wide. However, you can?t assume global ordering based on those with the current implementation. I agree there?s room for optimizations but I think correctness and ordering are more important right now. >>>>> >>>>>>>> Listener registrations will survive node failures thanks to the underlying >>>>>>>> clustered listener implementation. >>>>>>> I am not that much into clustered listeners yet, but I think that the >>>>>>> mechanism makes sure that when the primary owner changes, the new owner will >>>>>>> then send the events. But when the node which registered the clustered >>>>>>> listener dies, others will just forgot about it. >>>>>> That is how it is, I assume Galder was referring to node failures not >>>>>> on the one that registered the listener, which is obviously talked >>>>>> about in the next point. >>>>> That?s correct. >>>>> >>>>>>>> When a client detects that the server which was serving the events is >>>>>>>> gone, it needs to resend it's registration to one of the nodes in the >>>>>>>> cluster. Whoever receives that request will again loop through its contents >>>>>>>> and send an event for each entry to the client. >>>>>>> Will that be all entries in the whole cache, or just from some node? I guess >>>>>>> that the first is correct. So, as soon as one node dies, all clients will be >>>>>>> bombarded by the full cache content (ok, filtered). Even if these entries >>>>>>> have not changed, because the cluster can't know. >>>>>> The former being that the entire filtered/converted contents will be sent over. >>>>> Indeed the former, but the entire entry, only keys, and latest versions, will be sent by default. Converters can be used to send value side too. >>>>> >>>>>>>> This way the client avoids loosing events. Once all entries have been >>>>>>>> iterated over, on-going events will be sent to the client. >>>>>>>> This way of handling failure means that clients will receive at-least-once >>>>>>>> delivery of cache updates. It might receive multiple events for the cache >>>>>>>> update as a result of topology changes handling. 
>>>> >>>> On 04/11/2014 02:36 PM, Galder Zamarreño wrote: >>>>> On 04 Apr 2014, at 19:11, William Burns wrote: >>>>> >>>>>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I still don't think that the document covers properly the description of >>>>>>> failover. >>>>>>> >>>>>>> My understanding is that the client registers clustered listeners on one server >>>>>>> (the first one it connects to, I guess). There's some space for optimization, >>>>>>> as the notification will be sent from the primary owner to this node and only >>>>>>> then over hotrod to the client, but I don't want to discuss it now. >>>>>> There could be optimizations, but we have to worry about reordering if >>>>>> the primary owner doesn't do the forwarding. You could have the case >>>>>> of multiple writes to the same key from the clients, and let's say they >>>>>> send the message to the listener after they are written to the cache; >>>>>> there is no way to make sure they are done in the order they were >>>>>> written to the cache. We could do something with versions for this >>>>>> though. >>>>> Versions do not provide global ordering. They are used, at each node, to identify an update, so they're incrementing at the node level, mixed with some other data that's node specific to make them unique cluster wide. However, you can't assume global ordering based on those with the current implementation. I agree there's room for optimizations but I think correctness and ordering are more important right now. >>>>> >>>>>>>> Listener registrations will survive node failures thanks to the underlying >>>>>>>> clustered listener implementation. >>>>>>> I am not that much into clustered listeners yet, but I think that the >>>>>>> mechanism makes sure that when the primary owner changes, the new owner will >>>>>>> then send the events. But when the node which registered the clustered >>>>>>> listener dies, others will just forget about it. >>>>>> That is how it is, I assume Galder was referring to node failures not >>>>>> on the one that registered the listener, which is obviously talked >>>>>> about in the next point. >>>>> That's correct. >>>>> >>>>>>>> When a client detects that the server which was serving the events is >>>>>>>> gone, it needs to resend its registration to one of the nodes in the >>>>>>>> cluster. Whoever receives that request will again loop through its contents >>>>>>>> and send an event for each entry to the client. >>>>>>> Will that be all entries in the whole cache, or just from some node? I guess >>>>>>> that the first is correct. So, as soon as one node dies, all clients will be >>>>>>> bombarded by the full cache content (ok, filtered). Even if these entries >>>>>>> have not changed, because the cluster can't know. >>>>>> The former being that the entire filtered/converted contents will be sent over. >>>>> Indeed the former, but not the entire entry: only keys and latest versions will be sent by default. Converters can be used to send the value side too. >>>>> >>>>>>>> This way the client avoids losing events. Once all entries have been >>>>>>>> iterated over, on-going events will be sent to the client. >>>>>>>> This way of handling failure means that clients will receive at-least-once >>>>>>>> delivery of cache updates. It might receive multiple events for the cache >>>>>>>> update as a result of topology changes handling. >>>>>>> So, if there are several modifications before the client reconnects and the >>>>>>> new target registers the listener, the clients will get only a notification >>>>>>> about the last modification, or rather just the entry content, right? >>>>> @Radim, you don't get the content by default. You only get the key and the last version number. If the client wants, it can retrieve the value too, or using a custom converter, it can send back the value, but this is optional. >>>>> >>>>>> This is all handled by the embedded cluster listeners though. But the >>>>>> end goal is you will only receive 1 event if the modification comes >>>>>> before the value was retrieved from the remote node or 2 if afterwards. >>>>>> Also these modifications are queued by key, and so if you had multiple >>>>>> modifications before it retrieved the value it would only give you the >>>>>> last one. >>>>>> >>>>>>> Radim >>>>>>> >>>>>>> >>>>>>> On 04/02/2014 01:14 PM, Galder Zamarreño wrote: >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> I've finally managed to get around to updating the remote hot rod event >>>>>>> design wiki [1]. >>>>>>> >>>>>>> The biggest changes are related to piggybacking on the cluster listeners >>>>>>> functionality in order to handle registration/deregistration of listeners and >>>>>>> handling failure scenarios. This should simplify the actual implementation >>>>>>> on the Hot Rod side. >>>>>>> >>>>>>> Based on feedback, I've also changed some of the class names so that it's >>>>>>> clearer what's client side and what's server side. >>>>>>> >>>>>>> A very important change is the fact that source id information has gone. >>>>>>> This is primarily because near-cache like implementations cannot make >>>>>>> assumptions on what to store in the near caches when the client invokes >>>>>>> operations. Such implementations need to act purely on the events received. >>>>>>> >>>>>>> Finally, a filter/converter plugging mechanism will be done via factory >>>>>>> implementations, which provide more flexibility on the way filter/converter >>>>>>> instances are created. This opens the possibility for filter/converter >>>>>>> factory parameters to be added to the protocol and passed, after >>>>>>> unmarshalling, to the factory callbacks (this is not included right now). >>>>>>> >>>>>>> I hope to get started on this in the next few days, so feedback at this >>>>>>> point is crucial to get a solid first release.
>>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>>>>>> -- >>>>>>> Galder Zamarreño >>>>>>> galder at redhat.com >>>>>>> twitter.com/galderz >>>>>>> >>>>>>> Project Lead, Escalante >>>>>>> http://escalante.io >>>>>>> >>>>>>> Engineer, Infinispan >>>>>>> http://infinispan.org >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Radim Vansa >>>>>>> JBoss DataGrid QA >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> -- >>>>> Galder Zamarreño >>>>> galder at redhat.com >>>>> twitter.com/galderz >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Radim Vansa >>>> JBoss DataGrid QA >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Galder Zamarreño >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Tue Apr 22 09:58:52 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 22 Apr 2014 14:01:52 +0003 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <25BCFD3D-9276-4DB5-8E9F-A551FB316A96@redhat.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> <534F7CD2.5090701@redhat.com> <25BCFD3D-9276-4DB5-8E9F-A551FB316A96@redhat.com> Message-ID: <1398175132.28062.3@smtp.gmail.com> On Tue, Apr 22, 2014 at 2:30 PM, Galder Zamarreño wrote: > > On 17 Apr 2014, at 08:03, Radim Vansa wrote: > >> On 04/16/2014 05:38 PM, William Burns wrote: >>> On Wed, Apr 16, 2014 at 11:14 AM, Galder Zamarreño >>> wrote: >>>> On 11 Apr 2014, at 15:25, Radim Vansa wrote: >>>> >>>>> OK, now I get the picture. Every time we register to a node >>>>> (whether the >>>>> first time or after previous node crash), we receive all >>>>> (filtered) keys >>>>> from the whole cache, along with versions. Optionally values as >>>>> well. >>>> Exactly.
>>>> >>>>> In case that multiple modifications happen in the time window >>>>> before >>>>> registering to the new cache, we don't get the notification for >>>>> them, >>>>> just again the whole cache, and it's up to the application to decide >>>>> whether >>>>> there was no modification or some modifications. >>>> I'm yet to decide on the type of event exactly here, whether >>>> cache entry created, cache entry modified or a different one, but >>>> regardless, you'd get the key and the server side version >>>> associated with that key. A user provided client listener >>>> implementation could detect which keys' versions have changed >>>> and react to that, i.e. lazily fetch new values. One such user >>>> provided client listener implementation could be a listener that >>>> maintains a near cache, for example. >>> My current code was planning on raising a CacheEntryCreatedEvent in >>> this case. I didn't see any special reason to require a new event >>> type, unless anyone can think of a use case? >> >> When the code cannot rely on the fact that created = (null -> some) >> and >> modified = (some -> some), it seems to me that the user will have >> to >> handle the events in the same way. I don't see the reason to >> differentiate between them in the protocol anyway. >> >> One problem that has come to my mind: what about removed entries? >> If you >> push the keyset to the client, without marking the start and end of >> these >> events (and expecting the client to internally fire removed events >> for all keys not mentioned), the client can miss some entry deletion >> forever. Are the tombstones planned for any >> particular version of >> Infinispan? > > That's a good reason why a different event type might be useful. By > receiving a special cache entry event while keys are being looped over, the client > can detect that a keyset is being returned, for example if the > server went down and the Hot Rod client transparently failed over to > a different node and re-added the client listener. When the user of the > client, say a near cache, receives the first of these special > events, it can decide to, say, clear the near cache contents, > since it might have missed some events. > > The different event type gets around the need for a start/end event. > The first time the special event is received, that's your start, > and when you receive something other than the special event, that's > the end, and normal operation is back in place. > > WDYT? I'm not sure if you plan multi-threaded event delivery in the Java client, but having a special start event would make it clear that it must be delivered after all the events from the old server and before any events from the new server. And it should also make special cases like a server dying before it finished sending the initial state easier to handle. Dan > > >> >> Radim >> >>> >>>>> As the version for >>>>> entries is incremented per cache and not per value, there is no >>>>> way to >>>>> find out how many times the entry was modified (we can just know >>>>> it was >>>>> modified when we remember the previous version and these >>>>> versions differ). >>>> Exactly, the only assumption you can make is that the version >>>> is different, and that it's a newer version than the >>>> older one. >>>> >>>>> Thanks for the clarifications, Galder - I was not completely >>>>> sure about >>>>> this from the design doc.
>>>> No probs >>>> >>>>> Btw., could you address Dan's question: >>>>> >>>>> "Don't we want to allow the user to pass some data to the filter >>>>> factory >>>>> on registration? >>>>> Otherwise we'd force the user to write a separate filter factory >>>>> class >>>>> every time they want to track changes to a single key." >>>>> >>>>> I know this was already asked several times, but the discussion >>>>> has >>>>> always dissolved. I haven't seen the final "NO". >>>>> Radim >>>>> >>>>> On 04/11/2014 02:36 PM, Galder Zamarreño wrote: >>>>>> On 04 Apr 2014, at 19:11, William Burns >>>>>> wrote: >>>>>> >>>>>>> On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa >>>>>>> wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I still don't think that the document covers properly the >>>>>>>> description of >>>>>>>> failover. >>>>>>>> >>>>>>>> My understanding is that the client registers clustered listeners >>>>>>>> on one server >>>>>>>> (the first one it connects to, I guess). There's some space for >>>>>>>> optimization, >>>>>>>> as the notification will be sent from the primary owner to this >>>>>>>> node and only >>>>>>>> then over hotrod to the client, but I don't want to discuss >>>>>>>> it now. >>>>>>> There could be optimizations, but we have to worry about >>>>>>> reordering if >>>>>>> the primary owner doesn't do the forwarding. You could have >>>>>>> the case >>>>>>> of multiple writes to the same key from the clients, and let's >>>>>>> say they >>>>>>> send the message to the listener after they are written to the >>>>>>> cache; >>>>>>> there is no way to make sure they are done in the order they >>>>>>> were >>>>>>> written to the cache. We could do something with versions for >>>>>>> this >>>>>>> though. >>>>>> Versions do not provide global ordering. They are used, at each >>>>>> node, to identify an update, so they're incrementing at the >>>>>> node level, mixed with some other data that's node specific to >>>>>> make them unique cluster wide. However, you can't assume >>>>>> global ordering based on those with the current implementation. >>>>>> I agree there's room for optimizations but I think correctness >>>>>> and ordering are more important right now. >>>>>> >>>>>>>>> Listener registrations will survive node failures thanks to >>>>>>>>> the underlying >>>>>>>>> clustered listener implementation. >>>>>>>> I am not that much into clustered listeners yet, but I think >>>>>>>> that the >>>>>>>> mechanism makes sure that when the primary owner changes, the >>>>>>>> new owner will >>>>>>>> then send the events. But when the node which registered the >>>>>>>> clustered >>>>>>>> listener dies, others will just forget about it. >>>>>>> That is how it is, I assume Galder was referring to node >>>>>>> failures not >>>>>>> on the one that registered the listener, which is obviously >>>>>>> talked >>>>>>> about in the next point. >>>>>> That's correct. >>>>>> >>>>>>>>> When a client detects that the server which was serving the >>>>>>>>> events is >>>>>>>>> gone, it needs to resend its registration to one of the >>>>>>>>> nodes in the >>>>>>>>> cluster. Whoever receives that request will again loop >>>>>>>>> through its contents >>>>>>>>> and send an event for each entry to the client. >>>>>>>> Will that be all entries in the whole cache, or just from >>>>>>>> some node? I guess >>>>>>>> that the first is correct. So, as soon as one node dies, all >>>>>>>> clients will be >>>>>>>> bombarded by the full cache content (ok, filtered). Even if >>>>>>>> these entries >>>>>>>> have not changed, because the cluster can't know.
>>>>>>> The former being that the entire filtered/converted contents >>>>>>> will be sent over. >>>>>> Indeed the former, but not the entire entry: only keys and latest >>>>>> versions will be sent by default. Converters can be used to >>>>>> send the value side too. >>>>>> >>>>>>>>> This way the client avoids losing events. Once all entries >>>>>>>>> have been >>>>>>>>> iterated over, on-going events will be sent to the client. >>>>>>>>> This way of handling failure means that clients will receive >>>>>>>>> at-least-once >>>>>>>>> delivery of cache updates. It might receive multiple events >>>>>>>>> for the cache >>>>>>>>> update as a result of topology changes handling. >>>>>>>> So, if there are several modifications before the client >>>>>>>> reconnects and the >>>>>>>> new target registers the listener, the clients will get only a >>>>>>>> notification >>>>>>>> about the last modification, or rather just the entry >>>>>>>> content, right? >>>>>> @Radim, you don't get the content by default. You only get >>>>>> the key and the last version number. If the client wants, it can >>>>>> retrieve the value too, or using a custom converter, it can send >>>>>> back the value, but this is optional. >>>>>> >>>>>>> This is all handled by the embedded cluster listeners though. >>>>>>> But the >>>>>>> end goal is you will only receive 1 event if the modification >>>>>>> comes >>>>>>> before the value was retrieved from the remote node or 2 if >>>>>>> afterwards. >>>>>>> Also these modifications are queued by key, and so if you had >>>>>>> multiple >>>>>>> modifications before it retrieved the value it would only give >>>>>>> you the >>>>>>> last one. >>>>>>> >>>>>>>> Radim >>>>>>>> >>>>>>>> >>>>>>>> On 04/02/2014 01:14 PM, Galder Zamarreño wrote: >>>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> I've finally managed to get around to updating the remote hot >>>>>>>> rod event >>>>>>>> design wiki [1]. >>>>>>>> >>>>>>>> The biggest changes are related to piggybacking on the >>>>>>>> cluster listeners >>>>>>>> functionality in order to handle registration/deregistration of >>>>>>>> listeners and >>>>>>>> handling failure scenarios. This should simplify the actual >>>>>>>> implementation >>>>>>>> on the Hot Rod side. >>>>>>>> >>>>>>>> Based on feedback, I've also changed some of the class names >>>>>>>> so that it's >>>>>>>> clearer what's client side and what's server side. >>>>>>>> >>>>>>>> A very important change is the fact that source id >>>>>>>> information has gone. >>>>>>>> This is primarily because near-cache like implementations >>>>>>>> cannot make >>>>>>>> assumptions on what to store in the near caches when the >>>>>>>> client invokes >>>>>>>> operations. Such implementations need to act purely on the >>>>>>>> events received. >>>>>>>> >>>>>>>> Finally, a filter/converter plugging mechanism will be done >>>>>>>> via factory >>>>>>>> implementations, which provide more flexibility on the way >>>>>>>> filter/converter >>>>>>>> instances are created. This opens the possibility for >>>>>>>> filter/converter >>>>>>>> factory parameters to be added to the protocol and passed, >>>>>>>> after >>>>>>>> unmarshalling, to the factory callbacks (this is not included >>>>>>>> right now). >>>>>>>> >>>>>>>> I hope to get started on this in the next few days, so >>>>>>>> feedback at this >>>>>>>> point is crucial to get a solid first release.
>>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> [1] >>>>>>>> https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events >>>>>>>> -- >>>>>>>> Galder Zamarreño >>>>>>>> galder at redhat.com >>>>>>>> twitter.com/galderz >>>>>>>> >>>>>>>> Project Lead, Escalante >>>>>>>> http://escalante.io >>>>>>>> >>>>>>>> Engineer, Infinispan >>>>>>>> http://infinispan.org >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Radim Vansa >>>>>>>> JBoss DataGrid QA >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> -- >>>>>> Galder Zamarreño >>>>>> galder at redhat.com >>>>>> twitter.com/galderz >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss DataGrid QA >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Galder Zamarreño >>>> galder at redhat.com >>>> twitter.com/galderz >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140422/f01f56bf/attachment-0001.html From rvansa at redhat.com Tue Apr 22 11:19:34 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 22 Apr 2014 17:19:34 +0200 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <1398175132.28062.3@smtp.gmail.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> <534F7CD2.5090701@redhat.com> <25BCFD3D-9276-4DB5-8E9F-A551FB316A96@redhat.com> <1398175132.28062.3@smtp.gmail.com> Message-ID: <53568886.9060306@redhat.com> On 04/22/2014 03:58 PM, Dan Berindei wrote: > On Tue, Apr 22, 2014 at 2:30 PM, Galder Zamarreño > wrote: >> On 17 Apr 2014, at 08:03, Radim Vansa wrote: >> >> On 04/16/2014 05:38 PM, William Burns wrote: >> >> On Wed, Apr 16, 2014 at 11:14 AM, Galder Zamarreño >> wrote: >> >> On 11 Apr 2014, at 15:25, Radim Vansa >> wrote: >> >> OK, now I get the picture. Every time we register to >> a node (whether the first time or after previous node >> crash), we receive all (filtered) keys from the whole >> cache, along with versions. Optionally values as well. >> >> Exactly. >> >> In case that multiple modifications happen in the >> time window before registering to the new cache, we >> don't get the notification for them, just again the >> whole cache, and it's up to the application to decide >> whether there was no modification or some modifications. >> >> I'm yet to decide on the type of event exactly here, >> whether cache entry created, cache entry modified or a >> different one, but regardless, you'd get the key and the >> server side version associated with that key. A user >> provided client listener implementation could detect >> which keys' versions have changed and react to that, i.e. >> lazily fetch new values. One such user provided client >> listener implementation could be a listener that >> maintains a near cache, for example. >> >> My current code was planning on raising a >> CacheEntryCreatedEvent in this case. I didn't see any special >> reason to require a new event type, unless anyone can think >> of a use case? >> >> When the code cannot rely on the fact that created = (null -> >> some) and modified = (some -> some), it seems to me that the user >> will have to handle the events in the same way. I don't see the >> reason to differentiate between them in the protocol anyway. One >> problem that has come to my mind: what about removed entries? If >> you push the keyset to the client, without marking the start and end >> of these events (and expecting the client to internally fire removed >> events for all keys not mentioned), the client can miss some >> entry deletion forever. Are the tombstones planned for any >> particular version of Infinispan? >> >> That's a good reason why a different event type might be useful. By >> receiving a special cache entry event while keys are being looped over, >> the client can detect that a keyset is being returned, for example if the >> server went down and the Hot Rod client transparently failed over to >> a different node and re-added the client listener. When the user of the >> client, say a near cache, receives the first of these special >> events, it can decide to, say, clear the near cache contents, >> since it might have missed some events. The different event type gets >> around the need for a start/end event.
The first time the special >> event is received, that's your start, and when you receive something >> other than the special event, that's the end, and normal operation is >> back in place. WDYT? > > I'm not sure if you plan multi-threaded event delivery in the Java > client, but having a special start event would make it clear that it > must be delivered after all the events from the old server and before > any events from the new server. > > And it should also make special cases like a server dying before it > finished sending the initial state easier to handle. > > Dan > Is it really wise to have a stateful listener? I would prefer the listener to be called only once per server change, and let it iterate the cache via cache.forEach(ForEachTask task), or cache.iterator() (which would replace the keySet() etc...) Radim -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140422/1f3f8ba3/attachment.html From rory.odonnell at oracle.com Fri Apr 25 04:45:59 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 25 Apr 2014 09:45:59 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b09, JDK 8u20 b10 and JDK 7U60 b15 are available on java.net Message-ID: <535A20C7.4010502@oracle.com> Hi Galder, Early Access builds for JDK 9 b09, JDK 8u20 b10 and JDK 7u60 b15 are available on java.net. As we enter the later phases of development for JDK 7u60 & JDK 8u20, please log any show stoppers as soon as possible. Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140425/7daccd25/attachment.html From rvansa at redhat.com Fri Apr 25 08:26:41 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 25 Apr 2014 14:26:41 +0200 Subject: [infinispan-dev] ABI compatibility of C++ client Message-ID: <535A5481.5080406@redhat.com> Hi guys, as I've tried to get rid of all the warnings emitted in the Windows build of the C++ HotRod client, I've noticed that the ABI of this library is not very well designed. I am not an expert on this kind of stuff, but many sources I've found say that exporting STL containers (such as string or vector, or shared_ptr) is not ABI-safe. For Windows, the STL export is allowed [1] when both the library and the user application are linked against the same version of the CRT. I am really not sure whether we want to force that on the user, and moreover, due to a bug in the VC10 implementation of the STL [2] we can't explicitly export shared_ptr (I haven't found any workaround for that so far). Regarding the GCC world, the situation is no better. The usual response for exporting STL classes is "don't do that". It is expected that these troubles will be addressed in C++17 (huh :)). What can we do about that? Fixing this requires a lot of changes in the API... can we afford to do that now? Or will we just declare "compile with the same versions and compile options as we did"? (we should state them, then) I have only limited knowledge of the whole C++ ecosystem, so if I am wrong, I'd be gladly corrected.
Radim [1] http://support.microsoft.com/kb/168958 [2] http://connect.microsoft.com/VisualStudio/feedback/details/649531 -- Radim Vansa JBoss DataGrid QA From rvansa at redhat.com Fri Apr 25 09:49:22 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 25 Apr 2014 15:49:22 +0200 Subject: [infinispan-dev] ABI compatibility of C++ client In-Reply-To: <535A5481.5080406@redhat.com> References: <535A5481.5080406@redhat.com> Message-ID: <535A67E2.9060009@redhat.com> Tracking JIRA https://issues.jboss.org/browse/HRCPP-151 On 04/25/2014 02:26 PM, Radim Vansa wrote: > Hi guys, > > as I've tried to get rid of all the warnings emitted in the Windows build of > the C++ HotRod client, I've noticed that the ABI of this library is not very > well designed. > I am not an expert on this kind of stuff, but many sources I've found > say that exporting STL containers (such as string or vector, or > shared_ptr) is not ABI-safe. > > For Windows, the STL export is allowed [1] when both the library and the user > application are linked against the same version of the CRT. I am really not > sure whether we want to force that on the user, and moreover, due to a bug > in the VC10 implementation of the STL [2] we can't explicitly export shared_ptr > (I haven't found any workaround for that so far). > > Regarding the GCC world, the situation is no better. The usual response for > exporting STL classes is "don't do that". It is expected that these > troubles will be addressed in C++17 (huh :)). > > What can we do about that? Fixing this requires a lot of changes in > the API... can we afford to do that now? Or will we just declare "compile > with the same versions and compile options as we did"? (we should state > them, then) > > I have only limited knowledge of the whole C++ ecosystem, so if I am wrong, > I'd be gladly corrected. > > Radim > > [1] http://support.microsoft.com/kb/168958 > [2] http://connect.microsoft.com/VisualStudio/feedback/details/649531 > -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Mon Apr 28 10:42:59 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 28 Apr 2014 16:42:59 +0200 Subject: [infinispan-dev] New configuration In-Reply-To: <1397573555.6281.5@smtp.gmail.com> References: <533D2C05.9020609@redhat.com> <1397065123.5324.2@smtp.gmail.com> <7F64ED41-638F-43EE-A37A-E62B655A6B16@redhat.com> <534D423E.9050001@redhat.com> <1397573555.6281.5@smtp.gmail.com> Message-ID: <025944F9-1433-4A4C-B73A-591C4B027375@redhat.com> On 15 Apr 2014, at 16:52, Dan Berindei wrote: > > > On Tue, Apr 15, 2014 at 5:29 PM, Radim Vansa wrote: >> >> On 04/15/2014 02:31 PM, Galder Zamarreño wrote: >> >> On 09 Apr 2014, at 19:38, Dan Berindei wrote: >> >> >> >> On Wed, Apr 9, 2014 at 5:37 PM, Galder Zamarreño wrote: >> >> On 03 Apr 2014, at 11:38, Radim Vansa < >> rvansa at redhat.com >> >> wrote: >> >> >> Hi, >> >> looking at the new configuration parser, I've noticed that you cannot >> configure ConsistentHashFactory anymore - is this on purpose? >> >> >> ^ Rather than being something the users should be tweaking, it's something that's used internally. So, I applied a bit of if-in-doubt-leave-it-out logic. I don't think we lose any major functionality with this. >> >> >> For now it's the only way for the user to use the SyncConsistentHashFactory, so it's not used just internally. >> >> What's the use case for that? The javadoc is not very clear on the benefits of using it. >> >> >> >> One use case I've noticed is having two caches with the same keys, and >> a modification listener handler retrieving data from the other cache. In >> order to execute the listener soon, you don't want to execute remote >> gets, and therefore, it's useful to have the hashes synchronized. >> > > Erik is using it with distributed tasks. Normally, having keys with the same group in multiple caches doesn't guarantee that the keys are all located on the same nodes, which means we can't guarantee that a distributed task that accesses multiple caches has all the keys it needs locally just with grouping. SyncConsistentHashFactory fixes that.
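For reference, plugging the factory in programmatically looks roughly like this - the builder and package names below are from memory of the current programmatic API, so treat them as approximate rather than authoritative:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.distribution.ch.SyncConsistentHashFactory;

public class SyncChConfig {
   // Build a distributed cache configuration using SyncConsistentHashFactory,
   // so that the same key maps to the same primary owner in every cache
   // configured this way.
   public static Configuration distWithSyncCh() {
      return new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .hash().consistentHashFactory(new SyncConsistentHashFactory())
            .build();
   }
}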
Thanks Radim and Dan. Based on a further chat I had with Dan, I've sent a PR to update the SyncCHF javadoc to explain why that class exists in the first place: https://github.com/infinispan/infinispan/pull/2528 @Dan, have a look and see if you're happy. Btw, I've just created https://issues.jboss.org/browse/ISPN-4245 to address this configuration issue. Cheers, > > >> >> Radim >> >> >> >> >> >> Another concern of mine is the fact that you enable stuff by parsing the >> element - for example L1. I expect that omitting the element and setting >> it with the default value (as presented in the XSD) makes no difference, but >> this is not how the current configuration works. >> >> L1 is disabled by default. You enable it by configuring the L1 lifespan to be bigger than 0. The attribute definition follows the pattern that Paul did for the server side. >> >> My opinion comes probably too late as the PR was already reviewed, >> discussed and integrated, but at least, please clearly describe the >> behaviour in the XSD. The claim that l1-lifespan "Defaults to 10 >> minutes." is not correct - it defaults to L1 being disabled. >> >> Yeah, I'll update the XSD and documentation accordingly: >> >> >> https://issues.jboss.org/browse/ISPN-4195 >> >> >> >> Cheers >> >> >> >> Thanks >> >> Radim >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> -- >> >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From rvansa at redhat.com Tue Apr 29 07:02:22 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 29 Apr 2014 13:02:22 +0200 Subject: [infinispan-dev] ABI compatibility of C++ client
In-Reply-To: <535A5481.5080406@redhat.com> References: <535A5481.5080406@redhat.com> Message-ID: <535F86BE.8000603@redhat.com> I was expecting at least some response. Cliff, Ion, Tristan, Vladimir, could you share your opinions? Radim On 04/25/2014 02:26 PM, Radim Vansa wrote: > Hi guys, > > as I've tried to get rid of all the warnings emitted in the Windows build of > the C++ HotRod client, I've noticed that the ABI of this library is not very > well designed. > I am not an expert on this kind of stuff, but many sources I've found > say that exporting STL containers (such as string or vector, or > shared_ptr) is not ABI-safe. > > For Windows, the STL export is allowed [1] when both the library and the user > application are linked against the same version of the CRT. I am really not > sure whether we want to force that on the user, and moreover, due to a bug > in the VC10 implementation of the STL [2] we can't explicitly export shared_ptr > (I haven't found any workaround for that so far). > > Regarding the GCC world, the situation is no better. The usual response for > exporting STL classes is "don't do that". It is expected that these > troubles will be addressed in C++17 (huh :)). > > What can we do about that? Fixing this requires a lot of changes in > the API... can we afford to do that now? Or will we just declare "compile > with the same versions and compile options as we did"? (we should state > them, then) > > I have only limited knowledge of the whole C++ ecosystem, so if I am wrong, > I'd be gladly corrected. > > Radim > > [1] http://support.microsoft.com/kb/168958 > [2] http://connect.microsoft.com/VisualStudio/feedback/details/649531 > -- Radim Vansa JBoss DataGrid QA From ttarrant at redhat.com Tue Apr 29 07:31:10 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 29 Apr 2014 13:31:10 +0200 Subject: [infinispan-dev] ABI compatibility of C++ client In-Reply-To: <535F86BE.8000603@redhat.com> References: <535A5481.5080406@redhat.com> <535F86BE.8000603@redhat.com> Message-ID: <535F8D7E.4010509@redhat.com> Yes, it is a sticky situation. We can definitely change the API now (actually this is the best moment to do this). I guess we need to provide "wrappers" of some kind. Any examples elsewhere? Tristan On 29/04/2014 13:02, Radim Vansa wrote: > I was expecting at least some response. > > Cliff, Ion, Tristan, Vladimir, could you share your opinions? > > Radim > > On 04/25/2014 02:26 PM, Radim Vansa wrote: >> Hi guys, >> >> as I've tried to get rid of all the warnings emitted in the Windows build of >> the C++ HotRod client, I've noticed that the ABI of this library is not very >> well designed. >> I am not an expert on this kind of stuff, but many sources I've found >> say that exporting STL containers (such as string or vector, or >> shared_ptr) is not ABI-safe. >> >> For Windows, the STL export is allowed [1] when both the library and the user >> application are linked against the same version of the CRT. I am really not >> sure whether we want to force that on the user, and moreover, due to a bug >> in the VC10 implementation of the STL [2] we can't explicitly export shared_ptr >> (I haven't found any workaround for that so far). >> >> Regarding the GCC world, the situation is no better. The usual response for >> exporting STL classes is "don't do that". It is expected that these >> troubles will be addressed in C++17 (huh :)). >> >> What can we do about that? Fixing this requires a lot of changes in >> the API... can we afford to do that now? Or will we just declare "compile >> with the same versions and compile options as we did"?
(we should state >> them, then) >> >> I have only limited knowledge of the whole C++ ecosystem, so if I am wrong, >> I'd be gladly corrected. >> >> Radim >> >> [1] http://support.microsoft.com/kb/168958 >> [2] http://connect.microsoft.com/VisualStudio/feedback/details/649531 >> > From rvansa at redhat.com Tue Apr 29 09:08:30 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 29 Apr 2014 15:08:30 +0200 Subject: [infinispan-dev] ABI compatibility of C++ client In-Reply-To: <535F8D7E.4010509@redhat.com> References: <535A5481.5080406@redhat.com> <535F86BE.8000603@redhat.com> <535F8D7E.4010509@redhat.com> Message-ID: <535FA44E.9000002@redhat.com> On 04/29/2014 01:31 PM, Tristan Tarrant wrote: > Yes, it is a sticky situation. We can definitely change the API now > (actually this is the best moment to do this). > I guess we need to provide "wrappers" of some kind. Any examples elsewhere? Are we looking for a third-party library for binary-compatible containers? One example could be [1], although it requires the GCC 4.7.2 / VS2013 compiler. What we need is to pass string, vector, set and map, all of them constant. Shouldn't be rocket science to convert them into flat blobs (I am not sure about the price of coding it ourselves vs. using a 3rd party library). We don't have to use these weird containers in the public API nor internally. But we have to put the "compression" into public headers (to be called by user code) and "decompression" (if needed) into internals. It could be a bit tricky to integrate this with marshalling which happens anyway in user code, to make as few copies as possible (for example for bulk methods which return std::map - we don't want to create std::map, convert it into blob_map, then marshall into blob_map and finally convert to std::map). And we should not inherit the public classes from Handle, which uses HR_SHARED_PTR - that's an impl detail. Public classes should hold only opaque pointers to internal data types. I would recommend treating warnings from the Windows compilation as blockers: it seems Visual Studio is much smarter in detecting DLL-boundary related errors. [1] https://github.com/jbandela/cppcomponents > > Tristan > > On 29/04/2014 13:02, Radim Vansa wrote: >> I was expecting at least some response. >> >> Cliff, Ion, Tristan, Vladimir, could you share your opinions? >> >> Radim >> >> On 04/25/2014 02:26 PM, Radim Vansa wrote: >>> Hi guys, >>> >>> as I've tried to get rid of all the warnings emitted in the Windows build of >>> the C++ HotRod client, I've noticed that the ABI of this library is not very >>> well designed. >>> I am not an expert on this kind of stuff, but many sources I've found >>> say that exporting STL containers (such as string or vector, or >>> shared_ptr) is not ABI-safe. >>> >>> For Windows, the STL export is allowed [1] when both the library and the user >>> application are linked against the same version of the CRT. I am really not >>> sure whether we want to force that on the user, and moreover, due to a bug >>> in the VC10 implementation of the STL [2] we can't explicitly export shared_ptr >>> (I haven't found any workaround for that so far). >>> >>> Regarding the GCC world, the situation is no better. The usual response for >>> exporting STL classes is "don't do that". It is expected that these >>> troubles will be addressed in C++17 (huh :)). >>> >>> What can we do about that? Fixing this requires a lot of changes in >>> the API... can we afford to do that now? Or will we just declare "compile >>> with the same versions and compile options as we did"?
(we should state >>> them, then) >>> >>> I have only limited knowledge of the whole C++ ecosystem, so if I am wrong, >>> I'd be gladly corrected. >>> >>> Radim >>> >>> [1] http://support.microsoft.com/kb/168958 >>> [2] http://connect.microsoft.com/VisualStudio/feedback/details/649531 >>> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Wed Apr 30 07:36:52 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 30 Apr 2014 13:36:52 +0200 Subject: [infinispan-dev] Infinispan Test language level to Java 8? Message-ID: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> Hi all, Just thinking out loud: what about we start using JDK8+ for all the test code in Infinispan? The production code would still have language level 6/7 (whatever is required?). This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boiler plate code currently existing in the tests. This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us. Thoughts? p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview on what's new in JDK8 along with small code samples.
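To make the boilerplate point concrete, here is the kind of before/after we'd see in a typical polling assertion - the Condition/eventually helper below is only modelled on that style of test helper, not copied from our testsuite:

public class EventuallyExample {
   interface Condition {
      boolean isSatisfied();
   }

   // Poll the condition for up to ~1 second before failing.
   static void eventually(Condition c) throws InterruptedException {
      for (int i = 0; i < 10 && !c.isSatisfied(); i++) {
         Thread.sleep(100);
      }
      if (!c.isSatisfied()) {
         throw new AssertionError("condition not satisfied");
      }
   }

   public static void main(String[] args) throws InterruptedException {
      final java.util.Map<String, String> cache =
            new java.util.concurrent.ConcurrentHashMap<String, String>();
      cache.put("k", "v");

      // Java 6/7: an anonymous class for a one-line check.
      eventually(new Condition() {
         @Override
         public boolean isSatisfied() {
            return cache.size() == 1;
         }
      });

      // Java 8: the same check as a lambda.
      eventually(() -> cache.size() == 1);
   }
}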
-- Galder Zamarreño galder at redhat.com twitter.com/galderz From galder at redhat.com Wed Apr 30 07:55:24 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Wed, 30 Apr 2014 13:55:24 +0200 Subject: [infinispan-dev] Infinispan Test language level to Java 8? In-Reply-To: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> Message-ID: On 30 Apr 2014, at 13:36, Galder Zamarreño wrote: > Hi all, > > Just thinking out loud: what about we start using JDK8+ for all the test code in Infinispan? > > The production code would still have language level 6/7 (whatever is required?). > > This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boiler plate code currently existing in the tests. > > This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us. Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing. > > Thoughts? > > p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview on what's new in JDK8 along with small code samples. > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From anistor at redhat.com Wed Apr 30 08:12:53 2014 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 Apr 2014 15:12:53 +0300 Subject: [infinispan-dev] Infinispan Test language level to Java 8? In-Reply-To: References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> Message-ID: <5360E8C5.7010005@redhat.com> > Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing. Galder, we may be doing such a thing :) The test suite is meant to verify correctness of our libraries when executed against a concrete set of external dependencies, with clearly specified supported versions or version intervals - the jdk being the most important of them. Since we'll no longer be able to run on jdk 7 we can no longer support it. Even if animal-sniffer cheerfully reports we've not broken binary compat, that still does not mean much when it comes to jdk version specific issues, or jdk maker specific issues (remember the IBM jdk oddities). Mavenwise, I think it is not possible to have a different compiler language level for module sources vs. test sources, and Eclipse and Intellij also cannot cope with two source levels per module, so this would introduce some unnecessary development discomfort. I would vote no for this. Adrian On 04/30/2014 02:55 PM, Galder Zamarreño wrote: > On 30 Apr 2014, at 13:36, Galder Zamarreño wrote: >> Hi all, >> >> Just thinking out loud: what about we start using JDK8+ for all the test code in Infinispan? >> >> The production code would still have language level 6/7 (whatever is required?). >> >> This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boiler plate code currently existing in the tests. >> >> This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us. > Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing. > >> Thoughts? >> >> p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview on what's new in JDK8 along with small code samples. >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Wed Apr 30 10:06:49 2014 From: mudokonman at gmail.com (William Burns) Date: Wed, 30 Apr 2014 10:06:49 -0400 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: References: <53270AA3.30702@redhat.com> Message-ID: Was wondering if anyone had any opinions on the API for this. These are a few options that Dan and I mulled over. Note the CloseableIterable interface mentioned is just an interface that extends both Closeable and Iterable. (The generic type parameters below were mangled in the archive and have been reconstructed, so treat the exact signatures as approximate.)

1. The API that is very similar to what was previously proposed on this list, but slightly changed. Methods on AdvancedCache:

CloseableIterable<CacheEntry<K, V>> entryIterable(KeyValueFilter<K, V> filter);
<C> CloseableIterable<CacheEntry<K, C>> entryIterable(KeyValueFilter<K, V> filter, Converter<K, V, C> converter);

Note the difference here is that it would return an Iterable instead of an Iterator, which would allow for it being used in a for loop. Example usage would be (types omitted):

for (CacheEntry entry : cache.entryIterable(someFilter, someConverter)) {
   // Do something
}

2. An API that returns a new type, EntryIterable for example, that can chain methods to provide a filter and converter. Method on AdvancedCache:

EntryIterable<K, V> entryIterable();

where EntryIterable is defined as:

public interface EntryIterable<K, V> extends CloseableIterable<CacheEntry<K, V>> {
   public EntryIterable<K, V> filter(KeyValueFilter<K, V> filter);
   public EntryIterable<K, V> converter(Converter<K, V, V> converter);
   public <C> CloseableIterable<CacheEntry<K, C>> projection(Converter<K, V, C> converter);
}

Note that there are 2 methods that take a Converter; this is to preserve the typing, since the method would return a different EntryIterable instance. However, I can also see removing one of the converter methods and just renaming projection to converter instead. This API would allow for providing the optional pieces more cleanly, or not at all if desired. Example usage would be (types omitted):

for (CacheEntry entry : cache.entryIterable().filter(someFilter).converter(someConverter)) {
   // Do something
}

3. An API that requires the filter up front in the AdvancedCache method. This also brings up the point: should we require a filter to always be provided? Unfortunately this doesn't prevent a user from querying every entry, as they can just use a filter that accepts all key/value pairs. Method on AdvancedCache:

EntryIterable<K, V> entryIterable(KeyValueFilter<K, V> filter)

where EntryIterable is defined as:

public interface EntryIterable<K, V> extends CloseableIterable<CacheEntry<K, V>> {
   public <C> CloseableIterable<CacheEntry<K, C>> converter(Converter<K, V, C> converter);
}

The usage would be identical to #2 except the filter is always provided.
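For what it's worth, here is roughly what option 1 would look like at a call site, with a throwaway filter implementation. The accept signature is assumed - the SPI details are exactly what's being decided here - but since CloseableIterable extends Closeable, try-with-resources can take care of releasing the underlying resources:

// Hypothetical filter keeping only entries whose key starts with a prefix.
public class PrefixFilter implements KeyValueFilter<String, String> {
   private final String prefix;

   public PrefixFilter(String prefix) {
      this.prefix = prefix;
   }

   @Override
   public boolean accept(String key, String value) {
      return key.startsWith(prefix);
   }
}

// Call site: iterate the filtered entries and close when done.
try (CloseableIterable<CacheEntry<String, String>> iterable =
      cache.entryIterable(new PrefixFilter("user:"))) {
   for (CacheEntry<String, String> entry : iterable) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
   }
}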
Let me know what you guys think or if you have any other suggestions. Thanks, - Will From sanne at infinispan.org Wed Apr 30 10:31:50 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 30 Apr 2014 15:31:50 +0100 Subject: [infinispan-dev] Infinispan Test language level to Java 8? In-Reply-To: <5360E8C5.7010005@redhat.com> References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> <5360E8C5.7010005@redhat.com> Message-ID: Valid concerns, but I think we should split those into two very different categories: 1- we provide testing utilities which are quite useful to other people too 2- we run unit tests on our own code to prevent regressions If we split the utilities into a properly delivered package - built with Java7, having its very own Maven identity and maybe even a user guide - that would be even more useful to consumers. For example I use some of the utilities in both Hibernate Search and Hibernate OGM, depending on the testing classifier of infinispan-core. I'd prefer to depend on a "proper" module with a somewhat stable API, and this would be a great improvement for our users who start playing with Infinispan. I often refer to our testsuite to explain how to set up things. For the second use case - our own test execution - I see great advantages from using Java8. First off, to verify that the APIs we're developing today will make sense in a lambda enabled world: we might not baseline on it today, but it's very hard to do forward-compatible thinking without actually experimenting with the API in TDD before this is cast in stone. Remember TDD is a design methodology, not a QA approach. But I agree with Adrian on not wanting to fully trust animal-sniffer with this task, nor do I like the "flexibility" we have in IDEs for a single module being mixed. For the record, Hibernate has long been keeping the test infrastructure in a different module; we could explore an alternative code organization. While it's important to have some core tests closely coupled with the module they're meant to test, I don't see why we couldn't have additional tests in a different module? +1 to have at least one module using (requiring) Java8.
Yes, contributors will need to have it around. I don't see a problem; any potentially good contributor should have it around by now. Sanne On 30 April 2014 13:12, Adrian Nistor wrote: > > Another potential problem, as rightly pointed out by Will on IRC, is > that it would also cause issues for anyone trying to run our testsuite > with JDK7 or earlier, if anyone is doing such a thing. > > Galder, we may be doing such a thing :) The test suite is meant to > verify correctness of our libraries when executed against a concrete set > of external dependencies, with clearly specified supported versions or > version intervals - the jdk being the most important of them. > > Since we'll no longer be able to run on jdk 7 we can no longer support > it. Even if animal-sniffer cheerfully reports we've not broken binary > compat, that still does not mean much when it comes to jdk version > specific issues, or jdk maker specific issues (remember the IBM jdk > oddities). > > Mavenwise, I think it is not possible to have a different compiler > language level for module sources vs. test sources, and Eclipse and > Intellij also cannot cope with two source levels per module, so this > would introduce some unnecessary development discomfort. I would vote > no for this. > > Adrian > > On 04/30/2014 02:55 PM, Galder Zamarreño wrote: >> On 30 Apr 2014, at 13:36, Galder Zamarreño wrote: >>> Hi all, >>> >>> Just thinking out loud: what about we start using JDK8+ for all the test code in Infinispan? >>> >>> The production code would still have language level 6/7 (whatever is required?). >>> >>> This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boiler plate code currently existing in the tests. >>> >>> This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us. >> Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing. >> >>> Thoughts? >>> >>> p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview on what's new in JDK8 along with small code samples. >>> -- >>> Galder Zamarreño >>> galder at redhat.com >>> twitter.com/galderz >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From anistor at redhat.com Wed Apr 30 12:41:55 2014 From: anistor at redhat.com (Adrian Nistor) Date: Wed, 30 Apr 2014 19:41:55 +0300 Subject: [infinispan-dev] Infinispan Test language level to Java 8? In-Reply-To: References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> <5360E8C5.7010005@redhat.com> Message-ID: <536127D3.5090307@redhat.com> I don't take those concerns as lightly as you put them. Animal-sniffer is just a quick fail-fast check for binary compatibility of type hierarchy and method signatures, but that's not everything.
The only sure way to test against a particular jdk version is to actually run the test suite with it. Seeing is believing. That's why we have separate CI jobs running against older supported jdks. I fail to see how this works when our unit tests are written using jdk 8 features. On 04/30/2014 05:31 PM, Sanne Grinovero wrote: > Valid concerns, but I think we should split those into two very > different categories: > 1- we provide testing utilities which are quite useful to other people too > 2- we run unit tests on our own code to prevent regressions > > If we split the utilities into a properly delivered package - built > with Java7, having its very own Maven identity and maybe even a user > guide - that would be even more useful to consumers. For example I use > some of the utilities in both Hibernate Search and Hibernate OGM, > depending on the testing classifier of infinispan-core. I'd prefer > to depend on a "proper" module with a somewhat stable API, and this > would be a great improvement for our users who start playing with > Infinispan. I often refer to our testsuite to explain how to set up > things. > > For the second use case - our own test execution - I see great > advantages from using Java8. First off, to verify that the APIs we're > developing today will make sense in a lambda enabled world: we might > not baseline on it today, but it's very hard to do forward-compatible > thinking without actually experimenting with the API in TDD before > this is cast in stone. Remember TDD is a design methodology, not a QA > approach. > > But I agree with Adrian on not wanting to fully trust animal-sniffer > with this task, nor do I like the "flexibility" we have in IDEs for a > single module being mixed. > For the record, Hibernate has long been keeping the test > infrastructure in a different module; we could explore an alternative > code organization. While it's important to have some core tests > closely coupled with the module they're meant to test, I don't see why we > couldn't have additional tests in a different module? > > +1 to have at least one module using (requiring) Java8. Yes, > contributors will need to have it around. I don't see a problem; any > potentially good contributor should have it around by now. > > Sanne > > > > On 30 April 2014 13:12, Adrian Nistor wrote: >> > Another potential problem, as rightly pointed out by Will on IRC, is >> that it would also cause issues for anyone trying to run our testsuite >> with JDK7 or earlier, if anyone is doing such a thing. >> >> Galder, we may be doing such a thing :) The test suite is meant to >> verify correctness of our libraries when executed against a concrete set >> of external dependencies, with clearly specified supported versions or >> version intervals - the jdk being the most important of them. >> >> Since we'll no longer be able to run on jdk 7 we can no longer support >> it. Even if animal-sniffer cheerfully reports we've not broken binary >> compat, that still does not mean much when it comes to jdk version >> specific issues, or jdk maker specific issues (remember the IBM jdk >> oddities). >> >> Mavenwise, I think it is not possible to have a different compiler >> language level for module sources vs. test sources, and Eclipse and >> Intellij also cannot cope with two source levels per module, so this >> would introduce some unnecessary development discomfort. I would vote >> no for this.
>>
>> Adrian
>>
>> On 04/30/2014 02:55 PM, Galder Zamarreño wrote:
>>> On 30 Apr 2014, at 13:36, Galder Zamarreño wrote:
>>>
>>>> Hi all,
>>>>
>>>> Just thinking out loud: what about starting to use JDK8+ for all the test code in Infinispan?
>>>>
>>>> The production code would still have language level 6/7 (whatever is required?).
>>>>
>>>> This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boilerplate code currently existing in the tests.
>>>>
>>>> This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us.
>>> Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing.
>>>
>>>> Thoughts?
>>>>
>>>> p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview of what's new in JDK8 along with small code samples.
>>>> --
>>>> Galder Zamarreño
>>>> galder at redhat.com
>>>> twitter.com/galderz
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> --
>>> Galder Zamarreño
>>> galder at redhat.com
>>> twitter.com/galderz
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From sanne at infinispan.org Wed Apr 30 12:54:07 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 30 Apr 2014 17:54:07 +0100
Subject: [infinispan-dev] Infinispan Test language level to Java 8?
In-Reply-To: <536127D3.5090307@redhat.com>
References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com>
 <5360E8C5.7010005@redhat.com> <536127D3.5090307@redhat.com>
Message-ID:

On 30 April 2014 17:41, Adrian Nistor wrote:
> I don't take those concerns as lightly as you put them.
>
> Animal-sniffer is just a quick fail-fast check for binary compatibility
> of type hierarchy and method signatures, but that's not everything. The
> only sure way to test against a particular jdk version is to actually
> run the test suite with it. Seeing is believing. That's why we have
> separate CI jobs running against older supported jdks. I fail to see
> how this works when our unit tests are written using jdk 8 features.

I'm assuming we would not jump to rewrite all existing tests, and that we
would be running most tests on CI using Java7. If I misunderstood, then
yes I agree with you and would be against the proposal. As mentioned
below, I don't want to rely on animal-sniffer for this.
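To be clear, I do see the payoff Galder describes for newly written tests.
A hypothetical before/after - assuming a single-method condition callback
along the lines of the eventually() helper in our test support code; the
names and the cache under test are purely illustrative:

    // JDK 7 style: anonymous inner class for a polled assertion
    eventually(new Condition() {
        @Override
        public boolean isSatisfied() throws Exception {
            return cache.size() == 2;
        }
    });

    // JDK 8 style: the same assertion as a lambda
    eventually(() -> cache.size() == 2);

The production sources would be untouched; only the modules opting into
this style would require the newer jdk to build.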
Sanne

> On 04/30/2014 05:31 PM, Sanne Grinovero wrote:
>> Valid concerns, but I think we should split those into two very
>> different categories:
>> 1- we provide testing utilities which are quite useful to other people too
>> 2- we run unit tests on our own code to prevent regressions
>>
>> If we split the utilities into a properly delivered package - built
>> with Java7, having its very own Maven identity and maybe even a user
>> guide - that would be even more useful to consumers. For example I use
>> some of the utilities in both Hibernate Search and Hibernate OGM,
>> depending on the testing classifier of infinispan-core. I'd prefer
>> to depend on a "proper" module with a somewhat stable API, and this
>> would be a great improvement for our users who start playing with
>> Infinispan. I often refer to our testsuite to explain how to set up
>> things.
>>
>> For the second use case - our own test execution - I see great
>> advantages from using Java8. First off, to verify that the APIs we're
>> developing today will make sense in a lambda-enabled world: we might
>> not baseline on it today, but it's very hard to do forward-compatible
>> thinking without actually experimenting with the API in TDD before
>> this is cast in stone. Remember TDD is a design methodology, not a QA
>> approach.
>>
>> But I agree with Adrian on not wanting to fully trust animal-sniffer
>> with this task, nor do I like the "flexibility" we have in IDEs for a
>> single module being mixed.
>> For the record, Hibernate has long been keeping the test
>> infrastructure in a different module; we could explore an alternative
>> code organization. While it's important to have some core tests
>> closely coupled with the module they're meant to test, I don't see why
>> we couldn't have additional tests in a different module.
>>
>> +1 to have at least one module using (requiring) Java8. Yes,
>> contributors will need to have it around. I don't see a problem: any
>> potentially good contributor should have it around by now.
>>
>> Sanne
>>
>> On 30 April 2014 13:12, Adrian Nistor wrote:
>>> > Another potential problem, as rightly pointed out by Will on IRC, is
>>> > that it would also cause issues for anyone trying to run our testsuite
>>> > with JDK7 or earlier, if anyone is doing such a thing.
>>>
>>> Galder, we may be doing such a thing :) The test suite is meant to
>>> verify correctness of our libraries when executed against a concrete set
>>> of external dependencies, with clearly specified supported versions or
>>> version intervals - the jdk being the most important of them.
>>>
>>> Since we'll no longer be able to run on jdk 7, we can no longer support
>>> jdk 7. Even if animal-sniffer cheerfully reports we've not broken binary
>>> compat, that still does not mean much when it comes to jdk version
>>> specific issues, or jdk maker specific issues (remember the IBM jdk
>>> oddities).
>>>
>>> Maven-wise, I think it is not possible to have a different compiler
>>> language level for module sources vs. test sources, and Eclipse and
>>> IntelliJ also cannot cope with two source levels per module, so this
>>> would introduce some unnecessary development discomfort. I would vote
>>> no for this.
>>>
>>> Adrian
>>>
>>> On 04/30/2014 02:55 PM, Galder Zamarreño wrote:
>>>> On 30 Apr 2014, at 13:36, Galder Zamarreño wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Just thinking out loud: what about starting to use JDK8+ for all the test code in Infinispan?
>>>>>
>>>>> The production code would still have language level 6/7 (whatever is required?).
>>>>>
>>>>> This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boilerplate code currently existing in the tests.
>>>>>
>>>>> This would only be problematic for anyone consuming our test jars. They'd need to move up to JDK8+ along with us.
>>>> Another potential problem, as rightly pointed out by Will on IRC, is that it would also cause issues for anyone trying to run our testsuite with JDK7 or earlier, if anyone is doing such a thing.
>>>>
>>>>> Thoughts?
>>>>>
>>>>> p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview of what's new in JDK8 along with small code samples.
>>>>> --
>>>>> Galder Zamarreño
>>>>> galder at redhat.com
>>>>> twitter.com/galderz
>>>>>
>>>>> _______________________________________________
>>>>> infinispan-dev mailing list
>>>>> infinispan-dev at lists.jboss.org
>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>> --
>>>> Galder Zamarreño
>>>> galder at redhat.com
>>>> twitter.com/galderz
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From sanne at infinispan.org Wed Apr 30 12:58:19 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 30 Apr 2014 17:58:19 +0100
Subject: [infinispan-dev] Infinispan 7.0 to Java 7
In-Reply-To: <1397061380.2547.23.camel@T520>
References: <04FC5117-C2E3-4187-9E3A-59B2A6915094@redhat.com>
 <1397061380.2547.23.camel@T520>
Message-ID:

I think anyone opposed has had enough time to speak up by now, so I'm
assuming we're all good.

It's tracked as ISPN-4254 and I'm about to send a pull request, as Lucene
4.8 is released and I need to change our build to be able to even test it.

Sanne

On 9 April 2014 17:36, Paul Ferraro wrote:
> As an EE7 application server, WF already requires Java SE 7.
>
> On Wed, 2014-04-09 at 17:30 +0100, Mircea Markus wrote:
>> Hi guys,
>>
>> Hibernate Search 5.0 is moving to Java 7 (among other reasons, because Lucene 4.8 requires it).
>> For us it makes a lot of sense to bring in HSearch 5/Lucene 4 rather soon, as it's important for remote querying.
>> How does that sound?
>> Paul, how does that fit with the WF integration?
>>
>> Cheers,
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
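The build change referred to for ISPN-4254 essentially boils down to
raising the compiler level in the parent POM. A sketch using the standard
maven-compiler-plugin - illustrative, not the actual pull request:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <!-- raise the language level of all modules to Java 7 -->
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>

With this in place, building on an older JDK fails fast at compile time
rather than surfacing later as test failures.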