From sanne at infinispan.org Wed Jul 6 07:11:21 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 6 Jul 2016 12:11:21 +0100 Subject: [infinispan-dev] Deprecating the @ProvidedId annotation w/o a replacement in place Message-ID: I'm deprecating the `org.hibernate.search.annotations.ProvidedId` annotation in Hibernate Search. This was originally introduced when Infinispan Query was first designed as a way to mark the Infinispan value object as "something which doesn't contain the id", since in the key/value store world the key can usually not be extracted from the value (which is a difference from the Hibernate ORM world). In the early days, this meant that all indexed objects in Infinispan had to be marked with this, but we quickly fixed this oddness by simply assuming that - when using Hibernate Search to index Infinispan objects - we might as well consider them all annotated with @ProvidedId implicitly. So the main reason for this annotation to exist is long gone, but its role evolved beyond that. This annotation also enabled a couple more features: A] allow the user to pick the index field name used to store the IDs B] allow binding a custom FieldBridge to the key type # A: customizing the field name from "providedId" I don't think this is actually very useful. It is complex to handle when different types might want to override this, and the rules for how this applies across inherited types are unclear. I'm proposing we take this "mapping flexibility" away with no replacement. # B: custom FieldBridge for indexing of Infinispan keys Infinispan already has the notion of Transformers, which is similar but not quite the same. The differences are confusing, and neither of them actually makes it very clear how to e.g. search by some attribute of the key type. Clearly there's a need for a better approach to deal with keys, and @ProvidedId doesn't fit well in such plans. For now I plan to mark @ProvidedId as deprecated, although I won't remove it until we have an alternative in place to better deal with keys. However, I'm unable to properly document what its replacement should be until we have fleshed out the alternative. I'd like to proceed with the deprecation even without a replacement in place, as I suspect what we had so far for indexing keys was not good enough anyway. Deprecating it is rather urgent, as it is currently quite confusing when this annotation should be used. Thanks, Sanne

From slaskawi at redhat.com Fri Jul 8 09:26:14 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 8 Jul 2016 15:26:14 +0200 Subject: [infinispan-dev] Apache Tamaya Message-ID: Hey! I just stumbled upon http://tamaya.incubator.apache.org. Looks pretty interesting, maybe we could use it for managing property-based configuration? Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160708/13ed05ac/attachment-0001.html

From rhauch at redhat.com Sat Jul 9 09:38:17 2016 From: rhauch at redhat.com (Randall Hauch) Date: Sat, 9 Jul 2016 08:38:17 -0500 Subject: [infinispan-dev] Infinispan and change data capture Message-ID: <2569A6BA-FBC2-40A7-A821-26676F10BEB0@redhat.com> The Debezium project [1] is working on building change data capture connectors for a variety of databases. MySQL is available now, MongoDB will be soon, and PostgreSQL and Oracle are next on our roadmap.
One way in which Debezium and Infinispan can be used together is when Infinispan is being used as a cache for data stored in a database. In this case, Debezium can capture the changes to the database and produce a stream of events; a separate process can consume these changes and evict entries from an Infinispan cache.

If Infinispan is to be used as a data store, then it would be useful for Debezium to be able to capture those changes so other apps/services can consume the changes. First of all, does this make sense? Secondly, if it does, then Debezium would need an Infinispan connector, and it's not clear to me how that connector might capture the changes from Infinispan.

Debezium typically monitors the log of transactions/changes that are committed to a database. Of course how this works varies for each type of database. For example, MySQL internally produces a transaction log that contains information about every committed row change, and MySQL ensures that every committed change is included and that non-committed changes are excluded. The MySQL mechanism is actually part of the replication mechanism, so slaves update their internal state by reading the master's log. The Debezium MySQL connector [2] simply reads the same log.

Infinispan has several mechanisms that may be useful:

* Interceptors - See [3]. This seems pretty straightforward and IIUC provides access to all internal operations. However, it's not clear to me whether a single interceptor will see all the changes in a cluster (perhaps in local and replicated modes) or only those changes that happen on that particular node (in distributed mode). It's also not clear whether this interceptor is called within the context of the cache's transaction, so if a failure happens just at the wrong time whether a change might be made to the cache but is not seen by the interceptor (or vice versa).
* Cross-site replication - See [4][5]. A potential advantage of this mechanism appears to be that it is defined (more) globally, and it appears to function if the remote backup comes back online after being offline for a period of time.
* State transfer - is it possible to participate as a non-active member of the cluster, and to effectively read all state transfer activities that occur within the cluster?
* Cache store - tie into the cache store mechanism, perhaps by wrapping an existing cache store and sitting between the cache and the cache store
* Monitor the cache store - don't monitor Infinispan at all, and instead monitor the store in which Infinispan is storing entries. (This is probably the least attractive, since some stores can't be monitored, or because the store is persisting an opaque binary value.)

Are there other mechanisms that might be used?

There are a couple of important requirements for change data capture to be able to work correctly: Upon initial connection, the CDC connector must be able to obtain a snapshot of all existing data, followed by seeing all changes to data that may have occurred since the snapshot was started. If the connector is stopped/fails, upon restart it needs to be able to reconnect and either see all changes that occurred since it last was capturing changes, or perform a snapshot. (Performing a snapshot upon restart is very inefficient and undesirable.) This works as follows: the CDC connector only records the "offset" in the source's sequence of events; what this "offset" entails depends on the source.
Upon restart, the connector can use this offset information to coordinate with the source where it wants to start reading. (In MySQL and PostgreSQL, every event includes the filename of the log and position in that file. MongoDB includes in each event the monotonically increasing timestamp of the transaction. No change can be missed, even when things go wrong and components crash. When a new entry is added, the ?after? state of the entity will be included. When an entry is updated, the ?after? state will be included in the event; if possible, the event should also include the ?before? state. When an entry is removed, the ?before? state should be included in the event. Any thoughts or advice would be greatly appreciated. Best regards, Randall [1] http://debezium.io [2] http://debezium.io/docs/connectors/mysql/ [3] http://infinispan.org/docs/stable/user_guide/user_guide.html#_custom_interceptors_chapter [4] http://infinispan.org/docs/stable/user_guide/user_guide.html#CrossSiteReplication [5] https://github.com/infinispan/infinispan/wiki/Design-For-Cross-Site-Replication -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160709/cc042989/attachment.html From anistor at redhat.com Mon Jul 11 04:42:23 2016 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 11 Jul 2016 11:42:23 +0300 Subject: [infinispan-dev] Infinispan and change data capture In-Reply-To: <2569A6BA-FBC2-40A7-A821-26676F10BEB0@redhat.com> References: <2569A6BA-FBC2-40A7-A821-26676F10BEB0@redhat.com> Message-ID: <57835BEF.4010902@redhat.com> Hi Randall, Infinispan supports both push and pull access models. The push model is supported by events (and listeners), which are cluster wide and are available in both library and remote mode (hotrod). The notification system is pretty advanced as there is a filtering mechanism available that can use a hand coded filter / converter or one specified in jpql (experimental atm). Getting a snapshot of the initial data is also possible. But infinispan does not produce a transaction log to be used for determining all changes that happened since a previous connection time, so you'll always have to get a new full snapshot when re-connecting. So if Infinispan is the data store I would base the Debezium connector implementation on Infinispan's event notification system. Not sure about the other use case though. Adrian On 07/09/2016 04:38 PM, Randall Hauch wrote: > The Debezium project [1] is working on building change data capture > connectors for a variety of databases. MySQL is available now, MongoDB > will be soon, and PostgreSQL and Oracle are next on our roadmap. > > One way in which Debezium and Infinispan can be used together is when > Infinispan is being used as a cache for data stored in a database. In > this case, Debezium can capture the changes to the database and > produce a stream of events; a separate process can consume these > change and evict entries from an Infinispan cache. > > If Infinispan is to be used as a data store, then it would be useful > for Debezium to be able to capture those changes so other > apps/services can consume the changes. First of all, does this make > sense? Secondly, if it does, then Debezium would need an Infinispan > connector, and it?s not clear to me how that connector might capture > the changes from Infinispan. > > Debezium typically monitors the log of transactions/changes that are > committed to a database. 
Of course how this works varies for each type > of database. For example, MySQL internally produces a transaction log > that contains information about every committed row change, and MySQL > ensures that every committed change is included and that non-committed > changes are excluded. The MySQL mechanism is actually part of the > replication mechanism, so slaves update their internal state by > reading the master?s log. The Debezium MySQL connector [2] simply > reads the same log. > > Infinispan has several mechanisms that may be useful: > > * Interceptors - See [3]. This seems pretty straightforward and IIUC > provides access to all internal operations. However, it?s not > clear to me whether a single interceptor will see all the changes > in a cluster (perhaps in local and replicated modes) or only those > changes that happen on that particular node (in distributed mode). > It?s also not clear whether this interceptor is called within the > context of the cache?s transaction, so if a failure happens just > at the wrong time whether a change might be made to the cache but > is not seen by the interceptor (or vice versa). > * Cross-site replication - See [4][5]. A potential advantage of this > mechanism appears to be that it is defined (more) globally, and it > appears to function if the remote backup comes back online after > being offline for a period of time. > * State transfer - is it possible to participate as a non-active > member of the cluster, and to effectively read all state transfer > activities that occur within the cluster? > * Cache store - tie into the cache store mechanism, perhaps by > wrapping an existing cache store and sitting between the cache and > the cache store > * Monitor the cache store - don?t monitor Infinispan at all, and > instead monitor the store in which Infinispan is storing entries. > (This is probably the least attractive, since some stores can?t be > monitored, or because the store is persisting an opaque binary value.) > > > Are there other mechanism that might be used? > > There are a couple of important requirements for change data capture > to be able to work correctly: > > 1. Upon initial connection, the CDC connector must be able to obtain > a snapshot of all existing data, followed by seeing all changes to > data that may have occurred since the snapshot was started. If the > connector is stopped/fails, upon restart it needs to be able to > reconnect and either see all changes that occurred since it last > was capturing changes, or perform a snapshot. (Performing a > snapshot upon restart is very inefficient and undesirable.) This > works as follows: the CDC connector only records the ?offset? in > the source?s sequence of events; what this ?offset? entails > depends on the source. Upon restart, the connector can use this > offset information to coordinate with the source where it wants to > start reading. (In MySQL and PostgreSQL, every event includes the > filename of the log and position in that file. MongoDB includes in > each event the monotonically increasing timestamp of the transaction. > 2. No change can be missed, even when things go wrong and components > crash. > 3. When a new entry is added, the ?after? state of the entity will be > included. When an entry is updated, the ?after? state will be > included in the event; if possible, the event should also include > the ?before? state. When an entry is removed, the ?before? state > should be included in the event. > > > Any thoughts or advice would be greatly appreciated. 
> > Best regards, > > Randall > > > [1] http://debezium.io > [2] http://debezium.io/docs/connectors/mysql/ > [3] > http://infinispan.org/docs/stable/user_guide/user_guide.html#_custom_interceptors_chapter > [4] > http://infinispan.org/docs/stable/user_guide/user_guide.html#CrossSiteReplication > [5] > https://github.com/infinispan/infinispan/wiki/Design-For-Cross-Site-Replication > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160711/6366c9f5/attachment-0001.html From rhauch at redhat.com Mon Jul 11 10:41:59 2016 From: rhauch at redhat.com (Randall Hauch) Date: Mon, 11 Jul 2016 09:41:59 -0500 Subject: [infinispan-dev] Infinispan and change data capture In-Reply-To: <57835BEF.4010902@redhat.com> References: <2569A6BA-FBC2-40A7-A821-26676F10BEB0@redhat.com> <57835BEF.4010902@redhat.com> Message-ID: <8EF34011-3667-495B-8191-F2ED4286F0FA@redhat.com> > On Jul 11, 2016, at 3:42 AM, Adrian Nistor wrote: > > Hi Randall, > > Infinispan supports both push and pull access models. The push model is supported by events (and listeners), which are cluster wide and are available in both library and remote mode (hotrod). The notification system is pretty advanced as there is a filtering mechanism available that can use a hand coded filter / converter or one specified in jpql (experimental atm). Getting a snapshot of the initial data is also possible. But infinispan does not produce a transaction log to be used for determining all changes that happened since a previous connection time, so you'll always have to get a new full snapshot when re-connecting. > > So if Infinispan is the data store I would base the Debezium connector implementation on Infinispan's event notification system. Not sure about the other use case though. > Thanks, Adrian, for the feedback. A couple of questions. You mentioned Infinispan has a pull model ? is this just using the normal API to read the entries? With event listeners, a single connection will receive all of the events that occur in the cluster, correct? Is it possible (e.g., a very unfortunately timed crash) for a change to be made to the cache without an event being produced and sent to listeners? What happens if the network fails or partitions? How does cross site replication address this? Has there been any thought about adding to Infinispan a write ahead log or transaction log to each node or, better yet, for the whole cluster? Thanks again! > Adrian > > On 07/09/2016 04:38 PM, Randall Hauch wrote: >> The Debezium project [1] is working on building change data capture connectors for a variety of databases. MySQL is available now, MongoDB will be soon, and PostgreSQL and Oracle are next on our roadmap. >> >> One way in which Debezium and Infinispan can be used together is when Infinispan is being used as a cache for data stored in a database. In this case, Debezium can capture the changes to the database and produce a stream of events; a separate process can consume these change and evict entries from an Infinispan cache. >> >> If Infinispan is to be used as a data store, then it would be useful for Debezium to be able to capture those changes so other apps/services can consume the changes. First of all, does this make sense? 
Secondly, if it does, then Debezium would need an Infinispan connector, and it?s not clear to me how that connector might capture the changes from Infinispan. >> >> Debezium typically monitors the log of transactions/changes that are committed to a database. Of course how this works varies for each type of database. For example, MySQL internally produces a transaction log that contains information about every committed row change, and MySQL ensures that every committed change is included and that non-committed changes are excluded. The MySQL mechanism is actually part of the replication mechanism, so slaves update their internal state by reading the master?s log. The Debezium MySQL connector [2] simply reads the same log. >> >> Infinispan has several mechanisms that may be useful: >> >> Interceptors - See [3]. This seems pretty straightforward and IIUC provides access to all internal operations. However, it?s not clear to me whether a single interceptor will see all the changes in a cluster (perhaps in local and replicated modes) or only those changes that happen on that particular node (in distributed mode). It?s also not clear whether this interceptor is called within the context of the cache?s transaction, so if a failure happens just at the wrong time whether a change might be made to the cache but is not seen by the interceptor (or vice versa). >> Cross-site replication - See [4][5]. A potential advantage of this mechanism appears to be that it is defined (more) globally, and it appears to function if the remote backup comes back online after being offline for a period of time. >> State transfer - is it possible to participate as a non-active member of the cluster, and to effectively read all state transfer activities that occur within the cluster? >> Cache store - tie into the cache store mechanism, perhaps by wrapping an existing cache store and sitting between the cache and the cache store >> Monitor the cache store - don?t monitor Infinispan at all, and instead monitor the store in which Infinispan is storing entries. (This is probably the least attractive, since some stores can?t be monitored, or because the store is persisting an opaque binary value.) >> >> Are there other mechanism that might be used? >> >> There are a couple of important requirements for change data capture to be able to work correctly: >> >> Upon initial connection, the CDC connector must be able to obtain a snapshot of all existing data, followed by seeing all changes to data that may have occurred since the snapshot was started. If the connector is stopped/fails, upon restart it needs to be able to reconnect and either see all changes that occurred since it last was capturing changes, or perform a snapshot. (Performing a snapshot upon restart is very inefficient and undesirable.) This works as follows: the CDC connector only records the ?offset? in the source?s sequence of events; what this ?offset? entails depends on the source. Upon restart, the connector can use this offset information to coordinate with the source where it wants to start reading. (In MySQL and PostgreSQL, every event includes the filename of the log and position in that file. MongoDB includes in each event the monotonically increasing timestamp of the transaction. >> No change can be missed, even when things go wrong and components crash. >> When a new entry is added, the ?after? state of the entity will be included. When an entry is updated, the ?after? 
state will be included in the event; if possible, the event should also include the ?before? state. When an entry is removed, the ?before? state should be included in the event. >> >> Any thoughts or advice would be greatly appreciated. >> >> Best regards, >> >> Randall >> >> >> [1] http://debezium.io >> [2] http://debezium.io/docs/connectors/mysql/ >> [3] http://infinispan.org/docs/stable/user_guide/user_guide.html#_custom_interceptors_chapter >> [4] http://infinispan.org/docs/stable/user_guide/user_guide.html#CrossSiteReplication >> [5] https://github.com/infinispan/infinispan/wiki/Design-For-Cross-Site-Replication >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160711/70b4a4c8/attachment.html From gustavo at infinispan.org Mon Jul 11 11:16:13 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 11 Jul 2016 16:16:13 +0100 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2016-07-11 Message-ID: Hello everyone, The logs from our weekly meeting on #infinispan are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2016/infinispan.2016-07-11-14.01.log.html Cheers, Gustavo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160711/1388e58d/attachment.html From vblagoje at redhat.com Mon Jul 11 18:33:34 2016 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Mon, 11 Jul 2016 18:33:34 -0400 Subject: [infinispan-dev] Infinispan 8.2.3.Final and 9.0.0.Alpha3 Message-ID: <27fa02b2-26cc-cb3e-4461-c8498cd71fdb@redhat.com> Hey guys, Over the weekend we released Infinispan 9.0.0.Alpha3 and Infinispan 8.2.3.Final. Read more about it at http://blog.infinispan.org/2016/07/infinispan-900alpha3-and-823final.html All the best, Vladimir From slaskawi at redhat.com Thu Jul 14 06:17:40 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 14 Jul 2016 12:17:40 +0200 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes Message-ID: Hey! I've been thinking about potential use of Kubernetes/OpenShift (OpenShift = Kubernetes + additional features) Rolling Update mechanism for updating configuration of Hot Rod servers. You might find some more information about the rolling updates here [1][2] but putting it simply, Kubernetes replaces nodes in the cluster one at a time. What's worth mentioning, Kubernetes ensures that the newly created replica is fully operational before taking down another one. There are two things that make me scratching my head... #1 - What type of configuration changes can we introduce using rolling updates? I'm pretty sure introducing a new cache definition won't do any harm. But what if we change a cache type from Distributed to Replicated? Do you have any idea which configuration changes are safe and which are not? Could come up with such list? #2 - How to prevent loosing data during the rolling update process? In Kubernetes we have a mechanism called lifecycle hooks [3] (we can invoke a script during container startup/shutdown). The problem with shutdown script is that it's time constrained (if it won't end up within certain amount of time, Kubernetes will simply kill the container). Fortunately this time is configurable. 
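For illustration only, here is a rough embedded-mode sketch of the kind of wait-and-stop logic such a shutdown hook could drive on the node being replaced. None of this is a ready-made recipe: the method name waitAndStop is made up, and DistributionManager#isRehashInProgress is used merely as one way to observe an ongoing rebalance, not as a full data-safety guarantee.

    import org.infinispan.AdvancedCache;
    import org.infinispan.manager.DefaultCacheManager;

    public class GracefulShutdownSketch {
       // Called from the container's shutdown hook; timeoutMillis should roughly
       // match the termination grace period configured on the Kubernetes side.
       public static void waitAndStop(DefaultCacheManager cacheManager, long timeoutMillis)
             throws InterruptedException {
          long deadline = System.currentTimeMillis() + timeoutMillis;
          for (String name : cacheManager.getCacheNames()) {
             AdvancedCache<?, ?> cache = cacheManager.getCache(name).getAdvancedCache();
             // Only clustered caches have a DistributionManager; wait until no
             // rebalance is in flight or the deadline is reached.
             while (cache.getDistributionManager() != null
                   && cache.getDistributionManager().isRehashInProgress()
                   && System.currentTimeMillis() < deadline) {
                Thread.sleep(1000);
             }
          }
          cacheManager.stop();   // leave the cluster once things look settled
       }
    }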
The idea to prevent from loosing data would be to invoke (enquque and wait for finish) state transfer process triggered by the shutdown hook (with timeout set to maximum value). If for some reason this won't work (e.g. a user has so much data that migrating it this way would take ages), there is a backup plan - Infinispan Rolling Upgrades [4]. What do you think about this? Thanks Sebastian [1] https://www.youtube.com/watch?v=9C6YeyyUUmI [2] http://kubernetes.io/docs/user-guide/rolling-updates/ [3] http://kubernetes.io/docs/user-guide/container-environment/#container-hooks [4] http://infinispan.org/docs/stable/user_guide/user_guide.html#_Rolling_chapter -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160714/482a578f/attachment-0001.html From slaskawi at redhat.com Mon Jul 18 03:14:08 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 18 Jul 2016 09:14:08 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey! Dan pointed out a very interesting thing [1] - we could use host header for multi-tenant REST endpoints. Although I really like the idea (this header was introduced to support this kind of use cases), it might be a bit problematic from security point of view (if someone forgets to set it, he'll be talking to someone else Cache Container). What do you think about this? Should we implement this (now or later)? I vote for yes and implement it in 9.1 (or 9.0 if there is enough time). Thanks Sebastian On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec wrote: > Hey! > > The multi-tenancy support for Hot Rod and REST has been implemented [2]. > Since the PR is gigantic, I marked some interesting places for review so > you might want to skip boilerplate parts. > > The Memcached and WebSockets implementations are currently out of scope. > If you would like us to implement them, please vote on the following > tickets: > > - Memcached https://issues.jboss.org/browse/ISPN-6639 > - Web Sockets https://issues.jboss.org/browse/ISPN-6638 > > Thanks > Sebastian > > [2] https://github.com/infinispan/infinispan/pull/4348 > > On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec > wrote: > >> Hey Galder! >> >> Comments inlined. >> >> Thanks >> Sebastian >> >> On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o >> wrote: >> >>> Hi all, >>> >>> Sorry for the delay getting back on this. >>> >>> The addition of a new component does not worry me so much. It has the >>> advantage of implementing it once independent of the backend endpoint, >>> whether HR or Rest. >>> >>> What I'm struggling to understand is what protocol the clients will use >>> to talk to the router. It seems wasteful having to build two protocols at >>> this level, e.g. one at TCP level and one at REST level. If you're going to >>> end up building two protocols, the benefit of the router component >>> dissapears and then you might as well embedded the two routing protocols >>> within REST and HR directly. >>> >> >> I think I wasn't clear enough in the design how the routing works... >> >> In your scenario - both servers (hotrod and rest) will start >> EmbeddedCacheManagers internally but none of them will start Netty >> transport. The only transport that will be turned on is the router. 
The >> router will be responsible for recognizing the request type (if HTTP - find >> proper REST server, if HotRod protocol - find proper HotRod) and attaching >> handlers at the end of the pipeline. >> >> Regarding to custom protocol (this usecase could be used with Hotrod >> clients which do not use SSL (so SNI routing is not possible)), you and >> Tristan got me thinking whether we really need it. Maybe we should require >> SSL+SNI when using HotRod protocol with no exceptions? The thing that >> bothers me is that SSL makes the whole setup twice slower: >> https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754 >> >> >>> >>> In other words, for the router component to make sense, I think it >>> should: >>> >>> 1. Clients, no matter whether HR or REST, to use 1 single protocol to >>> the router. The natural thing here would be HTTP/2 or similar protocol. >>> >> >> Yes, that's the goal. >> >> >>> 2. The router then talks HR or REST to the backend. Here the router uses >>> TCP or HTTP protocol based on the backend needs. >>> >> >> It's even simpler - it just uses the backend's Netty Handlers. >> >> Since the SNI implementation is ready, please have a look: >> https://github.com/infinispan/infinispan/pull/4348 >> >> >>> >>> ^ The above implies that HR client has to talk TCP when using HR server >>> directly or HTTP/2 when using it via router, but I don't think this is too >>> bad and it gives us some experience working with HTTP/2 besides the work >>> Anton is carrying out as part of GSoC. >> >> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> > On 11 May 2016, at 10:38, Sebastian Laskawiec >>> wrote: >>> > >>> > Hey Tristan! >>> > >>> > If I understood you correctly, you're suggesting to enhance the >>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >>> shared transport and by that I mean started on the same Netty server). >>> > >>> > Yes, that also could work but I'm not convinced if we won't loose some >>> configuration flexibility. >>> > >>> > Let's consider a configuration file - >>> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how >>> for example use authentication for CacheContainer cc1 (and not for cc2) and >>> encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I >>> think using this kind of different options makes sense in terms of multi >>> tenancy. And please note that if we start a new Netty server for each >>> CacheContainer - we almost ended up with the router I proposed. >>> > >>> > The second argument for using a router is extracting the routing logic >>> into a separate module. Otherwise we would probably end up with several >>> if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting >>> this has also additional advantage that we limit changes in those modules >>> (actually there will be probably 2 changes #1 we should be able to start a >>> ProtocolServer without starting a Netty server (the Router will do it in >>> multi tenant configuration) and #2 collect Netty handlers from >>> ProtocolServer). >>> > >>> > To sum it up - the router's implementation seems to be more >>> complicated but in the long run I think it might be worth it. >>> > >>> > I also wrote the summary of the above here: >>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach >>> > >>> > @Galder - you wrote a huge part of the Hot Rod server - I would love >>> to hear your opinion as well. 
>>> > >>> > Thanks >>> > Sebastian >>> > >>> > >>> > >>> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant >>> wrote: >>> > Not sure I like the introduction of another component at the front. >>> > >>> > My original idea for allowing the client to choose the container was: >>> > >>> > - with TLS: use SNI to choose the container >>> > - without TLS: enhance the PING operation of the Hot Rod protocol to >>> > also take the server name. This would need to be a requirement when >>> > exposing multiple containers over the same endpoint. >>> > >>> > From a client API perspective, there would be no difference between >>> the >>> > above two approaches: just specify the server name and depending on the >>> > transport, select the right one. >>> > >>> > Tristan >>> > >>> > On 29/04/2016 17:29, Sebastian Laskawiec wrote: >>> > > Dear Community, >>> > > >>> > > Please have a look at the design of Multi tenancy support for >>> Infinispan >>> > > [1]. I would be more than happy to get some feedback from you. >>> > > >>> > > Highlights: >>> > > >>> > > * The implementation will be based on a Router (which will be built >>> > > based on Netty) >>> > > * Multiple Hot Rod and REST servers will be attached to the router >>> > > which in turn will be attached to the endpoint >>> > > * The router will operate on a binary protocol when using Hot Rod >>> > > clients and path-based routing when using REST >>> > > * Memcached will be out of scope >>> > > * The router will support SSL+SNI >>> > > >>> > > Thanks >>> > > Sebastian >>> > > >>> > > [1] >>> > > >>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server >>> > > >>> > > >>> > > _______________________________________________ >>> > > infinispan-dev mailing list >>> > > infinispan-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > > >>> > >>> > -- >>> > Tristan Tarrant >>> > Infinispan Lead >>> > JBoss, a division of Red Hat >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160718/69d2efcd/attachment.html From ttarrant at redhat.com Mon Jul 18 11:12:06 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 18 Jul 2016 17:12:06 +0200 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes In-Reply-To: References: Message-ID: <578CF1C6.8090106@infinispan.org> On 14/07/16 12:17, Sebastian Laskawiec wrote: > Hey! > > I've been thinking about potential use of Kubernetes/OpenShift > (OpenShift = Kubernetes + additional features) Rolling Update > mechanism for updating configuration of Hot Rod servers. You might > find some more information about the rolling updates here [1][2] but > putting it simply, Kubernetes replaces nodes in the cluster one at a > time. What's worth mentioning, Kubernetes ensures that the newly > created replica is fully operational before taking down another one. 
> > There are two things that make me scratching my head... > > #1 - What type of configuration changes can we introduce using rolling > updates? > > I'm pretty sure introducing a new cache definition won't do any harm. > But what if we change a cache type from Distributed to Replicated? Do > you have any idea which configuration changes are safe and which are > not? Could come up with such list? Very few changes are safe, but obviously this would need to be verified on a per-attribute basis. All of the attributes which can be changed at runtime (timeouts, eviction size) are safe. > > #2 - How to prevent loosing data during the rolling update process? I believe you want to write losing :) > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can > invoke a script during container startup/shutdown). The problem with > shutdown script is that it's time constrained (if it won't end up > within certain amount of time, Kubernetes will simply kill the > container). Fortunately this time is configurable. > > The idea to prevent from loosing data would be to invoke (enquque and > wait for finish) state transfer process triggered by the shutdown hook > (with timeout set to maximum value). If for some reason this won't > work (e.g. a user has so much data that migrating it this way would > take ages), there is a backup plan - Infinispan Rolling Upgrades [4]. The thing that concerns me here is the amount of churn involved: the safest bet for us is that the net topology doesn't change, i.e. you end up with the exact number of nodes you started with and they are replaced one by one in a way that the replacement assumes the identity of the replaced (both as persistent uuid, owned segments and data in a persistent store). Other types could be supported but they will definitely have a level of risk. Also we don't have any guarantees that a newer version will be able to cluster with an older one... Tristan From emmanuel at hibernate.org Tue Jul 19 04:08:58 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 19 Jul 2016 10:08:58 +0200 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes In-Reply-To: <578CF1C6.8090106@infinispan.org> References: <578CF1C6.8090106@infinispan.org> Message-ID: <20160719080858.GJ69003@hibernate.org> Considering very few options can be changed at runtime safely, should we rather focus of a strategy where we start a new grid and populate it with the old grid before flipping the proxy to the new one? On Mon 2016-07-18 17:12, Tristan Tarrant wrote: > On 14/07/16 12:17, Sebastian Laskawiec wrote: > > Hey! > > > > I've been thinking about potential use of Kubernetes/OpenShift > > (OpenShift = Kubernetes + additional features) Rolling Update > > mechanism for updating configuration of Hot Rod servers. You might > > find some more information about the rolling updates here [1][2] but > > putting it simply, Kubernetes replaces nodes in the cluster one at a > > time. What's worth mentioning, Kubernetes ensures that the newly > > created replica is fully operational before taking down another one. > > > > There are two things that make me scratching my head... > > > > #1 - What type of configuration changes can we introduce using rolling > > updates? > > > > I'm pretty sure introducing a new cache definition won't do any harm. > > But what if we change a cache type from Distributed to Replicated? Do > > you have any idea which configuration changes are safe and which are > > not? Could come up with such list? 
> Very few changes are safe, but obviously this would need to be verified > on a per-attribute basis. All of the attributes which can be changed at > runtime (timeouts, eviction size) are safe. > > > > > #2 - How to prevent loosing data during the rolling update process? > I believe you want to write losing :) > > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can > > invoke a script during container startup/shutdown). The problem with > > shutdown script is that it's time constrained (if it won't end up > > within certain amount of time, Kubernetes will simply kill the > > container). Fortunately this time is configurable. > > > > The idea to prevent from loosing data would be to invoke (enquque and > > wait for finish) state transfer process triggered by the shutdown hook > > (with timeout set to maximum value). If for some reason this won't > > work (e.g. a user has so much data that migrating it this way would > > take ages), there is a backup plan - Infinispan Rolling Upgrades [4]. > The thing that concerns me here is the amount of churn involved: the > safest bet for us is that the net topology doesn't change, i.e. you end > up with the exact number of nodes you started with and they are replaced > one by one in a way that the replacement assumes the identity of the > replaced (both as persistent uuid, owned segments and data in a > persistent store). Other types could be supported but they will > definitely have a level of risk. > Also we don't have any guarantees that a newer version will be able to > cluster with an older one... > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Tue Jul 19 05:06:54 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 19 Jul 2016 11:06:54 +0200 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes In-Reply-To: <20160719080858.GJ69003@hibernate.org> References: <578CF1C6.8090106@infinispan.org> <20160719080858.GJ69003@hibernate.org> Message-ID: Hey Tristan, Emmanuel! Comments inlined. Thanks Sebastian On Tue, Jul 19, 2016 at 10:08 AM, Emmanuel Bernard wrote: > Considering very few options can be changed at runtime safely, should we > rather focus of a strategy where we start a new grid and populate it > with the old grid before flipping the proxy to the new one? > +1, that's exactly what the Infinispan Rolling Upgrade does. > > On Mon 2016-07-18 17:12, Tristan Tarrant wrote: > > On 14/07/16 12:17, Sebastian Laskawiec wrote: > > > Hey! > > > > > > I've been thinking about potential use of Kubernetes/OpenShift > > > (OpenShift = Kubernetes + additional features) Rolling Update > > > mechanism for updating configuration of Hot Rod servers. You might > > > find some more information about the rolling updates here [1][2] but > > > putting it simply, Kubernetes replaces nodes in the cluster one at a > > > time. What's worth mentioning, Kubernetes ensures that the newly > > > created replica is fully operational before taking down another one. > > > > > > There are two things that make me scratching my head... > > > > > > #1 - What type of configuration changes can we introduce using rolling > > > updates? > > > > > > I'm pretty sure introducing a new cache definition won't do any harm. > > > But what if we change a cache type from Distributed to Replicated? 
Do > > > you have any idea which configuration changes are safe and which are > > > not? Could come up with such list? > > Very few changes are safe, but obviously this would need to be verified > > on a per-attribute basis. All of the attributes which can be changed at > > runtime (timeouts, eviction size) are safe. > > > > > > > > #2 - How to prevent loosing data during the rolling update process? > > I believe you want to write losing :) > Good one :) > > > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can > > > invoke a script during container startup/shutdown). The problem with > > > shutdown script is that it's time constrained (if it won't end up > > > within certain amount of time, Kubernetes will simply kill the > > > container). Fortunately this time is configurable. > > > > > > The idea to prevent from loosing data would be to invoke (enquque and > > > wait for finish) state transfer process triggered by the shutdown hook > > > (with timeout set to maximum value). If for some reason this won't > > > work (e.g. a user has so much data that migrating it this way would > > > take ages), there is a backup plan - Infinispan Rolling Upgrades [4]. > > The thing that concerns me here is the amount of churn involved: the > > safest bet for us is that the net topology doesn't change, i.e. you end > > up with the exact number of nodes you started with > Yes, Kubernetes Rolling Update works this way. The number of nodes at the end of the process is equal to the number you started with. > and they are replaced > > one by one in a way that the replacement assumes the identity of the > > replaced (both as persistent uuid, owned segments and data in a > > persistent store). > Other types could be supported but they will > > definitely have a level of risk. > > Also we don't have any guarantees that a newer version will be able to > > cluster with an older one... > I'm not sure we can ensure the same identity of the replaced node. If we consider configuration changes, a user can change anything... I think I'm convinced that the Infinispan Rolling Upgrade procedure is the only proper solution at this stage. Other ways (although much simpler) must be treated as - 'do it at your own risk'. > > > > Tristan > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160719/28a97dc9/attachment.html From sanne at infinispan.org Tue Jul 19 07:06:55 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 19 Jul 2016 12:06:55 +0100 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes In-Reply-To: References: <578CF1C6.8090106@infinispan.org> <20160719080858.GJ69003@hibernate.org> Message-ID: Just wondering, we're looking into how to compare configuration settings just to try validate the user isn't attempting an insane upgrade right? Or is there an actual need to want to change Infinispan version AND switch the essential grid configuration settings at the same time? (that seems quite insane to me, and unnecessary as one could do it in two steps..) 
Thanks On 19 July 2016 at 10:06, Sebastian Laskawiec wrote: > Hey Tristan, Emmanuel! > > Comments inlined. > > Thanks > Sebastian > > On Tue, Jul 19, 2016 at 10:08 AM, Emmanuel Bernard > wrote: >> >> Considering very few options can be changed at runtime safely, should we >> rather focus of a strategy where we start a new grid and populate it >> with the old grid before flipping the proxy to the new one? > > > +1, that's exactly what the Infinispan Rolling Upgrade does. > >> >> >> On Mon 2016-07-18 17:12, Tristan Tarrant wrote: >> > On 14/07/16 12:17, Sebastian Laskawiec wrote: >> > > Hey! >> > > >> > > I've been thinking about potential use of Kubernetes/OpenShift >> > > (OpenShift = Kubernetes + additional features) Rolling Update >> > > mechanism for updating configuration of Hot Rod servers. You might >> > > find some more information about the rolling updates here [1][2] but >> > > putting it simply, Kubernetes replaces nodes in the cluster one at a >> > > time. What's worth mentioning, Kubernetes ensures that the newly >> > > created replica is fully operational before taking down another one. >> > > >> > > There are two things that make me scratching my head... >> > > >> > > #1 - What type of configuration changes can we introduce using rolling >> > > updates? >> > > >> > > I'm pretty sure introducing a new cache definition won't do any harm. >> > > But what if we change a cache type from Distributed to Replicated? Do >> > > you have any idea which configuration changes are safe and which are >> > > not? Could come up with such list? >> > Very few changes are safe, but obviously this would need to be verified >> > on a per-attribute basis. All of the attributes which can be changed at >> > runtime (timeouts, eviction size) are safe. >> > >> > > >> > > #2 - How to prevent loosing data during the rolling update process? >> > I believe you want to write losing :) > > > Good one :) > >> >> > > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can >> > > invoke a script during container startup/shutdown). The problem with >> > > shutdown script is that it's time constrained (if it won't end up >> > > within certain amount of time, Kubernetes will simply kill the >> > > container). Fortunately this time is configurable. >> > > >> > > The idea to prevent from loosing data would be to invoke (enquque and >> > > wait for finish) state transfer process triggered by the shutdown hook >> > > (with timeout set to maximum value). If for some reason this won't >> > > work (e.g. a user has so much data that migrating it this way would >> > > take ages), there is a backup plan - Infinispan Rolling Upgrades [4]. >> > The thing that concerns me here is the amount of churn involved: the >> > safest bet for us is that the net topology doesn't change, i.e. you end >> > up with the exact number of nodes you started with > > > Yes, Kubernetes Rolling Update works this way. The number of nodes at the > end of the process is equal to the number you started with. > >> >> and they are replaced >> > one by one in a way that the replacement assumes the identity of the >> > replaced (both as persistent uuid, owned segments and data in a >> > persistent store). >> >> Other types could be supported but they will >> > definitely have a level of risk. >> > Also we don't have any guarantees that a newer version will be able to >> > cluster with an older one... > > > I'm not sure we can ensure the same identity of the replaced node. If we > consider configuration changes, a user can change anything... 
> > I think I'm convinced that the Infinispan Rolling Upgrade procedure is the > only proper solution at this stage. Other ways (although much simpler) must > be treated as - 'do it at your own risk'. > >> >> > >> > Tristan >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Tue Jul 19 23:01:56 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 20 Jul 2016 05:01:56 +0200 Subject: [infinispan-dev] Kubernetes/OpenShift Rolling updates and configuration changes In-Reply-To: References: <578CF1C6.8090106@infinispan.org> <20160719080858.GJ69003@hibernate.org> Message-ID: Hey Sanne! I would treat both those things (upgrading Infinispan from version X to Y and changing configuration) separate. Thanks Sebastian On Tue, Jul 19, 2016 at 1:06 PM, Sanne Grinovero wrote: > Just wondering, we're looking into how to compare configuration > settings just to try validate the user isn't attempting an insane > upgrade right? > > Or is there an actual need to want to change Infinispan version AND > switch the essential grid configuration settings at the same time? > (that seems quite insane to me, and unnecessary as one could do it in > two steps..) > > Thanks > > > On 19 July 2016 at 10:06, Sebastian Laskawiec wrote: > > Hey Tristan, Emmanuel! > > > > Comments inlined. > > > > Thanks > > Sebastian > > > > On Tue, Jul 19, 2016 at 10:08 AM, Emmanuel Bernard < > emmanuel at hibernate.org> > > wrote: > >> > >> Considering very few options can be changed at runtime safely, should we > >> rather focus of a strategy where we start a new grid and populate it > >> with the old grid before flipping the proxy to the new one? > > > > > > +1, that's exactly what the Infinispan Rolling Upgrade does. > > > >> > >> > >> On Mon 2016-07-18 17:12, Tristan Tarrant wrote: > >> > On 14/07/16 12:17, Sebastian Laskawiec wrote: > >> > > Hey! > >> > > > >> > > I've been thinking about potential use of Kubernetes/OpenShift > >> > > (OpenShift = Kubernetes + additional features) Rolling Update > >> > > mechanism for updating configuration of Hot Rod servers. You might > >> > > find some more information about the rolling updates here [1][2] but > >> > > putting it simply, Kubernetes replaces nodes in the cluster one at a > >> > > time. What's worth mentioning, Kubernetes ensures that the newly > >> > > created replica is fully operational before taking down another one. > >> > > > >> > > There are two things that make me scratching my head... > >> > > > >> > > #1 - What type of configuration changes can we introduce using > rolling > >> > > updates? > >> > > > >> > > I'm pretty sure introducing a new cache definition won't do any > harm. > >> > > But what if we change a cache type from Distributed to Replicated? > Do > >> > > you have any idea which configuration changes are safe and which are > >> > > not? Could come up with such list? > >> > Very few changes are safe, but obviously this would need to be > verified > >> > on a per-attribute basis. 
All of the attributes which can be changed > at > >> > runtime (timeouts, eviction size) are safe. > >> > > >> > > > >> > > #2 - How to prevent loosing data during the rolling update process? > >> > I believe you want to write losing :) > > > > > > Good one :) > > > >> > >> > > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can > >> > > invoke a script during container startup/shutdown). The problem with > >> > > shutdown script is that it's time constrained (if it won't end up > >> > > within certain amount of time, Kubernetes will simply kill the > >> > > container). Fortunately this time is configurable. > >> > > > >> > > The idea to prevent from loosing data would be to invoke (enquque > and > >> > > wait for finish) state transfer process triggered by the shutdown > hook > >> > > (with timeout set to maximum value). If for some reason this won't > >> > > work (e.g. a user has so much data that migrating it this way would > >> > > take ages), there is a backup plan - Infinispan Rolling Upgrades > [4]. > >> > The thing that concerns me here is the amount of churn involved: the > >> > safest bet for us is that the net topology doesn't change, i.e. you > end > >> > up with the exact number of nodes you started with > > > > > > Yes, Kubernetes Rolling Update works this way. The number of nodes at the > > end of the process is equal to the number you started with. > > > >> > >> and they are replaced > >> > one by one in a way that the replacement assumes the identity of the > >> > replaced (both as persistent uuid, owned segments and data in a > >> > persistent store). > >> > >> Other types could be supported but they will > >> > definitely have a level of risk. > >> > Also we don't have any guarantees that a newer version will be able to > >> > cluster with an older one... > > > > > > I'm not sure we can ensure the same identity of the replaced node. If we > > consider configuration changes, a user can change anything... > > > > I think I'm convinced that the Infinispan Rolling Upgrade procedure is > the > > only proper solution at this stage. Other ways (although much simpler) > must > > be treated as - 'do it at your own risk'. > > > >> > >> > > >> > Tristan > >> > _______________________________________________ > >> > infinispan-dev mailing list > >> > infinispan-dev at lists.jboss.org > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160720/1b2b8b68/attachment-0001.html From rory.odonnell at oracle.com Fri Jul 22 05:17:02 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Fri, 22 Jul 2016 10:17:02 +0100 Subject: [infinispan-dev] Early Access builds of JDK 8u112 b03, JDK 9 b128 are available on java.net Message-ID: Hi Galder, Early Access b128 for JDK 9 is available on java.net, summary of changes are listed here . 
Early Access b127 (#5304) for JDK 9 with Project Jigsaw is available on java.net, summary of changes are listed here. Early Access b03 for JDK 8u112 is available on java.net, summary of changes are listed here. Alan Bateman posted that the new EA builds contain an initial implementation of the current proposals; more info [0]: The jigsaw/jake forest has been updated with an initial implementation of the proposals that Mark brought to the jpms-spec-experts mailing list last week. For those that don't build from source, the EA build/downloads [1] have also been refreshed. Rgds, Rory [0] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-July/008467.html [1] https://jdk9.java.net/jigsaw/ -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160722/235cd29d/attachment.html

From galder at redhat.com Mon Jul 25 09:28:41 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Jul 2016 15:28:41 +0200 Subject: [infinispan-dev] Compatibility 2.0 dump In-Reply-To: <5768F177.9060307@redhat.com> References: <5768F177.9060307@redhat.com> Message-ID: <50F8D520-2D3B-4575-869D-B8CE19C9B568@redhat.com> Hi Tristan, Thanks for writing up that document; one (long) comment:

I think the wiki is not very clear on the difference between type and content type. It often uses the word type when it really means content type. IOW, a cache can store instances of a single type (e.g. Person) or it can mix it up by storing multiple types (e.g. Person, Car...etc), but that's not the same as the content type (e.g. JSON or XML). In theory, you could have:

1. A single type and a single content type, e.g. Person instances, stored as JSON.
^ This option makes the most sense to me. We should strive to promote this.

2. A single type and multiple content types, e.g. Person instances, some stored as JSON, some as Protobuf binary.
^ Is this realistic? I'd imagine that even if you have different input devices, you'd try to find the format that's the common denominator. However, maybe due to lack of capabilities or performance reasons, you might decide to store differently? Also, for querying to work, you'd have to be able to index different source types.

3. Multiple types and a single content type, e.g. Person and Car instances, all stored as JSON.
^ We've had discussions about this in the past: in general I'm in favour of a single type per cache. However, there are some limitations to such a set up, e.g. queries can only execute against 1 cache (AFAIK, unless this limitation has changed?). So, such limitations can force users to add multiple types in the same cache.

4. Multiple types and multiple content types, e.g. Person and Car instances, Persons stored as JSON, Cars stored as XML.
^ If you are inclined to store multiple types in the same cache, this could certainly happen.

Thoughts? Cheers, -- Galder Zamarreño Infinispan, Red Hat

> On 21 Jun 2016, at 09:49, Tristan Tarrant wrote: > > Hi all, > > I've created a wiki [1] for the "compatibility 2.0" ideas we talked > about recently at the query meeting. > > This is mostly a dump of the minutes, so the form is not complete, but > initial comments are welcome.
> > > Tristan > > [1] https://github.com/infinispan/infinispan/wiki/Compatibility-2.0 > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon Jul 25 10:03:12 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Jul 2016 16:03:12 +0200 Subject: [infinispan-dev] Compatiibility 2.0 dump In-Reply-To: <576AB255.7060202@redhat.com> References: <5768F177.9060307@redhat.com> <576AB255.7060202@redhat.com> Message-ID: <1DA0CA01-4903-4A7B-AC75-EBBC80A0FA63@redhat.com> > On 22 Jun 2016, at 17:44, Radim Vansa wrote: > > I've spotted things like 'decorating cache' in the wiki page. I though > that the core architecture in Infinispan, modifying the behavior > according to configurations, is the interceptor stack. While we have > some doubts about its performance, and there are limitations - e.g. the > Flags don't allow to add custom parameters and we certainly don't want > to add Flag.JSON and Flag.XML - I would consider decorating a Cache V> vs. adding interceptors. > > I am thinking of adding the transcoder information to invocation context > and only pass different ICF to the CacheImpl. Though, this requires new > factory, new interceptor and a handful of specialized context classes > (or a wrapper to the existing ones). Whoo, just decorating Cache sounds > much simpler (and probably more performant). Or should we have forks in > interceptor stack? (as an alternative to different wrappers). > > The idea of interceptors is that these are common for all operations, if > we want to do things differently for different endpoints (incl. > embedded), decorating probably is the answer. > > My 2c (or rather just random thoughts and whining) Will and I had a good discussion on the problems that interceptors had with data conversion layers while at Summit/DevNation. I don't remember the details very well, that's what 3 weeks of holiday does to you ;), but Will will reply with some more details. From what I remember, doing conversion in interceptors made did not fully work with streams and custom filters. Cheers, > > Radim > > On 06/21/2016 09:49 AM, Tristan Tarrant wrote: >> Hi all, >> >> I've created a wiki [1] for the "compatibility 2.0" ideas we talked >> about recently at the query meeting. >> >> This is mostly a dump of the minutes, so the form is not complete, but >> initial comments are welcome. 
>> >> >> Tristan >> >> [1] https://github.com/infinispan/infinispan/wiki/Compatibility-2.0 > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Jul 25 10:44:11 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Jul 2016 16:44:11 +0200 Subject: [infinispan-dev] Weekly meeting IRC logs 2016-07-25 Message-ID: Hi all, the weekly meeting logs are available at http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2016/infinispan.2016-07-25-14.00.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Tue Jul 26 01:10:16 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 07:10:16 +0200 Subject: [infinispan-dev] Health check use cases Message-ID: Dear Community, I'd like to ask you for help. I'm currently sketching a design for a REST health check endpoint for Infinispan and I'm trying to imagine possible use cases. Could you please give me a hand and tell me what functionalities are important for you? Would you like to be able to check status per-cache or maybe a red (not healthy), green (healthy), yellow (healthy, rebalance in progress) cluster status is sufficient? What kind of information do you expect to be there? Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/85228e9c/attachment.html From ancosen1985 at yahoo.com Tue Jul 26 02:25:08 2016 From: ancosen1985 at yahoo.com (Andrea Cosentino) Date: Tue, 26 Jul 2016 06:25:08 +0000 (UTC) Subject: [infinispan-dev] Health check use cases In-Reply-To: References: Message-ID: <1804400625.5531134.1469514308826.JavaMail.yahoo@mail.yahoo.com> Hi Sebastian, This type of feature can be very useful for liveness and readiness probes in a Kubernetes cluster [1]. Maybe you can think at check status per-cache but also at whole server level. [1]?http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks Thanks!?--Andrea Cosentino?----------------------------------Apache Camel PMC MemberApache Karaf CommitterApache Servicemix CommitterEmail: ancosen1985 at yahoo.comTwitter: @oscerd2Github: oscerd On Tuesday, July 26, 2016 7:11 AM, Sebastian Laskawiec wrote: Dear Community, I'd like to ask you for help. I'm currently sketching a design for a REST health check endpoint for Infinispan and I'm trying to imagine possible use cases.? Could you please give me a hand and tell me what functionalities are important for you? Would you like to be able to check status per-cache or maybe a red (not healthy), green (healthy), yellow (healthy, rebalance in progress) cluster status is sufficient? What kind of information do you expect to be there? ThanksSebastian _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/a33d1dd1/attachment-0001.html From vjuranek at redhat.com Tue Jul 26 03:00:55 2016 From: vjuranek at redhat.com (Vojtech Juranek) Date: Tue, 26 Jul 2016 09:00:55 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: Message-ID: <4917598.f5xnHPsQzL@localhost.localdomain> On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: > I'm currently sketching a design for a REST > health check endpoint for Infinispan if it's not too broad, I'd include also various information about the cluster - e.g. number of machines in the cluster, recent exceptions in the log (or dump of N lines of log) etc. If would be useful at least for testing purposes so that we won't have to gather various information via JMX and CLI -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/f52c5c94/attachment.bin From ttarrant at redhat.com Tue Jul 26 03:50:28 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 26 Jul 2016 09:50:28 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: Message-ID: On 26/07/16 07:10, Sebastian Laskawiec wrote: > Dear Community, > > I'd like to ask you for help. I'm currently sketching a design for a > REST health check endpoint for Infinispan and I'm trying to imagine > possible use cases. The health-check should be implemented as an MBean initially, with the ability to expose it via alternative implementations later. The server RESTful endpoint should be registered with the management interface via a special handler. A cache and cachemanager's health is determined by a combination of parameters and we probably should allow for a user-pluggable checker. We already expose a number of statuses already, although obviously this would be an aggregate. > > Could you please give me a hand and tell me what functionalities are > important for you? Would you like to be able to check status per-cache > or maybe a red (not healthy), green (healthy), yellow (healthy, > rebalance in progress) cluster status is sufficient? What kind of > information do you expect to be there? I wouldn't want this to be overly complex: a simple OK, KO should be sufficient. Additional detail may be optionally present, but not a requirement. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Tue Jul 26 04:02:43 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 10:02:43 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: <1804400625.5531134.1469514308826.JavaMail.yahoo@mail.yahoo.com> References: <1804400625.5531134.1469514308826.JavaMail.yahoo@mail.yahoo.com> Message-ID: Hey Andrea! Exactly! One of the most important use cases is Kubernetes. I also absolutely agree - per cache and per cache manager level sounds reasonable. Thanks Sebastian On Tue, Jul 26, 2016 at 8:25 AM, Andrea Cosentino wrote: > Hi Sebastian, > > This type of feature can be very useful for liveness and readiness probes > in a Kubernetes cluster [1]. > > Maybe you can think at check status per-cache but also at whole server > level. > > [1] > http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks > > Thanks! 
> > -- > Andrea Cosentino > ---------------------------------- > Apache Camel PMC Member > Apache Karaf Committer > Apache Servicemix Committer > Email: ancosen1985 at yahoo.com > Twitter: @oscerd2 > Github: oscerd > > > On Tuesday, July 26, 2016 7:11 AM, Sebastian Laskawiec < > slaskawi at redhat.com> wrote: > > > Dear Community, > > I'd like to ask you for help. I'm currently sketching a design for a REST > health check endpoint for Infinispan and I'm trying to imagine possible use > cases. > > Could you please give me a hand and tell me what functionalities are > important for you? Would you like to be able to check status per-cache or > maybe a red (not healthy), green (healthy), yellow (healthy, rebalance in > progress) cluster status is sufficient? What kind of information do you > expect to be there? > > Thanks > Sebastian > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/0e6f48a5/attachment.html From slaskawi at redhat.com Tue Jul 26 04:06:21 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 10:06:21 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: <4917598.f5xnHPsQzL@localhost.localdomain> References: <4917598.f5xnHPsQzL@localhost.localdomain> Message-ID: Hey Vojtech! JMX and CLI integration sounds very interesting. I also like the idea of exposing log and exception dump. Thanks a lot for the input! Sebastian On Tue, Jul 26, 2016 at 9:00 AM, Vojtech Juranek wrote: > On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: > > I'm currently sketching a design for a REST > > health check endpoint for Infinispan > > if it's not too broad, I'd include also various information about the > cluster > - e.g. number of machines in the cluster, recent exceptions in the log (or > dump of N lines of log) etc. If would be useful at least for testing > purposes > so that we won't have to gather various information via JMX and CLI > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/af3c4328/attachment.html From wfink at redhat.com Tue Jul 26 04:21:06 2016 From: wfink at redhat.com (Wolf Fink) Date: Tue, 26 Jul 2016 10:21:06 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <4917598.f5xnHPsQzL@localhost.localdomain> Message-ID: Do we expose historical data for the cluster view. Often it is important to see whether there are view changes, rebalancing and unexpected leave/merge events where nodes are kicked by JGroups. Having special entries for controlled view change and sudden view changes might be good On Tue, Jul 26, 2016 at 10:06 AM, Sebastian Laskawiec wrote: > Hey Vojtech! > > JMX and CLI integration sounds very interesting. I also like the idea of > exposing log and exception dump. > > Thanks a lot for the input! 
> Sebastian > > On Tue, Jul 26, 2016 at 9:00 AM, Vojtech Juranek > wrote: > >> On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: >> > I'm currently sketching a design for a REST >> > health check endpoint for Infinispan >> >> if it's not too broad, I'd include also various information about the >> cluster >> - e.g. number of machines in the cluster, recent exceptions in the log (or >> dump of N lines of log) etc. If would be useful at least for testing >> purposes >> so that we won't have to gather various information via JMX and CLI >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/515b1ddf/attachment.html From slaskawi at redhat.com Tue Jul 26 04:24:59 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 10:24:59 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: Message-ID: Hey Tristan! Comments inlined. Thanks Sebastian On Tue, Jul 26, 2016 at 9:50 AM, Tristan Tarrant wrote: > On 26/07/16 07:10, Sebastian Laskawiec wrote: > > Dear Community, > > > > I'd like to ask you for help. I'm currently sketching a design for a > > REST health check endpoint for Infinispan and I'm trying to imagine > > possible use cases. > The health-check should be implemented as an MBean initially, with the > ability to expose it via alternative implementations later. The server > RESTful endpoint should be registered with the management interface via > a special handler. > Yes, I think it's a good idea. We could even use tools like Jolokia [1] to expose MBeans through REST interface (it can be added to standalone.conf to the bootstrap classpath). Alternatively we could use JDK embedded HTTP Server [2]. The only restriction that comes into my mind is that we shouldn't allow duplicated domains when exposing MBeans through REST interface. Otherwise it will be very hard to construct proper paths for the endpoint. [1] https://jolokia.org/ [2] https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver/spec/com/sun/net/httpserver/HttpServer.html > A cache and cachemanager's health is determined by a combination of > parameters and we probably should allow for a user-pluggable checker. We > already expose a number of statuses already, although obviously this > would be an aggregate. > Could you please elaborate more on that? How do we expose this information? Are you referring to Infinispan Stats [3]? I also though about supporting queries somehow. An imaginary example from the top of my head could look like the following: http://$ISPN/_health?cluster.nodes=3&MyCacheManager.MyCache.status=DEGRADED //<-- This would return 200 OK if we have 3 nodes and given cache is in degraded mode. http://$ISPN/_health?cluster.nodes=3&MyCacheManager.rebalance=IN_PROGRESS //<-- Checks if we have 3 nodes and rebalance is in proress [3] https://github.com/infinispan/infinispan/tree/8.2.x/core/src/main/java/org/infinispan/stats > > > > > Could you please give me a hand and tell me what functionalities are > > important for you? 
Would you like to be able to check status per-cache > > or maybe a red (not healthy), green (healthy), yellow (healthy, > > rebalance in progress) cluster status is sufficient? What kind of > > information do you expect to be there? > I wouldn't want this to be overly complex: a simple OK, KO should be > sufficient. Additional detail may be optionally present, but not a > requirement. > I think we will need at least a 3rd state - yellow or something like this. This would mean that a rebalance is in progress of a node is joining/leaving. In other words - the cluster accepts requests but don't touch the nodes! > > > Tristan > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/2575f3e8/attachment-0001.html From slaskawi at redhat.com Tue Jul 26 04:31:21 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 10:31:21 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <4917598.f5xnHPsQzL@localhost.localdomain> Message-ID: Hey Wolf! Technically it's possible but I'm not sure if we should do this. I think this is a responsibility of monitoring tools (e.g. Splunk, Kibana or even Zabbix). Thanks Sebastian On Tue, Jul 26, 2016 at 10:21 AM, Wolf Fink wrote: > Do we expose historical data for the cluster view. Often it is important > to see whether there are view changes, rebalancing and unexpected > leave/merge events where nodes are kicked by JGroups. > Having special entries for controlled view change and sudden view changes > might be good > > On Tue, Jul 26, 2016 at 10:06 AM, Sebastian Laskawiec > wrote: > >> Hey Vojtech! >> >> JMX and CLI integration sounds very interesting. I also like the idea of >> exposing log and exception dump. >> >> Thanks a lot for the input! >> Sebastian >> >> On Tue, Jul 26, 2016 at 9:00 AM, Vojtech Juranek >> wrote: >> >>> On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: >>> > I'm currently sketching a design for a REST >>> > health check endpoint for Infinispan >>> >>> if it's not too broad, I'd include also various information about the >>> cluster >>> - e.g. number of machines in the cluster, recent exceptions in the log >>> (or >>> dump of N lines of log) etc. If would be useful at least for testing >>> purposes >>> so that we won't have to gather various information via JMX and CLI >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/ab2d4c76/attachment.html From ttarrant at redhat.com Tue Jul 26 04:34:43 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 26 Jul 2016 10:34:43 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: Message-ID: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> On 26/07/16 10:24, Sebastian Laskawiec wrote: > Hey Tristan! > > Comments inlined. > > Thanks > Sebastian > > On Tue, Jul 26, 2016 at 9:50 AM, Tristan Tarrant > wrote: > > On 26/07/16 07:10, Sebastian Laskawiec wrote: > > Dear Community, > > > > I'd like to ask you for help. I'm currently sketching a design for a > > REST health check endpoint for Infinispan and I'm trying to imagine > > possible use cases. > The health-check should be implemented as an MBean initially, with the > ability to expose it via alternative implementations later. The server > RESTful endpoint should be registered with the management > interface via > a special handler. > > > Yes, I think it's a good idea. We could even use tools like Jolokia > [1] to expose MBeans through REST interface (it can be added to > standalone.conf to the bootstrap classpath). Alternatively we could > use JDK embedded HTTP Server [2]. No, for server we would not use Jolokia but rely on the management HTTP server (the one that handles port 9990 already). > > A cache and cachemanager's health is determined by a combination of > parameters and we probably should allow for a user-pluggable > checker. We > already expose a number of statuses already, although obviously this > would be an aggregate. > > > Could you please elaborate more on that? How do we expose this > information? Are you referring to Infinispan Stats [3]? > > I also though about supporting queries somehow. An imaginary example > from the top of my head could look like the following: > > http://$ISPN/_health?cluster.nodes=3&MyCacheManager.MyCache.status=DEGRADED > //<-- This would return 200 OK if we have 3 nodes and given cache is > in degraded mode. > http://$ISPN/_health?cluster.nodes=3&MyCacheManager.rebalance=IN_PROGRESS > //<-- Checks if we have 3 nodes and rebalance is in proress > > [3] > https://github.com/infinispan/infinispan/tree/8.2.x/core/src/main/java/org/infinispan/stats > > > > > > Could you please give me a hand and tell me what functionalities are > > important for you? Would you like to be able to check status > per-cache > > or maybe a red (not healthy), green (healthy), yellow (healthy, > > rebalance in progress) cluster status is sufficient? What kind of > > information do you expect to be there? > I wouldn't want this to be overly complex: a simple OK, KO should be > sufficient. Additional detail may be optionally present, but not a > requirement. > > > I think we will need at least a 3rd state - yellow or something like > this. This would mean that a rebalance is in progress of a node is > joining/leaving. In other words - the cluster accepts requests but > don't touch the nodes! Agreed. Tristan From slaskawi at redhat.com Tue Jul 26 04:38:57 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 26 Jul 2016 10:38:57 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> References: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> Message-ID: Just for clarification - if we used the management HTTP server - would it be possible to expose health endpoints in Library mode? 
I think the library use case might be also very important. On Tue, Jul 26, 2016 at 10:34 AM, Tristan Tarrant wrote: > On 26/07/16 10:24, Sebastian Laskawiec wrote: > > Hey Tristan! > > > > Comments inlined. > > > > Thanks > > Sebastian > > > > On Tue, Jul 26, 2016 at 9:50 AM, Tristan Tarrant > > wrote: > > > > On 26/07/16 07:10, Sebastian Laskawiec wrote: > > > Dear Community, > > > > > > I'd like to ask you for help. I'm currently sketching a design for > a > > > REST health check endpoint for Infinispan and I'm trying to imagine > > > possible use cases. > > The health-check should be implemented as an MBean initially, with > the > > ability to expose it via alternative implementations later. The > server > > RESTful endpoint should be registered with the management > > interface via > > a special handler. > > > > > > Yes, I think it's a good idea. We could even use tools like Jolokia > > [1] to expose MBeans through REST interface (it can be added to > > standalone.conf to the bootstrap classpath). Alternatively we could > > use JDK embedded HTTP Server [2]. > > No, for server we would not use Jolokia but rely on the management HTTP > server (the one that handles port 9990 already). > > > > > A cache and cachemanager's health is determined by a combination of > > parameters and we probably should allow for a user-pluggable > > checker. We > > already expose a number of statuses already, although obviously this > > would be an aggregate. > > > > > > Could you please elaborate more on that? How do we expose this > > information? Are you referring to Infinispan Stats [3]? > > > > I also though about supporting queries somehow. An imaginary example > > from the top of my head could look like the following: > > > > http:// > $ISPN/_health?cluster.nodes=3&MyCacheManager.MyCache.status=DEGRADED > > //<-- This would return 200 OK if we have 3 nodes and given cache is > > in degraded mode. > > http:// > $ISPN/_health?cluster.nodes=3&MyCacheManager.rebalance=IN_PROGRESS > > //<-- Checks if we have 3 nodes and rebalance is in proress > > > > [3] > > > https://github.com/infinispan/infinispan/tree/8.2.x/core/src/main/java/org/infinispan/stats > > > > > > > > > > Could you please give me a hand and tell me what functionalities > are > > > important for you? Would you like to be able to check status > > per-cache > > > or maybe a red (not healthy), green (healthy), yellow (healthy, > > > rebalance in progress) cluster status is sufficient? What kind of > > > information do you expect to be there? > > I wouldn't want this to be overly complex: a simple OK, KO should be > > sufficient. Additional detail may be optionally present, but not a > > requirement. > > > > > > I think we will need at least a 3rd state - yellow or something like > > this. This would mean that a rebalance is in progress of a node is > > joining/leaving. In other words - the cluster accepts requests but > > don't touch the nodes! > Agreed. > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160726/1c854e5c/attachment.html From ttarrant at redhat.com Tue Jul 26 04:40:25 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 26 Jul 2016 10:40:25 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <4917598.f5xnHPsQzL@localhost.localdomain> Message-ID: <90e400ba-4a30-03d9-1ca9-193df679c1f3@infinispan.org> This, and complex monitoring rules, will be handled by the Hawkular integration. That is why for the use-case Sebastian is considering we need to keep it as simple and as linear as possible. We code datagrids, not monitoring tools :) Tristan On 26/07/16 10:31, Sebastian Laskawiec wrote: > Hey Wolf! > > Technically it's possible but I'm not sure if we should do this. I > think this is a responsibility of monitoring tools (e.g. Splunk, > Kibana or even Zabbix). > > Thanks > Sebastian > > On Tue, Jul 26, 2016 at 10:21 AM, Wolf Fink > wrote: > > Do we expose historical data for the cluster view. Often it is > important to see whether there are view changes, rebalancing and > unexpected leave/merge events where nodes are kicked by JGroups. > Having special entries for controlled view change and sudden view > changes might be good > > On Tue, Jul 26, 2016 at 10:06 AM, Sebastian Laskawiec > > wrote: > > Hey Vojtech! > > JMX and CLI integration sounds very interesting. I also like > the idea of exposing log and exception dump. > > Thanks a lot for the input! > Sebastian > > On Tue, Jul 26, 2016 at 9:00 AM, Vojtech Juranek > > wrote: > > On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: > > I'm currently sketching a design for a REST > > health check endpoint for Infinispan > > if it's not too broad, I'd include also various > information about the cluster > - e.g. number of machines in the cluster, recent > exceptions in the log (or > dump of N lines of log) etc. If would be useful at least > for testing purposes > so that we won't have to gather various information via > JMX and CLI > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue Jul 26 04:42:47 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 26 Jul 2016 10:42:47 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> Message-ID: On 26/07/16 10:38, Sebastian Laskawiec wrote: > Just for clarification - if we used the management HTTP server - would > it be possible to expose health endpoints in Library mode? I think the > library use case might be also very important. > Library mode can be plain JMX, which is the de-facto standard for Java application monitoring. It should be up to the environment setup to tunnel that to whatever is needed. However, we do plan to support Jolokia for interfacing the management console for embedded uses. 
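To make that concrete, a rough sketch of what a library-mode check could look like over plain JMX today; the object name and attribute names below depend on the configured jmxDomain and cache manager name and may differ between versions, so treat them as placeholders rather than a definitive API:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Runs inside the same JVM as the embedded cache manager (library mode);
    // a remote monitor would use a JMXConnector instead of the platform server.
    public class SimpleJmxHealthCheck {

        public static boolean isHealthy() throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Placeholder object name: adjust jmxDomain and cache manager name to your setup
            ObjectName cm = new ObjectName(
                "org.infinispan:type=CacheManager,name=\"DefaultCacheManager\",component=CacheManager");
            // Attribute names may differ between versions; check with jconsole first
            String status = String.valueOf(mbs.getAttribute(cm, "CacheManagerStatus"));
            int clusterSize = ((Number) mbs.getAttribute(cm, "ClusterSize")).intValue();
            // Collapse whatever detail is exposed into a coarse OK / KO style answer
            return "RUNNING".equals(status) && clusterSize > 0;
        }
    }

A liveness/readiness probe (or a Jolokia query, once that is wired in) could wrap something like this and map the OK/KO outcome to an exit code or HTTP status.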
Tristan From bban at redhat.com Tue Jul 26 04:51:15 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 26 Jul 2016 10:51:15 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <4917598.f5xnHPsQzL@localhost.localdomain> Message-ID: <57972483.9080404@redhat.com> Note that JGroups diagnostics (probe) exposes operations and attributes via a simple TCP- or UDP-based text protocol. If you guys come up with a JSON based format, it should be easy to create a REST endpoint that implements that format. On 26/07/16 10:21, Wolf Fink wrote: > Do we expose historical data for the cluster view. Often it is important > to see whether there are view changes, rebalancing and unexpected > leave/merge events where nodes are kicked by JGroups. > Having special entries for controlled view change and sudden view > changes might be good > > On Tue, Jul 26, 2016 at 10:06 AM, Sebastian Laskawiec > > wrote: > > Hey Vojtech! > > JMX and CLI integration sounds very interesting. I also like the > idea of exposing log and exception dump. > > Thanks a lot for the input! > Sebastian > > On Tue, Jul 26, 2016 at 9:00 AM, Vojtech Juranek > > wrote: > > On Tuesday 26 July 2016 07:10:16 Sebastian Laskawiec wrote: > > I'm currently sketching a design for a REST > > health check endpoint for Infinispan > > if it's not too broad, I'd include also various information > about the cluster > - e.g. number of machines in the cluster, recent exceptions in > the log (or > dump of N lines of log) etc. If would be useful at least for > testing purposes > so that we won't have to gather various information via JMX and CLI > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From bban at redhat.com Tue Jul 26 07:46:18 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 26 Jul 2016 13:46:18 +0200 Subject: [infinispan-dev] Fwd: [jgroups-users] Event.MSG and JGRP-2067 In-Reply-To: <57974D2F.50701@yahoo.com> References: <57974D2F.50701@yahoo.com> Message-ID: <57974D8A.9070205@redhat.com> FYI -------- Forwarded Message -------- Subject: [jgroups-users] Event.MSG and JGRP-2067 Date: Tue, 26 Jul 2016 13:44:47 +0200 From: Questions/problems related to using JGroups Reply-To: javagroups-users at lists.sourceforge.net To: jg-users so far, all messages to be sent and all received messages have always been wrapped in an Event, e.g. when calling JChannel.send(Message msg): Event evt=new Event(Event.MSG, msg); channel.down(evt); This caused the creation of an Event instance for every sent and received message. In [1], I changed this and added 2 methods to Protocol: public Object down(Message msg); public Object up(Message msg) These callbacks are now called instead of down(Event) and up(Event) whenever a message is sent or received. Since messages make up 99.9% of all traffic up and down a stack, this change should reduce the memory allocation rate even more, although Event instances are very short-lived and usually die in eden. 
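To make the migration concrete for protocol authors, here is a minimal sketch of a custom protocol written against the new callbacks; the counting logic is just an invented example, only the Protocol/Message/Event types and the up_prot/down_prot fields are actual JGroups API:

    import org.jgroups.Event;
    import org.jgroups.Message;
    import org.jgroups.stack.Protocol;

    // Example-only protocol that counts messages travelling through the stack
    public class EXAMPLE_COUNTER extends Protocol {
        private long sent, received;

        // 4.0 style: messages arrive through the dedicated Message callbacks,
        // no Event wrapper is allocated for them any more
        @Override
        public Object down(Message msg) {
            sent++;
            return down_prot.down(msg);
        }

        @Override
        public Object up(Message msg) {
            received++;
            return up_prot.up(msg);
        }

        // Everything that is not a message still arrives as an Event
        @Override
        public Object up(Event evt) {
            return up_prot.up(evt);
        }
    }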
The downside is that this breaks code and devs who've handled messages and events in the same method (up(Event) / down(Event)) now have to break out the message handling code into separate methods (up(Message) / down(Message)).

This change is quite big (111 files changed, 2552 insertions(+), 2796 deletions(-)), but only affects protocol developers (and devs who implement UpHandler directly).

This is for 4.0; 3.6.x is unaffected. Let me know (via the mailing list) if you encounter any problems.

Cheers,

[1] https://issues.jboss.org/browse/JGRP-2067

--
Bela Ban, JGroups lead (http://www.jgroups.org)

_______________________________________________
javagroups-users mailing list
javagroups-users at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users

From slaskawi at redhat.com Wed Jul 27 02:38:41 2016
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Wed, 27 Jul 2016 08:38:41 +0200
Subject: [infinispan-dev] Uber client, which means ALPN investigation
Message-ID:

Hey guys!

Recently I've been looking into ALPN support [1] and studying RFC [2] as well as JEP [3].
In short, the Application Layer Protocol Negotiation - > allows the server and the client to agree which protocol shall be used after > TLS handshake. It will be supported out of the box in JDK9. For JDK8 you > need a special Jetty Java agent [4]. > > With ALPN we could build an Uber Client, which would be able to support many > protocols at the same time (REST, HTTP/2, Hot Rod). We should be able to > select the protocol during client initialization as well as renegotiate > existing connection. This could be very convenient for situations when > connecting to multiple Hot Rod servers and some of them are accessible using > Hot Rod (the same DC or the same Cloud tenant) and some connections need to > get through a firewall (HTTP/2, REST). > > Of course implementing this requires major refactoring in the server > endpoint as well as in the client. Possibly this is something for Infinispan > 10 :) > > WDYT? > > Thanks > Sebastian > > [1] https://issues.jboss.org/browse/ISPN-6899 > [2] https://tools.ietf.org/html/rfc7301 > [3] http://openjdk.java.net/jeps/244 > [4] https://github.com/jetty-project/jetty-alpn > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Wed Jul 27 06:34:16 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 27 Jul 2016 12:34:16 +0200 Subject: [infinispan-dev] Uber client, which means ALPN investigation In-Reply-To: References: Message-ID: Maybe a polyglot client? On Wed, Jul 27, 2016 at 12:06 PM, Sanne Grinovero wrote: > Not my area of expertise, so forgive me if I give no technical feedback.. > > So, quite off-topic: > please let's not call it "Uber Client". I've tried it, but I'm still > waiting for my car ride to appear :) Is it always this slow? > > On 27 July 2016 at 07:38, Sebastian Laskawiec wrote: > > Hey guys! > > > > Recently I've been looking into ALPN support [1] and studying RFC [2] as > > well as JEP [3]. In short, the Application Layer Protocol Negotiation - > > allows the server and the client to agree which protocol shall be used > after > > TLS handshake. It will be supported out of the box in JDK9. For JDK8 you > > need a special Jetty Java agent [4]. > > > > With ALPN we could build an Uber Client, which would be able to support > many > > protocols at the same time (REST, HTTP/2, Hot Rod). We should be able to > > select the protocol during client initialization as well as renegotiate > > existing connection. This could be very convenient for situations when > > connecting to multiple Hot Rod servers and some of them are accessible > using > > Hot Rod (the same DC or the same Cloud tenant) and some connections need > to > > get through a firewall (HTTP/2, REST). > > > > Of course implementing this requires major refactoring in the server > > endpoint as well as in the client. Possibly this is something for > Infinispan > > 10 :) > > > > WDYT? 
> > > > Thanks > > Sebastian > > > > [1] https://issues.jboss.org/browse/ISPN-6899 > > [2] https://tools.ietf.org/html/rfc7301 > > [3] http://openjdk.java.net/jeps/244 > > [4] https://github.com/jetty-project/jetty-alpn > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160727/0c1ca27c/attachment-0001.html From galder at redhat.com Wed Jul 27 11:02:09 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 27 Jul 2016 17:02:09 +0200 Subject: [infinispan-dev] Anyone using AdvancedCache.with(ClassLoader) functionality? Message-ID: <4FCE54DA-BAA5-4FB8-83DC-A1709F34FD6C@redhat.com> Hi all, AdvancedCache.with(ClassLoader) is an outdated functionality that we're interested in removing altogether in the next major Infinispan version. We're thinking of removing it all together without deprecation since we believe this was only used by older JBoss Application Server / Wildfly versions. If you're still using this functionality right now, please let us know asap. Cheers, -- Galder Zamarre?o Infinispan, Red Hat From slaskawi at redhat.com Thu Jul 28 08:09:01 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 28 Jul 2016 14:09:01 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> References: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> Message-ID: Hi Tristan! I've been investigating the possibility of integrating with WF monitoring endpoint and I found this article: [1]. If I understand the Infinispan and Wildfly integration bits correctly - all we need to do is to implement additional resources in [2]. This should make the new API available through CLI as well as through REST [1]. The biggest advantage of this solution is that one implementation solves both REST and CLI use cases at the same time. However there are some drawbacks too - we won't be able to support any custom queries (we are limited only to queries supported by WF bits) and the REST api will be a bit complicated to consume: - Imaginary examples based on [1]: - curl http://localhost:9990/management/subsystem=datagrid-infinispan/cache-container=local?operation=health&recursive=true&json.pretty=1 - curl http://localhost:9990/management/subsystem=datagrid-infinispan/cache-container=local/local-cache=default?operation=health&json.pretty=1 - An example what ElasticSearch does [3]: - curl 'http://localhost:9200/_cluster/health?pretty=true' - An example what Spring Actuator does [4]: - curl 'http://localhost:8080/health' - curl 'http://localhost:8080/metrics' If I'm right - we will need to document this feature correctly since those URLs are not very intuitive. WDYT? 
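For illustration, the kind of read handler we would have to add under [2] could look roughly like the sketch below; the class name, metric names and values are invented for the example, only the org.jboss.as.controller and org.jboss.dmr types are real:

    import org.jboss.as.controller.OperationContext;
    import org.jboss.as.controller.OperationFailedException;
    import org.jboss.as.controller.OperationStepHandler;
    import org.jboss.dmr.ModelNode;

    // Hypothetical read handler backing a "health" attribute/operation on the
    // cache-container resource; metric names and values are invented for the example
    public class CacheContainerHealthHandler implements OperationStepHandler {

        @Override
        public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
            // A real handler would resolve the EmbeddedCacheManager service from the
            // operation address and aggregate the per-cache status; hard-coded here
            ModelNode result = context.getResult();
            result.get("cluster-health").set("HEALTHY");   // e.g. HEALTHY / HEALTHY_REBALANCING / DEGRADED
            result.get("number-of-nodes").set(3);
            result.get("cache-health").add("default=HEALTHY");
        }
    }

Registering something like this against the cache-container (and cache) resources is what would give us the CLI and REST views above from a single implementation.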
Thanks Sebastian [1] https://docs.jboss.org/author/display/WFLY10/The+HTTP+management+API [2] https://github.com/infinispan/infinispan/tree/master/server/integration/infinispan/src/main/java/org/jboss/as/clustering/infinispan/subsystem [3] https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html [4] https://spring.io/guides/gs/actuator-service/ On Tue, Jul 26, 2016 at 10:34 AM, Tristan Tarrant wrote: > On 26/07/16 10:24, Sebastian Laskawiec wrote: > > Hey Tristan! > > > > Comments inlined. > > > > Thanks > > Sebastian > > > > On Tue, Jul 26, 2016 at 9:50 AM, Tristan Tarrant > > wrote: > > > > On 26/07/16 07:10, Sebastian Laskawiec wrote: > > > Dear Community, > > > > > > I'd like to ask you for help. I'm currently sketching a design for > a > > > REST health check endpoint for Infinispan and I'm trying to imagine > > > possible use cases. > > The health-check should be implemented as an MBean initially, with > the > > ability to expose it via alternative implementations later. The > server > > RESTful endpoint should be registered with the management > > interface via > > a special handler. > > > > > > Yes, I think it's a good idea. We could even use tools like Jolokia > > [1] to expose MBeans through REST interface (it can be added to > > standalone.conf to the bootstrap classpath). Alternatively we could > > use JDK embedded HTTP Server [2]. > > No, for server we would not use Jolokia but rely on the management HTTP > server (the one that handles port 9990 already). > > > > > A cache and cachemanager's health is determined by a combination of > > parameters and we probably should allow for a user-pluggable > > checker. We > > already expose a number of statuses already, although obviously this > > would be an aggregate. > > > > > > Could you please elaborate more on that? How do we expose this > > information? Are you referring to Infinispan Stats [3]? > > > > I also though about supporting queries somehow. An imaginary example > > from the top of my head could look like the following: > > > > http:// > $ISPN/_health?cluster.nodes=3&MyCacheManager.MyCache.status=DEGRADED > > //<-- This would return 200 OK if we have 3 nodes and given cache is > > in degraded mode. > > http:// > $ISPN/_health?cluster.nodes=3&MyCacheManager.rebalance=IN_PROGRESS > > //<-- Checks if we have 3 nodes and rebalance is in proress > > > > [3] > > > https://github.com/infinispan/infinispan/tree/8.2.x/core/src/main/java/org/infinispan/stats > > > > > > > > > > Could you please give me a hand and tell me what functionalities > are > > > important for you? Would you like to be able to check status > > per-cache > > > or maybe a red (not healthy), green (healthy), yellow (healthy, > > > rebalance in progress) cluster status is sufficient? What kind of > > > information do you expect to be there? > > I wouldn't want this to be overly complex: a simple OK, KO should be > > sufficient. Additional detail may be optionally present, but not a > > requirement. > > > > > > I think we will need at least a 3rd state - yellow or something like > > this. This would mean that a rebalance is in progress of a node is > > joining/leaving. In other words - the cluster accepts requests but > > don't touch the nodes! > Agreed. > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160728/b338238f/attachment.html From ttarrant at redhat.com Thu Jul 28 09:05:19 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 28 Jul 2016 15:05:19 +0200 Subject: [infinispan-dev] Health check use cases In-Reply-To: References: <18348a81-a261-0d36-428d-37bbd4b0ecef@infinispan.org> Message-ID: <7490c296-b0aa-f050-7bd9-a5a768d8673d@infinispan.org> We are a bit more complex than either ES and SA :) First of all this needs to work for both standalone and domain modes, as well as support multiple cache containers. I believe it is possible to register an io.undertow.server.HttpHandler in the ManagementHttpServer to handle additional request types and have convenient aliases for the queries below. Tristan On 28/07/16 14:09, Sebastian Laskawiec wrote: > Hi Tristan! > > I've been investigating the possibility of integrating with WF > monitoring endpoint and I found this article: [1]. > > If I understand the Infinispan and Wildfly integration bits correctly > - all we need to do is to implement additional resources in [2]. This > should make the new API available through CLI as well as through REST [1]. > > The biggest advantage of this solution is that one implementation > solves both REST and CLI use cases at the same time. However there are > some drawbacks too - we won't be able to support any custom queries > (we are limited only to queries supported by WF bits) and the REST api > will be a bit complicated to consume: > > * Imaginary examples based on [1]: > o curl > http://localhost:9990/management/subsystem=datagrid-infinispan/cache-container=local?operation=health&recursive=true&json.pretty=1 > o curl > http://localhost:9990/management/subsystem=datagrid-infinispan/cache-container=local/local-cache=default?operation=health&json.pretty=1 > * An example what ElasticSearch does [3]: > o curl 'http://localhost:9200/_cluster/health?pretty=true' > * An example what Spring Actuator does [4]: > o curl 'http://localhost:8080/health' > o curl 'http://localhost:8080/metrics' > > If I'm right - we will need to document this feature correctly since > those URLs are not very intuitive. WDYT? > Thanks > Sebastian > > [1] https://docs.jboss.org/author/display/WFLY10/The+HTTP+management+API > [2] > https://github.com/infinispan/infinispan/tree/master/server/integration/infinispan/src/main/java/org/jboss/as/clustering/infinispan/subsystem > [3] > https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html > [4] https://spring.io/guides/gs/actuator-service/ > > On Tue, Jul 26, 2016 at 10:34 AM, Tristan Tarrant > wrote: > > On 26/07/16 10:24, Sebastian Laskawiec wrote: > > Hey Tristan! > > > > Comments inlined. > > > > Thanks > > Sebastian > > > > On Tue, Jul 26, 2016 at 9:50 AM, Tristan Tarrant > > > >> wrote: > > > > On 26/07/16 07:10, Sebastian Laskawiec wrote: > > > Dear Community, > > > > > > I'd like to ask you for help. I'm currently sketching a > design for a > > > REST health check endpoint for Infinispan and I'm trying > to imagine > > > possible use cases. > > The health-check should be implemented as an MBean > initially, with the > > ability to expose it via alternative implementations later. > The server > > RESTful endpoint should be registered with the management > > interface via > > a special handler. > > > > > > Yes, I think it's a good idea. 
We could even use tools like Jolokia > > [1] to expose MBeans through REST interface (it can be added to > > standalone.conf to the bootstrap classpath). Alternatively we could > > use JDK embedded HTTP Server [2]. > > No, for server we would not use Jolokia but rely on the management > HTTP > server (the one that handles port 9990 already). > > > > > A cache and cachemanager's health is determined by a > combination of > > parameters and we probably should allow for a user-pluggable > > checker. We > > already expose a number of statuses already, although > obviously this > > would be an aggregate. > > > > > > Could you please elaborate more on that? How do we expose this > > information? Are you referring to Infinispan Stats [3]? > > > > I also though about supporting queries somehow. An imaginary example > > from the top of my head could look like the following: > > > > > http://$ISPN/_health?cluster.nodes=3&MyCacheManager.MyCache.status=DEGRADED > > //<-- This would return 200 OK if we have 3 nodes and given cache is > > in degraded mode. > > > http://$ISPN/_health?cluster.nodes=3&MyCacheManager.rebalance=IN_PROGRESS > > //<-- Checks if we have 3 nodes and rebalance is in proress > > > > [3] > > > https://github.com/infinispan/infinispan/tree/8.2.x/core/src/main/java/org/infinispan/stats > > > > > > > > > > Could you please give me a hand and tell me what > functionalities are > > > important for you? Would you like to be able to check status > > per-cache > > > or maybe a red (not healthy), green (healthy), yellow > (healthy, > > > rebalance in progress) cluster status is sufficient? What > kind of > > > information do you expect to be there? > > I wouldn't want this to be overly complex: a simple OK, KO > should be > > sufficient. Additional detail may be optionally present, but > not a > > requirement. > > > > > > I think we will need at least a 3rd state - yellow or something like > > this. This would mean that a rebalance is in progress of a node is > > joining/leaving. In other words - the cluster accepts requests but > > don't touch the nodes! > Agreed. > > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Thu Jul 28 09:59:49 2016 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 28 Jul 2016 15:59:49 +0200 Subject: [infinispan-dev] Persisted state Message-ID: <579A0FD5.2000801@redhat.com> Hi, in what situations is the state (ATM just version + cache topologies) meant to be persisted? I guess it's necessary with non-shared cache stores, but should it be persisted with shared one, too? And what are the guarantees during writing that state down? (e.g. can you make sure that no operations are executed when persisting?) My problem is that for scattered cache, I need to persist highest version for each segment, or I have to iterate through cache store when starting - that's kind of forced preload. It's even worse - during regular preload, cache topology is not installed yet, and as I've found, I can't do that in @Start annotated method because I need to find segment for each key and cache topology can be installed even later than in STMI.start() (when the persistent state is being loaded, the response to join may not contain the cache topology). 
Radim -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Fri Jul 29 04:59:48 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 29 Jul 2016 10:59:48 +0200 Subject: [infinispan-dev] Passivation and manual eviction Message-ID: <2b737b23-4bcd-399d-db34-f4f676bf69aa@redhat.com> Hi all, Radoslav just brought to my attention that our eviction configuration validator prints out a warning when passivation is enabled without configuring eviction. This was done to make sure users are aware of the fact that a cache with this configuration will never actually passivate. In WildFly's case however, eviction is performed manually (by invoking cache.evict()). In this case the warning is just misleading. My proposal is therefore to introduce a new eviction strategy "MANUAL" wihch internally would be handled in the same way as "NONE" but which would prevent the warning. Wdyt ? Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From remerson at redhat.com Fri Jul 29 05:15:12 2016 From: remerson at redhat.com (Ryan Emerson) Date: Fri, 29 Jul 2016 05:15:12 -0400 (EDT) Subject: [infinispan-dev] Passivation and manual eviction In-Reply-To: <2b737b23-4bcd-399d-db34-f4f676bf69aa@redhat.com> References: <2b737b23-4bcd-399d-db34-f4f676bf69aa@redhat.com> Message-ID: <1855590160.45710689.1469783712751.JavaMail.zimbra@redhat.com> +1 makes sense to me. ----- Original Message ----- From: "Tristan Tarrant" To: "infinispan -Dev List" Sent: Friday, 29 July, 2016 9:59:48 AM Subject: [infinispan-dev] Passivation and manual eviction Hi all, Radoslav just brought to my attention that our eviction configuration validator prints out a warning when passivation is enabled without configuring eviction. This was done to make sure users are aware of the fact that a cache with this configuration will never actually passivate. In WildFly's case however, eviction is performed manually (by invoking cache.evict()). In this case the warning is just misleading. My proposal is therefore to introduce a new eviction strategy "MANUAL" wihch internally would be handled in the same way as "NONE" but which would prevent the warning. Wdyt ? Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Fri Jul 29 05:21:31 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 29 Jul 2016 11:21:31 +0200 Subject: [infinispan-dev] Passivation and manual eviction In-Reply-To: <1855590160.45710689.1469783712751.JavaMail.zimbra@redhat.com> References: <2b737b23-4bcd-399d-db34-f4f676bf69aa@redhat.com> <1855590160.45710689.1469783712751.JavaMail.zimbra@redhat.com> Message-ID: I've created a Jira and corresponding PR for this https://github.com/infinispan/infinispan/pull/4472 Tristan On 29/07/16 11:15, Ryan Emerson wrote: > +1 makes sense to me. > > ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" > Sent: Friday, 29 July, 2016 9:59:48 AM > Subject: [infinispan-dev] Passivation and manual eviction > > Hi all, > > Radoslav just brought to my attention that our eviction configuration > validator prints out a warning when passivation is enabled without > configuring eviction. This was done to make sure users are aware of the > fact that a cache with this configuration will never actually passivate. 
> In WildFly's case however, eviction is performed manually (by invoking > cache.evict()). In this case the warning is just misleading. > > My proposal is therefore to introduce a new eviction strategy "MANUAL" > wihch internally would be handled in the same way as "NONE" but which > would prevent the warning. > > Wdyt ? > > Tristan From ttarrant at redhat.com Fri Jul 29 06:19:09 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 29 Jul 2016 12:19:09 +0200 Subject: [infinispan-dev] Uber client, which means ALPN investigation In-Reply-To: References: Message-ID: <6a5e11af-6243-18b8-b550-546e4f122ceb@infinispan.org> The WildFly guys are looking into this, so it would make sense to coordinate with them [5] [5] http://lists.jboss.org/pipermail/wildfly-dev/2016-June/005040.html Tristan On 27/07/16 08:38, Sebastian Laskawiec wrote: > Hey guys! > > Recently I've been looking into ALPN support [1] and studying RFC [2] > as well as JEP [3]. In short, the Application Layer Protocol > Negotiation - allows the server and the client to agree which protocol > shall be used after TLS handshake. It will be supported out of the box > in JDK9. For JDK8 you need a special Jetty Java agent [4]. > > With ALPN we could build an Uber Client, which would be able to > support many protocols at the same time (REST, HTTP/2, Hot Rod). We > should be able to select the protocol during client initialization as > well as renegotiate existing connection. This could be very convenient > for situations when connecting to multiple Hot Rod servers and some of > them are accessible using Hot Rod (the same DC or the same Cloud > tenant) and some connections need to get through a firewall (HTTP/2, > REST). > > Of course implementing this requires major refactoring in the server > endpoint as well as in the client. Possibly this is something for > Infinispan 10 :) > > WDYT? > > Thanks > Sebastian > > [1] https://issues.jboss.org/browse/ISPN-6899 > [2] https://tools.ietf.org/html/rfc7301 > [3] http://openjdk.java.net/jeps/244 > [4] https://github.com/jetty-project/jetty-alpn > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Fri Jul 29 06:40:12 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 29 Jul 2016 12:40:12 +0200 Subject: [infinispan-dev] Persisted state In-Reply-To: <579A0FD5.2000801@redhat.com> References: <579A0FD5.2000801@redhat.com> Message-ID: On 28/07/16 15:59, Radim Vansa wrote: > Hi, > > in what situations is the state (ATM just version + cache topologies) > meant to be persisted? I guess it's necessary with non-shared cache > stores, but should it be persisted with shared one, too? The writing is handled by the global state manager. You need to enable global state first obviously. There are two types of state: per-cachemanager and per-cache. Also graceful stop is performed only when a cache is shutdown(), not stop()ed. > And what are the guarantees during writing that state down? (e.g. can > you make sure that no operations are executed when persisting?) That is not how it is being handled atm: rebalancing is disabled, caches are passivated, and the state is written before stopping the cache components. It's like this because I was thinking that the state that we are writing (CH and topology) wouldn't be affected by some additional operations, but it would make sense to put the cache in a STOPPING state first to avoid ops. 
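For reference, a minimal sketch of how this looks from the programmatic API, assuming the global-state builder and the cache shutdown() method as they currently stand (the exact method names are approximate, so double-check against master):

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class GracefulShutdownExample {
        public static void main(String[] args) {
            GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
            global.globalState()
                  .enable()                                         // turn on the persistent global state
                  .persistentLocation("/var/lib/infinispan/state"); // where the state files are written
            DefaultCacheManager cm = new DefaultCacheManager(global.build());
            cm.defineConfiguration("mycache", new ConfigurationBuilder().build());
            Cache<String, String> cache = cm.getCache("mycache");
            cache.put("k", "v");
            cache.shutdown(); // graceful stop: persists the cache state, unlike plain stop()
            cm.stop();
        }
    }

Global state is off by default, so none of this is written unless it is enabled as above.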
Tristan


From galder at redhat.com  Fri Jul 29 09:47:37 2016
From: galder at redhat.com (Galder Zamarreño)
Date: Fri, 29 Jul 2016 15:47:37 +0200
Subject: [infinispan-dev] Infinispan and change data capture
In-Reply-To: <8EF34011-3667-495B-8191-F2ED4286F0FA@redhat.com>
References: <2569A6BA-FBC2-40A7-A821-26676F10BEB0@redhat.com>
 <57835BEF.4010902@redhat.com>
 <8EF34011-3667-495B-8191-F2ED4286F0FA@redhat.com>
Message-ID: 

--
Galder Zamarreño
Infinispan, Red Hat

> On 11 Jul 2016, at 16:41, Randall Hauch wrote:
>
>>
>> On Jul 11, 2016, at 3:42 AM, Adrian Nistor wrote:
>>
>> Hi Randall,
>>
>> Infinispan supports both push and pull access models. The push model is supported by events (and listeners), which are cluster-wide and are available in both library and remote mode (Hot Rod). The notification system is pretty advanced, as there is a filtering mechanism available that can use a hand-coded filter / converter or one specified in JPQL (experimental atm). Getting a snapshot of the initial data is also possible. But Infinispan does not produce a transaction log to be used for determining all changes that happened since a previous connection time, so you'll always have to get a new full snapshot when re-connecting.
>>
>> So if Infinispan is the data store, I would base the Debezium connector implementation on Infinispan's event notification system. Not sure about the other use case though.
>>
>
> Thanks, Adrian, for the feedback. A couple of questions.
>
> You mentioned Infinispan has a pull model - is this just using the normal API to read the entries?
>
> With event listeners, a single connection will receive all of the events that occur in the cluster, correct? Is it possible (e.g., a very unfortunately timed crash) for a change to be made to the cache without an event being produced and sent to listeners?

^ Yeah, that can happen due to the async nature of remote events. However, there's the possibility for clients, upon receiving a new topology, to receive the current state of the server as events, see [1] and [2]

[1] http://infinispan.org/docs/dev/user_guide/user_guide.html#client_event_listener_state_consumption
[2] http://infinispan.org/docs/dev/user_guide/user_guide.html#client_event_listener_failure_handling

> What happens if the network fails or partitions? How does cross site replication address this?

In terms of cross-site, it depends on what the client is connected to. Clients can now fail over between sites, so they should be able to deal with events too, in the same way as explained above.

>
> Has there been any thought about adding to Infinispan a write ahead log or transaction log to each node or, better yet, for the whole cluster?

Not that I'm aware of, but we've recently added a security audit log, so a transaction log might make sense too.

Cheers,

>
> Thanks again!
>
>> Adrian
>>
>> On 07/09/2016 04:38 PM, Randall Hauch wrote:
>>> The Debezium project [1] is working on building change data capture connectors for a variety of databases. MySQL is available now, MongoDB will be soon, and PostgreSQL and Oracle are next on our roadmap.
>>>
>>> One way in which Debezium and Infinispan can be used together is when Infinispan is being used as a cache for data stored in a database. In this case, Debezium can capture the changes to the database and produce a stream of events; a separate process can consume these changes and evict entries from an Infinispan cache.
>>>
>>> If Infinispan is to be used as a data store, then it would be useful for Debezium to be able to capture those changes so other apps/services can consume the changes. First of all, does this make sense? Secondly, if it does, then Debezium would need an Infinispan connector, and it's not clear to me how that connector might capture the changes from Infinispan.
>>>
>>> Debezium typically monitors the log of transactions/changes that are committed to a database. Of course, how this works varies for each type of database. For example, MySQL internally produces a transaction log that contains information about every committed row change, and MySQL ensures that every committed change is included and that non-committed changes are excluded. The MySQL mechanism is actually part of the replication mechanism, so slaves update their internal state by reading the master's log. The Debezium MySQL connector [2] simply reads the same log.
>>>
>>> Infinispan has several mechanisms that may be useful:
>>>
>>> - Interceptors - See [3]. This seems pretty straightforward and IIUC provides access to all internal operations. However, it's not clear to me whether a single interceptor will see all the changes in a cluster (perhaps in local and replicated modes) or only those changes that happen on that particular node (in distributed mode). It's also not clear whether this interceptor is called within the context of the cache's transaction, so if a failure happens just at the wrong time, whether a change might be made to the cache but not be seen by the interceptor (or vice versa).
>>> - Cross-site replication - See [4][5]. A potential advantage of this mechanism appears to be that it is defined (more) globally, and it appears to function if the remote backup comes back online after being offline for a period of time.
>>> - State transfer - is it possible to participate as a non-active member of the cluster, and to effectively read all state transfer activities that occur within the cluster?
>>> - Cache store - tie into the cache store mechanism, perhaps by wrapping an existing cache store and sitting between the cache and the cache store.
>>> - Monitor the cache store - don't monitor Infinispan at all, and instead monitor the store in which Infinispan is storing entries. (This is probably the least attractive, since some stores can't be monitored, or because the store is persisting an opaque binary value.)
>>>
>>> Are there other mechanisms that might be used?
>>>
>>> There are a couple of important requirements for change data capture to be able to work correctly:
>>>
>>> - Upon initial connection, the CDC connector must be able to obtain a snapshot of all existing data, followed by seeing all changes to data that may have occurred since the snapshot was started. If the connector is stopped/fails, upon restart it needs to be able to reconnect and either see all changes that occurred since it was last capturing changes, or perform a snapshot. (Performing a snapshot upon restart is very inefficient and undesirable.) This works as follows: the CDC connector only records the "offset" in the source's sequence of events; what this "offset" entails depends on the source. Upon restart, the connector can use this offset information to coordinate with the source where it wants to start reading. (In MySQL and PostgreSQL, every event includes the filename of the log and the position in that file. MongoDB includes in each event the monotonically increasing timestamp of the transaction.)
>>> - No change can be missed, even when things go wrong and components crash.
>>> - When a new entry is added, the "after" state of the entity will be included. When an entry is updated, the "after" state will be included in the event; if possible, the event should also include the "before" state. When an entry is removed, the "before" state should be included in the event.
>>>
>>> Any thoughts or advice would be greatly appreciated.
>>>
>>> Best regards,
>>>
>>> Randall
>>>
>>>
>>> [1] http://debezium.io
>>> [2] http://debezium.io/docs/connectors/mysql/
>>> [3] http://infinispan.org/docs/stable/user_guide/user_guide.html#_custom_interceptors_chapter
>>> [4] http://infinispan.org/docs/stable/user_guide/user_guide.html#CrossSiteReplication
>>> [5] https://github.com/infinispan/infinispan/wiki/Design-For-Cross-Site-Replication
>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>>
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


From rvansa at redhat.com  Fri Jul 29 10:30:45 2016
From: rvansa at redhat.com (Radim Vansa)
Date: Fri, 29 Jul 2016 16:30:45 +0200
Subject: [infinispan-dev] Persisted state
In-Reply-To: 
References: <579A0FD5.2000801@redhat.com>
Message-ID: <579B6895.7070305@redhat.com>

On 07/29/2016 12:40 PM, Tristan Tarrant wrote:
> On 28/07/16 15:59, Radim Vansa wrote:
>> Hi,
>>
>> in what situations is the state (ATM just version + cache topologies)
>> meant to be persisted? I guess it's necessary with non-shared cache
>> stores, but should it be persisted with shared ones, too?
> The writing is handled by the global state manager. You need to enable
> global state first, obviously. There are two types of state:
> per-cachemanager and per-cache. Also, the graceful stop is performed only
> when a cache is shutdown(), not stop()ed.

Okay, I see that this needs to be enabled manually through configuration
(which makes sense). What I can't find is any recommendation to users on
*when* they should enable it, and therefore when a developer can expect it
to be set (and emit a warning/error when it is not set).

>> And what are the guarantees during writing that state down? (e.g. can
>> you make sure that no operations are executed when persisting?)
> That is not how it is being handled atm: rebalancing is disabled, caches
> are passivated, and the state is written before stopping the cache
> components. It's like this because I was thinking that the state we are
> writing (CH and topology) wouldn't be affected by additional operations,
> but it would make sense to put the cache in a STOPPING state first to
> avoid ops.

Ack, moving the cache to a STOPPING state is what I had in mind. I wanted
to know whether it would be 'intended'.

>
> Tristan
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss Performance Team