From rvansa at redhat.com Wed Jun 1 03:02:46 2016 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 1 Jun 2016 09:02:46 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: References: <574BEFE7.4000407@redhat.com> Message-ID: <574E8896.3@redhat.com> On 05/31/2016 01:33 PM, Galder Zamarreño wrote: > Comments inline: > > -- > Galder Zamarreño > Infinispan, Red Hat > >> On 30 May 2016, at 09:46, Tristan Tarrant wrote: >> >> In the past there has been talk of representing a connection to >> Infinispan using a URL, in particular for HotRod. >> The Hibernate OGM team is now working on adding NoSQL datasources to >> WildFly, and they've asked how they should represent connections to >> various of these. > ^ What's this trying to solve exactly? > >> For Hot Rod: >> >> infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] >> >> The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't >> currently support this, so this is forward-looking). >> Obviously we will support all of the HotRod properties for specifying >> things like security, etc. > ^ Hmmm, all properties? Do you envision potentially putting all HR client config inside a URL? > >> For Embedded: >> >> infinispan:embedded:file://path/to/config.xml (for specifying an >> external config file) >> infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager >> in JNDI) >> infinispan:embedded: (configuration specified as properties) >> >> For the latter, we also need to be able to represent an infinispan >> configuration using properties with a simple mapping to XML >> elements/attributes, e.g. >> >> cache-manager.local-cache.mycache.eviction.size=1000 > ^ Why 'local-cache' in property name? cachemanager.mycache...etc would be enough since there can't be duplicate cache names inside a given cache manager. So, is 'local-cache' merely a hint?
The first idea would be to make the left-hand side XPath expressions, so it would be cache-container[@name=myManager].local-cache[@name=myCache].eviction.size=1000 As we probably want to select only on the name attribute, this could be sufficient and less verbose: cache-container[myManager].local-cache[myCache].eviction.size=1000 I wouldn't mix the 'schema' of the property with user-defined identifiers - the brackets cleanly separate the two. There are cases where you have multiple children in one element - custom interceptors, groups, persistence (though the current schema tells me I can have only one store defined)... and there is no clear identifier (such as a cache name or backup site). I would suggest that a custom identifier that is not present in the configuration would help the user identify these, e.g. cache-container[myManager].distributed-cache[myCache].persistence.store[foo].class=org.my.FooStore cache-container[myManager].distributed-cache[myCache].persistence.store[foo].file=/some/path cache-container[myManager].distributed-cache[myCache].persistence.store[bar].class=org.my.BarStore cache-container[myManager].distributed-cache[myCache].persistence.store[bar].url=http://example.com My 2c Radim > > Cheers, > >> >> Comments are welcome >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Wed Jun 1 03:24:47 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 1 Jun 2016 09:24:47 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574E8896.3@redhat.com> References:
<574BEFE7.4000407@redhat.com> <574E8896.3@redhat.com> Message-ID: <574E8DBF.3010503@infinispan.org> So you've been putting that XSL/Xpath knowledge to good use I see. I like it. Tristan On 01/06/2016 09:02, Radim Vansa wrote: > [quoted text trimmed] From galder at redhat.com Wed Jun 1 04:26:14 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 1 Jun 2016 10:26:14 +0200 Subject: [infinispan-dev] Changing default Hot Rod client max retries
Message-ID: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> Hi all, The Java Hot Rod client has 10 max retries as the default. This sounds a bit too much, and as I find the need to add similar configuration to the JS client, I'm wondering whether this should be reduced to 3 for all clients, including the Java, C* and JS clients. Any objections? Cheers, -- Galder Zamarreño Infinispan, Red Hat From sanne at infinispan.org Wed Jun 1 05:29:25 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 1 Jun 2016 10:29:25 +0100 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574E8DBF.3010503@infinispan.org> References: <574BEFE7.4000407@redhat.com> <574E8896.3@redhat.com> <574E8DBF.3010503@infinispan.org> Message-ID: The implementation proposals seem slick, but I'd have some doubts about allowing overrides to the datastore settings at this level. The hot-rod proposal looks fine, as, similarly to an RDBMS, it helps to figure out how to connect to a specific database by expressing: - how to reach the DB - WHICH database you mean to connect to In the case of the proposals for Infinispan Embedded, I think we fail these goals: you need to provide a means for multiple applications to "connect" to the same database. So the container needs to be able to distinguish the *same* Cache instance from a different one, and this might get complex if the URL includes a mixture of client-specific settings (i.e. how to connect) and configuration of the Cache (i.e. TTL and CacheStore options). It also gets messy in terms of lifecycle: do you stop the CacheManager when the last client is undeployed? I'd rather see an approach based on naming lookup. How the Cache is configured, started and "bound" to that specific name should be treated separately. For that purpose, I think the WildFly caches configuration can be considered the first step, and the next would be to allow a "Cache configuration fragment" to be deployed either included with an application or independently of it.
Thanks, Sanne On 1 June 2016 at 08:24, Tristan Tarrant wrote: > So you've been putting that XSL/Xpath knowledge to good use I see. I > like it. > > Tristan > > On 01/06/2016 09:02, Radim Vansa wrote: >> [quoted text trimmed] > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
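Radim's bracketed keys quoted in this thread keep the fixed schema tokens (cache-container, local-cache, eviction, ...) cleanly separated from user-supplied names (myManager, myCache), which makes them easy to split mechanically. A minimal sketch of such a parser in Java — the class and method names here are illustrative, not part of any Infinispan API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for property keys like
// cache-container[myManager].local-cache[myCache].eviction.size
// Each dot-separated segment is a schema element name with an optional
// user-defined identifier in brackets.
public class PropertyPathParser {

    // One path segment: element name plus optional [identifier].
    public static final class Segment {
        public final String element;
        public final String id; // null when no bracket identifier is present

        Segment(String element, String id) {
            this.element = element;
            this.id = id;
        }

        @Override
        public String toString() {
            return id == null ? element : element + "[" + id + "]";
        }
    }

    // element name (letters, digits, '_', '-') optionally followed by [id]
    private static final Pattern SEGMENT =
            Pattern.compile("([\\w-]+)(?:\\[([^\\]]+)\\])?");

    public static List<Segment> parse(String key) {
        List<Segment> path = new ArrayList<>();
        for (String part : key.split("\\.")) {
            Matcher m = SEGMENT.matcher(part);
            if (!m.matches()) {
                throw new IllegalArgumentException("Bad segment: " + part);
            }
            path.add(new Segment(m.group(1), m.group(2)));
        }
        return path;
    }

    public static void main(String[] args) {
        for (Segment s : parse(
                "cache-container[myManager].local-cache[myCache].eviction.size")) {
            System.out.println(s);
        }
    }
}
```

The resulting list of (element, identifier) pairs could then be walked against the configuration schema to set the corresponding attribute; how the value side is converted and applied is left open here.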
From sanne at infinispan.org Wed Jun 1 05:34:25 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 1 Jun 2016 10:34:25 +0100 Subject: [infinispan-dev] Changing default Hot Rod client max retries In-Reply-To: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> References: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> Message-ID: No objection, just not sure about the usefulness. I think what matters for people is how long it is going to wait before it fails. If it's a long time (e.g. 10 minutes) then you'd probably want it to try faster than waiting 5 minutes for the second try ... exponential backoff sounds nicer than trying to find a reasonable balance in the connection retries. Another benefit of an exponential backoff strategy is that you could allow the users to set an option to wait essentially forever (until interrupted: nicer to allow this control to higher up stacks), which could be useful for cloud deployments, microservices, etc.. On 1 June 2016 at 09:26, Galder Zamarreño wrote: > [quoted text trimmed] > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Jun 1 08:17:55 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 1 Jun 2016 15:17:55 +0300 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: References: <574BEFE7.4000407@redhat.com> <574E8896.3@redhat.com> <574E8DBF.3010503@infinispan.org> Message-ID: +1, a URL that gives you a different CacheManager every time you use it doesn't seem very useful.
JCache also requires the CacheManager returned for one URL to be more or less constant: * Multiple calls to this method with the same {@link URI} and * {@link ClassLoader} must return the same {@link CacheManager} instance, * except if a previously returned {@link CacheManager} has been closed. Cheers Dan On Wed, Jun 1, 2016 at 12:29 PM, Sanne Grinovero wrote: > [quoted text trimmed] > > On 1 June 2016 at 08:24, Tristan Tarrant wrote: >> So you've been putting that XSL/Xpath knowledge to good use I see. I >> like it.
>> >> Tristan >> >> On 01/06/2016 09:02, Radim Vansa wrote: >>> [quoted text trimmed] >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >>
https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Jun 1 08:34:57 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 1 Jun 2016 15:34:57 +0300 Subject: [infinispan-dev] Changing default Hot Rod client max retries In-Reply-To: References: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> Message-ID: I'd also like to see an option for the total time to wait, instead of having to worry about two (or more) different settings. True, if there's a bug that causes the request to fail immediately and the client retries without pause for 1 minute, it can generate a lot of unnecessary load. So perhaps we should only retry if we "know" the error can be fixed by retrying, e.g. on connection close or on IllegalLifecycleStateExceptions. Cheers Dan On Wed, Jun 1, 2016 at 12:34 PM, Sanne Grinovero wrote: > No objection, just not sure about the usefulness. I think what matters > for people is how long is it going to wait before it fails. > > If it's a long time (i.e. 10 minutes) then you'd probably want it try > faster than waiting 5 minutes for the second try ... exponential > backoff sounds nicer than trying to find a reasonable balance in the > connection retries. > > Another benefit of an exponential backoff strategy is that you could > allow the users to set an option to wait essentially forever (until > interrupted: nicer to allow this control to higher up stacks), which > could be useful for cloud deployments, microservices, etc.. > > > > On 1 June 2016 at 09:26, Galder Zamarre?o wrote: >> Hi all, >> >> Java Hot Rod client has 10 max retries as default. This sounds a bit too much, and as I find the need to add similar configuration to JS client, I'm wondering whether this should be reduce to 3 for all clients, including Java, C* and JS clients. 
>> >> Any objections? >> >> Cheers, >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From smarlow at redhat.com Wed Jun 1 09:31:53 2016 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 1 Jun 2016 09:31:53 -0400 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574BEFE7.4000407@redhat.com> References: <574BEFE7.4000407@redhat.com> Message-ID: <5b217358-b264-49cd-5705-fb24ad0d402a@redhat.com> On 05/30/2016 03:46 AM, Tristan Tarrant wrote: > [quoted text trimmed] > Obviously we will support all of the HotRod properties for specifying > things like security, etc. Once you are connected to a remote (Infinispan) database, does the application simply use the java.util.Map api to put/get any application values? Or are puts not allowed to use application classes? I'm trying to better understand how the marshaling works, since the remote Infinispan database probably wouldn't have access to the application classloader (unless it does, which I'd like to also understand).
> > For Embedded: > > infinispan:embedded:file://path/to/config.xml (for specifying an > external config file) > infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager > in JNDI) > infinispan:embedded: (configuration specified as properties) > > For the latter, we also need to be able to represent an infinispan > configuration using properties with a simple mapping to XML > elements/attributes, e.g. > > cache-manager.local-cache.mycache.eviction.size=1000 > > > Comments are welcome > > Tristan > From mudokonman at gmail.com Wed Jun 1 09:54:13 2016 From: mudokonman at gmail.com (William Burns) Date: Wed, 01 Jun 2016 13:54:13 +0000 Subject: [infinispan-dev] Singleton Cache Stores with Shared Cache Stores Message-ID: Recently there was a start of a discussion regarding singleton cache stores and how they behave. Interestingly according to our documentation [1] and verification code [2] a singleton store cannot be used with a shared cache store. This makes no sense to me as this means you would have a single point of failure for your data. And also as Dan pointed out [3] there is no Singleton cache loader to make sure all the loads are from the coordinator either, which means you could have a read that returns null despite it being in the store/loader. And even looking at [4] it talks about singleton being used so not every node writes to the underlying store, which implies it being shared. I think we have enough proof to update this so a singleton store requires a shared store, but I wanted to make sure we weren't missing something here. 
Thanks, - Will [1] http://infinispan.org/docs/9.0.x/user_guide/user_guide.html#_configuration_2 [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/PersistenceConfigurationBuilder.java#L108 [3] https://github.com/infinispan/infinispan/pull/4382#discussion_r65360312 [4] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/support/SingletonCacheWriter.java#L40 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160601/569a66d1/attachment.html From remerson at redhat.com Wed Jun 1 10:24:04 2016 From: remerson at redhat.com (Ryan Emerson) Date: Wed, 1 Jun 2016 10:24:04 -0400 (EDT) Subject: [infinispan-dev] Singleton Cache Stores with Shared Cache Stores In-Reply-To: References: Message-ID: <835648218.43963060.1464791044075.JavaMail.zimbra@redhat.com> After further discussions on IRC, we have concluded the following: In shared mode only the primary owner of a key writes to the shared store, therefore there is no obvious use-case for having a singleton mode which delegates all writes to a single node. With this in mind, I propose that the singleton option and associated writers be deprecated [1]. If anybody has any objections, please speak up. [1] https://issues.jboss.org/browse/ISPN-6748 Cheers Ryan ----- Original Message ----- From: "William Burns" To: "infinispan -Dev List" Cc: dan at infinispan.org, remerson at redhat.com Sent: Wednesday, 1 June, 2016 2:54:13 PM Subject: Singleton Cache Stores with Shared Cache Stores Recently there was a start of a discussion regarding singleton cache stores and how they behave. Interestingly according to our documentation [1] and verification code [2] a singleton store cannot be used with a shared cache store. This makes no sense to me as this means you would have a single point of failure for your data. 
And also as Dan pointed out [3] there is no Singleton cache loader to make sure all the loads are from the coordinator either, which means you could have a read that returns null despite it being in the store/loader. And even looking at [4] it talks about singleton being used so not every node writes to the underlying store, which implies it being shared. I think we have enough proof to update this so a singleton store requires a shared store, but I wanted to make sure we weren't missing something here. Thanks, - Will [1] http://infinispan.org/docs/9.0.x/user_guide/user_guide.html#_configuration_2 [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/PersistenceConfigurationBuilder.java#L108 [3] https://github.com/infinispan/infinispan/pull/4382#discussion_r65360312 [4] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/support/SingletonCacheWriter.java#L40 From sanne at infinispan.org Wed Jun 1 10:35:45 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 1 Jun 2016 15:35:45 +0100 Subject: [infinispan-dev] Singleton Cache Stores with Shared Cache Stores In-Reply-To: <835648218.43963060.1464791044075.JavaMail.zimbra@redhat.com> References: <835648218.43963060.1464791044075.JavaMail.zimbra@redhat.com> Message-ID: On 1 June 2016 at 15:24, Ryan Emerson wrote: > After further discussions on IRC, we have concluded the following: > > In shared mode only the primary owner of a key writes to the shared store, > therefore there is no obvious use-case for having a singleton mode which > delegates all writes to a single node. As far as I remember, the *intent* was to allow dealing with stores which can't handle concurrent writes, i.e. needing a global lock. We had different CacheStore implementations back then, I guess some of them might have had exotic limitations. 
I don't know which practical use case people had in mind though: it's likely we already dropped any implementation which could need this long ago, so no objections about getting rid of it. Thanks, Sanne > > With this in mind, I propose that the singleton option and associated > writers be deprecated [1]. If anybody has any objections, please speak up. > > [1] https://issues.jboss.org/browse/ISPN-6748 > > Cheers > Ryan > > ----- Original Message ----- > From: "William Burns" > To: "infinispan -Dev List" > Cc: dan at infinispan.org, remerson at redhat.com > Sent: Wednesday, 1 June, 2016 2:54:13 PM > Subject: Singleton Cache Stores with Shared Cache Stores > > Recently there was a start of a discussion regarding singleton cache stores > and how they behave. Interestingly according to our documentation [1] and > verification code [2] a singleton store cannot be used with a shared cache > store. This makes no sense to me as this means you would have a single > point of failure for your data. And also as Dan pointed out [3] there is > no Singleton cache loader to make sure all the loads are from the > coordinator either, which means you could have a read that returns null > despite it being in the store/loader. > > And even looking at [4] it talks about singleton being used so not every > node writes to the underlying store, which implies it being shared. > > I think we have enough proof to update this so a singleton store requires a > shared store, but I wanted to make sure we weren't missing something here. 
> > Thanks, > > - Will > > [1] > http://infinispan.org/docs/9.0.x/user_guide/user_guide.html#_configuration_2 > [2] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/PersistenceConfigurationBuilder.java#L108 > [3] https://github.com/infinispan/infinispan/pull/4382#discussion_r65360312 > [4] > https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/support/SingletonCacheWriter.java#L40 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Thu Jun 2 13:38:12 2016 From: mudokonman at gmail.com (William Burns) Date: Thu, 02 Jun 2016 17:38:12 +0000 Subject: [infinispan-dev] Changing default Hot Rod client max retries In-Reply-To: References: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> Message-ID: On Wed, Jun 1, 2016 at 8:35 AM Dan Berindei wrote: > I'd also like to see an option for the total time to wait, instead of > having to worry about two (or more) different settings. > Only 1 config sounds good to me. I admit I am more used to total time to wait rather than retry, using long and TimeUnit. > > True, if there's a bug that causes the request to fail immediately and > the client retries without pause for 1 minute, it can generate a lot > of unnecessary load. So perhaps we should only retry if we "know" the > error can be fixed by retrying, e.g. on connection close or on > IllegalLifecycleStateExceptions. > +1, retrying on specific exceptions sounds like a good idea to me > > Cheers > Dan > > > On Wed, Jun 1, 2016 at 12:34 PM, Sanne Grinovero > wrote: > > No objection, just not sure about the usefulness. I think what matters > > for people is how long is it going to wait before it fails. > > > > If it's a long time (i.e. 
10 minutes) then you'd probably want it to try > faster than waiting 5 minutes for the second try ... exponential > backoff sounds nicer than trying to find a reasonable balance in the > connection retries. > > Another benefit of an exponential backoff strategy is that you could > allow the users to set an option to wait essentially forever (until > interrupted: nicer to allow this control to higher up stacks), which > could be useful for cloud deployments, microservices, etc.. > > > > On 1 June 2016 at 09:26, Galder Zamarreño wrote: > >> Hi all, > >> > >> Java Hot Rod client has 10 max retries as default. This sounds a bit > too much, and as I find the need to add similar configuration to JS client, > I'm wondering whether this should be reduced to 3 for all clients, including > Java, C* and JS clients. > >> > >> Any objections? > >> > >> Cheers, > >> -- > >> Galder Zamarreño > >> Infinispan, Red Hat > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon Jun 6 09:39:45 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 6 Jun 2016 15:39:45 +0200 Subject: [infinispan-dev] Infispector In-Reply-To: <5731E7ED.6060700@redhat.com> References: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> <5731E7ED.6060700@redhat.com> Message-ID: <10F9B177-4E27-4261-BFC2-5B7D8A1CE39F@redhat.com> Hey guys, Sorry for the delay. I somehow read the replies but forgot to reply :) I think the target of Infispector and Zipkin based tracing would be different, although there is some common ground. I think both Infispector and Zipkin would be helpful for diagnosing cluster performance issues and getting a general overview of how messages pass from one node to the other. In terms of difference, Infispector seems more geared towards education/qe whereas Zipkin is more targeted as something we can run at runtime in production for our users. Although both projects have different targets I think we'll be able to take advantage of each other's work, e.g. custom JGroups protocol points, byteman/bytebuddy rules for interception scenarios, etc. Great work leading all the Infispector work Tomas, looking forward to demos/videos :D Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 10 May 2016, at 15:53, Radim Vansa wrote: > > To complement this, MFT is a tool that won't offer any sleek charts or > visualisations. It's tricky to use and understand - it's intended for > developers as a tool for problem analysis. But it gets more in depth > than InfiSpector, linking the information from different nodes, JFR > events and so forth. > > R. > > On 05/10/2016 09:41 AM, Tomas Sykora wrote: >> Hello Galder, and all!
>> It's nice to communicate again via infinispan-dev after a while :) >> >> TL;DR: I can see some intersections with zipkin.io initiative goals but InfiSpector seems to be much more 'easier to handle and contribute to at this moment' -- that suits more our student-related use case. Let's continue with the discussion :) >> >> Firstly, a short introduction into the context. Red Hat is running a Research & Development laboratory here in Brno at the 2 biggest local universities: Masaryk University, Faculty of Informatics (FI MU) and Brno University of Technology, Faculty of Information Technologies (FIT VUT). >> The aim is to reach out to students better and sooner, get them involved in interesting projects, and show them open-source, git, git workflows and many other things (project specific). A year ago I got excited about this idea and started to think about whether I could deliver such a project. And I did. >> >> The team faces one big challenge, and that is a time constraint. Students are working on _several_ projects during their studies to fulfill courses' requirements to pass the semester. It's hard for them to find additional time for coding something else. The team managed that, but slowly; that's understandable though. Designing the InfiSpector infrastructure took us some time (Kafka, Druid, NodeJS) + evaluation of these technologies + proof of concepts. >> >> All 5 team members are 2nd year students of bachelor studies at FIT VUT Brno. >> Marek Ciz (https://github.com/mciz), also my very good friend from my home town :) His primary domain is Druid, Kafka and infrastructure. >> Vratislav Hais (https://github.com/vratislavhais) Primary domain is front-end. >> Jan Fitz (https://github.com/janfitz) Primary domain is front-end and graphic design (also designed our logo). >> Tomas Veskrna -- starting >> Patrik Cigas -- starting >> >> At this moment we are very close to getting real data to be monitored via the web UI.
It's a matter of 1-2 months considering there is an examination period happening now at the University. >> >> ******************* >> What is InfiSpector and what we want to achieve: >> >> * We missed graphical representation of Infinispan nodes communication so we want >> -- To be able to spot possible issues AT THE FIRST LOOK (e.g. wait, this should be coordinator, how is that possible he sends/receives only 10 % of all messages?) >> -- To demonstrate nicely what's happening inside of ISPN cluster for newcomers (to see how Infinispan nodes talk to each other to better understand concepts) >> -- To be using nice communication diagrams that describes flows like (130 messages from node1 to node5 -- click to see them in detail, filter out in more detail) >> * We aimed for NON-invasive solution >> -- No changes in Infinispan internal code >> -- Just add custom JGroups protocol, gather data and send them where you want [0] >> * Provide historical recording of JGroups communication >> * Help to analyze communication recording from big data point of view >> -- No need to manually go through gigabytes of text trace logs >> >> Simplified InfiSpector architecture: >> >> Infinispan Cluster (JGroups with our protocol) ---> Apache Kafka ---> Druid Database (using Kafka Firehose to inject Kafka Topic) <---> NodeJS server back-end <---> front-end (AngularJS) >> >> What is coming out from custom JGroup protocol is a short JSON document [1] with a timestamp, sending and receiving node, length of a message and the message itself. Other stuff can be added easily. >> >> We will be able to easily answer queries like: >> How many messages were sent from node1 to node3 during 'last' 60 seconds? >> What are these messages? >> How many of them were PutKeyValueCommands? >> Filter out Heart beats (or even ignore them completely), etc. >> >> We don't have any video recording yet but we are very close to that point. From UI perspective we will be using these 2 charts: [2], [3].
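As a rough illustration of how the sample queries above could map onto Druid's native JSON query language (the datasource name and interval are invented here; the dimension names follow the JSON document shown in [1]), counting messages from node1 to node3 in a 60-second window might look like:

```json
{
  "queryType": "timeseries",
  "dataSource": "infispector-messages",
  "granularity": "all",
  "filter": {
    "type": "and",
    "fields": [
      { "type": "selector", "dimension": "src", "value": "node1" },
      { "type": "selector", "dimension": "dest", "value": "node3" }
    ]
  },
  "aggregations": [ { "type": "count", "name": "messages" } ],
  "intervals": [ "2016-04-10T15:27:00/2016-04-10T15:28:00" ]
}
```

Filtering out heartbeats or narrowing to PutKeyValueCommands would just add further filter clauses on the "message" dimension.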
>> >> Talking about Infinispan 9 plans -- [4] was reported a month ago by you Galder and we are working on InfiSpector actively let's say 5 months -- it looks like I should have advertised InfiSpector more, sooner, but I was waiting for at least first working demo to start with blogging and videos :) It's good that you've noticed and that we are having this conversation right now. >> >> To be honest I find http://zipkin.io/ initiative to be quite similar. However, InfiSpector seems to be much more 'easier' and not targeting at performance analysis directly. Just adding one protocol at protocol stack and you are good to go. We were thinking about putting Kafka and Druid somewhere into the cloud (later) so users don't need to start all that big infrastructure at their local machines. >> >> I am very open to anything that will help us as a community to achieve our common goal -- to be able to graphically monitor Infinispan communication. >> Additionally I would be _personally_ looking for something that is easily achievable and is suitable for students to quickly learn new things and quickly make meaningful contributions. >> >> Thanks! >> Tomas >> >> [0] Achieved by custom JGroups protocol -- JGROUPS_TO_KAFKA protocol has been implemented. This can be added at the end of JGroups stack and every single message that goes through that is sent to Apache Kafka.
>> [1] >> { >> "direction": "receiving/up", >> "src": "tsykora-19569", >> "dest": "tsykora-27916", >> "length": 182, >> "timestamp": 1460302055376, >> "message": "SingleRpcCommand{cacheName='___defaultcache', command=PutKeyValueCommand{key=f6d52117-8a27-475e-86a7-002a54324615, value=tsykora-19569, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=60000, maxIdle=-1, version=null}, successful=true}}" >> } >> [2] http://bl.ocks.org/NPashaP/9796212 >> [3] http://bl.ocks.org/mbostock/1046712 >> [4] https://issues.jboss.org/browse/ISPN-6346 >> >> >> >> >> ----- Original Message ----- >>> From: "Galder Zamarre?o" >>> To: "infinispan -Dev List" , "Tomas Sykora" >>> Sent: Monday, May 9, 2016 5:06:06 PM >>> Subject: Infispector >>> >>> Hi all, >>> >>> I've just noticed [1], @Thomas, it appears this is your baby? Could you >>> explain in more detail what you are trying to achieve with this? Do you have >>> a video to show what exactly it does? >>> >>> Also, who's [2]? Curious to know who's working on this stuff :) >>> >>> The reason I'm interested in finding out a bit more about [1] is because we >>> have several efforts in the distributed monitoring/tracing area and want to >>> make sure we're not duplicating same effort. >>> >>> 1. Radim's Message Flow Tracer [3]: This is a project to tool for tracing >>> messages and control flow in JGroups/Infinispan using Byteman. >>> >>> 2. Zipkin effort [4]: The idea behind is to have a way to have Infinispan >>> cluster-wide tracing that uses Zipkin to capture and visualize where time is >>> spent within Infinispan. This includes both JGroups and other components >>> that could be time consuming, e.g. persistence. This will be main task for >>> Infinispan 9. This effort will use a lot of interception points Radim has >>> developed in [3] to tie together messages related to a request/tx around the >>> cluster. >>> >>> Does your effort fall within any of these spaces? 
>>> >>> Cheers, >>> >>> [1] https://github.com/infinispan/infispector >>> [2] https://github.com/mciz >>> [3] https://github.com/rvansa/message-flow-tracer >>> [4] https://issues.jboss.org/browse/ISPN-6346 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon Jun 6 10:09:39 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 6 Jun 2016 16:09:39 +0200 Subject: [infinispan-dev] Singleton Cache Stores with Shared Cache Stores In-Reply-To: References: <835648218.43963060.1464791044075.JavaMail.zimbra@redhat.com> Message-ID: The singleton store goes back to the JBC days and I don't remember a single use of it in the wild, so happy to get rid of it. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 1 Jun 2016, at 16:35, Sanne Grinovero wrote: > > On 1 June 2016 at 15:24, Ryan Emerson wrote: >> After further discussions on IRC, we have concluded the following: >> >> In shared mode only the primary owner of a key writes to the shared store, >> therefore there is no obvious use-case for having a singleton mode which >> delegates all writes to a single node. > > As far as I remember, the *intent* was to allow dealing with stores > which can't handle concurrent writes, i.e. needing a global lock. > > We had different CacheStore implementations back then, I guess some of > them might have had exotic limitations. > > I don't know which practical use case people had in mind though: it's > likely we already dropped any implementation which could need this > long ago, so no objections about getting rid of it. 
> > Thanks, > Sanne > >> >> With this in mind, I propose that the singleton option and associated >> writers be deprecated [1]. If anybody has any objections, please speak up. >> >> [1] https://issues.jboss.org/browse/ISPN-6748 >> >> Cheers >> Ryan >> >> ----- Original Message ----- >> From: "William Burns" >> To: "infinispan -Dev List" >> Cc: dan at infinispan.org, remerson at redhat.com >> Sent: Wednesday, 1 June, 2016 2:54:13 PM >> Subject: Singleton Cache Stores with Shared Cache Stores >> >> Recently there was a start of a discussion regarding singleton cache stores >> and how they behave. Interestingly according to our documentation [1] and >> verification code [2] a singleton store cannot be used with a shared cache >> store. This makes no sense to me as this means you would have a single >> point of failure for your data. And also as Dan pointed out [3] there is >> no Singleton cache loader to make sure all the loads are from the >> coordinator either, which means you could have a read that returns null >> despite it being in the store/loader. >> >> And even looking at [4] it talks about singleton being used so not every >> node writes to the underlying store, which implies it being shared. >> >> I think we have enough proof to update this so a singleton store requires a >> shared store, but I wanted to make sure we weren't missing something here. 
>> >> Thanks, >> >> - Will >> >> [1] >> http://infinispan.org/docs/9.0.x/user_guide/user_guide.html#_configuration_2 >> [2] >> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/PersistenceConfigurationBuilder.java#L108 >> [3] https://github.com/infinispan/infinispan/pull/4382#discussion_r65360312 >> [4] >> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/support/SingletonCacheWriter.java#L40 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue Jun 7 05:05:15 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 7 Jun 2016 10:05:15 +0100 Subject: [infinispan-dev] Singleton Cache Stores with Shared Cache Stores In-Reply-To: References: <835648218.43963060.1464791044075.JavaMail.zimbra@redhat.com> Message-ID: <57568E4B.2040900@infinispan.org> +1 to remove this. Tristan On 06/06/2016 15:09, Galder Zamarre?o wrote: > The singleton store goes back to the JBC days and I don't remember a single use of it in the wild, so happy to get rid of it. > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 1 Jun 2016, at 16:35, Sanne Grinovero wrote: >> >> On 1 June 2016 at 15:24, Ryan Emerson wrote: >>> After further discussions on IRC, we have concluded the following: >>> >>> In shared mode only the primary owner of a key writes to the shared store, >>> therefore there is no obvious use-case for having a singleton mode which >>> delegates all writes to a single node. >> As far as I remember, the *intent* was to allow dealing with stores >> which can't handle concurrent writes, i.e. needing a global lock. 
>> >> We had different CacheStore implementations back then, I guess some of >> them might have had exotic limitations. >> >> I don't know which practical use case people had in mind though: it's >> likely we already dropped any implementation which could need this >> long ago, so no objections about getting rid of it. >> >> Thanks, >> Sanne >> >>> With this in mind, I propose that the singleton option and associated >>> writers be deprecated [1]. If anybody has any objections, please speak up. >>> >>> [1] https://issues.jboss.org/browse/ISPN-6748 >>> >>> Cheers >>> Ryan >>> >>> ----- Original Message ----- >>> From: "William Burns" >>> To: "infinispan -Dev List" >>> Cc: dan at infinispan.org, remerson at redhat.com >>> Sent: Wednesday, 1 June, 2016 2:54:13 PM >>> Subject: Singleton Cache Stores with Shared Cache Stores >>> >>> Recently there was a start of a discussion regarding singleton cache stores >>> and how they behave. Interestingly according to our documentation [1] and >>> verification code [2] a singleton store cannot be used with a shared cache >>> store. This makes no sense to me as this means you would have a single >>> point of failure for your data. And also as Dan pointed out [3] there is >>> no Singleton cache loader to make sure all the loads are from the >>> coordinator either, which means you could have a read that returns null >>> despite it being in the store/loader. >>> >>> And even looking at [4] it talks about singleton being used so not every >>> node writes to the underlying store, which implies it being shared. >>> >>> I think we have enough proof to update this so a singleton store requires a >>> shared store, but I wanted to make sure we weren't missing something here. 
>>> >>> Thanks, >>> >>> - Will >>> >>> [1] >>> http://infinispan.org/docs/9.0.x/user_guide/user_guide.html#_configuration_2 >>> [2] >>> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/configuration/cache/PersistenceConfigurationBuilder.java#L108 >>> [3] https://github.com/infinispan/infinispan/pull/4382#discussion_r65360312 >>> [4] >>> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/support/SingletonCacheWriter.java#L40 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From emmanuel at hibernate.org Tue Jun 7 09:55:04 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 7 Jun 2016 15:55:04 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <5b217358-b264-49cd-5705-fb24ad0d402a@redhat.com> References: <574BEFE7.4000407@redhat.com> <5b217358-b264-49cd-5705-fb24ad0d402a@redhat.com> Message-ID: <20160607135504.GG43559@hibernate.org> On Wed 2016-06-01 9:31, Scott Marlow wrote: > > The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't > > currently support this, so this is forward-looking). > > Obviously we will support all of the HotRod properties for specifying > > things like security, etc. > > Once you are connected to a remote (Infinispan) database, does the > application simply use the java.util.Map api to put/get any application > get values? Or are puts not allowed to use application classes? 
I'm > trying to better understand how the marshaling works, since the remote > Infinispan database probably wouldn't have access to the application > classloader (unless it does, which I'd like to also understand). Scott, the application receives Infinispan's CacheManager and/or Cache APIs, just as in the Mongo case one receives the Mongo-specific objects. As for the objects you can put in the cache: the ideal situation is that you use a protobuf schema, and the client side will marshall things as protobuf and send these protobuf structures to the server. The server then does not need to have the client classes in its classpath. From slaskawi at redhat.com Wed Jun 8 02:04:30 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 8 Jun 2016 08:04:30 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574BEFE7.4000407@redhat.com> References: <574BEFE7.4000407@redhat.com> Message-ID: Hi Tristan! Multi tenancy is more an endpoint thing. If you look into the Configuration part of the design [1] you might notice that I'm actually routing between "hotrod-connector"s (which means between ProtocolServer instances). So to be consistent I believe the [/cachemanager] part should be mapped to the name used in hotrod-connector. Here is an example to make it more clear: ... ... With the above configuration we will need the following URIs: - infinispan:hotrod://[host1][:port1][,[host2][:port2]].../*hotrod1* - infinispan:hotrod://[host1][:port1][,[host2][:port2]].../*hotrod2* Thanks Sebastian [1] https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#configuration On Mon, May 30, 2016 at 9:46 AM, Tristan Tarrant wrote: > In the past there has been talk of representing a connection to > Infinispan using a URL, in particular for HotRod. > The Hibernate OGM team is now working on adding NoSQL datasources to > WildFly, and they've asked how they should represent connections to > various of these.
> > For Hot Rod: > > infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] > > The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't > currently support this, so this is forward-looking). > Obviously we will support all of the HotRod properties for specifying > things like security, etc. > > For Embedded: > > infinispan:embedded:file://path/to/config.xml (for specifying an > external config file) > infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager > in JNDI) > infinispan:embedded: (configuration specified as properties) > > For the latter, we also need to be able to represent an infinispan > configuration using properties with a simple mapping to XML > elements/attributes, e.g. > > cache-manager.local-cache.mycache.eviction.size=1000 > > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From slaskawi at redhat.com Wed Jun 8 03:22:37 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 8 Jun 2016 09:22:37 +0200 Subject: [infinispan-dev] Infispector In-Reply-To: <10F9B177-4E27-4261-BFC2-5B7D8A1CE39F@redhat.com> References: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> <5731E7ED.6060700@redhat.com> <10F9B177-4E27-4261-BFC2-5B7D8A1CE39F@redhat.com> Message-ID: +1000 looks very interesting! I think we should have a nice visualization app for our conference presentations... We might consider refreshing Visual [5], but personally I would love to see something new.
A NodeJS client might be also a nice way to go but tracing how the data flows through the cluster... this would be something nice... Thanks Sebastian [5] https://github.com/infinispan/visual On Mon, Jun 6, 2016 at 3:39 PM, Galder Zamarre?o wrote: > Hey guys, > > Sorry for the delay. I somehow read the replies but forgot to reply :) > > I think the target of Infispector and Zipkin based tracing would be > different, although there is some common ground. I think both Infispector > and Zipkin would be helpful in helping diagnose cluster performance issues > and get a general overview of how messages pass from one node to the other. > In terms of difference Infispector seems more geared towards education/qe > whereas Zipkin is more targeted as something we can run at runtime in > production for our users. > > Although both projects have different targets I think we'll be able to > take advantage of each others work, e.g. custom JGroups protocols points, > byteman/bytebuddy rules for intercept scenarions...etc. > > Great work leading all the Infispector work Tomas, looking forward to > demos/videos :D > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 10 May 2016, at 15:53, Radim Vansa wrote: > > > > To complement this, MFT is a tool that won't offer any sleek charts or > > visualisations. It's tricky to use and understand - it's intended for > > developers as a tool for problem analysis. But it gets more in depth > > than InfiSpector, linking the information from different nodes, JFR > > events and so forth. > > > > R. > > > > On 05/10/2016 09:41 AM, Tomas Sykora wrote: > >> Hello Galder, and all! > >> It?s nice to communicate again via infinispan-dev after a while :) > >> > >> TL;DR: I can see some intersections with zipkin.io initiative goals > but InfiSpector seems to be much more ?easier to handle and contribute to > at this moment? -- that suits more our student-related use case. 
Let?s > continue with the discussion :) > >> > >> Firstly, a short introduction into the context. Red Hat is running > Research & Development laboratory here in Brno at 2 biggest local > universities: Masaryk University, Faculty of Informatics (FI MU) and Brno > University of Technology, Faculty of Information Technologies (FIT VUT). > >> The aim is to better and sooner reach out to students, get them > involved into interesting project, show them open-source, git, git > workflows and many other things (project specific). An year ago I got > excited about this idea and started to think whether I can deliver such a > project. And I did. > >> > >> Team faces one big challenge and this is a time constraint. Students > are working on _several_ projects during their studies to fulfill courses? > requirements to pass the semester. It?s hard for them to find additional > time to be coding even something else. Team managed that but slowly, that?s > understandable though. Designing InfiSpector infrastructure took us some > time (Kafka, Druid, NodeJS) + evaluation of these technologies + proof of > concepts. > >> > >> All 5 team members are 2nd year students of bachelor studies at FIT VUT > Brno. > >> Marek Ciz (https://github.com/mciz), also my very good friend from my > home town :) His primary domain is Druid, Kafka and infrastructure. > >> Vratislav Hais (https://github.com/vratislavhais) Primary domain is > front-end. > >> Jan Fitz (https://github.com/janfitz) Primary domain is front-end and > graphic design (also designed our logo). > >> Tomas Veskrna -- starting > >> Patrik Cigas -- starting > >> > >> At this moment we are very close to getting real data to be monitored > via web UI. It?s a matter of 1-2 months considering there is an examination > period happening now at the University. 
> >> > >> ******************* > >> What is InfiSpector and what we want to achieve: > >> > >> * We missed graphical representation of Infinispan nodes communication > so we want > >> -- To be able to spot possible issues AT THE FIRST LOOK (e.g. wait, > this should be coordinator, how is that possible he sends/receives only 10 > % of all messages?) > >> -- To demonstrate nicely what?s happening inside of ISPN cluster for > newcomers (to see how Infinispan nodes talk to each other to better > understand concepts) > >> -- To be using nice communication diagrams that describes flows like > (130 messages from node1 to node5 -- click to see them in detail, filter > out in more detail) > >> * We aimed for NON-invasive solution > >> -- No changes in Infinispan internal code > >> -- Just add custom JGroups protocol, gather data and send them where > you want [0] > >> * Provide historical recording of JGroups communication > >> * Help to analyze communication recording from big data point of view > >> -- No need to manually go through gigabytes of text trace logs > >> > >> Simplified InfiSpector architecture: > >> > >> Infinispan Cluster (JGroups with our protocol) ---> Apache Kafka ---> > Druid Database (using Kafka Firehose to inject Kafka Topic) <---> NodeJS > server back-end <---> front-end (AngularJS) > >> > >> What is coming out from custom JGroup protocol is a short JSON document > [1] with a timestamp, sending and receiving node, length of a message and > the message itself. Other stuff can be added easily. > >> > >> We will be able to easily answer queries like: > >> How many messages were sent from node1 to node3 during ?last? 60 > seconds? > >> What are these messages? > >> How many of them were PutKeyValueCommands? > >> Filter out Heart beats (or even ignore them completely), etc. > >> > >> We don?t have any video recording yet but we are very close to that > point. From UI perspective we will be using these 2 charts: [2], [3]. 
> >> > >> > >> Talking about Infinispan 9 plans -- [4] was reported a month ago by you > Galder and we are working on InfiSpector actively let?s say 5 months -- it > looks like I should have advertised InfiSpector more, sooner, but I was > waiting for at least first working demo to start with blogging and videos > :) It?s good that you?ve noticed and that we are having this conversation > right now. > >> > >> To be honest I find http://zipkin.io/ initiative to be quite similar. > However, InfiSpector seems to be much more ?easier? and not targeting at > performance analysis directly. Just adding one protocol at protocol stack > and you are good to go. We were thinking about putting Kafka and Druid > somewhere into the cloud (later) so users don?t need to start all that big > infrastructure at their local machines. > >> > >> I am very open to anything that will help us as a community to achieve > our common goal -- to be able to graphically monitor Infinispan > communication. > >> Additionally I would be _personally_ looking for something that is > easily achievable and is suitable for students to quickly learn new things > and quickly make meaningful contributions. > >> > >> Thanks! > >> Tomas > >> > >> [0] Achieved by custom JGroups protocol -- JGROUPS_TO_KAFKA protocol > has been implemented. This can be added at the end of JGroups stack and > every single message that goes through that is sent to Apache Kafka. 
> >> [1] > >> { > >> "direction": "receiving/up", > >> "src": "tsykora-19569", > >> "dest": "tsykora-27916", > >> "length": 182, > >> "timestamp": 1460302055376, > >> "message": "SingleRpcCommand{cacheName='___defaultcache', > command=PutKeyValueCommand{key=f6d52117-8a27-475e-86a7-002a54324615, > value=tsykora-19569, flags=null, putIfAbsent=false, > valueMatcher=MATCH_ALWAYS, > metadata=EmbeddedExpirableMetadata{lifespan=60000, maxIdle=-1, > version=null}, successful=true}}" > >> } > >> [2] http://bl.ocks.org/NPashaP/9796212 > >> [3] http://bl.ocks.org/mbostock/1046712 > >> [4] https://issues.jboss.org/browse/ISPN-6346 > >> > >> > >> > >> > >> ----- Original Message ----- > >>> From: "Galder Zamarre?o" > >>> To: "infinispan -Dev List" , "Tomas > Sykora" > >>> Sent: Monday, May 9, 2016 5:06:06 PM > >>> Subject: Infispector > >>> > >>> Hi all, > >>> > >>> I've just noticed [1], @Thomas, it appears this is your baby? Could you > >>> explain in more detail what you are trying to achieve with this? Do > you have > >>> a video to show what exactly it does? > >>> > >>> Also, who's [2]? Curious to know who's working on this stuff :) > >>> > >>> The reason I'm interested in finding out a bit more about [1] is > because we > >>> have several efforts in the distributed monitoring/tracing area and > want to > >>> make sure we're not duplicating same effort. > >>> > >>> 1. Radim's Message Flow Tracer [3]: This is a project to tool for > tracing > >>> messages and control flow in JGroups/Infinispan using Byteman. > >>> > >>> 2. Zipkin effort [4]: The idea behind is to have a way to have > Infinispan > >>> cluster-wide tracing that uses Zipkin to capture and visualize where > time is > >>> spent within Infinispan. This includes both JGroups and other > components > >>> that could be time consuming, e.g. persistence. This will be main task > for > >>> Infinispan 9. 
This effort will use a lot of interception points Radim > has > >>> developed in [3] to tie together messages related to a request/tx > around the > >>> cluster. > >>> > >>> Does your effort fall within any of these spaces? > >>> > >>> Cheers, > >>> > >>> [1] https://github.com/infinispan/infispector > >>> [2] https://github.com/mciz > >>> [3] https://github.com/rvansa/message-flow-tracer > >>> [4] https://issues.jboss.org/browse/ISPN-6346 > >>> -- > >>> Galder Zamarre?o > >>> Infinispan, Red Hat > >>> > >>> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > Radim Vansa > > JBoss Performance Team > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160608/902a9aee/attachment.html From rvansa at redhat.com Wed Jun 8 08:23:48 2016 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 8 Jun 2016 14:23:48 +0200 Subject: [infinispan-dev] Sequential interceptors API Message-ID: <57580E54.10703@redhat.com> Hi, I would like to encourage you to play with the (relatively) new API for sequential interceptors, and voice your comments - especially you corish devs, and Galder who has much experience with async invocations and that kind of stuff from JS-world. 
I am now trying to use the asynchronous methods only (the forkInvocationSync() is only temporary helper!); Dan has made it this way as he wanted to avoid unnecessary allocations, and I welcome this GC-awareness, but regrettably I find it rather hard to use, due to its handler-style nature. For the simplest style interceptors (do this, invoke next interceptor, and process result) it's fine, but when you want to do something like: visitFoo(cmd) { Object x = null; if (/* ... */) { x = invoke(new OtherCommand()); } invoke(new DifferentCommand(x)); Object retval = invoke(cmd); return wrap(retval); } I find myself passing handlers deep down. There is allocation cost for closures, so API that does not allocate CompletableFutures does not pay off. I don't say that I could improve it (I have directed my comments to Dan on IRC when I had something in particular), I just say that this is very important API for further Infinispan development and everyone should pay attention before it gets final. So please, play with it, and show your opinion. Radim -- Radim Vansa JBoss Performance Team From galder at redhat.com Mon Jun 13 06:50:48 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 13 Jun 2016 12:50:48 +0200 Subject: [infinispan-dev] Changing default Hot Rod client max retries In-Reply-To: References: <052C93EA-EEC4-43A2-9A7C-3DEA44E3DFD3@redhat.com> Message-ID: Thanks all for the feedback. Very good points made, so I've created this: ISPN-6774 and HRJS-22 so that we implement that for Java/JS clients. @Vittorio, how does C++/C# client deal with this? Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 2 Jun 2016, at 19:38, William Burns wrote: > > > > On Wed, Jun 1, 2016 at 8:35 AM Dan Berindei wrote: > I'd also like to see an option for the total time to wait, instead of > having to worry about two (or more) different settings. > > Only 1 config sounds good to me. 
I admit I am more used to total time to wait rather than retry, using long and TimeUnit. > > > True, if there's a bug that causes the request to fail immediately and > the client retries without pause for 1 minute, it can generate a lot > of unnecessary load. So perhaps we should only retry if we "know" the > error can be fixed by retrying, e.g. on connection close or on > IllegalLifecycleStateExceptions. > > +1, retrying on specific exceptions sounds like a good idea to me > > > Cheers > Dan > > > On Wed, Jun 1, 2016 at 12:34 PM, Sanne Grinovero wrote: > > No objection, just not sure about the usefulness. I think what matters > > for people is how long is it going to wait before it fails. > > > > If it's a long time (i.e. 10 minutes) then you'd probably want it to try > > faster than waiting 5 minutes for the second try ... exponential > > backoff sounds nicer than trying to find a reasonable balance in the > > connection retries. > > > > Another benefit of an exponential backoff strategy is that you could > > allow the users to set an option to wait essentially forever (until > > interrupted: nicer to allow this control to higher up stacks), which > > could be useful for cloud deployments, microservices, etc.. > > > > > > > > On 1 June 2016 at 09:26, Galder Zamarreño wrote: > >> Hi all, > >> > >> Java Hot Rod client has 10 max retries as default. This sounds a bit too much, and as I find the need to add similar configuration to JS client, I'm wondering whether this should be reduced to 3 for all clients, including Java, C* and JS clients. > >> > >> Any objections? 
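The single-time-budget plus exponential-backoff strategy discussed in this thread could look roughly like the sketch below. This is a hypothetical illustration, not the actual Hot Rod client API: the class and method names (`RetryBudget`, `backoffMillis`, `shouldRetry`) are invented, and "retry only on errors known to be fixable by retrying" is approximated here by checking for a connection-level `IOException`.

```java
import java.io.IOException;

// Hypothetical sketch of the retry policy discussed above: one total time
// budget instead of a max-retries count, exponential backoff between
// attempts, and retries only for errors that look transient.
class RetryBudget {
    private final long budgetMillis;   // total time we are willing to spend
    private final long baseMillis;     // first backoff delay
    private final long capMillis;      // upper bound for a single delay

    RetryBudget(long budgetMillis, long baseMillis, long capMillis) {
        this.budgetMillis = budgetMillis;
        this.baseMillis = baseMillis;
        this.capMillis = capMillis;
    }

    // Exponential backoff: base * 2^attempt, capped at capMillis.
    long backoffMillis(int attempt) {
        long delay = baseMillis << Math.min(attempt, 30); // min() avoids shift overflow
        return Math.min(delay, capMillis);
    }

    // Retry only while the budget is not exhausted AND the failure is a
    // connection-level error (stand-in for "connection close or
    // IllegalLifecycleStateException" from the thread).
    boolean shouldRetry(Throwable error, long elapsedMillis) {
        return elapsedMillis < budgetMillis && error instanceof IOException;
    }
}
```

With a 60-second budget and a 100 ms base, the delays would run 100 ms, 200 ms, 400 ms, and so on up to the cap, matching Sanne's point that early retries should be fast while a long overall wait is still allowed.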
> >> > >> Cheers, > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Fri Jun 17 04:52:52 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 17 Jun 2016 10:52:52 +0200 Subject: [infinispan-dev] Sequential interceptors API In-Reply-To: <57580E54.10703@redhat.com> References: <57580E54.10703@redhat.com> Message-ID: <8BD560BF-0F3E-4CC8-843C-EB54F5410877@redhat.com> Radim, do you have a branch where you have been trying these things out? I'd like to play with what you're trying to do. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 8 Jun 2016, at 14:23, Radim Vansa wrote: > > Hi, > > I would like to encourage you to play with the (relatively) new API for > sequential interceptors, and voice your comments - especially you corish > devs, and Galder who has much experience with async invocations and that > kind of stuff from JS-world. > > I am now trying to use the asynchronous methods only (the > forkInvocationSync() is only temporary helper!); Dan has made it this > way as he wanted to avoid unnecessary allocations, and I welcome this > GC-awareness, but regrettably I find it rather hard to use, due to its > handler-style nature. 
For the simplest style interceptors (do this, > invoke next interceptor, and process result) it's fine, but when you > want to do something like: > > visitFoo(cmd) { > Object x = null; > if (/* ... */) { > x = invoke(new OtherCommand()); > } > invoke(new DifferentCommand(x)); > Object retval = invoke(cmd); > return wrap(retval); > } > > I find myself passing handlers deep down. There is allocation cost for > closures, so API that does not allocate CompletableFutures does not pay off. > > I don't say that I could improve it (I have directed my comments to Dan > on IRC when I had something in particular), I just say that this is very > important API for further Infinispan development and everyone should pay > attention before it gets final. > > So please, play with it, and show your opinion. > > Radim > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue Jun 21 03:49:11 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 21 Jun 2016 09:49:11 +0200 Subject: [infinispan-dev] Compatibility 2.0 dump Message-ID: <5768F177.9060307@redhat.com> Hi all, I've created a wiki [1] for the "compatibility 2.0" ideas we talked about recently at the query meeting. This is mostly a dump of the minutes, so the form is not complete, but initial comments are welcome. Tristan [1] https://github.com/infinispan/infinispan/wiki/Compatibility-2.0 -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Tue Jun 21 10:02:28 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 21 Jun 2016 16:02:28 +0200 Subject: [infinispan-dev] Infinispan team meeting minutes 20th June Message-ID: An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160621/1a6144eb/attachment.html From rvansa at redhat.com Wed Jun 22 11:44:21 2016 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 22 Jun 2016 17:44:21 +0200 Subject: [infinispan-dev] Compatibility 2.0 dump In-Reply-To: <5768F177.9060307@redhat.com> References: <5768F177.9060307@redhat.com> Message-ID: <576AB255.7060202@redhat.com> I've spotted things like 'decorating cache' in the wiki page. I thought that the core architecture in Infinispan, modifying the behavior according to configurations, is the interceptor stack. While we have some doubts about its performance, and there are limitations - e.g. the Flags don't allow to add custom parameters and we certainly don't want to add Flag.JSON and Flag.XML - I would consider decorating a Cache vs. adding interceptors. I am thinking of adding the transcoder information to invocation context and only pass different ICF to the CacheImpl. Though, this requires new factory, new interceptor and a handful of specialized context classes (or a wrapper to the existing ones). Whoo, just decorating Cache sounds much simpler (and probably more performant). Or should we have forks in interceptor stack? (as an alternative to different wrappers). The idea of interceptors is that these are common for all operations; if we want to do things differently for different endpoints (incl. embedded), decorating probably is the answer. My 2c (or rather just random thoughts and whining) Radim On 06/21/2016 09:49 AM, Tristan Tarrant wrote: > Hi all, > > I've created a wiki [1] for the "compatibility 2.0" ideas we talked > about recently at the query meeting. > > This is mostly a dump of the minutes, so the form is not complete, but > initial comments are welcome. 
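The "decorating a Cache" alternative Radim weighs above could be as small as the following sketch. Everything here is hypothetical: a toy `BasicCache` interface stands in for Infinispan's much larger `Cache` API, and the "transcoding" is just a UTF-8 string/byte conversion for illustration. The point is only that a per-endpoint wrapper can change read/write behavior without adding an interceptor or a `Flag`.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the Cache API; the real interface exposes far more.
interface BasicCache<K, V> {
    V get(K key);
    void put(K key, V value);
}

// A trivial in-memory implementation to decorate.
class MapCache<K, V> implements BasicCache<K, V> {
    private final Map<K, V> data = new HashMap<>();
    public V get(K key) { return data.get(key); }
    public void put(K key, V value) { data.put(key, value); }
}

// Decorator: transcodes values on the way in and out, so an endpoint that
// wants a different representation gets it without touching the
// interceptor stack of the underlying cache.
class TranscodingCache<K> implements BasicCache<K, String> {
    private final BasicCache<K, byte[]> delegate;

    TranscodingCache(BasicCache<K, byte[]> delegate) {
        this.delegate = delegate;
    }

    public String get(K key) {
        byte[] stored = delegate.get(key);
        return stored == null ? null : new String(stored, StandardCharsets.UTF_8);
    }

    public void put(K key, String value) {
        delegate.put(key, value.getBytes(StandardCharsets.UTF_8));
    }
}
```

Several such views can wrap the same underlying store, one per endpoint, which is what makes the decorator route simpler than forking the interceptor chain.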
> > > Tristan > > [1] https://github.com/infinispan/infinispan/wiki/Compatibility-2.0 -- Radim Vansa JBoss Performance Team From rvansa at redhat.com Wed Jun 22 11:52:08 2016 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 22 Jun 2016 17:52:08 +0200 Subject: [infinispan-dev] Sequential interceptors API In-Reply-To: <8BD560BF-0F3E-4CC8-843C-EB54F5410877@redhat.com> References: <57580E54.10703@redhat.com> <8BD560BF-0F3E-4CC8-843C-EB54F5410877@redhat.com> Message-ID: <576AB428.10203@redhat.com> Yes [1]. The longest chaining of operations I had in [2], basically during ST I have to load a value locally*, perform a unicast/broadcast to read different value and then execute the original one. * I shouldn't load it just from DC, as it could be in cache store, too; though without persistence (which I don't deal with properly in scattered cache yet) it would be more efficient to do the DC lookup directly. Radim [1] https://github.com/rvansa/infinispan/tree/ISPN-6645 [2] https://github.com/rvansa/infinispan/blob/ISPN-6645/core/src/main/java/org/infinispan/interceptors/impl/PrefetchInvalidationInterceptor.java On 06/17/2016 10:52 AM, Galder Zamarre?o wrote: > Radim, do you have a branch where you have been trying these things out? I'd like to play with what you're trying to do. > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 8 Jun 2016, at 14:23, Radim Vansa wrote: >> >> Hi, >> >> I would like to encourage you to play with the (relatively) new API for >> sequential interceptors, and voice your comments - especially you corish >> devs, and Galder who has much experience with async invocations and that >> kind of stuff from JS-world. >> >> I am now trying to use the asynchronous methods only (the >> forkInvocationSync() is only temporary helper!); Dan has made it this >> way as he wanted to avoid unnecessary allocations, and I welcome this >> GC-awareness, but regrettably I find it rather hard to use, due to its >> handler-style nature. 
For the simplest style interceptors (do this, >> invoke next interceptor, and process result) it's fine, but when you >> want to do something like: >> >> visitFoo(cmd) { >> Object x = null; >> if (/* ... */) { >> x = invoke(new OtherCommand()); >> } >> invoke(new DifferentCommand(x)); >> Object retval = invoke(cmd); >> return wrap(retval); >> } >> >> I find myself passing handlers deep down. There is allocation cost for >> closures, so API that does not allocate CompletableFutures does not pay off. >> >> I don't say that I could improve it (I have directed my comments to Dan >> on IRC when I had something in particular), I just say that this is very >> important API for further Infinispan development and everyone should pay >> attention before it gets final. >> >> So please, play with it, and show your opinion. >> >> Radim >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Fri Jun 24 07:40:23 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 24 Jun 2016 13:40:23 +0200 Subject: [infinispan-dev] Propagating changes to the configuration in standalone mode Message-ID: Hey! About a week ago I had a conversation with Brian about propagating configuration changes in standalone mode. This topic might be very interesting when considering deploying Infinispan with Docker images in the context of our administration/management console. If anyone is interested in this topic, please join the discussion here: https://issues.jboss.org/browse/WFCORE-1612 Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160624/ed43672c/attachment.html From rory.odonnell at oracle.com Mon Jun 27 05:12:54 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 27 Jun 2016 10:12:54 +0100 Subject: [infinispan-dev] Early Access builds of JDK 8u112 b01, JDK 9 b124 are available on java.net Message-ID: <7c47c275-37c3-1ba2-597d-f1e9e3c69e54@oracle.com> Hi Galder, Early Access b124 for JDK 9 is available on java.net, summary of changes are listed here . Early Access b123 (#5178) for JDK 9 with Project Jigsaw is available on java.net, summary of changes are listed here Early Access b01 for JDK 8u112 is available on java.net. Update to JEP 261 : Module System - email from Mark Reinhold [1] - The special ALL-DEFAULT module name, which represents the default set of root modules for use with the -addmods option [2]; - A more thorough explanation of how the built-in class loaders have changed, how built-in modules are assigned to each loader, and how these loaders work together to load classes [3]; and - The reason why Java EE-related modules are no longer resolved by default [4]. - There are various other minor corrections and clarifications, as can be seen in the detailed diff [5]. Rgds,Rory [1]http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-June/008227.html [2]http://openjdk.java.net/jeps/261#ALL-DEFAULT [3]http://openjdk.java.net/jeps/261#Class-loaders [4]http://openjdk.java.net/jeps/261#EE-modules [5]http://cr.openjdk.java.net/~mr/jigsaw/jeps/updates/261-2016-06-15.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin,Ireland -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160627/e4067d69/attachment-0001.html From slaskawi at redhat.com Wed Jun 29 02:55:51 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 29 Jun 2016 08:55:51 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey! The multi-tenancy support for Hot Rod and REST has been implemented [2]. Since the PR is gigantic, I marked some interesting places for review so you might want to skip boilerplate parts. The Memcached and WebSockets implementations are currently out of scope. If you would like us to implement them, please vote on the following tickets: - Memcached https://issues.jboss.org/browse/ISPN-6639 - Web Sockets https://issues.jboss.org/browse/ISPN-6638 Thanks Sebastian [2] https://github.com/infinispan/infinispan/pull/4348 On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec wrote: > Hey Galder! > > Comments inlined. > > Thanks > Sebastian > > On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o > wrote: > >> Hi all, >> >> Sorry for the delay getting back on this. >> >> The addition of a new component does not worry me so much. It has the >> advantage of implementing it once independent of the backend endpoint, >> whether HR or Rest. >> >> What I'm struggling to understand is what protocol the clients will use >> to talk to the router. It seems wasteful having to build two protocols at >> this level, e.g. one at TCP level and one at REST level. If you're going to >> end up building two protocols, the benefit of the router component >> dissapears and then you might as well embedded the two routing protocols >> within REST and HR directly. >> > > I think I wasn't clear enough in the design how the routing works... > > In your scenario - both servers (hotrod and rest) will start > EmbeddedCacheManagers internally but none of them will start Netty > transport. 
The only transport that will be turned on is the router. The > router will be responsible for recognizing the request type (if HTTP - find > proper REST server, if HotRod protocol - find proper HotRod) and attaching > handlers at the end of the pipeline. > > Regarding to custom protocol (this usecase could be used with Hotrod > clients which do not use SSL (so SNI routing is not possible)), you and > Tristan got me thinking whether we really need it. Maybe we should require > SSL+SNI when using HotRod protocol with no exceptions? The thing that > bothers me is that SSL makes the whole setup twice slower: > https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754 > > >> >> In other words, for the router component to make sense, I think it should: >> >> 1. Clients, no matter whether HR or REST, to use 1 single protocol to the >> router. The natural thing here would be HTTP/2 or similar protocol. >> > > Yes, that's the goal. > > >> 2. The router then talks HR or REST to the backend. Here the router uses >> TCP or HTTP protocol based on the backend needs. >> > > It's even simpler - it just uses the backend's Netty Handlers. > > Since the SNI implementation is ready, please have a look: > https://github.com/infinispan/infinispan/pull/4348 > > >> >> ^ The above implies that HR client has to talk TCP when using HR server >> directly or HTTP/2 when using it via router, but I don't think this is too >> bad and it gives us some experience working with HTTP/2 besides the work >> Anton is carrying out as part of GSoC. > > >> Cheers, >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> > On 11 May 2016, at 10:38, Sebastian Laskawiec >> wrote: >> > >> > Hey Tristan! >> > >> > If I understood you correctly, you're suggesting to enhance the >> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >> shared transport and by that I mean started on the same Netty server). 
>> > >> > Yes, that also could work but I'm not convinced if we won't loose some >> configuration flexibility. >> > >> > Let's consider a configuration file - >> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how >> for example use authentication for CacheContainer cc1 (and not for cc2) and >> encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I >> think using this kind of different options makes sense in terms of multi >> tenancy. And please note that if we start a new Netty server for each >> CacheContainer - we almost ended up with the router I proposed. >> > >> > The second argument for using a router is extracting the routing logic >> into a separate module. Otherwise we would probably end up with several >> if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting >> this has also additional advantage that we limit changes in those modules >> (actually there will be probably 2 changes #1 we should be able to start a >> ProtocolServer without starting a Netty server (the Router will do it in >> multi tenant configuration) and #2 collect Netty handlers from >> ProtocolServer). >> > >> > To sum it up - the router's implementation seems to be more complicated >> but in the long run I think it might be worth it. >> > >> > I also wrote the summary of the above here: >> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach >> > >> > @Galder - you wrote a huge part of the Hot Rod server - I would love to >> hear your opinion as well. >> > >> > Thanks >> > Sebastian >> > >> > >> > >> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant >> wrote: >> > Not sure I like the introduction of another component at the front. >> > >> > My original idea for allowing the client to choose the container was: >> > >> > - with TLS: use SNI to choose the container >> > - without TLS: enhance the PING operation of the Hot Rod protocol to >> > also take the server name. 
This would need to be a requirement when >> > exposing multiple containers over the same endpoint. >> > >> > From a client API perspective, there would be no difference between the >> > above two approaches: just specify the server name and depending on the >> > transport, select the right one. >> > >> > Tristan >> > >> > On 29/04/2016 17:29, Sebastian Laskawiec wrote: >> > > Dear Community, >> > > >> > > Please have a look at the design of Multi tenancy support for >> Infinispan >> > > [1]. I would be more than happy to get some feedback from you. >> > > >> > > Highlights: >> > > >> > > * The implementation will be based on a Router (which will be built >> > > based on Netty) >> > > * Multiple Hot Rod and REST servers will be attached to the router >> > > which in turn will be attached to the endpoint >> > > * The router will operate on a binary protocol when using Hot Rod >> > > clients and path-based routing when using REST >> > > * Memcached will be out of scope >> > > * The router will support SSL+SNI >> > > >> > > Thanks >> > > Sebastian >> > > >> > > [1] >> > > >> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server >> > > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > >> > >> > -- >> > Tristan Tarrant >> > Infinispan Lead >> > JBoss, a division of Red Hat >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> 
https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160629/6837b73e/attachment.html
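The router's "recognize the request type" step described in this multi-tenancy thread can be reduced to peeking at the first bytes of a connection: Hot Rod requests open with the magic byte 0xA0, while HTTP/REST requests open with an ASCII method token. The sketch below is a dependency-free illustration of that dispatch decision only; the real implementation is a Netty handler pipeline (see PR 4348), and the class and method names here are invented.

```java
import java.nio.charset.StandardCharsets;

// Minimal sketch of protocol detection for a single-port router.
// Hot Rod request frames begin with the magic byte 0xA0; HTTP requests
// begin with an ASCII method token such as "GET " or "POST ".
class ProtocolSniffer {
    enum Protocol { HOT_ROD, HTTP, UNKNOWN }

    static Protocol classify(byte[] firstBytes) {
        if (firstBytes == null || firstBytes.length == 0) {
            return Protocol.UNKNOWN;
        }
        if ((firstBytes[0] & 0xFF) == 0xA0) {
            return Protocol.HOT_ROD;
        }
        String head = new String(firstBytes, StandardCharsets.US_ASCII);
        for (String method : new String[] { "GET ", "HEAD ", "POST ", "PUT ", "DELETE ", "OPTIONS " }) {
            if (head.startsWith(method)) {
                return Protocol.HTTP;
            }
        }
        return Protocol.UNKNOWN;
    }
}
```

Note that under TLS these first bytes are encrypted, which is why the thread leans on SSL with SNI for tenant routing instead of byte inspection.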