From slaskawi at redhat.com Mon Jul 3 02:24:31 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 03 Jul 2017 06:24:31 +0000 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> Message-ID: Hey Thomas, Comments inlined. Thanks, Sebastian On Fri, Jun 30, 2017 at 9:29 PM Thomas SEGISMONT wrote: > Also, it seems infinispan-cloud 9.0.3.Final depends on JGroups 0.9.1. > > Do you plan to release another 9.0.x version which depends on 1.0.0.Beta1 > or later? If so, is there a target date? > No, I didn't plan to backport it to the 9.0.x branch. The implementation is pretty new and I wanted to play with it a little bit more before making it "stable". Could you please tell me why you need it? > > 2017-06-30 11:40 GMT+02:00 Thomas SEGISMONT : > >> Hi everyone, >> >> Thank you for this great work, the dependency diet and the extra port >> removal are both very useful. The extra port removal is key to enable >> Vert.x clustering in Openshift S2I environments. >> >> I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked >> fine. I have a few questions though. >> >> I couldn't configure it with env variables. Before you ask, yes I noticed >> the name changes ;-) I only had a quick look at JGroups config code but it >> seems it only resolves system properties. Did it work for you because you >> tried with an Infinispan server? >> > Could you please tell me which JGroups version you're running? I think environment variable support requires >= 4.0.3.Final [1]. [1] https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/pom.xml#L56 > >> Since I couldn't configure it externally I had to create a custom JGroups >> file. Usually, we recommend [1] Vert.x users to add the infinispan-cloud >> dependency and a system property: >> -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml >> > +1, that's the correct way to do it. 
I think bumping up the JGroups version might solve your problem here. > >> My custom JGroups file is just a copy of >> default-configs/default-jgroups-kubernetes.xml in which I added the >> masterHost and namespace properties. >> > hmmmm that's odd. Why do you need to change masterHost? The default should be fine in 99% of the use cases. The namespace is a separate thing and in most cases a user should set it to his own project. > Is it still recommended to use the >> default-configs/default-jgroups-kubernetes.xml stack? Or is any change >> planned after the KUBE_PING changes? >> I wouldn't expect a protocol implementation change to impact a stack >> configuration but they say there are no stupid questions :) >> > No no, using default-jgroups-kubernetes.xml is still necessary (and there are no plans to change it in the future). Using a specific transport is tied to your deployment model. In most cases in Kubernetes and OpenShift you should use KUBE_PING. Your network configuration might support multicasting, in which case you'd probably want to check whether UDP performs better. You may also use StatefulSets and try out DNS_PING. As you can see there are many different combinations of protocols you might use. Recommending the default config shipped with infinispan-cloud is the way to go here in my opinion. > >> Thank you, >> Thomas >> >> >> [1] >> http://vertx.io/docs/vertx-infinispan/java/#_configuring_for_openshift_3 >> >> >> 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec : >> >>> Yep, no problems found!!! >>> >>> I also had the impression that the new implementation is "faster". Though I >>> haven't measured it... it's just my impression. >>> >>> Awesome work Bela! >>> >>> On Thu, Jun 15, 2017 at 7:42 AM Bela Ban wrote: >>> >>>> Thanks, Sebastian! >>>> >>>> I assume testing on GKE and minikube/openshift was successful? 
>>>> >>>> >>>> On 14/06/17 13:15, Sebastian Laskawiec wrote: >>>> > Hey guys, >>>> > >>>> > Just a heads up, I've just created a PR that upgrades KUBE_PING to >>>> > 1.0.0.Beta1 [1]. As you probably seen in [2], 1.0.0.Beta1 was >>>> completely >>>> > rewritten and might behave slightly differently. >>>> > >>>> > Here is a summary of changes: >>>> > >>>> > * The latest KUBE_PING doesn't require embedded HTTP server for >>>> > discovery. Thus it is no longer required to expose port 8888 in >>>> Pods. >>>> > * The number of dependencies has been decreased. Currently we only >>>> > require JGroups and OAuth library. >>>> > * The new KUBE_PING works only with JGroups 4. There will be no >>>> > JGroups 3 support. >>>> > * Some of the environmental variables were shortened and we removed >>>> > `OPENSHIFT` prefix. So if you use `OPENSHIFT_KUBE_PING_NAMESPACE`, >>>> > you will need to change it to `KUBERNETES_NAMESPACE`. Please refer >>>> > to [3] for more information. >>>> > >>>> > I also switched default branch in Kubernetes Ping repository to >>>> master [4]. 
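[The variable rename in that summary (`OPENSHIFT_KUBE_PING_NAMESPACE` to `KUBERNETES_NAMESPACE`) can be bridged in an entrypoint script during migration. A minimal sketch follows; only the two variable names come from the thread and the linked README — the fallback logic itself is illustrative, not part of KUBE_PING, and "myproject" is a placeholder.]

```shell
#!/bin/sh
# Compatibility shim sketch: older deployment manifests may still export the
# OPENSHIFT_-prefixed name; copy it to the new name before launching the app.
unset KUBERNETES_NAMESPACE                   # clean slate for the demo
OPENSHIFT_KUBE_PING_NAMESPACE="myproject"    # stand-in for an old manifest

# If only the old variable is set, mirror it to the new name KUBE_PING reads.
if [ -z "${KUBERNETES_NAMESPACE}" ] && [ -n "${OPENSHIFT_KUBE_PING_NAMESPACE}" ]; then
  KUBERNETES_NAMESPACE="${OPENSHIFT_KUBE_PING_NAMESPACE}"
  export KUBERNETES_NAMESPACE
fi
echo "KUBE_PING namespace: ${KUBERNETES_NAMESPACE}"
```

Once all manifests export the new name directly, the shim can be dropped.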
>>>> > >>>> > Thanks, >>>> > Sebastian >>>> > >>>> > [1] https://github.com/infinispan/infinispan/pull/5201 >>>> > [2] >>>> http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html >>>> > [3] >>>> https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/README.adoc >>>> > [4] https://github.com/jgroups-extras/jgroups-kubernetes >>>> > -- >>>> > >>>> > SEBASTIAN ?ASKAWIEC >>>> > >>>> > INFINISPAN DEVELOPER >>>> > >>>> > Red Hat EMEA >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > _______________________________________________ >>>> > infinispan-dev mailing list >>>> > infinispan-dev at lists.jboss.org >>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> > >>>> >>>> -- >>>> Bela Ban | http://www.jgroups.org >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> >>> SEBASTIAN ?ASKAWIEC >>> >>> INFINISPAN DEVELOPER >>> >>> Red Hat EMEA >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/1eee473f/attachment-0001.html From tsegismont at gmail.com Mon Jul 3 03:37:12 2017 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Mon, 3 Jul 2017 09:37:12 +0200 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> Message-ID: Hi Sebastian, 2017-07-03 8:24 GMT+02:00 Sebastian Laskawiec : > Hey Thomas, > > Comments inlined. > > Thanks, > Sebastian > > > On Fri, Jun 30, 2017 at 9:29 PM Thomas SEGISMONT > wrote: > >> Also, it seems infinispan-cloud 9.0.3.Final depends on JGroups 0.9.1. >> >> Do you plan to release another 9.0.x version which depends on 1.0.0.Beta1 >> or later? If so, is there a target date? >> > > No, I didn't plan to backport it to 9.0.x branch. The implementation is > pretty new and I wanted to play with it a little bit more before make it > "stable". > > Could you please tell me why do you need it? > On Openshift S2I environments, a pod can only expose a predetermined set of ports. Of course the administrator can customize this set, but in some cases (e.g. openshift.io) it is very unlikely that the extra port is added. > > >> >> 2017-06-30 11:40 GMT+02:00 Thomas SEGISMONT : >> >>> Hi everyone, >>> >>> Thank you for this great work, the dependency diet and the extra port >>> removal are both very useful. The extra port removal is key to enable >>> Vert.x clustering in Openshift S2I environments. >>> >>> I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked >>> fine. I have a few questions though. >>> >>> I couldn't configure it with env variables. Before you ask, yes I >>> noticed the name changes ;-) I only had a quick look at JGroups config code >>> but it seems it only resolves system properties. Did it work for you >>> because you tried with an Infinispan server? >>> >> > Could you please tell me what is the JGroups version you're running on? 
I > think environment variables support requires >= 4.0.3.Final [1]. > > [1] https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/pom. > xml#L56 > This explains it. I use the version of JGroups which comes with ISPN 9.0.0.Final (4.0.1.Final) > > >> >>> Since I couldn't configure it externally I had to create a custom >>> JGroups file. Usually, we recommend [1] Vert.x users to add the >>> infinispan-cloud dependency and a system property: >>> -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml >>> >> > +1, that's the correct way to do it. I think bumping up JGroups version > might solve your problem here. > Yes. Upgrading JGroups that solved the issue. > > >> >>> My custom JGroups file is a just a copy of default-configs/default-jgroups-kubernetes.xml >>> in which I added the masterHost and namespace properties. >>> >> > hmmmm that's odd. Why do you need to change masterHost? The default should > be fine in 99% of the use cases. The namespace is a separate thing and in > most of the cases a user should set it to his own project. > I needed to set the masterHost via a sysprop as my older version of JGroups wouldn't lookup env vars. With 4.0.3.Final I don't need it anymore. > > >> Is it still recommended to use the default-configs/default-jgroups-kubernetes.xml >>> stack ? Or is any change planned after the KUBE_PING changes? >>> I wouldn't expect a protocol implementation change to impact a stack >>> configuration but they say there are no stupid questions :) >>> >> > No no, using default-jgroups-kubernetes.xml is still necessary (and there > are no plans to change it in the future). Using specific transport is tied > with your deployment model. In most of the cases in Kubernetes and > OpenShift you should use KUBE_PING. Your network configuration might > support multicasting and then you'd probably need to check if UDP is not > performing better. You may also use StatefulSets and try out DNS_PING. 
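[Switching between those transports mostly means pointing the config system property at a different stack file. The sketch below is an assumption-laden illustration: only `default-configs/default-jgroups-kubernetes.xml` and the `-Dvertx.jgroups.config` property are named in the thread; the udp file name and the `CLUSTER_TRANSPORT` selector variable are invented for the example.]

```shell
#!/bin/sh
# Pick a JGroups stack per deployment model (selection logic is our own sketch).
choose_stack() {
  case "$1" in
    udp) echo "default-configs/default-jgroups-udp.xml" ;;          # multicast-capable networks (assumed file name)
    *)   echo "default-configs/default-jgroups-kubernetes.xml" ;;   # KUBE_PING default on Kubernetes/OpenShift
  esac
}
echo "-Dvertx.jgroups.config=$(choose_stack "${CLUSTER_TRANSPORT:-kube}")"
```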
> > As you can see there are many different combinations of protocols you > might use. Recommending default config shipped with infinispan-cloud is a > way to go here in my opinion. > OK. I had no plans to try my own stack for Openshift really :) Just wanted to make sure the new KUBE_PING doesn't impact the infinispan-cloud default Kubernetes stack. > > >> >>> Thank you, >>> Thomas >>> >>> >>> [1] http://vertx.io/docs/vertx-infinispan/java/#_ >>> configuring_for_openshift_3 >>> >>> >>> 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec : >>> >>>> Yep, no problems found!!! >>>> >>>> I had also impression that the new implementation is "faster". Though I >>>> haven't measured it... it just my impression. >>>> >>>> Awesome work Bela! >>>> >>>> On Thu, Jun 15, 2017 at 7:42 AM Bela Ban wrote: >>>> >>>>> Thanks, Sebastian! >>>>> >>>>> I assume testing on GKE and minikube/openshift was successful? >>>>> >>>>> >>>>> On 14/06/17 13:15, Sebastian Laskawiec wrote: >>>>> > Hey guys, >>>>> > >>>>> > Just a heads up, I've just created a PR that upgrades KUBE_PING to >>>>> > 1.0.0.Beta1 [1]. As you probably seen in [2], 1.0.0.Beta1 was >>>>> completely >>>>> > rewritten and might behave slightly differently. >>>>> > >>>>> > Here is a summary of changes: >>>>> > >>>>> > * The latest KUBE_PING doesn't require embedded HTTP server for >>>>> > discovery. Thus it is no longer required to expose port 8888 in >>>>> Pods. >>>>> > * The number of dependencies has been decreased. Currently we only >>>>> > require JGroups and OAuth library. >>>>> > * The new KUBE_PING works only with JGroups 4. There will be no >>>>> > JGroups 3 support. >>>>> > * Some of the environmental variables were shortened and we removed >>>>> > `OPENSHIFT` prefix. So if you use `OPENSHIFT_KUBE_PING_ >>>>> NAMESPACE`, >>>>> > you will need to change it to `KUBERNETES_NAMESPACE`. Please >>>>> refer >>>>> > to [3] for more information. 
>>>>> > >>>>> > I also switched default branch in Kubernetes Ping repository to >>>>> master [4]. >>>>> > >>>>> > Thanks, >>>>> > Sebastian >>>>> > >>>>> > [1] https://github.com/infinispan/infinispan/pull/5201 >>>>> > [2] http://belaban.blogspot.ch/2017/05/running-infinispan- >>>>> cluster-with.html >>>>> > [3] https://github.com/jgroups-extras/jgroups-kubernetes/ >>>>> blob/master/README.adoc >>>>> > [4] https://github.com/jgroups-extras/jgroups-kubernetes >>>>> > -- >>>>> > >>>>> > SEBASTIAN ?ASKAWIEC >>>>> > >>>>> > INFINISPAN DEVELOPER >>>>> > >>>>> > Red Hat EMEA >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > _______________________________________________ >>>>> > infinispan-dev mailing list >>>>> > infinispan-dev at lists.jboss.org >>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> > >>>>> >>>>> -- >>>>> Bela Ban | http://www.jgroups.org >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> >>>> SEBASTIAN ?ASKAWIEC >>>> >>>> INFINISPAN DEVELOPER >>>> >>>> Red Hat EMEA >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/0c19a4d8/attachment-0001.html From ttarrant at redhat.com Mon Jul 3 03:52:47 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 3 Jul 2017 09:52:47 +0200 Subject: [infinispan-dev] Feedback for PR 5233 needed In-Reply-To: References: Message-ID: <6cc2794d-c5f5-4014-8430-6bc97877f1f4@redhat.com> I like it a lot. To follow up on my comment on the PR (repeated here for wider distribution): we really need to think about how to deal with redeployments and resource restarts. I think restarts are unavoidable: a redeployment means dumping and replacing a classloader with all of its classes. There are two approaches I can think of: - "freezing" and "thawing" a cache via some form of persistence (which could also mean adding a temporary cache store) - separating the wildfly service lifecycle from the cache lifecycle, detaching/reattaching a cache without stopping it when the wrapping service is restarted. Tristan On 6/29/17 5:20 PM, Adrian Nistor wrote: > People, don't be shy, the PR is in now, but things can still change > based on your feedback. We still have two weeks until we release the Final. > > On 06/29/2017 03:45 PM, Adrian Nistor wrote: >> This pr [1] adds a new approach for defining the compat marshaller class >> and the indexed entity classes (in server), and the same approach could >> be used in future for deployment of encoders, lucene analyzers and >> possibly other code bits that a user would want to add to a server in order >> to implement an extension point that we support. >> >> Your feedback is welcome! 
>> >> [1] https://github.com/infinispan/infinispan/pull/5233 >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Mon Jul 3 03:59:14 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 03 Jul 2017 07:59:14 +0000 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> Message-ID: On Mon, Jul 3, 2017 at 9:38 AM Thomas SEGISMONT wrote: > Hi Sebastian, > > 2017-07-03 8:24 GMT+02:00 Sebastian Laskawiec : > >> Hey Thomas, >> >> Comments inlined. >> >> Thanks, >> Sebastian >> >> >> On Fri, Jun 30, 2017 at 9:29 PM Thomas SEGISMONT >> wrote: >> >>> Also, it seems infinispan-cloud 9.0.3.Final depends on JGroups 0.9.1. >>> >>> Do you plan to release another 9.0.x version which depends on >>> 1.0.0.Beta1 or later? If so, is there a target date? >>> >> >> No, I didn't plan to backport it to 9.0.x branch. The implementation is >> pretty new and I wanted to play with it a little bit more before make it >> "stable". >> >> Could you please tell me why do you need it? >> > > On Openshift S2I environments, a pod can only expose a predetermined set > of ports. Of course the administrator can customize this set, but in some > cases (e.g. openshift.io) it is very unlikely that the extra port is > added. > That's a fair point. And there's no workaround for this. Ok, I'll do a backport than. > > >> >> >>> >>> 2017-06-30 11:40 GMT+02:00 Thomas SEGISMONT : >>> >>>> Hi everyone, >>>> >>>> Thank you for this great work, the dependency diet and the extra port >>>> removal are both very useful. 
The extra port removal is key to enable >>>> Vert.x clustering in Openshift S2I environments. >>>> >>>> I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked >>>> fine. I have a few questions though. >>>> >>>> I couldn't configure it with env variables. Before you ask, yes I >>>> noticed the name changes ;-) I only had a quick look at JGroups config code >>>> but it seems it only resolves system properties. Did it work for you >>>> because you tried with an Infinispan server? >>>> >>> >> Could you please tell me what is the JGroups version you're running on? I >> think environment variables support requires >= 4.0.3.Final [1]. >> >> [1] >> https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/pom.xml#L56 >> > > This explains it. I use the version of JGroups which comes with ISPN > 9.0.0.Final (4.0.1.Final) > Yeah, I will also bump it up. I hope Dan and Pedro will be OK with that. > > >> >> >>> >>>> Since I couldn't configure it externally I had to create a custom >>>> JGroups file. Usually, we recommend [1] Vert.x users to add the >>>> infinispan-cloud dependency and a system property: >>>> -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml >>>> >>> >> +1, that's the correct way to do it. I think bumping up JGroups version >> might solve your problem here. >> > > Yes. Upgrading JGroups that solved the issue. > > >> >> >>> >>>> My custom JGroups file is a just a copy of >>>> default-configs/default-jgroups-kubernetes.xml in which I added the >>>> masterHost and namespace properties. >>>> >>> >> hmmmm that's odd. Why do you need to change masterHost? The default >> should be fine in 99% of the use cases. The namespace is a separate thing >> and in most of the cases a user should set it to his own project. >> > > I needed to set the masterHost via a sysprop as my older version of > JGroups wouldn't lookup env vars. With 4.0.3.Final I don't need it anymore. > Ok, understood. 
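[For anyone stuck on a JGroups version from before the env-variable resolution mentioned above (added in 4.0.2 per the JGRP-2166 reference later in this thread), the system-property route can be sketched like this. The property and file names mirror the thread; the "myproject" value and the jar name are placeholders, and the exact placeholder names your stack file uses are what actually count.]

```shell
#!/bin/sh
# Workaround sketch: older JGroups resolves ${...} placeholders in the stack
# XML against system properties only, so pass discovery settings as -D flags.
NAMESPACE="myproject"
JAVA_OPTS="-Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml"
JAVA_OPTS="$JAVA_OPTS -DKUBERNETES_NAMESPACE=${NAMESPACE}"
echo "$JAVA_OPTS"
# launch would then be: java $JAVA_OPTS -jar my-vertx-app.jar
```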
> > >> >> >>> Is it still recommended to use the >>>> default-configs/default-jgroups-kubernetes.xml stack ? Or is any change >>>> planned after the KUBE_PING changes? >>>> I wouldn't expect a protocol implementation change to impact a stack >>>> configuration but they say there are no stupid questions :) >>>> >>> >> No no, using default-jgroups-kubernetes.xml is still necessary (and there >> are no plans to change it in the future). Using specific transport is tied >> with your deployment model. In most of the cases in Kubernetes and >> OpenShift you should use KUBE_PING. Your network configuration might >> support multicasting and then you'd probably need to check if UDP is not >> performing better. You may also use StatefulSets and try out DNS_PING. >> >> As you can see there are many different combinations of protocols you >> might use. Recommending default config shipped with infinispan-cloud is a >> way to go here in my opinion. >> > > OK. I had no plans to try my own stack for Openshift really :) Just wanted > to make sure the new KUBE_PING doesn't impact the infinispan-cloud default > Kubernetes stack. > No no... it should be fine. > > >> >> >>> >>>> Thank you, >>>> Thomas >>>> >>>> >>>> [1] >>>> http://vertx.io/docs/vertx-infinispan/java/#_configuring_for_openshift_3 >>>> >>>> >>>> 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec : >>>> >>>>> Yep, no problems found!!! >>>>> >>>>> I had also impression that the new implementation is "faster". Though >>>>> I haven't measured it... it just my impression. >>>>> >>>>> Awesome work Bela! >>>>> >>>>> On Thu, Jun 15, 2017 at 7:42 AM Bela Ban wrote: >>>>> >>>>>> Thanks, Sebastian! >>>>>> >>>>>> I assume testing on GKE and minikube/openshift was successful? >>>>>> >>>>>> >>>>>> On 14/06/17 13:15, Sebastian Laskawiec wrote: >>>>>> > Hey guys, >>>>>> > >>>>>> > Just a heads up, I've just created a PR that upgrades KUBE_PING to >>>>>> > 1.0.0.Beta1 [1]. 
As you probably seen in [2], 1.0.0.Beta1 was >>>>>> completely >>>>>> > rewritten and might behave slightly differently. >>>>>> > >>>>>> > Here is a summary of changes: >>>>>> > >>>>>> > * The latest KUBE_PING doesn't require embedded HTTP server for >>>>>> > discovery. Thus it is no longer required to expose port 8888 in >>>>>> Pods. >>>>>> > * The number of dependencies has been decreased. Currently we only >>>>>> > require JGroups and OAuth library. >>>>>> > * The new KUBE_PING works only with JGroups 4. There will be no >>>>>> > JGroups 3 support. >>>>>> > * Some of the environmental variables were shortened and we >>>>>> removed >>>>>> > `OPENSHIFT` prefix. So if you use >>>>>> `OPENSHIFT_KUBE_PING_NAMESPACE`, >>>>>> > you will need to change it to `KUBERNETES_NAMESPACE`. Please >>>>>> refer >>>>>> > to [3] for more information. >>>>>> > >>>>>> > I also switched default branch in Kubernetes Ping repository to >>>>>> master [4]. >>>>>> > >>>>>> > Thanks, >>>>>> > Sebastian >>>>>> > >>>>>> > [1] https://github.com/infinispan/infinispan/pull/5201 >>>>>> > [2] >>>>>> http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html >>>>>> > [3] >>>>>> https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/README.adoc >>>>>> > [4] https://github.com/jgroups-extras/jgroups-kubernetes >>>>>> > -- >>>>>> > >>>>>> > SEBASTIAN ?ASKAWIEC >>>>>> > >>>>>> > INFINISPAN DEVELOPER >>>>>> > >>>>>> > Red Hat EMEA >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> > _______________________________________________ >>>>>> > infinispan-dev mailing list >>>>>> > infinispan-dev at lists.jboss.org >>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >>>>>> >>>>>> -- >>>>>> Bela Ban | http://www.jgroups.org >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> >>>>> SEBASTIAN 
?ASKAWIEC >>>>> >>>>> INFINISPAN DEVELOPER >>>>> >>>>> Red Hat EMEA >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> >> SEBASTIAN ?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red Hat EMEA >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/20c3123c/attachment-0001.html From belaban at mailbox.org Mon Jul 3 04:45:28 2017 From: belaban at mailbox.org (Bela Ban) Date: Mon, 3 Jul 2017 10:45:28 +0200 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> Message-ID: <86a452c8-f9d7-4cfb-05b6-b7d257fcbdfc@mailbox.org> Hi Thomas, has this issue been resolved? Env variables were introduced in 4.0.2 [1], so you need at least that version of JGroups. [1] https://issues.jboss.org/browse/JGRP-2166 On 30/06/17 11:40, Thomas SEGISMONT wrote: > Hi everyone, > > Thank you for this great work, the dependency diet and the extra port > removal are both very useful. The extra port removal is key to enable > Vert.x clustering in Openshift S2I environments. > > I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked > fine. 
I have a few questions though. > > I couldn't configure it with env variables. Before you ask, yes I > noticed the name changes ;-) I only had a quick look at JGroups config > code but it seems it only resolves system properties. Did it work for > you because you tried with an Infinispan server? > > Since I couldn't configure it externally I had to create a custom > JGroups file. Usually, we recommend [1] Vert.x users to add the > infinispan-cloud dependency and a system property: > -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml > > My custom JGroups file is a just a copy of > default-configs/default-jgroups-kubernetes.xml in which I added the > masterHost and namespace properties. > > Is it still recommended to use the > default-configs/default-jgroups-kubernetes.xml stack ? Or is any change > planned after the KUBE_PING changes? > I wouldn't expect a protocol implementation change to impact a stack > configuration but they say there are no stupid questions :) > > Thank you, > Thomas > > > [1] http://vertx.io/docs/vertx-infinispan/java/#_configuring_for_openshift_3 > > 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec >: > > Yep, no problems found!!! > > I had also impression that the new implementation is "faster". > Though I haven't measured it... it just my impression. > > Awesome work Bela! > > On Thu, Jun 15, 2017 at 7:42 AM Bela Ban > wrote: > > Thanks, Sebastian! > > I assume testing on GKE and minikube/openshift was successful? > > > On 14/06/17 13:15, Sebastian Laskawiec wrote: > > Hey guys, > > > > Just a heads up, I've just created a PR that upgrades KUBE_PING to > > 1.0.0.Beta1 [1]. As you probably seen in [2], 1.0.0.Beta1 was > completely > > rewritten and might behave slightly differently. > > > > Here is a summary of changes: > > > > * The latest KUBE_PING doesn't require embedded HTTP server for > > discovery. Thus it is no longer required to expose port > 8888 in Pods. > > * The number of dependencies has been decreased. 
Currently > we only > > require JGroups and OAuth library. > > * The new KUBE_PING works only with JGroups 4. There will be no > > JGroups 3 support. > > * Some of the environmental variables were shortened and we > removed > > `OPENSHIFT` prefix. So if you use > `OPENSHIFT_KUBE_PING_NAMESPACE`, > > you will need to change it to `KUBERNETES_NAMESPACE`. > Please refer > > to [3] for more information. > > > > I also switched default branch in Kubernetes Ping repository > to master [4]. > > > > Thanks, > > Sebastian > > > > [1] https://github.com/infinispan/infinispan/pull/5201 > > > [2] > http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html > > > [3] > https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/README.adoc > > > [4] https://github.com/jgroups-extras/jgroups-kubernetes > > > -- > > > > SEBASTIAN ?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red Hat EMEA > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > -- > Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban | http://www.jgroups.org From slaskawi at redhat.com Mon Jul 3 05:39:43 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 03 Jul 2017 09:39:43 +0000 Subject: [infinispan-dev] 9.0.x branch not 
compiling Message-ID: Hey Will, Tristan, I think you accidentally broke 9.0.x branch with this commit [1]. See [2]. May I ask you to have a look at it? Thanks, Sebastian [1] https://github.com/infinispan/infinispan/commit/fbed38fd4007f9e36a2697965a924d5b0db0bbd4 [2] http://ci.infinispan.org/job/Infinispan/job/9.0.x/1/console -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/cba68c25/attachment.html From slaskawi at redhat.com Mon Jul 3 05:42:15 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 03 Jul 2017 09:42:15 +0000 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: <86a452c8-f9d7-4cfb-05b6-b7d257fcbdfc@mailbox.org> References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> <86a452c8-f9d7-4cfb-05b6-b7d257fcbdfc@mailbox.org> Message-ID: FYI: https://github.com/infinispan/infinispan/pull/5257 But we first need to fix 9.0.x branch :D On Mon, Jul 3, 2017 at 10:48 AM Bela Ban wrote: > Hi Thomas, > > has this issue been resolved? Env variables were introduced in 4.0.2 > [1], so you need at least that version of JGroups. > > [1] https://issues.jboss.org/browse/JGRP-2166 > > On 30/06/17 11:40, Thomas SEGISMONT wrote: > > Hi everyone, > > > > Thank you for this great work, the dependency diet and the extra port > > removal are both very useful. The extra port removal is key to enable > > Vert.x clustering in Openshift S2I environments. > > > > I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked > > fine. I have a few questions though. > > > > I couldn't configure it with env variables. Before you ask, yes I > > noticed the name changes ;-) I only had a quick look at JGroups config > > code but it seems it only resolves system properties. Did it work for > > you because you tried with an Infinispan server? 
> > > > Since I couldn't configure it externally I had to create a custom > > JGroups file. Usually, we recommend [1] Vert.x users to add the > > infinispan-cloud dependency and a system property: > > -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml > > > > My custom JGroups file is a just a copy of > > default-configs/default-jgroups-kubernetes.xml in which I added the > > masterHost and namespace properties. > > > > Is it still recommended to use the > > default-configs/default-jgroups-kubernetes.xml stack ? Or is any change > > planned after the KUBE_PING changes? > > I wouldn't expect a protocol implementation change to impact a stack > > configuration but they say there are no stupid questions :) > > > > Thank you, > > Thomas > > > > > > [1] > http://vertx.io/docs/vertx-infinispan/java/#_configuring_for_openshift_3 > > > > 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec > >: > > > > Yep, no problems found!!! > > > > I had also impression that the new implementation is "faster". > > Though I haven't measured it... it just my impression. > > > > Awesome work Bela! > > > > On Thu, Jun 15, 2017 at 7:42 AM Bela Ban > > wrote: > > > > Thanks, Sebastian! > > > > I assume testing on GKE and minikube/openshift was successful? > > > > > > On 14/06/17 13:15, Sebastian Laskawiec wrote: > > > Hey guys, > > > > > > Just a heads up, I've just created a PR that upgrades > KUBE_PING to > > > 1.0.0.Beta1 [1]. As you probably seen in [2], 1.0.0.Beta1 was > > completely > > > rewritten and might behave slightly differently. > > > > > > Here is a summary of changes: > > > > > > * The latest KUBE_PING doesn't require embedded HTTP server > for > > > discovery. Thus it is no longer required to expose port > > 8888 in Pods. > > > * The number of dependencies has been decreased. Currently > > we only > > > require JGroups and OAuth library. > > > * The new KUBE_PING works only with JGroups 4. There will be > no > > > JGroups 3 support. 
> > > * Some of the environmental variables were shortened and we > > removed > > > `OPENSHIFT` prefix. So if you use > > `OPENSHIFT_KUBE_PING_NAMESPACE`, > > > you will need to change it to `KUBERNETES_NAMESPACE`. > > Please refer > > > to [3] for more information. > > > > > > I also switched default branch in Kubernetes Ping repository > > to master [4]. > > > > > > Thanks, > > > Sebastian > > > > > > [1] https://github.com/infinispan/infinispan/pull/5201 > > > > > [2] > > > http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html > > < > http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html> > > > [3] > > > https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/README.adoc > > < > https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/README.adoc > > > > > [4] https://github.com/jgroups-extras/jgroups-kubernetes > > > > > -- > > > > > > SEBASTIAN ?ASKAWIEC > > > > > > INFINISPAN DEVELOPER > > > > > > Red Hat EMEA > > > > > > > > > > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > -- > > Bela Ban | http://www.jgroups.org > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > > > SEBASTIAN ?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red Hat EMEA > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org infinispan-dev at lists.jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > 
Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/329ad8f3/attachment-0001.html From tsegismont at gmail.com Mon Jul 3 07:40:13 2017 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Mon, 3 Jul 2017 13:40:13 +0200 Subject: [infinispan-dev] KUBE_PING changes In-Reply-To: <86a452c8-f9d7-4cfb-05b6-b7d257fcbdfc@mailbox.org> References: <6b3dc945-449b-1c9b-1384-4a4cdde83eca@mailbox.org> <86a452c8-f9d7-4cfb-05b6-b7d257fcbdfc@mailbox.org> Message-ID: Hi Bela, Yes, upgrading to 4.0.3.Final solved the issue. Thank you! 2017-07-03 10:45 GMT+02:00 Bela Ban : > Hi Thomas, > > has this issue been resolved? Env variables were introduced in 4.0.2 > [1], so you need at least that version of JGroups. > > [1] https://issues.jboss.org/browse/JGRP-2166 > > On 30/06/17 11:40, Thomas SEGISMONT wrote: > > Hi everyone, > > > > Thank you for this great work, the dependency diet and the extra port > > removal are both very useful. The extra port removal is key to enable > > Vert.x clustering in Openshift S2I environments. > > > > I tried the new KUBE_PING (beta1) with vertx-infinispan and it worked > > fine. I have a few questions though. > > > > I couldn't configure it with env variables. Before you ask, yes I > > noticed the name changes ;-) I only had a quick look at JGroups config > > code but it seems it only resolves system properties. Did it work for > > you because you tried with an Infinispan server? > > > > Since I couldn't configure it externally I had to create a custom > > JGroups file. 
Usually, we recommend [1] Vert.x users to add the > > infinispan-cloud dependency and a system property: > > -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml > > > > My custom JGroups file is a just a copy of > > default-configs/default-jgroups-kubernetes.xml in which I added the > > masterHost and namespace properties. > > > > Is it still recommended to use the > > default-configs/default-jgroups-kubernetes.xml stack ? Or is any change > > planned after the KUBE_PING changes? > > I wouldn't expect a protocol implementation change to impact a stack > > configuration but they say there are no stupid questions :) > > > > Thank you, > > Thomas > > > > > > [1] http://vertx.io/docs/vertx-infinispan/java/#_configuring_ > for_openshift_3 > > > > 2017-06-15 8:21 GMT+02:00 Sebastian Laskawiec > >: > > > > Yep, no problems found!!! > > > > I had also impression that the new implementation is "faster". > > Though I haven't measured it... it just my impression. > > > > Awesome work Bela! > > > > On Thu, Jun 15, 2017 at 7:42 AM Bela Ban > > wrote: > > > > Thanks, Sebastian! > > > > I assume testing on GKE and minikube/openshift was successful? > > > > > > On 14/06/17 13:15, Sebastian Laskawiec wrote: > > > Hey guys, > > > > > > Just a heads up, I've just created a PR that upgrades > KUBE_PING to > > > 1.0.0.Beta1 [1]. As you probably seen in [2], 1.0.0.Beta1 was > > completely > > > rewritten and might behave slightly differently. > > > > > > Here is a summary of changes: > > > > > > * The latest KUBE_PING doesn't require embedded HTTP server > for > > > discovery. Thus it is no longer required to expose port > > 8888 in Pods. > > > * The number of dependencies has been decreased. Currently > > we only > > > require JGroups and OAuth library. > > > * The new KUBE_PING works only with JGroups 4. There will be > no > > > JGroups 3 support. > > > * Some of the environmental variables were shortened and we > > removed > > > `OPENSHIFT` prefix. 
So if you use > > `OPENSHIFT_KUBE_PING_NAMESPACE`, > > > you will need to change it to `KUBERNETES_NAMESPACE`. > > Please refer > > > to [3] for more information. > > > > > > I also switched default branch in Kubernetes Ping repository > > to master [4]. > > > > > > Thanks, > > > Sebastian > > > > > > [1] https://github.com/infinispan/infinispan/pull/5201 > > > > > [2] > > http://belaban.blogspot.ch/2017/05/running-infinispan- > cluster-with.html > > cluster-with.html> > > > [3] > > https://github.com/jgroups-extras/jgroups-kubernetes/ > blob/master/README.adoc > > blob/master/README.adoc> > > > [4] https://github.com/jgroups-extras/jgroups-kubernetes > > > > > -- > > > > > > SEBASTIAN ?ASKAWIEC > > > > > > INFINISPAN DEVELOPER > > > > > > Red Hat EMEA > > > > > > > > > > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > -- > > Bela Ban | http://www.jgroups.org > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > -- > > > > SEBASTIAN ?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red Hat EMEA > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part 
-------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/f5fac2cc/attachment.html From mudokonman at gmail.com Mon Jul 3 09:17:57 2017 From: mudokonman at gmail.com (William Burns) Date: Mon, 03 Jul 2017 13:17:57 +0000 Subject: [infinispan-dev] 9.0.x branch not compiling In-Reply-To: References: Message-ID: It was actually [1]. Will send a PR in just a few. [1] https://github.com/infinispan/infinispan/commit/905eb3551973db0590a336234e51aefeed62ec08#diff-3073de718ac371bd99728ce9e21557ce On Mon, Jul 3, 2017 at 5:40 AM Sebastian Laskawiec wrote: > Hey Will, Tristan, > > I think you accidentally broke 9.0.x branch with this commit [1]. See [2]. > > May I ask you to have a look at it? > > Thanks, > Sebastian > > [1] > https://github.com/infinispan/infinispan/commit/fbed38fd4007f9e36a2697965a924d5b0db0bbd4 > [2] http://ci.infinispan.org/job/Infinispan/job/9.0.x/1/console > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/5d2db18d/attachment-0001.html From slaskawi at redhat.com Mon Jul 3 10:15:53 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 03 Jul 2017 14:15:53 +0000 Subject: [infinispan-dev] 9.0.x branch not compiling In-Reply-To: References: Message-ID: Thanks Will! On Mon, Jul 3, 2017 at 4:14 PM William Burns wrote: > It was actually [1]. Will send a PR in just a few. 
> > [1] > https://github.com/infinispan/infinispan/commit/905eb3551973db0590a336234e51aefeed62ec08#diff-3073de718ac371bd99728ce9e21557ce > > On Mon, Jul 3, 2017 at 5:40 AM Sebastian Laskawiec > wrote: > >> Hey Will, Tristan, >> >> I think you accidentally broke 9.0.x branch with this commit [1]. See [2]. >> >> May I ask you to have a look at it? >> >> Thanks, >> Sebastian >> >> [1] >> https://github.com/infinispan/infinispan/commit/fbed38fd4007f9e36a2697965a924d5b0db0bbd4 >> [2] http://ci.infinispan.org/job/Infinispan/job/9.0.x/1/console >> -- >> >> SEBASTIAN ?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red Hat EMEA >> >> > _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170703/7de5da9d/attachment.html From galder at redhat.com Mon Jul 3 10:27:02 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 3 Jul 2017 16:27:02 +0200 Subject: [infinispan-dev] Feedback for PR 5233 needed In-Reply-To: <6cc2794d-c5f5-4014-8430-6bc97877f1f4@redhat.com> References: <6cc2794d-c5f5-4014-8430-6bc97877f1f4@redhat.com> Message-ID: <830E5BF2-C565-4602-B209-0AEA3E067C00@redhat.com> I already explained in another email thread, but let me make it explicit here: The way compatibility mode works has a big influence on how useful redeploying marshallers is. If compatibility is lazy, redeployment of marshaller could be useful since all the conversions happen lazily. So, conversions would only happen when data is requested. 
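The lazy path can be sketched with a toy byte store; everything here (the Marshaller interface, the upper-casing "redeployed" marshaller) is a hypothetical illustration of convert-on-read, not Infinispan's compatibility-mode code:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class LazyCompat {
    // Hypothetical stand-in for a deployable marshaller.
    interface Marshaller { Object fromBytes(byte[] bytes); }

    static final Map<String, byte[]> store = new HashMap<>();

    static void put(String key, byte[] value) {
        store.put(key, value);  // lazy: no conversion on the write path
    }

    static Object get(String key, Marshaller m) {
        byte[] raw = store.get(key);
        return raw == null ? null : m.fromBytes(raw);  // convert only on read
    }

    public static void main(String[] args) {
        put("greeting", "hello".getBytes(StandardCharsets.UTF_8));
        // Marshaller "A": plain UTF-8 strings.
        Marshaller mA = bytes -> new String(bytes, StandardCharsets.UTF_8);
        // "Redeployed" marshaller "B": reads the same stored bytes differently.
        Marshaller mB = bytes -> new String(bytes, StandardCharsets.UTF_8).toUpperCase();
        System.out.println(get("greeting", mA));  // hello
        System.out.println(get("greeting", mB));  // HELLO
    }
}
```

The stored bytes never change, so swapping the marshaller only changes what future reads produce, which is why redeployment is tolerable in the lazy model.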
So, if data comes from Hot Rod in byte[], only when reading it might be converted into a POJO. If data comes as a POJO, say from embedded, you'd keep it as is, and only when read from Hot Rod you'd convert to binary. If compatibility is eager, the conversion happens on write, and that can have a negative impact if the marshaller is redeployed. If data has been unmarshalled with marshaller A, and then you deploy marshaller B, it might result in converting the unmarshalled POJO into a binary format that the client can't understand. So, IMO, if compat mode is lazy, redeployment could work... but I think redeployments add a layer of complexity that users might not really need. I'd rather not have redeployments and instead focus on rolling upgrade or freezing capabilities, as Tristan mentioned, to be able to bring a server down and up without issues for the user. Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 3 Jul 2017, at 09:52, Tristan Tarrant wrote: > > I like it a lot. > To follow up on my comment on the PR, but to get a wider distribution, > we really need to think about how to deal with redeployments and > resource restarts. > I think restarts are unavoidable: a redeployment means dumping and > replacing a classloader with all of its classes. There are two > approaches I can think of: > > - "freezing" and "thawing" a cache via some form of persistence (which > could also mean adding a temporary cache store) > - separate the wildfly service lifecycle from the cache lifecycle, > detaching/reattaching a cache without stopping when the wrapping service > is restarted. > > Tristan > > On 6/29/17 5:20 PM, Adrian Nistor wrote: >> People, don't be shy, the PR is in now, but things can still change >> based on your feedback. We still have two weeks until we release the Final.
>> >> On 06/29/2017 03:45 PM, Adrian Nistor wrote: >>> This PR [1] adds a new approach for defining the compat marshaller class >>> and the indexed entity classes (in server), and the same approach could >>> be used in future for deployment of encoders, lucene analyzers and >>> possibly other code bits that a user would want to add to a server in order >>> to implement an extension point that we support. >>> >>> Your feedback is welcome! >>> >>> [1] https://github.com/infinispan/infinispan/pull/5233 >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Jul 3 11:22:33 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 3 Jul 2017 17:22:33 +0200 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-07-03 Message-ID: <8e9addac-3135-8c4e-eb8e-488825603d18@redhat.com> Hi all, the weekly meeting logs are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-07-03-14.01.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Tue Jul 4 07:11:28 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 4 Jul 2017 13:11:28 +0200 Subject: [infinispan-dev] On the scattered cache blog post Message-ID: Hey Radim, Awesome blog post on scattered cache [1]! I think there's some extra information to be added or to be clarified in the blog itself: 1.
From what I understand, scattered cache should help the embedded use case primarily? When using Hot Rod, the primary owner is always hit, so the penalty of landing in a non-owner and having to do 2 RPCs is not there. Am I right? This should be clarified in the blog post. 2. "As you can see, this algorithm cannot be easily extended to multiple owners" <- Do you mean users should never set num owners to 3 or higher? How would the system work if num owners was 1? Some of these questions might have been answered in the design doc, but as a user, I should not be expected to read the design document to answer these questions. If these questions are answered in the user documentation, that would be fine, but I feel these are things that should be explained/clarified in the blog post itself. Cheers, [1] http://blog.infinispan.org/2017/07/scattered-cache.html -- Galder Zamarreño Infinispan, Red Hat From rvansa at redhat.com Tue Jul 4 07:21:21 2017 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 4 Jul 2017 13:21:21 +0200 Subject: [infinispan-dev] On the scattered cache blog post In-Reply-To: References: Message-ID: Hi Galder, 1. Yes, and the documentation states that we do not support scattered cache in server, see the last paragraph: > Scattered mode is not exposed in the server configuration as the server is usually accessed through the Hot Rod protocol. The protocol automatically selects the primary owner for writes and therefore the write (in distributed mode with two owners) requires a single RPC inside the cluster, too. Therefore, scattered cache would not bring the performance benefit. In the blog post I have focused on 'what works' and on the design. I've left the limitations (server, functional commands) for the documentation, keeping it short. If you really think that I should mention server, I could do that... 2. Setting numOwners to anything but 1 (or keeping it at default value) throws an exception when the configuration is validated.
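That validation rule can be condensed into a minimal sketch; the method and message are hypothetical illustrations, not Infinispan's actual ConfigurationBuilder code:

```java
public class ScatteredValidation {
    // Sketch: scattered mode only ever has one owner in the consistent hash,
    // so any other numOwners value is rejected when the config is validated.
    static void validateScattered(int numOwners) {
        if (numOwners != 1) {
            throw new IllegalStateException(
                "Scattered cache supports only numOwners = 1, got " + numOwners);
        }
    }

    public static void main(String[] args) {
        validateScattered(1);      // the only supported (and default) value
        try {
            validateScattered(2);  // rejected at validation time
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```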
XML does not expose such attribute. Yes, you've read correctly: 1 is the num owners because we don't keep more owners in the CH, so it's resilient against one node failure with single owner. I could add this info to the user docs. Radim On Tue, Jul 4, 2017 at 1:11 PM, Galder Zamarre?o wrote: > Hey Radim, > > Awesome blog post on scattered cache [1]! > > I think there's some extra information to be added or to be clarified in > the blog itself: > > 1. From what I understand, scattered cache should help the embedded use > case primarily? When using Hot Rod, the primary owner is always hit, so the > penalty of landing in a non-owner and having to do 2 RPCs is not there. Am > I right? This should be clarified in the blog post. > > 2. "As you can see, this algorithm cannot be easily extended to multiple > owners" <- Do you mean users should never set num owners to 3 or higher? > How would the system work if num owners was 1? > > Some of these questions might have been answered in the design doc, but as > a user, I should not be expected to read the design document to answer > these questions. > > If these questions are answered in the user documentation, that would be > fine but I feel these are things that should be explained/clarified in the > blog post itself. > > Cheers, > > [1] http://blog.infinispan.org/2017/07/scattered-cache.html > -- > Galder Zamarre?o > Infinispan, Red Hat > > -- Radim Vansa JBoss Performance Team -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170704/e9d1413b/attachment-0001.html From emmanuel at hibernate.org Tue Jul 4 12:19:38 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 4 Jul 2017 18:19:38 +0200 Subject: [infinispan-dev] On the scattered cache blog post In-Reply-To: References: Message-ID: <20170704161938.GI19074@hibernate.org> On Tue 17-07-04 13:21, Radim Vansa wrote: >2. 
Setting numOwners to anything but 1 (or keeping it at default value) >throws an exception when the configuration is validated. XML does not >expose such attribute. Yes, you've read correctly: 1 is the num owners >because we don't keep more owners in the CH, so it's resilient against one >node failure with single owner. I could add this info to the user docs. I must be missing something, but if you are only resilient when < 1 node goes down, aren't you called a non-resilient system? From slaskawi at redhat.com Wed Jul 5 02:22:49 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 05 Jul 2017 06:22:49 +0000 Subject: [infinispan-dev] Jenkins SSL and signup Message-ID: Hey, Our Jenkins CI now has SSL enabled with a Let's Encrypt certificate: https://ci.infinispan.org (please update your bookmarks). I also turned the signup button off. This doesn't affect our community since everyone can see the build logs. In case you need an account - just let me know. Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170705/4bba0c83/attachment.html From dan.berindei at gmail.com Wed Jul 5 05:54:33 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 5 Jul 2017 12:54:33 +0300 Subject: [infinispan-dev] Write-only commands In-Reply-To: References: <8b0b3142-5dc3-c5d6-675c-dd3d2b2723e5@redhat.com> <3a1c9888-0b9a-7bae-8986-57568ca4668b@redhat.com> <3e17fe7b-a860-0143-bafd-a2799ebe6ee1@redhat.com> Message-ID: On Thu, Jun 29, 2017 at 4:51 PM, Radim Vansa wrote: > On 06/29/2017 02:36 PM, Dan Berindei wrote: >> On Thu, Jun 29, 2017 at 2:19 PM, Radim Vansa wrote: >>> On 06/29/2017 11:16 AM, Dan Berindei wrote: >>>> On Thu, Jun 29, 2017 at 11:53 AM, Radim Vansa wrote: >>>>> On 06/28/2017 04:20 PM, Dan Berindei wrote: >>>>>> On Wed, Jun 28, 2017 at 2:17 PM, Radim Vansa wrote: >>>>>>> On 06/28/2017 10:40 AM, Dan Berindei wrote: >>>>>>>> On Wed, Jun 28, 2017 at 10:17 AM, Radim Vansa wrote: >>>>>>>>> On 06/27/2017 03:54 PM, Dan Berindei wrote: >>>>>>>>>> On Tue, Jun 27, 2017 at 2:43 PM, Adrian Nistor wrote: >>>>>>>>>>> I've said this in a previous thread on this same issue, I will repeat myself >>>>>>>>>>> as many times as needed. >>>>>>>>>>> >>>>>>>>>>> Continuous queries require the previous value itself, not just knowledge of >>>>>>>>>>> the type of the previous value. Strongly typed caches solve no problem here. >>>>>>>>>>> >>>>>>>>>>> So if we half-fix query but leave CQ broken I will be half-happy (ie. very >>>>>>>>>>> depressed) :) >>>>>>>>>>> >>>>>>>>>>> I'd remove these commands completely or possibly remove them just from >>>>>>>>>>> public API and keep them internal. >>>>>>>>>>> >>>>>>>>>> +1 to remove the flags from the public API. Most of them are not safe >>>>>>>>>> for applications to use, and ignoring them when they can lead to >>>>>>>>>> inconsistencies would make them useless. >>>>>>>>>> >>>>>>>>>> E.g. 
the whole point of SKIP_INDEX_CLEANUP is that the cache doesn't >>>>>>>>>> know when it is safe to skip the delete statement, and it relies on >>>>>>>>>> the application making a (possibly wrong) choice. >>>>>>>>>> >>>>>>>>>> IGNORE_RETURN_VALUES should be safe to use, and we actually recommend >>>>>>>>>> that applications use it right now. If query or listeners need the >>>>>>>>>> previous value, then we should load it internally, but hide it from >>>>>>>>>> the user. >>>>>>>>>> >>>>>>>>>> But removing it opens another discussion: should we replace it in the >>>>>>>>>> public API with a new method AdvancedCache.ignoreReturnValues(), or >>>>>>>>>> should we make it the default and add a method >>>>>>>>>> AdvancedCache.forceReturnPreviousValues()? >>>>>>>>> Please don't derail the thread. >>>>>>>>> >>>>>>>> I don't think I'm derailing the thread: IGNORE_PREVIOUS_VALUES also >>>>>>>> breaks the previous value for listeners, even if the QueryInterceptor >>>>>>>> removes it from write commands. And it is public (+recommended) API, >>>>>>>> in fact most if not all of our performance tests use it. >>>>>>> That's just a flawed implementation. IPV is documented to be a 'safe' >>>>>>> flag that should affect mostly primary -> origin replication, all the >>>>>>> other is implementation. And we can fix that. Users should *not* expect >>>>>>> that it e.g. skips loading from a cache store. We have already removed >>>>>>> the modes that would be broken-by-design. >>>>>>> >>>>>> I think you're confusing IGNORE_RETURN_VALUES with SKIP_REMOTE_LOOKUP >>>>>> here. The IVR javadoc doesn't say anything about remote lookups, only >>>>>> SRL does. >>>>> No, I am not; While IRV does not mention the replication, it's said to >>>>> be 'safe'. So omitting the primary -> origin replication is basically >>>>> all it can do when listeners are in place. 
You're right that I have >>>>> missed the second part in SRL talking about put()s; I took it as a flag >>>>> prohibiting any remote lookup (as the RPC operation in its whole) any >>>>> time the remote value is needed. Yes, the second part seems equal to my >>>>> understanding of IRV. >>>>> >>>>>> And I agree that the current status is far from ideal, but there is >>>>>> one more valid alternative: we can decide that the previous value is >>>>>> only reliable in clustered listeners, and local listeners don't always >>>>>> have it. Document that, make sure continuous query uses clustered >>>>>> listeners, and we're done :) >>>>> Unreliable return values are worse than none; I would rather remove them >>>>> if we can't guarantee that these are right. Though, clustered listeners >>>>> are based on regular listeners, so you'd need some means to make them >>>>> reliable. >>>> We could change the clustered listeners so that they're not based on >>>> the regular listeners... I've been pestering Will about this ever >>>> since the clustered listeners landed! >>>> >>>> But I should have been clearer: I didn't mean that the listeners on >>>> the backups should receive the previous value whenever we feel like >>>> it, I meant we should document and enforce that the previous value is >>>> only included in the event for listeners on the primary owner. >>>>>>> On the other hand, write-only commands are not about *returning* the >>>>>>> value but about (not) *reading* it, therefore (in my eyes) user could >>>>>>> make that assumption and would like to enforce it this way. Even some >>>>>>> docs explaining PersistenceMode.SKIP suggest that. >>>>>>> >>>>>> To me the purpose the same, there is no difference between returning >>>>>> the previous value to the application or providing the previous value >>>>>> via EntryView. >>>>> There is a difference between what's provided locally and what's send >>>>> over the network. 
>>>>> >>>>>> Applying this logic to the JCache API, it would mean >>>>>> put() should never read the previous value, because some users could >>>>>> assume that only getAndPut() reads it. >>>>> OK, this is a valid point. >>>>> >>>>>> In the old times we didn't have IGNORE_RETURN_VALUES, only >>>>>> SKIP_REMOTE_LOOKUP+SKIP_CACHE_LOAD, and they would sometimes be >>>>>> ignored (e.g. if the write was conditional). I think that's what >>>>>> Galder had in mind when he wrote the PersistenceMode api note, not the >>>>>> current behaviour of SKIP_CACHE_LOAD. I'll let Galder clarify this >>>>>> himself, but I'll be very disappointed if he says he designed the >>>>>> write-only operations so that they'll never work with query. >>>>>> >>>>>> >>>>>>> I don't want to talk about flags, because I see all flags but IPV as >>>>>>> 'effectively internal'. Let's discuss it more high-level. Some API >>>>>>> exposes non-reading operation - we can see that under some circumstances >>>>>>> this is not possible so we have options to 1) break stuff 2) break API >>>>>>> assumptions 3) sometimes break API assumptions 4) remove such API (to >>>>>>> not allow the user to make such assumptions). There's also an option 5) >>>>>>> to fail the operation if the API assumption would be broken. Though, I >>>>>>> don't fancy getting exception from a WriteOnlyMap.eval just because >>>>>>> someone has registered a listener. >>>>>>> >>>>>> I disagree with the premise: there's no good reason for the user to >>>>>> assume that write-only commands are *guaranteed* to never load the >>>>>> previous value from a store. We just need to add a clarification to >>>>>> the write-only operations' javadoc, no need to break anything. >>>>> OK then, though it diminishes the value of write-only commands a lot. 
>>>>> >>>>>>>> For that matter, ClusteredCacheLoaderInterceptor also doesn't load the >>>>>>>> previous value on backup owners for most write commands >>>>>>>> (LoadType.PRIMARY), we'd need to change that as well. >>>>>>> Yes, all commands will have to load current value on all owners. >>>>>>> >>>>>>>>>>> On 06/27/2017 01:28 PM, Sanne Grinovero wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 27 Jun 2017 10:13, "Radim Vansa" wrote: >>>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> I am working on entry version history (again). In Como we've discussed >>>>>>>>>>> that previous values are needed for (continuous) query and reliable >>>>>>>>>>> listeners, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Index based queries also require the previous value on a write - unless we >>>>>>>>>>> can get "strongly typed caches" giving guarantees about the class to >>>>>>>>>>> represent the content of a cache to be unique. >>>>>>>>>>> >>>>>>>>>>> Essentially we only need to know the type of the previous object. It might >>>>>>>>>>> be worth having a way to load the type metadata if the previous value only. >>>>>>>>>>> >>>>>>>>>>> so I wonder what should we do with functional write-only >>>>>>>>>>> commands. These are different to commands with flags, because flags >>>>>>>>>>> (other than ignore return value) are expected to break something. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Sorry I hope to not derail the thread but let's remind that we hope to >>>>>>>>>>> evolve beyond "flags are expected to break stuff" ; we never got to it but >>>>>>>>>>> search the mailing list. >>>>>>>>>>> >>>>>>>>>>> Since flags are exposed to the user I would rather they're not allowed to >>>>>>>>>>> break things. >>>>>>>>>>> Could they be treated as hints? Ignore the flag (and warn?) if the used >>>>>>>>>>> configuration/integrations veto them. >>>>>>>>>>> >>>>>>>>>>> Alternatively, let's remove them from API. 
Remember "The Jokre" POC was >>>>>>>>>>> intentionally designed to explore pushing the limits on performance w/o end >>>>>>>>>>> users having to solve puzzles, such as learning details about these flags >>>>>>>>>>> and their possible side effects. >>>>>>>>>>> >>>>>>>>>>> So assuming they become either "safe" or internal, maybe you can take >>>>>>>>>>> advantage of them? >>>>>>>>>>> >>>>>>>>>>> I see >>>>>>>>>>> the available options as: >>>>>>>>>>> >>>>>>>>>>> 1) run write-only commands 'optimized', ignoring any querying and such >>>>>>>>>>> (warn user that he will break it) >>>>>>>>>>> >>>>>>>>>>> 2) run write-only without any optimization, rendering them useless >>>>>>>>>>> >>>>>>>>>>> 3) detect when querying is set up (ignoring listeners and maybe other >>>>>>>>>>> stuff that could get broken) >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Might be useful for making a POC work, but I believe query will be very >>>>>>>>>>> likely to be often enabled. >>>>>>>>>>> Having an either / or switch for different features in Infinispan will make >>>>>>>>>>> it harder to use and understand, so I'd rather see work on the right design >>>>>>>>>>> as taking temporary shortcuts risks baking into stone features which we >>>>>>>>>>> later struggle to fix or maintain. >>>>>>>>>>> >>>>>>>>>> I vote for this option. >>>>>>>>>> >>>>>>>>>> Query, listeners, and other components that need the previous value >>>>>>>>>> should not just assume that the application knows better, they should >>>>>>>>>> be able to change how operations works based on their needs. Of >>>>>>>>>> course, the reverse is also true: if the application uses write-only >>>>>>>>>> commands (or IGNORE_RETURN_VALUES) for performance reasons, it should >>>>>>>>>> be possible for the user to detect why the previous values are still >>>>>>>>>> loaded. >>>>>>>>> If it were just query (static configuration), I would be okay with this >>>>>>>>> idea. 
But as per listeners - besides tainting the design (event source >>>>>>>>> should not check if there's a listener) you'd need to check *before* >>>>>>>> The source wouldn't check for listeners explicitly, the notifier would >>>>>>>> have an isPreviousValueNeeded() method and precompute that before a >>>>>>>> listener is added or after a listener is removed. I was am assuming >>>>>>>> some listeners will not need the previous value, e.g. the listeners >>>>>>>> installed by streams. >>>>>>> You can cover your warts with a make-up but you'll still have warts :) >>>>>> Cutting them off doesn't necessarily work, either :) >>>>> Yep, some people tend to fix w/ hacks instead of designing :) >>>>> >>>>>>>>> (DistributionI, CacheLoaderI) you have to call notify (cmd.perform, >>>>>>>>> EWI). So this is a space for race conditions or weird handling (if >>>>>>>>> there's a listener when I am about to call notify and my flags are not >>>>>>>>> cleared, skip the notification and pretend that this code was invoked >>>>>>>>> before the listener was registered...). Or do you have another solution >>>>>>>>> in mind (config option to disable listeners && all features using those?). >>>>>>>>> >>>>>>>> I was definitely going for the weird handling... >>>>>>>> >>>>>>>> My plan was to set a HAS_PREVIOUS_VALUE flag on the context entry when >>>>>>>> it's loaded, and check that before invoking a listener that needs the >>>>>>>> previous value. It is missing one edge case: if one thread starts a >>>>>>>> write operation, then another thread installs a listener that requires >>>>>>>> the previous value and iterates over the cache, the second thread may >>>>>>>> not see the value written by the first thread. >>>>>>> If the operations overlap, you could pretend that the write has finished >>>>>>> before the listener was invoked and simply not notify the listener. If I >>>>>>> am missing it please write it down in code. But handling this in any way >>>>>>> is still clumsy. 
>>>>>> I hope pseudo-code is fine... >>>>>> >>>>>> 1. cache.put(k, v1) starts, doesn't load the previous value v0 in the context >>>>>> 2. cache.addListener(l) runs, doesn't block >>>>>> 3. cache.entrySet().forEach() runs, finds k->v0 >>>>>> 4. cache.put(k, v1) commits k->v1, should notify the listener but >>>>>> doesn't have the previous value >>>>>> 5. cache.put(k, v1) returns, but the code that installed the listener >>>>>> thinks the value of k is still v0 >>>>> Oh OK, I should have drawn that myself when considering the scenario. >>>>> You're right, here we'll have to retry. >>>>> >>>>> All in all, I think this discussion is done. We'll tell users to stick >>>>> their flags where the sun doesn't shine and remove any inconvenient >>>>> ones. Should we issue a warning any time we're removing the flag? >>>>> >>>> If you mean that we should remove the flags from the public API, I >>>> agree. If you mean we should just ignore them, then no, because most >>>> of the flags were added for internal components that really need their >>>> semantics. >>> We can't remove them from the public API before Infinispan 10, and I think >>> that it will be quite an unpopular step even after that. But until 10, >>> I think that the common agreement was to not break query, that is ignore >>> the flags. And make write-only commands read. >>> >> So SKIP_INDEXING should not skip indexing because it can break query?? > > Ehm... Talking about all flags was wrong, and I think that I've also > mixed your input on write-only commands and on flags. This is at least partially my fault, because I was thinking of the write-only commands as regular write commands with the IGNORE_RETURN_VALUES flag. > Let's reiterate, > until we hide the flags (in 10+): > > A) how should we treat SKIP_CACHE_LOAD with respect to (clustered) > listeners, query, and write skew check? (IIRC we ignore that for > purposes of WSC) This is a perfect example of a flag that shouldn't be in the public API.
The intent was clearly to skip the cache loader for Cache.put(), but it works at a much lower level than it should, so the interaction with any other operation was undefined before ISPN-5643. In ISPN-5643 I documented it as skipping the load all the time, even for delta writes, in order to push users towards IGNORE_RETURN_VALUES, but I missed WSC (or perhaps there was a test I didn't want to break). For a simpler implementation, I'd still like SKIP_CACHE_LOAD to apply all the time. But it's hard for users to know exactly how other features will be affected, so I'm starting to think it should be like IGNORE_RETURN_VALUES/SKIP_REMOTE_LOOKUP, and the entry should be loaded from the store if it's needed for anything other than the return value (both for reads and for writes). compute() should probably load the previous value all the time. The same goes for read/read-write operations on a functional map created on top of a cache with SKIP_CACHE_LOAD. > B) for write-only, will we load the value if necessary > (listeners/query/wsc)? (I guess that the answer is yes) For query/WSC, yes. For listeners, my current thinking is that the previous value should only be available in clustered listeners, and adding a regular listener should not force write commands to load the previous value. > C) for write-only, will we treat PersistenceMode.SKIP differently? The PersistenceMode javadoc says it's only about stores, just like SKIP_CACHE_STORE, so I don't see any reason to treat write-only commands differently. > D) how should we treat SKIP_REMOTE_LOOKUP when the current write-owner > is not a read-owner? > The javadoc says the flag "will prevent retrieving a remote value [...] to return the overwritten value for {@link Cache#put(Object, Object)}", so SKIP_REMOTE_LOOKUP should be ignored when the previous value is needed for anything other than the return value. Dan > R. > >> >>> R. 
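The distinction discussed in this thread — a write that loads the previous value versus an IGNORE_RETURN_VALUES-style write that does not — can be illustrated with a plain map. This is a self-contained sketch of the semantics only, not Infinispan code (the real API is `cache.getAdvancedCache().withFlags(Flag.IGNORE_RETURN_VALUES).put(k, v)`); the class and method names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReturnValueSemantics {

    // Sketch of a write honouring an IGNORE_RETURN_VALUES-style hint:
    // when the caller declares it does not need the old value, the old
    // value is not exposed -- which is what lets a real cache skip the
    // remote lookup or store load in the first place.
    static String put(Map<String, String> map, String k, String v,
                      boolean ignoreReturnValues) {
        String previous = map.put(k, v); // a real cache would avoid this load
        return ignoreReturnValues ? null : previous;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new ConcurrentHashMap<>();
        cache.put("k", "v0");

        // Plain put(): the previous value is loaded and returned.
        System.out.println(put(cache, "k", "v1", false)); // v0

        // "Write-only" style put(): the previous value is not exposed, so a
        // listener or query component needing it must trigger the load itself.
        System.out.println(put(cache, "k", "v2", true));  // null
    }
}
```

This is exactly the tension in the thread: components such as query, WSC, and clustered listeners need the old value, so they must be able to force the load even when the caller opted out of it for the return value.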
>>> >>>> Dan >>>> >>>> >>>>> Radim >>>>> >>>>>>>> So now I'm thinking we should retry the write commands when >>>>>>>> isPreviousValueNeeded() changes... Not very appealing, but I think the >>>>>>>> performance difference is worth it. >>>>>>>> >>>>>>>>> R. >>>>>>>>> >>>>>>>>>>> 4) remove write-only commands completely (and probably functional >>>>>>>>>>> listeners as well because these will lose their purpose) >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> +1 to remove "unconditional writes", at least an entry version check should >>>>>>>>>>> be applied. >>>>>>>>>>> I believe we had already pointed out this would eventually happen, pretty >>>>>>>>>>> much for the reasons you're hitting now. >>>>>>>>>>> >>>>>>>>>> IMO version checks should be done internally, we shouldn't force the >>>>>>>>>> users of the functional API to deal with versions themselves because >>>>>>>>>> we know how hard making write skew checks work is for us :) >>>>>>>>>> >>>>>>>>>> And I wouldn't go as far as to remove the functional listeners, >>>>>>>>>> instead I would change them so that read-write listeners are invoked >>>>>>>>>> on write-only operations and they force the loading of the previous >>>>>>>>>> value. I would also add a way for the regular listeners to say whether >>>>>>>>>> they need the previous value or not. >>>>>>>>>> >>>>>>>>>>> Right now I am inclined towards 4). There could be some internal use >>>>>>>>>>> (e.g. multimaps) that could use 1) which is ran without a fancy setup, >>>>>>>>>>> though, but it's asking for trouble. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I agree! >>>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> WDYT? 
>>>>>>>>>>> >>>>>>>>>>> Radim >>>>>>>>>>> >>>>>>>>>> Cheers >>>>>>>>>> Dan >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> -- >>>>>>> Radim Vansa >>>>>>> JBoss Performance Team >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> -- >>>>> Radim Vansa >>>>> JBoss Performance Team >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavo at infinispan.org Thu Jul 6 04:41:22 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 6 Jul 2017 09:41:22 +0100 Subject: [infinispan-dev] Feedback 
for PR 5233 needed In-Reply-To: <830E5BF2-C565-4602-B209-0AEA3E067C00@redhat.com> References: <6cc2794d-c5f5-4014-8430-6bc97877f1f4@redhat.com> <830E5BF2-C565-4602-B209-0AEA3E067C00@redhat.com> Message-ID: Just a heads-up, compat mode is being deprecated and will be replaced by on-demand cache conversions (aka cache.getAdvancedCache().withEncoding(...)) Gustavo On Mon, Jul 3, 2017 at 3:27 PM, Galder Zamarreño wrote: > I already explained in another email thread, but let me make it explicit > here: > > The way compatibility mode works has a big influence on how useful > redeploying marshallers is. > > If compatibility is lazy, redeployment of a marshaller could be useful since > all the conversions happen lazily. So, conversions would only happen when > data is requested. So, if data comes from Hot Rod in byte[], only when > reading it might be converted into a POJO. If data comes as POJO, say from > embedded, you'd keep it as is, and only when read from Hot Rod you'd > convert to binary. > > If compatibility is eager, the conversion happens on write, and that can > have a negative impact if the marshaller is redeployed. If data has been > unmarshalled with marshaller A, and then you deploy marshaller B, it might > result in converting the unmarshalled POJO into a binary format that the > client can't understand. > > So, IMO, if compat mode is lazy, redeployment could work... but I think > redeployments add a layer of complexity that users might not really need. > I'd rather not have redeployments and instead focus on rolling > upgrades or freezing capabilities like Tristan mentioned to be able to bring a > server down and up wo/ issues for the user. > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 3 Jul 2017, at 09:52, Tristan Tarrant wrote: > > > > I like it a lot. > > To follow up on my comment on the PR, but to get a wider distribution, > > we really need to think about how to deal with redeployments and > > resource restarts.
> > I think restarts are unavoidable: a redeployment means dumping and > > replacing a classloader with all of its classes. There are two > > approaches I can think of: > > > > - "freezing" and "thawing" a cache via some form of persistence (which > > could also mean adding a temporary cache store) > > - separate the wildfly service lifecycle from the cache lifecycle, > > detaching/reattaching a cache without stopping when the wrapping service > > is restarted. > > > > Tristan > > > > On 6/29/17 5:20 PM, Adrian Nistor wrote: > >> People, don't be shy, the PR is in now, but things can still change > >> based on your feedback. We still have two weeks until we release the > Final. > >> > >> On 06/29/2017 03:45 PM, Adrian Nistor wrote: > >>> This PR [1] adds a new approach for defining the compat marshaller > class > >>> and the indexed entity classes (in server), and the same approach could > >>> be used in future for deployment of encoders, lucene analyzers and > >>> possibly other code bits that a user would want to add to a server in > order > >>> to implement an extension point that we support. > >>> > >>> Your feedback is welcome!
> >>> [1] https://github.com/infinispan/infinispan/pull/5233 > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170706/18206794/attachment.html From sanne at infinispan.org Tue Jul 11 11:23:04 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 11 Jul 2017 16:23:04 +0100 Subject: [infinispan-dev] tuned profiles for Infinispan ? Message-ID: Hi all, tuned is a very nice utility to apply all kinds of tuning options to a machine focusing on performance options. Of course it doesn't replace the tuning that an expert could provide for a specific system, but it gives people a quick and easy way to get to a reasonable starting point, which is much better than the generic out-of-the-box configuration of a Linux distribution. In many distributions it runs at bootstrap transparently, for example it will automatically apply a "laptop" profile if it's able to detect running on a laptop, and might be the little tool which switches your settings to a higher performance profile when you plug in the laptop.
There's some good reference here: - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Performance_Monitoring_Tools-tuned_and_tuned_adm.html It's also easy to find it integrated with other tools, e.g. you can use Ansible to set a profile. Distributions like Fedora have out of the box profiles included which are good tuning base settings to run e.g. an Oracle RDBMS, a HANA database, or just tune for latency rather than throughput. Communities like Hadoop also provide suggested tuned settings. It would be great to distribute an Infinispan optimised profile? We could ask the Fedora team to include it, I feel it's important to have a profile there, or at least have one provided by any Infinispan RPMs. Thanks, Sanne From slaskawi at redhat.com Thu Jul 13 04:11:21 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 13 Jul 2017 08:11:21 +0000 Subject: [infinispan-dev] Shall we deprecate jboss/infinispan-modules image? Message-ID: Hey, I noticed our Infinispan image built on top of Wildfly [1] has only 300 pulls (as opposed to infinispan server with 7.7k). Shall we deprecate this image? WDYT? Thanks, Sebastian [1] https://hub.docker.com/r/jboss/infinispan-modules/ -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170713/08660526/attachment.html From gustavo at infinispan.org Thu Jul 13 05:18:19 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 13 Jul 2017 10:18:19 +0100 Subject: [infinispan-dev] Shall we deprecate jboss/infinispan-modules image? In-Reply-To: References: Message-ID: +1 On Thu, Jul 13, 2017 at 9:11 AM, Sebastian Laskawiec wrote: > Hey, > > I noticed our Infinispan image built on top of Wildfly [1] has only 300 > pulls (as opposed to infinispan server with 7.7k).
> > Shall we deprecate this image? WDYT? > > Thanks, > Sebastian > > [1] https://hub.docker.com/r/jboss/infinispan-modules/ > -- > > SEBASTIAN ŁASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170713/1bdb642b/attachment.html From slaskawi at redhat.com Thu Jul 13 06:14:28 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 13 Jul 2017 10:14:28 +0000 Subject: [infinispan-dev] Docker image authentication Message-ID: Hey guys, I just wanted to give you a heads-up on a breaking change to our Docker image: https://github.com/jboss-dockerfiles/infinispan/pull/55 After that PR gets merged, the application and management user/password pairs can be specified via environment variables, passed into the bootstrap script as parameters, or autogenerated. Note there is no pre-configured user/password as it was before. Please let me know if you have any questions. Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170713/4f4207c8/attachment-0001.html From ttarrant at redhat.com Fri Jul 14 15:51:17 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 14 Jul 2017 21:51:17 +0200 Subject: [infinispan-dev] Infinispan 9.1.0.Final Message-ID: <4ba9ece3-7b0f-368c-f341-7b835a3f2cc1@redhat.com> Dear all, it is with great pleasure that we are announcing the release of Infinispan 9.1.
This release contains a number of great features: - conflict resolution - scattered caches - clustered counters - HTTP/2 support for the REST endpoint - batching support for cache stores - locked streams - cache creation/removal over Hot Rod - endpoint admin through the console - ... and much more So please check out the full announcement: http://blog.infinispan.org/2017/07/infinispan-91-bastille.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rory.odonnell at oracle.com Mon Jul 17 08:12:00 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 17 Jul 2017 13:12:00 +0100 Subject: [infinispan-dev] JDK 9 EA Build 178 & JDK 8u152 b05 are available on jdk.java.net Message-ID: <0b5fd263-31c5-7d90-4b6c-8033ec9865e7@oracle.com> Hi Galder, *JDK 9 Early Access* build 178 is available at: - jdk.java.net/9/ A summary of all the changes in this build is listed here. Changes which were introduced since the last availability email that may be of interest: * b175 - Module system implementation refresh** (6/2017 update) * b175 - no longer has "-ea" in the version string and the system property "java.version" is now simply "9" o *java -version* >java version "9" >Java(TM) SE Runtime Environment (build 9+175) >Java HotSpot(TM) 64-Bit Server VM (build 9+175, mixed mode) o *Bundle name changes:* e.g. jdk-9+175_linux-x86_bin.tar.gz *JDK 8u152 Early Access* build 05 is available at: - jdk.java.net/8/ A summary of all the changes in this build is listed here. Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170717/d71478d0/attachment.html From remerson at redhat.com Mon Jul 17 08:16:27 2017 From: remerson at redhat.com (Ryan Emerson) Date: Mon, 17 Jul 2017 08:16:27 -0400 (EDT) Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: <520545302.28090951.1500293739904.JavaMail.zimbra@redhat.com> Message-ID: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> Hi Everyone, Here's a blog post on the introduction of ConflictManager and the recent changes to partition handling. http://blog.infinispan.org/2017/07/conflict-management-and-partition.html Cheers Ryan From pedro at infinispan.org Mon Jul 17 10:41:11 2017 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 17 Jul 2017 15:41:11 +0100 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-07-17 Message-ID: <8177c5ab-23da-6e58-de57-3b9e92c926fe@infinispan.org> Hi all, The weekly meeting logs are in attachment. Unfortunately the bot didn't cooperate :) Cheers, Pedro -------------- next part -------------- Jul 17 15:00:09 #startmeeting Jul 17 15:00:43 pruivo, I'll go first Jul 17 15:00:44 ttarrant, do I have permission to start the meeting? 
Jul 17 15:00:53 pruivo, jbott isn't here Jul 17 15:01:06 pruivo, so we'll use the # commands as demarcators Jul 17 15:01:13 ack Jul 17 15:01:15 ttarrant, go ahead Jul 17 15:01:22 #topic ttarrant Jul 17 15:01:38 last week I spent some time fixing up the global javadocs Jul 17 15:02:05 ISPN-8064 was quite a cleanuo Jul 17 15:02:06 jira [ISPN-8064] Javadocs are missing some packages [Reopened (Unresolved) Bug, Major, Documentation-Core, Tristan Tarrant] https://issues.jboss.org/browse/ISPN-8064 Jul 17 15:02:07 cleanup Jul 17 15:02:53 I also fixed ISPN-8057 Jul 17 15:02:54 jira [ISPN-8057] AdminOperation task engine does not adhere to API [Resolved (Done) Bug, Major, Server/Tasks, Tristan Tarrant] https://issues.jboss.org/browse/ISPN-8057 Jul 17 15:03:19 and while I was there I made some sorely needed changes to the tasks API: ISPN-8058 Jul 17 15:03:21 jira [ISPN-8058] The org.infinispan.tasks.Task interface should be in the tasks-api package [Resolved (Done) Task, Major, Tasks, Tristan Tarrant] https://issues.jboss.org/browse/ISPN-8058 Jul 17 15:03:42 I've also been playing with the slim server Jul 17 15:04:16 creating a uberjar with infinispan, jgroups, hibernate search, lucene, netty, hotrod, rest, memcached, router, rocksdb, jdbc, jpa, sifs, narayana Jul 17 15:04:26 I've got a 37MB jar Jul 17 15:04:57 which only takes 8MB heap when started with a clustered hot rod endpoint and a single cache Jul 17 15:05:26 I plan to do a bit more work here, but my main task for then next couple of weeks will be ISPN-7779 Jul 17 15:05:27 jira [ISPN-7779] State transfer does not work with protobuf encoded entities [Resolved (Done) Bug, Major, Remote Querying, Adrian Nistor] https://issues.jboss.org/browse/ISPN-7779 Jul 17 15:05:30 oops Jul 17 15:05:50 ISPN-7776 Jul 17 15:05:51 jira [ISPN-7776] Clustered configuration state [Open (Unresolved) Feature Request, Major, Configuration, Tristan Tarrant] https://issues.jboss.org/browse/ISPN-7776 Jul 17 15:06:10 last week I also merged a 
bunch of PRs and released 9.1.0.Final Jul 17 15:06:28 thanks to everybody for the hard work in getting us there: it was quite a release Jul 17 15:06:36 and that's all Jul 17 15:06:39 #topic karesti Jul 17 15:07:45 hi all, so last week I was trying to reproduce the pb Radim spoke about in the PR of merge https://issues.jboss.org/browse/ISPN-7752 where apparently the previous value can be null in the context of QueryInterceptor Jul 17 15:07:47 jira [ISPN-7752] Merge [Pull Request Sent (Unresolved) Sub-task, Major, Katia Aresti] https://issues.jboss.org/browse/ISPN-7752 Jul 17 15:08:25 I could't make a test fail on this scenario, I decided to wait for rvansa to come back to speak with him Jul 17 15:10:02 Gustavo has already coded something that should make the failing test pass, but I couldn't make it happen, probably I wasted too much time changing a topology where I splitted the cluster and getting weird results til I understood that this was normal Jul 17 15:10:28 anyway, I learned on cluster split, so that's cool Jul 17 15:10:59 I made the embedded multimap work on encoding caches Jul 17 15:12:09 this is the multimap PR where I added the commit of the PR where I implemented the encoding on functional maps https://issues.jboss.org/browse/ISPN-7993 Jul 17 15:12:10 jira [ISPN-7993] Functional commands don't support Data convertions [Pull Request Sent (Unresolved) Feature Request, Major, Core, Katia Aresti] https://issues.jboss.org/browse/ISPN-7993 Jul 17 15:12:18 I implemented the foreach Jul 17 15:12:32 https://issues.jboss.org/browse/ISPN-7754 Jul 17 15:12:33 jira [ISPN-7754] ForEach [Pull Request Sent (Unresolved) Sub-task, Major, Katia Aresti] https://issues.jboss.org/browse/ISPN-7754 Jul 17 15:12:37 did some reviews Jul 17 15:12:44 and I'm on hotrod mutimap now Jul 17 15:12:56 https://issues.jboss.org/browse/ISPN-7887 Jul 17 15:12:57 jira [ISPN-7887] CacheMultimap over Hot Rod [Open (Unresolved) Feature Request, Major, Remote Protocols, Katia Aresti] 
https://issues.jboss.org/browse/ISPN-7887 Jul 17 15:13:30 so rvansa, tomorrow if you have time, or the day after, could be cool to talk on all this haha Jul 17 15:14:10 #topic pruivo Jul 17 15:14:18 thanks karesti Jul 17 15:14:21 hi all, Jul 17 15:14:36 last week my main focus was reviewing and integrate PR for the release Jul 17 15:15:08 also, I handled the comments and improved the hot rod transactions (server) Jul 17 15:15:30 and updated the client as well. Jul 17 15:15:48 I advanced a little on the client and my plan is to start working on the test this week. Jul 17 15:16:22 this week, I started by doing a tutorial about counters. Jul 17 15:16:34 I've opened a PR. comments are welcome Jul 17 15:16:45 here: https://github.com/infinispan/infinispan-simple-tutorials/pull/31 Jul 17 15:17:00 I'm working on a blog post about counters as well. Jul 17 15:17:12 and I think thats it from me Jul 17 15:17:14 remerson, next? Jul 17 15:17:55 sure pruivo, thanks Jul 17 15:17:58 #topic remerson Jul 17 15:18:10 Last week I worked on a variety of things Jul 17 15:18:36 A few last minutes on my PRs ready for final Jul 17 15:19:38 I also performed the console release, but ran into a few issues when releasing 9.1.0.Final unfortunately, so this meant that I had to make changes to the build process and then release 9.1.1.Final Jul 17 15:19:50 thanks ryan :-) Jul 17 15:20:20 ISPN-8066 Jul 17 15:20:21 jira [ISPN-8066] Management Console - Json meta files not included in dist build [Resolved (Done) Bug, Major, Build process/Console, Ryan Emerson] https://issues.jboss.org/browse/ISPN-8066 Jul 17 15:20:28 vblagoje: np Jul 17 15:21:09 I also created an initial implementation for ISPN-6677 and ISPN-8008 Jul 17 15:21:10 jira [ISPN-6677] Deal with unavailable dependencies during startup [Coding In Progress (Unresolved) Feature Request, Blocker, Cloud Integrations, Ryan Emerson] https://issues.jboss.org/browse/ISPN-6677 Jul 17 15:21:10 jira [ISPN-8008] Add Fault-tolerance to write-behind 
stores [Coding In Progress (Unresolved) Enhancement, Major, Loaders and Stores, Ryan Emerson] https://issues.jboss.org/browse/ISPN-8008 Jul 17 15:21:32 I should have an initial PR up for this later in the week Jul 17 15:22:41 I also started reading more about reactive streams and Rx java to help review will's latest PR, however I didn't make as much progress on this front as I would have liked due to different issues that popped up Jul 17 15:23:24 But I plan to revisit this soon, as our intention is to utilise reactive streams with the cache stores to improve how we iterate over stored entities Jul 17 15:23:42 Finally, I wrote the ConflictManager blog Jul 17 15:24:40 This week I plan to write a blog on store batching, finish up ISPN-6677, backport some features for JDG and prepare for the London server meeting next week Jul 17 15:24:41 jira [ISPN-6677] Deal with unavailable dependencies during startup [Coding In Progress (Unresolved) Feature Request, Blocker, Cloud Integrations, Ryan Emerson] https://issues.jboss.org/browse/ISPN-6677 Jul 17 15:24:54 that's all from me, rvansa next? Jul 17 15:26:08 or maybe slaskawi? Jul 17 15:26:13 remerson: Sure Jul 17 15:26:16 #topic slaskawi Jul 17 15:26:36 I guess rvansa has been doing a release recently :) Jul 17 15:26:48 Last week I was mainly working on Service Brokers stuff Jul 17 15:26:56 We are fairly close Jul 17 15:27:11 But I implemented a couple of adjustments to make the integration better: Jul 17 15:28:17 1) I revisited Galder's authentication PR and made some improvements: https://github.com/jboss-dockerfiles/infinispan/pull/55. 
By default it generates user/pass on startup or uses credentials passed in as env variables or parameters Jul 17 15:28:32 2) I exposed Jolokia ports on our Docker image: https://github.com/jboss-dockerfiles/infinispan/pull/56 Jul 17 15:28:42 2b) https://github.com/infinispan/infinispan-openshift-templates/pull/3 Jul 17 15:29:02 3) I added some improvements for generating binding secrets: https://github.com/infinispan/infinispan-openshift-templates/pull/4 Jul 17 15:29:31 4) I also released Spring Boot Starters 2.0.0.Alpha1 and prepared demo based on it https://github.com/infinispan-demos/infinispan-openshift-monitoring-and-catalog Jul 17 15:29:44 As for this week: Jul 17 15:29:51 1) I plan to finish the demo Jul 17 15:29:57 2) I plan to look into Hawkular stuff Jul 17 15:30:10 3) I plan to look into centralized logging stuff Jul 17 15:30:18 4) and prepare for OpenShift F2F Jul 17 15:30:24 All from me, vblagoje? Jul 17 15:32:00 #topic rvansa Jul 17 15:32:26 rvansa: Congrats Radim!!! Jul 17 15:32:34 Hi, I was off whole last week and most of the one before that, mostly playing with shovel & bucket etc :) Jul 17 15:32:58 This week I have to finally handle some problems with backports in Hibernate 5.1 2LC Jul 17 15:33:30 and more fun will probably pop up once I go through github notifications :) Jul 17 15:33:37 vblagoje next, please :) Jul 17 15:33:45 sure rvansa Jul 17 15:33:54 And congrats from me as well Jul 17 15:34:02 #topic vblagoje Jul 17 15:34:21 So last week was a bit shorter for me, I had Thu/Fri off for PTO Jul 17 15:34:55 Before that I was making sure that console is ready for the release, and in doing use cases I found task bug that ttarrant fixed promptly Jul 17 15:35:02 However, I overlook my own bug Jul 17 15:35:14 That remerson then fixed for the release Jul 17 15:35:36 As mentioned above in ISPN-8066 Jul 17 15:35:37 jira [ISPN-8066] Management Console - Json meta files not included in dist build [Resolved (Done) Bug, Major, Build process/Console, Ryan 
Emerson] https://issues.jboss.org/browse/ISPN-8066 Jul 17 15:36:20 I also fixed ISPN-7649 and ISPN-7657 by issuing a PR and also a few other minor issues that are not relevant any longer Jul 17 15:36:21 jira [ISPN-7649] Administration console - transaction tab allows to set invalid options [Pull Request Sent (Unresolved) Bug, Major, Console, Vladimir Blagojevic] https://issues.jboss.org/browse/ISPN-7649 Jul 17 15:36:21 jira [ISPN-7657] Administration console - Indexing tab allows invalid configuration to be set [Pull Request Sent (Unresolved) Bug, Major, Console, Vladimir Blagojevic] https://issues.jboss.org/browse/ISPN-7657 Jul 17 15:36:49 This week looking to complete my sprint, looking good so far Jul 17 15:36:57 So much from me for today Jul 17 15:37:07 And that's it ttarrant? Jul 17 15:37:21 vblagoje, I'm running the meeting :) Jul 17 15:37:33 Oh, right apologies pruivo :-) Jul 17 15:37:34 thanks karesti remerson rvansa slaskawi vblagoje Jul 17 15:37:40 #endmeeting From slaskawi at redhat.com Tue Jul 18 09:05:32 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 18 Jul 2017 13:05:32 +0000 Subject: [infinispan-dev] tuned profiles for Infinispan ? In-Reply-To: References: Message-ID: I have mixed feelings about this to be honest. On one hand this gives a really good experience for new users (just pick a profile you want to use) but on the other hand tools like this discourage users from doing proper tuning work (why should I read any documentation and do anything if everything has already been provided by Infinispan authors). Nevertheless I think it might be worth doing a POC and hosting profiles in a separate repository (to avoid user confusion). On Tue, Jul 11, 2017 at 6:49 PM Sanne Grinovero wrote: > Hi all, > > tuned is a very nice utility to apply all kind of tuning options to a > machine focusing on performance options.
> > Of course it doesn't replace the tuning that an expert could provide > for a specific system, but it gives people a quick an easy way to get > to a reasonable starting point, which is much better than the generic > out of the box of a Linux distribution. > > In many distributions it runs at boostrap transparently, for example > it will automatically apply a "laptop" profile if it's able to detect > running on a laptop, and might be the little tool which switches your > settings to an higher performance profile when you plug in the laptop. > > There's some good reference here: > - > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Performance_Monitoring_Tools-tuned_and_tuned_adm.html > > It's also easy to find it integrated with other tools, e.g. you can > use Ansible to set a profile. > > Distributions like Fedora have out of the box profiles included which > are good tuning base settings to run e.g. an Oracle RDBMS, an HANA > database, or just tune for latency rather than throughput. > Communities like Hadoop also provide suggested tuned settings. > > It would be great to distribute an Infinispan optimised profile? We > could ask the Fedora team to include it, I feel it's important to have > a profile there, or at least have one provided by any Infinispan RPMs. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170718/1cbc6da5/attachment.html From emmanuel at hibernate.org Wed Jul 19 07:40:54 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 19 Jul 2017 13:40:54 +0200 Subject: [infinispan-dev] tuned profiles for Infinispan ? In-Reply-To: References: Message-ID: I don't think it discourages; the people you mention would simply use the "default" profile. At least with a list of profiles, the idea of tuning pops into your mind and you can go further. > On 18 Jul 2017, at 15:05, Sebastian Laskawiec wrote: > > I have mixed feelings about this to be honest. On one hand this gives a really good experience for new users (just pick a profile you want to use) but on the other hand tools like this discourage users from doing proper tuning work (why should I read any documentation and do anything if everything has already been provided by Infinispan authors). > > Nevertheless I think it might be worth doing a POC and hosting profiles in a separate repository (to avoid user confusion). > > On Tue, Jul 11, 2017 at 6:49 PM Sanne Grinovero > wrote: > Hi all, > > tuned is a very nice utility to apply all kinds of tuning options to a > machine focusing on performance options. > > Of course it doesn't replace the tuning that an expert could provide > for a specific system, but it gives people a quick and easy way to get > to a reasonable starting point, which is much better than the generic > out of the box of a Linux distribution. > > In many distributions it runs at bootstrap transparently, for example > it will automatically apply a "laptop" profile if it's able to detect > running on a laptop, and might be the little tool which switches your > settings to a higher performance profile when you plug in the laptop.
> > There's some good reference here: > - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Performance_Monitoring_Tools-tuned_and_tuned_adm.html > > It's also easy to find it integrated with other tools, e.g. you can > use Ansible to set a profile. > > Distributions like Fedora have out of the box profiles included which > are good tuning base settings to run e.g. an Oracle RDBMS, a HANA > database, or just tune for latency rather than throughput. > Communities like Hadoop also provide suggested tuned settings. > > It would be great to distribute an Infinispan optimised profile? We > could ask the Fedora team to include it, I feel it's important to have > a profile there, or at least have one provided by any Infinispan RPMs. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- > SEBASTIAN ŁASKAWIEC > INFINISPAN DEVELOPER > Red Hat EMEA > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170719/9947fc43/attachment.html From dan.berindei at gmail.com Wed Jul 19 08:44:54 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 19 Jul 2017 15:44:54 +0300 Subject: [infinispan-dev] tuned profiles for Infinispan ? In-Reply-To: References: Message-ID: Can't we just copy a profile from Hibernate or WildFly? Dan On Wed, Jul 19, 2017 at 2:40 PM, Emmanuel Bernard wrote: > I don't think it discourages; the people you mention would simply use the > "default" profile.
At least with a list of profiles, the idea of tuning > pops into your mind and you can go further. > > On 18 Jul 2017, at 15:05, Sebastian Laskawiec wrote: > > I have mixed feelings about this to be honest. On one hand this gives a > really good experience for new users (just pick a profile you want to use) > but on the other hand tools like this discourage users from doing proper > tuning work (why should I read any documentation and do anything if > everything has already been provided by Infinispan authors). > > Nevertheless I think it might be worth doing a POC and hosting profiles in a > separate repository (to avoid user confusion). > > On Tue, Jul 11, 2017 at 6:49 PM Sanne Grinovero > wrote: > >> Hi all, >> >> tuned is a very nice utility to apply all kinds of tuning options to a >> machine focusing on performance options. >> >> Of course it doesn't replace the tuning that an expert could provide >> for a specific system, but it gives people a quick and easy way to get >> to a reasonable starting point, which is much better than the generic >> out of the box of a Linux distribution. >> >> In many distributions it runs at bootstrap transparently, for example >> it will automatically apply a "laptop" profile if it's able to detect >> running on a laptop, and might be the little tool which switches your >> settings to a higher performance profile when you plug in the laptop. >> >> There's some good reference here: >> - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Performance_Monitoring_Tools-tuned_and_tuned_adm.html >> >> It's also easy to find it integrated with other tools, e.g. you can >> use Ansible to set a profile. >> >> Distributions like Fedora have out of the box profiles included which >> are good tuning base settings to run e.g. an Oracle RDBMS, a HANA >> database, or just tune for latency rather than throughput.
>> Communities like Hadoop also provide suggested tuned settings. >> >> It would be great to distribute an Infinispan optimised profile? We >> could ask the Fedora team to include it, I feel it's important to have >> a profile there, or at least have one provided by any Infinispan RPMs. >> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > -- > SEBASTIAN ŁASKAWIEC > > INFINISPAN DEVELOPER > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170719/194fd840/attachment-0001.html From ttarrant at redhat.com Fri Jul 21 03:08:00 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 21 Jul 2017 09:08:00 +0200 Subject: [infinispan-dev] 9.1.1, 9.2 and 9.1.x branch Message-ID: <308f8676-ab55-c045-0c3d-6ff1bfc039bd@redhat.com> Hey all, I just wanted to clarify the situation with master and the releases. I would like to tag 9.1.1 as soon as possible with 0 testsuite failures (other fixes are also acceptable). As soon as that is done we can branch for 9.2. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From anistor at redhat.com Mon Jul 24 10:51:18 2017 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 24 Jul 2017 17:51:18 +0300 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-07-24 Message-ID: <9193ac61-0ef5-de83-598f-fe5990a2c687@redhat.com> Hi all, The weekly meeting logs are attached. jbott is missing in action again.
RIP jbott! Cheers, Adrian --------------------------------------------------------------------------------------------------------------------------------- (17:03:54) anistor: dberindei: gustavonalle: jsenko: karesti: pruivo: rigazilla: rvansa: ttarrant: vblagoje: everybody ready for the meeting? (17:04:01) vblagoje: +1 (17:04:03) dberindei: sure anistor (17:04:12) karesti: :o) (17:04:38) rvansa: braced (17:04:44) jsenko: anistor: skip me please:/ (17:04:53) jsenko: no updates (17:04:56) anistor: ok. (17:04:58) anistor: I believe ttarrant, gustavonalle, ryan and sebastian are in another meeting right now, so we can skip them too :) (17:05:32) anistor: #startmeeting (17:05:39) anistor: #topic anistor (17:06:20) anistor: it seems jbott died july 14th, and was not resurrected :) (17:06:53) pruivo: lol (17:07:37) anistor: last week I helped a bit with reviewing a PR from gustavo, who implemented json to protobuf conversion in Protostream. great work! it is now in, but we'll need to release protostream 4.2 to make it into ISPN (17:08:56) anistor: while on the occasion of fiddling with protostream I restarted my previous work of migrating to protobuf 3.x schema support, which might take forever. (17:10:21) anistor: I also started an article and short tutorial on using remote infinispan with plain serialization/jboss marshalling with query et al. no protobuf involved (17:10:46) anistor: this deserves a blog post too. coming soon (17:11:14) anistor: that's about all (17:12:15) anistor: this week I plan to fix whatever is not working in the deployment of lucene analyzers in wildfly. I hit a wall that requires debugging wildfly and I'm a bit stuck. adding more logging did not help ...
(17:12:43) anistor: #topic dberindei (17:13:34) dberindei: I haven't attended the last meeting as I was on PTO on Monday (17:14:02) dberindei: I finally opened a (preview) PR for ISPN-7919 (17:14:03) jbossbot: jira [ISPN-7919] Expose ResponseCollector in the RpcManager interface [Pull Request Sent (Unresolved) Task, Major, Core, Dan Berindei] https://issues.jboss.org/browse/ISPN-7919 (17:14:25) wsiqueir-brb is now known as wsiqueir (17:15:20) dberindei: in some ways it's a lot less than I wanted to change, because I'm still using a MapResponseCollector for most RPCs (17:15:33) dberindei: in other ways maybe I changed too much, because I had too many test failures to fix :) (17:16:01) dberindei: I also wrote the usual random PR comments (17:16:29) dberindei: and I made the xsite tests run in parallel with ISPN-5476 (17:16:30) jbossbot: jira [ISPN-5476] Cross-site tests should run in parallel [Pull Request Sent (Unresolved) Task, Major, Core/Cross-Site Replication/Test Suite - Core, Dan Berindei] https://issues.jboss.org/browse/ISPN-5476 (17:16:48) dberindei: now waiting for another run in CI (17:17:37) dberindei: I'm now trying to figure out what's still wrong with my ISPN-7997 PR, because it seems to break ScatteredStreamIteratorTest (17:17:38) jbossbot: jira [ISPN-7997] DistributedStreamIteratorTest.testLocallyForcedStream random failure [Pull Request Sent (Unresolved) Bug, Critical, Test Suite - Core, Dan Berindei] https://issues.jboss.org/browse/ISPN-7997 (17:18:03) dberindei: that's it for me, karesti next? (17:18:21) karesti: yes, thankyou dberindei (17:18:28) karesti: #topic karesti (17:18:44) ttarrant: 654523 (17:18:46) ttarrant: 279432 (17:19:28) ttarrant: 862914 (17:19:50) ttarrant: 611346 (17:19:52) ttarrant: 190773 (17:19:54) ttarrant: 048715 (17:20:05) anistor: ttarrant: lucky or unlucky numbers? (17:21:19) karesti: last week I was stuck making hotrod multimap work, I managed to unblock and I will probably open a PR soon. 
Meanwhile rvansa came back and thank you for your reviews etc ! so we merged ISPN-7752 (17:21:20) jbossbot: jira [ISPN-7752] Merge [Pull Request Sent (Unresolved) Sub-task, Major, Katia Aresti] https://issues.jboss.org/browse/ISPN-7752 (17:21:23) ttarrant: anistor, :) (17:21:27) vblagoje: it is his yubikey (17:22:36) karesti: I addressed rvansa's and others' comments on https://github.com/infinispan/infinispan/pull/5271 (17:22:37) jbossbot: git pull req [infinispan] (open) Katia Aresti ISPN-7993 Encoding support on functional maps https://github.com/infinispan/infinispan/pull/5271 (17:22:38) jbossbot: jira [ISPN-7993] Functional commands don't support Data convertions [Pull Request Sent (Unresolved) Feature Request, Major, Core, Katia Aresti] https://issues.jboss.org/browse/ISPN-7993 (17:24:32) karesti: and https://github.com/infinispan/infinispan/pull/5193 can be reviewed and merged just after 7993. I would like to do it this week. Embedded multimap is experimental and a separate building block, so even if it's not perfect yet, this can be easily changed and improved (17:25:09) rvansa: karesti: isn't 7993 blocked by Gustavo?
(17:25:23) karesti: rvansa I don't know if this can be merged or not (17:25:51) karesti: I mean, I don't know if 7993 is acceptable and after Gustavo will come with his huge PR and modify it (17:26:18) karesti: but meanwhile my work can be merged on master (17:26:29) rvansa: karesti: carrying (default) encoding classes brings a regression for embedded mode (17:27:14) karesti: rvansa, hm (17:28:40) karesti: rvansa, I'm going to continue with hotrod and give a reviewable PR to advance this work and move on, based on my code that makes encoding work and I can advance (17:29:22) karesti: so nothing will be merged yet, but the idea was to merge multimap soon on 9.2 so we can test it etc and improve (17:29:32) karesti: I'm going to move forward with locks too (17:29:48) karesti: so this is what I will be doing this week (17:30:12) karesti: locks only in embedded mode (17:30:24) karesti: pruivo, next ? (17:30:38) pruivo: yes, thanks karesti (17:30:42) pruivo: #topic pruivo (17:30:47) pruivo: hi all, (17:30:57) pruivo: last week I worked on 2 fronts :) (17:31:27) pruivo: I'm writing the test suite for the HR client transactions (17:31:37) pruivo: and I spent some time reviewing PRs (17:32:11) pruivo: and did some "blogging" about the counters (including a simple tutorial) (17:33:00) pruivo: this week, I'm handling the comments on HR server transactions and reviewing PRs (you have some comments! :)) (17:33:17) pruivo: also, I'll be on PTO next Wednesday (/cc ttarrant pzapataf) (17:33:56) pruivo: and I'm going to try to finish the HR client before the end of the week (although there are a bunch of API tests that need to be done :() (17:34:04) pruivo: and I think that's it... (17:34:08) pruivo: rigazilla, next?
(17:34:29) rigazilla: sure thanks pruivo (17:34:33) rigazilla: #topic rigazilla (17:34:58) rigazilla: short week for me the last one since it started on Wed (17:35:39) rigazilla: I worked on merging a C# pr from mgencur, with some new tests on Authentication (17:36:07) rigazilla: in the meanwhile I discovered a bug in the swig/C# integration and I'm currently working on it (17:36:22) rigazilla: mmm think all for me (17:36:44) rigazilla: rvansa: next? (17:38:30) rigazilla: or vblagoje? (17:38:32) vblagoje: let me go until rvansa comes back (17:38:35) vblagoje: ok (17:38:40) vblagoje: #topic vblagoje (17:39:20) vblagoje: Last week I investigated how we can reuse existing DMR data/ops and somehow plug it into our JMX ecosystem (17:39:57) vblagoje: It turns out we already have this solution in EAP and I tried it out, made sure that all our existing DMR is exposed through JMX and it is (17:40:05) vblagoje: This is amazing (17:40:44) vblagoje: Because we don't have to potentially now rewrite or abandon the entire DMR data/ops we worked on for years now (17:41:14) vblagoje: We could support both DMR and JMX and expose our data and ops that way. I want to investigate this in more detail this week (17:41:47) vblagoje: On the dev front in my agile cycle I had two issues I resolved: ISPN-7649 and ISPN-7642 (17:41:48) jbossbot: jira [ISPN-7649] Administration console - transaction tab allows to set invalid options [Pull Request Sent (Unresolved) Bug, Major, Console, Vladimir Blagojevic] https://issues.jboss.org/browse/ISPN-7649 (17:41:48) jbossbot: jira [ISPN-7642] Administration console - remote sites are not displayed correctly on cache container page [Coding In Progress (Unresolved) Bug, Major, Console, Vladimir Blagojevic] https://issues.jboss.org/browse/ISPN-7642 (17:42:24) vblagoje: Well, one is in the PR queue and for the other I want to hear remerson's opinion.
But both should be resolved by week's end (17:42:57) vblagoje: I have another issue or two on my plate that I want to complete by Monday (17:43:03) vblagoje: And that's it from me (17:43:09) vblagoje: ping rvansa (17:43:18) rvansa: #topic rvansa (17:43:33) rvansa: Last week I've been coordinating my work with Katia (17:44:14) rvansa: Most of the time I've worked on adding functional commands to scattered cache, since merge() should use funcs; I am close to finishing that work (17:44:15) karesti: rvansa, you can totally say you've been harnessed by me (17:44:58) karesti: rvansa, so you reviewed haha thanks again btw :) (17:45:01) rvansa: I've been also asked to check some backports in Hibernate 2LC 5.1 (17:45:33) rvansa: This week I'd like to look at the testSplit failures (17:45:54) rvansa: and then create a reproducer for the missing value for QueryInterceptor (17:46:13) rvansa: Howgh (17:46:23) rvansa: anistor: endmeeting? (17:46:23) karesti: rvansa, I can work on the reproducer this week too (17:46:34) anistor: rvansa: yes (17:46:40) anistor: #endmeeting (17:47:10) anistor: dberindei: karesti: pruivo: rigazilla: rvansa: ttarrant: vblagoje: thank you all!
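The DMR-over-JMX exposure Vladimir describes in the log above can be explored with nothing more than the standard javax.management API; a minimal sketch, runnable against any JVM (an Infinispan server would simply expose extra domains alongside the JVM's own `java.lang` beans — no Infinispan-specific ObjectNames are assumed here):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPeek {

    // Return all registered MBeans whose ObjectName matches the given pattern,
    // e.g. "java.lang:*" for the JVM's built-in beans.
    static Set<ObjectName> find(MBeanServer server, String pattern) throws Exception {
        return server.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name : find(server, "java.lang:*")) {
            // MBeanInfo lists the attributes and operations each bean exposes
            int attrs = server.getMBeanInfo(name).getAttributes().length;
            System.out.println(name + " (" + attrs + " attributes)");
        }
    }
}
```

The same `queryNames` call, pointed at a remote JMX connector, is what consoles and scripts would use to read whatever data the server chooses to expose.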
From galder at redhat.com Tue Jul 25 07:45:47 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 25 Jul 2017 13:45:47 +0200 Subject: [infinispan-dev] Important feedback for transcoding work - Re: Quick fix for ISPN-7710 In-Reply-To: References: <8538223C-2F4B-4E18-B325-7F77A1298619@redhat.com> <1cbc3eea-7a52-b13d-1a2a-0c66da7dec55@redhat.com> <62BF467B-791C-4141-85C1-ACA9AC96AED5@redhat.com> Message-ID: -- Galder Zamarre?o Infinispan, Red Hat > On 19 Jun 2017, at 13:17, Dan Berindei wrote: > > On Fri, Jun 16, 2017 at 1:07 PM, Galder Zamarre?o wrote: >> >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >>> On 15 Jun 2017, at 15:25, Adrian Nistor wrote: >>> >>> Galder, I've seen AddProtobufTask in March or April when you mentioned this issue on the devlist; that approach can work for protostream marshallers or any other code bits that the Cache does not depend on during startup, and which can be deployed anytime later. In this category we currently have : filters, converters. These are currently deployed with the help of a DeploymentUnitProcessor, but we could have done it using a ServerTask as well. >> >> ^ I'm not sure we had ServerTasks in place when we had filters and converters... But if we had server tasks then, we should have used that path. My bad if we didn't do it :\ >> >>> Now that we took the route of DUP, I think we should continue in a consistent manner and use it for other 'deployables' we identify from now on, ie. the protobuf entity marshallers. >> >> ^ Having written DUPs, I consider them to be a royal PITA. So, I'm against that. >> >>> As for encoders, lucene analyzers, compatibility marshaller, event marshaller - these are all needed during cache startup. We need to come up with something for these, so I propose to look them up using the "moduleId:slot:className" convention. >> >> As I see it, encoders/compatibility-marshaller/event-marshaller might not be needed on startup. 
If data is kept in binary and only deserialized lazily when needed, you only need them when you're going to do what you need... >> > > What if you start a node and a client immediately tries to register an > event listener? If the event listener server side requires any deserialization, I'd expect the node on startup to have a way to load the encoder to be used, either via config or a server task that's deployed by the user or pre-registered by the server. > > Not sure about the others, but for the lucene analyzers, I assume some > configurations will have to analyze/index entries that we receive via > state transfer during startup. Good point. This is a use case where unmarshalling/deserialization/decoding would be required on startup, to be able to index data. > > Dan > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Tue Jul 25 10:54:58 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 25 Jul 2017 16:54:58 +0200 Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> References: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> Message-ID: <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> Hey Ryan, Very detailed blog post! Great work on both the post and the feature! :D While reading, the following question came to my mind: how does Infinispan determine there's a conflict? Does it rely on .equals() based equality? A follow up would be: whether in the future this could be pluggable, e.g. when comparing a version field is enough to realise there's a conflict.
As opposed to relying on .equals(), if that's what's being used inside :) Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 17 Jul 2017, at 14:16, Ryan Emerson wrote: > > Hi Everyone, > > Here's a blog post on the introduction of ConflictManager and the recent changes to partition handling. > > http://blog.infinispan.org/2017/07/conflict-management-and-partition.html > > Cheers > Ryan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Tue Jul 25 11:11:50 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 25 Jul 2017 17:11:50 +0200 Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> References: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> Message-ID: <99A51374-C1E9-41EA-A26E-9E2EE4A7CB2A@redhat.com> One more thing: have you thought how we could have a simple tutorial on this feature? It'd be great to find a simple, reduced, example to show it off :) Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 25 Jul 2017, at 16:54, Galder Zamarreño wrote: > > Hey Ryan, > > Very detailed blog post! Great work on both the post and the feature! :D > > While reading, the following question came to my mind: how does Infinispan determine there's a conflict? Does it rely on .equals() based equality? > > A follow up would be: whether in the future this could be pluggable, e.g. when comparing a version field is enough to realise there's a conflict. As opposed to relying on .equals(), if that's what's being used inside :) > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > >> On 17 Jul 2017, at 14:16, Ryan Emerson wrote: >> >> Hi Everyone, >> >> Here's a blog post on the introduction of ConflictManager and the recent changes to partition handling.
>> >> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html >> >> Cheers >> Ryan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Tue Jul 25 11:12:32 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 25 Jul 2017 17:12:32 +0200 Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: <99A51374-C1E9-41EA-A26E-9E2EE4A7CB2A@redhat.com> References: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> <99A51374-C1E9-41EA-A26E-9E2EE4A7CB2A@redhat.com> Message-ID: Oh, if we can't find a simple tutorial for it, there's always https://github.com/infinispan-demos :) -- Galder Zamarre?o Infinispan, Red Hat > On 25 Jul 2017, at 17:11, Galder Zamarre?o wrote: > > One more thing: have you thought how we could have a simple tutorial on this feature? > > It'd be great to find a simple, reduced, example to show it off :) > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 25 Jul 2017, at 16:54, Galder Zamarre?o wrote: >> >> Hey Ryan, >> >> Very detailed blog post! Great work on both the post and the feature! :D >> >> While reading, the following question came to my mind: how does Infinispan determine there's a conflict? Does it rely on .equals() based equality? >> >> A follow up would be: whether in the future this could be pluggable, e.g. when comparing a version field is enough to realise there's a conflict. As opposed of relying in .equals(), if that's what's being used inside :) >> >> Cheers, >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >>> On 17 Jul 2017, at 14:16, Ryan Emerson wrote: >>> >>> Hi Everyone, >>> >>> Here's a blog post on the introduction of ConflictManager and the recent changes to partition handling. 
>>> >>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html >>> >>> Cheers >>> Ryan >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > From galder at redhat.com Wed Jul 26 08:41:41 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 26 Jul 2017 14:41:41 +0200 Subject: [infinispan-dev] Docker image authentication In-Reply-To: References: Message-ID: Looks great Sebastian! Great work :) -- Galder Zamarre?o Infinispan, Red Hat > On 13 Jul 2017, at 12:14, Sebastian Laskawiec wrote: > > Hey guys, > > I just wanted to give you a heads on some breaking change on our Docker image: https://github.com/jboss-dockerfiles/infinispan/pull/55 > > After that PR gets merged, the application and management user/password pairs could be specified via environmental variables, passed into bootstrap script as parameters or autogenerated. Note there is no pre-configured user/password as it was before. > > Please let me know if you have any questions. > > Thanks, > Sebastian > > > -- > SEBASTIAN ?ASKAWIEC > INFINISPAN DEVELOPER > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From remerson at redhat.com Wed Jul 26 08:41:56 2017 From: remerson at redhat.com (Ryan Emerson) Date: Wed, 26 Jul 2017 08:41:56 -0400 (EDT) Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: References: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> <99A51374-C1E9-41EA-A26E-9E2EE4A7CB2A@redhat.com> Message-ID: <73413902.30250862.1501072916488.JavaMail.zimbra@redhat.com> Hi Galder, Thanks for the feedback. Conflicts are detected by applying a predicate to the returned Map for each cache entry. 
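Ryan's description of a per-entry predicate over the map of values held by each node can be pictured with a toy sketch — the method name and map shape below are invented for illustration; only the "all versions equal" idea comes from his reply:

```java
import java.util.HashMap;
import java.util.Map;

public class ConflictCheck {

    // A conflict exists when the nodes owning a key do not all hold an
    // equal value for it (equality via Object.equals, as in the thread).
    static <N, V> boolean hasConflict(Map<N, V> valuesByNode) {
        return valuesByNode.values().stream().distinct().count() > 1L;
    }

    public static void main(String[] args) {
        Map<String, String> consistent = new HashMap<>();
        consistent.put("nodeA", "v1");
        consistent.put("nodeB", "v1");

        Map<String, String> conflicting = new HashMap<>();
        conflicting.put("nodeA", "v1");
        conflicting.put("nodeB", "v2");

        System.out.println(hasConflict(consistent));  // false
        System.out.println(hasConflict(conflicting)); // true
    }
}
```

A pluggable strategy as Galder suggests would amount to swapping the `distinct()`/equals test for, say, a comparison of a version field.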
Currently this predicate utilises Stream::distinct (so .equals), to check for multiple versions of an entry. So implementing pluggable strategies for defining a conflict should be trivial :) Good idea about a simple tutorial/demo, I'll look into it when I get a chance. Cheers Ryan ----- Original Message ----- > Oh, if we can't find a simple tutorial for it, there's always > https://github.com/infinispan-demos :) > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 25 Jul 2017, at 17:11, Galder Zamarre?o wrote: > > > > One more thing: have you thought how we could have a simple tutorial on > > this feature? > > > > It'd be great to find a simple, reduced, example to show it off :) > > > > Cheers, > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > >> On 25 Jul 2017, at 16:54, Galder Zamarre?o wrote: > >> > >> Hey Ryan, > >> > >> Very detailed blog post! Great work on both the post and the feature! :D > >> > >> While reading, the following question came to my mind: how does Infinispan > >> determine there's a conflict? Does it rely on .equals() based equality? > >> > >> A follow up would be: whether in the future this could be pluggable, e.g. > >> when comparing a version field is enough to realise there's a conflict. > >> As opposed of relying in .equals(), if that's what's being used inside :) > >> > >> Cheers, > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > >>> On 17 Jul 2017, at 14:16, Ryan Emerson wrote: > >>> > >>> Hi Everyone, > >>> > >>> Here's a blog post on the introduction of ConflictManager and the recent > >>> changes to partition handling. 
> >>> > >>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html > >>> > >>> Cheers > >>> Ryan > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Thu Jul 27 10:02:15 2017 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 27 Jul 2017 16:02:15 +0200 Subject: [infinispan-dev] Annotated encoded entries Message-ID: <53087102-c5a7-8d07-5ecc-23ee70f03e4e@redhat.com> Hi guys, recently the new encoding stuff has landed in the codebase, and I must admit keeping track of what's in storage format and what's not is getting hard (and I think I've spotted some bugs). I think that we could use Java 8's type annotations that would explicitly mark the contents as encoded or not encoded: @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER, ElementType.TYPE_PARAMETER, ElementType.TYPE_USE }) public @interface Storage { } @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER, ElementType.TYPE_PARAMETER, ElementType.TYPE_USE }) public @interface External { } Then the Encoder would look like: public interface Encoder { @Storage Object toStorage(@External Object content); @External Object fromStorage(@Storage Object content); } Eventually we could use tools like the Checker Framework [1] to enforce explicit casts from non-annotated type to annotated. I am still not fully decided if we should have separate annotations for keys & values. While this gives some clarity on use, those annotations would have to be rather ubiquitous (code bloat?). Nevertheless, what do you think about it?
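To make the proposal concrete, the annotated Encoder could be exercised like this — a self-contained toy where the UTF-8 encoder and every name other than Storage/External/Encoder are invented for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.nio.charset.StandardCharsets;

public class AnnotatedEncodingSketch {

    @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER,
              ElementType.TYPE_PARAMETER, ElementType.TYPE_USE })
    @interface Storage {}

    @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER,
              ElementType.TYPE_PARAMETER, ElementType.TYPE_USE })
    @interface External {}

    interface Encoder {
        @Storage Object toStorage(@External Object content);
        @External Object fromStorage(@Storage Object content);
    }

    // Toy encoder: externally a String, in storage a UTF-8 byte[]
    static final Encoder UTF8 = new Encoder() {
        @Override public Object toStorage(Object content) {
            return ((String) content).getBytes(StandardCharsets.UTF_8);
        }
        @Override public Object fromStorage(Object content) {
            return new String((byte[]) content, StandardCharsets.UTF_8);
        }
    };

    public static void main(String[] args) {
        @External Object value = "hello";
        // a checker could reject passing 'value' onward without toStorage()
        @Storage Object stored = UTF8.toStorage(value);
        System.out.println(UTF8.fromStorage(stored)); // hello
    }
}
```

With a tool like the Checker Framework, assigning an `@External` value into an `@Storage` position without going through `toStorage` would become a compile-time error rather than a runtime surprise.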
Radim [1] https://checkerframework.org/jsr308/ -- Radim Vansa JBoss Performance Team From anistor at redhat.com Thu Jul 27 17:08:38 2017 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 28 Jul 2017 00:08:38 +0300 Subject: [infinispan-dev] Annotated encoded entries In-Reply-To: <53087102-c5a7-8d07-5ecc-23ee70f03e4e@redhat.com> References: <53087102-c5a7-8d07-5ecc-23ee70f03e4e@redhat.com> Message-ID: <8a615e57-62a9-7678-0d8d-fd66064c0bd0@redhat.com> That's clever, but a bit too much bloat I'm afraid. On 07/27/2017 05:02 PM, Radim Vansa wrote: > Hi guys, > > recently the new encoding stuff has dawned in the codebase, and I must > admit that keeping track what's in storage-format and what's not (and I > think I've spotted some bugs). I think that we could use Java 8's type > annotations that would explicitly mark the contents as encoded or not > encoded: > > @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, > ElementType.PARAMETER, ElementType.TYPE_PARAMETER, ElementType.TYPE_USE }) > public @interface Storage { > } > > @Target({ ElementType.FIELD, ElementType.LOCAL_VARIABLE, > ElementType.PARAMETER, ElementType.TYPE_PARAMETER, ElementType.TYPE_USE }) > public @interface External { > } > > Then the Encoder would look like: > > public interface Encoder { > @Storage Object toStorage(@External Object content); > @External Object fromStorage(@Storage Object content); > > } > > Eventually we could use tools like the Checker Framework [1] to enforce > explicit casts from non-annotated type to annotated. I am still not > fully decided if we should have separate annotations for keys & values. > > While this gives some clarity on use, those annotations would have to be > rather ubiquitous (code bloat?). Nevertheless, what do you think about it? 
> > Radim
> >
> > [1] https://checkerframework.org/jsr308/
> >
> >
> >

From rvansa at redhat.com Fri Jul 28 06:38:26 2017
From: rvansa at redhat.com (Radim Vansa)
Date: Fri, 28 Jul 2017 12:38:26 +0200
Subject: [infinispan-dev] Transactional consistency of query
Message-ID: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com>

Hi,

while working on ISPN-7806 I am wondering how queries should work with transactions. Right now it seems that updates to the index are done during either regular command execution (on the originator [A]) or the prepare command on remote nodes [B]. Both of these cause rolled-back transactions to be seen, so these must be treated as bugs [C].

If we index the data after committing the transaction, there would be a time window when we could see the updated entries but the index would not reflect that. That might be an acceptable limitation if query matching misses some entity, but it's also possible that we retrieve the query result key-set and then (after retrieving the full entities) return something that does not match the query. One of the reproducers for ISPN-7806 I've written [1] triggers a situation where listing all Persons could return an Animal (a different entity type), so I think that there's no validity post-check (though these reproducers don't use transactions).

Therefore, I wonder if the index should contain only the key; maybe we should store a unique version and invalidate the query if some of the entries have changed.

If we index the data before committing the transaction, a similar situation could happen: the index will return keys for entities that will match in the future, but the actually returned list will contain stale entities.

What's the overall plan? Do we just accept inconsistencies? In that case, please add a verbose statement in the docs and point me to that.

And if I've misinterpreted something and raised the red flag in error, please let me know.
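For illustration, the version-based invalidation could look roughly like this (a hypothetical sketch with invented names, not an implementation proposal): the index stores the key together with the version the value had when it was indexed, and a hit is discarded when the entry's current version no longer matches.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VersionedIndexSketch {

    // What the index stores per hit: the key plus the version the value had
    // when it was indexed (names invented for illustration).
    static final class IndexHit {
        final String key;
        final long version;
        IndexHit(String key, long version) { this.key = key; this.version = version; }
    }

    // A hit is trusted only if the entry's current version still matches the
    // version recorded at indexing time; otherwise the entry changed (or was
    // removed) since indexing and the hit is invalidated.
    static List<String> validKeys(List<IndexHit> hits, Map<String, Long> currentVersions) {
        List<String> valid = new ArrayList<>();
        for (IndexHit hit : hits) {
            Long current = currentVersions.get(hit.key);
            if (current != null && current == hit.version) {
                valid.add(hit.key);
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        Map<String, Long> versions = new HashMap<>();
        versions.put("p1", 1L);
        versions.put("p2", 3L);            // updated since it was indexed at version 2
        List<IndexHit> hits = Arrays.asList(
                new IndexHit("p1", 1L),
                new IndexHit("p2", 2L),    // stale: version mismatch
                new IndexHit("p3", 1L));   // removed since indexing
        System.out.println(validKeys(hits, versions)); // only p1 survives
    }
}
```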
Radim [A] This seems to be a regression after moving towards async interceptors - our impl of org.hibernate.search.backend.TransactionContext is incorrectly bound to TransactionManager. Then we seem to be running out of transaction and are happy to index it right away. The thread that executes the interceptor handler is also dependent on ownership (due to remote LockCommand execution), so I think that it does not fail the local-mode tests. [B] ... and it does so twice as a regression after ISPN-7840 but that's easy to fix. [C] Indexing in prepare command was OK before ISPN-7840 with pessimistic locking which does not send the CommitCommand, but now that the QI has been moved below EWI it means that we're indexing before storing the actual values. Optimistic locking was not correct, though. [1] https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 -- Radim Vansa JBoss Performance Team From anistor at redhat.com Fri Jul 28 08:59:50 2017 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 28 Jul 2017 15:59:50 +0300 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> Message-ID: <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> My feeling regarding this was to accept such inconsistencies, but maybe I'm wrong. I've always regarded indexing as being async in general, even though it did behave as if being sync in some not so rare circumstances, which probably made people believe it is expected to be sync in general. I'm curious what Sanne and Gustavo have in mind. Please note that updating the index synchronously during tx commit was always regarded as a performance bottleneck, so it was out of the question. And that would not always work anyway, it all depends on the underlying indexing technology. For example when using HS with elastic search you have to accept that elastic indexing is always async. 
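To make the failure mode concrete, here is a toy sketch (all names invented; this is not Infinispan's actual query path): because the index may lag behind committed writes, its hits are treated only as hints, and each freshly loaded value is re-checked against the query's predicate before being returned.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class StaleHitFilter {

    // Load the current value for each key the index returned and re-validate it,
    // dropping entries that were removed or no longer match the criteria.
    public static <K, V> List<V> loadAndRevalidate(List<K> indexHits, Map<K, V> cache,
                                                   Predicate<V> criteria) {
        List<V> result = new ArrayList<>();
        for (K key : indexHits) {
            V current = cache.get(key);
            if (current != null && criteria.test(current)) {
                result.add(current);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        cache.put("k1", "Person:Alice");
        cache.put("k2", "Animal:Rex");                 // the index still thinks k2 matches
        List<String> hits = Arrays.asList("k1", "k2", "k3"); // k3 removed after indexing
        System.out.println(loadAndRevalidate(hits, cache, v -> v.startsWith("Person:")));
    }
}
```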
And there might not be an index at all. It's very possible that the query runs unindexed. In that case it will use distributed streams, which have their own transaction issues.

In the past we had some bugs where a matching entry was deleted/evicted right before the search results were returned to the user, so loading of those values failed in a silent way. Those queries mistakenly returned some unexpected nulls among other valid results. The fix was to just filter out those nulls. We could enhance that to double-check that the returned entry is indeed of the requested type, to also cover the issue that you encountered.

Adrian

On 07/28/2017 01:38 PM, Radim Vansa wrote:
> Hi,
>
> while working on ISPN-7806 I am wondering how should queries work with
> transactions. Right now it seems that updates to index are done during
> either regular command execution (on originator [A]) or prepare command
> on remote nodes [B]. Both of these cause rolled-back transactions to be
> seen, so these must be treated as bugs [C].
>
> If we index the data after committing the transaction, there would be a
> time window when we could see the updated entries but the index would
> not reflect that. That might be acceptable limitation if a
> query-matching misses some entity, but it's also possible that we
> retrieve the query result key-set and then (after retrieving full
> entities) we return something that does not match the query. One of the
> reproducers for ISPN-7806 I've written [1] triggers a situation where
> listing all Persons could return Animal (different entity type), so I
> think that there's no validity post-check (though these reproducers
> don't use transactions).
>
> Therefore, I wonder if the index should contain only the key; maybe we
> should store an unique version and invalidate the query if some of the
> entries has changed.
> > If we index the data before committing the transaction, similar > situation could happen: the index will return keys for entities that > will match in the future but the actually returned list will contain > stale entities. > > What's the overall plan? Do we just accept inconsistencies? In that > case, please add a verbose statement in docs and point me to that. > > And if I've misinterpreted something and raised the red flag in error, > please let me know. > > Radim > > [A] This seems to be a regression after moving towards async > interceptors - our impl of > org.hibernate.search.backend.TransactionContext is incorrectly bound to > TransactionManager. Then we seem to be running out of transaction and > are happy to index it right away. The thread that executes the > interceptor handler is also dependent on ownership (due to remote > LockCommand execution), so I think that it does not fail the local-mode > tests. > > [B] ... and it does so twice as a regression after ISPN-7840 but that's > easy to fix. > > [C] Indexing in prepare command was OK before ISPN-7840 with pessimistic > locking which does not send the CommitCommand, but now that the QI has > been moved below EWI it means that we're indexing before storing the > actual values. Optimistic locking was not correct, though. > > [1] > https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 > > From rvansa at redhat.com Fri Jul 28 09:42:41 2017 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 28 Jul 2017 15:42:41 +0200 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> Message-ID: <709048aa-a0c9-8ccb-4256-8774287763a8@redhat.com> On 07/28/2017 02:59 PM, Adrian Nistor wrote: > My feeling regarding this was to accept such inconsistencies, but > maybe I'm wrong. 
I've always regarded indexing as being async in
> general, even though it did behave as if being sync in some not so
> rare circumstances, which probably made people believe it is expected
> to be sync in general. I'm curious what Sanne and Gustavo have in mind.
>
> Please note that updating the index synchronously during tx commit was
> always regarded as a performance bottleneck, so it was out of the
> question. And that would not always work anyway, it all depends on the
> underlying indexing technology. For example when using HS with elastic
> search you have to accept that elastic indexing is always async.

OK, queries being inherently async would be acceptable for me (as long as we document it - preferably blogging about the limitations, too). But async should mean that the result looks as if it was produced at some earlier point, maybe with the ordering mixed a bit, not that it's inconsistent (e.g. returning entries that do not match the criteria). Also, in case we store fields in the index and return a projection, those values should not expose any non-committed data. I guess that expecting a query in a transaction to reflect uncommitted state would probably be too much :)

>
> And there might not be an index at all. It's very possible that the
> query runs unindexed. In that case it will use distributed streams
> which have their own transaction issues.

Yes; please leave non-indexed queries aside from this discussion.

>
> In the past we had some bugs were a matching entry was deleted/evicted
> right before the search results were returned to the user, so loading
> of those values failed in a silent way. Those queries mistakenly
> returned some unexpected nulls among other valid results. The fix was
> to just filter out those nulls. We could enhance that to double check
> that the returned entry is indeed of the requested type, to also cover
> the issue that you encountered.

It's not just entity type, criteria may be invalidated by any field change.
Would a full criteria check on the returned entities be too expensive? Can you even check e.g. native queries against provided set of objects? Radim > > Adrian > > On 07/28/2017 01:38 PM, Radim Vansa wrote: >> Hi, >> >> while working on ISPN-7806 I am wondering how should queries work with >> transactions. Right now it seems that updates to index are done during >> either regular command execution (on originator [A]) or prepare command >> on remote nodes [B]. Both of these cause rolled-back transactions to be >> seen, so these must be treated as bugs [C]. >> >> If we index the data after committing the transaction, there would be a >> time window when we could see the updated entries but the index would >> not reflect that. That might be acceptable limitation if a >> query-matching misses some entity, but it's also possible that we >> retrieve the query result key-set and then (after retrieving full >> entities) we return something that does not match the query. One of the >> reproducers for ISPN-7806 I've written [1] triggers a situation where >> listing all Persons could return Animal (different entity type), so I >> think that there's no validity post-check (though these reproducers >> don't use transactions). >> >> Therefore, I wonder if the index should contain only the key; maybe we >> should store an unique version and invalidate the query if some of the >> entries has changed. >> >> If we index the data before committing the transaction, similar >> situation could happen: the index will return keys for entities that >> will match in the future but the actually returned list will contain >> stale entities. >> >> What's the overall plan? Do we just accept inconsistencies? In that >> case, please add a verbose statement in docs and point me to that. >> >> And if I've misinterpreted something and raised the red flag in error, >> please let me know. 
>> >> Radim >> >> [A] This seems to be a regression after moving towards async >> interceptors - our impl of >> org.hibernate.search.backend.TransactionContext is incorrectly bound to >> TransactionManager. Then we seem to be running out of transaction and >> are happy to index it right away. The thread that executes the >> interceptor handler is also dependent on ownership (due to remote >> LockCommand execution), so I think that it does not fail the local-mode >> tests. >> >> [B] ... and it does so twice as a regression after ISPN-7840 but that's >> easy to fix. >> >> [C] Indexing in prepare command was OK before ISPN-7840 with pessimistic >> locking which does not send the CommitCommand, but now that the QI has >> been moved below EWI it means that we're indexing before storing the >> actual values. Optimistic locking was not correct, though. >> >> [1] >> https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 >> >> >> > -- Radim Vansa JBoss Performance Team From gustavo at infinispan.org Mon Jul 31 03:44:38 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 31 Jul 2017 08:44:38 +0100 Subject: [infinispan-dev] Annotated encoded entries In-Reply-To: <8a615e57-62a9-7678-0d8d-fd66064c0bd0@redhat.com> References: <53087102-c5a7-8d07-5ecc-23ee70f03e4e@redhat.com> <8a615e57-62a9-7678-0d8d-fd66064c0bd0@redhat.com> Message-ID: On Thu, Jul 27, 2017 at 10:08 PM, Adrian Nistor wrote: > That's clever, but a bit too much bloat I'm afraid. > > Your mileage may vary, but in the spots I needed to handle those conversions, a mix of comments and proper local variable naming seemed enough, so I'm with Adrian in this. Gustavo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170731/cc048fcd/attachment.html From gustavo at infinispan.org Mon Jul 31 04:41:41 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 31 Jul 2017 09:41:41 +0100 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> Message-ID: IMO, indexing should be eventually consistent, as this offers the best performance. On tx-caches, although Lucene has hooks to be enlisted in a transaction [1], some backends (elasticsearch) don't expose this, and Hibernate Search by design doesn't make use of it. So currently we must deal with inconsistencies after the fact: checking for nulls, mismatched types and so on. [1] https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TwoPhaseCommit.html On Fri, Jul 28, 2017 at 1:59 PM, Adrian Nistor wrote: > My feeling regarding this was to accept such inconsistencies, but maybe > I'm wrong. I've always regarded indexing as being async in general, even > though it did behave as if being sync in some not so rare circumstances, > which probably made people believe it is expected to be sync in general. > I'm curious what Sanne and Gustavo have in mind. > > Please note that updating the index synchronously during tx commit was > always regarded as a performance bottleneck, so it was out of the > question. > And that would not always work anyway, it all depends on the > underlying indexing technology. For example when using HS with elastic > search you have to accept that elastic indexing is always async. > > And there might not be an index at all. It's very possible that the > query runs unindexed. In that case it will use distributed streams which > have their own transaction issues. 
> > In the past we had some bugs were a matching entry was deleted/evicted > right before the search results were returned to the user, so loading of > those values failed in a silent way. Those queries mistakenly returned > some unexpected nulls among other valid results. The fix was to just > filter out those nulls. We could enhance that to double check that the > returned entry is indeed of the requested type, to also cover the issue > that you encountered. > > Adrian > > On 07/28/2017 01:38 PM, Radim Vansa wrote: > > Hi, > > > > while working on ISPN-7806 I am wondering how should queries work with > > transactions. Right now it seems that updates to index are done during > > either regular command execution (on originator [A]) or prepare command > > on remote nodes [B]. Both of these cause rolled-back transactions to be > > seen, so these must be treated as bugs [C]. > > > > If we index the data after committing the transaction, there would be a > > time window when we could see the updated entries but the index would > > not reflect that. That might be acceptable limitation if a > > query-matching misses some entity, but it's also possible that we > > retrieve the query result key-set and then (after retrieving full > > entities) we return something that does not match the query. One of the > > reproducers for ISPN-7806 I've written [1] triggers a situation where > > listing all Persons could return Animal (different entity type), so I > > think that there's no validity post-check (though these reproducers > > don't use transactions). > > > > Therefore, I wonder if the index should contain only the key; maybe we > > should store an unique version and invalidate the query if some of the > > entries has changed. > > > > If we index the data before committing the transaction, similar > > situation could happen: the index will return keys for entities that > > will match in the future but the actually returned list will contain > > stale entities. 
> > > > What's the overall plan? Do we just accept inconsistencies? In that > > case, please add a verbose statement in docs and point me to that. > > > > And if I've misinterpreted something and raised the red flag in error, > > please let me know. > > > > Radim > > > > [A] This seems to be a regression after moving towards async > > interceptors - our impl of > > org.hibernate.search.backend.TransactionContext is incorrectly bound to > > TransactionManager. Then we seem to be running out of transaction and > > are happy to index it right away. The thread that executes the > > interceptor handler is also dependent on ownership (due to remote > > LockCommand execution), so I think that it does not fail the local-mode > > tests. > > > > [B] ... and it does so twice as a regression after ISPN-7840 but that's > > easy to fix. > > > > [C] Indexing in prepare command was OK before ISPN-7840 with pessimistic > > locking which does not send the CommitCommand, but now that the QI has > > been moved below EWI it means that we're indexing before storing the > > actual values. Optimistic locking was not correct, though. > > > > [1] > > https://github.com/rvansa/infinispan/commit/ > 1d62c9b84888c7ac21a9811213b5657aa44ff546 > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170731/0961b9f9/attachment.html From ttarrant at redhat.com Mon Jul 31 05:12:48 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 31 Jul 2017 11:12:48 +0200 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> Message-ID: <3c85ea2a-4b65-291f-20ba-a476f4247cdd@redhat.com> Shouldn't we use an appropriate conflict resolution strategy for this so that in case of partitions we repair the index ? Tristan On 7/31/17 10:41 AM, Gustavo Fernandes wrote: > IMO, indexing should be eventually consistent, as this offers the best > performance. > > On tx-caches, although Lucene has hooks to be enlisted in a transaction > [1], some backends (elasticsearch) don't > expose this, and Hibernate Search by design doesn't make use of it. So > currently we must deal with inconsistencies > after the fact: checking for nulls, mismatched types and so on. > > [1] > https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TwoPhaseCommit.html > > > On Fri, Jul 28, 2017 at 1:59 PM, Adrian Nistor > wrote: > > My feeling regarding this was to accept such inconsistencies, but maybe > I'm wrong. I've always regarded indexing as being async in general, even > though it did behave as if being sync in some not so rare circumstances, > which probably made people believe it is expected to be sync in general. > I'm curious what Sanne and Gustavo have in mind. > > Please note that updating the index synchronously during tx commit was > always regarded as a performance bottleneck, so it was out of the > question. > > And that would not always work anyway, it all depends on the > underlying indexing technology. For example when using HS with elastic > search you have to accept that elastic indexing is always async. > > And there might not be an index at all. 
It's very possible that the > query runs unindexed. In that case it will use distributed streams which > have their own transaction issues. > > In the past we had some bugs were a matching entry was deleted/evicted > right before the search results were returned to the user, so loading of > those values failed in a silent way. Those queries mistakenly returned > some unexpected nulls among other valid results. The fix was to just > filter out those nulls. We could enhance that to double check that the > returned entry is indeed of the requested type, to also cover the issue > that you encountered. > > Adrian > > On 07/28/2017 01:38 PM, Radim Vansa wrote: > > Hi, > > > > while working on ISPN-7806 I am wondering how should queries work > with > > transactions. Right now it seems that updates to index are done > during > > either regular command execution (on originator [A]) or prepare > command > > on remote nodes [B]. Both of these cause rolled-back transactions > to be > > seen, so these must be treated as bugs [C]. > > > > If we index the data after committing the transaction, there > would be a > > time window when we could see the updated entries but the index would > > not reflect that. That might be acceptable limitation if a > > query-matching misses some entity, but it's also possible that we > > retrieve the query result key-set and then (after retrieving full > > entities) we return something that does not match the query. One > of the > > reproducers for ISPN-7806 I've written [1] triggers a situation where > > listing all Persons could return Animal (different entity type), so I > > think that there's no validity post-check (though these reproducers > > don't use transactions). > > > > Therefore, I wonder if the index should contain only the key; > maybe we > > should store an unique version and invalidate the query if some > of the > > entries has changed. 
> > > > If we index the data before committing the transaction, similar > > situation could happen: the index will return keys for entities that > > will match in the future but the actually returned list will contain > > stale entities. > > > > What's the overall plan? Do we just accept inconsistencies? In that > > case, please add a verbose statement in docs and point me to that. > > > > And if I've misinterpreted something and raised the red flag in > error, > > please let me know. > > > > Radim > > > > [A] This seems to be a regression after moving towards async > > interceptors - our impl of > > org.hibernate.search.backend.TransactionContext is incorrectly > bound to > > TransactionManager. Then we seem to be running out of transaction and > > are happy to index it right away. The thread that executes the > > interceptor handler is also dependent on ownership (due to remote > > LockCommand execution), so I think that it does not fail the > local-mode > > tests. > > > > [B] ... and it does so twice as a regression after ISPN-7840 but > that's > > easy to fix. > > > > [C] Indexing in prepare command was OK before ISPN-7840 with > pessimistic > > locking which does not send the CommitCommand, but now that the > QI has > > been moved below EWI it means that we're indexing before storing the > > actual values. Optimistic locking was not correct, though. 
> > > > [1] > > > https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 > > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rvansa at redhat.com Mon Jul 31 06:27:56 2017 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 31 Jul 2017 12:27:56 +0200 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: <3c85ea2a-4b65-291f-20ba-a476f4247cdd@redhat.com> References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> <3c85ea2a-4b65-291f-20ba-a476f4247cdd@redhat.com> Message-ID: On 07/31/2017 11:12 AM, Tristan Tarrant wrote: > Shouldn't we use an appropriate conflict resolution strategy for this so > that in case of partitions we repair the index ? This is not about eventual consistency in case of partitions, just eventually publishing the change in the index after the transaction completes. Making index consistent after a split brain (even with DENY_ALL policy some operations may end up in a half-complete state) is a completely different issue and I think nobody ever tried to deal with that. R. > > Tristan > > On 7/31/17 10:41 AM, Gustavo Fernandes wrote: >> IMO, indexing should be eventually consistent, as this offers the best >> performance. >> >> On tx-caches, although Lucene has hooks to be enlisted in a transaction >> [1], some backends (elasticsearch) don't >> expose this, and Hibernate Search by design doesn't make use of it. So >> currently we must deal with inconsistencies >> after the fact: checking for nulls, mismatched types and so on. 
>> >> [1] >> https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TwoPhaseCommit.html >> >> >> On Fri, Jul 28, 2017 at 1:59 PM, Adrian Nistor > > wrote: >> >> My feeling regarding this was to accept such inconsistencies, but maybe >> I'm wrong. I've always regarded indexing as being async in general, even >> though it did behave as if being sync in some not so rare circumstances, >> which probably made people believe it is expected to be sync in general. >> I'm curious what Sanne and Gustavo have in mind. >> >> Please note that updating the index synchronously during tx commit was >> always regarded as a performance bottleneck, so it was out of the >> question. >> >> And that would not always work anyway, it all depends on the >> underlying indexing technology. For example when using HS with elastic >> search you have to accept that elastic indexing is always async. >> >> And there might not be an index at all. It's very possible that the >> query runs unindexed. In that case it will use distributed streams which >> have their own transaction issues. >> >> In the past we had some bugs were a matching entry was deleted/evicted >> right before the search results were returned to the user, so loading of >> those values failed in a silent way. Those queries mistakenly returned >> some unexpected nulls among other valid results. The fix was to just >> filter out those nulls. We could enhance that to double check that the >> returned entry is indeed of the requested type, to also cover the issue >> that you encountered. >> >> Adrian >> >> On 07/28/2017 01:38 PM, Radim Vansa wrote: >> > Hi, >> > >> > while working on ISPN-7806 I am wondering how should queries work >> with >> > transactions. Right now it seems that updates to index are done >> during >> > either regular command execution (on originator [A]) or prepare >> command >> > on remote nodes [B]. Both of these cause rolled-back transactions >> to be >> > seen, so these must be treated as bugs [C]. 
>> > >> > If we index the data after committing the transaction, there >> would be a >> > time window when we could see the updated entries but the index would >> > not reflect that. That might be acceptable limitation if a >> > query-matching misses some entity, but it's also possible that we >> > retrieve the query result key-set and then (after retrieving full >> > entities) we return something that does not match the query. One >> of the >> > reproducers for ISPN-7806 I've written [1] triggers a situation where >> > listing all Persons could return Animal (different entity type), so I >> > think that there's no validity post-check (though these reproducers >> > don't use transactions). >> > >> > Therefore, I wonder if the index should contain only the key; >> maybe we >> > should store an unique version and invalidate the query if some >> of the >> > entries has changed. >> > >> > If we index the data before committing the transaction, similar >> > situation could happen: the index will return keys for entities that >> > will match in the future but the actually returned list will contain >> > stale entities. >> > >> > What's the overall plan? Do we just accept inconsistencies? In that >> > case, please add a verbose statement in docs and point me to that. >> > >> > And if I've misinterpreted something and raised the red flag in >> error, >> > please let me know. >> > >> > Radim >> > >> > [A] This seems to be a regression after moving towards async >> > interceptors - our impl of >> > org.hibernate.search.backend.TransactionContext is incorrectly >> bound to >> > TransactionManager. Then we seem to be running out of transaction and >> > are happy to index it right away. The thread that executes the >> > interceptor handler is also dependent on ownership (due to remote >> > LockCommand execution), so I think that it does not fail the >> local-mode >> > tests. >> > >> > [B] ... and it does so twice as a regression after ISPN-7840 but >> that's >> > easy to fix. 
>> > >> > [C] Indexing in prepare command was OK before ISPN-7840 with >> pessimistic >> > locking which does not send the CommitCommand, but now that the >> QI has >> > been moved below EWI it means that we're indexing before storing the >> > actual values. Optimistic locking was not correct, though. >> > >> > [1] >> > >> https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 >> >> > >> > >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- Radim Vansa JBoss Performance Team From anistor at redhat.com Mon Jul 31 06:30:28 2017 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 31 Jul 2017 13:30:28 +0300 Subject: [infinispan-dev] Transactional consistency of query In-Reply-To: References: <85aaa2cd-b55b-90ec-50a2-a777c5907315@redhat.com> <7ff9bb19-c196-c438-a710-d9bb10146cf7@redhat.com> Message-ID: <1523c072-798a-2c4e-535a-3c534dc12a59@redhat.com> Yup, I also meant 'eventually consistent' when saying such inconsistencies should be acceptable. At some point in time after transactions have been committed and topology changes have been handled (state transfer completed) and we have a steady state we should see a consistent index when querying. On 07/31/2017 11:41 AM, Gustavo Fernandes wrote: > IMO, indexing should be eventually consistent, as this offers the best > performance. > > On tx-caches, although Lucene has hooks to be enlisted in a > transaction [1], some backends (elasticsearch) don't > expose this, and Hibernate Search by design doesn't make use of it. So > currently we must deal with inconsistencies > after the fact: checking for nulls, mismatched types and so on. 
> > [1] > https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TwoPhaseCommit.html > > > On Fri, Jul 28, 2017 at 1:59 PM, Adrian Nistor > wrote: > > My feeling regarding this was to accept such inconsistencies, but > maybe > I'm wrong. I've always regarded indexing as being async in > general, even > though it did behave as if being sync in some not so rare > circumstances, > which probably made people believe it is expected to be sync in > general. > I'm curious what Sanne and Gustavo have in mind. > > Please note that updating the index synchronously during tx commit was > always regarded as a performance bottleneck, so it was out of the > question. > > And that would not always work anyway, it all depends on the > underlying indexing technology. For example when using HS with elastic > search you have to accept that elastic indexing is always async. > > And there might not be an index at all. It's very possible that the > query runs unindexed. In that case it will use distributed streams > which > have their own transaction issues. > > In the past we had some bugs were a matching entry was deleted/evicted > right before the search results were returned to the user, so > loading of > those values failed in a silent way. Those queries mistakenly returned > some unexpected nulls among other valid results. The fix was to just > filter out those nulls. We could enhance that to double check that the > returned entry is indeed of the requested type, to also cover the > issue > that you encountered. > > Adrian > > On 07/28/2017 01:38 PM, Radim Vansa wrote: > > Hi, > > > > while working on ISPN-7806 I am wondering how should queries > work with > > transactions. Right now it seems that updates to index are done > during > > either regular command execution (on originator [A]) or prepare > command > > on remote nodes [B]. Both of these cause rolled-back > transactions to be > > seen, so these must be treated as bugs [C]. 
> > > > If we index the data after committing the transaction, there > would be a > > time window when we could see the updated entries but the index > would > > not reflect that. That might be an acceptable limitation if a > > query-matching misses some entity, but it's also possible that we > > retrieve the query result key-set and then (after retrieving full > > entities) we return something that does not match the query. One > of the > > reproducers for ISPN-7806 I've written [1] triggers a situation > where > > listing all Persons could return Animal (different entity type), > so I > > think that there's no validity post-check (though these reproducers > > don't use transactions). > > > > Therefore, I wonder if the index should contain only the key; > maybe we > > should store a unique version and invalidate the query if some > of the > > entries have changed. > > > > If we index the data before committing the transaction, a similar > > situation could happen: the index will return keys for entities that > > will match in the future but the actually returned list will contain > > stale entities. > > > > What's the overall plan? Do we just accept inconsistencies? In that > > case, please add a verbose statement in docs and point me to that. > > > > And if I've misinterpreted something and raised the red flag in > error, > > please let me know. > > > > Radim > > > > [A] This seems to be a regression after moving towards async > > interceptors - our impl of > > org.hibernate.search.backend.TransactionContext is incorrectly > bound to > > TransactionManager. Then we seem to be running outside of a > transaction and > > are happy to index it right away. The thread that executes the > > interceptor handler is also dependent on ownership (due to remote > > LockCommand execution), so I think that it does not fail the > local-mode > > tests. > > > > [B] ... and it does so twice as a regression after ISPN-7840 but > that's > > easy to fix.
> > > > [C] Indexing in prepare command was OK before ISPN-7840 with > pessimistic > > locking which does not send the CommitCommand, but now that the > QI has > > been moved below EWI it means that we're indexing before storing the > > actual values. Optimistic locking was not correct, though. > > > > [1] > > > https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546 > > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170731/f4f94d1a/attachment.html From slaskawi at redhat.com Mon Jul 31 09:26:31 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 31 Jul 2017 13:26:31 +0000 Subject: [infinispan-dev] Spring Boot Starters 2.0.0.Alpha1 release Message-ID: Hey, I'm happy to announce the Infinispan Spring Boot Starters 2.0.0.Alpha1 release. The release includes: - Creating caches defined by the Configuration - Testsuite cleanup - Created two separate starters for embedded and client/server use cases - Allow using the Infinispan Spring Boot Starter along with the one provided by Spring Boot - Moved sources into separate packages You may find more info here: https://github.com/infinispan/infinispan-spring-boot/releases/tag/2.0.0.Alpha1 Big thank you to Luca Burgazzoli for all the heavy weightlifting! Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170731/1946ad34/attachment.html From ttarrant at redhat.com Mon Jul 31 12:02:51 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 31 Jul 2017 18:02:51 +0200 Subject: [infinispan-dev] URGENT: Master test failures Message-ID: Hi all, these are some of the failures I have seen recently in master. Some of these are already being ignored. If you know of more, please add them. It seems like there are some recurring failures with some rehashing tests. We REALLY REALLY need to bring this list down to 0 ASAP ! Let us stop every other activity until we get there. Please feel free to comment, disable, add any missing known failures. Tristan OptimisticPrimaryOwnerCrashDuringPrepareTest -------------------------------------------- Tracked by ISPN-8139. http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.distribution.rehash/OptimisticPrimaryOwnerCrashDuringPrepareTest/testPrimaryOwnerCrash/ JCacheTwoCachesBasicOpsTest --------------------------- I hadn't seen this in a while. Tracked by ISPN-6952 http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.jcache/JCacheTwoCachesBasicOpsTest/testRemovedListener_remote_/ DistributedStreamIteratorWithPassivationTest -------------------------------------------- http://ci.infinispan.org/job/Infinispan/job/master/60/testReport/junit/org.infinispan.stream/DistributedStreamIteratorWithPassivationTest/testConcurrentActivationWithFilter_DIST_SYNC__tx_false_/ HotRodCustomMarshallerIteratorIT -------------------------------- Marked as ignored. Tracked by ISPN-8001. This fails because of a race condition in the deployment of the marshaller. 
http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.server.test.client.hotrod/HotRodCustomMarshallerIteratorIT(localmode-udp)/testIteration/ EmbeddedHotRodCacheListenerTest ------------------------------- http://ci.infinispan.org/job/Infinispan/job/master/60/testReport/junit/org.infinispan.it.compatibility/EmbeddedHotRodCacheListenerTest/setup/ ScatteredCrashInSequenceTest ---------------------------- Marked as ignored. Tracked by ISPN-8097 http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.partitionhandling/ScatteredCrashInSequenceTest/testSplit2_SCATTERED_SYNC_/ RehashWithL1Test ---------------- Tracked by ISPN-7801. http://ci.infinispan.org/job/Infinispan/job/master/58/testReport/junit/org.infinispan.distribution.rehash/RehashWithL1Test/testPutWithRehashAndCacheClear/ NonTxPutIfAbsentDuringLeaveStressTest ------------------------------------- Tracked by ISPN-6451. http://ci.infinispan.org/job/Infinispan/job/master/57/testReport/junit/org.infinispan.distribution.rehash/NonTxPutIfAbsentDuringLeaveStressTest/testNodeLeavingDuringPutIfAbsent_DIST_SYNC_/ ReplTotalOrderVersionedStateTransferTest ---------------------------------------- Tracked by ISPN-6827. 
http://ci.infinispan.org/job/Infinispan/job/master/57/testReport/junit/org.infinispan.tx.totalorder.statetransfer/ReplTotalOrderVersionedStateTransferTest/testStateTransfer/ -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From sanne at infinispan.org Mon Jul 31 12:03:36 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 31 Jul 2017 17:03:36 +0100 Subject: [infinispan-dev] Conflict Manager and Partition Handling Blog In-Reply-To: <73413902.30250862.1501072916488.JavaMail.zimbra@redhat.com> References: <658199000.28091136.1500293787173.JavaMail.zimbra@redhat.com> <1A277579-1486-4125-A879-33EECB81FC29@redhat.com> <99A51374-C1E9-41EA-A26E-9E2EE4A7CB2A@redhat.com> <73413902.30250862.1501072916488.JavaMail.zimbra@redhat.com> Message-ID: Great job! I love to see this improved and being described in detail. +1 to add some practical examples, as I'm afraid we only notice limitations in features like this when thinking about specific use-cases. The option `REMOVE_ALL` seems sensible for the disposable Cache use case. One question though: if one partition has a defined value for a key, while the other partition has no value (null) for this same key, is it considered a conflict? I think you need to clarify if a "null" in a subset of partitions causes the conflict merge to be triggered or not. I think it should: for example having the cache use case in mind, an explicit invalidation needs to be propagated safely. Thanks, Sanne On 26 July 2017 at 13:41, Ryan Emerson wrote: > Hi Galder, > > Thanks for the feedback. > > Conflicts are detected by applying a predicate to the returned Map for each cache entry. Currently this predicate utilises Stream::distinct (so .equals), to check for multiple versions of an entry. So implementing pluggable strategies for defining a conflict should be trivial :) > > Good idea about a simple tutorial/demo, I'll look into it when I get a chance. 
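The detection step Ryan describes (a predicate built on `Stream::distinct`, hence `equals()`) can be sketched like this. The `ConflictDetector` class and its signature are illustrative assumptions, supposing the per-key replica values have already been collected into a plain collection; the actual code operates on the map of per-node values returned for each cache entry:

```java
import java.util.Arrays;
import java.util.Collection;

public class ConflictDetector {

    // A key is in conflict when its replicas do not all hold the same value:
    // distinct() collapses equal values (via equals()), so more than one
    // survivor means at least two nodes disagree.
    public static boolean isConflict(Collection<?> replicaValues) {
        return replicaValues.stream().distinct().count() > 1;
    }

    public static void main(String[] args) {
        System.out.println(isConflict(Arrays.asList("v1", "v1", "v1"))); // false
        System.out.println(isConflict(Arrays.asList("v1", "v2", "v1"))); // true
        // A value on one partition and null (no entry) on another also
        // counts as two distinct values under this predicate.
        System.out.println(isConflict(Arrays.asList("v1", null)));       // true
    }
}
```

Note that under such a predicate a null in one partition and a value in another would register as a conflict, which bears on Sanne's question above about propagating explicit invalidations.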
> > Cheers > Ryan > > ----- Original Message ----- >> Oh, if we can't find a simple tutorial for it, there's always >> https://github.com/infinispan-demos :) >> >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >> > On 25 Jul 2017, at 17:11, Galder Zamarreño wrote: >> > >> > One more thing: have you thought how we could have a simple tutorial on >> > this feature? >> > >> > It'd be great to find a simple, reduced, example to show it off :) >> > >> > Cheers, >> > -- >> > Galder Zamarreño >> > Infinispan, Red Hat >> > >> >> On 25 Jul 2017, at 16:54, Galder Zamarreño wrote: >> >> >> >> Hey Ryan, >> >> >> >> Very detailed blog post! Great work on both the post and the feature! :D >> >> >> >> While reading, the following question came to my mind: how does Infinispan >> >> determine there's a conflict? Does it rely on .equals() based equality? >> >> >> >> A follow up would be: whether in the future this could be pluggable, e.g. >> >> when comparing a version field is enough to realise there's a conflict. >> >> As opposed to relying on .equals(), if that's what's being used inside :) >> >> >> >> Cheers, >> >> -- >> >> Galder Zamarreño >> >> Infinispan, Red Hat >> >> >> >>> On 17 Jul 2017, at 14:16, Ryan Emerson wrote: >> >>> >> >>> Hi Everyone, >> >>> >> >>> Here's a blog post on the introduction of ConflictManager and the recent >> >>> changes to partition handling.
>> >>> >> >>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html >> >>> >> >>> Cheers >> >>> Ryan >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From Wayne.Wang at impactmobile.com Mon Jul 31 16:31:29 2017 From: Wayne.Wang at impactmobile.com (Wayne Wang) Date: Mon, 31 Jul 2017 20:31:29 +0000 Subject: [infinispan-dev] not sure if this is the place to post a question on infinispan Message-ID: <51C4569C9C1C58419D764E855FB0E3090DA80CCD@exchange1-tor.impactmobile.local> Hi All, I am not sure if this is the place where I could post a question on Infinispan. Basically, I am testing an invalidation cache scenario in a cluster environment. It looks like the instance that actually modified the object has no problem re-building the cache after the data is updated, but the instances (in the same cluster) receiving the signal to invalidate the cache will indeed invalidate the cache, yet cannot re-build it until cache expiration. Is this the intended design, or is there something wrong in the configuration? Thanks, Wayne -----Original Message----- From: infinispan-dev-bounces at lists.jboss.org [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Sanne Grinovero Sent: Monday, July 31, 2017 12:04 PM To: infinispan -Dev List Subject: Re: [infinispan-dev] Conflict Manager and Partition Handling Blog Great job! I love to see this improved and being described in detail.
+1 to add some practical examples, as I'm afraid we only notice limitations in features like this when thinking about specific use-cases. The option `REMOVE_ALL` seems sensible for the disposable Cache use case. One question though: if one partition has a defined value for a key, while the other partition has no value (null) for this same key, is it considered a conflict? I think you need to clarify if a "null" in a subset of partitions causes the conflict merge to be triggered or not. I think it should: for example having the cache use case in mind, an explicit invalidation needs to be propagated safely. Thanks, Sanne On 26 July 2017 at 13:41, Ryan Emerson wrote: > Hi Galder, > > Thanks for the feedback. > > Conflicts are detected by applying a predicate to the returned > Map for each cache entry. Currently this > predicate utilises Stream::distinct (so .equals), to check for > multiple versions of an entry. So implementing pluggable strategies > for defining a conflict should be trivial :) > > Good idea about a simple tutorial/demo, I'll look into it when I get a chance. > > Cheers > Ryan > > ----- Original Message ----- >> Oh, if we can't find a simple tutorial for it, there's always >> https://github.com/infinispan-demos :) >> >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >> > On 25 Jul 2017, at 17:11, Galder Zamarreño wrote: >> > >> > One more thing: have you thought how we could have a simple >> > tutorial on this feature? >> > >> > It'd be great to find a simple, reduced, example to show it off :) >> > >> > Cheers, >> > -- >> > Galder Zamarreño >> > Infinispan, Red Hat >> > >> >> On 25 Jul 2017, at 16:54, Galder Zamarreño wrote: >> >> >> >> Hey Ryan, >> >> >> >> Very detailed blog post! Great work on both the post and the >> >> feature! :D >> >> >> >> While reading, the following question came to my mind: how does >> >> Infinispan determine there's a conflict? Does it rely on .equals() based equality?
>> >> >> >> A follow up would be: whether in the future this could be pluggable, e.g. >> >> when comparing a version field is enough to realise there's a conflict. >> >> As opposed to relying on .equals(), if that's what's being used >> >> inside :) >> >> >> >> Cheers, >> >> -- >> >> Galder Zamarreño >> >> Infinispan, Red Hat >> >> >> >>> On 17 Jul 2017, at 14:16, Ryan Emerson wrote: >> >>> >> >>> Hi Everyone, >> >>> >> >>> Here's a blog post on the introduction of ConflictManager and the >> >>> recent changes to partition handling. >> >>> >> >>> http://blog.infinispan.org/2017/07/conflict-management-and-partition.html >> >>> >> >>> Cheers >> >>> Ryan >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev