From remerson at redhat.com Tue Jan 9 05:56:50 2018
From: remerson at redhat.com (Ryan Emerson)
Date: Tue, 9 Jan 2018 05:56:50 -0500 (EST)
Subject: [infinispan-dev] Avoid using ${project.groupId} in pom dependencies
In-Reply-To: <114629669.5428557.1515494184769.JavaMail.zimbra@redhat.com>
Message-ID: <2030092217.5442733.1515495410612.JavaMail.zimbra@redhat.com>

Hi Everyone,

Recently I further refactored our pom structure so that the server parent no longer inherits from jboss-as, but instead inherits from the infinispan-parent pom. As part of this refactoring, it was necessary to change the groupId declaration for Infinispan dependencies in the parent and bom poms from `${project.groupId}` to `org.infinispan`, due to the way that Maven handles pom inheritance. See [1] for more details. To make all of our poms more consistent, I applied this change across all of our non-server poms and avoided using the placeholder in the server poms (`org.infinispan.server` being utilised instead).

So this is a request for all contributors to avoid using the `${project.groupId}` placeholder when declaring Infinispan dependencies in the pom. Furthermore, if reviewers could try to be vigilant against the re-introduction of said placeholder, it would be much appreciated.

N.B. We still utilise `${project.groupId}` in various bundle plugins [2], which I think is fine: these are not in the parent pom, so they do not cause an issue, and they are trickier to find/replace automatically in the event of a groupId name change.
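To illustrate the failure mode (a minimal sketch; the surrounding structure is illustrative rather than copied from the actual poms): Maven resolves `${project.groupId}` against the effective pom of the project that *inherits* a declaration, not the pom that wrote it, so a managed dependency declared in a shared parent resolves to the wrong coordinates in any child with a different groupId.

```xml
<!-- Declared in infinispan-parent (groupId org.infinispan): -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <!-- With ${project.groupId} here, a child pom whose own groupId is
           org.infinispan.server would resolve this entry to
           org.infinispan.server:infinispan-core, i.e. a non-existent
           artifact. Declaring the groupId literally avoids this: -->
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-core</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

This is why a literal `org.infinispan` is the safe choice in the parent and bom poms, while the placeholder remains harmless in per-module plugin configuration that is never inherited.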
[1] https://github.com/infinispan/infinispan/pull/5652
[2] https://github.com/infinispan/infinispan/blob/master/core/pom.xml#L183-L186

Cheers,
Ryan

From ttarrant at redhat.com Tue Jan 9 11:06:01 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Tue, 9 Jan 2018 17:06:01 +0100
Subject: [infinispan-dev] Weekly Infinispan IRC meeting logs 2018-01-08
Message-ID: <07bc6dd8-e318-4565-cbe3-22e4c985d5b6@redhat.com>

The weekly meeting logs are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-08-15.01.log.html

Tristan
--
Tristan Tarrant
Infinispan Lead and JBoss Data Grid Chief Architect
JBoss, a division of Red Hat

From karesti at redhat.com Tue Jan 9 14:24:30 2018
From: karesti at redhat.com (Katia Aresti)
Date: Tue, 9 Jan 2018 20:24:30 +0100
Subject: [infinispan-dev] Weekly Infinispan IRC meeting logs 2018-01-08
In-Reply-To: <07bc6dd8-e318-4565-cbe3-22e4c985d5b6@redhat.com>
References: <07bc6dd8-e318-4565-cbe3-22e4c985d5b6@redhat.com>
Message-ID:

Hi all,

I had some technical issues here yesterday and I missed the meeting. First of all, happy new year and I hope you all enjoyed the holidays!

This week will be a bit shorter for me, as I'm taking PTO tomorrow. I have to finish my PR for the clustered locks, review some of your work, work on the streaming data workshop / deep dive, catch up with the cache mission feedback (obsidian team), and work on my Snowcamp presentation, which will showcase the clustered locks with an example. I have some product-side related things as well, but I will probably be leaving those for next week.

That's all from my side!
Katia

On Tue, Jan 9, 2018 at 5:06 PM, Tristan Tarrant wrote:
> The weekly meeting logs are here:
>
> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-08-15.01.log.html
>
> Tristan
> --
> Tristan Tarrant
> Infinispan Lead and JBoss Data Grid Chief Architect
> JBoss, a division of Red Hat
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180109/545b9815/attachment.html

From slaskawi at redhat.com Tue Jan 16 03:19:47 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Tue, 16 Jan 2018 08:19:47 +0000
Subject: [infinispan-dev] Status 2018/01/15
Message-ID:

Hey guys,

Sorry I couldn't attend the community meeting. Here are my bits:

- Rebased and refreshed the Synchronous Get PR: https://github.com/infinispan/infinispan/pull/5262
- Upgraded Netty: https://github.com/infinispan/infinispan/pull/5676
- Started working on Single Port. The implementation will be very similar to REST's HTTP/1.1 Upgrade and TLS/ALPN negotiation. I'm trying to reuse as much code as I can, so there will be a lot of refactoring going on. I hope to have a POC by the end of the week and then tidy it up and send a PR.

And of course, welcome Osni to the team!! We are happy to have you!

Thanks,
Sebastian

From galder at redhat.com Mon Jan 22 05:24:50 2018
From: galder at redhat.com (Galder Zamarreño)
Date: Mon, 22 Jan 2018 11:24:50 +0100
Subject: [infinispan-dev] Infinispan 9.2.0.CR1 released
Message-ID:

Hi,

Last Friday we released Infinispan 9.2.0.CR1.
You can find out all about it here:

http://blog.infinispan.org/2018/01/first-candidate-release-for-infinispan.html

Cheers,
Galder

From rory.odonnell at oracle.com Mon Jan 22 05:50:30 2018
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Mon, 22 Jan 2018 10:50:30 +0000
Subject: [infinispan-dev] JDK 10 Early Access b40 & JDK 8u172 Early Access b02 are available on jdk.java.net
Message-ID: <4afb32a7-5a3f-d0f1-8aff-9ebd74a1012b@oracle.com>

Hi Galder,

Happy New Year!

OpenJDK builds - JDK 10 Early Access build 40 is available at http://jdk.java.net/10/

* These early-access, open-source builds are provided under the GNU General Public License, version 2, with the Classpath Exception.
* Summary of changes: https://download.java.net/java/jdk10/archive/40/jdk-10+40.html

JDK 10 will enter Rampdown Phase Two on Thursday the 18th of January, 2018.

* For more details, see Mark Reinhold's email to the jdk-dev mailing list [1]
* The Rampdown Phase Two process will be similar to that of JDK 9 [2].
* The JDK 10 schedule, status & features are available [3]

JDK 8u172 Early-Access build 02 is available at http://jdk.java.net/8/

* Summary of changes: https://download.java.net/java/jdk8u172/changes/jdk8u172-b02.html

Regards,
Rory

[1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000416.html
[2] http://openjdk.java.net/projects/jdk/10/rdp-2
[3] http://openjdk.java.net/projects/jdk/10/

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180122/17cb624d/attachment-0001.html

From ttarrant at redhat.com Mon Jan 29 11:38:37 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 29 Jan 2018 17:38:37 +0100
Subject: [infinispan-dev] Weekly Infinispan IRC logs 2018-01-29
Message-ID: <16e139d4-d441-4af3-653b-240a1acdc8c5@redhat.com>

Hi all,

the weekly Infinispan logs are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-29-15.01.log.html

Tristan

From rvansa at redhat.com Mon Jan 29 12:42:03 2018
From: rvansa at redhat.com (Radim Vansa)
Date: Mon, 29 Jan 2018 18:42:03 +0100
Subject: [infinispan-dev] Order of locking in optimistic tx cache
Message-ID:

Hi Pedro,

I have a looong open JIRA [1], so I've tried to look into the current ordering of locks. I've noticed that we don't sort the keys anymore but instead have a short 'big lock' [2] - I don't fully understand it, though. What happens if T1 has A locked and T2 tries to lock A and B? Is T2 allowed to acquire the lock on B and wait for A? Are the locks 'fair', and does the synchronized block introduce an ordering on the lock requests?

Thanks

Radim

[1] https://issues.jboss.org/browse/ISPN-2491

[2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/util/concurrent/locks/impl/DefaultLockManager.java#L115

--
Radim Vansa
JBoss Performance Team

From pedro at infinispan.org Mon Jan 29 13:06:04 2018
From: pedro at infinispan.org (Pedro Ruivo)
Date: Mon, 29 Jan 2018 18:06:04 +0000
Subject: [infinispan-dev] Order of locking in optimistic tx cache
In-Reply-To:
References:
Message-ID: <9e5be326-491c-67d1-9407-2caa105dfa50@infinispan.org>

Hi,

On 29-01-2018 17:42, Radim Vansa wrote:
> Hi Pedro,
>
> I have a looong open JIRA [1] and so I've tried to look into current
> ordering of locks. And I've noticed that we don't sort the keys anymore
> but have a short 'big lock' [2] - I don't fully understand it, though.
> What happens if T1 has A locked and T2 tries to lock A and B? Is T2
> allowed to acquire lock on B and waits for A?

T2 is allowed to acquire B, but it has to wait for T1 to release A before it can proceed.

> Are the locks 'fair' and the synchronized block introduces an ordering on the lock requests?

The main reason for the block is to avoid deadlocks. Using your example, the thread handling T2 won't block waiting for A to become available and continues trying to acquire the remaining lock. Also, it has the benefit of ordering contended transactions and provides fairness.

Pedro

>
> Thanks
>
> Radim
>
> [1] https://issues.jboss.org/browse/ISPN-2491
>
> [2]
> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/util/concurrent/locks/impl/DefaultLockManager.java#L115
>

From galder at redhat.com Tue Jan 30 02:53:24 2018
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 30 Jan 2018 08:53:24 +0100
Subject: [infinispan-dev] Weekly Infinispan IRC logs 2018-01-29
In-Reply-To: <16e139d4-d441-4af3-653b-240a1acdc8c5@redhat.com> (Tristan Tarrant's message of "Mon, 29 Jan 2018 17:38:37 +0100")
References: <16e139d4-d441-4af3-653b-240a1acdc8c5@redhat.com>
Message-ID:

Tristan Tarrant writes:

Hi all,

Here's my update, which I was unable to provide yesterday:

* Mostly worked on JFokus-related presentations, both the deep dive and my own presentation. This includes some slides and a lot of live coding.
* Btw, OpenShift 3.7 and the Fabric8 Maven plugin are not playing along with redeployments, so I'm having to work around that. For my presentation this means switching to binary builds, and for the deep dive it means switching to OpenShift 3.6. More info in [1].
* I also worked on adding Hibernate tutorials to the website; a PR is waiting to be reviewed/integrated [2]. After that's integrated we should republish the website.
Cheers,

[1] https://github.com/fabric8io/fabric8-maven-plugin/issues/1130
[2] https://github.com/infinispan/infinispan.github.io/pull/53

> Hi all,
>
> the weekly Infinispan logs are here:
>
> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-29-15.01.log.html
>
> Tristan
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From dan.berindei at gmail.com Tue Jan 30 03:35:10 2018
From: dan.berindei at gmail.com (Dan Berindei)
Date: Tue, 30 Jan 2018 08:35:10 +0000
Subject: [infinispan-dev] Weekly Infinispan IRC logs 2018-01-29
In-Reply-To:
References: <16e139d4-d441-4af3-653b-240a1acdc8c5@redhat.com>
Message-ID:

Hi all,

I missed the meeting yesterday as well; I was on PTO for most of last week. I spent Mon and Tue polishing the pull request for ISPN-8693, but unfortunately I didn't finish, and I had to open a new PR yesterday [1]. I think I found a retry bug while checking for possibly-related test failures yesterday: ISPN-8731 [2].

[1]: https://github.com/infinispan/infinispan/pull/5705
[2]: https://issues.jboss.org/browse/ISPN-8731

On Tue, Jan 30, 2018 at 7:53 AM, Galder Zamarreño wrote:
> Tristan Tarrant writes:
>
> Hi all,
>
> Here's my update which I was unable to provide yesterday:
>
> * Mostly worked on JFokus related presentations, both deep dive and own
> presentation. This includes some slides and a lot of live coding,
> * Btw, OpenShift 3.7 and Fabric8 Maven plugin are not playing
> along with redeployments, so having to workaround that. For my
> presentation this means switching to binary builds and for the deep
> dive it means switching to OpenShift 3.6. More info in
> [1].
> * I also worked on adding Hibernate tutorials to website, a PR is
> waiting to be reviewed/integrated [2]. After that's integrated we should
> republish the website.
>
> Cheers,
>
> [1] https://github.com/fabric8io/fabric8-maven-plugin/issues/1130
> [2] https://github.com/infinispan/infinispan.github.io/pull/53
>
> > Hi all,
> >
> > the weekly Infinispan logs are here:
> >
> > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-29-15.01.log.html
> >
> > Tristan
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From sanne at infinispan.org Tue Jan 30 11:30:01 2018
From: sanne at infinispan.org (Sanne Grinovero)
Date: Tue, 30 Jan 2018 16:30:01 +0000
Subject: [infinispan-dev] PersistentUUIDManagerImpl NPEs being logged when running the testsuite
Message-ID:

Hi all,

I'm building master [1] and see such NPEs dumped on my terminal quite often; I guess you all noticed already? I couldn't find a JIRA...
16:24:03,083 FATAL (transport-thread-StateTransferLinkFailuresTest[null, tx=false]-NodeN-p63985-t2) [PersistentUUIDManagerImpl] Cannot find mapping for address StateTransferLinkFailuresTest[null, tx=false]-NodeN-32100
java.lang.NullPointerException
	at org.infinispan.topology.PersistentUUIDManagerImpl.mapAddresses(PersistentUUIDManagerImpl.java:70)
	at org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy.onPartitionMerge(PreferAvailabilityStrategy.java:214)
	at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:597)
	at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$recoverClusterStatus$6(ClusterTopologyManagerImpl.java:519)
	at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
	at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
	at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

16:24:03,115 FATAL (transport-thread-StateTransferLinkFailuresTest[null, tx=false]-NodeQ-p64193-t5) [PersistentUUIDManagerImpl] Cannot find mapping for address StateTransferLinkFailuresTest[null, tx=false]-NodeQ-10499
java.lang.NullPointerException
	at org.infinispan.topology.PersistentUUIDManagerImpl.mapAddresses(PersistentUUIDManagerImpl.java:70)
	at org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy.onPartitionMerge(PreferAvailabilityStrategy.java:214)
	at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:597)
	at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$recoverClusterStatus$6(ClusterTopologyManagerImpl.java:519)
	at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
	at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
	at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

[1] cc2744e9f509d917f1ed0ff1a18b28b72595af83

Thanks,
Sanne

From ion at infinispan.org Wed Jan 31 03:10:07 2018
From: ion at infinispan.org (Ion Savin)
Date: Wed, 31 Jan 2018 10:10:07 +0200
Subject: [infinispan-dev] spare cycles
Message-ID: <20a3e345-fe25-fe9a-4a6a-acf7f148df45@infinispan.org>

Hi all,

I have some spare cycles over the course of the year which I'm going to use to contribute to open source projects. If you can think of anything specific that you could use some help with, please let me know.

Thanks,
Ion Savin