From rvansa at redhat.com Tue Dec 1 04:00:26 2015 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 1 Dec 2015 10:00:26 +0100 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> Message-ID: <565D61AA.5010104@redhat.com> On 11/30/2015 08:17 PM, Dan Berindei wrote: > The first problem that comes to mind is that context entries are also > stored in a map, at least in transactional mode. So access through the > context would only be faster in non-tx caches, in tx caches it would > not add any benefits. I admit that I was thinking more about non-tx caches, I would have to re-check what maps are written tx mode. Please, let's focus on non-tx. However I probably don't comprehend your answer completely. I am not talking that much about reading something from the context, but reads from these concurrent maps (these can be always somehow cached in the context), but about writes. > > I also have some trouble imagining how these temporary entries would > be released, since locks, L1 requestors, L1 synchronizers, and write > registrations all have their own rules for cleaning up. Maybe I am oversimplifying that - all the components would modify the shared record, and once the record becomes empty, it's removed. It's just a logical OR on these collections. > > Finally, I'm not sure how much this would help. I actually removed the > write registration for everything except RemoveExpiredCommand when > testing the HotRod server performance, but I didn't get any > significant improvement on my machine. Which was kind of expected, > since the benchmark doesn't seem to be CPU-bound, and JFR was showing > it with < 1.5% of CPU. Datapoint noted. But if there's application using embedded Infinispan, it's quite likely that the app is CPU-bound, and Infinispan becomes, too. We're not optimizing for benchmarks but apps. Though, I can see that if server is not CPU bound, it won't make much difference. Thanks for your opinions, guys - if Will will remove the expiration registration, this idea is void (is anyone really using L1?). We'll see how this will end up. Radim > > Cheers > Dan > > > On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa wrote: >> No thoughts at all? @wburns, could I have your view on this? >> >> Thanks >> >> Radim >> >> On 11/23/2015 04:26 PM, Radim Vansa wrote: >>> Hi again, >>> >>> examining some flamegraphs I've found out that recently the >>> ExpirationInterceptor has been added, which registers ongoing write in a >>> hashmap. So at this point we have a map for locks, map for writes used >>> for expiration, another two key-addressed maps in L1ManagerImpl and one >>> in L1NonTxInterceptor and maybe another maps elsewhere. >>> >>> This makes me think that we could spare map lookups and expensive writes >>> by providing *single map for temporary per-key data*. A reference to the >>> entry could be stored in the context to save the lookups. An extreme >>> case would be to put this into DataContainer, but I think that this >>> would prove too tricky in practice. >>> >>> A downside would be the loss of encapsulation (any component could >>> theoretically access e.g. locks), but I don't find that too dramatic. >>> >>> WDYT? 
>>> >>> Radim >>> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Tue Dec 1 09:26:26 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 1 Dec 2015 16:26:26 +0200 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: <565D61AA.5010104@redhat.com> References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> <565D61AA.5010104@redhat.com> Message-ID: On Tue, Dec 1, 2015 at 11:00 AM, Radim Vansa wrote: > On 11/30/2015 08:17 PM, Dan Berindei wrote: >> The first problem that comes to mind is that context entries are also >> stored in a map, at least in transactional mode. So access through the >> context would only be faster in non-tx caches, in tx caches it would >> not add any benefits. > > I admit that I was thinking more about non-tx caches, I would have to > re-check what maps are written tx mode. Please, let's focus on non-tx. > > However I probably don't comprehend your answer completely. I am not > talking that much about reading something from the context, but reads > from these concurrent maps (these can be always somehow cached in the > context), but about writes. It makes sense if you consider tx caches, because the only way to cache these in the context for a transaction is to store them in another hashmap (a la `AbstractCacheTransaction.lookedUpEntries`). I know you're more interested in non-tx caches, but you have the same problem with the non-tx commands that use a NonTxInvocationContext (e.g. PutMapCommand). > >> >> I also have some trouble imagining how these temporary entries would >> be released, since locks, L1 requestors, L1 synchronizers, and write >> registrations all have their own rules for cleaning up. > > Maybe I am oversimplifying that - all the components would modify the > shared record, and once the record becomes empty, it's removed. It's > just a logical OR on these collections. I think you'd need something more like a refcount, and to atomically release the entry when the count reaches 0. (We used to have something similar for the locks map.) It's certainly doable, but the challenge is to make it cheaper to maintain than the separate maps we have now. > >> >> Finally, I'm not sure how much this would help. I actually removed the >> write registration for everything except RemoveExpiredCommand when >> testing the HotRod server performance, but I didn't get any >> significant improvement on my machine. Which was kind of expected, >> since the benchmark doesn't seem to be CPU-bound, and JFR was showing >> it with < 1.5% of CPU. > > Datapoint noted. But if there's application using embedded Infinispan, > it's quite likely that the app is CPU-bound, and Infinispan becomes, > too. We're not optimizing for benchmarks but apps. Though, I can see > that if server is not CPU bound, it won't make much difference. > > Thanks for your opinions, guys - if Will will remove the expiration > registration, this idea is void (is anyone really using L1?). We'll see > how this will end up. > > Radim > >> >> Cheers >> Dan >> >> >> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa wrote: >>> No thoughts at all? 
@wburns, could I have your view on this? >>> >>> Thanks >>> >>> Radim >>> >>> On 11/23/2015 04:26 PM, Radim Vansa wrote: >>>> Hi again, >>>> >>>> examining some flamegraphs I've found out that recently the >>>> ExpirationInterceptor has been added, which registers ongoing write in a >>>> hashmap. So at this point we have a map for locks, map for writes used >>>> for expiration, another two key-addressed maps in L1ManagerImpl and one >>>> in L1NonTxInterceptor and maybe another maps elsewhere. >>>> >>>> This makes me think that we could spare map lookups and expensive writes >>>> by providing *single map for temporary per-key data*. A reference to the >>>> entry could be stored in the context to save the lookups. An extreme >>>> case would be to put this into DataContainer, but I think that this >>>> would prove too tricky in practice. >>>> >>>> A downside would be the loss of encapsulation (any component could >>>> theoretically access e.g. locks), but I don't find that too dramatic. >>>> >>>> WDYT? >>>> >>>> Radim >>>> >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Tue Dec 1 09:34:14 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 1 Dec 2015 16:34:14 +0200 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> Message-ID: Excellent! If RemoveExpiredCommand checks the entry is really expired while holding the lock, and doesn't fire the CacheEntryExpired notification, then I don't think the extra command would be a problem. Cheers Dan On Tue, Dec 1, 2015 at 5:54 AM, William Burns wrote: > Actually looking into this closer. I have found a way to completely remove > the expiration interceptor with minimal drawback. The drawback is that a > remove expired command will be generated if a read finds the entry gone is > concurrently fired with a write for the same key. But I would say this > should happen so infrequently that it probably shouldn't matter. > > I have put it all on [1] and all the tests seem to pass fine. I want to > double check a few things but this should be pretty good. > > [1] https://github.com/wburns/infinispan/commits/expiration_listener > > > On Mon, Nov 30, 2015 at 5:46 PM Sanne Grinovero > wrote: >> >> Wouldn't it be an interesting compromise to make sure we calculate >> things like the key's hash only once? >> >> On 30 November 2015 at 21:54, William Burns wrote: >> > I am not sure there is an easy way to consolidate these into a single >> > map, >> > since some of these are written to on reads, some on writes and >> > sometimes >> > conditionally written to. And then as Dan said they are cleaned up at >> > different times possibly. >> > >> > We could do something like states (based on which ones would have >> > written to >> > the map), but I think it will get quite complex, especially if we ever >> > add >> > more of these map type requirement. 
>> > >> > On a similar note, I had actually thought of possibly moving the >> > expiration >> > check out of the data container and into the entry wrapping interceptor >> > or >> > the likes. This would allow for us to remove the expiration map >> > completely >> > since we could only raise the extra expiration commands on a read and >> > not >> > writes. But this would change the API and I am thinking we can only do >> > this >> > for 9.0. >> > >> > On Mon, Nov 30, 2015 at 2:18 PM Dan Berindei >> > wrote: >> >> >> >> The first problem that comes to mind is that context entries are also >> >> stored in a map, at least in transactional mode. So access through the >> >> context would only be faster in non-tx caches, in tx caches it would >> >> not add any benefits. >> >> >> >> I also have some trouble imagining how these temporary entries would >> >> be released, since locks, L1 requestors, L1 synchronizers, and write >> >> registrations all have their own rules for cleaning up. >> >> >> >> Finally, I'm not sure how much this would help. I actually removed the >> >> write registration for everything except RemoveExpiredCommand when >> >> testing the HotRod server performance, but I didn't get any >> >> significant improvement on my machine. Which was kind of expected, >> >> since the benchmark doesn't seem to be CPU-bound, and JFR was showing >> >> it with < 1.5% of CPU. >> >> >> >> >> >> Cheers >> >> Dan >> >> >> >> >> >> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa >> >> wrote: >> >> > No thoughts at all? @wburns, could I have your view on this? >> >> > >> >> > Thanks >> >> > >> >> > Radim >> >> > >> >> > On 11/23/2015 04:26 PM, Radim Vansa wrote: >> >> >> Hi again, >> >> >> >> >> >> examining some flamegraphs I've found out that recently the >> >> >> ExpirationInterceptor has been added, which registers ongoing write >> >> >> in >> >> >> a >> >> >> hashmap. So at this point we have a map for locks, map for writes >> >> >> used >> >> >> for expiration, another two key-addressed maps in L1ManagerImpl and >> >> >> one >> >> >> in L1NonTxInterceptor and maybe another maps elsewhere. >> >> >> >> >> >> This makes me think that we could spare map lookups and expensive >> >> >> writes >> >> >> by providing *single map for temporary per-key data*. A reference to >> >> >> the >> >> >> entry could be stored in the context to save the lookups. An extreme >> >> >> case would be to put this into DataContainer, but I think that this >> >> >> would prove too tricky in practice. >> >> >> >> >> >> A downside would be the loss of encapsulation (any component could >> >> >> theoretically access e.g. locks), but I don't find that too >> >> >> dramatic. >> >> >> >> >> >> WDYT? 
>> >> >> >> >> >> Radim >> >> >> >> >> > >> >> > >> >> > -- >> >> > Radim Vansa >> >> > JBoss Performance Team >> >> > >> >> > _______________________________________________ >> >> > infinispan-dev mailing list >> >> > infinispan-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Dec 1 10:13:07 2015 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 1 Dec 2015 16:13:07 +0100 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> <565D61AA.5010104@redhat.com> Message-ID: <565DB903.1020106@redhat.com> On 12/01/2015 03:26 PM, Dan Berindei wrote: > On Tue, Dec 1, 2015 at 11:00 AM, Radim Vansa wrote: >> On 11/30/2015 08:17 PM, Dan Berindei wrote: >>> The first problem that comes to mind is that context entries are also >>> stored in a map, at least in transactional mode. So access through the >>> context would only be faster in non-tx caches, in tx caches it would >>> not add any benefits. >> I admit that I was thinking more about non-tx caches, I would have to >> re-check what maps are written tx mode. Please, let's focus on non-tx. >> >> However I probably don't comprehend your answer completely. I am not >> talking that much about reading something from the context, but reads >> from these concurrent maps (these can be always somehow cached in the >> context), but about writes. > It makes sense if you consider tx caches, because the only way to > cache these in the context for a transaction is to store them in > another hashmap (a la `AbstractCacheTransaction.lookedUpEntries`). > > I know you're more interested in non-tx caches, but you have the same > problem with the non-tx commands that use a NonTxInvocationContext > (e.g. PutMapCommand). lookedUpEntries is something completely different than I am talking about, although it is also addressed by-key, but it's just *private to the command invocation*. I was talking only about maps that are shared by *concurrent command invocations* (as any map held in Interceptor). Radim > >>> I also have some trouble imagining how these temporary entries would >>> be released, since locks, L1 requestors, L1 synchronizers, and write >>> registrations all have their own rules for cleaning up. >> Maybe I am oversimplifying that - all the components would modify the >> shared record, and once the record becomes empty, it's removed. It's >> just a logical OR on these collections. > I think you'd need something more like a refcount, and to atomically > release the entry when the count reaches 0. (We used to have something > similar for the locks map.) It's certainly doable, but the challenge > is to make it cheaper to maintain than the separate maps we have now. 
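Something along these lines is what I had in mind - purely illustrative, all names invented, not a proposal for the real API:

    import java.util.Set;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class PerKeyRegistry {
       // one shared record per key, replacing the separate lock/L1/expiration maps
       static class PerKeyRecord {
          Object lockOwner;                      // slot used by locking
          Set<Object> l1Requestors;              // slot used by L1
          CompletableFuture<Void> pendingWrite;  // slot used by expiration
          boolean isEmpty() {
             return lockOwner == null && l1Requestors == null && pendingWrite == null;
          }
       }

       final ConcurrentMap<Object, PerKeyRecord> records = new ConcurrentHashMap<>();

       // each component touches only its own slot, always inside compute(),
       // so per-key registration and cleanup stay atomic
       void registerWrite(Object key, CompletableFuture<Void> cf) {
          records.compute(key, (k, rec) -> {
             if (rec == null) rec = new PerKeyRecord();
             rec.pendingWrite = cf;
             return rec;
          });
       }

       // the record disappears as soon as all slots are clear - the "logical OR"
       void clearPendingWrite(Object key) {
          records.computeIfPresent(key, (k, rec) -> {
             rec.pendingWrite = null;
             return rec.isEmpty() ? null : rec;  // returning null removes the mapping
          });
       }
    }

Whether one such record is really cheaper to maintain than the separate maps is exactly the thing to measure, of course.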
> >>> Finally, I'm not sure how much this would help. I actually removed the >>> write registration for everything except RemoveExpiredCommand when >>> testing the HotRod server performance, but I didn't get any >>> significant improvement on my machine. Which was kind of expected, >>> since the benchmark doesn't seem to be CPU-bound, and JFR was showing >>> it with < 1.5% of CPU. >> Datapoint noted. But if there's application using embedded Infinispan, >> it's quite likely that the app is CPU-bound, and Infinispan becomes, >> too. We're not optimizing for benchmarks but apps. Though, I can see >> that if server is not CPU bound, it won't make much difference. >> >> Thanks for your opinions, guys - if Will will remove the expiration >> registration, this idea is void (is anyone really using L1?). We'll see >> how this will end up. >> >> Radim >> >>> Cheers >>> Dan >>> >>> >>> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa wrote: >>>> No thoughts at all? @wburns, could I have your view on this? >>>> >>>> Thanks >>>> >>>> Radim >>>> >>>> On 11/23/2015 04:26 PM, Radim Vansa wrote: >>>>> Hi again, >>>>> >>>>> examining some flamegraphs I've found out that recently the >>>>> ExpirationInterceptor has been added, which registers ongoing write in a >>>>> hashmap. So at this point we have a map for locks, map for writes used >>>>> for expiration, another two key-addressed maps in L1ManagerImpl and one >>>>> in L1NonTxInterceptor and maybe another maps elsewhere. >>>>> >>>>> This makes me think that we could spare map lookups and expensive writes >>>>> by providing *single map for temporary per-key data*. A reference to the >>>>> entry could be stored in the context to save the lookups. An extreme >>>>> case would be to put this into DataContainer, but I think that this >>>>> would prove too tricky in practice. >>>>> >>>>> A downside would be the loss of encapsulation (any component could >>>>> theoretically access e.g. locks), but I don't find that too dramatic. >>>>> >>>>> WDYT? 
>>>>> >>>>> Radim >>>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Tue Dec 1 13:04:51 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 1 Dec 2015 20:04:51 +0200 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: <565DB903.1020106@redhat.com> References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> <565D61AA.5010104@redhat.com> <565DB903.1020106@redhat.com> Message-ID: On Tue, Dec 1, 2015 at 5:13 PM, Radim Vansa wrote: > On 12/01/2015 03:26 PM, Dan Berindei wrote: >> On Tue, Dec 1, 2015 at 11:00 AM, Radim Vansa wrote: >>> On 11/30/2015 08:17 PM, Dan Berindei wrote: >>>> The first problem that comes to mind is that context entries are also >>>> stored in a map, at least in transactional mode. So access through the >>>> context would only be faster in non-tx caches, in tx caches it would >>>> not add any benefits. >>> I admit that I was thinking more about non-tx caches, I would have to >>> re-check what maps are written tx mode. Please, let's focus on non-tx. >>> >>> However I probably don't comprehend your answer completely. I am not >>> talking that much about reading something from the context, but reads >>> from these concurrent maps (these can be always somehow cached in the >>> context), but about writes. >> It makes sense if you consider tx caches, because the only way to >> cache these in the context for a transaction is to store them in >> another hashmap (a la `AbstractCacheTransaction.lookedUpEntries`). >> >> I know you're more interested in non-tx caches, but you have the same >> problem with the non-tx commands that use a NonTxInvocationContext >> (e.g. PutMapCommand). > > lookedUpEntries is something completely different than I am talking > about, although it is also addressed by-key, but it's just *private to > the command invocation*. > > I was talking only about maps that are shared by *concurrent command > invocations* (as any map held in Interceptor). In your initial email, you said >>>>>> A reference to the >>>>>> entry could be stored in the context to save the lookups. How would you store those references in the context, if not in a map that's *private to the command invocation*? Remember, I'm talking about NonTxInvocationContext here, *not* SingleKeyNonTxInvocationContext. Cheers Dan > > Radim > >> >>>> I also have some trouble imagining how these temporary entries would >>>> be released, since locks, L1 requestors, L1 synchronizers, and write >>>> registrations all have their own rules for cleaning up. >>> Maybe I am oversimplifying that - all the components would modify the >>> shared record, and once the record becomes empty, it's removed. 
It's >>> just a logical OR on these collections. >> I think you'd need something more like a refcount, and to atomically >> release the entry when the count reaches 0. (We used to have something >> similar for the locks map.) It's certainly doable, but the challenge >> is to make it cheaper to maintain than the separate maps we have now. >> >>>> Finally, I'm not sure how much this would help. I actually removed the >>>> write registration for everything except RemoveExpiredCommand when >>>> testing the HotRod server performance, but I didn't get any >>>> significant improvement on my machine. Which was kind of expected, >>>> since the benchmark doesn't seem to be CPU-bound, and JFR was showing >>>> it with < 1.5% of CPU. >>> Datapoint noted. But if there's application using embedded Infinispan, >>> it's quite likely that the app is CPU-bound, and Infinispan becomes, >>> too. We're not optimizing for benchmarks but apps. Though, I can see >>> that if server is not CPU bound, it won't make much difference. >>> >>> Thanks for your opinions, guys - if Will will remove the expiration >>> registration, this idea is void (is anyone really using L1?). We'll see >>> how this will end up. >>> >>> Radim >>> >>>> Cheers >>>> Dan >>>> >>>> >>>> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa wrote: >>>>> No thoughts at all? @wburns, could I have your view on this? >>>>> >>>>> Thanks >>>>> >>>>> Radim >>>>> >>>>> On 11/23/2015 04:26 PM, Radim Vansa wrote: >>>>>> Hi again, >>>>>> >>>>>> examining some flamegraphs I've found out that recently the >>>>>> ExpirationInterceptor has been added, which registers ongoing write in a >>>>>> hashmap. So at this point we have a map for locks, map for writes used >>>>>> for expiration, another two key-addressed maps in L1ManagerImpl and one >>>>>> in L1NonTxInterceptor and maybe another maps elsewhere. >>>>>> >>>>>> This makes me think that we could spare map lookups and expensive writes >>>>>> by providing *single map for temporary per-key data*. A reference to the >>>>>> entry could be stored in the context to save the lookups. An extreme >>>>>> case would be to put this into DataContainer, but I think that this >>>>>> would prove too tricky in practice. >>>>>> >>>>>> A downside would be the loss of encapsulation (any component could >>>>>> theoretically access e.g. locks), but I don't find that too dramatic. >>>>>> >>>>>> WDYT? 
>>>>>> >>>>>> Radim >>>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss Performance Team >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Wed Dec 2 04:32:18 2015 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 2 Dec 2015 10:32:18 +0100 Subject: [infinispan-dev] Consolidating temporary per-key data In-Reply-To: References: <56533020.8090806@redhat.com> <5658224F.7060404@redhat.com> <565D61AA.5010104@redhat.com> <565DB903.1020106@redhat.com> Message-ID: <565EBAA2.90206@redhat.com> On 12/01/2015 07:04 PM, Dan Berindei wrote: > On Tue, Dec 1, 2015 at 5:13 PM, Radim Vansa wrote: >> On 12/01/2015 03:26 PM, Dan Berindei wrote: >>> On Tue, Dec 1, 2015 at 11:00 AM, Radim Vansa wrote: >>>> On 11/30/2015 08:17 PM, Dan Berindei wrote: >>>>> The first problem that comes to mind is that context entries are also >>>>> stored in a map, at least in transactional mode. So access through the >>>>> context would only be faster in non-tx caches, in tx caches it would >>>>> not add any benefits. >>>> I admit that I was thinking more about non-tx caches, I would have to >>>> re-check what maps are written tx mode. Please, let's focus on non-tx. >>>> >>>> However I probably don't comprehend your answer completely. I am not >>>> talking that much about reading something from the context, but reads >>>> from these concurrent maps (these can be always somehow cached in the >>>> context), but about writes. >>> It makes sense if you consider tx caches, because the only way to >>> cache these in the context for a transaction is to store them in >>> another hashmap (a la `AbstractCacheTransaction.lookedUpEntries`). >>> >>> I know you're more interested in non-tx caches, but you have the same >>> problem with the non-tx commands that use a NonTxInvocationContext >>> (e.g. PutMapCommand). >> lookedUpEntries is something completely different than I am talking >> about, although it is also addressed by-key, but it's just *private to >> the command invocation*. >> >> I was talking only about maps that are shared by *concurrent command >> invocations* (as any map held in Interceptor). > In your initial email, you said > >>>>>>> A reference to the >>>>>>> entry could be stored in the context to save the lookups. > How would you store those references in the context, if not in a map > that's *private to the command invocation*? > > Remember, I'm talking about NonTxInvocationContext here, *not* > SingleKeyNonTxInvocationContext. 
This reference stored in a context is just an optimization that would help SingleKeyNonTxInvocationContext, it's not the focal point of the suggestion. Ok, for NonTxInvocationContext, you would either need a) private map in the context - so you wouldn't have to lookup a big concurrent (=internally locking) map, but you would have to write to the map once or b) just lookup the big shared map I really don't know which variant would prove more performant, and it's not *that* important. You're right, this *read optimization* is not possible for multi-key operations. However the *write optimization* is there, you'd still write the shared map only once (ok, and then again for removal). I should have structured the original mail in a better way, and state the (expected) major advantage, and further possible optimizations clearly separated. Radim > > Cheers > Dan > >> Radim >> >>>>> I also have some trouble imagining how these temporary entries would >>>>> be released, since locks, L1 requestors, L1 synchronizers, and write >>>>> registrations all have their own rules for cleaning up. >>>> Maybe I am oversimplifying that - all the components would modify the >>>> shared record, and once the record becomes empty, it's removed. It's >>>> just a logical OR on these collections. >>> I think you'd need something more like a refcount, and to atomically >>> release the entry when the count reaches 0. (We used to have something >>> similar for the locks map.) It's certainly doable, but the challenge >>> is to make it cheaper to maintain than the separate maps we have now. >>> >>>>> Finally, I'm not sure how much this would help. I actually removed the >>>>> write registration for everything except RemoveExpiredCommand when >>>>> testing the HotRod server performance, but I didn't get any >>>>> significant improvement on my machine. Which was kind of expected, >>>>> since the benchmark doesn't seem to be CPU-bound, and JFR was showing >>>>> it with < 1.5% of CPU. >>>> Datapoint noted. But if there's application using embedded Infinispan, >>>> it's quite likely that the app is CPU-bound, and Infinispan becomes, >>>> too. We're not optimizing for benchmarks but apps. Though, I can see >>>> that if server is not CPU bound, it won't make much difference. >>>> >>>> Thanks for your opinions, guys - if Will will remove the expiration >>>> registration, this idea is void (is anyone really using L1?). We'll see >>>> how this will end up. >>>> >>>> Radim >>>> >>>>> Cheers >>>>> Dan >>>>> >>>>> >>>>> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa wrote: >>>>>> No thoughts at all? @wburns, could I have your view on this? >>>>>> >>>>>> Thanks >>>>>> >>>>>> Radim >>>>>> >>>>>> On 11/23/2015 04:26 PM, Radim Vansa wrote: >>>>>>> Hi again, >>>>>>> >>>>>>> examining some flamegraphs I've found out that recently the >>>>>>> ExpirationInterceptor has been added, which registers ongoing write in a >>>>>>> hashmap. So at this point we have a map for locks, map for writes used >>>>>>> for expiration, another two key-addressed maps in L1ManagerImpl and one >>>>>>> in L1NonTxInterceptor and maybe another maps elsewhere. >>>>>>> >>>>>>> This makes me think that we could spare map lookups and expensive writes >>>>>>> by providing *single map for temporary per-key data*. A reference to the >>>>>>> entry could be stored in the context to save the lookups. An extreme >>>>>>> case would be to put this into DataContainer, but I think that this >>>>>>> would prove too tricky in practice. 
>>>>>>> >>>>>>> A downside would be the loss of encapsulation (any component could >>>>>>> theoretically access e.g. locks), but I don't find that too dramatic. >>>>>>> >>>>>>> WDYT? >>>>>>> >>>>>>> Radim >>>>>>> >>>>>> -- >>>>>> Radim Vansa >>>>>> JBoss Performance Team >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Thu Dec 3 07:44:46 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 3 Dec 2015 14:44:46 +0200 Subject: [infinispan-dev] Memory consumption of org.infinispan.marshall.core.JBossMarshaller In-Reply-To: <2CA3DE92-00A8-4789-B4F2-9A3F82451AAA@redhat.com> References: <5651CA88.2000201@sweazer.com> <1BFA84C8-A69E-45A8-889B-5E6C7896E9AD@redhat.com> <565813B1.1040502@sweazer.com> <2CA3DE92-00A8-4789-B4F2-9A3F82451AAA@redhat.com> Message-ID: Hi Christian If I were to guess, I would say your stateTransfer.chunkSize is too high. The default is 512, but you may need an even lower value. Those IdentityIntMaps can only get as big as the number of objects referenced by a single Infinispan command (well, 2x to be more precise). I doubt your keys and values each aggregate 100000s of objects, so the culprit is most likely a command that aggregates lots of entries - like StateResponseCommand, which can hold chunkSize entries. I would also recommend taking a heap dump with allocation profiling enabled (e.g. with FlightRecorder or JProfiler). If the allocation stacks of those IdentityIntMap's keys and values arrays contain an OutboundTransferTask frame, then reducing the chunkSize will definitely help. Unfortunately there are many things, both in the configuration and in the way you use Infinispan, which will affect the amount of memory it uses. E.g. state transfer also keeps track of entries modified since state transfer started. If your values are big and/or your state transfer is short, that memory is insignificant, but if your values are small and for some reason state transfer takes a long time, that extra memory can become a problem. The only way to protect yourself is to leave a lot of margin for unexpected stuff, and investigating whenever your application dips into that margin. Cheers Dan On Mon, Nov 30, 2015 at 6:57 PM, Galder Zamarre?o wrote: > We're actively working to reduce our memory footprint. 
> > I can't really provide guidance on memory requirements since it's very dependant on the types stored and the amount of instances that are stored, which is specific to each use case. > > It's worth investing some time estimating loads and running load tests to adjust memory parameters before going to production. > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 27 Nov 2015, at 09:26, Christian Beikov wrote: >> >> Are you going to do something about this memory consumption or is there >> at least some kind of minimum expected memory usage you can give me? >> I ran into an OOMEs the other day and the cluster was unable to recover >> from that by restarting single nodes. The nodes couldn't synchronize >> because of the OOMEs. I had to (jgroups-)disconnect all nodes from the >> cluster and start a separate cluster which of course lead to data loss. >> All of this happened because of some wrong memory consumption >> estimations I made so in order to avoid that in the future I would like >> to plan better ahead. Is there any other way to avoid such a cluster death? >> >> Regards, >> Christian >> >> Am 26.11.2015 um 15:56 schrieb Galder Zamarre?o: >>> Those IdentityIntMap are caches meant to speed up serialization if the same objects or types are marshalled again. It's normal for them to be populated as marshalling operations are executed. We don't currently have a way to clear these caches. >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>>> On 22 Nov 2015, at 15:00, Christian Beikov wrote: >>>> >>>> Hello, >>>> >>>> In a recent heap dump analysis I found that >>>> org.infinispan.marshall.core.JBossMarshaller consumes a lot of >>>> memory(about 46 MB) that seems to be unused. >>>> This is due to PerThreadInstanceHolder having ExtendedRiverMarshaller >>>> objects that contain big IdentityIntMap objects. Some of those >>>> IdentityIntMap instances have a size of 2 million entries, but most of >>>> them have sizes of a few 100 thousands. >>>> When I look into these IdentityIntMap instances, it seems that the >>>> entries are all unused. >>>> >>>> Is that kind of memory consumption expected or does that indicate a >>>> possibly wrong configuration? >>>> >>>> I am using Infinispan 7.2.4.Final on Wildfly 9.0.1.Final. >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rory.odonnell at oracle.com Mon Dec 7 04:09:10 2015 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 7 Dec 2015 09:09:10 +0000 Subject: [infinispan-dev] Early-access build b95 of JDK 9 is available for download Message-ID: <56654CB6.5010804@oracle.com> Hi Galder, Early-access builds of JDK 9 with Project Verona [0] in b95 are available for download here . The goal of this Project is to implement the new JDK version string as described in JEP-223 [1]. 
The new version-string scheme is designed to easily distinguish major, minor, and security-update releases. For more information please see Iris Clark's email [2] , also see Dalibor Topic's blog on this topic [3]. Please send usage questions, feedback and experience reports to the verona-dev mailing list. Note: If you haven?t already subscribed to that mailing list then please do so first, otherwise your message will be discarded as spam. Rgds,Rory [0] http://openjdk.java.net/projects/verona/ [1] http://openjdk.java.net/jeps/223 [2] http://mail.openjdk.java.net/pipermail/verona-dev/2015-November/000293.html [3] https://blogs.oracle.com/java-platform-group/entry/a_new_jdk_9_version -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20151207/11171bde/attachment.html From galder at redhat.com Mon Dec 7 10:50:36 2015 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 7 Dec 2015 16:50:36 +0100 Subject: [infinispan-dev] Weekly IRC meeting notes Message-ID: <1D900753-D685-4972-9EA0-BE4910C9C021@redhat.com> Hi all, Please find below the minutes from the weekly IRC meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-12-07-15.07.html Cheers, -- Galder Zamarre?o Infinispan, Red Hat From dan.berindei at gmail.com Mon Dec 7 11:58:22 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 7 Dec 2015 18:58:22 +0200 Subject: [infinispan-dev] Weekly IRC meeting notes In-Reply-To: <1D900753-D685-4972-9EA0-BE4910C9C021@redhat.com> References: <1D900753-D685-4972-9EA0-BE4910C9C021@redhat.com> Message-ID: Hi guys Sorry for missing the meeting, here's my update: Last week I fixed ISPN-5883 [1] (again) and ISPN-6012 [2]. [1] https://issues.jboss.org/browse/ISPN-5883 [2] https://issues.jboss.org/browse/ISPN-6012 I also worked a bit on performance, trying to test HotRod with JMH. I finally got the HotRod server to work from the JMH uber-jar on Friday, but I didn't have the time to gather any results yet. Towards the end of the week I focused more on pull requests. For this week, I plan to get back to the sequential interceptors. Cheers Dan On Mon, Dec 7, 2015 at 5:50 PM, Galder Zamarre?o wrote: > Hi all, > > Please find below the minutes from the weekly IRC meeting: > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-12-07-15.07.html > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From anistor at redhat.com Tue Dec 8 13:43:28 2015 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 8 Dec 2015 20:43:28 +0200 Subject: [infinispan-dev] Infinispan 8.1.0.Final is released! Message-ID: <566724D0.2040502@redhat.com> Dear community, I'm pleased to announce Infinispan 8.1.0.Final was finally released. Read more about it on our blog here: http://goo.gl/NSNNF5 Cheers, Adrian From smarlow at redhat.com Fri Dec 11 10:54:41 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 11 Dec 2015 10:54:41 -0500 Subject: [infinispan-dev] Awesome job on reducing the call stack length... Message-ID: <566AF1C1.5050001@redhat.com> I was just looking at how short the call stack is for a call into SimpleCache [1]. 
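For anyone who wants to see the same thing locally: a simple cache is just a local cache with the simple-cache flag set - roughly like this, if I remember the builder API correctly (exact method names may be off):

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    DefaultCacheManager cm = new DefaultCacheManager();
    // simple-cache skips the full interceptor stack for local caches
    cm.defineConfiguration("simple", new ConfigurationBuilder().simpleCache(true).build());
    Cache<String, String> cache = cm.getCache("simple");
    cache.put("key", "value");   // ends up in the simple cache implementation, hence the short stack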
Years ago, I think that the typical "cache" invocation call stack was much longer, well done team! It is great having the SimpleCache available now with the short call stack! :-) Scott [1] http://pastebin.com/6T0Aj0bZ From jholusa at redhat.com Tue Dec 15 09:38:33 2015 From: jholusa at redhat.com (Jiri Holusa) Date: Tue, 15 Dec 2015 09:38:33 -0500 (EST) Subject: [infinispan-dev] Uber jars testing In-Reply-To: References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> <55F2EB34.1040005@redhat.com> Message-ID: <1999632812.36430702.1450190313505.JavaMail.zimbra@redhat.com> Hi, I'm reopening this thread, because I want to share some ideas we gone through the past weeks. Just as a remainder, we've been exploring a way how to run the whole Infinispan testsuite with uber jars, giving us a real confidence in uber jars. I've been experimenting with this a lot (actually only with one module - core) and I was able to run the testsuite with uber jars. However, there are some changes needed and the base cause is [1], please see Jakub's comment with detailed explanation. Long story short, the thing is that two instances of jboss-logging are present on classpath. So the process I went through is: 1) build ISPN with exclusion of jboss-logging from uber jars 2) change dependency in core/pom.xml from infinispan-commons to infinispan-embedded 3) run the testsuite of core The thing is that I have to remove the jboss-logging from uber jars because of [1] and hence I'm asking. Do you think that there is a possibility how to solve this issue? To be honest, I don't really see a way. Logically, the jboss-logging has to be in the uber jar, since this is its primary purpose (to reduce number of dependencies for user to add and Infinispan needs it) and it also has to be present in core/pom.xml because ISPN uses it extensively and without it, the core will not compile. Let me enhance the question. Do you think [1] could be somehow solved, so we wouldn't have to do this manual hack? Thanks, Jiri [1] https://issues.jboss.org/browse/ISPN-5193 From galder at redhat.com Wed Dec 16 05:26:50 2015 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 16 Dec 2015 11:26:50 +0100 Subject: [infinispan-dev] Cache Aliases In-Reply-To: <565CAA7C.7040206@redhat.com> References: <565CAA7C.7040206@redhat.com> Message-ID: <23D36757-8CEA-43EF-9895-383154679FC2@redhat.com> Hey Tristan, sorry for the delay. Looks good to me. IIRC, Paul did some kind of cache name alias work for WF clustering and I wonder if they could potentially rely on this? The use cases might be different but worth checking up with him. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 30 Nov 2015, at 20:58, Tristan Tarrant wrote: > > Hi everybody, > > to address the needs of Teiid to implement materialized views on > Infinispan caches, I have written a small design document describing > "alias caches" [1]. > To be able to fully support the use-case in remote configurations, we > need to perform the switching operation remotely. Since remote admin > functionality is also required by the JCache integration code, I have > created a separate page [2] describing what the remote admin client > would look like in terms of functionality and packaging. 
> > As usual, comments are welcome > > > [1] https://github.com/infinispan/infinispan/wiki/Alias-Caches > [2] https://github.com/infinispan/infinispan/wiki/Remote-Admin-Client > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Wed Dec 16 05:34:22 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 16 Dec 2015 11:34:22 +0100 Subject: [infinispan-dev] Cache Aliases In-Reply-To: <23D36757-8CEA-43EF-9895-383154679FC2@redhat.com> References: <565CAA7C.7040206@redhat.com> <23D36757-8CEA-43EF-9895-383154679FC2@redhat.com> Message-ID: <56713E2E.1030001@redhat.com> The cache-alias stuff in WF is actually a matter of JNDI and internal service naming. It actually returns the same CacheImpl for all aliases. The cache manager has no idea about the alias. Tristan On 16/12/2015 11:26, Galder Zamarre?o wrote: > Hey Tristan, sorry for the delay. Looks good to me. > > IIRC, Paul did some kind of cache name alias work for WF clustering and I wonder if they could potentially rely on this? The use cases might be different but worth checking up with him. > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 30 Nov 2015, at 20:58, Tristan Tarrant wrote: >> >> Hi everybody, >> >> to address the needs of Teiid to implement materialized views on >> Infinispan caches, I have written a small design document describing >> "alias caches" [1]. >> To be able to fully support the use-case in remote configurations, we >> need to perform the switching operation remotely. Since remote admin >> functionality is also required by the JCache integration code, I have >> created a separate page [2] describing what the remote admin client >> would look like in terms of functionality and packaging. >> >> As usual, comments are welcome >> >> >> [1] https://github.com/infinispan/infinispan/wiki/Alias-Caches >> [2] https://github.com/infinispan/infinispan/wiki/Remote-Admin-Client >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Wed Dec 16 07:31:06 2015 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 16 Dec 2015 13:31:06 +0100 Subject: [infinispan-dev] Uber jars testing In-Reply-To: <1999632812.36430702.1450190313505.JavaMail.zimbra@redhat.com> References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> <55F2EB34.1040005@redhat.com> <1999632812.36430702.1450190313505.JavaMail.zimbra@redhat.com> Message-ID: When I was adjusting Uber Jars content I hit the same problem a couple of times. Currently we have almost all logging frameworks and facades in uber jars (Log4J, SLF4J, commons-logging, JBoss Logging). So in my opinion we need a long term solution to handle this (probably it would be better to discuss that in a separate email thread). 
How about *not* relocating JBoss Logging? Would it solve the problem? Thanks Sebastian On Tue, Dec 15, 2015 at 3:38 PM, Jiri Holusa wrote: > Hi, > > I'm reopening this thread, because I want to share some ideas we gone > through the past weeks. Just as a remainder, we've been exploring a way how > to run the whole Infinispan testsuite with uber jars, giving us a real > confidence in uber jars. > > I've been experimenting with this a lot (actually only with one module - > core) and I was able to run the testsuite with uber jars. However, there > are some changes needed and the base cause is [1], please see Jakub's > comment with detailed explanation. Long story short, the thing is that two > instances of jboss-logging are present on classpath. > > So the process I went through is: > 1) build ISPN with exclusion of jboss-logging from uber jars > 2) change dependency in core/pom.xml from infinispan-commons to > infinispan-embedded > 3) run the testsuite of core > > The thing is that I have to remove the jboss-logging from uber jars > because of [1] and hence I'm asking. Do you think that there is a > possibility how to solve this issue? To be honest, I don't really see a > way. Logically, the jboss-logging has to be in the uber jar, since this is > its primary purpose (to reduce number of dependencies for user to add and > Infinispan needs it) and it also has to be present in core/pom.xml because > ISPN uses it extensively and without it, the core will not compile. > > Let me enhance the question. Do you think [1] could be somehow solved, so > we wouldn't have to do this manual hack? > > Thanks, > Jiri > > [1] https://issues.jboss.org/browse/ISPN-5193 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20151216/c4bc8690/attachment.html From dan.berindei at gmail.com Wed Dec 16 08:16:14 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 16 Dec 2015 15:16:14 +0200 Subject: [infinispan-dev] Uber jars testing In-Reply-To: References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> <55F2EB34.1040005@redhat.com> <1999632812.36430702.1450190313505.JavaMail.zimbra@redhat.com> Message-ID: How about changing the Log interfaces in the modules to not extend the core Log interface, would that help? About all the logging frameworks, I suspect log4J may be the only one that we don't need in the uber jars - since it's always used through a jboss-logging/commons-logging/slf4j facade. Cheers Dan On Wed, Dec 16, 2015 at 2:31 PM, Sebastian Laskawiec wrote: > When I was adjusting Uber Jars content I hit the same problem a couple of > times. > > Currently we have almost all logging frameworks and facades in uber jars > (Log4J, SLF4J, commons-logging, JBoss Logging). So in my opinion we need a > long term solution to handle this (probably it would be better to discuss > that in a separate email thread). > > How about *not* relocating JBoss Logging? Would it solve the problem? > > Thanks > Sebastian > > On Tue, Dec 15, 2015 at 3:38 PM, Jiri Holusa wrote: >> >> Hi, >> >> I'm reopening this thread, because I want to share some ideas we gone >> through the past weeks. 
Just as a remainder, we've been exploring a way how >> to run the whole Infinispan testsuite with uber jars, giving us a real >> confidence in uber jars. >> >> I've been experimenting with this a lot (actually only with one module - >> core) and I was able to run the testsuite with uber jars. However, there are >> some changes needed and the base cause is [1], please see Jakub's comment >> with detailed explanation. Long story short, the thing is that two instances >> of jboss-logging are present on classpath. >> >> So the process I went through is: >> 1) build ISPN with exclusion of jboss-logging from uber jars >> 2) change dependency in core/pom.xml from infinispan-commons to >> infinispan-embedded >> 3) run the testsuite of core >> >> The thing is that I have to remove the jboss-logging from uber jars >> because of [1] and hence I'm asking. Do you think that there is a >> possibility how to solve this issue? To be honest, I don't really see a way. >> Logically, the jboss-logging has to be in the uber jar, since this is its >> primary purpose (to reduce number of dependencies for user to add and >> Infinispan needs it) and it also has to be present in core/pom.xml because >> ISPN uses it extensively and without it, the core will not compile. >> >> Let me enhance the question. Do you think [1] could be somehow solved, so >> we wouldn't have to do this manual hack? >> >> Thanks, >> Jiri >> >> [1] https://issues.jboss.org/browse/ISPN-5193 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From 0track at gmail.com Thu Dec 17 07:57:48 2015 From: 0track at gmail.com (Pierre Sutra) Date: Thu, 17 Dec 2015 13:57:48 +0100 Subject: [infinispan-dev] Infinispan on Openshift Online Message-ID: <5672B14C.4040600@gmail.com> Hello, As part of a hands-on in my cloud computing course, I plan to run Infinispan on Openshift Online. In the current state however, it seems that the Wildfly cartridge [1] does not allow remote access to Infinispan, even when using JMX. In fact, solely shared-memory accesses from an embedded application are possible (e.g.,following [2]). Such a pattern is of interest but the hands-on becomes much more involved. On the other hand, if I understand correctly JBoss Data Grid offers HotRod access [3], yet it seems not deployable on Openshift Online. Would you have any idea on that matters ? Many thanks in advance. Regards, Pierre [1] https://github.com/openshift-cartridges/openshift-wildfly-cartridge [2] https://github.com/JonSnow360/wildfly10-infinispan-quickstart [3] https://github.com/sgilda/wildfly-quickstart/tree/master/carmart From rvansa at redhat.com Thu Dec 17 10:52:01 2015 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 17 Dec 2015 16:52:01 +0100 Subject: [infinispan-dev] StackOverflow team? Message-ID: <5672DA21.5000604@redhat.com> SO has started the 'Teams' initiative [1], and since Hibernate team is registering, I wonder if there'll be Infinispan team, too. I personally don't see much value in that, but I am tech-asocial (I don't have twitter), so your experience may differ. 
Radim [1] http://meta.stackoverflow.com/questions/307513/the-power-of-teams-a-proposed-expansion-of-stack-overflow -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Thu Dec 17 10:56:42 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 17 Dec 2015 15:56:42 +0000 Subject: [infinispan-dev] StackOverflow team? In-Reply-To: <5672DA21.5000604@redhat.com> References: <5672DA21.5000604@redhat.com> Message-ID: +1 for that, but keep in mind it's a beta feature. You have to ask for it to be made available to you by contacting some SO admin. On 17 December 2015 at 15:52, Radim Vansa wrote: > SO has started the 'Teams' initiative [1], and since Hibernate team is > registering, I wonder if there'll be Infinispan team, too. > > I personally don't see much value in that, but I am tech-asocial (I > don't have twitter), so your experience may differ. > > Radim > > [1] > http://meta.stackoverflow.com/questions/307513/the-power-of-teams-a-proposed-expansion-of-stack-overflow > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Dec 18 05:24:22 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 18 Dec 2015 10:24:22 +0000 Subject: [infinispan-dev] Atomic counters / sequences over Hot Rod Message-ID: Hi all, I'm well aware that we don't have support for counters generally speaking, but for Infinispan in embedded mode I could so far use some clumsy workarounds: inefficient solutions but at least I could get it to work. I'm not as expert in using the remote client though. Could someone volunteer to draft me some possible solution? I would still hope the team will attack the issue of having efficient atomic counters too, but at this very moment I'd be happy with a temporary, low performance workaround. Server side scripting maybe? Thanks, Sanne From rhauch at redhat.com Fri Dec 18 10:31:12 2015 From: rhauch at redhat.com (Randall Hauch) Date: Fri, 18 Dec 2015 09:31:12 -0600 Subject: [infinispan-dev] Atomic counters / sequences over Hot Rod In-Reply-To: References: Message-ID: <56C5FD66-20EA-40D8-8114-B69F23E6B914@redhat.com> CRDTs, especially PNCounters, could be very valuable here to ensure eventual consistency of the counters. They don?t require total ordering of operations to be maintained, so this reduces the need for coordination and works better when stuff goes wrong. Sending requests more than once is still a problem, but no more so than with normal atomic counters. > On Dec 18, 2015, at 4:24 AM, Sanne Grinovero wrote: > > Hi all, > I'm well aware that we don't have support for counters generally > speaking, but for Infinispan in embedded mode I could so far use some > clumsy workarounds: inefficient solutions but at least I could get it > to work. > > I'm not as expert in using the remote client though. Could someone > volunteer to draft me some possible solution? > > I would still hope the team will attack the issue of having efficient > atomic counters too, but at this very moment I'd be happy with a > temporary, low performance workaround. > Server side scripting maybe? 
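Server-side scripting would save the round trips, but even with a plain 8.1 Hot Rod client you can get a crude (correct, just slow under contention) counter out of versioned replaces. A rough, untested sketch - it assumes the counter key has been seeded once with put(name, 0L):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.VersionedValue;

    long increment(RemoteCache<String, Long> cache, String name) {
       while (true) {
          VersionedValue<Long> current = cache.getVersioned(name);
          long next = current.getValue() + 1;
          // succeeds only if nobody modified the entry since we read it
          if (cache.replaceWithVersion(name, next, current.getVersion())) {
             return next;
          }
          // lost the race - re-read and retry
       }
    }

The same loop could also live in a server-side script invoked from the client, which would cut it down to one round trip per increment, but the client-side version at least needs nothing beyond the stock API.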
> > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev

From ttarrant at redhat.com Mon Dec 21 12:28:51 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 21 Dec 2015 18:28:51 +0100 Subject: [infinispan-dev] Weekly IRC meeting log 2015-12-21 Message-ID: <567836D3.1080008@redhat.com> Dear all, the logs for today's meeting have been diligently collected by brave JBott and are available here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-12-21-15.01.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat

From rory.odonnell at oracle.com Tue Dec 29 06:09:15 2015 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 29 Dec 2015 11:09:15 +0000 Subject: [infinispan-dev] Early Access builds b99 for JDK 9 & build b96 for JDK 9 with Project Jigsaw are available on java.net Message-ID: <568269DB.1080300@oracle.com> Hi Galder, Early Access b99 for JDK 9 is available on java.net; a summary of changes is listed here. Early Access b96 for JDK 9 with Project Jigsaw is available on java.net; a summary of changes is listed here. We have reached a milestone of 100 bugs logged by Open Source projects, thank you for your continued support in testing Early Access builds based on various OpenJDK Projects. Best wishes for the New Year, hope to catch up with you at FOSDEM in January. Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland