Object instance reuse for reference data during Marshalling
by Sanne Grinovero
Hi all,
we discussed this need a while back [1] and there's great news: Galder
created a demo showing how this might work on github:
- https://github.com/galderz/shared-object-spaces
In case you want to better understand what we're after, I'd suggest
having a look: the code is trivial to read and the example is spot-on.
This single test is the goal and the setup should be self-explanatory:
- https://github.com/galderz/shared-object-spaces/blob/master/src/main/java...
The magic is of course in how the Externalizers are allowed to keep a
registry of instances, essentially so they are no longer fully stateless.
I will now explore building this capability into Hibernate OGM, as some
performance tests had shown that without this feature the amount of
data was "ballooning" unexpectedly - especially when some object is
frequently read/written and there's not much variation. In short, we
need to reap the benefits of a properly normalized data structure.
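To make this concrete, here is a minimal sketch of what such a stateful
Externalizer could look like. It's only an illustration: the Country type
and its getIsoCode() accessor are stand-ins loosely based on the demo,
and the registry/id scheme below is my assumption, not the demo's actual
code.

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.infinispan.commons.marshall.AdvancedExternalizer;

// Keeps a registry of known Country instances so that only a small
// identifier travels on the wire; on read, the shared instance is
// returned instead of allocating a fresh copy for every entry.
public class CountryExternalizer implements AdvancedExternalizer<Country> {

   private final Map<String, Country> byIsoCode = new ConcurrentHashMap<>();

   public Country register(Country country) {
      byIsoCode.put(country.getIsoCode(), country);
      return country;
   }

   @Override
   public void writeObject(ObjectOutput output, Country country) throws IOException {
      // Write only the identifier of the shared instance
      output.writeUTF(country.getIsoCode());
   }

   @Override
   public Country readObject(ObjectInput input) throws IOException, ClassNotFoundException {
      // Return the pre-registered instance rather than a new object
      return byIsoCode.get(input.readUTF());
   }

   @Override
   public Set<Class<? extends Country>> getTypeClasses() {
      return Collections.<Class<? extends Country>>singleton(Country.class);
   }

   @Override
   public Integer getId() {
      return 1000; // arbitrary id in the user range
   }
}

The interesting part is simply that the instance map lives inside the
externalizer, which is exactly the "no longer fully stateless" point above.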
Some questions for further evolution:
# Injection framework
I don't love this need to produce serialized metadata at build time
and include it in our jars. I have just been playing with the demo to
see if I could avoid the injection, but I failed and can see that it's
necessary for now.
Essentially I'd prefer the CountryFactory *instance* to be created and
managed by OGM before bootstrapping Infinispan, and then to inject that
*instance* into the component registry. I guess I'm just asking if
we'll eventually be able to inject actual instances into the component
registry, rather than injecting either FQNs or Class instances?
Today I need to:
gcr.registerComponent(countryFactory, CountryFactory.class);
I'd love to do:
gcr.registerComponent(countryFactory, countryFactoryInstance);
This would allow me to look up the CountryFactory instance from the
component registry at various other integration points, such as in the
LifecycleCallbacks methods where I'd register the custom externalizers.
[Incidentally, I believe I asked for the same capability for other
components, like being able to inject the Transport by instance, as in
some situations I'd want to configure JGroups programmatically.]
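For context, this is roughly the shape of the lifecycle callback I'd
want to write, assuming the instance lookup were possible; CountryFactory
comes from the demo, and the externalizer registration is left as a
placeholder since which registration hook to use at that point is exactly
the open question:

import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.factories.GlobalComponentRegistry;
import org.infinispan.lifecycle.AbstractModuleLifecycle;

// Sketch only: look up the externally created CountryFactory from the
// component registry during bootstrap, so the stateful externalizers
// can be wired against that exact instance.
public class OgmModuleLifecycle extends AbstractModuleLifecycle {

   @Override
   public void cacheManagerStarting(GlobalComponentRegistry gcr, GlobalConfiguration globalConfiguration) {
      CountryFactory countryFactory = gcr.getComponent(CountryFactory.class);
      // ... register the custom externalizers against countryFactory here
   }
}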
# Cache scoping
These components are global now, as the Externalizer registry is
global. I understand the need to have global externalizers for
Infinispan itself, but when it comes to user data encoding - which is
cache-specific - could we separate this?
I'm guessing this would help with multi-tenancy designs and with
sharing of CacheManagers across different deployments (classloaders)
too.
Or - as we also asked a while back - allow independent CacheManagers
to share some core components, like the Transport. It's another
long-standing debate which I'd really hope to see improved.
# Size management in the registry implementations
This example code uses a simple HashMap. Clearly a real solution would
need a bounded container, possibly with some eviction policy, or one
that allows the application explicit eviction control.
I will need a way to either inject a local, bounded Cache instance
into this component, or possibly programmatically create a simple
cache outside the scope of the typical CacheManager?
I'm also wondering if this Cache should use weak references.
The primary use case in OGM is for trivially small and bounded data
(metadata identifiers) so this is not a problem for my specific use
case, I'm merely mentioning it as others might need this.
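For the general case, even something as trivial as an access-ordered
LinkedHashMap would cover the bounded/evicting requirement; this is just
a strawman, the class name and bound below are made up:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal bounded registry: keeps at most maxEntries instances and
// evicts the least recently accessed one when the bound is exceeded.
public class BoundedInstanceRegistry<K, V> {

   private final Map<K, V> instances;

   public BoundedInstanceRegistry(final int maxEntries) {
      this.instances = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                  return size() > maxEntries;
               }
            });
   }

   public V get(K key) {
      return instances.get(key);
   }

   public void put(K key, V value) {
      instances.put(key, value);
   }
}

A weak-reference or Cache-backed variant would follow the same contract.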
# Requirements?
OGM is still using Infinispan 8. Do you think I could try this out
already, or should we finish the upgrade to Infinispan 9 first? The
demo seems to work with Infinispan 8.2 as well: I'm just wondering if
I'm missing something.
Thanks!
Sanne
[1] https://issues.jboss.org/browse/ISPN-2133
http://lists.jboss.org/pipermail/infinispan-dev/2012-June/010925.html
http://lists.jboss.org/pipermail/infinispan-dev/2016-April/016602.html
7 years, 6 months
Rest server storage nuances
by Gustavo Fernandes
Hi all,
With the ongoing encoding revamp work on Infinispan, it's time to decide
how to handle the REST server.
The REST server currently stores, along with each entry, a string
representing the MIME type of that entry, which allows the user to
POST/PUT each entry in its own format.
At request time, using the Accept header, the user can request the entry
in a particular format, and the REST server internally extracts the MIME
type from the entry and converts it accordingly.
The issue with this approach is that, apart from the extra space
required, it makes it challenging to expose via REST anything less
trivial than put/get. Think for example of querying, consistent hash
calculations or stream operations: all those features will have a hard
time dealing with a cache where each entry has a different format.
Proposal:
Remove this behavior completely. For a given cache, all entries will be
homogeneous, just like with Hot Rod, Memcached and embedded mode. The
user can optionally configure the MIME type at cache level.
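To illustrate what that cache-level configuration might look like on the
programmatic side - purely hypothetical API, the encoding() and
mediaType() names below are just placeholders for whatever the revamp
ends up exposing:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class MediaTypeConfigSketch {

   // One media type for the whole cache, instead of one stored per entry;
   // the REST server would transcode requests/responses to and from it.
   static Configuration jsonCache() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.encoding().key().mediaType("application/json");
      builder.encoding().value().mediaType("application/json");
      return builder.build();
   }
}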
Impacts:
- Users may need to add the media type configuration to the cache if
they need to support multiple formats;
- Migrating from 9.1 to 9.2 may require re-populating the cache if multiple
formats are being used;
- From the API perspective, I don't expect any change: POST/PUT with
multiple formats should still be supported, as the internal transcoding
should convert them on the fly to the unified configured format.
Thoughts?
7 years, 6 months
Fwd: [jgroups-users] Revamping the JGroups workshop
by Bela Ban
FYI
-------- Forwarded Message --------
Subject: [jgroups-users] Revamping the JGroups workshop
Date: Thu, 10 Aug 2017 13:38:59 +0200
From: Questions/problems related to using JGroups
<javagroups-users(a)lists.sourceforge.net>
Reply-To: javagroups-users(a)lists.sourceforge.net
To: jg-users <javagroups-users(a)lists.sourceforge.net>
I'm thinking about holding the JGroups workshop [1] in Europe in the
fall and in the US early next year.
I'd have to upgrade the labs and slides to 4.0.x, and am thinking of
revamping it as follows:
- Remove sections on distributed caching; this is done by Infinispan /
JDG already
- Remove section on cross-datacenter replication (RELAY2)
- Expand sections on split brain issues, primary partition approach,
eventual consistency,
CAP, issues with split brain and distributed locks/counters etc
- Add a section on jgroups-raft?
- Expand:
  - Common problems, their diagnosis and fixes
    - E.g. members don't find each other, frequent member exclusions,
      incorrect thread pool sizing etc
  - Monitoring with probe
  - JGroups and docker
Feedback is appreciated!
Any favorite locations? I'm thinking Munich or Berlin for Europe and
Boston for the US...
Cheers,
[1] http://www.jgroups.org/workshops.html
--
Bela Ban | http://www.jgroups.org
_______________________________________________
javagroups-users mailing list
javagroups-users(a)lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
7 years, 6 months
Hot Rod client sending data to itself - ISPN-8186
by Galder Zamarreño
Hi,
Re: https://issues.jboss.org/browse/ISPN-8186
I've been looking at TRACE logs and what seems to happen is that sometimes, when the client needs to create a new Socket, it sends using the same local port as the Hot Rod server port. As a result, when the client sends something to the server, it also receives it, hence it ends up finding a request instead of a response. Analysis of the logs linked in the JIRA can be found in [1].
What I'm not sure about is how to fix this... There are ways to potentially pass a specific local port to a Socket [2], but this could be a bit messy: it'd require us to generate a random local port and see if it works, making sure it's not the server port...
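For reference, this is the kind of workaround [2] enables - a rough
sketch rather than a proposal; the port range and retry count are
arbitrary, and picking/retrying local ports ourselves is precisely the
messy part:

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.util.concurrent.ThreadLocalRandom;

public class ClientSocketSketch {

   // Bind the client socket to an explicit ephemeral local port, retrying
   // while the chosen port clashes with the server port or is already taken.
   static Socket connect(InetAddress serverAddress, int serverPort) throws IOException {
      for (int attempt = 0; attempt < 10; attempt++) {
         int localPort = ThreadLocalRandom.current().nextInt(49152, 65536);
         if (localPort == serverPort) {
            continue; // never reuse the server's own port
         }
         try {
            return new Socket(serverAddress, serverPort, InetAddress.getLoopbackAddress(), localPort);
         } catch (IOException e) {
            // local port probably in use already; try another one
         }
      }
      throw new IOException("Could not find a usable local port");
   }
}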
However, I think the real problem we're having here is that both the server and the client are bound to the same IP address, 127.0.0.1. A simpler solution could be a way to get the server onto a different IP address from the client, but what would that IP address be, and how do we make sure it always works? Bind the server to eth0?
Any other ideas?
Cheers,
[1] https://gist.github.com/galderz/b8549259ff65cb74505c9268eeec96a7
[2] http://docs.oracle.com/javase/6/docs/api/java/net/Socket.html#Socket(java...
--
Galder Zamarreño
Infinispan, Red Hat
7 years, 7 months
Ready for JDK 9 ?
by Rory O'Donnell
Hi Galder,
Thank you very much for all your testing of JDK 9 during its
development! Such contributions have significantly helped shape and
improve JDK 9.
Now that we have reached the JDK 9 Final Release Candidate phase [1], I
would like to ask whether your project can be considered 'ready for JDK
9', or if there are any remaining show-stopper issues which you've
encountered when testing with the JDK 9 release candidate.
JDK 9 b181 is available at http://jdk.java.net/9/
If you have a public web page, mailing list post, or even a tweet
announcing your project's readiness for JDK 9, I'd love to add the URL to
the upcoming JDK 9 readiness page on the Quality Outreach wiki.
Looking forward to hearing from you,
Rory
[1] http://openjdk.java.net/projects/jdk9/
--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland
7 years, 7 months
Re: [infinispan-dev] tuned profiles for Infinispan ?
by Sanne Grinovero
On 19 Jul 2017 14:52, "Dan Berindei" <dan.berindei(a)gmail.com> wrote:
Can't we just copy a profile from Hibernate or WildFly?
I'm not aware of such profiles. Remember these are system-wide settings;
I'm not sure how useful this is for libraries like Hibernate.
It seems more common for databases, I guess because they are more likely to
have a whole machine dedicated to their specialized use case.
WildFly is an interesting idea but I suspect they'd rather inherit from the
Infinispan tuning recommendations.
Dan
On Wed, Jul 19, 2017 at 2:40 PM, Emmanuel Bernard <emmanuel(a)hibernate.org>
wrote:
> I don’t think it discourages; the people you mention would simply use the
> “default” profile. At least with a list of profiles, the idea of tuning
> pops into your mind and you can go further.
>
> On 18 Jul 2017, at 15:05, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
>
> I have mixed feelings about this, to be honest. On one hand this gives a
> really good experience for new users (just pick a profile you want to use),
> but on the other hand tools like this discourage users from doing proper
> tuning work (why should I read any documentation and do anything if
> everything has already been provided by the Infinispan authors?).
>
> Nevertheless I think it might be worth doing a POC and hosting profiles in
> a separate repository (to avoid user confusion).
>
> On Tue, Jul 11, 2017 at 6:49 PM Sanne Grinovero <sanne(a)infinispan.org>
> wrote:
>
>> Hi all,
>>
>> tuned is a very nice utility to apply all kind of tuning options to a
>> machine focusing on performance options.
>>
>> Of course it doesn't replace the tuning that an expert could provide
>> for a specific system, but it gives people a quick and easy way to get
>> to a reasonable starting point, which is much better than the generic
>> out-of-the-box settings of a Linux distribution.
>>
>> In many distributions it runs at bootstrap transparently; for example
>> it will automatically apply a "laptop" profile if it detects it is
>> running on a laptop, and it might be the little tool which switches your
>> settings to a higher-performance profile when you plug in the laptop.
>>
>> There's some good reference here:
>> - https://access.redhat.com/documentation/en-US/Red_Hat_Enterp
>> rise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_
>> Enterprise_Linux-Performance_Tuning_Guide-Performance_
>> Monitoring_Tools-tuned_and_tuned_adm.html
>>
>> It's also easy to find it integrated with other tools, e.g. you can
>> use Ansible to set a profile.
>>
>> Distributions like Fedora include out-of-the-box profiles which are
>> good base tuning settings to run e.g. an Oracle RDBMS or a HANA
>> database, or just to tune for latency rather than throughput.
>> Communities like Hadoop also provide suggested tuned settings.
>>
>> It would be great to distribute an Infinispan-optimised profile. We
>> could ask the Fedora team to include it, as I feel it's important to
>> have a profile there, or at least have one provided by any Infinispan
>> RPMs.
>>
>> Thanks,
>> Sanne
> --
> SEBASTIAN ŁASKAWIEC
>
> INFINISPAN DEVELOPER
> Red Hat EMEA <https://www.redhat.com/>
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
7 years, 7 months
Late end invalidation delivery stops put from load - ISPN-8114
by Galder Zamarreño
Hey Radim,
Re: https://issues.jboss.org/browse/ISPN-8114
I've been looking at the trace logs of this failure and have extracted the most interesting parts into [1].
What happens is that after loading the entries into the cache, the end invalidation message that allows put-from-loads to succeed does not get executed before the put-from-load is attempted. As a result, the put-from-load does not happen and hence the entry is not loaded into the cache.
The end invalidation message eventually gets through. There's a gap between receiving the JGroups message and the actual execution, but that's due to the delivery mode of the message.
I'm not sure how we should fix this. Options:
1) A thread sleep before loading entries "might work", but for a CI test this could always backfire given unlucky timing.
2) Find a way to hook into the PFLValidator class and only load after we know the end invalidation has been received by all nodes.
3) Make the end invalidation message sync? This would be expensive. Even with async, changing the delivery mode might have worked here... but under the right circumstances you could still hit the same issue with async.
I'm keen on trying to find a potential solution using 2), but wondered if you have other ideas.
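Just to visualise what I mean by 2) - everything below is hypothetical,
PFLValidator exposes no such callback today, so this is only the shape of
the synchronisation the test would need:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical barrier: the test would load entries only after every node
// has acknowledged the end invalidation, so put-from-load cannot race it.
public class EndInvalidationBarrier {

   private final CountDownLatch latch;

   public EndInvalidationBarrier(int expectedNodes) {
      this.latch = new CountDownLatch(expectedNodes);
   }

   // Would be invoked from a (currently non-existent) hook in PFLValidator
   // when a node has processed the end-invalidation command.
   public void endInvalidationReceived() {
      latch.countDown();
   }

   public boolean awaitEndInvalidation(long timeout, TimeUnit unit) throws InterruptedException {
      return latch.await(timeout, unit);
   }
}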
Cheers,
[1] https://gist.github.com/galderz/0bce6dce16de018375e43e25c0cf3913
--
Galder Zamarreño
Infinispan, Red Hat
7 years, 7 months