Anyone using AdvancedCache.with(ClassLoader) functionality?
by Galder Zamarreño
Hi all,
AdvancedCache.with(ClassLoader) is an outdated piece of functionality that we're interested in removing altogether in the next major Infinispan version.
We're thinking of removing it without a deprecation cycle, since we believe it was only used by older JBoss Application Server / WildFly versions.
If you're still using this functionality, please let us know as soon as possible.
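For anyone checking their code, here is a minimal self-contained sketch of the pattern in question. This is a toy model, not the real Infinispan API: with() returns a view over the same cache data bound to a given class loader, which the real method used when deserializing stored values.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the AdvancedCache.with(ClassLoader) pattern: with() returns a
// view of the same underlying store, bound to a different class loader.
// Names are illustrative stand-ins, not the real Infinispan classes.
class ToyAdvancedCache {
    private final Map<String, byte[]> store;
    private final ClassLoader classLoader;

    ToyAdvancedCache(Map<String, byte[]> store, ClassLoader classLoader) {
        this.store = store;
        this.classLoader = classLoader;
    }

    // Returns a new view over the same store, bound to the given class loader;
    // the original cache instance is left untouched.
    ToyAdvancedCache with(ClassLoader cl) {
        return new ToyAdvancedCache(store, cl);
    }

    ClassLoader boundClassLoader() {
        return classLoader;
    }
}
```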
Cheers,
--
Galder Zamarreño
Infinispan, Red Hat
Fwd: [jgroups-users] Event.MSG and JGRP-2067
by Bela Ban
FYI
-------- Forwarded Message --------
Subject: [jgroups-users] Event.MSG and JGRP-2067
Date: Tue, 26 Jul 2016 13:44:47 +0200
From: Questions/problems related to using JGroups
<javagroups-users(a)lists.sourceforge.net>
Reply-To: javagroups-users(a)lists.sourceforge.net
To: jg-users <javagroups-users(a)lists.sourceforge.net>
So far, all messages to be sent and all received messages have always
been wrapped in an Event, e.g. when calling JChannel.send(Message msg):
Event evt=new Event(Event.MSG, msg);
channel.down(evt);
This caused the creation of an Event instance for every sent and
received message.
In [1], I changed this and added two methods to Protocol:
public Object down(Message msg);
public Object up(Message msg);
These callbacks are now called instead of down(Event) and up(Event)
whenever a message is sent or received. Since messages make up 99.9% of
all traffic up and down a stack, this change should reduce the memory
allocation rate even more, although Event instances are very short-lived
and usually die in eden.
The downside is that this breaks code: devs who've handled messages
and events in the same method (up(Event) / down(Event)) now have to
break out the message-handling code into separate methods (up(Message) /
down(Message)).
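To make the migration concrete, here is a minimal self-contained sketch of the change: classes here are simplified stand-ins mirroring the JGroups names, not the real API.

```java
// Simplified stand-in for the JGroups Event wrapper: pre-4.0, every message
// travelling up or down the stack was boxed in one of these.
class Event {
    static final int MSG = 1;
    final int type;
    final Object arg;
    Event(int type, Object arg) { this.type = type; this.arg = arg; }
}

// Simplified stand-in for org.jgroups.Message.
class Message {
    final String payload;
    Message(String payload) { this.payload = payload; }
}

class Protocol {
    // 3.6.x style: everything, including messages, arrives as an Event,
    // so message handling lived inside up(Event).
    Object up(Event evt) {
        return "event:" + evt.type;
    }

    // 4.0 style: messages get their own callback, so no Event instance is
    // allocated for the 99.9% case. Message handling moves here.
    Object up(Message msg) {
        return "msg:" + msg.payload;
    }
}
```

Since the overload is resolved on the static type, a protocol that previously branched on Event.MSG inside up(Event) now simply overrides up(Message) instead.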
This change is quite big (111 files changed, 2552 insertions(+), 2796
deletions(-)), but only affects protocol developers (and devs who
implement UpHandler directly).
This is for 4.0; 3.6.x is unaffected.
Let me know (via the mailing list) if you encounter any problems.
Cheers,
[1] https://issues.jboss.org/browse/JGRP-2067
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Early Access builds of JDK 8u112 b03, JDK 9 b128 are available on java.net
by Rory O'Donnell
Hi Galder,
Early Access b128 <https://jdk9.java.net/download/> for JDK 9 is
available on java.net; a summary of changes is listed here
<http://www.java.net/download/java/jdk9/changes/jdk-9+128.html>.
Early Access b127 <https://jdk9.java.net/jigsaw/> (#5304) for JDK 9 with
Project Jigsaw is available on java.net; a summary of changes is listed
here
<http://download.java.net/java/jigsaw/archive/127/binaries/jdk-9+127.html>.
Early Access b03 <https://jdk8.java.net/download.html> for JDK 8u112 is
available on java.net; a summary of changes is listed here
<http://www.java.net/download/java/jdk8u112/changes/jdk8u112-b03.html>.
Alan Bateman posted that the new EA builds contain an initial
implementation of the current proposals; more info at [0]:
The jigsaw/jake forest has been updated with an initial
implementation of the proposals that Mark brought to the
jpms-spec-experts mailing list last week. For those that don't build
from source then the EA build/downloads [1] has also been refreshed.
Rgds,
Rory
[0] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-July/008467.html
[1] https://jdk9.java.net/jigsaw/
--
Rgds,
Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
Kubernetes/OpenShift Rolling updates and configuration changes
by Sebastian Laskawiec
Hey!
I've been thinking about the potential use of the Kubernetes/OpenShift
(OpenShift = Kubernetes + additional features) Rolling Update mechanism for
updating the configuration of Hot Rod servers. You can find more information
about rolling updates here [1][2], but to put it simply, Kubernetes
replaces nodes in the cluster one at a time. It's worth mentioning that
Kubernetes ensures the newly created replica is fully operational
before taking down the next one.
There are two things that make me scratch my head...
#1 - What type of configuration changes can we introduce using rolling
updates?
I'm pretty sure introducing a new cache definition won't do any harm. But
what if we change a cache type from Distributed to Replicated? Do you have
any idea which configuration changes are safe and which are not? Could we
come up with such a list?
#2 - How to prevent losing data during the rolling update process?
In Kubernetes we have a mechanism called lifecycle hooks [3] (we can invoke
a script during container startup/shutdown). The problem with the shutdown
script is that it's time-constrained: if it doesn't finish within a certain
amount of time, Kubernetes will simply kill the container. Fortunately,
this time is configurable.
The idea for preventing data loss would be to invoke (enqueue and wait
for completion) a state transfer process triggered by the shutdown hook
(with the timeout set to its maximum value). If for some reason this
doesn't work (e.g. a user has so much data that migrating it this way
would take ages), there is a backup plan - Infinispan Rolling Upgrades [4].
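As a sketch of what the shutdown-hook wiring could look like (all names here are hypothetical; in particular, the drain-and-wait script is an assumption, not an existing Infinispan tool):

```yaml
# Pod template fragment (illustrative): run a drain script on shutdown and
# give it a generous grace period so state transfer can complete.
spec:
  terminationGracePeriodSeconds: 86400   # the configurable kill timeout
  containers:
  - name: infinispan-hotrod             # hypothetical container name
    image: example/infinispan-server    # hypothetical image
    lifecycle:
      preStop:
        exec:
          # Hypothetical script that triggers state transfer and blocks
          # until it has finished.
          command: ["/opt/scripts/drain-and-wait.sh"]
```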
What do you think about this?
Thanks
Sebastian
[1] https://www.youtube.com/watch?v=9C6YeyyUUmI
[2] http://kubernetes.io/docs/user-guide/rolling-updates/
[3]
http://kubernetes.io/docs/user-guide/container-environment/#container-hooks
[4]
http://infinispan.org/docs/stable/user_guide/user_guide.html#_Rolling_cha...
Deprecating the @ProvidedId annotation w/o a replacement in place
by Sanne Grinovero
I'm deprecating the `org.hibernate.search.annotations.ProvidedId`
annotation in Hibernate Search.
This was originally introduced when Infinispan Query was first
designed, as a way to mark the Infinispan value object as "something
which doesn't contain the id", since in the key/value store world the
key usually cannot be extracted from the value (a difference
from the Hibernate ORM world).
In early days, this meant that all indexed objects in Infinispan had
to be marked with this, but we quickly fixed this oddness by simply
assuming that - when using Hibernate Search to index Infinispan
objects - we might as well consider them all annotated with
@ProvidedId implicitly.
So the main reason for this annotation to exist is long gone, but its
role evolved beyond that.
This annotation also enabled a couple of additional features:
A] allow the user to pick the index field name used to store the IDs
B] allow binding a custom FieldBridge to the key type
# A: customizing the field name from "providedId"
I don't think this is actually very useful. It is complex to handle
when different types might want to override this, and to define the
rules for how such an override applies across inherited types.
I'm proposing we take this "mapping flexibility" away with no replacement.
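For context, here is a minimal toy reconstruction of feature A. This mirrors the shape of the annotation from memory and is not the real org.hibernate.search.annotations.ProvidedId (which, as discussed under B, also carries a bridge member):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Toy reconstruction of the annotation under discussion; names and defaults
// are illustrative, not the real Hibernate Search definition.
@Retention(RetentionPolicy.RUNTIME)
@interface ProvidedId {
    // Feature A: lets the user override the index field name used to
    // store the externally-provided id.
    String name() default "providedId";
}

// Example indexed Infinispan value overriding the id field name.
@ProvidedId(name = "cacheKey")
class IndexedValue {
}
```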
# B: custom FieldBridge for indexing of Infinispan keys
Infinispan already has the notion of Transformers, which is similar
but not quite the same. The differences are confusing, and neither of
them actually makes it very clear how to e.g. search by some attribute
of the key type.
Clearly there's need for a better approach to deal with keys, and
@ProvidedId doesn't fit well in such plans.
For now I plan to mark @ProvidedId as deprecated, although I won't
remove it until we have an alternative in place to better deal
with keys.
However, I'm unable to properly document what its replacement should
be until we've fleshed out that alternative.
I'd like to proceed with the deprecation even without having the
replacement ready, as I suspect what we had so far for indexing keys
was not good enough anyway. Deprecating it is rather urgent, as it
turns out it's quite confusing when this annotation should be used.
Thanks,
Sanne