Re: [infinispan-dev] Openshift blogposts
by Sebastian Laskawiec
Hey Radim,
Moving to dev mailing list.
Comments inlined.
Thanks,
Sebastian
On Tue, May 2, 2017 at 5:28 PM Radim Vansa <rvansa(a)redhat.com> wrote:
> Hi Sebastian,
>
> I am currently getting acquainted with OpenShift so I have been reading
> your blogposts about that. Couple of questions:
>
> http://blog.infinispan.org/2016/10/openshift-and-node-affinity.html
>
> - so you need to have different deployment config for each rack/site?
>
Yes. A while ago I read an article about managing the scheduler using labels:
https://blog.openshift.com/deploying-applications-to-specific-nodes/
So I think it can be optimized to 1 DeploymentConfig + some magic in
spec.template. But that's only my intuition. I haven't played with this yet.
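For reference, the per-rack approach from that blogpost boils down to something like this (just a sketch, untested; the label, DeploymentConfig and image names are made up):

    # Label the nodes first, e.g.: oc label node node-1 rack=r1
    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: infinispan-rack-r1
    spec:
      replicas: 3
      template:
        spec:
          nodeSelector:
            rack: r1          # pods from this DeploymentConfig land only on rack r1 nodes
          containers:
          - name: infinispan
            image: jboss/infinispan-server

Collapsing that into a single DeploymentConfig would mean templating/patching the nodeSelector per rack, which is exactly the part I haven't tried yet.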
>
> http://blog.infinispan.org/2017/03/checking-infinispan-cluster-health-and...
>
> maxUnavailable: 1 and maxSurge: 1 don't sound too good to me - if you
> can't fit all the data into single pod, you need to set maxUnavailable:
> 0 (to not bring any nodes down before the rolling upgrade completes) and
> maxSurge: 100% to have enough nodes started. + Some post-hook to make
> sure all data are in new cluster before you bring down the old one. Am I
> missing something?
>
Before answering those questions, let me show you two examples:
- maxUnavailable: 1, maxSurge: 1
  oc logs transactions-repository-2-deploy -f
    --> Scaling up transactions-repository-2 from 0 to 3, scaling down
        transactions-repository-1 from 3 to 0 (keep 2 pods available, don't
        exceed 4 pods)
        Scaling transactions-repository-2 up to 1
        Scaling transactions-repository-1 down to 2
        Scaling transactions-repository-2 up to 2
        Scaling transactions-repository-1 down to 1
        Scaling transactions-repository-2 up to 3
        Scaling transactions-repository-1 down to 0
    --> Success
- maxUnavailable: 0, maxSurge: 100%
  oc logs transactions-repository-3-deploy -f
    --> Scaling up transactions-repository-3 from 0 to 3, scaling down
        transactions-repository-2 from 3 to 0 (keep 3 pods available, don't
        exceed 6 pods)
        Scaling transactions-repository-3 up to 3
        Scaling transactions-repository-2 down to 1
        Scaling transactions-repository-2 down to 0
    --> Success
So we are talking about a Kubernetes rolling update here. You have a new
version of your deployment (e.g. with updated parameters, labels etc.) and
you want to update that deployment in Kubernetes (not to be confused with an
Infinispan Rolling Upgrade, where the intention is to roll out a new
Infinispan cluster).
The former approach (maxUnavailable: 1, maxSurge: 1) allocates an additional
Infinispan node for greater cluster capacity and then scales the old
cluster down. Scaling down sends a termination (TERM) signal [1] to the Pod,
so it gets a chance to shut down gracefully. As a side effect, this also
triggers a cluster rebalance (since one node leaves the cluster). And we go
on like this until the old cluster has been replaced with the new one.
The latter approach spins a whole new cluster up. Then Kubernetes sends the
termination signal to *all* old cluster members.
Both approaches should work if configured correctly (the former relies
heavily on readiness probes and the latter on moving data off the node
after it receives the termination signal). However, I would assume the
latter generates much more network traffic in a short period of time, which
I consider a bit more risky.
Regarding a hook which ensures all data has been migrated - I'm not sure
how to build such a hook. The main idea is to keep the cluster in an
operational state so that none of the clients notice the rollout. It works
like a charm with the former approach.
[1]
https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
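For completeness, the second variant Radim suggests maps onto the DeploymentConfig roughly like this (a sketch):

    spec:
      strategy:
        type: Rolling
        rollingParams:
          maxUnavailable: 0     # never take an old pod down before its replacement is ready
          maxSurge: "100%"      # bring up a full-sized new cluster next to the old one
          timeoutSeconds: 600

whereas the first variant keeps the defaults (maxUnavailable: 1, maxSurge: 1) and relies on the readiness probe to pace the rollout.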
> Radim
>
> --
> Radim Vansa <rvansa(a)redhat.com>
> JBoss Performance Team
>
> --
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
My weekly report
by Vittorio Rigamonti
Hi team,
I won't be able to attend today's weekly meeting.
My updates for the last two weeks:
JENKINS
Worked on Jenkins to set up the build pipeline for the C++ and C# clients. This
task is completed, but we still have these open points:
- the Windows machine needs a manual start-up at the moment, but we want
it to be automatic
- we need to study how to expose the produced release artifacts
8.1.1
Worked on code cleanup for a 0.0.1 release. I'm collecting all the changes
here: https://github.com/rigazilla/cpp-client/tree/HRCPP-373/warning
I would like to clean up the SChannel socket implementation (Windows), but I
need to get a deeper knowledge of the Windows security API. I'm currently
working on this.
--
Vittorio Rigamonti
Senior Software Engineer
Red Hat
<https://www.redhat.com>
Milan, Italy
vrigamon(a)redhat.com
irc: rigazilla
<https://red.ht/sig>
Unwrapping exceptions
by Katia Aresti
Hi all!
Radim pointed me to this thread discussing the exceptions thrown by the
lambda executed by the user.
So, I've come across this problem right now with the compute methods.
computeIfAbsent is used by the QueryCache [1].
This method is now a Command, so when the wrapped lambda throws an
exception [2], the expected exception is the one raised by the lambda. But
with my modifications, this exception is wrapped in a CacheException.
I discussed this with Adrien yesterday, and in both his opinion and mine, a
CacheException is not the same thing as the exception raised inside the
lambda. Moreover, in this particular case, I don't know whether some users'
code could be broken if we make the user get a CacheException that wraps
the ParseException instead of the ParseException itself.
How can I fix the problem?
Should we correct the tests and say that, from now on, a CacheException
will be raised?
Should we handle this CacheException in the QueryCache class when
computeIfAbsent is called?
Should we propagate the lambda's exception as it is?
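For what it's worth, the "handle it in QueryCache" option would boil down to something like this at the call site (just a sketch, not the actual QueryCache code; it assumes the original exception is preserved as the cause of the CacheException):

    import java.util.function.Function;
    import org.infinispan.Cache;
    import org.infinispan.commons.CacheException;

    public class LambdaExceptionUnwrapping {

       // If the command wrapped the lambda's exception in a CacheException,
       // rethrow the original cause so callers keep seeing e.g. the
       // ParseException they used to get before the change.
       static <K, V> V computeIfAbsentUnwrapped(Cache<K, V> cache, K key,
             Function<? super K, ? extends V> mappingFunction) {
          try {
             return cache.computeIfAbsent(key, mappingFunction);
          } catch (CacheException e) {
             if (e.getCause() instanceof RuntimeException) {
                throw (RuntimeException) e.getCause();
             }
             throw e;
          }
       }
    }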
Katia
[1]
https://github.com/infinispan/infinispan/blob/master/query/src/main/java/...
[2]
https://github.com/infinispan/infinispan/blob/master/query/src/test/java/...
Documentation code snippets
by Jiri Holusa
Moving this to infinispan-dev.
I've just issued a PR [1], where I set up the code snippet generation. It was actually pretty easy. I started implementing it for the configuration part of the documentation and I came across the following findings/issues.
There were more votes for option 2 (see the previous mail for details; in summary, using the existing testsuite), hence I started with that. Pretty soon I ran into the following issues:
* XML configuration - since we want to have the <infinispan> element in the configuration, I have to create one XML file per configuration code snippet -> the number of files will grow and mess up the "normal" testsuite
* IMHO the biggest problem - our testsuite is usually not written with "documentation simplicity" in mind. For example, in the testsuite we barely (= never) do "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");" - we obtain the cache manager through some helper method. While this is great for testing, you don't want it in the documentation, which should be simple and straightforward. Another example would be [2]. Look at the programmatic configuration snippets. In the testsuite we usually don't have such a trivial setup written so comprehensibly in one place.
* When you want to introduce a new code snippet, how can you be sure that the snippet isn't already somewhere in the testsuite, just written a bit differently? I ran into this right from the beginning, searching the test classes and looking for a "good enough" code snippet that I could reuse.
Together it seems to me that it would mess up the testsuite quite a bit, make the maintenance of documentation harder and significantly prolong the time needed for writing new documentation. What do you think? How about going the same way as Hibernate (option 1 in the first email) - creating a separate documentation testsuite that is as simple as possible, descriptive and straightforward.
I don't really care which option we choose - I will implement it either way - but I wanted to show that there are some pitfalls to option 2 as well :(
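For anyone who hasn't seen the Hibernate mechanism, here is a minimal sketch of what an included snippet could look like (the class name, tag name and {sourcedir} attribute are made up); the .adoc side would then pull in just the region with include::{sourcedir}/ConfigurationDocTest.java[tags=build-manager]:

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class ConfigurationDocTest {

       public void testBuildCacheManager() {
          // tag::build-manager[]
          // Build a cache manager programmatically and define a simple cache
          EmbeddedCacheManager cacheManager = new DefaultCacheManager();
          cacheManager.defineConfiguration("docs", new ConfigurationBuilder().build());
          Cache<String, String> cache = cacheManager.getCache("docs");
          cache.put("key", "value");
          // end::build-manager[]
          cacheManager.stop();
       }
    }

Only the lines between the tag comments end up in the documentation, so the test scaffolding around them stays out of the guide.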
Cheers,
Jiri
[1] https://github.com/infinispan/infinispan/pull/5115
[2] http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_...
----- Forwarded Message -----
> From: "Jiri Holusa" <jholusa(a)redhat.com>
> To: "infinispan-internal" <infinispan-internal(a)redhat.com>
> Sent: Friday, April 7, 2017 6:33:53 PM
> Subject: [infinispan-internal] Documentation code snippets
>
> Hi everybody,
>
> during the documentation review for JDG 7.1 GA, I came across this little
> thing.
>
> Having good documentation is IMHO crucial for people to like our technology,
> and the key point is having the code snippets in the documentation up to date
> and working. During the review of my parts, I found many, many outdated
> code snippets, either non-compilable or using deprecated methods. I would
> like to eliminate this issue in the future, as it would make our
> documentation better and also remove a burden when doing documentation review.
>
> I did some research and found out that the Hibernate team (thanks Radim and
> Sanne for the information) does a very cool thing: the code snippets are
> taken right from the testsuite. This way they know that a code snippet
> always compiles, and they can also make sure that it works properly. I
> would definitely love to see the same in Infinispan.
>
> It works extremely simply: you mark with a comment in the test the part you
> want to include in the documentation; see an example here for the AsciiDoc
> part [1] and here for the test part [2]. There are two ways to organize
> that:
> 1) create a separate "documentation testsuite" with test classes that are as
> simple as possible - the Hibernate team does it this way. Pros: documentation
> is easily separated. Cons: possible duplication.
> 2) use the existing testsuite, marking the parts in the existing tests. Pros:
> no duplication. Cons: documentation snippets are spread all across the
> testsuite.
>
> I would definitely volunteer to make this happen in Infinispan
> documentation.
>
> What do you guys think about it?
>
> Cheers,
> Jiri
>
> [1]
> https://raw.githubusercontent.com/hibernate/hibernate-validator/master/do...
> [2]
> https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/...
>
>
All jars must go?
by Galder Zamarreño
Hi all,
As you might already know, there have been big debates about the upcoming Java 9 module system.
Recently Stephen Colebourne, creator of Joda-Time, posted his thoughts [1].
Stephen mentions some potential problems with all jars, since no two modules may contain the same package. We know from past experience that using these jars as dependencies in Maven creates all sorts of problems, but with the new JPMS they might not even work?
Have we tried the all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded, since Java itself offers no publishing. I mean, for the problem Stephen mentions to appear, you'd have to have both an all jar and the individual jars at runtime, in which case it would fail. But as long as Maven does not enforce this in its repos, I think it's fine. If Maven starts enforcing this for the jars stored in Maven repos, then yeah, we have a big problem.
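To make the split-package point concrete, this is roughly the situation Stephen warns about (the module names below are hypothetical automatic-module names derived from the jar file names):

    // module-info.java of an application that ends up with both the "all" jar
    // and an individual jar on the module path. Since both automatic modules
    // contain the same packages (org.infinispan.*), JPMS refuses to resolve the
    // module graph at startup - split packages are not allowed.
    module com.example.app {
        requires infinispan.embedded;   // hypothetical automatic module for the all jar
        requires infinispan.core;       // hypothetical automatic module for infinispan-core
    }

On the classpath nothing changes, which is why the problem would only show up once people actually start putting our jars on the module path.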
Thoughts?
Cheers,
[1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html
--
Galder Zamarreño
Infinispan, Red Hat
Simplest way to check the validity of connection to Remote Cache
by Ramesh Reddy
Hi,
Is there a call I can make on the cache API, like a ping, to check the validity of the remote connection? In OpenShift, JDV is having issues keeping its connections to JDG fresh when the node count goes to zero and comes back up.
Thank you.
Ramesh..
TLS/SNI support for Relay protocol
by Sebastian Laskawiec
Hey Bela!
I've been thinking about Cross Site Replication using Relay protocol on
Kubernetes/OpenShift. Most of the installations should use Federation [1]
but I can also imagine a custom installation with two sites (let's call
them X and Y) and totally separate networks. In that case, the flow through
Kubernetes/OpenShift might look like the following:
Site X, Pod 1 (sending relay message) ---> sending packets ---> the
Internet ---> Site Y, Ingress/Route ---> Service ---> Site Y, Pod 1
Ingress/Routes and Services are Kubernetes/OpenShift "things". The former
acts as a reverse proxy and the latter as a load balancer.
Unfortunately Ingress/Routes don't have good support for custom protocols
using TCP (they were designed with HTTP in mind). The only way to make it
work is to use TLS with SNI [2][3]. So we would need to encrypt all traffic
with TLS and use the application FQDN (a fully qualified application name,
something like infinispan-app-2-myproject.site-x.com) as the SNI hostname.
Note that the FQDNs for the two sites might be slightly different -
Infinispan on site X might want to use an FQDN containing site Y in its name
and vice versa.
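As an illustration, the passthrough Route on the receiving site would look roughly like this (a sketch; the host and service names are made up):

    apiVersion: v1
    kind: Route
    metadata:
      name: infinispan-relay
    spec:
      host: infinispan-app-2-myproject.site-y.com   # the SNI hostname the other site connects to
      to:
        kind: Service
        name: infinispan-app-2
      tls:
        termination: passthrough    # the router only inspects the SNI and forwards the raw TLS stream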
I was wondering if it is possible to configure JGroups this way. If not,
are there any plans to do so?
Thanks,
Sebastian
[1] https://kubernetes.io/docs/concepts/cluster-administration/federation/
[2] https://www.ietf.org/rfc/rfc3546.txt
[3] Look for "Passthrough Termination"
https://docs.openshift.com/enterprise/3.2/architecture/core_concepts/rout...
--
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>