Switch to AffinityPartitioner by default, or even enforce it
by Sanne Grinovero
Hi all,
I would like for Infinispan to use the AffinityPartitioner by default,
replacing the HashFunctionPartitioner.
This should have zero impact, as AffinityPartitioner extends
HashFunctionPartitioner and only changes semantics when a key
implements AffinityTaggedKey.
So the difference would be that, for those using AffinityTaggedKey,
it would work out of the box without also having to change the
configuration.
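To make it concrete, here's a minimal sketch of what an affinity-tagged key
looks like (assuming only the getAffinitySegmentId() method declared by
AffinityTaggedKey; the class and field names are purely illustrative):

import org.infinispan.distribution.ch.AffinityTaggedKey;

// Illustrative key that pins itself to a chosen segment.
public final class OrderKey implements AffinityTaggedKey {

   private final String orderId;
   private final int segmentId;

   public OrderKey(String orderId, int segmentId) {
      this.orderId = orderId;
      this.segmentId = segmentId;
   }

   @Override
   public int getAffinitySegmentId() {
      // A negative value would mean "no affinity", i.e. fall back to hashing.
      return segmentId;
   }

   // equals()/hashCode() based on orderId are omitted for brevity,
   // but any real cache key needs them.
}

With AffinityPartitioner as the default, storing entries under such a key
would land them in the requested segment without touching the
key-partitioner configuration.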
WDYT?
As a further but separate improvement, I'd change the
AffinityPartitioner to use delegation to the HashFunctionPartitioner
instead of extending it, and always wrap any user-configured
partitioner with the AffinityPartitioner.
This would ensure that AffinityTaggedKeys work as expected even when
people experiment with other partitioners, and it avoids some complexity
when configuring Infinispan:
"oh, I didn't know that changing hashing function would break feature [x]..."
For the record, we're using AffinityTaggedKey in our evil plans to
improve query performance, but it has also sparked interest from the
HACEP team as a very useful feature.
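Going back to the delegation idea, roughly what I have in mind is something
like the following sketch (it assumes the KeyPartitioner SPI with its
getSegment(Object) method and an init(HashConfiguration) callback; treat it
as an illustration rather than the final shape):

import org.infinispan.configuration.cache.HashConfiguration;
import org.infinispan.distribution.ch.AffinityTaggedKey;
import org.infinispan.distribution.ch.KeyPartitioner;

// Sketch only: affinity-tagged keys short-circuit to their requested
// segment, everything else is delegated to the user-configured partitioner.
public class AffinityWrappingPartitioner implements KeyPartitioner {

   private final KeyPartitioner delegate;

   public AffinityWrappingPartitioner(KeyPartitioner delegate) {
      this.delegate = delegate;
   }

   @Override
   public void init(HashConfiguration configuration) {
      delegate.init(configuration);
   }

   @Override
   public int getSegment(Object key) {
      if (key instanceof AffinityTaggedKey) {
         int segment = ((AffinityTaggedKey) key).getAffinitySegmentId();
         if (segment >= 0) {
            return segment;
         }
      }
      return delegate.getSegment(key);
   }
}

The wiring would then wrap whatever partitioner the user configured, so the
affinity behaviour can't be lost by accident.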
I have patches ready...
Thanks,
Sanne
Re: [infinispan-dev] Infinispan and OpenShift/Kubernetes PetSets
by Sebastian Laskawiec
Hey Kevin!
The timing for looking into PetSets is perfect I think. Kubernetes upstream
folks are thinking about Sticky IPs [16] (which are not essential for
Infinispan but other projects might be really interested in this) and I
asked some time ago about exposing PetSets to the outside world [17] (which
is essential for accessing the cluster using the Hot Rod client).
Please keep me in the loop. I'm really interested in this.
Thanks
Sebastian
[16] https://github.com/kubernetes/kubernetes/issues/28969
[17] https://groups.google.com/forum/#!topic/kubernetes-dev/K-9KA_wMbmk
On Sun, Aug 21, 2016 at 7:05 PM, Kevin Conner <kconner(a)redhat.com> wrote:
> Apologies, I’ve been travelling to London for the JDG meeting.
>
> On 19 Aug 2016, at 10:00, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
> > I've been playing with Kubernetes PetSets [1] for a while and I'd like
> to share some thoughts. Before I dig in, let me give you some PetSets
> highlights:
> > • PetSets are alpha resources for managing stateful apps in
> Kubernetes 1.3 (and OpenShift Origin 1.3).
> > • Since this is an alpha resource, there are no guarantees about
> backwards compatibility. Alpha resources can also be disabled in some
> public cloud providers (you can control which API versions are accessible
> [2]).
> >     • PetSets allow starting pods in sequence (not relevant for us,
> but this is a killer feature for master-slave systems).
> >     • Each Pod has its own unique entry in DNS, which makes discovery
> very simple (I'll dig into that a bit later)
> > • Volumes are always mounted to the same Pods, which is very
> important in Cache Store scenarios when we restart pods (e.g. Rolling
> Upgrades [3]).
> > Thoughts and ideas after spending some time playing with this feature:
> > • PetSets make discovery a lot easier. It's a combination of two
> things - Headless Services [4] which create multiple A records in DNS and
> predictable host names. Each Pod has its own unique DNS entry following the
> pattern: {PetSetName}-{PodIndex}.{ServiceName} [5]. Here's an example of
> an Infinispan PetSet deployed on my local cluster [6]. As you can see we
> have all domain names and IPs from a single DNS query.
> > • Maybe we could perform discovery using this mechanism? I'm aware
> of DNS discovery implemented in KUBE_PING [7][8] but the code looks trivial
> [9] so maybe it should be implemented inside JGroups? @Bela - WDYT?
> >     • PetSets do not integrate well with the OpenShift 'new-app' command.
> In other words, our users will need to use the provided yaml (or json) files to
> create an Infinispan cluster. It's not a show-stopper but it's a bit less
> convenient than 'oc new-app'.
> >     • Since PetSets are alpha resources they need to be considered a
> secondary way to deploy Infinispan on Kubernetes and OpenShift.
> > • Finally, the persistent volumes - since a Pod always gets the
> same volume, it would be safe to use any file-based cache store.
> > If you'd like to play with PetSets on your local environment, here are
> necessary yaml files [10].
>
> PetSets are still in extremely early stages and I am part of a working
> group that has recently formed to cover the introduction of this within the
> products, including driving any necessary changes back upstream. There
> will be Cloud Enablement involvement in this effort and any adoption of
> petsets will likely go through the existing project.
>
> Kev
>
> --
> JBoss by Red Hat
>
>
Re: [infinispan-dev] Infinispan and OpenShift/Kubernetes PetSets
by Bela Ban
If we add a DNS discovery protocol, it would only be another discovery
protocol among many, and customers can choose which one to use.
I'm also thinking of adding the ability to JGroups to use multiple
discovery protocols in the same stack and combine their result sets into
one. Not sure though if it makes sense to use KUBE_PING and DNS in the
same stack...
On 20/08/16 00:04, Rob Cernich wrote:
> A couple of things...
>
> re. volumes:
> We also need to consider the mounting behavior for scale down scenarios
> and for overage scenarios when doing upgrades. For the latter,
> OpenShift can spin up pods of the new version before the older version
> pods have terminated. This may mean that some volumes from the old pods
> are orphaned. We did see this when testing A-MQ during upgrades. With
> a single pod, the upgrade process caused the new version to have a new
> mount and the original mount was left orphaned (another upgrade would
> cause the newer pod to pick up the orphaned mount, leaving the new mount
> orphaned). I believe we worked around this by specifying an overage of
> 0% during upgrades. This ensured the new pods would pick up the volumes
> left behind by the old pods. (Actually, we were using subdirectories in
> the mount, since all pods shared the same volume.)
>
> re. dns:
> DNS should work fine as-is, but there are a couple things that you need
> to consider.
> 1. Service endpoints are only available in DNS after the pod becomes
> ready (SVC records on the service name). Because infinispan attaches
> itself to the cluster, this meant pods were all started as a cluster of
> one, then merged once they noticed the other pods. This had a
> significant impact on startup. Since then, OpenShift has added the
> ability to query the endpoints associated with a service as soon as the
> pod is created, which would allow initialization to work correctly. To
> make this work, we'd have to change the form of the DNS query to pick up
> the service endpoints (I forget the naming scheme).
>
> Another thing to keep in mind is that looking up pods by labels allows
> any pod with the specified label to be added to the cluster. I'm not
> sure of a use case for this, but it would allow other deployments to be
> included in the cluster. (You could also argue that the service is the
> authority for this and any pod with said label would be added as a
> service endpoint, thus achieving the same behavior...probably more
> simply too.)
>
> Lastly, DNS was a little flaky when we first implemented this, which was
> part of the reason we went straight to kubernetes. Users were using
> dnsmasq with wildcards that worked well for routes, but ended up routing
> services to the router ip instead of pod ip. Needless to say, there
> were a lot of complications trying to use DNS and debug user problems
> with service resolution.
>
> Hope that helps,
> Rob
>
> ------------------------------------------------------------------------
>
> Hey Bela!
>
> No no, the resolution can be done with pure JDK.
>
> Thanks
> Sebastian
>
> On Fri, Aug 19, 2016 at 11:18 AM, Bela Ban <bban(a)redhat.com> wrote:
>
> Hi Sebastian
>
> the usual restrictions apply: if DNS discovery depends on
> external libs, then it should be hosted in jgroups-extras,
> otherwise we can add it to JGroups itself.
>
> On 19/08/16 11:00, Sebastian Laskawiec wrote:
>
> Hey!
>
>    I've been playing with Kubernetes PetSets [1] for a while and I'd like
>    to share some thoughts. Before I dig in, let me give you some PetSets
>    highlights:
>
>      * PetSets are alpha resources for managing stateful apps in
>        Kubernetes 1.3 (and OpenShift Origin 1.3).
>      * Since this is an alpha resource, there are no guarantees about
>        backwards compatibility. Alpha resources can also be disabled in
>        some public cloud providers (you can control which API versions
>        are accessible [2]).
>      * PetSets allow starting pods in sequence (not relevant for us, but
>        this is a killer feature for master-slave systems).
>      * Each Pod has its own unique entry in DNS, which makes discovery
>        very simple (I'll dig into that a bit later).
>      * Volumes are always mounted to the same Pods, which is very
>        important in Cache Store scenarios when we restart pods (e.g.
>        Rolling Upgrades [3]).
>
>    Thoughts and ideas after spending some time playing with this feature:
>
>      * PetSets make discovery a lot easier. It's a combination of two
>        things - Headless Services [4], which create multiple A records in
>        DNS, and predictable host names. Each Pod has its own unique DNS
>        entry following the pattern {PetSetName}-{PodIndex}.{ServiceName} [5].
>        Here's an example of an Infinispan PetSet deployed on my local
>        cluster [6]. As you can see, we have all domain names and IPs from
>        a single DNS query.
>      * Maybe we could perform discovery using this mechanism? I'm aware
>        of DNS discovery implemented in KUBE_PING [7][8] but the code looks
>        trivial [9], so maybe it should be implemented inside JGroups?
>        @Bela - WDYT?
>      * PetSets do not integrate well with the OpenShift 'new-app' command.
>        In other words, our users will need to use the provided yaml (or
>        json) files to create an Infinispan cluster. It's not a show-stopper
>        but it's a bit less convenient than 'oc new-app'.
>      * Since PetSets are alpha resources, they need to be considered a
>        secondary way to deploy Infinispan on Kubernetes and OpenShift.
>      * Finally, the persistent volumes - since a Pod always gets the same
>        volume, it would be safe to use any file-based cache store.
>
>    If you'd like to play with PetSets on your local environment, here are
>    the necessary yaml files [10].
>
>    Thanks
>    Sebastian
>
>
>    [1] http://kubernetes.io/docs/user-guide/petset/
>    [2] For checking which APIs are accessible, use 'kubectl api-versions'
>    [3] http://infinispan.org/docs/stable/user_guide/user_guide.html#_Rolling_cha...
>    [4] http://kubernetes.io/docs/user-guide/services/#headless-services
>    [5] http://kubernetes.io/docs/user-guide/petset/#peer-discovery
>    [6] https://gist.github.com/slaskawi/0866e63a39276f8ab66376229716a676
>    [7] https://github.com/jboss-openshift/openshift-ping/tree/master/dns
>    [8] https://github.com/jgroups-extras/jgroups-kubernetes/tree/master/dns
>    [9] http://stackoverflow.com/a/12405896/562699
>    [10] You might need to adjust ImageStream.
>         https://gist.github.com/slaskawi/7cffb5588dabb770f654557579c5f2d0
>
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
>
>
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Re: [infinispan-dev] Infinispan and OpenShift/Kubernetes PetSets
by Sebastian Laskawiec
Hey Rob!
Thanks a lot for clarification!
More comments inlined.
Thanks
Sebastian
On Sat, Aug 20, 2016 at 12:04 AM, Rob Cernich <rcernich(a)redhat.com> wrote:
> A couple of things...
>
> re. volumes:
> We also need to consider the mounting behavior for scale down scenarios
> and for overage scenarios when doing upgrades. For the latter, OpenShift
> can spin up pods of the new version before the older version pods have
> terminated. This may mean that some volumes from the old pods are
> orphaned. We did see this when testing A-MQ during upgrades. With a
> single pod, the upgrade process caused the new version to have a new mount
> and the original mount was left orphaned (another upgrade would cause the
> newer pod to pick up the orphaned mount, leaving the new mount orphaned).
> I believe we worked around this by specifying an overage of 0% during
> upgrades. This ensured the new pods would pick up the volumes left behind
> by the old pods. (Actually, we were using subdirectories in the mount,
> since all pods shared the same volume.)
>
>
I think PetSets try to address this kind of problem. According to the
manual page [11], the storage is linked to the Pod ordinal and hostname
and should be stable.
[11] http://kubernetes.io/docs/user-guide/petset/#when-to-use-pet-set
> re. dns:
> DNS should work fine as-is, but there are a couple things that you need to
> consider.
> 1. Service endpoints are only available in DNS after the pod becomes ready
> (SVC records on the service name). Because infinispan attaches itself to
> the cluster, this meant pods were all started as a cluster of one, then
> merged once they noticed the other pods. This had a significant impact on
> startup. Since then, OpenShift has added the ability to query the
> endpoints associated with a service as soon as the pod is created, which
> would allow initialization to work correctly. To make this work, we'd have
> to change the form of the DNS query to pick up the service endpoints (I
> forget the naming scheme).
>
Yes, I agree. Adding nodes one after another will have a significant impact
on cluster startup time. However, it should be safe to query the cluster
(and even put data into it) during a rebalance. So I would say: if a node is
up and the cluster is not damaged, we should treat it as ready.
NB - I proposed a HealthCheck API for Infinispan 9 (currently under
development) [12][13]. The overall cluster health can be in one of 3
statuses - GREEN (everything is fine), YELLOW (rebalance in progress), RED
(cluster not healthy). The Kubernetes/OpenShift readiness probe should check
if the status is GREEN or YELLOW. The HealthCheck API is attached to the WF
management API, so you can query it with curl or using the ispn_cli.sh script.
[12] https://github.com/infinispan/infinispan/wiki/Health-check-API
[13] https://github.com/infinispan/infinispan/pull/4499
> Another thing to keep in mind is that looking up pods by labels allows any
> pod with the specified label to be added to the cluster. I'm not sure of a
> use case for this, but it would allow other deployments to be included in
> the cluster. (You could also argue that the service is the authority for
> this and any pod with said label would be added as a service endpoint, thus
> achieving the same behavior...probably more simply too.)
>
I think this is a scenario where someone might try to attach Infinispan in
library mode (a dependency in a WAR file, for example) to the Hot Rod cluster.
Gustavo answered a question like this a while ago [14].
[14] https://developer.jboss.org/message/961568
> Lastly, DNS was a little flaky when we first implemented this, which was
> part of the reason we went straight to kubernetes. Users were using
> dnsmasq with wildcards that worked well for routes, but ended up routing
> services to the router ip instead of pod ip. Needless to say, there were a
> lot of complications trying to use DNS and debug user problems with service
> resolution.
>
I think a governing headless service [15] is required here (PetSets require
a service but considering how Infinispan works, it should be a headless
service in my opinion).
[15] http://kubernetes.io/docs/user-guide/services/#headless-services
>
>
> Hope that helps,
> Rob
>
> ------------------------------
>
> Hey Bela!
>
> No no, the resolution can be done with pure JDK.
>
> Thanks
> Sebastian
>
> On Fri, Aug 19, 2016 at 11:18 AM, Bela Ban <bban(a)redhat.com> wrote:
>
>> Hi Sebastian
>>
>> the usual restrictions apply: if DNS discovery depends on external libs,
>> then it should be hosted in jgroups-extras, otherwise we can add it to
>> JGroups itself.
>>
>> On 19/08/16 11:00, Sebastian Laskawiec wrote:
>>
>>> Hey!
>>>
>>> I've been playing with Kubernetes PetSets [1] for a while and I'd like
>>> to share some thoughts. Before I dig in, let me give you some PetSets
>>> highlights:
>>>
>>> * PetSets are alpha resources for managing stateful apps in Kubernetes
>>> 1.3 (and OpenShift Origin 1.3).
>>> * Since this is an alpha resource, there are no guarantees about
>>> backwards compatibility. Alpha resources can also be disabled in
>>> some public cloud providers (you can control which API versions are
>>> accessible [2]).
>>>   * PetSets allow starting pods in sequence (not relevant for us, but
>>> this is a killer feature for master-slave systems).
>>>   * Each Pod has its own unique entry in DNS, which makes discovery
>>> very simple (I'll dig into that a bit later)
>>> * Volumes are always mounted to the same Pods, which is very important
>>> in Cache Store scenarios when we restart pods (e.g. Rolling Upgrades
>>> [3]).
>>>
>>> Thoughts and ideas after spending some time playing with this feature:
>>>
>>> * PetSets make discovery a lot easier. It's a combination of two
>>> things - Headless Services [4] which create multiple A records in
>>>     DNS and predictable host names. Each Pod has its own unique DNS
>>>     entry following the pattern: {PetSetName}-{PodIndex}.{ServiceName} [5].
>>> Here's an example of an Infinispan PetSet deployed on my local
>>> cluster [6]. As you can see we have all domain names and IPs from a
>>> single DNS query.
>>> * Maybe we could perform discovery using this mechanism? I'm aware of
>>> DNS discovery implemented in KUBE_PING [7][8] but the code looks
>>>     trivial [9] so maybe it should be implemented inside JGroups? @Bela -
>>> WDYT?
>>>   * PetSets do not integrate well with the OpenShift 'new-app' command. In
>>>     other words, our users will need to use the provided yaml (or json)
>>>     files to create an Infinispan cluster. It's not a show-stopper but it's
>>> a bit less convenient than 'oc new-app'.
>>>   * Since PetSets are alpha resources they need to be considered a
>>> secondary way to deploy Infinispan on Kubernetes and OpenShift.
>>> * Finally, the persistent volumes - since a Pod always gets the same
>>> volume, it would be safe to use any file-based cache store.
>>>
>>> If you'd like to play with PetSets on your local environment, here are
>>> necessary yaml files [10].
>>>
>>> Thanks
>>> Sebastian
>>>
>>>
>>> [1] http://kubernetes.io/docs/user-guide/petset/
>>> [2] For checking which APIs are accessible, use 'kubectl api-versions'
>>> [3]
>>> http://infinispan.org/docs/stable/user_guide/user_guide.
>>> html#_Rolling_chapter
>>> [4] http://kubernetes.io/docs/user-guide/services/#headless-services
>>> [5] http://kubernetes.io/docs/user-guide/petset/#peer-discovery
>>> [6] https://gist.github.com/slaskawi/0866e63a39276f8ab66376229716a676
>>> [7] https://github.com/jboss-openshift/openshift-ping/tree/master/dns
>>> [8] https://github.com/jgroups-extras/jgroups-kubernetes/tree/master/dns
>>> [9] http://stackoverflow.com/a/12405896/562699
>>> [10] You might need to adjust ImageStream.
>>> https://gist.github.com/slaskawi/7cffb5588dabb770f654557579c5f2d0
>>>
>>
>> --
>> Bela Ban, JGroups lead (http://www.jgroups.org)
>>
>>
>
>
HTTP/2 Upgrade [0] ideas and thoughts
by Sebastian Laskawiec
Hey!
I started sketching some ideas about how an HTTP/2 client and the upgrade
procedure could work in a cloud environment. Before digging in, let me
explain some
basic concepts and facts:
- Kubernetes as well as OpenShift operate on a very simple architecture.
We have a group of Pods (Docker Containers), a Service which acts as a load
balancer and a Route (which acts as a proxy for serving requests to the
outside world).
- Communication between components deployed on the same
Kubernetes/OpenShift cluster looks like this: MyApp -> Service (target app)
-> one of the Pods
- Communication from the outside world looks like this: MyApp -> the
Internet -> Route (target app) -> Service (target app) -> one of the Pods
[1]
   - Currently Kubernetes/OpenShift Services use a round-robin strategy for
     load balancing; they can also use Client IP affinity or HTTP Cookies for
     session stickiness.
   - OpenShift Routes (or Kubernetes Ingresses) can support TLS. They can
     downgrade HTTPS to HTTP (in other words, terminate it), pass through an
     encrypted request without inspecting the content, or re-encrypt it with a
     different certificate [2].
   - HTTP/2 does not have an upgrade header. It uses TLS with ALPN to
negotiate which protocol should be used [3].
- HTTP/2 can support custom protocols (which allows writing custom
Frames, Settings and Error Codes) [4].
The initial idea of using HTTP 1.1/Upgrade (as I mentioned above, HTTP/2
doesn't have this concept; it uses ALPN) is to support Hot Rod clients from
outside the Kubernetes/OpenShift cluster. The client connects to a
random Pod (through a Route) using the HTTP protocol and upgrades the
connection to the Hot Rod protocol.
After thinking about it for a while, several things don't fit. A Hot Rod
client uses topology information to minimize the number of hops. When we
access the data using a Route (or Ingress) and a Service, we don't control
which Pod we're connecting to. Moreover, if we switch from HTTP to the Hot
Rod protocol (which is based on TCP), we lose the HTTP Headers which could
possibly be used for routing inside Kubernetes/OpenShift. Switching protocols
is also problematic since HTTP/2 does not support an upgrade header (as I
mentioned above - it uses ALPN). ALPN support needs to be implemented in the
Hot Rod Server (this is the only component which has enough data to say
which protocols are supported; OpenShift Routes or Kubernetes Ingresses
don't have this knowledge). This means that Routes and Services see only
encrypted traffic and won't be able to help us with routing.
So what can we do about it? There are a couple of ideas for solving those
problems.
The first one is to enhance the Hot Rod client to initialize a connection
pool. The client could periodically initialize a new connection and send a
PING operation. If the connection is already in the pool, close it;
otherwise add it to the pool. We can call it brute-force-cluster-discovery
:) It should work with any round-robin-like load balancer.
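As a very rough sketch of that idea (none of these types are the real Hot
Rod client classes - Conn is a hypothetical stand-in for whatever connection
abstraction the client ends up using):

import java.net.SocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of the brute-force-cluster-discovery loop described above.
public class BruteForceDiscovery {

   // Hypothetical connection abstraction, not the actual client API.
   interface Conn {
      boolean ping();                 // Hot Rod PING sent through the Route/Service
      SocketAddress serverAddress();  // which pod actually answered
      void close();
   }

   private final Map<SocketAddress, Conn> pool = new ConcurrentHashMap<>();

   // Called periodically: open one more connection through the load balancer
   // and keep it only if it reached a pod we haven't seen yet.
   void discoveryTick(Conn candidate) {
      if (!candidate.ping()) {
         candidate.close();
         return;
      }
      if (pool.putIfAbsent(candidate.serverAddress(), candidate) != null) {
         candidate.close();           // already known - drop the duplicate
      }
   }
}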
The second idea is to implement a fully-fledged HTTP/2 client for Hot Rod
(@Anton - I think you're working on that, aren't you?). We could use HTTP
Headers to control which Pod we are connecting to (this would require
adding some code to Kubernetes/OpenShift but it shouldn't be very hard).
After the TLS handshake we could use topology information and HTTP Headers
to initialize connections to all cluster members. In this scenario ALPN
won't be needed (since we will implement a separate server and client),
which is also firewall friendly (it operates on the HTTP/HTTPS ports).
Having in mind the rise of stateful apps in the cloud, we could also
propose a PetSets enhancement and donate extra code for supporting them from
the Ingress/Route perspective. The result should be similar to the previous
option (donating some code to Services/Ingresses and Routes).
The above are only some ideas about how all the pieces could work together.
I'll also discuss it with the OpenShift team - maybe they will give us some
more hints.
Think about it for a while and let me know if you have any ideas...
Thanks
Sebastian
[0] https://issues.jboss.org/browse/ISPN-6676
[1]
http://www.slideshare.net/SamuelTerburg/open-shift-enterprise-31-paas-on-...
[2]
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#...
[3] https://http2.github.io/http2-spec/#rfc.section.8.1.2.2
[4] https://http2.github.io/http2-spec/#rfc.section.5.5
Replication is not happening with Infinispan 8.2.2
by Sathish Kumarbt
Dear all,
I am upgrading Infinispan from version 6.0 to 8.2.2.
Replication between the nodes is not happening. Here is the Infinispan
config:
<infinispan>
<jgroups>
<stack-file name="configurationFile" path="config/jgroups.xml"/>
</jgroups>
<cache-container>
<transport cluster="x-cluster" stack="configurationFile" />
<replicated-cache name="transactional-type" mode="SYNC">
<transaction mode="NON_XA" locking="OPTIMISTIC"
transaction-manager-lookup="org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup"
auto-commit="true" />
<locking acquire-timeout="60000"/>
<expiration lifespan="43200000"/>
</replicated-cache>
</cache-container>
</infinispan>
JGroups configuration:
<!--
TCP based stack, with flow control and message bundling. This is
usually used when IP
multicasting cannot be used in a network, e.g. because it is disabled
(routers discard multicast).
Note that TCP.bind_addr and TCPPING.initial_hosts should be set,
possibly via system properties, e.g.
-Djgroups.bind_addr=192.168.5.2 and
-Djgroups.tcpping.initial_hosts=192.168.5.2[7800]".
author: Bela Ban
-->
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups
http://www.jgroups.org/schema/jgroups-3.6.xsd">
<TCP loopback="true"
bind_addr="${jgroups.tcp.address:127.0.0.1}"
bind_port="${jgroups.tcp.port:7800}"
recv_buf_size="${tcp.recv_buf_size:20M}"
send_buf_size="${tcp.send_buf_size:640K}"
discard_incompatible_packets="true"
max_bundle_size="64K"
max_bundle_timeout="30"
enable_bundling="true"
use_send_queues="true"
sock_conn_timeout="300"
timer_type="new"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="false"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="discard"/>
<!-- <TCP_NIO -->
<!-- bind_port="7800" -->
<!-- bind_interface="${jgroups.tcp_nio.bind_interface:bond0}"
-->
<!-- use_send_queues="true" -->
<!-- sock_conn_timeout="300" -->
<!-- reader_threads="3" -->
<!-- writer_threads="3" -->
<!-- processor_threads="0" -->
<!-- processor_minThreads="0" -->
<!-- processor_maxThreads="0" -->
<!-- processor_queueSize="100" -->
<!-- processor_keepAliveTime="9223372036854775807"/> -->
<TCPGOSSIP initial_hosts="${jgroups.tcpgossip.initial_hosts}"/>
<!-- <TCPPING async_discovery="true"
initial_hosts="${jgroups.tcpping.initial_hosts}"
port_range="2" timeout="3000" /> -->
<MERGE2 max_interval="30000" min_interval="10000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="3"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK
use_mcast_xmit="false"
retransmit_timeout="300,600,1200,2400,4800"
discard_delivered_msgs="false"/>
<UNICAST2 timeout="300,600,1200"
stable_interval="5000"
max_bytes="1m"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000"
max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000"
view_bundling="true"/>
<UFC max_credits="200k" min_threshold="0.20"/>
<MFC max_credits="200k" min_threshold="0.20"/>
<FRAG2 frag_size="60000"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
Regards,
Sathish.b.t
Style errors :(
by Galder Zamarreño
Hi all,
After integrating [1] I'm getting build errors such as:
[INFO] --- maven-checkstyle-plugin:2.17:checkstyle (checkstyle) @ infinispan-core ---
[INFO] Starting audit...
/home/g/0/infinispan/git/core/src/main/java/org/infinispan/marshall/core/internal/InternalExternalizerTable.java:55: error: Using the '.*' form of import should be avoided - org.infinispan.marshall.exts.*.
/home/g/0/infinispan/git/core/src/main/java/org/infinispan/marshall/core/ExternalizerTable.java:76: error: Using the '.*' form of import should be avoided - org.infinispan.marshall.exts.*.
/home/g/0/infinispan/git/core/src/test/java/org/infinispan/filter/CompositeKeyValueFilterConverter.java:11:1: error: Duplicate import to line 10 - org.infinispan.metadata.Metadata.
Even after installing the latest style for IntelliJ [2], reformatting InternalExternalizerTable.java won't fix those errors.
So, what do we do? :(
Cheers,
[1] https://github.com/infinispan/infinispan/commit/313b19301055c6267c6f2ea90...
[2] https://github.com/infinispan/infinispan/blob/master/ide-settings/intelli...
--
Galder Zamarreño
Infinispan, Red Hat
Hot Rod Multi-get with versions?
by Sanne Grinovero
Hi all,
I'm having lots of fun learning to use the Hot Rod clients.
One thing that is becoming very clear to me is that getVersioned(k)
is much more useful than a regular get, as anything I'm doing turns
out to ultimately need a version once it goes beyond a trivial hello
world test.
In certain cases I can optimise requests by using the very useful
getAll(keyset) method.
However, I don't see how I can use getAll while still reading versioned entries?
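To make the gap concrete, here is roughly what I'm juggling (RemoteCache
method names from memory, so treat this as a sketch):

import java.util.Map;
import java.util.Set;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.VersionedValue;

public class VersionedMultiGet {
   static void example(RemoteCache<String, String> cache, Set<String> keys) {
      // Per-key read that carries the version - this is what I actually need:
      VersionedValue<String> one = cache.getVersioned("some-key");

      // Bulk read - great for cutting round trips, but the versions are lost:
      Map<String, String> many = cache.getAll(keys);

      // What I'm after would be a bulk variant that returns versioned
      // values; I don't see such a method on RemoteCache.
   }
}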
Thanks,
Sanne
Configuration management and Docker/Kubernetes/OpenShift
by Sebastian Laskawiec
Hey!
I'm working on configuration management in a cloud environment [1]. I put
some findings in the ticket [1], but let me tell you more about the options I
investigated:
- Kubernetes/OpenShift ConfigMaps [2][3]
- Those structures are specially designed to hold configuration data
in any format (yaml, json, properties, xml, doesn't matter).
   - The configuration can be mounted into a pod as a directory
     (technically a volume). We cannot mount it as a single file. Having said
     that, we would need to either store the cloud.xml file separately
     (standalone/cloud/configuration?) or ask users to build the
     configuration from the whole standalone/configuration directory. Both
     options are valid in my opinion.
- OpenShift S2I [4] builder
- S2I builder takes a git repository and a Docker image and combines
those two together.
   - We could store a cloud.xml file inside a git repository and modify
     our Infinispan Docker image to support S2I scripts (if cloud.xml is
     detected in a git repository, replace the default configuration).
   - Unfortunately it's an OpenShift-specific thing.
- Extend the Infinispan Docker image
- We could ask users to extend our Docker image and put their
specific configuration there
   - The biggest advantage - it will work regardless of the environment
(pure Docker, Kubernetes, OpenShift, doesn't matter)
All options require restarting pods to update the configuration (remember, pods
were designed to be immutable).
I think we should support 2 options - ConfigMaps for Kubernetes and
OpenShift, and extending our Docker image for all other use cases (because
this option gives the most flexibility).
What do you think?
Thanks
Sebastian
[1] https://issues.jboss.org/browse/ISPN-6675
[2] http://kubernetes.io/docs/user-guide/configmap/
[3] https://docs.openshift.org/latest/dev_guide/configmaps.html
[4] https://github.com/openshift/source-to-image