Re: [infinispan-dev] [infinispan-internal] Unstable Cluster
by Bela Ban
Another note: in general, would it make sense to use shorter names?
E.g. instead of
** New view: [jdg-perf-01-60164|9] [jdg-perf-01-60164,
| jdg-perf-01-24167, jdg-perf-01-53841, jdg-perf-01-39558,
| jdg-perf-01-8977, jdg-perf-01-49115, jdg-perf-01-24774,
| jdg-perf-01-5758, jdg-perf-01-37137, jdg-perf-01-45330,
| jdg-perf-01-24793, jdg-perf-01-35602, jdg-perf-02-7751,
| jdg-perf-02-37056, jdg-perf-02-50381, jdg-perf-02-53449,
| jdg-perf-02-64954, jdg-perf-02-34066, jdg-perf-02-61515,
| jdg-perf-02-65045 ...]
we could have
** New view: [1|9] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, ...]
This makes reading logs *much* easier than having those long names.
If we wanted the host name to be part of a cluster name, we could use
the alphabet, e.g. A=jdg-perf-01, B=jdg-perf-02:
** New view: [A1|9] [A1, A2, A3, B4, B6, C2, C3, ...]
This is of course tied to a given host naming scheme. But oftentimes,
host names include numbers, so perhaps we could use a regexp to extract
that number and use it as a prefix to the name, e.g.
cluster-01, first instance: 1-1
cluster-01, 2nd instance: 1-2
cluster-02, first instance: 2-1
etc.
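The number-extraction idea above could be sketched roughly as follows. This is a hypothetical illustration, not JGroups code: the class name, the regexp, and the instance-counter argument are all assumptions; a real implementation would plug into JGroups' logical-name generation.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ShortName {
    // Extract the first run of digits from the host name and combine it
    // with an instance number, e.g. host "jdg-perf-02", instance 3 -> "2-3".
    private static final Pattern HOST_NUM = Pattern.compile("(\\d+)");

    static String shortName(String hostName, int instance) {
        Matcher m = HOST_NUM.matcher(hostName);
        // Fall back to the full host name if no number can be extracted.
        String prefix = m.find() ? String.valueOf(Integer.parseInt(m.group(1))) : hostName;
        return prefix + "-" + instance;
    }

    public static void main(String[] args) {
        System.out.println(shortName("jdg-perf-01", 1)); // 1-1
        System.out.println(shortName("jdg-perf-02", 2)); // 2-2
    }
}
```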
Thoughts?
On 3/4/13 8:43 AM, Radim Vansa wrote:
> Just a small sidenote: if you want to print full view (not just first 20 nodes and ellipsis after that), use -Dmax.list.print_size=cluster_size
>
> Radim
>
> ----- Original Message -----
> | From: "Shane Johnson" <shjohnso(a)redhat.com>
> | To: infinispan-internal(a)redhat.com
> | Sent: Friday, March 1, 2013 5:52:17 PM
> | Subject: Re: [infinispan-internal] Unstable Cluster
> |
> | The JGroups cluster appeared stable, at first. However, I did notice
> | that the logs looked a little bit different on one machine /
> | instance. I'm not sure if that means anything or not.
> |
> | Machine 1 / Instance 1-12
> |
> | ** New view: [jdg-perf-01-60164|9] [jdg-perf-01-60164,
> | jdg-perf-01-24167, jdg-perf-01-53841, jdg-perf-01-39558,
> | jdg-perf-01-8977, jdg-perf-01-49115, jdg-perf-01-24774,
> | jdg-perf-01-5758, jdg-perf-01-37137, jdg-perf-01-45330,
> | jdg-perf-01-24793, jdg-perf-01-35602, jdg-perf-02-7751,
> | jdg-perf-02-37056, jdg-perf-02-50381, jdg-perf-02-53449,
> | jdg-perf-02-64954, jdg-perf-02-34066, jdg-perf-02-61515,
> | jdg-perf-02-65045 ...]
--
Bela Ban, JGroups lead (http://www.jgroups.org)
11 years, 2 months
Some missing tests during testsuite run
by Anna Manukyan
Hi all,
during ER12 testing I've found out that there are some tests which were not included in the ISPN testsuite run.
This issue appeared both on our JDG-related jobs and, when I checked CloudBees for the community version runs, the same situation was there.
If the testsuite run Maven command includes the -Dtest=org/infinispan/**... parameter (for the corresponding module), then these tests are included in the run.
The consequence was that there were some failing tests which we didn't see during our previous test runs.
I've found out that the issue is that some of the test classes don't follow the naming convention for Maven (Test*.java || *Test.java || *TestCase.java). Example tests are the JdbcMixedCacheStoreTest2 & JdbcMixedCacheStoreVamTest2 classes.
So I've renamed and fixed the tests mentioned above, but I still need to find all tests that fall into the mentioned category and rename them so that all existing tests run properly (there are not many of them).
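For reference, the default Surefire include patterns match the convention above. An alternative to renaming would be to extend the includes explicitly; this is a hedged sketch (the extra *Test2.java pattern is hypothetical, and renaming is arguably the cleaner fix since it keeps the convention intact):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- the standard defaults -->
      <include>**/Test*.java</include>
      <include>**/*Test.java</include>
      <include>**/*TestCase.java</include>
      <!-- hypothetical: also pick up legacy names such as JdbcMixedCacheStoreTest2 -->
      <include>**/*Test2.java</include>
    </includes>
  </configuration>
</plugin>
```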
Best regards,
Anna.
Minutes Infinispan meeting in London
by Bela Ban
Mircea, Dan, Pedro, Sanne and I had a meeting in London this week on how
to use the new features of JGroups 3.3 in Infinispan 5.3; I've copied
the minutes from the wiki below.
London meeting
Bela, Pedro, Mircea, Dan, Sanne
Message bundling and OOB messages
In 3.3, all messages will be bundled, not just regular messages, but
also OOB messages. The way this works on the sender side is:
- A thread sending a message in the transport adds it to a queue
- There's one thread which dequeues messages and sends them as bundles
- It sends a message bundle if the max size has been reached, or
there are no more messages in the queue
- This means single messages are sent immediately, or we fill up a
bundle (in a few microseconds) and send it
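The sender-side loop described above can be sketched as a toy model. This is not the JGroups bundler implementation: the class, the queue of plain strings, and the non-blocking drainOnce() method are simplifications for illustration (the real bundler thread blocks on the queue and serializes messages onto the wire).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BundlerSketch {
    static final int MAX_BUNDLE = 4;  // max messages per bundle (toy value)

    final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<List<String>> sentBundles = new ArrayList<>();

    // Senders just enqueue; a single bundler thread does the actual sending.
    void send(String msg) { queue.add(msg); }

    // One pass of the bundler loop: drain the queue, flushing a bundle
    // when it is full or when the queue momentarily runs empty.
    void drainOnce() {
        String msg = queue.poll();
        if (msg == null)
            return;
        List<String> bundle = new ArrayList<>();
        do {
            bundle.add(msg);
            if (bundle.size() >= MAX_BUNDLE) {   // bundle full: flush it
                flush(bundle);
                bundle = new ArrayList<>();
            }
        } while ((msg = queue.poll()) != null);  // queue empty: stop draining
        if (!bundle.isEmpty())                   // single/leftover messages go out immediately
            flush(bundle);
    }

    void flush(List<String> bundle) { sentBundles.add(new ArrayList<>(bundle)); }
}
```

Note how this matches the description: a lone message is sent right away (a bundle of one), while a burst of messages fills bundles up to the max size.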
Impact on Infinispan:
- Use DONT_BUNDLE instead of OOB if you don't want to bundle messages
- However, even DONT_BUNDLE might get deprecated
- If we have 1 sender invoking sync RPCs, we don't need to set
DONT_BUNDLE anymore
- If we have multiple senders invoking sync RPCs, performance should
get better as RPCs and responses are bundled
- Since bundling will result in message *batches* on the receiver,
performance should increase in general
Message batching
Message bundles sent by a sender are received as message batches
(MessageBatch) by the receivers. When a batch is received, the batch
is passed up using up(MessageBatch).
Protocols can remove / replace / add messages in a batch and pass the
batch further up.
The advantage of a batch is that resources such as locks are acquired
only once for a batch of N messages rather than N times. Example: when
NAKACK2 receives a batch of 10 messages, it adds the 10 messages to
the receiver table in a bulk operation, which is more efficient than
doing this 10 times.
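The "acquire resources once per batch" point can be illustrated with a toy sketch (this is not the NAKACK2 code; the class, the TreeMap standing in for the receiver table, and the parallel seqno/message lists are assumptions for illustration):

```java
import java.util.List;
import java.util.TreeMap;

public class BatchReceiverSketch {
    // Stand-in for the receiver table: seqno -> message.
    private final TreeMap<Long, String> receiverTable = new TreeMap<>();
    private final Object lock = new Object();

    // N messages, but only ONE lock acquisition -- the point of batching.
    // Per-message delivery would acquire the lock N times instead.
    void receiveBatch(List<Long> seqnos, List<String> msgs) {
        synchronized (lock) {
            for (int i = 0; i < seqnos.size(); i++)
                receiverTable.put(seqnos.get(i), msgs.get(i));
        }
    }

    int size() { return receiverTable.size(); }
}
```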
Further optimizations on batching (probably 3.4):
- Remove similar ops, e.g. UNICAST3 acks for A:15, A:25 and A:35 can
be clubbed together into just ack(A:35)
- Merge similar headers, e.g. multicast messages 20-30 can be ordered
by seqno, and we simply send a range [20..30] and let the receiver
generate the headers on the fly
Async Invocation API (AIA)
JGroups only passes up messages to Infinispan, which then uses its own
thread pool to deliver them. E.g. based on Pedro's code for TO, we
could parallelize delivery based on the target keys of the
transaction. E.g. if we have tx1 modifying keys {A,B,C} and tx2
modifying keys {T,U}, then tx1 and tx2 can be run concurrently.
If tx1 and tx2 modify overlapping key sets, then tx2 would be queued
and executed *after* tx1, not taking up a thread from the pool,
reducing the chances of the thread pool maxing out and also
ensuring different threads are not going to contend on the locks
on same keys.
The implementation could be done in an interceptor fronting the
interceptor stack, which queues dependent TXs and - when ready to be
executed - sends them up the interceptor stack on a thread from the
internal pool.
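The scheduling rule described above (disjoint key sets run concurrently, overlapping ones queue) could be sketched like this. This is purely illustrative, not Infinispan code: the class name, the use of String keys, and the submit/queue API are all assumptions; completion handling and dequeuing are omitted.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.Set;

public class TxSchedulerSketch {
    private final List<Set<String>> running = new ArrayList<>(); // key sets of running txs
    private final Deque<Set<String>> queued = new ArrayDeque<>(); // deferred txs, FIFO

    // Returns true if the tx was dispatched to a pool thread,
    // false if it had to be queued behind a conflicting tx.
    boolean submit(Set<String> keys) {
        for (Set<String> r : running)
            if (!Collections.disjoint(r, keys)) { // overlapping key sets: defer,
                queued.add(keys);                 // don't occupy a pool thread
                return false;
            }
        running.add(keys);                        // disjoint: run concurrently
        return true;
    }
}
```

With the example from the text: tx1 on {A,B,C} and tx2 on {T,U} both dispatch immediately, while a third tx touching {B,X} would be queued behind tx1.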
Infinispan having its own thread pool means that JGroups threads will
not block anymore, e.g. trying to acquire a lock for a TX. The size of
those pools can therefore be reduced.
The advantage of AIA is that it's up to Infinispan, not JGroups, how
to deliver messages. JGroups delivers messages based on the order in
which they were sent by a sender (FIFO), whereas Infinispan can make
much more informed decisions as to how to deliver the messages.
Internal thread pool for JGroups
All JGroups internal messages use the internal thread pool (message
flag=INTERNAL). Not having to share the OOB pool with apps (such as
Infinispan) means that internal messages can always be processed, and
are not discarded or blocked, e.g. by a maxed out thread pool.
The internal pool can be switched off, and - if AIA is implemented in
Infinispan - the number of OOB and regular threads can be massively
reduced. The internal thread pool doesn't need to be big either.
UNICAST3
Successor to UNICAST and UNICAST2, best of both worlds. Acks single
messages quickly, so we have no first-msg-lost or last-msg-lost issues
anymore. Doesn't generate many acks though.
It is proposed to trigger an ACK only after a certain number of
messages rather than after every batch, to avoid ACKs for small batches.
https://issues.jboss.org/browse/JGRP-1594
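The proposed threshold-based ack policy could be sketched as follows. This is a hypothetical illustration of the idea, not the UNICAST3 implementation; the class, the threshold value, and the onReceive signature are assumptions.

```java
public class AckPolicySketch {
    static final int ACK_THRESHOLD = 10;  // toy value: ack every 10 messages

    private int sinceLastAck;
    private long highestSeqno;

    // Called when a batch arrives. Returns the seqno to ack,
    // or -1 if the ack should be suppressed for now.
    long onReceive(long seqno, int batchSize) {
        highestSeqno = Math.max(highestSeqno, seqno);
        sinceLastAck += batchSize;
        if (sinceLastAck >= ACK_THRESHOLD) {
            sinceLastAck = 0;
            return highestSeqno;  // one cumulative ack covers everything up to here
        }
        return -1;                // small batch: no ack yet
    }
}
```

The cumulative ack also captures the "clubbing" optimization mentioned earlier: acks for A:15, A:25 and A:35 collapse into a single ack(A:35).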
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Moving JCache annotation implementation and TCK run from infinispan-cdi to infinispan-jcache
by Galder Zamarreño
Hi all,
As part of the JSR-107 TCK, there's the possibility to run the optional JSR-107 annotation TCK.
Currently this is run as part of https://github.com/infinispan/infinispan/tree/master/cdi/tck-runner, but I'd like to integrate with the JSR-107 implementation TCK run integration project so that both TCKs are run from a single project:
https://github.com/galderz/infinispan/tree/t_2639/jcache/src/it/tck-runner
Btw, I've recently discovered the Maven Invoker plugin, which is ideal for running integration tests for a particular project, such as a TCK. The nice thing about it is that if you're building integration tests for project X, it will run them as part of the build for project X. So there's no need to have a separate project which you have to invoke manually (as we have now with the Lucene integration tests). For more info, check: http://maven.apache.org/plugins/maven-invoker-plugin/ - I discovered it looking at how Dagger was testing its CDI implementation.
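A minimal Invoker configuration could look roughly like this (a sketch only; the projects directory and goals would depend on how the TCK runner project is laid out):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-invoker-plugin</artifactId>
  <configuration>
    <!-- each subdirectory under src/it is a standalone Maven project -->
    <projectsDirectory>src/it</projectsDirectory>
    <cloneProjectsTo>${project.build.directory}/it</cloneProjectsTo>
    <goals>
      <goal>verify</goal>
    </goals>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>install</goal>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```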
Also, I've renamed the JSR-107 module to JCache because that's the name of the API, and in the future there could be other JSRs that deal with further improvements of JCache, so you don't really want to be tied to a particular JSR number.
The other thing I'd like to do is remove the JCache annotation support from https://github.com/infinispan/infinispan/tree/master/cdi/extension and put it in the JSR-107 implementation I'm finishing.
That way, all JCache/JSR-107 related stuff is under the same umbrella (both implementation and TCK run integration tests):
https://github.com/galderz/infinispan/tree/t_2639/jcache
This would be a trivial subtask of https://issues.jboss.org/browse/ISPN-2639
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
JGroups 3.3.0.Beta1
by Bela Ban
FYI,
added to Nexus.
This completes the implementation of message batching in all protocols,
and contains the complete Async Invocation API as well. Once we have an
implementation of async request handling in Infinispan, which runs
independent transactions in separate threads from an internal thread
pool, I expect a significant performance increase!
--
Bela Ban, JGroups lead (http://www.jgroups.org)