Distribution across multiple sites
by Bela Ban
Tobias made me aware of a bug that can happen when we use RELAY and one
of the relay coordinators crashes: messages sent between the crash of
the relay coordinator and the promotion of the backup to full relay are
not relayed.
This can lead to inconsistencies; see [1] for details. If I implement
solution #1, the chances of this happening are vastly reduced.
I wanted to ask the bright folks on this list, though, whether you see a
solution that only involves Infinispan (rebalancing)?
Cheers,
[1] https://issues.jboss.org/browse/JGRP-1401
--
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat
Distributed Executors in write mode, need reference to Cache
by Sanne Grinovero
Highlighting this use case:
http://community.jboss.org/message/642300#642300
As I replied on the forum, I think we might need to improve the
DistributedExecutorService by giving the task access to some context
before it's executed remotely: a reference to the Cache it's being
executed on seems very useful, but the need might not be limited to the
Cache alone.
I guess that to make good use of it, someone might need access to just
about anything: CDI looks like a good fit?
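To make the idea a bit more concrete, here's a minimal sketch of what such a hook could look like; the CacheAware interface and the setEnvironment callback are hypothetical names for illustration, not an existing Infinispan API:

import java.io.Serializable;
import java.util.concurrent.Callable;
import org.infinispan.Cache;

// Hypothetical contract: before the task runs on a remote node, the
// distributed execution framework injects the Cache it operates on.
interface CacheAware<K, V> {
   void setEnvironment(Cache<K, V> cache);
}

// Example task combining the standard Callable with the hypothetical hook.
class LocalSizeTask implements Callable<Integer>, CacheAware<String, String>, Serializable {

   private transient Cache<String, String> cache;

   public void setEnvironment(Cache<String, String> cache) {
      this.cache = cache;
   }

   public Integer call() {
      // With a reference to the Cache, the task can work directly on the
      // data living on the node it was shipped to.
      return cache.size();
   }
}

The same injection point could then be generalized (e.g. via CDI) to hand the task more than just the Cache.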
DistributedExecutionCompletionService 2.0
by Vladimir Blagojevic
Hey,
One of our users rightfully asked for an extension of
DistributedExecutionCompletionService to include task submission to a
cluster of nodes - http://community.jboss.org/thread/175686?tstart=0
Galder and I debated the resulting
https://github.com/infinispan/infinispan/pull/722 and concluded
that we want to capture the added methods in a
DistributedCompletionService<V> which extends the JDK's
CompletionService<V>. The problem is that
DistributedCompletionService<V> is exactly the same as
DistributedExecutorService but without the generics twist, so we end up
with an essentially duplicate interface.
Now, ideally we could have DistributedExecutionCompletionService simply
implement both DistributedExecutorService and CompletionService, but the
compiler does not allow the generic
DistributedExecutionCompletionService to implement the non-generic
DistributedExecutorService when the two declare methods with the same
erasure.
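To illustrate the clash with stripped-down stand-ins for the two interfaces (the real ones declare more methods; this snippet intentionally does not compile):

import java.util.concurrent.Callable;
import java.util.concurrent.Future;

// Stand-in for DistributedExecutorService: type parameter per method.
interface DistExec {
   <T> Future<T> submit(Callable<T> task);
}

// Stand-in for java.util.concurrent.CompletionService<V>: type
// parameter fixed at the interface level.
interface Completion<V> {
   Future<V> submit(Callable<V> task);
}

// javac rejects this: after erasure both interfaces declare
// submit(Callable), yet a single method cannot override both the
// generic and the non-generic declaration ("name clash" error).
class Both<V> implements DistExec, Completion<V> {
   public <T> Future<T> submit(Callable<T> task) { return null; }
}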
Any suggestions?
Vladimir
New partial replication protocol providing serializability guarantees in Infinispan
by Paolo Romano
Hi,
within the context of the Cloud-TM project we have developed a new
partial replication algorithm (corresponding to Infinispan's
distribution mode) that guarantees serializability in a very scalable
fashion. We have called the algorithm GMU, Genuine Multiversion Update
Serializability, and we've integrated it into Infinispan (5.0.0).
The source code is available on github:
http://github.com/cloudtm/infinispan-5.0.0.SERIALIZABLE
GMU's key features are:
1. To the best of our knowledge, GMU is the first distributed
multiversion-based partial replication protocol that does not rely on a
single global clock to determine consistent snapshots. Instead, the
protocol guarantees that committing a transaction T involves only the
nodes that maintain data accessed by T (a property known in the
literature as "genuineness"). In our opinion, this property is crucial
for achieving high scalability.
2. Read-only transactions are never aborted and do not need to be
validated at commit time, which makes them very fast. They are
guaranteed to observe a consistent snapshot of the data via a novel
mechanism based on vector clocks (a minimal sketch of this visibility
check appears below). Note that to achieve this result we integrated
into ISPN a multiversion concurrency control scheme, very similar to
the one used in PostgreSQL or JVSTM, that maintains, for each key,
multiple versions of the data item, each tagged with a scalar timestamp.
3. The consistency guarantees ensured by GMU are a variant of classic
1-Copy-Serializability (1CS) and, more precisely, "Extended Update
Serializability" (EUS). You can check the attached tech report for more
details on this, but, roughly speaking, US guarantees that update
transactions execute according to 1CS. Concurrent read-only
transactions, instead, may observe the updates generated by two
*non-conflicting* update transactions in different orders: for example,
if T1 and T2 update disjoint keys, one reader may see T1's update but
not T2's, while another reader sees T2's but not T1's.
In practice, we could not think of any realistic application for which
the schedules admitted by US would represent an issue, which leads us to
argue that US is, in practical settings, as good as 1CS, while bringing
the key advantage of allowing far more scalable (genuine)
implementations.
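To give a flavor of the mechanism in point 2, here is a minimal, illustrative sketch of a vector-clock-based visibility check over per-key version chains; all names and the exact rule are ours for exposition and do not reflect the actual GMU sources:

import java.util.List;

// Each committed version of a key carries the id of the node that
// committed it and the scalar timestamp that node assigned.
final class Version<V> {
   final V value;
   final int ownerNode;
   final long commitScalar;

   Version(V value, int ownerNode, long commitScalar) {
      this.value = value;
      this.ownerNode = ownerNode;
      this.commitScalar = commitScalar;
   }
}

// A read-only transaction fixes a vector clock (one entry per node) at
// start time and then never validates or aborts: every read just picks
// the newest version that falls within its snapshot.
final class SnapshotReader {
   private final long[] snapshotVC;

   SnapshotReader(long[] snapshotVC) {
      this.snapshotVC = snapshotVC;
   }

   // versions is ordered from oldest to newest.
   <V> V read(List<Version<V>> versions) {
      for (int i = versions.size() - 1; i >= 0; i--) {
         Version<V> v = versions.get(i);
         if (v.commitScalar <= snapshotVC[v.ownerNode]) {
            return v.value;
         }
      }
      return null; // no version visible in this snapshot
   }
}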
We have evaluated GMU's performance using up to 20 physical machines in
our in-house cluster and on 40 VMs in FutureGrid (we are currently
trying to use more VMs in FutureGrid to see whether we can make it scale
up to hundreds of machines... we'll keep you posted on this!), with the
YCSB (https://github.com/brianfrankcooper/YCSB/wiki) and TPC-C
benchmarks.
Our experimental results show that in low-conflict scenarios the
protocol performs as well as the existing Repeatable Read
implementation... and actually, in some scenarios, even slightly better,
since GMU spares the cost of saving the values read in the transactional
context, a cost the existing Repeatable Read implementation has to pay.
In high-contention scenarios GMU does pay a higher toll in terms of
aborts, but it still drastically outperforms classic non-genuine MVCC
implementations as the size of the system grows. Also, we have a bunch
of ideas on how to improve GMU's performance in high-contention
scenarios... but that's another story!
You can find the technical report at this URL:
http://www.inesc-id.pt/ficheiros/publicacoes/7549.pdf
Comments are more than welcome of course!
Cheers,
Paolo
--
Paolo Romano, PhD
Coordinator of the Cloud-TM ICT FP7 Project (www.cloudtm.eu)
Senior Researcher @ INESC-ID (www.inesc-id.pt)
Invited Professor @ Instituto Superior Tecnico (www.ist.utl.pt)
Rua Alves Redol, 9
1000-059, Lisbon Portugal
Tel. + 351 21 3100300
Fax + 351 21 3145843
Webpage http://www.gsd.inesc-id.pt/~romanop
inefficient usage of CacheLoader with REPL
by Sanne Grinovero
Hi all,
I just noticed, while writing a test using REPL, that when doing a put
operation the other nodes perform a CacheLoader LOAD as well.
Isn't that totally unnecessary?
In fact, assuming REPL guarantees consistency across all nodes, any
form of remote return value should be implicitly skipped, as we can
satisfy return values by looking at local copies.
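As a caller-side mitigation when the previous value isn't needed, one could try the existing flags below (both are real constants in org.infinispan.context.Flag); whether they actually suppress the LOAD on the replicating nodes is exactly the behavior in question, so treat this as a sketch:

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

// Sketch: hint that we don't care about put()'s previous-value return,
// so neither the local store nor remote copies should be consulted.
void putIgnoringPreviousValue(Cache<String, String> cache, String key, String value) {
   AdvancedCache<String, String> advanced = cache.getAdvancedCache();
   advanced.withFlags(Flag.SKIP_CACHE_LOAD, Flag.SKIP_REMOTE_LOOKUP).put(key, value);
}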
Sanne
Some flags are incompatible with implicit transactions
by Galder Zamarreño
Hi all,
Re: https://issues.jboss.org/browse/ISPN-1556
Re: https://github.com/infinispan/infinispan/pull/719/files#r288994
The fix I suggest works well with explicit transactions, but if we leave this as is, implicit txs might leak transactions. The reason is that if we allow a put with FAIL_SILENTLY to fail within an implicit tx, the tx will be neither committed nor removed from the transaction table.
But does FAIL_SILENTLY make sense with an implicit tx? Well, it doesn't. The point of FAIL_SILENTLY is to avoid a failure rolling back a tx and being noisy, so it implies that there's a bigger, external transaction within which this operation is called.
And it's not just FAIL_SILENTLY: there are other flags that do not make sense with implicit transactions, such as FORCE_WRITE_LOCK:
/**
* Forces a write lock, even if the invocation is a read operation. Useful when reading an entry to later update it
* within the same transaction, and is analogous in behavior and use case to a <tt>select ... for update ... </tt>
* SQL statement.
*/
So I think my fix is right here, but what we really need is a way to stop people from using certain flags with implicit transactions. Here's my (quickly drafted) list:
FORCE_WRITE_LOCK
FAIL_SILENTLY
PUT_FOR_EXTERNAL_READ
Any others?
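Something along these lines could enforce it; the guard below is a hypothetical sketch (the Flag constants are real, the check itself is not existing Infinispan code):

import java.util.EnumSet;
import java.util.Set;
import org.infinispan.CacheException;
import org.infinispan.context.Flag;

// Hypothetical guard: reject flags that only make sense inside an
// explicit, caller-managed transaction when the operation runs with an
// implicit, per-operation one.
final class ImplicitTxFlagGuard {

   private static final Set<Flag> EXPLICIT_TX_ONLY = EnumSet.of(
         Flag.FORCE_WRITE_LOCK,
         Flag.FAIL_SILENTLY,
         Flag.PUT_FOR_EXTERNAL_READ);

   static void check(Set<Flag> flags, boolean explicitTx) {
      if (explicitTx || flags == null) return;
      for (Flag flag : flags) {
         if (EXPLICIT_TX_ONLY.contains(flag)) {
            throw new CacheException("Flag " + flag +
                  " is not allowed with implicit transactions");
         }
      }
   }
}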
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
ISPN-1586 - inconsistent cache data in replication cluster with local (not shared) cache store
by Dan Berindei
Hi guys
For a little background, see the discussion at
https://issues.jboss.org/browse/ISPN-1586
How do you feel about discarding the contents of the cache store on all
cache (virtual) nodes except the first to start?
Configuring purgeOnStartup="true" is not OK in data grid scenarios, because
you'd lose all the data on a full restart. But loading entries from the
cache store of a node that started later is almost guaranteed to lead to
stale data. In some use cases that stale data might be acceptable, but in
most it probably isn't. So perhaps it makes sense to make this
configurable?
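For reference, this is roughly how purgeOnStartup looks through the programmatic configuration (method names approximate the 5.x fluent API of the time; treat the snippet, including the store location, as illustrative):

import org.infinispan.config.Configuration;
import org.infinispan.loaders.file.FileCacheStoreConfig;

// purgeOnStartup wipes the local store every time the node boots: fine
// for a plain cache, but in a data grid a full-cluster restart would
// lose all persisted state, which is why it's not an option here.
FileCacheStoreConfig store = new FileCacheStoreConfig();
store.setLocation("/tmp/ispn-store"); // hypothetical location
store.setPurgeOnStartup(true);

Configuration cfg = new Configuration().fluent()
      .loaders().addCacheLoader(store)
      .build();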
Cheers
Dan