Implications of adding put with metadata (ISPN-2281)
by Galder Zamarreño
Hi all,
Looking at the pull req for ISPN-2281, I'm adding methods like this to AdvancedCache:
V put(K key, V value, Metadata metadata);
Now, I think that adding this should deprecate these methods:
V put(K key, V value, long lifespan, TimeUnit unit);
V put(K key, V value, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit);
But these two are located in BasicCache, and if we end up removing them eventually, it'd mean that either:
a) users need to use advanced cache even for adding lifespan
b) put(k, v, metadata) gets promoted to BasicCache
I don't like either option, to be honest. BasicCache might be implemented by users or the AS guys, so until we have JDK 8 default methods (in the year 2048 :p), promoting the method would break existing clients; and I'm not happy about forcing people to get hold of AdvancedCache for use cases where they didn't need it before.
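To make the dilemma concrete, here's a minimal sketch (the names and fields are assumptions for illustration, not the final API) of how a Metadata carrier could subsume the lifespan/maxIdle overloads, so that the old BasicCache methods could simply delegate to put(k, v, metadata) instead of being removed outright:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a Metadata carrier subsuming the lifespan/maxIdle
// overloads. The names (Metadata, lifespanMillis, maxIdleMillis) mirror the
// proposal but are assumptions, not the actual Infinispan types.
final class Metadata {
    final long lifespanMillis;  // -1 = immortal
    final long maxIdleMillis;   // -1 = no max idle

    Metadata(long lifespanMillis, long maxIdleMillis) {
        this.lifespanMillis = lifespanMillis;
        this.maxIdleMillis = maxIdleMillis;
    }

    // The old-style overload put(k, v, lifespan, unit) can be expressed as a
    // Metadata instance, so it can delegate to put(k, v, metadata) without
    // forcing callers onto AdvancedCache.
    static Metadata fromLifespan(long lifespan, TimeUnit unit) {
        return new Metadata(unit.toMillis(lifespan), -1L);
    }
}
```

This would let the BasicCache overloads survive as thin delegating wrappers during a deprecation period.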
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 8 months
AdvancedCache.put with Metadata parameter
by Galder Zamarreño
Hi all,
As mentioned in http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, in parallel to the switch to Equivalent* collections, I was also working on being able to pass metadata into Infinispan caches. This is done to better support storing custom metadata in Infinispan without the need for extra wrappers. So, the idea is that InternalCacheEntry instances will have a reference to this Metadata.
One piece of that metadata is the version, which I've been using as a test bed to see whether clients could successfully pass version information via metadata. As you already know, Hot Rod requires version information to be stored. Before, this was stored in a class called CacheValue alongside the value itself, but with the work I've done in [1], it is passed via the new API I've added in [2].
So, I'd like to get some thoughts on this new API. I hope that with these new put/replace versions, we can get rid of the nightmare that is all the other put/replace overloads taking lifespan and/or maxIdle information. In the end, I think there should be two basic puts:
- put(K, V)
- put(K, V, Metadata)
And their equivalents.
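To illustrate the shape of those two basic puts, here's a toy sketch (Meta and MiniCache are illustrative stand-ins I've made up for this example, not the actual Infinispan types or the proposed Metadata interface):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the proposed API shape: put(K, V) and put(K, V, Metadata).
// All names here are illustrative assumptions, not real Infinispan classes.
final class Meta {
    final long lifespanMillis;  // -1 = immortal
    final long version;         // e.g. the Hot Rod version previously held in CacheValue

    private Meta(long lifespanMillis, long version) {
        this.lifespanMillis = lifespanMillis;
        this.version = version;
    }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private long lifespanMillis = -1L;
        private long version = 0L;

        Builder lifespan(long d, TimeUnit u) { this.lifespanMillis = u.toMillis(d); return this; }
        Builder version(long v) { this.version = v; return this; }
        Meta build() { return new Meta(lifespanMillis, version); }
    }
}

// The two basic puts, on a toy in-memory cache: the entry carries its
// metadata directly, so no extra wrapper class (like CacheValue) is needed.
final class MiniCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final Map<K, Meta> meta = new HashMap<>();

    V put(K key, V value) { return put(key, value, Meta.builder().build()); }

    V put(K key, V value, Meta m) {
        meta.put(key, m);
        return store.put(key, value);
    }

    Meta metadataOf(K key) { return meta.get(key); }
}
```

The point of the sketch: every timing/version concern lives in one Metadata argument, so the overload explosion collapses into exactly two methods.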
IMPORTANT NOTE 1: The implementation details are bound to change, because the entire Metadata needs to be stored in InternalCacheEntry, not just the version, lifespan, etc. I'll further develop the implementation once I get into adding more metadata, i.e. when working on interoperability with REST. So, don't pay too much attention to the implementation itself; focus on the AdvancedCache API and let's refine that.
IMPORTANT NOTE 2: The interoperability work in the commit in [1] is WIP, so please let's avoid discussing it in this email thread. Once I have a more final version I'll send an email about it.
Apart from working on enhancements to the API, I'm now carrying on with the interoperability work, aiming to have an initial version of the Embedded <-> Hot Rod interoperability as a first step. Once that's in, it can be released to get early feedback while the remaining interoperability modes are developed.
Cheers,
[1] https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d...
[2] https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d...
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
5.3.0.Beta1 next week
by Mircea Markus
Hi,
Galder has just issued a rather large pull request on "ISPN-2281 Initial Embedded and Hot Rod compatibility".
It would be great to have this in soon, so let's postpone the release until it makes it in: the new target date is 1st May.
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
Re: [infinispan-dev] SIA ok to start the JDG Hotrod C Client
by Mircea Markus
Hi Eduardo,
GREAT news!
>> Hi all,
>> customer SIA (http://sia.eu) just gave us the ok to start the project.
>> We are going to have the kickoff on Monday, 13th of May with 5 days to cover first tasks in the attached list.
>> Samuele will be part of this project sharing the effort with the customer.
>> I'll update you all after the k.o.
>>
>> Do we have any news regarding the internal resource (MRG Red Hat developer) that could join the team and help on this project?
It will be Cliff (CC'd) who will lead this effort. I'm not sure he'd start that early, but please keep him involved in the discussions/technical decisions.
Who's going to develop this BTW?
Also I guess this will be hosted/integrated as a community project? If so I've created a repo for it here: https://github.com/infinispan/cpp-client
>> Did you think about the best choice between C or C++ client?
C++
>>
>> In the meantime they are asking me for a list of requirements to prepare the environment (hw, sw,...). Do you see any specific needs?
I've updated the JIRA with the SW requirements: https://issues.jboss.org/browse/ISPN-470
>>
>> Thanks
>> Edoardo
>>
>
> <SIA_WBS-JDG-CppClient.ods>
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
Re: [infinispan-dev] XSite synchronous replication
by Mircea Markus
Thanks Bela.
On 23 Apr 2013, at 16:27, Bela Ban wrote:
> Erik and I had a call and concluded that
> - the regular thread pool should have a queue enabled
Is that something you plan to do in the JGroups sample TCP configuration Erik mentioned, or just something we should recommend for the x-site bridge in particular?
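For reference, a minimal sketch of what enabling that queue could look like in a JGroups 3.2.x-style TCP stack (attribute names follow the sample TCP configuration; the values are illustrative, not a recommendation):

```xml
<!-- Illustrative fragment only: enable queuing on the regular thread pool
     while leaving the OOB pool unqueued, per the call's conclusion. -->
<TCP bind_port="7800"
     thread_pool.enabled="true"
     thread_pool.queue_enabled="true"
     thread_pool.queue_max_size="10000"
     oob_thread_pool.enabled="true"
     oob_thread_pool.queue_enabled="false" />
```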
> - for sync replication between sites, RPCs are *not* tagged as OOB, but they should ! Mircea, any idea why this deviates from the default (local) replication where sync RPCs are OOB ?
It shouldn't: https://issues.jboss.org/browse/ISPN-3043
>
> On 4/22/13 11:29 PM, Erik Salter (esalter) wrote:
>> Hi guys,
>>
>> While we wait for the threading model to change in ISPN 5.3, I was doing
>> a deep-dive into the existing xsite implementation, and I noticed that
>> all messages originating from the bridge use the regular/default/in-band
>> thread pool, even those that are marked as synchronous in ISPN.
>>
>> Ex:
>>
>> 2013-05-23 13:03:14,153 TRACE [org.jgroups.protocols.TCP]
>> (Incoming-2,erm-cluster,adht1-12627(DC1)) sending msg to
>> _bdht5-37320:DC2, src=_adht1-12627:DC1, headers are RequestCorrelator:
>> id=200, type=REQ, id=146, rsp_expected=true, RELAY2: DATA
>> [dest=SiteMaster(DC2), sender=adht5-23034:DC1], UNICAST2: DATA, seqno=4,
>> conn_id=5, TCP: [channel_name=erm-bridge]
>>
>> 2013-05-23 13:03:14,153 TRACE [org.jgroups.protocols.TCP]
>> (Incoming-2,erm-cluster,adht1-12627(DC1)) dest=10.30.16.134:44572 (1269
>> bytes)
>>
>> 2013-05-23 13:03:14,164 TRACE [org.jgroups.protocols.TCP]
>> (OOB-9,erm-bridge,_adht1-12627:DC1) received [dst: _adht1-12627:DC1,
>> src: _bdht5-37320:DC2 (4 headers), size=4 bytes, flags=OOB|DONT_BUNDLE],
>> headers are RequestCorrelator: id=200, type=RSP, id=146,
>> rsp_expected=false, RELAY2: DATA [dest=adht5-23034:DC1,
>> sender=SiteMaster(DC2)], UNICAST2: DATA, seqno=2, conn_id=6, TCP:
>> [channel_name=erm-bridge]
>>
>> Shouldn’t this message from the bridge end to the remote site use the
>> OOB thread pool, since a response is expected?
>>
>> I ask because in JGroups 3.2.x, the sample TCP configuration shows that
>> the default thread pool has queuing disabled:
>> thread_pool.queue_enabled="false". If I enable queuing for my async
>> replication use case, my performance for sync very much degrades. But I
>> can easily flood/abuse the incoming thread pool if I disable queuing –
>> i.e. messages get dropped (XSite replication, internal JGroups
>> communication).
>>
>> Thanks,
>>
>> Erik Salter
>>
>> Technical Leader I
>>
>> Cisco Systems, SPVTG
>>
>> (404) 317-0693
>>
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
ISPN-263 and handling partitions
by Manik Surtani
Guys,
So this is what I have in mind for this, looking for opinions.
1. We write a SplitBrainListener which is registered when the channel connects. The aim of this listener is to identify when we have a partition. This can be identified when a view change is detected, and the new view is significantly smaller than the old view. Easier to detect for large clusters, but smaller clusters will be harder - trying to decide between a node leaving vs a partition. (Any better ideas here?)
2. The SBL flips a switch in an interceptor (SplitBrainHandlerInterceptor?) which switches the node to be read-only (reject invocations that change the state of the local node) if it is in the smaller partition (newView.size < oldView.size / 2). Only works reliably for odd-numbered cluster sizes, and the issues with small clusters seen in (1) will affect here as well.
3. The SBL can flip the switch in the interceptor back to normal operation once a MergeView is detected.
It's nowhere near perfect, but at least it means we can recommend enabling this and setting up an odd number of nodes, with a cluster size of at least N, if you want to reduce inconsistency in your grid during partitions.
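A minimal sketch of the heuristic in points 1 to 3 (the SplitBrainListener/interceptor names follow the email, but this class and its methods are illustrative, not actual Infinispan code):

```java
import java.util.List;

// Hedged sketch: treat a view change as a partition when the new view is
// the smaller half of the old one, flip the node read-only while in the
// minority (point 2), and flip back on a merge view (point 3).
final class SplitBrainGuard {
    private volatile boolean readOnly = false;

    void onViewChange(List<String> oldView, List<String> newView) {
        // Integer-safe form of "newView.size < oldView.size / 2":
        // avoids truncation problems for odd-sized clusters.
        boolean minority = newView.size() * 2 < oldView.size();
        readOnly = minority;
    }

    void onMergeView() { readOnly = false; }

    // The interceptor would consult this to reject state-changing invocations.
    boolean rejectWrites() { return readOnly; }
}
```

Note how the odd-cluster recommendation falls out of the arithmetic: in a 5-node cluster, a 2-node partition (2*2 < 5) goes read-only while the 3-node side (3*2 >= 5) stays writable, and no split can ever leave both sides writable.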
Is this even useful?
Bela, is there a more reliable mechanism to detect a split in (1)?
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
Infinispan Query tests development: focus!
by Sanne Grinovero
Hi all,
I need your advice to settle on the most suitable level of coupling between the tests in Query and the Lucene API, and other behaviour Query is not responsible for.
Recently the number of tests added to Query has significantly increased; this is very welcome as it was highly needed, but I'm getting concerned that we might be going a bit too far with it, or actually spending energy in the wrong direction.
As most know, Infinispan Query is a module meant to integrate the Hibernate Search "engine" module with the events and transactions from Infinispan Core. It also happens to expose the end user to some Lucene API to define the queries.
Since this is a module integrating two other components, it must make some assumptions about them: it's not going to verify, for example, whether NBST behaves correctly and without blocking, though of course tests might fail if state transfer doesn't happen at all.
Most of our tests today are end-to-end but that's not necessarily the
way to go forward.
Let's take the directory provider configuration option: FSDirectory vs
the RAMDirectory: I think it makes perfect sense to have at least one
test setting up either of them:
- to verify the configuration properties are being applied as expected
- to see how different nodes could interact using a shared (disk
based) directory
But I don't think it makes sense to verify that all the exotic queries we support work equally well on both: that's a waste of time and will make our test suite unnecessarily complex to maintain in the long term. One directory implementation is enough to test against, and actually we might not even want to test all Lucene queries (we don't), as long as we have some interesting examples.
Of course in the scope of the Infinispan Directory such a test makes a
lot of sense! But that's a different module, with completely different
purpose and level of test needs.
My concern is mainly about unnecessary duplication: you are all very welcome to contribute to Hibernate Search as much as I contribute to Infinispan, to make sure all corner cases are properly covered.
The bottom-line warning: this summer we'll be moving to Lucene 4. APIs will change significantly in the Lucene area, but only minimal changes (hopefully) will be exposed through the Search integration API. You'd better delegate this complexity to the isolated component as much as possible! I can help with some 50 tests; I will not help rewriting a thousand tests, especially if they are duplicates of other tests I've already been working on in a different workspace.
I don't mean to suggest any hard rule. It's definitely useful to have some working examples in the Query code base for people to read, and to verify the integration is working. Just be thoughtful about where you want a test to be added, and consider whether it wouldn't be better to spend time on performance, stress tests and especially on what happens during topology changes and interactions with CacheStores configured in different ways: all those things which are definitely not covered by Hibernate Search.
Cheers,
Sanne
CHM or CHMv8?
by Manik Surtani
Guys,
Based on some recent micro benchmarks I've been doing, I've seen:
MapStressTest configuration: capacity 100000, test running time 60 seconds
Testing mixed read/write performance with capacity 100,000, keys 300,000, concurrency level 32, threads 12, read:write ratio 0:1
Container CHM Ops/s 21,165,771.67 Gets/s 0.00 Puts/s 21,165,771.67 HitRatio 100.00 Size 262,682 stdDev 77,540.73
Container CHMV8 Ops/s 33,513,807.09 Gets/s 0.00 Puts/s 33,513,807.09 HitRatio 100.00 Size 262,682 stdDev 77,540.73
So under high concurrency (12 threads, on my workstation with 12 hardware threads, so all threads are always working), we see that Infinispan's CHMv8 implementation is over 50% faster (about 58%, per the numbers above) than JDK 6's CHM implementation when doing puts.
We use a fair number of CHMs all over Infinispan's codebase. By default, these are all JDK-provided CHMs. But we have the option to switch to our CHMv8 implementation by passing in -Dinfinispan.unsafe.allow_jdk8_chm=true.
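As a sketch of how that switch could look (illustrative only: the real factory lives inside Infinispan, and since CHMv8 ships with Infinispan rather than the JDK, both branches here fall back to the JDK map):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hedged sketch of a flag-driven map factory. The system property name comes
// from the email; the factory class and its fallback behaviour are assumptions.
final class MapFactory {
    static <K, V> ConcurrentMap<K, V> newMap() {
        if (Boolean.getBoolean("infinispan.unsafe.allow_jdk8_chm")) {
            // The real code would return Infinispan's bundled CHMv8
            // implementation here; this sketch has no access to it.
            return new ConcurrentHashMap<>();
        }
        // Default: the JDK-provided ConcurrentHashMap.
        return new ConcurrentHashMap<>();
    }
}
```

Making CHMv8 the default would amount to inverting that check, with the property becoming an opt-out rather than an opt-in.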
The question is, should this be the default? Thoughts, opinions?
- M
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid