AtomicHashMap concurrent modifications in pessimistic mode
by Dan Berindei
Hi guys
I'm working on an intermittent failure in NodeMoveAPIPessimisticTest and I
think I've come across some underspecified behaviour in AtomicHashMap.
Say we have two transactions, tx1 and tx2, and they both work with the same
atomic map in a pessimistic cache:
1. tx1: am1 = AtomicMapLookup.get(cache, key)
2. tx2: am2 = AtomicMapLookup.get(cache, key)
3. tx1: am1.put(subkey1, value1) // locks the map
4. tx2: am2.get(subkey1) // returns null
5. tx1: commit // the map is now {subkey1=value1}
6. tx2: am2.put(subkey2, value2) // locks the map
7. tx2: commit // the map is now {subkey2=value2}
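For reference, here is a minimal sketch of that interleaving as a single-threaded
test, using JTA suspend/resume to alternate between the two transactions. It
assumes a transactional, pessimistic cache, uses AtomicMapLookup.getAtomicMap as
the lookup, and assumes the map already exists before either transaction starts
(so the lookups in steps 1 and 2 don't themselves lock the key); the key and
subkey names are illustrative.

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMap;
import org.infinispan.atomic.AtomicMapLookup;

public class AtomicMapInterleavingSketch {

   // Replays steps 1-7 above on one thread by suspending/resuming the transactions.
   static void reproduce(Cache<String, Object> cache) throws Exception {
      // Assumption: the atomic map was created and committed beforehand.
      AtomicMapLookup.getAtomicMap(cache, "key");

      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

      tm.begin();                                                  // step 1: tx1
      AtomicMap<String, String> am1 = AtomicMapLookup.getAtomicMap(cache, "key");
      Transaction tx1 = tm.suspend();

      tm.begin();                                                  // step 2: tx2
      AtomicMap<String, String> am2 = AtomicMapLookup.getAtomicMap(cache, "key");
      Transaction tx2 = tm.suspend();

      tm.resume(tx1);
      am1.put("subkey1", "value1");                                // step 3: locks the map
      tx1 = tm.suspend();

      tm.resume(tx2);
      am2.get("subkey1");                                          // step 4: returns null
      tx2 = tm.suspend();

      tm.resume(tx1);
      tm.commit();                                                 // step 5: map is {subkey1=value1}

      tm.resume(tx2);
      am2.put("subkey2", "value2");                                // step 6: locks the map
      tm.commit();                                                 // step 7: map is {subkey2=value2}
   }
}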
It's not clear to me from the AtomicMap/AtomicHashMap javadoc if this is ok
or if it's a bug...
Note that today the map is overwritten by tx2 even without step 4 ("tx2:
am2.get(subkey1)"). I'm pretty sure that's a bug, and I fixed it locally by
using the FORCE_WRITE_LOCK flag in AtomicHashMapProxy.getDeltaMapForWrite.
However, when the Tree API moves a node, it first checks for the existence
of the destination node, which means NodeMoveAPIPessimisticTest is still
failing. I'm not sure whether I should fix that by forcing a write lock for all
AtomicHashMap reads, for all TreeCache reads, or only in TreeCache.move().
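As context for the options above, here is a minimal sketch of what forcing the
write lock on a read looks like from the public API, assuming Flag.FORCE_WRITE_LOCK
is applied through the AdvancedCache; the actual local fix described above lives
inside AtomicHashMapProxy.getDeltaMapForWrite, and the class/method names here are
illustrative.

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class ForceWriteLockSketch {

   // Reading with FORCE_WRITE_LOCK makes the read acquire the same lock a write
   // would, so a concurrent transaction's read (step 4 above) waits for tx1's
   // commit instead of observing, and later clobbering, stale state.
   static Object readWithWriteLock(Cache<String, Object> cache, String key) {
      AdvancedCache<String, Object> advanced =
            cache.getAdvancedCache().withFlags(Flag.FORCE_WRITE_LOCK);
      return advanced.get(key);
   }
}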
Cheers
Dan
Re: [infinispan-dev] Hotrod 5.3.0.Beta1
by Mark Addy
Hi,
Doesn't work for me out of the box, here are the steps to reproduce:
I downloaded infinispan-5.3.0.Beta1-all.zip, extracted it and started the
Hot Rod server with this command:
./startServer.sh -r hotrod --port=11111 --host=0.0.0.0 --cache_config=/opt/temp-infinispan/infinispan-5.3.0.Beta1-all/etc/config-samples/distributed-udp.xml
My client has these properties on the classpath:
infinispan.client.hotrod.transport_factory = org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory
infinispan.client.hotrod.server_list = 0.0.0.0:11111
infinispan.client.hotrod.marshaller = org.infinispan.marshall.jboss.GenericJBossMarshaller
infinispan.client.hotrod.async_executor_factory = org.infinispan.client.hotrod.impl.async.DefaultAsyncExecutorFactory
infinispan.client.hotrod.default_executor_factory.pool_size = 1
infinispan.client.hotrod.default_executor_factory.queue_size = 10000
infinispan.client.hotrod.hash_function_impl.1 = org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV1
infinispan.client.hotrod.tcp_no_delay = true
infinispan.client.hotrod.ping_on_startup = true
infinispan.client.hotrod.request_balancing_strategy = org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy
infinispan.client.hotrod.key_size_estimate = 64
infinispan.client.hotrod.value_size_estimate = 512
infinispan.client.hotrod.force_return_values = true
maxActive = -1
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
timeBetweenEvictionRunsMillis = 120000
minEvictableIdleTimeMillis = 300000
testWhileIdle = true
minIdle = 1
The client then runs the following test:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotrodClient {

   public static void main(String[] args) throws Exception {
      // Picks up the Hot Rod client properties from the classpath
      RemoteCacheManager remoteCacheManager = new RemoteCacheManager();
      RemoteCache<String, String> myCache = remoteCacheManager.getCache();
      myCache.put("key", "test");
      System.out.println("get " + myCache.get("key"));
      remoteCacheManager.stop();
   }
}
Which fails to find the key:
2013-05-16 18:22:36,952 TRACE [RemoteCacheImpl] (main) For key(key)
returning null
get null
Thanks
Mark
Thanks for the replies, all sorted now. I went back through the
previous archives and found ISPN-2281 and the changes associated with it.
So to get Hot Rod working you must supply a custom Equivalence
implementation as an attribute of the dataContainer element for the cache.
I have created a wrapper around the enum
org.infinispan.util.ByteArrayEquivalence and placed it on the server
classpath so I could use it in the XML configuration.
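For anyone hitting the same issue, a rough sketch of the equivalent programmatic
configuration is below. It assumes the 5.3 ConfigurationBuilder exposes the
dataContainer() key/value equivalence setters introduced around ISPN-2281, so
treat the exact method names as approximate and prefer the configuration shipped
with the server distribution where possible.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.util.ByteArrayEquivalence;

public class ByteArrayEquivalenceConfigSketch {

   // Configures the data container to compare byte[] keys and values by content,
   // which is what Hot Rod needs since it stores keys as byte arrays.
   static Configuration hotRodFriendlyConfiguration() {
      return new ConfigurationBuilder()
            .dataContainer()
               .keyEquivalence(ByteArrayEquivalence.INSTANCE)
               .valueEquivalence(ByteArrayEquivalence.INSTANCE)
            .build();
   }
}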
>> ^ The thing is that you shouldn't need to do this. If you've downloaded the new Infinispan server download from (http://downloads.jboss.org/infinispan/5.3.0.Beta1/infinispan-server-5.3.0...), it ships with a default configuration that has this setting correctly set. (@Tristan, correct me if I'm wrong...)
> You are right Galder.
>> Hence, if you can explain how you're using the Hot Rod server, we can see if there's anything wrong here.
> Tristan
>
>
Re: [infinispan-dev] Zero-copy buffer transfers
by William Burns
Bela,
Are you thinking of something like Netty's composite byte buffer?
http://netty.io/4.0/guide/#architecture.5
On 5/23/13 2:09 PM, Manik Surtani wrote:
> Bela,
>
> We shouldn't need to wait for NIO2/JDK7 for this. We can do this in JDK6
as well, granted the impl may not be as good unless run on a Java7 VM.
Unfortunately, in the current NIO impl MulticastSockets (MulticastChannel)
are not supported, only DatagramChannels, so I couldn't implement
multicasting this way, only TCP.
> Do you have any notes on your designs somewhere?
Not really, but some JIRAs (see below). The main idea is that JGroups
passes a list of buffers directly to the socket, which then creates
packets by writing them in sequence (gathering writes). One of the
buffers will be the buffer passed by Infinispan to JGroups; JGroups then
doesn't copy it into one single larger buffer, but passes it directly to
the socket, avoiding a copy.
- https://issues.jboss.org/browse/JGRP-815
- https://issues.jboss.org/browse/JGRP-809
- https://issues.jboss.org/browse/JGRP-816
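To make the gathering-write idea concrete, here is a small self-contained NIO
sketch using plain JDK API (not JGroups code): the header buffer and the payload
buffer handed over by the application are written to the socket in sequence
without first being copied into one larger buffer. The host, port and buffer
contents are made up for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class GatheringWriteSketch {

   public static void main(String[] args) throws IOException {
      try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 7800))) {
         // Protocol headers built by the messaging layer (contents made up here).
         ByteBuffer headers = ByteBuffer.wrap(new byte[] { 0x00, 0x01, 0x02, 0x03 });
         // Payload buffer handed over by the application, used as-is.
         ByteBuffer payload = ByteBuffer.wrap("application payload".getBytes());
         // Gathering write: both buffers go onto the wire in sequence,
         // with no intermediate copy into a single larger buffer.
         channel.write(new ByteBuffer[] { headers, payload });
      }
   }
}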
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Zero-copy buffer transfers
by Manik Surtani
Bela,
We shouldn't need to wait for NIO2/JDK7 for this. We can do this in JDK6 as well, granted the impl may not be as good unless run on a Java7 VM.
Do you have any notes on your designs somewhere?
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
Github not feeling too well today
by Sanne Grinovero
Careful, we just noticed today that GitHub is not showing a consistent
list of open pull requests to all users.
Specifically, on Hibernate Search there are two pull requests open, but Hardy
doesn't see them in his UI (and it's not a browser problem).
If Hardy tries to reach the URLs directly, he gets a 404 on both Search pulls.
If Gunnar tries, he can reach one of them but gets a 404 on the other.
Both URLs work for me.
Hardy merged the second pull from the command line as usual, and I could
confirm that the state of the pull progressed correctly and that the
usual notifications were received. So nothing seems to be lost; just don't
trust the web UI too much.
Sanne
NPE with Cache.replace()
by Bela Ban
Can someone investigate why CacheImpl.replaceInternal() throws an NPE?
I can reproduce this every time. Using the latest JDG.
See the attached stack trace for details.
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Re: [infinispan-dev] How to get Grouper<T>#computeGroup(key) return value to map to physical Node?
by cotton-ben
I am playing with the Infinispan 5.3 quick-start package to exercise my usage
of the Grouping API. As we know, the quick-start package is made up of
AbstractNode.java, Node0.java, Node1.java and Node2.java (plus a
util/listener).
My ambition is to demonstrate
1. that any Cache<K,V>.put("DIMENSION.xxx",v) will flow through my Grouper
and "pin" that key in the Cache at @Node=0.
2. that any Cache<K,V>.put("POSITION.xxx",v) will flow through my Grouper
and "pin" that key in the Cache at either @Node=1 or @Node=2 .
Here is my AbstractNode#createCacheManagerProgramatically() config:
private static EmbeddedCacheManager createCacheManagerProgramatically() {
   return new DefaultCacheManager(
         GlobalConfigurationBuilder.defaultClusteredBuilder()
               .transport().addProperty("configurationFile", "jgroups.xml")
               .build(),
         new org.infinispan.configuration.cache.ConfigurationBuilder()
               .clustering()
                  .cacheMode(CacheMode.DIST_SYNC)
                  .hash().numOwners(1).groups().enabled(Boolean.TRUE)
                     .addGrouper(new com.jpmorgan.ct.lri.cs.ae.test.DimensionGrouper<String>())
               .build()
   );
}
And here is my Grouper<T> implementation
public class DimensionGrouper<T> implements Grouper<String> {

   public String computeGroup(String key, String group) {
      if (key.indexOf("DIMENSION.") == 0) {
         String groupPinned = "0";
         System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]");
         // node = exactly 0
         return groupPinned;
      } else if (key.indexOf("POSITION.") == 0) {
         String groupPinned = "" + (1 + (int) (Math.random() * 2));
         System.out.println("Pinning Key=[" + key + "] @Node=[" + groupPinned + "]");
         // node = {1, 2}
         return groupPinned;
      } else {
         return null;
      }
   }

   public Class<String> getKeyType() {
      return String.class;
   }
}
The "logic" is working correctly ... i.e. when from Node2.java I call
for (int i = 0; i < 10; i++) {
   cacheDP.put("DIMENSION." + i, "DimensionValue." + i);
   cacheDP.put("POSITION." + i, "PositionValue." + i);
}
my DimensionGrouper returns "0" from computeGroup(). My question is: how,
in Infinispan, can I map the computeGroup() return value to a physical
Node? I.e. how can I make it so that when computeGroup() returns "0", that
<K,V> entry will *only* be added to the Cache @Node 0?
ISPN-1797 MongoDB cachestore - pending question
by Guillaume SCHEIBEL
Hi Sanne,
You probably missed the notification, but there is still one pending
question I asked you on the pull request:
@Sanne, I would like the MongoDBCacheStoreConfig constructor to throw an
exception if the port is not properly set (between 1 and 65535), but how
should that be handled in the caller?
I can't rethrow it from adapt(), so what should I do?
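For reference, a bare-bones sketch of the validation being discussed is below;
the class name and message are illustrative, not the actual cachestore code.

public class MongoDBPortValidationSketch {

   private final int port;

   public MongoDBPortValidationSketch(int port) {
      // Reject ports outside the valid TCP range at construction time,
      // so misconfiguration fails fast rather than at connect time.
      if (port < 1 || port > 65535) {
         throw new IllegalArgumentException("port must be between 1 and 65535, but was " + port);
      }
      this.port = port;
   }

   public int getPort() {
      return port;
   }
}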
Thanks
Guillaume
XSite Performance
by Erik Salter
Hi all,
I've spent quite a bit of time with the existing XSite implementation,
getting my solution to run in multiple data centers. I've been talking with
Mircea and Bela on and off over the past few weeks, but since this affects
the community and potentially commercial customers, I wanted to share it with a
wider audience.
The problems I see are as follows:
1. A bridge end will see all the traffic - sending and receiving - for
all nodes within a cluster.
2. The bridge end of a site will apply each change with the bridge end
as the transaction originator.
In my deployment, this can be three sites backing up their data to the other
two. So for 3 sites of 12 nodes each, a single bridge end will see all 36
nodes' worth of traffic. This breaks linear scalability. In my QA's
testing, a 3 DC cluster of 6 nodes is about 1/10 the throughput of a single
cluster.
I think I-RAC solves some of the problems, like the reliable sending of data,
but it doesn't really help with performance in high-throughput cases.
(Note: FWIW, my apps do about an 8-12:1 read/write ratio.)
So I've prototyped the following:
1. Load-balanced applying the changes from a remote SiteMaster across
all local nodes in a cluster. The basics are that there is still a single
SiteMaster (thereby not breaking the existing JGroups model). This is okay,
since it's the same bandwidth pipe, and as long as there is no
unmarshalling, it's a simple buffer copy. The difference is that the
messages are now forwarded to other nodes in the local cluster and delivered
to the ISPN layer there for unmarshalling and data application. Note that
this does NOT break XSite synchronous replication, as I'm still preserving
the originating site.
2. I also needed more intelligent application of the data that is
replicated. My local cluster will save data to 8-9 caches that need to be
replicated. Instead of replicating data on cache boundaries, I consolidated
the data to only replicate an aggregate object. In turn, I have a custom
BackupReceiver implementation that takes this object and expands it into the
requisite data for the 8-9 caches. Since these caches are a mixture of
optimistic and pessimistic modes, I made liberal use of the Distributed
Executor framework to execute on the data owner for any pessimistic caches
(a rough sketch of that owner-routing follows below).
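The sketch below illustrates that idea under the Distributed Executor API of
that era (DistributedCallable / DefaultExecutorService); the unpacking of the
aggregate object into individual entries is elided, and the class name, helper
method and key/value types are illustrative, not the actual BackupReceiver code.

import java.io.Serializable;
import java.util.Set;
import org.infinispan.Cache;
import org.infinispan.distexec.DefaultExecutorService;
import org.infinispan.distexec.DistributedCallable;
import org.infinispan.distexec.DistributedExecutorService;

// Applies a single write on the node that owns the key, so the pessimistic
// lock is acquired locally on the data owner rather than on the bridge end.
public class OwnerLocalWrite implements DistributedCallable<String, Object, Void>, Serializable {

   private final String key;
   private final Object value;
   private transient Cache<String, Object> cache;

   public OwnerLocalWrite(String key, Object value) {
      this.key = key;
      this.value = value;
   }

   @Override
   public void setEnvironment(Cache<String, Object> cache, Set<String> inputKeys) {
      this.cache = cache;   // injected on the executing node
   }

   @Override
   public Void call() throws Exception {
      cache.put(key, value);
      return null;
   }

   // Routing one entry from the expanded aggregate to the node that owns its key:
   public static void applyOnOwner(Cache<String, Object> pessimisticCache,
                                   String key, Object value) throws Exception {
      DistributedExecutorService des = new DefaultExecutorService(pessimisticCache);
      des.submit(new OwnerLocalWrite(key, value), key).get();
   }
}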
The early results are very promising, especially WRT async replication.
(Sync replication just sucks - I'm trying to design that out of the system.)
There are a few changes made to support a custom BackupReceiver
implementation [1]. There are some other ideas I had floating around in my
head, but it's such a fine line between clever and stupid.
1. I'm prototyping an option where the SiteMaster would load-balance
among the other cluster members, but exclude itself. Preliminary testing
shows that this really only helps when the cluster size > numOwners + 1.
2. I think the staggered get work will be vital in these deployments.
Mircea had the idea of suppressing reads on the SiteMaster node.
3. In the case of numerous modifications, I've seen them processed "out
of order". This is where I-RAC, with batched modifications, could mitigate
that. (Maybe Total Order?)
4. In the case of single-key modifications, I was playing around with
adding an int header to the RELAY2 message to give any application a "hint"
about the hash code of the key. When ISPN received the data, there would be
an initial deserialization penalty as ISPN would need to map the hash code
to the address of the data owner for that cache (in 5.2 onwards, there is no
guarantee that the segments for caches in a cache manager will map as in
5.1.x). This would build a "routing table" of sorts to RELAY2, so if it
sees this key, it'll know to route it to the owner directly. (And on a view
change, this table would be cleared) Really, though, this was a thought for
improving synchronous replication performance.
Any other thoughts? Feedback?
[1] https://github.com/an1310/infinispan/tree/t_receiver
Regards,
Erik