[jboss-dev-forums] [Design of JBossCache] - Re: Implicit marshalled values - a better way of handling re
jason.greene@jboss.com
do-not-reply at jboss.com
Mon Jan 7 21:29:36 EST 2008
anonymous wrote :
| Sure you can deadlock. When I said "node A" and "node B" I meant different cache instances in the cluster. They both acquire local write locks on the same node in the tree, insert/update their key, then try to acquire the WL globally as part of tx commit. Fails.
|
OK, right: with sync replication a lock timeout could occur when simultaneous commits include modifications to the same node.
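To make the interleaving concrete, here is a rough sketch against the 2.x API (the config file name, the sleep, and the TransactionManager lookup are placeholders, not a real test):

    import javax.transaction.TransactionManager;

    import org.jboss.cache.Cache;
    import org.jboss.cache.DefaultCacheFactory;
    import org.jboss.cache.Fqn;

    public class CommitCollision {
        public static void main(String[] args) throws Exception {
            // Both caches join the same REPL_SYNC cluster; the config file
            // name stands in for a pessimistic-locking, sync-replication
            // setup that has a transaction manager configured.
            Cache<Object, Object> cacheA =
                new DefaultCacheFactory<Object, Object>().createCache("repl-sync-pessimistic.xml");
            Cache<Object, Object> cacheB =
                new DefaultCacheFactory<Object, Object>().createCache("repl-sync-pessimistic.xml");

            // Both writers target the node for the same hash code.
            Fqn node = Fqn.fromString("/entity/1077442");

            Thread a = writer(cacheA, node, "keyA", "uuid-A");
            Thread b = writer(cacheB, node, "keyB", "uuid-B");
            a.start(); b.start();
            a.join(); b.join();
            // Each side holds its local WL on /entity/1077442 and then
            // blocks in prepare, waiting for the remote WL the other side
            // already holds; the loser surfaces a lock timeout.
        }

        private static Thread writer(final Cache<Object, Object> cache, final Fqn node,
                                     final Object key, final Object value) {
            return new Thread() {
                public void run() {
                    TransactionManager tm =
                        cache.getConfiguration().getRuntimeConfig().getTransactionManager();
                    try {
                        tm.begin();
                        cache.put(node, key, value); // local WL acquired here
                        Thread.sleep(500);           // widen the race window
                        tm.commit();                 // global WL acquired on replication
                    } catch (Exception e) {
                        e.printStackTrace();         // TimeoutException lands here
                    }
                }
            };
        }
    }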
anonymous wrote :
| Assume pessimistic locking here (which may not be an issue if we do this far enough in the future, but partly I'm thinking about whether I want to try it this way now.)
|
This is a problem for all forms of locking; even with O/L, a WL is acquired in prepare. We really should take a look at handling this condition better. I will start a separate topic on that.
Collisions should not be common, since they would require simultaneous updates on an identical hash code, so with a reasonable timeout this should be OK.
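For reference, the single-node layout that makes collisions hash-code-only looks roughly like this ("cache", "key", and "value" are just stand-ins):

    // Illustrative fragment: the hash code names the node, so the Fqn
    // stays primitive; the actual key/value pair rides in the node's
    // data map, so two distinct keys only contend on the same node
    // when their hash codes are equal.
    Fqn bucket = Fqn.fromString("/entity/" + key.hashCode());
    cache.put(bucket, key, value);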
anonymous wrote :
| Let's pretend a bit that the 2 node solution is necessary, in case it leads somewhere. :) You can have concurrent putForExternalRead calls on different cache instances, each of which would store a different UUID for the same entity. You'd end up with two copies of the entity in the cache.
|
Yes, that is possible, since the operations are async and not in the same TX. Hopefully eviction would catch that scenario. You could further reduce the occurrence by periodically comparing the number of subnodes against the number of key entries; if they differ, purge the dups.
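Something like this, as a sketch of that cleanup pass, assuming the hypothetical two-node layout (/keys holding key -> UUID entries, /values/<uuid> subnodes holding the data; all names invented):

    // Fragment: "cache" is a running Cache<Object, Object>; Node and Fqn
    // come from org.jboss.cache, Set/HashSet from java.util.
    Node<Object, Object> keys = cache.getRoot().getChild(Fqn.fromString("/keys"));
    Node<Object, Object> values = cache.getRoot().getChild(Fqn.fromString("/values"));

    // Every UUID referenced from the key mappings is live.
    Set<Object> live = new HashSet<Object>(keys.getData().values());

    // More value subnodes than referenced UUIDs means dups: purge the orphans.
    if (values.getChildrenNames().size() > live.size()) {
        for (Object uuid : new HashSet<Object>(values.getChildrenNames())) {
            if (!live.contains(uuid)) {
                values.removeChild(uuid);
            }
        }
    }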
anonymous wrote :
| Hmm -- actually you'd get a weird effect where the PFER call for inserting the key/uuid would be aborted when propagated (since the key already exists on the remote node) but the PFER for the uuid node would succeed.
|
Now that is an interesting scenario. Oh how I love PFER and the problems it causes ;) The above (periodic cleanup) solution should work here too.
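As a fragment, the two PFERs in that scheme would be roughly this ("key", "uuid", and "entity" are stand-ins):

    // If another instance already replicated a mapping for this key, the
    // first call is quietly discarded (the key already exists remotely),
    // but the second still creates an orphan /values/<uuid> node -- the
    // weirdness described above.
    cache.putForExternalRead(Fqn.fromString("/keys"), key, uuid);
    cache.putForExternalRead(Fqn.fromString("/values/" + uuid), "item", entity);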
anonymous wrote :
| OK, let's ignore the 2 node solution. ;) Lots of problems like that; weirdness when Hibernate suspends transactions, but now we're dealing with multiple cache writes.
|
Right, it sounds like 1 node is better anyway. The write problems still exist today; the only difference is that they could occur more frequently if there are a large number of writes to non-primitive key objects that have the same hash code.
anonymous wrote :
| anonymous wrote : Keep them coming!
| With OL, we'd have versioning problems, since the version is applied to the node, not the key/value pair. 2 node solution rises from the dead....
|
Ugh. Yes. In general I don't think the cache node version should be defined by the app to begin with. Is there any reason why the "version" can't be an application property? Let's say that the Object[] becomes a class that contains a String version and an Object[].
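Roughly what I mean (names invented; the point is only that the version travels with the entry instead of the node):

    import java.io.Serializable;

    // Replaces the bare Object[] stored under a key: same payload, plus
    // an application-supplied version the cache never has to interpret.
    public final class VersionedValues implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String version;
        private final Object[] values;

        public VersionedValues(String version, Object[] values) {
            this.version = version;
            this.values = values;
        }

        public String getVersion() { return version; }
        public Object[] getValues() { return values; }
    }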
anonymous wrote :
| Architecturally, probably cleaner to have a cache per SessionFactory, with the default classloader for deserialization being the deployment's classloader. Seems the only negative to that is if a tx spans session factories, which is probably not a common case.
But then we are back to region-based marshalling, and allowing custom types in the Fqn, which is very broken.
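For contrast, a sketch of the two options with the 2.x Region API ("deploymentClassLoader" and friends are placeholders, and the per-region variant assumes useRegionBasedMarshalling is enabled):

    // (a) Cache per SessionFactory: one default classloader for the
    //     whole tree, registered once on the root region.
    Region root = cache.getRegion(Fqn.ROOT, true);
    root.registerContextClassLoader(deploymentClassLoader);

    // (b) Shared cache: a region (and classloader) per deployment,
    //     i.e. the region-based marshalling scheme argued against above.
    Region orders = cache.getRegion(Fqn.fromString("/ordersApp"), true);
    orders.registerContextClassLoader(ordersAppClassLoader);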
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4117779#4117779
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4117779
More information about the jboss-dev-forums mailing list