Hi,
in the case of Lucene queries my main concern is about how to a) properly
serialize the query and b) stream the results back, possibly with some
pagination / flow control.
I didn't look into the HotRod sources, but I wasn't assuming the hard part
would be to make it possible to send some new commands?
So if that's all that is needed for transactions to work properly, can't you
just add the commands before 5.1?
Cheers,
Sanne
On 15 Jul 2011 12:53, "Mircea Markus" <mircea.markus(a)jboss.com> wrote:
Hi,
As there is a high community demand for having these operations in place,
and most of these are targeted for post-5.1 releases, I thought about a
workaround for having this functionality in place.
I hijacked HotRod's put operation and added a custom interceptor, so that
if a certain object is being "put" into the remote cache, the server-side
interceptor jumps in and runs the transaction.
This doesn't look too bad from the user's perspective, e.g. for supporting
transactions:
RemoteCache rc = getRemoteCache(); // from somewhere...
// this is what we'll use for running remote transactions over HotRod
BatchEnabledRemoteCache berc = new BatchEnabledRemoteCache(rc);
berc.startBatch(); // everything from here to the endBatch call is a single transaction
berc.put("k", "v1");
berc.put("k2", "v2");
berc.put("k3", "v3");
berc.endBatch(true); // all or nothing!
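To make the intended semantics concrete, here is a minimal standalone sketch of the all-or-nothing behaviour described above. Note the hedges: BatchEnabledRemoteCache isn't shown in this thread, so the class and method names below are illustrative, a plain Map stands in for the remote cache, and the batch is simply buffered client-side and applied atomically on endBatch, whereas the actual prototype piggybacks on HotRod's put plus a server-side interceptor.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only -- not the Infinispan API. A plain Map stands in
// for RemoteCache; writes made inside a batch are buffered and either all
// applied (endBatch(true)) or all discarded (endBatch(false)).
class BatchingCacheSketch {
    private final Map<String, String> backingCache;
    private Map<String, String> buffered; // non-null while a batch is open

    BatchingCacheSketch(Map<String, String> backingCache) {
        this.backingCache = backingCache;
    }

    void startBatch() {
        buffered = new LinkedHashMap<>(); // keep insertion order of puts
    }

    void put(String key, String value) {
        if (buffered != null) {
            buffered.put(key, value); // deferred until endBatch
        } else {
            backingCache.put(key, value); // no batch open: write through
        }
    }

    // successful == true applies all buffered writes; false discards them
    void endBatch(boolean successful) {
        if (successful) {
            backingCache.putAll(buffered);
        }
        buffered = null;
    }
}
```

The same shape would then let a real implementation swap the buffered map for whatever payload the hijacked put carries to the server.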
Of course this won't work with clients other than the Java client, but I
think most of our users are using that one ATM.
Currently there's only support for transactions, but this approach (and the
code) can easily be extended to map/reduce and querying.
I added a short description of how this can be used [1]; the source code is
available here [2].
What do you think about it? Is it worth suggesting this approach (and
possibly the code as well) to the users?