As I said, we had not finished: after 10 hours of debugging it was
night and we were still in the office, but the ugly experiment you
see was working fine, so I opted to send you a preview of the branch
in case I didn't get up early in the morning and you needed something
that worked ;)
I don't know why it fails for you; it passed our tests. I will look
again and polish the code.
On 13 April 2013 10:41, Ales Justin <ales.justin(a)gmail.com> wrote:
Shouldn't this "synchronous" flag still be used?
https://github.com/Sanne/hibernate-search/blob/077f29c245d2d6e960cd6ab59f...
e.g.
RequestOptions options;
if (synchronous) {
    int size = dispatcher.getChannel().getView().getMembers().size();
    options = RequestOptions.SYNC();
    options.setRspFilter( new WaitAllFilter( size ) );
} else {
    options = RequestOptions.ASYNC();
}
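For context, here is a self-contained sketch of the counting logic such a WaitAllFilter could use. The class name comes from the branch, but this body is an assumption: the real class would implement org.jgroups.blocks.RspFilter, whose two callback methods are mirrored here as plain methods so the sketch compiles standalone.

```java
// Hypothetical sketch: keeps a synchronous JGroups RPC blocked until
// every member of the view has responded. The real class would
// implement org.jgroups.blocks.RspFilter; this standalone version
// mirrors its two callback methods.
public class WaitAllFilter {

    private final int expected; // view size at dispatch time
    private int received;

    public WaitAllFilter(int expected) {
        this.expected = expected;
    }

    // invoked once per response; we accept them all
    public synchronized boolean isAcceptable(Object response) {
        received++;
        return true;
    }

    // the dispatcher keeps blocking while this returns true
    public synchronized boolean needMoreResponses() {
        return received < expected;
    }
}
```

The point of pairing it with RequestOptions.SYNC() is that the dispatcher then waits until needMoreResponses() turns false, i.e. until all view members have acknowledged.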
-Ales
On Apr 13, 2013, at 11:25 AM, Ales Justin <ales.justin(a)gmail.com> wrote:
Hmmm, did you try our QueryTest with this fix?
With HS update (your jgroupsWorkaround branch), my current run:
Running org.jboss.test.capedwarf.cluster.test.QueryTest
Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 14.287 sec
<<< FAILURE!
Results :
Failed tests:
deleteAndQueryInA(org.jboss.test.capedwarf.cluster.test.QueryTest): Should
not be here: null
deleteAndQueryInA_2(org.jboss.test.capedwarf.cluster.test.QueryTest):
Should not be here: null
-Ales
On Apr 13, 2013, at 2:02 AM, Sanne Grinovero <sanne(a)hibernate.org> wrote:
That's right: as suggested by Emmanuel, I plan to separate the JGroups
sync/async option from the worker.execution property, so you can tune
the two independently.
I think the JGroups option's default could depend on the backend, if
not otherwise specified, and if we all agree that doesn't make it too
confusing.
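To illustrate the proposed split: something along these lines, where the first property exists today and the second name is purely hypothetical, just to show the shape of two independent knobs.

```
# caller-side semantics: wait for the backend, or fire-and-forget
hibernate.search.default.worker.execution = sync
# hypothetical separate switch for the JGroups message mode
hibernate.search.default.worker.jgroups.sync = false
```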
@All, the performance problem seemed to be caused by a problem in
JGroups, which I've logged here:
https://issues.jboss.org/browse/JGRP-1617
For the record, the first operation was indeed triggering some lazy
initialization of indexes, which in turn would trigger a Lucene
Directory being started, triggering 3 cache starts, which in turn
would trigger 6 state transfer processes: so indeed the first
operation was not exactly "cheap" performance-wise, yet it would still
complete in about 120 milliseconds.
The same cost is paid again when the second node is hit for the first
time; after that, index write operations block the writer for <1ms (I
have not investigated potential throughput further).
Since I'm not sure about the option of depending on a newer JGroups
release, nor about the complexity of a fix, I'll implement a
workaround in HSearch in the scope of HSEARCH-1296.
As a lesson learned, I think we need to polish some of our TRACE-level
messages to include the cache name: to resolve this we had not just
many threads and components, but also four of them using JGroups
(interleaving messages of all sorts), and nine different caches
involved in each simple write operation in CapeDwarf, which made it
interesting to figure out what was going on! I'm also wondering how
hard it would be to have a log parser that converts my 10GB of text
log from today into a graphical sequence diagram.
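The log-parser idea could be sketched roughly like this. It assumes lines shaped like "[thread] Sender -> Receiver: message", which is NOT the actual Hibernate Search TRACE format, and emits PlantUML sequence-diagram statements; class and method names are made up for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: turn interaction-style log lines into a
// PlantUML sequence diagram. Lines that don't match the assumed
// "[thread] Sender -> Receiver: message" shape are skipped.
public class LogToSequenceDiagram {

    private static final Pattern LINE =
            Pattern.compile("\\[(\\S+)\\] (\\S+) -> (\\S+): (.+)");

    public static List<String> toPlantUml(List<String> logLines) {
        List<String> out = new ArrayList<>();
        out.add("@startuml");
        for (String line : logLines) {
            Matcher m = LINE.matcher(line);
            if (m.matches()) {
                // group 1 is the thread name; dropped for brevity
                out.add(m.group(2) + " -> " + m.group(3) + " : " + m.group(4));
            }
        }
        out.add("@enduml");
        return out;
    }
}
```

Feeding the output to PlantUML would render each matched log line as one arrow in the diagram.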
Big thanks to Mircea who helped me figuring this out.
Sanne
On 12 April 2013 21:10, Ales Justin <ales.justin(a)gmail.com> wrote:
I think we need more fine-grained config for this new JGroups sync feature.
I added this to our cache config
<property
name="hibernate.search.default.worker.execution">async</property>
and it broke our tests, whereas the previous (old, non-JGroups-sync)
behavior worked.
It of course also works without this async config, but in that case we
don't need a sync / ACK JGroups message.
(We didn't have one before, and it worked ;-)
-Ales
On Apr 11, 2013, at 11:51 PM, Sanne Grinovero <sanne(a)hibernate.org> wrote:
There is a "blackhole" indexing backend, which pipes all indexing
requests > /dev/null
Set this as an Infinispan Query configuration property:
default.worker.backend = blackhole
Of course that means the index will not be updated: you might need to
adapt your test to tolerate that, but the point is not functional
testing but to verify how much the SYNC option on the JGroups backend
is actually slowing you down. I suspect the performance penalty is not
in the network but in the fact that you're now waiting for the index
operations, while in async you were not waiting for them to be
flushed.
If you can identify which part is slow, then we can help you with
better configuration options.
On 11 April 2013 20:47, Ales Justin <ales.justin(a)gmail.com> wrote:
What do you mean?
On Apr 11, 2013, at 21:41, Sanne Grinovero <sanne(a)hibernate.org> wrote:
You could try the new sync version but setting the blackhole backend on the
master node to remove the indexing overhead from the picture.
On Apr 11, 2013 8:39 PM, "Sanne Grinovero" <sanne(a)hibernate.org> wrote:
Are you sure that the async version actually had applied all writes to the
index in the measured interval?
On Apr 11, 2013 8:13 PM, "Ales Justin" <ales.justin(a)gmail.com> wrote:
Although this change fixes the query lookup,
it adds a horrible performance hit:
Running CapeDwarf cluster QueryTest:
with HSEARCH-1296
21:00:27,188 INFO
[org.hibernate.search.indexes.impl.DirectoryBasedIndexManager]
(http-/192.168.1.102:8080-1) HSEARCH000168: Serialization service Avro
SerializationProvider v1.0 being used for index
'default_capedwarf-test__com.google.appengine.api.datastore.Entity'
21:01:17,911 INFO [org.jboss.web] (ServerService Thread Pool -- 49)
JBAS018224: Unregister web context: /capedwarf-tests
50sec
old 4.2.0.Final HS
21:08:19,988 INFO
[org.hibernate.search.indexes.impl.DirectoryBasedIndexManager]
(http-/192.168.1.102:8080-2) HSEARCH000168: Serialization service Avro
SerializationProvider v1.0 being used for index
'default_capedwarf-test__com.google.appengine.api.datastore.Entity'
21:08:20,829 INFO [org.jboss.web] (ServerService Thread Pool -- 49)
JBAS018224: Unregister web context: /capedwarf-tests
841ms
---
I added
<property name="enable_bundling">true</property>
to the AS JGroups transport config, but saw no improvement.
Any (other) idea?
-Ales
_______________________________________________
hibernate-dev mailing list
hibernate-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev