On 10/11/2009 12:51 PM, Sanne Grinovero wrote:
Hi,
sorry for the late answer:
thank you very much, this makes the logs look much better. I still
have one case in core (stacktrace below), but it doesn't appear to be
blocking for me.
I'm now reusing this tcp.xml and other general test utilities in the
lucene-directory module, like SuiteResourcesAndLogTest and
MultipleCacheManagersTest.
I'm copy-pasting these classes ATM to see how far I can get, but could we
consider moving this kind of testing utility to a module, so that we
can reuse the code across the tests of other modules?
Couldn't you just depend on infinispan-core just like other projects
needing these classes do?
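For illustration, that could be a test-jar dependency along these lines in the
lucene-directory pom (just a sketch; whether infinispan-core actually publishes
its test classes as a test-jar, and the exact coordinates, are assumptions on
my part):

    <!-- Sketch: pull in infinispan-core's test classes via a test-jar (assumed to exist) -->
    <dependency>
       <groupId>org.infinispan</groupId>
       <artifactId>infinispan-core</artifactId>
       <version>${project.version}</version>
       <type>test-jar</type>
       <scope>test</scope>
    </dependency>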
Sanne
FYI, the remaining stacktrace in the logs:
The stacktrace is similar to those you get when either you haven't
set the IPv4 preference or you're binding to the wrong NIC. It's
interesting, though, that you mentioned that when you passed -Dbind... and
-Djava.net.prefer... directly, you had no such issue.
It'd be interesting to see JGroups TRACE logs.
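(As a quick way to test the IPv4/NIC theory outside JGroups, a minimal check
along these lines could be run on the same box; the group address and payload
size are taken from the stacktrace below, the class name is made up:)

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Minimal multicast-send check, independent of JGroups and of the test suite.
    // If this also throws "java.io.IOException: Invalid argument", the problem is
    // environmental (e.g. IPv6 being preferred, or the wrong NIC), not in the tests.
    public class McastSendCheck {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("232.10.10.10"); // address from the log below
            byte[] payload = new byte[136];                            // size from the log below
            DatagramSocket sock = new DatagramSocket();
            try {
                sock.send(new DatagramPacket(payload, payload.length, group, 45588));
                System.out.println("send OK, java.net.preferIPv4Stack="
                        + System.getProperty("java.net.preferIPv4Stack"));
            } finally {
                sock.close();
            }
        }
    }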
Also, seeing that you're able to reproduce it easily, it'd be great if
you could modify the JGroups source code, in particular the line
before org.jgroups.protocols.UDP._send(UDP.java:205), and print there
(at TRACE level) the contents of the DatagramPacket being sent around. Note
that it has no useful toString() method. On top of that, also print the -Dbind...
and -Djava.net.prefer... system properties. Once you've done that, run
Maven both ways mentioned, zip up the log and send it to us.
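For illustration, a small helper like this could be called from UDP._send()
just before the socket send; it's only a sketch (the class name is made up, and
jgroups.bind_addr / java.net.preferIPv4Stack are my assumptions about the
properties referred to above):

    import java.net.DatagramPacket;

    // Hypothetical helper, not part of JGroups: formats the information worth
    // logging at TRACE level right before DatagramSocket.send() in UDP._send().
    public final class PacketDebug {

        public static String describe(DatagramPacket p) {
            // DatagramPacket has no useful toString(), so build one by hand.
            return "dest=" + p.getAddress() + ":" + p.getPort()
                 + ", length=" + p.getLength() + " bytes"
                 + ", jgroups.bind_addr=" + System.getProperty("jgroups.bind_addr")                // assumed property
                 + ", java.net.preferIPv4Stack=" + System.getProperty("java.net.preferIPv4Stack"); // assumed property
        }

        private PacketDebug() {
        }
    }

Inside UDP._send() that would then just be something like
if(log.isTraceEnabled()) log.trace(PacketDebug.describe(packet));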
2009-10-10 18:43:55,204 ERROR [org.jgroups.protocols.UDP]
(Timer-1,infinispan-cluster,Jalapeno-28689) failed sending message to
null (133 bytes)
java.lang.Exception: dest=/232.10.10.10:45588 (136 bytes)
at org.jgroups.protocols.UDP._send(UDP.java:213)
at org.jgroups.protocols.UDP.sendMulticast(UDP.java:170)
at org.jgroups.protocols.TP.doSend(TP.java:1075)
at org.jgroups.protocols.TP.send(TP.java:1061)
at org.jgroups.protocols.TP.down(TP.java:922)
at org.jgroups.protocols.PING.sendMcastDiscoveryRequest(PING.java:72)
at org.jgroups.protocols.PING.sendGetMembersRequest(PING.java:55)
at org.jgroups.protocols.Discovery$PingSenderTask$1.run(Discovery.java:493)
at org.jgroups.util.TimeScheduler$RobustRunnable.run(TimeScheduler.java:194)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Invalid argument
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:612)
at org.jgroups.protocols.UDP._send(UDP.java:205)
... 17 more
2009-10-10 18:43:56,209 ERROR [org.jgroups.protocols.UDP]
(Timer-2,infinispan-cluster,Jalapeno-28689) failed sending message to
null (133 bytes)
java.lang.Exception: dest=/232.10.10.10:45588 (136 bytes)
at org.jgroups.protocols.UDP._send(UDP.java:213)
at org.jgroups.protocols.UDP.sendMulticast(UDP.java:170)
at org.jgroups.protocols.TP.doSend(TP.java:1075)
at org.jgroups.protocols.TP.send(TP.java:1061)
at org.jgroups.protocols.TP.down(TP.java:922)
at org.jgroups.protocols.PING.sendMcastDiscoveryRequest(PING.java:72)
at org.jgroups.protocols.PING.sendGetMembersRequest(PING.java:55)
at org.jgroups.protocols.Discovery$PingSenderTask$1.run(Discovery.java:493)
at org.jgroups.util.TimeScheduler$RobustRunnable.run(TimeScheduler.java:194)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Invalid argument
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:612)
at org.jgroups.protocols.UDP._send(UDP.java:205)
... 17 more
2009/10/8 Vladimir Blagojevic <vblagoje(a)redhat.com>:
> On 09-10-08 9:29 AM, Galder Zamarreno wrote:
>>
>> On 10/08/2009 02:20 AM, Sanne Grinovero wrote:
>>
>>> Hello,
>>> I'd appreciate some advice to understand if my tests are wrong, or if
>>> my environment is screwed.
>>>
>>> Having ported some of Łukasz's tests for the Lucene Directory, I have 6 tests:
>>> one test fails and others appear to work, but in all logs I see this
>>> kind of errors:
>>>
>>> 2009-10-08 00:55:37,493 ERROR [org.jgroups.protocols.UDP]
>>> (Timer-1,bluestar.tana-50817) failed sending message to null (138
>>> bytes)
>>> java.lang.Exception: dest=/228.10.10.10:45588 (141 bytes)
>>> at org.jgroups.protocols.UDP._send(UDP.java:213)
>>>
>>> when running "mvn clean test"
>>>
>> What's the root cause? Could you please show the entire stacktrace? It
>> appears that there's some multicast issue. I don't think it would have
>> much impact, because the issue is to do with JGroups diagnostics and you're
>> running the tests with tcp.
>>
>> Try putting enable_diagnostics="false" in
>> core/src/main/resources/stacks/tcp.xml in the TCP protocol.
>>
> I am confident this is the problem. I don't think we need diagnostics
> running, wasting a thread per channel.
> I'll turn this off today.
>
> Cheers
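For reference, the enable_diagnostics change discussed above would look roughly
like this in core/src/main/resources/stacks/tcp.xml (only the enable_diagnostics
attribute is the point here; the other attributes and protocols are placeholders):

    <!-- Sketch only: the real tcp.xml carries many more attributes and protocols -->
    <config>
        <TCP bind_port="7800"
             enable_diagnostics="false"/>
        <!-- ...rest of the protocol stack unchanged... -->
    </config>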
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache