[JBoss JIRA] (AS7-5951) Cannot reliably deploy OSGi host and fragment bundles on server restart
by Thomas Diesler (JIRA)
[ https://issues.jboss.org/browse/AS7-5951?page=com.atlassian.jira.plugin.s... ]
Thomas Diesler edited comment on AS7-5951 at 12/19/12 11:21 AM:
----------------------------------------------------------------
I think we need more data here.
I updated the framework and bootstrap integration code to produce better logging. You should now see something like this
{code}
[tdiesler@tdmac jboss-as-7.2.0.Alpha1-SNAPSHOT]$ cat standalone/log/server.log | grep "on behalf of"
17:00:43,353 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-3) Install bootstrap deployments [[org.apache.felix.log:1.0.0,location=org.apache.felix.log], [jboss-osgi-logging:1.0.0,location=org.jboss.osgi.logging], [org.apache.felix.configadmin:1.2.8,location=org.apache.felix.configadmin], [jboss-as-osgi-configadmin:7.2.0.Alpha1-SNAPSHOT,location=org.jboss.as.osgi.configadmin]] on behalf of jbosgi.BootstrapBundles.INSTALL
17:00:43,379 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-3) Resolve bootstrap bundles on behalf of jbosgi.BootstrapBundles.RESOLVE
17:00:43,481 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-3) Activate bootstrap bundles on behalf of jbosgi.BootstrapBundles.ACTIVATE
17:00:43,487 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-7) Complete bootstrap bundles on behalf of jbosgi.BootstrapBundles.COMPLETE
17:00:43,502 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-7) Install persistent deployments [] on behalf of jbosgi.PersistentBundles.INSTALL
17:00:43,503 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-7) Resolve persistent bundles on behalf of jbosgi.PersistentBundles.RESOLVE
17:00:43,503 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-7) Activate persistent bundles on behalf of jbosgi.PersistentBundles.ACTIVATE
17:00:43,504 DEBUG [org.jboss.osgi.framework] (MSC service thread 1-8) Complete persistent bundles on behalf of jbosgi.PersistentBundles.COMPLETE
{code}
The idea is that the persistent bundles get installed only after the bootstrap bundles (i.e. the configured capabilities) complete. Also, no persistent bundle should resolve before all persistent bundles are installed. Could you please verify your output against these rules.
For a fragment that does not attach, we need to find out why it is not included in the resolve attempt. If it is included, it might be a resolver issue.
The related code change is [here|https://github.com/jbossas/jboss-as/pull/3713]
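To check the ordering rules above mechanically, a short script along these lines could scan the "on behalf of" lines from server.log (a sketch only; `check_phase_order` is a hypothetical helper, not part of the AS7 tooling):

```python
import re

# Expected global ordering of the framework phases described above:
# persistent bundles install only after bootstrap completes, and no
# persistent bundle resolves before all persistent bundles are installed.
PHASE_ORDER = [
    "jbosgi.BootstrapBundles.INSTALL",
    "jbosgi.BootstrapBundles.RESOLVE",
    "jbosgi.BootstrapBundles.ACTIVATE",
    "jbosgi.BootstrapBundles.COMPLETE",
    "jbosgi.PersistentBundles.INSTALL",
    "jbosgi.PersistentBundles.RESOLVE",
    "jbosgi.PersistentBundles.ACTIVATE",
    "jbosgi.PersistentBundles.COMPLETE",
]

def check_phase_order(log_text):
    """Return the phases found in the log, and whether they appear in order."""
    found = re.findall(r"on behalf of (jbosgi\.\S+)", log_text)
    indices = [PHASE_ORDER.index(p) for p in found if p in PHASE_ORDER]
    return found, indices == sorted(indices)
```

Feeding it the contents of standalone/log/server.log should report whether any persistent-bundle phase ran before the bootstrap phases completed.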
> Cannot reliably deploy OSGi host and fragment bundles on server restart
> -----------------------------------------------------------------------
>
> Key: AS7-5951
> URL: https://issues.jboss.org/browse/AS7-5951
> Project: Application Server 7
> Issue Type: Bug
> Components: OSGi
> Affects Versions: 7.2.0.Alpha1
> Environment: Windows XP SP3
> Reporter: Paul Illingworth
> Assignee: Thomas Diesler
> Labels: OSGI
> Attachments: this_one_failed.txt, this_one_worked.txt
>
>
> If I deploy guice-3.0.0 (host bundle), guice-servlet (fragment) and guice-persist (fragment) into the "deployments" folder then there is no guarantee the fragments will be installed before the host, and so they may not be attached to the host when it resolves.
> This happens on starting the application server. Sometimes the fragments are attached, sometimes they aren't.
> If I install the bundles into the "bundles" folder structure and add capability entries to the standalone.xml file then it works as expected.
> I am using 7.2.0-Alpha1 built from cb72a7cd1669131b28a552f1dbf3c2582ad19813.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (JGRP-1548) UNICAST2: send STABLE message after 'last received' message
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1548?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-1548:
--------------------------------
The good thing about sending STABLE messages that was not mentioned above is that a sender does not need to store messages for a long time; as soon as a STABLE(22) message is received, the sender can purge all messages <= 22.
On the downside, we'll be sending way more STABLE messages than with the current scheme, so they need to be small and efficient, like ACKs in UNICAST.
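The purge-on-STABLE behavior can be sketched as a sender-side retransmission table (illustrative only; `SenderWindow` and its method names are hypothetical, not JGroups API):

```python
class SenderWindow:
    """Sender-side buffer of unacked messages, purged on STABLE(n)."""

    def __init__(self):
        self.sent = {}  # seqno -> payload, kept for possible retransmission

    def send(self, seqno, msg):
        # Store until the receiver reports the message as stable.
        self.sent[seqno] = msg

    def on_stable(self, seqno):
        # STABLE(22) means the receiver delivered everything <= 22,
        # so the sender can purge those messages immediately.
        for s in [s for s in self.sent if s <= seqno]:
            del self.sent[s]
```

So after sending messages 1..25 and receiving STABLE(22), only 23..25 remain buffered for retransmission.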
> UNICAST2: send STABLE message after 'last received' message
> -----------------------------------------------------------
>
> Key: JGRP-1548
> URL: https://issues.jboss.org/browse/JGRP-1548
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.3
>
>
> Contrary to UNICAST, which acks every message, UNICAST2 never acks messages, but only asks the sender to retransmit a message when a gap has been detected.
> However, the drawback of this (negative ack) mechanism is the so called last-message-dropped problem: when A sends messages [1..5] to B, but 5 is dropped by the transport, as A doesn't retransmit messages until it gets a retransmission request from B, B only gets messages [1..4].
> B will *not* ask A to retransmit message 5, as B doesn't know A *sent* message 5 in the first place.
> If A doesn't send message 6 for B to detect 5 is missing and asking A for retransmission, B won't get that message.
> The way this is currently handled is with stable messages. A STABLE message is sent from B to A every stable_interval ms or whenever M bytes from A have been received. In the worst case, B will have to wait stable_interval ms until it finally receives message 5 from A.
> SOLUTION:
> In addition to time and size based STABLE messages, we could send a STABLE message whenever the batch of messages removed from the receive window has completed and the receive window is empty.
> This would send a STABLE message immediately when a single message has been received (and no other messages from A are in the receive window), but it would send another STABLE message only when all (e.g.) 200 messages from A have been processed and the receive window is empty.
> With this new mechanism, we could even remove the time-based STABLE messages !
> Example:
> - At time T0, messages M1 and M2 are received. A STABLE message for M2 is sent.
> - At T+500 (ms), messages M3-M100 are received. A STABLE message for M100 is sent
> - At T+1500, M101 is received. A STABLE message for M101 is sent.
> - At T+2000, M102 is received. A STABLE message for M102 is sent.
> - At T+2010, M103-M500 are received. A STABLE message for M500 is sent
> (Note that the example above didn't take size-based STABLE messages into account)
> This is similar to the ACK-based scheme in UNICAST, where we only send an ack for the last message in a batch (or for a single message if no batch has been received).
> This new mechanism needs to be configurable: if enabled, the time-based STABLE mechanism would be disabled.
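The proposed trigger (send a STABLE whenever the batch removed from the receive window completes and the window is empty) can be sketched like this; `ReceiveWindow` and its names are hypothetical, not the UNICAST2 implementation:

```python
class ReceiveWindow:
    """Receiver-side window that emits STABLE(highest) when it drains."""

    def __init__(self, send_stable):
        self.pending = {}            # seqno -> message, not yet delivered
        self.next_to_deliver = 1
        self.highest_delivered = 0
        self.send_stable = send_stable

    def receive(self, seqno, msg):
        self.pending[seqno] = msg
        # Deliver the in-order prefix as one batch.
        while self.next_to_deliver in self.pending:
            del self.pending[self.next_to_deliver]
            self.highest_delivered = self.next_to_deliver
            self.next_to_deliver += 1
        # Trigger: batch processed and window empty -> STABLE immediately.
        if not self.pending and self.highest_delivered:
            self.send_stable(self.highest_delivered)
```

A single message with an empty window triggers an immediate STABLE, while a burst with a gap only triggers one STABLE once the gap is filled and the window drains.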
--
[JBoss JIRA] (AS7-6219) Whitespace on Bundle-Classpath manifest header should not fail WAB deployment
by Thomas Diesler (JIRA)
[ https://issues.jboss.org/browse/AS7-6219?page=com.atlassian.jira.plugin.s... ]
Thomas Diesler updated AS7-6219:
--------------------------------
Fix Version/s: 7.2.0.Alpha1
> Whitespace on Bundle-Classpath manifest header should not fail WAB deployment
> -----------------------------------------------------------------------------
>
> Key: AS7-6219
> URL: https://issues.jboss.org/browse/AS7-6219
> Project: Application Server 7
> Issue Type: Enhancement
> Components: OSGi
> Affects Versions: 7.2.0.Alpha1
> Environment: Windows 7
> Reporter: jarkko rantavuori
> Assignee: Thomas Diesler
> Fix For: 7.2.0.Alpha1
>
> Attachments: bundle-classpath.txt
>
>
> Indentation of values works for the Import-Package header, but if used with the Bundle-Classpath header it doesn't work and classes are not found. See the attachment for details.
> This can lead to subtle, hard-to-analyze failures in the deployment. The manifest is often configured in the Maven POM, where indentation is common.
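Per the JAR manifest format, a continuation line starts with a single space and is joined to the previous line before the header value is split; a tolerant reader would additionally trim whitespace around each Bundle-ClassPath entry. A minimal sketch (hypothetical helpers, not the AS7 deployment code; the canonical header name is Bundle-ClassPath):

```python
def parse_manifest(text):
    """Parse MANIFEST.MF text into a dict, joining continuation lines
    (lines starting with a space) per the JAR manifest format."""
    headers = {}
    current = None
    for line in text.splitlines():
        if line.startswith(" ") and current:
            headers[current] += line[1:]
        elif ":" in line:
            current, value = line.split(":", 1)
            headers[current] = value.strip()
    return headers

def bundle_classpath(headers):
    # Tolerate stray whitespace around entries so indented values
    # (common in Maven-generated manifests) do not break lookup.
    raw = headers.get("Bundle-ClassPath", ".")
    return [entry.strip() for entry in raw.split(",") if entry.strip()]
```

With this, an indented continuation of Bundle-ClassPath yields the same entries as a single-line value.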
--
[JBoss JIRA] (JGRP-1540) TP: simplified message bundler
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1540?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-1540:
--------------------------------
This could enable us to use message bundling for *all* messages, even those tagged with OOB and possibly even DONT_BUNDLE msgs. Since a message marked as DONT_BUNDLE never has to wait for the max bundling timeout to kick in, but only for a batch to fill up (which should be quick unless the batch size is big!) or for the queue to drain, even sync RPCs might be fast. Need to bench this... e.g. how quickly do 60K fill up when a batch of 1K messages is sent (a few microsecs?)
> TP: simplified message bundler
> ------------------------------
>
> Key: JGRP-1540
> URL: https://issues.jboss.org/browse/JGRP-1540
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.3
>
>
> Instead of maintaining a hashmap (like for the current bundlers), a simple blocking bounded queue of messages would be used. Whenever we've reached M bytes or N milliseconds have elapsed, a consumer thread processes the queue in the following manner:
> * First set the queue to a new queue (volatile assignment), reuse a number of queues
> * Iterate through all messages in the current queue, for each message:
> ** If the destination is the same as the current destination, write the message to the stream for the current destination
> ** Else set the current destination to msg.getDest() and create an output stream (similar to writing a message list)
> *** Stream the current message to the output stream
> *** If there was a previous destination, close the associated output stream and send the message list
> Example:
> * We have messages with the following destinations: A, null, null, B, B, null, A, null, null, null
> * First we send a message list consisting of 1 message to A
> * Next we send a message list consisting of 2 messages to the cluster (dest==null)
> * Then we send a batch of 2 messages to B, 1 to the cluster, 1 to A and 3 to the cluster
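The batching in the example can be sketched as a pure function that turns the drained queue into consecutive-destination message lists (a sketch of the described behavior, not the actual TP bundler code):

```python
def make_batches(msgs):
    """Group a drained queue of (dest, payload) pairs into message lists,
    starting a new batch whenever the destination changes."""
    batches = []
    for dest, payload in msgs:
        if batches and batches[-1][0] == dest:
            # Same destination as the current batch: append to it.
            batches[-1][1].append(payload)
        else:
            # Destination changed: close the previous batch, start a new one.
            batches.append((dest, [payload]))
    return batches
```

For the destination sequence A, null, null, B, B, null, A, null, null, null this produces exactly the batches listed above: 1 to A, 2 to the cluster, 2 to B, 1 to the cluster, 1 to A, 3 to the cluster.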
--
[JBoss JIRA] (JGRP-1532) Don't receive heartbeat in Nic Teaming configuration after NIC switch
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1532?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-1532:
--------------------------------
Any progress? Can I close this issue?
> Don't receive heartbeat in Nic Teaming configuration after NIC switch
> ---------------------------------------------------------------------
>
> Key: JGRP-1532
> URL: https://issues.jboss.org/browse/JGRP-1532
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.12.2
> Environment: Windows Server Standard 2008 SP2.
> two network cards Broadcom BCM5709S NetXtreme II (DualPort) with NIC-Teaming Software (
> BASC3 Version 12.2.9.0. (Broadcom Advanced Control Suite 3)
> Reporter: PASCAL BROUWET
> Assignee: Bela Ban
> Fix For: 3.3
>
>
> We have no problems in a single-card configuration without NIC teaming.
> But when all machines have dual cards with NIC teaming activated, we get "haven't received a heartbeat" failures.
> With the Wireshark analyser, we observed that when the heartbeat multicast packets stay on the same card we have no problem, but when the heartbeat multicast packets switch to the second card, failure detections appear in the log file.
> For example: the first heartbeat failure in the logs appears at 03:41:25 and continues until 05:03:20
> 2012-10-23 03:41:25.234 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 11061 ms, adding it to suspect list
> 2012-10-23 03:41:25.234 [FINE] - FD_ALL: suspecting [ctc809091084-27510(5ae571864ef0), ctc804291084-11401(de9a6a421087)]
> 2012-10-23 03:41:28.245 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 14072 ms, adding it to suspect list
> 2012-10-23 03:41:28.245 [FINE] - FD_ALL: haven't received a heartbeat from ctc804291084-11401(de9a6a421087) for 12044 ms, adding it to suspect list
> 2012-10-23 03:41:28.245 [FINE] - FD_ALL: suspecting [ctc809091084-27510(5ae571864ef0), ctc804291084-11401(de9a6a421087)]
> 2012-10-23 03:41:31.255 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 17082 ms, adding it to suspect list
> 2012-10-23 03:41:31.255 [FINE] - FD_ALL: haven't received a heartbeat from ctc804291084-11401(de9a6a421087) for 15054 ms, adding it to suspect list
> 2012-10-23 03:41:31.255 [FINE] - FD_ALL: suspecting [ctc809091084-27510(5ae571864ef0), ctc804291084-11401(de9a6a421087)]
> 2012-10-23 03:41:34.266 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 20093 ms, adding it to suspect list
> 2012-10-23 03:41:34.266 [FINE] - FD_ALL: haven't received a heartbeat from ctc804291084-11401(de9a6a421087) for 18065 ms, adding it to suspect list
> 2012-10-23 03:41:34.266 [FINE] - FD_ALL: suspecting [ctc809091084-27510(5ae571864ef0), ctc804291084-11401(de9a6a421087)]
> 2012-10-23 03:41:37.277 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 23104 ms, adding it to suspect list
> 2012-10-23 03:41:37.277 [FINE] - FD_ALL: haven't received a heartbeat from ctc804291084-11401(de9a6a421087) for 21076 ms, adding it to suspect list
> 2012-10-23 03:41:37.277 [FINE] - FD_ALL: suspecting [ctc809091084-27510(5ae571864ef0), ctc804291084-11401(de9a6a421087)]
> 2012-10-23 03:41:40.288 [FINE] - FD_ALL: haven't received a heartbeat from ctc809091084-27510(5ae571864ef0) for 26115 ms, adding it to suspect list
> 2012-10-23 03:41:40.288 [FINE] - FD_ALL: haven't received a heartbeat from ctc804291084-11401(de9a6a421087) for 24087 ms, adding it to suspect list
> ...
> the logs of Card 1 during the period :
> ----------------------------------------------------
> 2012-10-23 03:41:15.563 MULTICAST id=321 src=/10.120.180.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=cc74a22f-6e18-1b7a-5521-3abebdd47ab6(3ba17876e725) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:15.996 MULTICAST id=7481 src=/10.120.120.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=17da3e81-158b-4440-50c7-412aebce41e2(de9a6a421087) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 04:25:49.221 MULTICAST id=2868 src=/10.120.180.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=cc74a22f-6e18-1b7a-5521-3abebdd47ab6(3ba17876e725) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> The cards were in standby between 03:41:15 and 04:25:49
> The logs of Card 0 during the period :
> -------------------------------------------------
> ----------------------------------------------------
> 2012-10-23 03:41:25.029 MULTICAST id=74b1 src=/10.120.120.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=17da3e81-158b-4440-50c7-412aebce41e2(de9a6a421087) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:25.961 MULTICAST id=5adb src=/10.120.220.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=f1e9fdac-6d36-d321-6f9d-ec0cbf771608(5ae571864ef0) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:26.874 MULTICAST id=5ae0 src=/10.120.220.64:45588 dest=/228.8.8.8:45588 (91 bytes)
> Msg1 src=f1e9fdac-6d36-d321-6f9d-ec0cbf771608(5ae571864ef0) dest=ALL
> flags=[OOB]
> headers=[
> PingHeader:[PING: type=GET_MBRS_REQ, cluster=REPL, view_id=[f1e9fdac-6d36-d321-6f9d-ec0cbf771608(5ae571864ef0)|2]]
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:27.607 MULTICAST id=362 src=/10.120.180.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=cc74a22f-6e18-1b7a-5521-3abebdd47ab6(3ba17876e725) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:28.040 MULTICAST id=74bf src=/10.120.120.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=17da3e81-158b-4440-50c7-412aebce41e2(de9a6a421087) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:28.962 MULTICAST id=5ae8 src=/10.120.220.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=f1e9fdac-6d36-d321-6f9d-ec0cbf771608(5ae571864ef0) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> ----------------------------------------------------
> 2012-10-23 03:41:30.617 MULTICAST id=36f src=/10.120.180.64:45588 dest=/228.8.8.8:45588 (47 bytes)
> Msg1 src=cc74a22f-6e18-1b7a-5521-3abebdd47ab6(3ba17876e725) dest=ALL
> flags=[OOB]
> headers=[
> HeartbeatHeader:heartbeat
> ]
> etc ... heartbeats received every 3 seconds until 06:00
> The two cards have been configured with the same IP address (10.120.180.64), as has the virtual NIC (10.120.180.64).
> We tested with Mcast.exe on this configuration without problems.
> Everything behaves as if JGroups (or Java) were "plugged" into card #1 only.
> JGroups was configured with these parameters:
> <?xml version="1.0" encoding="UTF-8"?>
> <config xmlns="urn:org:jgroups">
> <UDP bind_addr="10.120.180.64" bind_interface="eth10" bind_port="7800" diagnostics_addr="224.0.75.75" discard_incompatible_packets="true" enable_bundling="true" enable_diagnostics="true" ip_ttl="10" loopback="true" max_bundle_size="64K" max_bundle_timeout="30" mcast_group_addr="228.8.8.8" mcast_port="45588" mcast_recv_buf_size="25M" mcast_send_buf_size="640K" oob_thread_pool.enabled="true" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.max_threads="8" oob_thread_pool.min_threads="1" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="Run" singleton_name="UDP" thread_naming_pattern="pl" thread_pool.enabled="true" thread_pool.keep_alive_time="5000" thread_pool.max_threads="8" thread_pool.min_threads="2" thread_pool.queue_enabled="false" thread_pool.queue_max_size="100" thread_pool.rejection_policy="Run" tos="8" ucast_recv_buf_size="20M" ucast_send_buf_size="640K"/>
> <PING num_initial_members="3" timeout="2000"/>
> <MERGE2 max_interval="30000" min_interval="10000"/>
> <FD_SOCK bind_addr="10.120.180.64" bind_interface="eth10"/>
> <FD_ALL/>
> <VERIFY_SUSPECT bind_addr="10.120.180.64" bind_interface="eth10" timeout="1500"/>
> <pbcast.NAKACK discard_delivered_msgs="false" exponential_backoff="150" gc_lag="0" retransmit_timeout="300,600,1200" use_mcast_xmit="true" use_stats_for_retransmission="false"/>
> <UNICAST timeout="300,600,1200"/>
> <pbcast.STABLE desired_avg_gossip="50000" max_bytes="4M" stability_delay="1000"/>
> <pbcast.GMS join_timeout="5000" print_local_addr="true" view_bundling="true"/>
> <UFC max_credits="2M" min_threshold="0.4"/>
> <MFC max_credits="2M" min_threshold="0.4"/>
> <FRAG2 frag_size="60K"/>
> <pbcast.STREAMING_STATE_TRANSFER bind_addr="10.120.180.64" bind_interface="eth10" bind_port="7810" socket_buffer_size="16384" use_default_transport="false"/>
> </config>
> Have you ever heard of NIC teaming problems?
> Thanks.
> Pascal BROUWET
--