[JBoss JIRA] (WFLY-11450) Cannot get delegate of JMSContext during a beforeCompletion synch
by Tom Jenkinson (Jira)
[ https://issues.jboss.org/browse/WFLY-11450?page=com.atlassian.jira.plugin... ]
Tom Jenkinson updated WFLY-11450:
---------------------------------
Forum Reference: https://developer.jboss.org/message/985649?et=watches.email.thread#985649
> Cannot get delegate of JMSContext during a beforeCompletion synch
> -----------------------------------------------------------------
>
> Key: WFLY-11450
> URL: https://issues.jboss.org/browse/WFLY-11450
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Environment: WildFly 14.0.1.Final, narayana 5.9.0.Final
> WildFly 8.0.0.CR1
> Reporter: Jan-Willem Gmelig Meyling
> Assignee: Matej Novotny
> Priority: Minor
> Attachments: ARJUNA016082.log
>
>
> The {{TransactionContext}} registers a {{TransactionScopeCleanup}} {{Synchronization}} with the active transaction. This prevents a {{TransactionContext}} from being opened by another {{beforeCompletion}} {{Synchronization}} if the transacted {{JMSContext}} was not interacted with earlier in the transaction, because the cleanup synchronization can no longer be registered at that point.
> *Use case*
> I have a JPA post-insert lifecycle listener that needs to publish an event, and I'd like this to happen within the same XA transaction. The lifecycle listener is invoked during the pre-commit flush, which is itself triggered by another {{beforeCompletion}} transaction {{Synchronization}}.
> *Workaround*
> Flush the {{EntityManager}} explicitly instead of leaving the flush to the {{beforeCompletion}} synchronization.
> The issue also seems to be described in the following Stack Overflow question (which refers to WildFly 8.0.0.CR1): https://stackoverflow.com/questions/21523534/jboss-wildfly-arjuna016082-s...
> Back in the day, [~smarlow] suggested that an *interposed* synchronization should be registered with the {{TransactionSynchronizationRegistry}} instead.
> See the attached log for a full stack trace.
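A minimal sketch of the workaround described above, assuming a typical stateless bean; the class and method names are placeholders, not the reporter's code. Flushing explicitly means the post-insert listener (and the JMS publish it performs) runs while the transaction is still active, where the {{TransactionScoped}} {{JMSContext}} can still register its cleanup synchronization.
{code}
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Illustrative bean; OrderService/placeOrder are hypothetical names.
@Stateless
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    @Inject
    private JMSContext jmsContext; // transacted, TransactionScoped delegate

    public void placeOrder(Object order) {
        em.persist(order);
        // Explicit flush: the post-insert lifecycle listener fires here,
        // while the transaction is active, instead of from a
        // beforeCompletion Synchronization where the JMSContext delegate
        // can no longer be obtained.
        em.flush();
    }
}
{code}
The alternative [~smarlow] mentioned, registering the cleanup via {{TransactionSynchronizationRegistry#registerInterposedSynchronization}}, would address the ordering at the container level rather than in application code.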
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (WFLY-11450) Cannot get delegate of JMSContext during a beforeCompletion synch
by Tom Jenkinson (Jira)
[ https://issues.jboss.org/browse/WFLY-11450?page=com.atlassian.jira.plugin... ]
Tom Jenkinson moved JBTM-3083 to WFLY-11450:
--------------------------------------------
Project: WildFly (was: JBoss Transaction Manager)
Key: WFLY-11450 (was: JBTM-3083)
> Cannot get delegate of JMSContext during a beforeCompletion synch
> -----------------------------------------------------------------
>
> Key: WFLY-11450
> URL: https://issues.jboss.org/browse/WFLY-11450
> Project: WildFly
> Issue Type: Bug
> Environment: WildFly 14.0.1.Final, narayana 5.9.0.Final
> WildFly 8.0.0.CR1
> Reporter: Jan-Willem Gmelig Meyling
> Priority: Minor
> Attachments: ARJUNA016082.log
>
>
> The {{TransactionContext}} registers a {{TransactionScopeCleanup}} {{Synchronization}} with the active transaction. This prevents a {{TransactionContext}} from being opened by another {{beforeCompletion}} {{Synchronization}} if the transacted {{JMSContext}} was not interacted with earlier in the transaction, because the cleanup synchronization can no longer be registered at that point.
> *Use case*
> I have a JPA post-insert lifecycle listener that needs to publish an event, and I'd like this to happen within the same XA transaction. The lifecycle listener is invoked during the pre-commit flush, which is itself triggered by another {{beforeCompletion}} transaction {{Synchronization}}.
> *Workaround*
> Flush the {{EntityManager}} explicitly instead of leaving the flush to the {{beforeCompletion}} synchronization.
> The issue also seems to be described in the following Stack Overflow question (which refers to WildFly 8.0.0.CR1): https://stackoverflow.com/questions/21523534/jboss-wildfly-arjuna016082-s...
> Back in the day, [~smarlow] suggested that an *interposed* synchronization should be registered with the {{TransactionSynchronizationRegistry}} instead.
> See the attached log for a full stack trace.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (WFLY-11374) Master Artemis in Wildfly 10.1.0.Final is not announcing backup when restarted
by Miroslav Novak (Jira)
[ https://issues.jboss.org/browse/WFLY-11374?page=com.atlassian.jira.plugin... ]
Miroslav Novak commented on WFLY-11374:
---------------------------------------
[~ev.srinivas] I'm not a Bash guru and the start/stop functions are beyond my expertise, so I cannot say where the problem is. I would suggest simplifying the start/stop functions to the most basic level: have just a simple start/stop without any if/then/else noise around it, then add the other parts back one by one. This will lead you to the problematic part. I'm also not sure whether there is a debugger for Bash scripts; if there is, that could work as well and point to the problematic part.
> Master Artemis in Wildfly 10.1.0.Final is not announcing backup when restarted
> ------------------------------------------------------------------------------
>
> Key: WFLY-11374
> URL: https://issues.jboss.org/browse/WFLY-11374
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.1.0.Final
> Reporter: Srinivas ev
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: active standalone full ha.xml, master and slave log samples on startup.txt, master restart.txt, master shutdown.txt, master-server-linux.log, master-server-windows.log, master.xml, rotateserver_active.log, rotateserver_active.log, rotateserver_backup.log, rotateserver_slave.log, slave standalone full ha.xml, slave-server-linux.log, slave-server-windows.log, slave.xml
>
>
> I have two WildFly servers acting as Artemis master and slave. I expect failback and replication, and the related configuration is in place for this to work.
> This works as expected when the setup runs on Windows, but fails on a Linux RHEL 7.3 machine.
> master in standalone-full-ha.xml - see master.xml
> slave in standalone-full-ha.xml - see slave.xml
> In the startup script, I pass in all the values for the placeholders of my server host IPs accordingly.
> Test scenario:
> 1. Bring the master up.
> 2. Bring the slave up.
> 3. The slave announces the backup (AMQ221031: backup announced).
> 4. Take the master down.
> 5. Replication succeeds.
> 6. The slave acts as master/live.
> 7. Bring the master back up.
> Issue: the master is unable to announce the backup and instead starts normally as a standalone WildFly.
> The backup announcement and failover both work fine on Windows.
> Please let me know if anything specific is required along with these details.
> Artemis jar version - artemis-*****-1.1.0.wildfly-017.jar
> in path - /opt/aor/${my project}/wildfly/modules/system/layers/base/org/apache/activemq/artemis/main
> A few log entries I found that may be relevant, but which I am not clear about:
> 1.2018-11-21 14:28:07,238 TRACE [org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionBridge] (Thread-18 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@38e819b6-2112524495)) Setting up bridge between TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=12080&host=135-250-139-30 and ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=12080&host=135-250-139-41], discoveryGroupConfiguration=null]: java.lang.Exception: trace
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionBridge.<init>(ClusterConnectionBridge.java:129)
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl.createNewRecord(ClusterConnectionImpl.java:778)
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl.nodeUP(ClusterConnectionImpl.java:698)
> at org.apache.activemq.artemis.core.client.impl.Topology$1.run(Topology.java:264)
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:103)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2.
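For orientation, a minimal sketch of the {{ha-policy}} elements that typically govern replication failback in WildFly's {{messaging-activemq}} subsystem; the attribute values are illustrative assumptions, and the reporter's actual settings are in the attached master.xml and slave.xml.
{code}
<!-- live (master) server: check whether another server already holds
     the live role before starting as live after a restart -->
<ha-policy>
    <replication-master cluster-name="my-cluster" check-for-live-server="true"/>
</ha-policy>

<!-- backup (slave) server: hand the live role back when the original
     live server returns -->
<ha-policy>
    <replication-slave cluster-name="my-cluster" allow-failback="true"/>
</ha-policy>
{code}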
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (DROOLS-3388) Scenario Simulation UX
by Klara Kufova (Jira)
[ https://issues.jboss.org/browse/DROOLS-3388?page=com.atlassian.jira.plugi... ]
Klara Kufova moved BAPL-946 to DROOLS-3388:
-------------------------------------------
Project: Drools (was: Business Automation Planning)
Key: DROOLS-3388 (was: BAPL-946)
Workflow: GIT Pull Request workflow (was: CDW with docs v1)
Component/s: Scenario Simulation and Testing
(was: Rules Engine)
Customer Name: (was: Citigroup)
QE Status: NEW
> Scenario Simulation UX
> ----------------------
>
> Key: DROOLS-3388
> URL: https://issues.jboss.org/browse/DROOLS-3388
> Project: Drools
> Issue Type: Epic
> Components: Scenario Simulation and Testing
> Reporter: Liz Clayton
> Assignee: Liz Clayton
> Priority: Major
> Labels: AppFormer, ScenarioSimulation, UX, UXTeam
>
> User experience to support functionality that allows scenarios (processes and rules, along with known/knowable data sets) to be executed in a simulation mode that provides information for post-simulation analysis.
> Scenario simulation creation involves an interplay between several tools (text, decision tables, database, UI), which would ideally be unified through a BRMS user experience solution. The solution should be able to handle:
> * Test scenarios via Cucumber or similar: this key feature is centered on ease of use for business users (citizen developers) writing test scenarios. Users do not like dealing with text files, so we would like to introduce a UI that makes it easier to create scenarios for testing.
> * Build/merge.
> * Editing decision tables: decision tables are being used to drive the Questionnaire. Note: new decision tables might not be far off.
> Out of scope:
> Content management interplay: business users do not interact with this directly.
> Each requirement above is described at:
> https://docs.google.com/a/redhat.com/document/d/1cB1Wl2RIDqt66uKqSKh-2uIq...
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (DROOLS-3388) Scenario Simulation UX
by Klara Kufova (Jira)
[ https://issues.jboss.org/browse/DROOLS-3388?page=com.atlassian.jira.plugi... ]
Klara Kufova closed DROOLS-3388.
--------------------------------
Closing.
> Scenario Simulation UX
> ----------------------
>
> Key: DROOLS-3388
> URL: https://issues.jboss.org/browse/DROOLS-3388
> Project: Drools
> Issue Type: Epic
> Components: Scenario Simulation and Testing
> Reporter: Liz Clayton
> Assignee: Liz Clayton
> Priority: Major
> Labels: AppFormer, ScenarioSimulation, UX, UXTeam
>
> User experience to support functionality that allows scenarios (processes and rules, along with known/knowable data sets) to be executed in a simulation mode that provides information for post-simulation analysis.
> Scenario simulation creation involves an interplay between several tools (text, decision tables, database, UI), which would ideally be unified through a BRMS user experience solution. The solution should be able to handle:
> * Test scenarios via Cucumber or similar: this key feature is centered on ease of use for business users (citizen developers) writing test scenarios. Users do not like dealing with text files, so we would like to introduce a UI that makes it easier to create scenarios for testing.
> * Build/merge.
> * Editing decision tables: decision tables are being used to drive the Questionnaire. Note: new decision tables might not be far off.
> Out of scope:
> Content management interplay: business users do not interact with this directly.
> Each requirement above is described at:
> https://docs.google.com/a/redhat.com/document/d/1cB1Wl2RIDqt66uKqSKh-2uIq...
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (WFLY-11374) Master Artemis in Wildfly 10.1.0.Final is not announcing backup when restarted
by Srinivas ev (Jira)
[ https://issues.jboss.org/browse/WFLY-11374?page=com.atlassian.jira.plugin... ]
Srinivas ev commented on WFLY-11374:
------------------------------------
Hi [~mnovak], let me know in case you find some time to review my magic script :)
> Master Artemis in Wildfly 10.1.0.Final is not announcing backup when restarted
> ------------------------------------------------------------------------------
>
> Key: WFLY-11374
> URL: https://issues.jboss.org/browse/WFLY-11374
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.1.0.Final
> Reporter: Srinivas ev
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: active standalone full ha.xml, master and slave log samples on startup.txt, master restart.txt, master shutdown.txt, master-server-linux.log, master-server-windows.log, master.xml, rotateserver_active.log, rotateserver_active.log, rotateserver_backup.log, rotateserver_slave.log, slave standalone full ha.xml, slave-server-linux.log, slave-server-windows.log, slave.xml
>
>
> I have two WildFly servers acting as Artemis master and slave. I expect failback and replication, and the related configuration is in place for this to work.
> This works as expected when the setup runs on Windows, but fails on a Linux RHEL 7.3 machine.
> master in standalone-full-ha.xml - see master.xml
> slave in standalone-full-ha.xml - see slave.xml
> In the startup script, I pass in all the values for the placeholders of my server host IPs accordingly.
> Test scenario:
> 1. Bring the master up.
> 2. Bring the slave up.
> 3. The slave announces the backup (AMQ221031: backup announced).
> 4. Take the master down.
> 5. Replication succeeds.
> 6. The slave acts as master/live.
> 7. Bring the master back up.
> Issue: the master is unable to announce the backup and instead starts normally as a standalone WildFly.
> The backup announcement and failover both work fine on Windows.
> Please let me know if anything specific is required along with these details.
> Artemis jar version - artemis-*****-1.1.0.wildfly-017.jar
> in path - /opt/aor/${my project}/wildfly/modules/system/layers/base/org/apache/activemq/artemis/main
> A few log entries I found that may be relevant, but which I am not clear about:
> 1.2018-11-21 14:28:07,238 TRACE [org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionBridge] (Thread-18 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@38e819b6-2112524495)) Setting up bridge between TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=12080&host=135-250-139-30 and ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=12080&host=135-250-139-41], discoveryGroupConfiguration=null]: java.lang.Exception: trace
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionBridge.<init>(ClusterConnectionBridge.java:129)
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl.createNewRecord(ClusterConnectionImpl.java:778)
> at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionImpl.nodeUP(ClusterConnectionImpl.java:698)
> at org.apache.activemq.artemis.core.client.impl.Topology$1.run(Topology.java:264)
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:103)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (JGRP-2317) Discovery: send multiple discovery requests
by Bela Ban (Jira)
Bela Ban created JGRP-2317:
------------------------------
Summary: Discovery: send multiple discovery requests
Key: JGRP-2317
URL: https://issues.jboss.org/browse/JGRP-2317
Project: JGroups
Issue Type: Enhancement
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 4.0.16
Define a {{num_requests}} (default: 1) attribute (we used to have this!) that governs how many times discovery is executed until the timeout is reached or a coordinator is found.
This may be useful when multiple members are started at the same time and/or the discovery does not return the correct members the first time around. Especially needed for DNS_PING.
The requests should be spaced out evenly across the range [0 .. timeout - (timeout/num_requests)].
Example: timeout=12000, num_requests=3:
* Range: [0 .. 8000] (12000 - 4000)
* First request sent at time 0
* Second: time 4000
* Third: time 8000
Interval: {{(timeout - timeout/num_reqs) / (num_reqs - 1)}}
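A minimal sketch of the proposed spacing; {{discoverySendTimes}} is a hypothetical helper illustrating the arithmetic, not JGroups code.
{code}
// num_reqs discovery rounds spread evenly across
// [0 .. timeout - timeout/num_reqs].
static long[] discoverySendTimes(long timeout, int numReqs) {
    long[] times = new long[numReqs];          // times[0] == 0: first request immediately
    if (numReqs <= 1)
        return times;
    long range = timeout - timeout / numReqs;  // e.g. 12000 - 4000 = 8000
    long interval = range / (numReqs - 1);     // e.g. 8000 / 2 = 4000
    for (int i = 1; i < numReqs; i++)
        times[i] = i * interval;               // 0, 4000, 8000
    return times;
}
{code}
For timeout=12000 and num_reqs=3 this yields 0, 4000 and 8000, matching the example above.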
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (JGRP-2300) DNS_PING in AWS ECS cannot cluster with dynamic port mappings
by Sebastian Łaskawiec (Jira)
[ https://issues.jboss.org/browse/JGRP-2300?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on JGRP-2300:
-------------------------------------------
[~ethompson] The discussion is being continued on JGRP-2316.
> DNS_PING in AWS ECS cannot cluster with dynamic port mappings
> -------------------------------------------------------------
>
> Key: JGRP-2300
> URL: https://issues.jboss.org/browse/JGRP-2300
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.16
> Environment: AWS ECS Cluster with DNS based service discovery using jboss/keycloak:latest containers
> Reporter: Eric Thompson
> Assignee: Sebastian Łaskawiec
> Priority: Critical
> Fix For: 4.0.16
>
>
> When running an ECS cluster with jboss/keycloak:latest containers, dynamic port mapping of all ports is required to allow more than one container to run per EC2 instance. SRV-based service discovery records allow each node to find the rest of the nodes, but when a discovery request is sent, the receiving node sees the sender as IP:7600 instead of the dynamic port. It then treats this as a "new" node and tries to send discovery requests to it. Somehow it is also picking up node IDs and trying to send requests to those!
> See the following log: there are only 4 actual nodes, and they each have a different 5-digit port number:
> {code}
> ### Service discovery with dynamic port mapping
> 2018-10-10 20:17:44,178 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) Performing discovery of the following hosts [10.42.3.44:7600, 10.42.3.56:32949, 10.42.3.56:32951, 10.42.3.44:32954, c5b479b7b6d5, 10.42.3.44:32952, 10.42.3.56:7600, 17081c624290, 63976b7fae70, 557cbd7891a2]
> 2018-10-10 20:17:44,178 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.44:7600
> 2018-10-10 20:17:44,179 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.56:32949
> 2018-10-10 20:17:44,179 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.56:32951
> 2018-10-10 20:17:44,180 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.44:32954
> 2018-10-10 20:17:44,181 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to c5b479b7b6d5
> 2018-10-10 20:17:44,181 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.44:32952
> 2018-10-10 20:17:44,181 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 10.42.3.56:7600
> 2018-10-10 20:17:44,182 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 17081c624290, IP: 10.42.3.56:7600
> 2018-10-10 20:17:44,182 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 17081c624290
> 2018-10-10 20:17:44,182 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-238,ejb,17081c624290) Received discovery from: 17081c624290, IP: 10.42.3.56:7600
> 2018-10-10 20:17:44,182 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 63976b7fae70
> 2018-10-10 20:17:44,183 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-240,ejb,17081c624290) 17081c624290: sending discovery request to 557cbd7891a2
> 2018-10-10 20:17:44,187 WARN [org.jgroups.protocols.TCP] (TQ-Bundler-7,ejb,17081c624290) JGRP000032: 17081c624290: no physical address for c5b479b7b6d5, dropping message
> {code}
> This code seems to be part of the problem in this case: https://github.com/belaban/JGroups/blob/87d15ec848aa3d482ae792ef152f7e36e...
> That code uses the incoming address and adds it to the discovered hosts, but those addresses are ALWAYS inaccurate in this case.
> This is what the recipient of the service discovery request sees (i.e., all the ports are the default 7600):
> {code}
> 2018-10-10 20:35:15,229 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:15,231 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:15,232 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-397,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:15,233 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:17,234 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:17,236 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:17,238 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-397,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:17,238 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:19,239 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:19,240 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:19,242 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:19,243 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:21,246 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:21,247 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:21,253 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:21,253 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:23,247 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:23,249 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:23,251 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:23,251 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-350,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:25,252 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:25,253 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:25,255 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> 2018-10-10 20:35:25,256 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-237,ejb,17081c624290) Received discovery from: 63976b7fae70, IP: 10.42.3.44:7600
> {code}
> In this state the cluster never seems to work properly and the Keycloak interface breaks in many frustrating ways.
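One direction that may be worth checking on JGRP-2316 (an assumption here, not something verified against this setup): the JGroups transport's {{external_addr}}/{{external_port}} attributes advertise a NAT-mapped address instead of the bind address. With per-container dynamic host ports, the mapped values would have to be injected at container start, e.g. as system properties:
{code}
<!-- Sketch only: advertise the externally mapped address/port instead
     of the bind address. jgroups.external_addr/jgroups.external_port
     are placeholder system properties set per container. -->
<TCP bind_port="7600"
     external_addr="${jgroups.external_addr}"
     external_port="${jgroups.external_port:7600}"/>
{code}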
--
This message was sent by Atlassian Jira
(v7.12.1#712002)