[JBoss JIRA] (JBJCA-1353) Parallelize flush of pools
by Flavia Rainone (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1353?page=com.atlassian.jira.plugin... ]
Flavia Rainone resolved JBJCA-1353.
-----------------------------------
Resolution: Done
> Parallelize flush of pools
> --------------------------
>
> Key: JBJCA-1353
> URL: https://issues.jboss.org/browse/JBJCA-1353
> Project: IronJacamar
> Issue Type: Enhancement
> Reporter: Jeff Mesnil
> Assignee: Flavia Rainone
>
> This is related to WFLY-8492 where WildFly server does not shutdown in a timely fashion. There are several tasks to address WFLY-8492 and one of them is related to IronJacamar.
> In the thread dumps taken before we kill -9 the server because it did not shut down in the expected time, we have a thread:
> {code}
> "ServerService Thread Pool -- 82" #322 prio=5 os_prio=31 tid=0x00007fcad286b800 nid=0x13007 waiting on condition [0x000070000c462000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x0000000791e1cf20> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:375)
> - locked <0x0000000791e1cf60> (a java.lang.Object)
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:315)
> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sessionStop(ActiveMQSessionContext.java:343)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.stop(ClientSessionImpl.java:662)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.stop(ClientSessionImpl.java:651)
> at org.apache.activemq.artemis.jms.client.ActiveMQSession.stop(ActiveMQSession.java:1023)
> at org.apache.activemq.artemis.jms.client.ActiveMQConnection.stop(ActiveMQConnection.java:321)
> - locked <0x0000000791e1c360> (a org.apache.activemq.artemis.jms.client.ActiveMQXAConnection)
> at org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection.destroyHandles(ActiveMQRAManagedConnection.java:229)
> at org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection.destroy(ActiveMQRAManagedConnection.java:268)
> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.doDestroy(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:1369)
> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.flush(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:882)
> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.shutdown(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:1065)
> at org.jboss.jca.core.connectionmanager.pool.AbstractPool.shutdown(AbstractPool.java:930)
> - locked <0x0000000790f758d0> (a org.jboss.jca.core.connectionmanager.pool.strategy.PoolByCri)
> at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.shutdown(AbstractConnectionManager.java:286)
> - locked <0x0000000790f75868> (a org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl)
> at org.jboss.as.connector.services.resourceadapters.deployment.AbstractResourceAdapterDeploymentService.unregisterAll(AbstractResourceAdapterDeploymentService.java:199)
> at org.jboss.as.connector.services.resourceadapters.deployment.AbstractResourceAdapterDeploymentService$3.run(AbstractResourceAdapterDeploymentService.java:353)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> {code}
> When the server is asked to shut down, IronJacamar flushes its pools. In turn, this calls the Artemis RA managed connection's destroy(), which stops any underlying JMS connection.
> The test uses a remote JMS server that has been stopped, so the underlying JMS connection blocks for some time.
> I had a look at SemaphoreConcurrentLinkedDequeManagedConnectionPool#flush[1] and it destroys the pool's connections sequentially.
> This implies that the time to flush the pool can potentially be up to n * t, where n is the number of connections managed by the pool and t is the timeout each connection uses when it is stopped.
> In the test run, n = 20 and t = 30s, so we have to wait 10 minutes for the pool to be completely flushed.
> Would it be possible to flush the pool concurrently, so that the time to flush it would be close to t?
> Something like:
> {code}
> destroy.parallelStream().forEach(clw -> doDestroy(clw));
> {code}
> With such a code, the time to flush the pool would be close to t.
> In our test run, that would reduce the time to wait from 10 minutes to circa 30s.
> Would there be any issue with parallelizing the flush of the pool in such way?
> [1] https://github.com/ironjacamar/ironjacamar/blob/1.4/core/src/main/java/or...
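A concurrent flush along the lines the reporter suggests can also be sketched with a plain ExecutorService, which (unlike parallelStream, whose parallelism is capped by the shared ForkJoinPool) lets the pool bound its own destroyer threads and wait with a timeout. The ConnectionListenerWrapper type and destroy() method below are hypothetical stand-ins for IronJacamar's internals, not its actual API:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelFlushSketch {

    // Hypothetical stand-in for the pool's connection wrapper ("clw").
    public interface ConnectionListenerWrapper {
        void destroy();
    }

    // Destroys all wrappers concurrently. With one thread per wrapper, the
    // total flush time approaches the slowest single destroy (t) rather
    // than the sum over all connections (n * t).
    public static void flush(List<ConnectionListenerWrapper> toDestroy,
                             long timeout, TimeUnit unit) throws InterruptedException {
        ExecutorService executor =
                Executors.newFixedThreadPool(Math.max(1, toDestroy.size()));
        for (ConnectionListenerWrapper clw : toDestroy) {
            executor.execute(clw::destroy);
        }
        executor.shutdown();
        // Bound the wait so shutdown cannot hang indefinitely on a stuck destroy.
        executor.awaitTermination(timeout, unit);
    }
}
```

This only caps the wall-clock flush time; a destroy that is still blocked after the timeout keeps running on its executor thread.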
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 7 months
[JBoss JIRA] (JBJCA-1354) Potential for deadlock on pool's flush
by Flavia Rainone (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1354?page=com.atlassian.jira.plugin... ]
Flavia Rainone resolved JBJCA-1354.
-----------------------------------
Resolution: Done
> Potential for deadlock on pool's flush
> --------------------------------------
>
> Key: JBJCA-1354
> URL: https://issues.jboss.org/browse/JBJCA-1354
> Project: IronJacamar
> Issue Type: Bug
> Reporter: Flavia Rainone
> Assignee: Flavia Rainone
>
> There is a potential for deadlock in all pool flushes. The problem is that flush is synchronized, and inside flush the code can delegate to listeners that are external to IronJacamar.
> If those listeners use their own synchronization, they may acquire their own locks. If another thread holds a listener's lock and at some point invokes an operation that results in a pool flush, the system can enter a deadlock state.
> An example of stack trace:
> {noformat}
> Found one Java-level deadlock:
> =============================
> "JMSCCThreadPoolWorker-18":
> waiting to lock monitor 0x00007f1d2409ff48 (object 0x00000007853ad060, a org.jboss.jca.core.connectionmanager.pool.strategy.OnePool),
> which is held by "JMSCCThreadPoolWorker-16"
> "JMSCCThreadPoolWorker-16":
> waiting to lock monitor 0x00007f1d1c15d138 (object 0x0000000785998598, a com.ibm.mq.connector.outbound.ConnectionEventHandler),
> which is held by "JMSCCThreadPoolWorker-17"
> "JMSCCThreadPoolWorker-17":
> waiting to lock monitor 0x00007f1d2409ff48 (object 0x00000007853ad060, a org.jboss.jca.core.connectionmanager.pool.strategy.OnePool),
> which is held by "JMSCCThreadPoolWorker-16"
> Java stack information for the threads listed above:
> ===================================================
> "JMSCCThreadPoolWorker-18":
> at org.jboss.jca.core.connectionmanager.pool.AbstractPool.flush(AbstractPool.java:322)
> - waiting to lock <0x00000007853ad060> (a org.jboss.jca.core.connectionmanager.pool.strategy.OnePool)
> at org.jboss.jca.core.connectionmanager.listener.AbstractConnectionListener.connectionErrorOccurred(AbstractConnectionListener.java:368)
> at com.ibm.mq.connector.outbound.ConnectionEventHandler.fireEvent(ConnectionEventHandler.java:141)
> - locked <0x0000000788fe1fa0> (a com.ibm.mq.connector.outbound.ConnectionEventHandler)
> at com.ibm.mq.connector.outbound.ManagedConnectionImpl.onException(ManagedConnectionImpl.java:848)
> at com.ibm.msg.client.jms.internal.JmsProviderExceptionListener.run(JmsProviderExceptionListener.java:427)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.runTask(WorkQueueItem.java:214)
> at com.ibm.msg.client.commonservices.workqueue.SimpleWorkQueueItem.runItem(SimpleWorkQueueItem.java:105)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.run(WorkQueueItem.java:229)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueManager.runWorkQueueItem(WorkQueueManager.java:303)
> at com.ibm.msg.client.commonservices.j2se.workqueue.WorkQueueManagerImplementation$ThreadPoolWorker.run(WorkQueueManagerImplementation.java:1241)
> "JMSCCThreadPoolWorker-16":
> at com.ibm.mq.connector.outbound.ConnectionEventHandler.removeListener(ConnectionEventHandler.java:93)
> - waiting to lock <0x0000000785998598> (a com.ibm.mq.connector.outbound.ConnectionEventHandler)
> at com.ibm.mq.connector.outbound.ManagedConnectionImpl.removeConnectionEventListener(ManagedConnectionImpl.java:434)
> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool.doDestroy(SemaphoreArrayListManagedConnectionPool.java:891)
> at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool.flush(SemaphoreArrayListManagedConnectionPool.java:621)
> at org.jboss.jca.core.connectionmanager.pool.AbstractPool.flush(AbstractPool.java:330)
> - locked <0x00000007853ad060> (a org.jboss.jca.core.connectionmanager.pool.strategy.OnePool)
> at org.jboss.jca.core.connectionmanager.listener.AbstractConnectionListener.connectionErrorOccurred(AbstractConnectionListener.java:368)
> at com.ibm.mq.connector.outbound.ConnectionEventHandler.fireEvent(ConnectionEventHandler.java:141)
> - locked <0x0000000788fe20b0> (a com.ibm.mq.connector.outbound.ConnectionEventHandler)
> at com.ibm.mq.connector.outbound.ManagedConnectionImpl.onException(ManagedConnectionImpl.java:848)
> at com.ibm.msg.client.jms.internal.JmsProviderExceptionListener.run(JmsProviderExceptionListener.java:427)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.runTask(WorkQueueItem.java:214)
> at com.ibm.msg.client.commonservices.workqueue.SimpleWorkQueueItem.runItem(SimpleWorkQueueItem.java:105)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.run(WorkQueueItem.java:229)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueManager.runWorkQueueItem(WorkQueueManager.java:303)
> at com.ibm.msg.client.commonservices.j2se.workqueue.WorkQueueManagerImplementation$ThreadPoolWorker.run(WorkQueueManagerImplementation.java:1241)
> "JMSCCThreadPoolWorker-17":
> at org.jboss.jca.core.connectionmanager.pool.AbstractPool.flush(AbstractPool.java:322)
> - waiting to lock <0x00000007853ad060> (a org.jboss.jca.core.connectionmanager.pool.strategy.OnePool)
> at org.jboss.jca.core.connectionmanager.listener.AbstractConnectionListener.connectionErrorOccurred(AbstractConnectionListener.java:368)
> at com.ibm.mq.connector.outbound.ConnectionEventHandler.fireEvent(ConnectionEventHandler.java:141)
> - locked <0x0000000785998598> (a com.ibm.mq.connector.outbound.ConnectionEventHandler)
> at com.ibm.mq.connector.outbound.ManagedConnectionImpl.onException(ManagedConnectionImpl.java:848)
> at com.ibm.msg.client.jms.internal.JmsProviderExceptionListener.run(JmsProviderExceptionListener.java:427)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.runTask(WorkQueueItem.java:214)
> at com.ibm.msg.client.commonservices.workqueue.SimpleWorkQueueItem.runItem(SimpleWorkQueueItem.java:105)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueItem.run(WorkQueueItem.java:229)
> at com.ibm.msg.client.commonservices.workqueue.WorkQueueManager.runWorkQueueItem(WorkQueueManager.java:303)
> at com.ibm.msg.client.commonservices.j2se.workqueue.WorkQueueManagerImplementation$ThreadPoolWorker.run(WorkQueueManagerImplementation.java:1241)
> {noformat}
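The usual fix for this kind of lock-ordering problem is to never call out to external listener code while holding the pool monitor: collect the entries to destroy under the lock, then destroy them after releasing it. A minimal sketch of that pattern, with hypothetical types that are not IronJacamar's real classes:

```java
import java.util.ArrayList;
import java.util.List;

public class FlushOutsideLockSketch {

    // Hypothetical pooled entry whose destroy() may call back into
    // external listener code that takes its own locks.
    public interface Entry {
        void destroy();
    }

    private final Object poolLock = new Object();
    private final List<Entry> entries = new ArrayList<>();

    public void add(Entry e) {
        synchronized (poolLock) {
            entries.add(e);
        }
    }

    // Snapshot under the lock, destroy outside it. Because no pool lock is
    // held during destroy(), a listener thread that holds its own lock and
    // re-enters flush() cannot form the lock cycle seen in the deadlock.
    public void flush() {
        List<Entry> toDestroy;
        synchronized (poolLock) {
            toDestroy = new ArrayList<>(entries);
            entries.clear();
        }
        for (Entry e : toDestroy) {
            e.destroy();
        }
    }

    public int size() {
        synchronized (poolLock) {
            return entries.size();
        }
    }
}
```

The trade-off is that a concurrent add() between the snapshot and the destroys is not flushed by this pass; callers that need a drained pool have to re-check.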
8 years, 7 months
[JBoss JIRA] (WFLY-9364) Closing an EJBClientContext sometimes hangs causing high CPU usage
by David Lloyd (JIRA)
[ https://issues.jboss.org/browse/WFLY-9364?page=com.atlassian.jira.plugin.... ]
David Lloyd commented on WFLY-9364:
-----------------------------------
I think you'll need 3.5.x to get the fix for this.
> Closing an EJBClientContext sometimes hangs causing high CPU usage
> ------------------------------------------------------------------
>
> Key: WFLY-9364
> URL: https://issues.jboss.org/browse/WFLY-9364
> Project: WildFly
> Issue Type: Bug
> Affects Versions: 10.1.0.Final
> Environment: Server:
> - OS:
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> - JDK:
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
> - WildFly:
> WildFly 10.1.0.Final
> Client:
> - OS:
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> - JDK:
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
> Reporter: Marius Tantareanu
> Assignee: Jason Greene
> Attachments: echo-client.zip, output_rhel7.txt, output_win10.txt, simple-ear.ear, td_rhel7.txt, td_win10.txt
>
>
> EJBClientContext.close() sometimes hangs and causes high CPU usage. We have a WildFly client that uses EJB client API to invoke some EJBs remotely.
> Basically the client executes the following actions in a loop:
> - set up an EJBClientContext programmatically
> - create a JNDI context
> - lookup an EJB and invoke some operations on it
> - close the JNDI context and the EJBClientContext
> After the client runs a few hundred iterations (the actual number varies quite a lot from one run to another), it blocks while invoking EJBClientContext.close(). Also, one XNIO thread in the client app constantly consumes one CPU core (a full thread dump from the client app is attached). I was only able to reproduce this when connecting over TLS (port 8443). When using the insecure port (8080), the problem does not reproduce (or it reproduces much less frequently and I didn't run enough iterations to catch it).
> Once the client app enters this state, top shows something like (notice the CPU usage of thread 12512):
> >top -H -p 6463
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 12512 root 20 0 11.527g 222680 13788 R 99.3 0.7 141:10.99 java
> 6466 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:07.05 java
> 6467 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:06.97 java
> 6477 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:04.24 java
> 6463 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 6464 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.46 java
> 6465 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.94 java
> 6468 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:07.02 java
> 6469 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:16.42 java
> 6470 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
> 6471 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
> 6472 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 6473 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.87 java
> 6474 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.70 java
> 6475 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.09 java
> 6476 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12513 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12514 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12515 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12516 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12517 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12518 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12519 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12520 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12523 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12524 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12597 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> The thread that causes the CPU usage is the following:
> "Remoting "config-based-ejb-client-endpoint" I/O-1" #6025 daemon prio=5 os_prio=0 tid=0x00007f2f709d3000 nid=0x30e0 runnable [0x00007f2e7f9be000]
> java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x00000005cc1844e8> (a sun.nio.ch.Util$3)
> - locked <0x00000005cc19f258> (a java.util.Collections$UnmodifiableSet)
> - locked <0x00000005cc184468> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:515)
> The client app main thread is blocked as below:
> "main" #1 prio=5 os_prio=0 tid=0x00007f2f70008800 nid=0x1940 in Object.wait() [0x00007f2f76350000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.jboss.remoting3.spi.AbstractHandleableCloseable.close(AbstractHandleableCloseable.java:190)
> - locked <0x00000005cc18de48> (a java.lang.Object)
> at org.jboss.ejb.client.remoting.ConnectionPool.safeClose(ConnectionPool.java:177)
> at org.jboss.ejb.client.remoting.ConnectionPool.release(ConnectionPool.java:104)
> - locked <0x00000005cbd369e0> (a org.jboss.ejb.client.remoting.ConnectionPool)
> at org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection.close(ConnectionPool.java:198)
> at org.jboss.ejb.client.remoting.RemotingConnectionManager.safeClose(RemotingConnectionManager.java:65)
> - locked <0x00000005cc1a6840> (a java.util.Collections$SynchronizedRandomAccessList)
> at org.jboss.ejb.client.remoting.ConfigBasedEJBClientContextSelector$ContextCloseListener.contextClosed(ConfigBasedEJBClientContextSelector.java:220)
> at org.jboss.ejb.client.EJBClientContext.close(EJBClientContext.java:1305)
> - locked <0x00000005cc1a6888> (a org.jboss.ejb.client.EJBClientContext)
> at com.microfocus.echoclient.EchoClient.disconnect(EchoClient.java:66)
> at com.microfocus.echoclient.EchoClient.connectDisconnect(EchoClient.java:54)
> at com.microfocus.echoclient.EchoClient.main(EchoClient.java:36)
> The problem also reproduces on Windows (full thread dump of the client app is attached).
8 years, 7 months
[JBoss JIRA] (WFLY-7345) Fix big jboss-beans.xml files truncation
by Dmitrii Tikhomirov (JIRA)
[ https://issues.jboss.org/browse/WFLY-7345?page=com.atlassian.jira.plugin.... ]
Dmitrii Tikhomirov commented on WFLY-7345:
------------------------------------------
[~penczek] I can't reproduce it. May I ask you to attach a Maven example?
> Fix big jboss-beans.xml files truncation
> ----------------------------------------
>
> Key: WFLY-7345
> URL: https://issues.jboss.org/browse/WFLY-7345
> Project: WildFly
> Issue Type: Bug
> Components: POJO
> Affects Versions: 10.1.0.Final
> Reporter: Leonardo Penczek
> Assignee: Ales Justin
> Attachments: Test.java, TestMBean.java, test-jboss-beans.xml
>
>
> If the jboss-beans.xml file is big enough, the value of a property may be truncated.
> Fix for this problem:
> In the org.jboss.as.pojo.KernelDeploymentParsingProcessor constructor, add this line of code:
> inputFactory.setProperty("javax.xml.stream.isCoalescing", Boolean.valueOf(true));
> It prevents the truncation.
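For reference, the same property can be set via the StAX constant rather than the raw string. With coalescing enabled, the parser is required to merge adjacent character data (including CDATA sections) into a single CHARACTERS event, which is what prevents large text values from being delivered in chunks. A small self-contained illustration (the XML here is made up, not the reporter's test-jboss-beans.xml):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class CoalescingDemo {

    // Returns the first character-data chunk of <a>'s content as the
    // parser delivers it.
    public static String firstTextEvent(boolean coalescing) throws XMLStreamException {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // Equivalent to setProperty("javax.xml.stream.isCoalescing", Boolean.TRUE)
        factory.setProperty(XMLInputFactory.IS_COALESCING, coalescing);
        XMLStreamReader reader =
                factory.createXMLStreamReader(new StringReader("<a>foo<![CDATA[bar]]></a>"));
        reader.next(); // START_ELEMENT <a>
        reader.next(); // first character-data event
        return reader.getText();
    }

    public static void main(String[] args) throws XMLStreamException {
        // With coalescing on, the text and CDATA sections arrive as one event.
        System.out.println(firstTextEvent(true)); // prints "foobar"
    }
}
```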
8 years, 7 months
[JBoss JIRA] (WFLY-9379) TUNNEL fails for legacy configurations and gossip_router_hosts does not support capabilities
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-9379?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-9379:
---------------------------------
Summary: TUNNEL fails for legacy configurations and gossip_router_hosts does not support capabilities (was: TUNNEL#gossip_router_hosts does not support capabilities)
> TUNNEL fails for legacy configurations and gossip_router_hosts does not support capabilities
> --------------------------------------------------------------------------------------------
>
> Key: WFLY-9379
> URL: https://issues.jboss.org/browse/WFLY-9379
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.CR1
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
> This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional; fails with missing "org.wildfly.network.socket-binding.undefined").
8 years, 7 months
[JBoss JIRA] (WFLY-9382) TUNNEL fails for legacy configurations and gossip_router_hosts does not support capabilities
by Radoslav Husar (JIRA)
Radoslav Husar created WFLY-9382:
------------------------------------
Summary: TUNNEL fails for legacy configurations and gossip_router_hosts does not support capabilities
Key: WFLY-9382
URL: https://issues.jboss.org/browse/WFLY-9382
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 11.0.0.CR1
Reporter: Radoslav Husar
Assignee: Radoslav Husar
This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional; fails with missing "org.wildfly.network.socket-binding.undefined").
8 years, 7 months
[JBoss JIRA] (WFLY-9381) Reloading fails with BindException when TUNNEL protocol is configured with socket-binding
by Radoslav Husar (JIRA)
Radoslav Husar created WFLY-9381:
------------------------------------
Summary: Reloading fails with BindException when TUNNEL protocol is configured with socket-binding
Key: WFLY-9381
URL: https://issues.jboss.org/browse/WFLY-9381
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 11.0.0.CR1
Reporter: Radoslav Husar
Assignee: Radoslav Husar
{noformat}
15:58:16,189 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 68) MSC000001: Failed to start service org.wildfly.clustering.jgroups.channel.ee: org.jboss.msc.service.StartException in service org.wildfly.clustering.jgroups.channel.ee: java.net.BindException: Address already in use (Bind failed)
at org.jboss.as.clustering.jgroups.subsystem.ChannelBuilder.start(ChannelBuilder.java:104)
at org.wildfly.clustering.service.AsynchronousServiceBuilder.lambda$start$0(AsynchronousServiceBuilder.java:99)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.net.BindException: Address already in use (Bind failed)
at java.net.PlainDatagramSocketImpl.bind0(Native Method)
at java.net.AbstractPlainDatagramSocketImpl.bind(AbstractPlainDatagramSocketImpl.java:93)
at java.net.DatagramSocket.bind(DatagramSocket.java:392)
at java.net.DatagramSocket.<init>(DatagramSocket.java:242)
at java.net.DatagramSocket.<init>(DatagramSocket.java:299)
at org.jgroups.util.DefaultSocketFactory.createDatagramSocket(DefaultSocketFactory.java:62)
at org.jgroups.protocols.TUNNEL.init(TUNNEL.java:142)
at org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:861)
at org.jgroups.stack.ProtocolStack.init(ProtocolStack.java:831)
at org.jboss.as.clustering.jgroups.JChannelFactory.createChannel(JChannelFactory.java:108)
at org.jboss.as.clustering.jgroups.subsystem.ChannelBuilder.start(ChannelBuilder.java:102)
... 5 more
{noformat}
8 years, 7 months
[JBoss JIRA] (WFLY-9380) Attribute TransportResourceDefinition.Attribute#SOCKET_BINDING should be required and with DIAGNOSTICS_SOCKET_BINDING should not allow expressions
by Radoslav Husar (JIRA)
Radoslav Husar created WFLY-9380:
------------------------------------
Summary: Attribute TransportResourceDefinition.Attribute#SOCKET_BINDING should be required and with DIAGNOSTICS_SOCKET_BINDING should not allow expressions
Key: WFLY-9380
URL: https://issues.jboss.org/browse/WFLY-9380
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 11.0.0.CR1
Reporter: Radoslav Husar
Assignee: Radoslav Husar
Priority: Blocker
{noformat}
18:26:26,514 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "jgroups"),
("channel" => "ee")
]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
"Services that were unable to start:" => [
"org.wildfly.clustering.group.ee",
"org.wildfly.clustering.jgroups.channel.ee"
],
"Services that may be the cause:" => ["org.wildfly.network.socket-binding.undefined"]
}}
{noformat}
8 years, 7 months
[JBoss JIRA] (WFLY-9379) TUNNEL#gossip_router_hosts does not support capabilities
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-9379?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-9379:
---------------------------------
Description: This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional). (was: This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s.)
> TUNNEL#gossip_router_hosts does not support capabilities
> --------------------------------------------------------
>
> Key: WFLY-9379
> URL: https://issues.jboss.org/browse/WFLY-9379
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.CR1
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
> This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional).
8 years, 7 months
[JBoss JIRA] (WFLY-9379) TUNNEL#gossip_router_hosts does not support capabilities
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-9379?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-9379:
---------------------------------
Description: This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional; fails with missing "org.wildfly.network.socket-binding.undefined"). (was: This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional).)
> TUNNEL#gossip_router_hosts does not support capabilities
> --------------------------------------------------------
>
> Key: WFLY-9379
> URL: https://issues.jboss.org/browse/WFLY-9379
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.CR1
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
> This requires a custom protocol override to support {{gossip_router_hosts}} as references to {{outbound-socket-binding}}-s (and possibly making {{socket-binding}} used for obtaining unique address optional; fails with missing "org.wildfly.network.socket-binding.undefined").
8 years, 7 months