[JBoss JIRA] (DROOLS-2522) [DMN Designer] Palette aesthetics
by Brian Dellascio (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2522?page=com.atlassian.jira.plugi... ]
Brian Dellascio commented on DROOLS-2522:
-----------------------------------------
I'd like to see the implementation once it's finished. Not sure how we handle that scenario re: tagging in the JIRA. :-)
> [DMN Designer] Palette aesthetics
> ---------------------------------
>
> Key: DROOLS-2522
> URL: https://issues.jboss.org/browse/DROOLS-2522
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.8.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Priority: Minor
> Labels: UX, UXTeam
> Attachments: palette.png
>
>
> The palette CSS styling has changed. Margins have appeared between items in the palette, which is probably not a problem; however, even when the user clicks on this margin (the space between two palette items), a palette item is still selected.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
7 years, 8 months
[JBoss JIRA] (WFLY-10784) WeldStartCompletionService failed when optional components not installed
by Martin Kouba (JIRA)
[ https://issues.jboss.org/browse/WFLY-10784?page=com.atlassian.jira.plugin... ]
Martin Kouba commented on WFLY-10784:
-------------------------------------
bq. Each Phase runs in a separate MSC service that depends on the service for the previous Phase and therefore doesn't run until all active services in that previous phase are in a rest state. So anything you do in CLEANUP will happen after all the stuff done in INSTALL is done.
[~brian.stansberry] But if a deployment processor installs a new service (e.g. https://github.com/wildfly/wildfly/blob/master/weld/subsystem/src/main/ja... or an EE component service), there is no guarantee that the service is "ready" during the next deployment phase, is there?
> WeldStartCompletionService failed when optional components not installed
> ------------------------------------------------------------------------
>
> Key: WFLY-10784
> URL: https://issues.jboss.org/browse/WFLY-10784
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 12.0.0.Final, 13.0.0.Final
> Reporter: Alexander Kudrevatykh
> Assignee: Matej Novotny
> Attachments: arq-test.tar.xz
>
>
> WebComponents are optional, and a failure to initialise them does not cause the deployment to fail. However, since WildFly 12, starting an EAR application with an included WAR fails during deployment, because WeldStartCompletionService depends on all components regardless of their type.
[JBoss JIRA] (DROOLS-2860) Add second level header
by Gabriele Cardosi (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2860?page=com.atlassian.jira.plugi... ]
Gabriele Cardosi closed DROOLS-2860.
------------------------------------
> Add second level header
> -----------------------
>
> Key: DROOLS-2860
> URL: https://issues.jboss.org/browse/DROOLS-2860
> Project: Drools
> Issue Type: Task
> Components: Scenario Simulation and Testing
> Reporter: Gabriele Cardosi
> Assignee: Gabriele Cardosi
> Labels: ScenarioSimulation
> Original Estimate: 1 day
> Remaining Estimate: 1 day
>
> Inside the grid, the scenario-specific columns are grouped in two categories: "Given" and "Expected".
> All the "Given" columns should go below the main "Given" header, and all the "Expected" ones below "Expected".
> For each column there should be, then, a specific header
[JBoss JIRA] (DROOLS-2815) [DMN Designer] Make Expression Type dependent on Diagram Node
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2815?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2815:
--------------------------------
Description:
The expression editor options should be dependent on the diagram node type.
h2. Decision Node
Should allow all expression types except *Function*.
h2. BKM Node
Should allow only the *Function* expression type.
h3. Further improvement
A BKM node automatically has a Function expression inside - separate JIRA, or won't this be implemented?
([~jomarko] IMO If/when [~tirelli] confirms the subject of this JIRA is the expected/required operation I'd ensure BKMs have a Function set and cannot be cleared).
h2. Acceptance tests
- Not possible to change the top-level *function expression type* for a BKM node (/)
- Possible to change a non-top-level *function expression type* for a BKM node (/)
- Possible to change the *function expression type* at any level for a Decision node (/)
was:
The expression editor options should be dependent on the diagram node type.
h2. Decision Node
Should allow all expression types except *Function*.
h2. BKM Node
Should allow only the *Function* expression type.
h3. Further improvement
A BKM node automatically has a Function expression inside - separate JIRA, or won't this be implemented?
([~jomarko] IMO If/when [~tirelli] confirms the subject of this JIRA is the expected/required operation I'd ensure BKMs have a Function set and cannot be cleared).
h2. Acceptance tests
- Not possible to set the *function expression type* for a Decision node
- Possible to set only the *function expression type* for a BKM node
-- Created automatically if we decide to implement it here
- A dialog is shown if a file opened from an external tool doesn't follow this restriction
> [DMN Designer] Make Expression Type dependent on Diagram Node
> -------------------------------------------------------------
>
> Key: DROOLS-2815
> URL: https://issues.jboss.org/browse/DROOLS-2815
> Project: Drools
> Issue Type: Feature Request
> Components: DMN Editor
> Affects Versions: 7.9.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Labels: drools-tools
>
> The expression editor options should be dependent on the diagram node type.
> h2. Decision Node
> Should allow all expression types except *Function*.
> h2. BKM Node
> Should allow only the *Function* expression type.
> h3. Further improvement
> A BKM node automatically has a Function expression inside - separate JIRA, or won't this be implemented?
> ([~jomarko] IMO If/when [~tirelli] confirms the subject of this JIRA is the expected/required operation I'd ensure BKMs have a Function set and cannot be cleared).
> h2. Acceptance tests
> - Not possible to change the top-level *function expression type* for a BKM node (/)
> - Possible to change a non-top-level *function expression type* for a BKM node (/)
> - Possible to change the *function expression type* at any level for a Decision node (/)
[JBoss JIRA] (WFLY-10941) Testsuite - System.out cleanup - vdx, basic (jms), clustering modules
by Rostislav Svoboda (JIRA)
Rostislav Svoboda created WFLY-10941:
----------------------------------------
Summary: Testsuite - System.out cleanup - vdx, basic (jms), clustering modules
Key: WFLY-10941
URL: https://issues.jboss.org/browse/WFLY-10941
Project: WildFly
Issue Type: Bug
Components: Test Suite
Reporter: Rostislav Svoboda
Assignee: Rostislav Svoboda
Testsuite - System.out cleanup - vdx, basic (jms), clustering modules
System.out is the enemy of TeamCity; using JBoss Logging or commenting out unnecessary content is much better.
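A minimal sketch of the cleanup suggested above, with java.util.logging standing in for JBoss Logging (the class name and message here are hypothetical, not from the test suite): diagnostic output goes through a logger at a fine-grained level instead of System.out, so TeamCity's build log stays clean unless that level is explicitly enabled.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch: route diagnostics through a logger instead of
// System.out. Hypothetical class; java.util.logging stands in for
// org.jboss.logging here.
public class DeploymentCheck {
    private static final Logger LOG = Logger.getLogger(DeploymentCheck.class.getName());

    static String describe(int deployed, int total) {
        // Instead of: System.out.println("deployed " + deployed + "/" + total);
        String msg = "deployed " + deployed + "/" + total;
        LOG.log(Level.FINE, msg); // invisible in build output unless FINE is enabled
        return msg;
    }

    public static void main(String[] args) {
        System.out.println(describe(3, 5));
    }
}
```

With JBoss Logging the shape is the same, only the `Logger` import and factory method differ.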
[JBoss JIRA] (SWSQE-374) When starting more than 40 jenkins slaves some of them fail to start
by Filip Brychta (JIRA)
[ https://issues.jboss.org/browse/SWSQE-374?page=com.atlassian.jira.plugin.... ]
Filip Brychta updated SWSQE-374:
--------------------------------
Team: Infrastructure (was: Infrastructure)
Sprint: (was: Kiali QE Sprint 10)
> When starting more than 40 jenkins slaves some of them fail to start
> -----------------------------------------------------------------
>
> Key: SWSQE-374
> URL: https://issues.jboss.org/browse/SWSQE-374
> Project: Kiali QE
> Issue Type: QE Task
> Reporter: Filip Brychta
> Assignee: Filip Brychta
> Priority: Minor
>
> Some slaves fail to start with the following errors:
> From the jenkins log:
> WARNING: Error in provisioning; agent=KubernetesSlave name: jenkins-slave-kiali-ui-tests-fcdsz, template=PodTemplate{inheritFrom='', name='jenkins-slave-kiali-ui-tests', namespace='jenkins-slaves', label='python kiali-ui-tests', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=EmptyDirWorkspaceVolume [memory=false], containers=[ContainerTemplate{name='jnlp', image='docker-registry.default.svc:5000/jenkins-slaves/jenkins-slave-kiali-ui-tests', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='${computer.jnlpmac} ${computer.name} ', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@3106f760}], yaml=}
> java.lang.IllegalStateException: Node was deleted, computer is null
> at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:177)
> at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:292)
> at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
> at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Aug 08, 2018 8:38:46 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
> INFO: Terminating Kubernetes instance for agent jenkins-slave-kiali-ui-tests-fcdsz
> Aug 08, 2018 8:38:46 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
> SEVERE: Computer for agent is null: jenkins-slave-kiali-ui-tests-fcdsz
> WARNING: Unable to move atomically, falling back to non-atomic move.
> java.nio.file.NoSuchFileException: /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/atomic9143280938281774341tmp -> /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/config.xml
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> at hudson.util.AtomicFileWriter.commit(AtomicFileWriter.java:191)
> at hudson.XmlFile.write(XmlFile.java:198)
> at jenkins.model.Nodes.persistNode(Nodes.java:175)
> at jenkins.model.Nodes.addNode(Nodes.java:144)
> at jenkins.model.Jenkins.addNode(Jenkins.java:2058)
> at hudson.slaves.NodeProvisioner$2.run(NodeProvisioner.java:241)
> at hudson.model.Queue._withLock(Queue.java:1380)
> at hudson.model.Queue.withLock(Queue.java:1257)
> at hudson.slaves.NodeProvisioner.update(NodeProvisioner.java:207)
> at hudson.slaves.NodeProvisioner.access$000(NodeProvisioner.java:61)
> at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(No
> WARNING: Unable to move /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/atomic9143280938281774341tmp to /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/config.xml. Attempting to delete /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/atomic9143280938281774341tmp and abandoning.
> Aug 08, 2018 8:38:34 AM hudson.slaves.NodeProvisioner$2 run
> WARNING: Provisioned agent Kubernetes Pod Template failed to launch
> java.nio.file.NoSuchFileException: /var/lib/jenkins/nodes/jenkins-slave-kiali-ui-tests-fcdsz/atomic9143280938281774341tmp
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> at hudson.util.AtomicFileWriter.commit(AtomicFileWriter.java:206)
> at hudson.XmlFile.write(XmlFile.java:198)
> at jenkins.model.Nodes.persistNode(Nodes.java:175)
> at jenkins.model.Nodes.addNode(Nodes.java:144)
> at jenkins.model.Jenkins.addNode(Jenkins.java:2058)
> at hudson.slaves.NodeProvisioner$2.run(NodeProvisioner.java:241)
> From the docker log:
> INFO: Handshaking
> Aug 08, 2018 12:38:43 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Connecting to jenkins2.bc.jonqe.lab.eng.bos.redhat.com:39765
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Trying protocol: JNLP4-connect
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Remote identity confirmed: b0:58:42:9c:19:76:a1:78:81:79:d9:fc:9a:e9:19:fd
> Aug 08, 2018 12:38:44 PM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv
> INFO: [JNLP4-connect connection to jenkins2.bc.jonqe.lab.eng.bos.redhat.com/10.16.23.71:39765] Local headers refused by remote: Unknown client name: jenkins-slave-kiali-ui-tests-fcdsz
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Protocol JNLP4-connect encountered an unexpected exception
> java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: jenkins-slave-kiali-ui-tests-fcdsz
> at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
> at hudson.remoting.Engine.innerRun(Engine.java:609)
> at hudson.remoting.Engine.run(Engine.java:469)
> Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: jenkins-slave-kiali-ui-tests-fcdsz
> at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.newAbortCause(ConnectionHeadersFilterLayer.java:378)
> at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.onRecvClosed(ConnectionHeadersFilterLayer.java:433)
> at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
> at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
> at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172)
> at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
> at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
> at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48)
> at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: java.nio.channels.ClosedChannelException
> ... 7 more
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Connecting to jenkins2.bc.jonqe.lab.eng.bos.redhat.com:39765
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Server reports protocol JNLP4-plaintext not supported, skipping
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Server reports protocol JNLP3-connect not supported, skipping
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Server reports protocol JNLP2-connect not supported, skipping
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener status
> INFO: Server reports protocol JNLP-connect not supported, skipping
> Aug 08, 2018 12:38:44 PM hudson.remoting.jnlp.Main$CuiListener error
> SEVERE: The server rejected the connection: None of the protocols were accepted
> java.lang.Exception: The server rejected the connection: None of the protocols were accepted
> at hudson.remoting.Engine.onConnectionRejected(Engine.java:670)
> at hudson.remoting.Engine.innerRun(Engine.java:634)
> at hudson.remoting.Engine.run(Engine.java:469)
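The "Unable to move atomically, falling back to non-atomic move" warning in the log above comes from an atomic-rename pattern that can be sketched roughly as follows (class name and paths are hypothetical; the real logic lives in Jenkins' hudson.util.AtomicFileWriter). If the temp file has already been deleted by a concurrent terminate, as the NoSuchFileException suggests, both the atomic attempt and the fallback fail.

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the atomic-rename-with-fallback pattern behind the warning above.
public class AtomicMove {
    static void commit(Path tmp, Path target) throws IOException {
        try {
            // Try an atomic rename first (temp file and target on the same filesystem).
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Fall back to a plain, non-atomic move, as the Jenkins warning describes.
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        }
        // If tmp was deleted concurrently, Files.move throws NoSuchFileException,
        // matching the stack trace in this issue.
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("atomic-move-demo");
        Path tmp = Files.write(dir.resolve("config.tmp"), "ok".getBytes());
        commit(tmp, dir.resolve("config.xml"));
        System.out.println(new String(Files.readAllBytes(dir.resolve("config.xml"))));
    }
}
```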
[JBoss JIRA] (JBFORUMS-309) HTTP request not found in custom Login Module in JBOSS 7
by joy sen (JIRA)
joy sen created JBFORUMS-309:
--------------------------------
Summary: HTTP request not found in custom Login Module in JBOSS 7
Key: JBFORUMS-309
URL: https://issues.jboss.org/browse/JBFORUMS-309
Project: JBoss Forums
Issue Type: Bug
Environment: EAP 6.4
Reporter: joy sen
Assignee: Luca Stancapiano
Priority: Blocker
I want to use the HTTP request object in a custom login module which extends AbstractServerLoginModule.
In JBoss 6.x this can be fetched from the PolicyContext or FacesContext object.
But from 7.x onwards it cannot be found. No approach to sharing information between the authentication and login modules (using ThreadLocal / FacesContext / PolicyContext) works.
I need a way to share information between authentication and login in EAP 6.4.
[JBoss JIRA] (WFLY-10736) Server in cluster hangs during start after previous kill
by Miroslav Novak (JIRA)
[ https://issues.jboss.org/browse/WFLY-10736?page=com.atlassian.jira.plugin... ]
Miroslav Novak commented on WFLY-10736:
---------------------------------------
[~pferraro] I've run the test again with traces for clustering, JGroups and Infinispan. Attached logs-traces.zip.
> Server in cluster hangs during start after previous kill
> --------------------------------------------------------
>
> Key: WFLY-10736
> URL: https://issues.jboss.org/browse/WFLY-10736
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Reporter: Miroslav Novak
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: blocker-WF14
> Fix For: 14.0.0.CR1
>
> Attachments: Lodh2TestCase.testRemoteJcaInboundOnly-traces.zip, Lodh2TestCase.testRemoteJcaInboundOnly.zip, Lodh2TestCase.testRemoteJcaInboundOnly2.zip, clusterKilTest.zip, logs-traces.zip, logs-with-workaround.zip, node-1-thread-dump-before-kill-shutdown-sequence.txt, server-with-mdb.log, standalone-full-ha-1.xml, standalone-full-ha-2.xml
>
>
> There is a regression in JGroups or Infinispan in one of our tests for fault tolerance of JMS bridges. However, the work on the JMS bridge appears to be unrelated. The issue was hit in the WF weekly run.
> Test Scenario:
> * There are two servers: InQueue is deployed on Node 1 and OutQueue is deployed on Node 2. Both servers are started.
> * Large byte messages are sent to InQueue on Node 1. A bridge between the servers/queues transfers messages from Node 1 to Node 2.
> * Node 1 is killed and started again.
> * All messages are received from OutQueue deployed on Node 2.
> Result:
> Node 1 does not start after the kill and hangs. The following exception is logged on node 2:
> {code}
> 09:26:17,894 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100000: Node node-1 joined the cluster
> 09:26:18,520 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,521 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,521 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,523 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,868 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) ISPN000310: Starting cluster-wide rebalance for cache default, topology CacheTopology{id=17, phase=READ_OLD_WRITE_ALL, rebalanceId=6, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[node-2: 122, node-1: 134]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (3)[node-2: 84, node-1: 90, node-1: 82]}, unionCH=null, actualMembers=[node-2, node-1, node-1], persistentUUIDs=[12443bfb-e88a-46f3-919e-9213bf38ce19, 2873237f-d881-463f-8a5a-940bf1d764e5, a05ea8af-a83b-42a9-b937-dc2da1cae6d1]}
> 09:26:18,869 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) [Context=default][Scope=node-2]ISPN100002: Started rebalance with topology id 17
> 09:26:18,870 INFO [org.infinispan.CLUSTER] (transport-thread--p14-t5) [Context=default][Scope=node-2]ISPN100003: Node node-2 finished rebalance phase with topology id 17
> 09:26:18,981 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) [Context=default][Scope=node-1]ISPN100003: Node node-1 finished rebalance phase with topology id 17
> 09:27:18,530 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p15-t4) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> Suppressed: java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) [rt.jar:1.8.0_131]
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) [rt.jar:1.8.0_131]
> at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82)
> at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:71)
> at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:540)
> at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:523)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:334)
> at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:85)
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:745)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
> at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
> at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> ... 1 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> ... 1 more
> [CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1]
> 09:27:18,530 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p16-t4) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> Suppressed: java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) [rt.jar:1.8.0_131]
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) [rt.jar:1.8.0_131]
> at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82)
> at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:71)
> at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:540)
> at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:523)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:334)
> at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:85)
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:745)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
> at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
> at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> ... 1 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> ... 1 more
> [CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1]
> {code}
> The default JGroups UDP stack is configured and used by Infinispan. Both servers (JGroups UDP) are bound to 127.0.0.1. Node 2 has a port offset of 1000.
> A thread dump from node 1 when it hangs during start is attached.