[JBoss JIRA] (WFLY-10630) HttpSessionListener.sessionDestroyed() not called if session invalidated in another WAR
by Bartosz Baranowski (JIRA)
[ https://issues.jboss.org/browse/WFLY-10630?page=com.atlassian.jira.plugin... ]
Bartosz Baranowski edited comment on WFLY-10630 at 8/30/18 2:23 AM:
--------------------------------------------------------------------
This happens because the session is (I think) no longer invalidated; it is transferred under a different ID to another context. At least it seems so.
This happens in https://github.com/wildfly/wildfly/blob/master/clustering/web/undertow/sr...
and ends up in https://github.com/undertow-io/undertow/blob/master/servlet/src/main/java...
As is, https://github.com/wildfly/wildfly/blob/master/clustering/web/undertow/sr... is called only once. Oddly enough, this is not true for the Infinispan sessions that back this up.
I am not sure whether this regression is by design or unintentional.
[~swd847] ^^ ?
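For reference, a minimal listener of the kind affected here (a sketch; the class name and logging are illustrative, not taken from the attached EAR):
{code:java}
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Sketch of the listener whose sessionDestroyed() is skipped when the session
// is invalidated from the other WAR; registered via @WebListener in each WAR.
@WebListener
public class LoggingSessionListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        System.out.println("sessionCreated: " + se.getSession().getId());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        // Per the Servlet spec this must fire when HttpSession.invalidate() is
        // called, regardless of which WAR in the EAR triggered it.
        System.out.println("sessionDestroyed: " + se.getSession().getId());
    }
}
{code}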
> HttpSessionListener.sessionDestroyed() not called if session invalidated in another WAR
> ---------------------------------------------------------------------------------------
>
> Key: WFLY-10630
> URL: https://issues.jboss.org/browse/WFLY-10630
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final, 13.0.0.Final
> Environment: Windows 10, Java 1.8.0_131
> Reproducible with both WildFly 10.0.0.Final and WildFly 13.0.0.Final
> Reporter: Bernhard Kabelka
> Assignee: Bartosz Baranowski
> Attachments: LoginForm.png, roles.properties, sessionlistener-test.ear, standalone.xml, users.properties
>
>
> For sessions shared across different WARs in a single EAR, the notification of HttpSessionListener works differently in WildFly 10.0.0.Final (and WildFly 13.0.0.Final) than it used to in WildFly 8.2.0.Final:
> I have an EAR containing two WARs with session sharing enabled across the WARs. Basically, one WAR contains the web UI, and the other contains the REST interfaces for the AJAX calls made by the UI. The user authenticates against the UI-WAR. On logout, a REST method in the AJAX-WAR is triggered, which calls HttpSession.invalidate() on the user session.
> In WildFly 8.2.0.Final, an HttpSessionListener in the UI-WAR gets notified immediately about session creation and destruction.
> In WildFly 13.0.0.Final, however, an HttpSessionListener in either WAR gets only one of the two notifications:
> * In the UI-WAR, I get a notification about the created session immediately when the login form is loaded. However, I do not receive any notification about the session destruction (unless it times out).
> * In the AJAX-WAR, I do not get any notification about the session creation at all, but I immediately receive a notification about the session destruction.
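> A logout endpoint of the kind described (a sketch; the class and path names are illustrative, not taken from the attached EAR):
> {code:java}
> import javax.servlet.http.HttpServletRequest;
> import javax.servlet.http.HttpSession;
> import javax.ws.rs.POST;
> import javax.ws.rs.Path;
> import javax.ws.rs.core.Context;
>
> // Sketch of the AJAX-WAR logout method: invalidating the shared session here
> // should fire sessionDestroyed() on listeners in both WARs, but on WildFly 13
> // only this WAR's listener is notified.
> @Path("/logout")
> public class LogoutResource {
>
>     @POST
>     public void logout(@Context HttpServletRequest request) {
>         HttpSession session = request.getSession(false);
>         if (session != null) {
>             session.invalidate();
>         }
>     }
> }
> {code}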
--
[JBoss JIRA] (DROOLS-2590) [DMN Designer] Remove fallback from QNameFieldConverter.toWidgetValue(..)
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2590?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2590:
--------------------------------
Description:
{{QNameFieldConverter.toWidgetValue(..)}} can have the fallback removed once the other sub-tasks are complete.
h2. Acceptance test
- Open existing
- Save new
was:{{QNameFieldConverter.toWidgetValue(..)}} can have the fallback removed once the other sub-tasks are complete.
> [DMN Designer] Remove fallback from QNameFieldConverter.toWidgetValue(..)
> -------------------------------------------------------------------------
>
> Key: DROOLS-2590
> URL: https://issues.jboss.org/browse/DROOLS-2590
> Project: Drools
> Issue Type: Sub-task
> Components: DMN Editor
> Reporter: Michael Anstis
> Assignee: Michael Anstis
>
> {{QNameFieldConverter.toWidgetValue(..)}} can have the fallback removed once the other sub-tasks are complete.
> h2. Acceptance test
> - Open existing
> - Save new
--
[JBoss JIRA] (WFLY-10949) IllegalArgumentException when getting JDBC driver info if xa-datasource-class is not defined
by Lin Gao (JIRA)
Lin Gao created WFLY-10949:
------------------------------
Summary: IllegalArgumentException when getting JDBC driver info if xa-datasource-class is not defined
Key: WFLY-10949
URL: https://issues.jboss.org/browse/WFLY-10949
Project: WildFly
Issue Type: Bug
Components: JCA
Reporter: Lin Gao
Assignee: Lin Gao
Priority: Minor
Download the Oracle JDBC driver at http://www.oracle.com/technetwork/database/features/jdbc/jdbc-ucp-122-311...
Install the driver as a module and register the JDBC driver:
{code:bash}
module add --name=com.oracle.jdbc --resources=/opt/Downloads/ojdbc8.jar --dependencies=[javax.api, javax.transaction.api]
/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle, driver-module-name=com.oracle.jdbc, driver-datasource-class-name=oracle.jdbc.pool.OracleDataSource, driver-class-name=oracle.jdbc.OracleDriver)
{code}
Then the following command will fail:
{code:bash}
/subsystem=datasources:get-installed-driver(driver-name=oracle)
{code}
The failure message looks like:
{code:java}
{
"outcome" => "failed",
"failure-description" => "WFLYCTL0158: Operation handler failed: java.lang.IllegalArgumentException: newValue is null",
"rolled-back" => true
}
{code}
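The message is consistent with {{org.jboss.dmr.ModelNode.set(String)}}, which rejects null arguments. A defensive sketch of the likely fix shape (an assumption, not the actual WildFly patch):
{code:java}
import org.jboss.dmr.ModelNode;

// Sketch only (not the actual WildFly patch): skip optional driver attributes
// that are undefined instead of passing null to ModelNode.set(String), which
// throws IllegalArgumentException("newValue is null").
public class DriverInfoSketch {
    public static void main(String[] args) {
        ModelNode driver = new ModelNode();
        String xaDataSourceClass = null; // xa-datasource-class is not defined for this driver
        if (xaDataSourceClass != null) {
            driver.get("driver-xa-datasource-class-name").set(xaDataSourceClass);
        }
        System.out.println(driver); // prints "undefined" instead of failing
    }
}
{code}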
--
[JBoss JIRA] (WFLY-10736) Server in cluster hangs during start after previous kill
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-10736?page=com.atlassian.jira.plugin... ]
Paul Ferraro commented on WFLY-10736:
-------------------------------------
[~mnovak] I've identified a bug in Infinispan that is the source of the state transfer timeouts (see the linked JIRA). This definitely needs to be fixed. However, unless I'm mistaken, the Lodh2TestCase doesn't seem to require any @Remote EJBs, so you can disable the <remote/> configuration in the ejb3 subsystem. This prevents the problematic (and unused) cache from starting and allows the test to run.
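For example, via the management CLI (a sketch; verify the resource address against your configuration before removing it):
{code:bash}
# Remove the <remote/> setup from the ejb3 subsystem, then reload the server
/subsystem=ejb3/service=remote:remove
reload
{code}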
> Server in cluster hangs during start after previous kill
> --------------------------------------------------------
>
> Key: WFLY-10736
> URL: https://issues.jboss.org/browse/WFLY-10736
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Reporter: Miroslav Novak
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: blocker-WF14
> Fix For: 14.0.0.CR1
>
> Attachments: Lodh2TestCase.testRemoteJcaInboundOnly-traces.zip, Lodh2TestCase.testRemoteJcaInboundOnly.zip, Lodh2TestCase.testRemoteJcaInboundOnly2.zip, clusterKilTest.zip, logs-traces.zip, logs-traces2.zip, logs-with-workaround.zip, node-1-thread-dump-before-kill-shutdown-sequence.txt, server-with-mdb.log, standalone-full-ha-1.xml, standalone-full-ha-2.xml
>
>
> There is a regression in JGroups or Infinispan in one of our tests for the fault tolerance of JMS bridges. However, the work on JMS bridges appears to be unrelated. The issue was hit in a WF weekly run.
> Test Scenario:
> * There are two servers: InQueue is deployed on Node 1 and OutQueue on Node 2. Both servers are started.
> * Large byte messages are sent to InQueue on Node 1. A bridge between the servers/queues transfers messages from Node 1 to Node 2.
> * Node 1 is killed and started again.
> * All messages are received from OutQueue deployed on Node 2.
> Result:
> Node 1 does not start after the kill and hangs. The following exception is logged on Node 2:
> {code}
> 09:26:17,894 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100000: Node node-1 joined the cluster
> 09:26:18,520 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,521 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,521 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,522 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN000094: Received new cluster view for channel ejb: [node-2|7] (2) [node-2, node-1]
> 09:26:18,523 INFO [org.infinispan.CLUSTER] (thread-12,ejb,node-2) ISPN100001: Node node-1 left the cluster
> 09:26:18,868 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) ISPN000310: Starting cluster-wide rebalance for cache default, topology CacheTopology{id=17, phase=READ_OLD_WRITE_ALL, rebalanceId=6, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[node-2: 122, node-1: 134]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (3)[node-2: 84, node-1: 90, node-1: 82]}, unionCH=null, actualMembers=[node-2, node-1, node-1], persistentUUIDs=[12443bfb-e88a-46f3-919e-9213bf38ce19, 2873237f-d881-463f-8a5a-940bf1d764e5, a05ea8af-a83b-42a9-b937-dc2da1cae6d1]}
> 09:26:18,869 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) [Context=default][Scope=node-2]ISPN100002: Started rebalance with topology id 17
> 09:26:18,870 INFO [org.infinispan.CLUSTER] (transport-thread--p14-t5) [Context=default][Scope=node-2]ISPN100003: Node node-2 finished rebalance phase with topology id 17
> 09:26:18,981 INFO [org.infinispan.CLUSTER] (remote-thread--p5-t2) [Context=default][Scope=node-1]ISPN100003: Node node-1 finished rebalance phase with topology id 17
> 09:27:18,530 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p15-t4) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> Suppressed: java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) [rt.jar:1.8.0_131]
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) [rt.jar:1.8.0_131]
> at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82)
> at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:71)
> at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:540)
> at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:523)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:334)
> at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:85)
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:745)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
> at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
> at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> ... 1 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> ... 1 more
> [CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1]
> 09:27:18,530 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p16-t4) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> Suppressed: java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) [rt.jar:1.8.0_131]
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915) [rt.jar:1.8.0_131]
> at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82)
> at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:71)
> at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:540)
> at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:523)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:334)
> at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:85)
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:745)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
> at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
> at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> ... 1 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1
> at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
> at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [rt.jar:1.8.0_131]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
> ... 1 more
> [CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4 from node-1]
> {code}
> The default JGroups UDP stack is configured and used by Infinispan. Both servers (JGroups UDP) are bound to 127.0.0.1. Node 2 has a port offset of 1000.
> Attaching a thread dump from Node 1, taken while it hangs during start.
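> For reference, a node with a 1000 port offset is typically started like this (a sketch, assuming standalone mode and the attached profile):
> {code:bash}
> ./standalone.sh -c standalone-full-ha-2.xml -Djboss.socket.binding.port-offset=1000
> {code}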
--