[JBoss JIRA] (DROOLS-1017) NPE deleting an expired event in equality mode
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1017?page=com.atlassian.jira.plugi... ]
RH Bugzilla Integration commented on DROOLS-1017:
-------------------------------------------------
Alessandro Lazarotti <alazarot(a)redhat.com> changed the Status of [bug 1295786|https://bugzilla.redhat.com/show_bug.cgi?id=1295786] from NEW to ASSIGNED
> NPE deleting an expired event in equality mode
> ----------------------------------------------
>
> Key: DROOLS-1017
> URL: https://issues.jboss.org/browse/DROOLS-1017
> Project: Drools
> Issue Type: Bug
> Reporter: Mario Fusco
> Assignee: Mario Fusco
> Fix For: 6.4.x
>
>
> Trying to delete an already expired event in equality mode causes the following NPE in the TMS:
> {code}
> java.lang.NullPointerException
> at org.drools.core.common.NamedEntryPoint.delete(NamedEntryPoint.java:506)
> at org.drools.core.common.NamedEntryPoint.delete(NamedEntryPoint.java:442)
> at org.drools.core.common.DefaultAgenda.fireActivation(DefaultAgenda.java:1120)
> at org.drools.core.phreak.RuleExecutor.fire(RuleExecutor.java:121)
> at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:74)
> at org.drools.core.common.DefaultAgenda.fireNextItem(DefaultAgenda.java:1003)
> at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1346)
> at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1284)
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1303)
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1293)
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1274)
> at org.drools.compiler.integrationtests.CepEspTest.testDeleteExpiredEventWithTimestampAndEqualityKey(CepEspTest.java:5682)
> {code}
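The failure mode in the trace above can be sketched with a toy model (illustrative Java, not Drools internals; the class and method names are invented): in equality mode the TMS keys facts by equals()/hashCode(), and a delete on an already-expired event finds no entry for the key. The NPE comes from dereferencing that missing entry; a null guard turns the late delete into a no-op.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an equality-keyed fact store (NOT Drools code).
// In equality mode, facts are looked up by equals()/hashCode().
class EqualityTmsModel {
    private final Map<Object, Object> equalityMap = new HashMap<>();

    void insert(Object event) { equalityMap.put(event, event); }

    // Engine-driven expiry removes the event behind the caller's back.
    void expire(Object event) { equalityMap.remove(event); }

    /** Returns true if the event was deleted, false if it was already gone. */
    boolean delete(Object event) {
        Object entry = equalityMap.get(event);
        if (entry == null) {
            return false; // already expired: nothing to do, no NPE
        }
        equalityMap.remove(event);
        return true;
    }
}
```

With the guard in place, deleting an event after it has expired returns false instead of throwing, which is the behavior the test in the stack trace appears to exercise.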
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-4790) Getting DirectBuffer OOM when sending fragmented binary message to websocket endpoint
by Junshik Jeon (JIRA)
[ https://issues.jboss.org/browse/WFLY-4790?page=com.atlassian.jira.plugin.... ]
Junshik Jeon commented on WFLY-4790:
------------------------------------
[~swd847]
I'm backporting this commit to the 1.2.22.Final tag with a fixed MAX_QUEUED_READ_BUFFERS size.
I will report the test results. Thanks!
> Getting DirectBuffer OOM when sending fragmented binary message to websocket endpoint
> -------------------------------------------------------------------------------------
>
> Key: WFLY-4790
> URL: https://issues.jboss.org/browse/WFLY-4790
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Alpha3
> Reporter: Radim Hatlapatka
> Assignee: Stuart Douglas
> Priority: Critical
> Fix For: 10.0.0.Alpha6
>
>
> When sending a fragmented binary message (a message with a payload of length 4 * 2**20 (4M), sent out in fragments of 64), the server throws {{java.lang.OutOfMemoryError: Direct buffer memory}} [1]
> The default direct buffer memory limit depends on the size set by -Xmx, which in EAP 7.0.0.DR4 is by default -Xmx512m. Increasing it only delays the point at which the limit is hit (it is enough to send those messages a few more times to hit the limit again).
> I believe the issue is similar to the one for EAP 6.4: [https://bugzilla.redhat.com/show_bug.cgi?id=1223708]
> [1]
> {noformat}
> 15:10:55,463 ERROR [org.xnio.listener] (default I/O-1) XNIO001007: A channel event listener threw an exception: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57)
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55)
> at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:143)
> at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:106)
> at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:97)
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
> at io.undertow.server.protocol.framed.AbstractFramedStreamSourceChannel$1.run(AbstractFramedStreamSourceChannel.java:264)
> at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:560)
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:462)
> {noformat}
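The arithmetic behind the exhaustion is worth spelling out (a back-of-the-envelope sketch, not Undertow code; the 16 KiB pool slice size is an assumption, the real pool size may differ): 4 * 2^20 bytes in 64-byte fragments is 65536 frames, and if every queued frame pins a pooled direct buffer, unbounded queueing wants 65536 * 16 KiB = 1 GiB of direct memory, well past a 512m default limit. A bound like the MAX_QUEUED_READ_BUFFERS mentioned in the comment above caps the product instead.

```java
// Toy arithmetic for the report's numbers (NOT Undertow internals).
class FragmentMath {
    /** Number of frames for a payload split into fixed-size fragments. */
    static long fragments(long payloadBytes, long fragmentBytes) {
        return (payloadBytes + fragmentBytes - 1) / fragmentBytes; // ceiling division
    }

    /** Direct memory pinned by queued reads, with an optional queue bound. */
    static long queuedDirectBytes(long fragments, long bufferBytes, long maxQueuedBuffers) {
        return Math.min(fragments, maxQueuedBuffers) * bufferBytes;
    }
}
```

With these numbers, `fragments(4L << 20, 64)` is 65536; unbounded queueing of 16 KiB slices needs 1 GiB, while a bound of 10 queued buffers caps it at 160 KiB.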
[JBoss JIRA] (WFLY-5484) Calling HttpServletRequest.logout() with single sign-on enabled only works every second time
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/WFLY-5484?page=com.atlassian.jira.plugin.... ]
Richard Janík reopened WFLY-5484:
---------------------------------
Logout still doesn't work the first time with 10.0.0.CR5. I'm attaching a reproducer: unzip it into a clean EAP/WildFly installation and run the reproducer.sh script inside.
> Calling HttpServletRequest.logout() with single sign-on enabled only works every second time
> --------------------------------------------------------------------------------------------
>
> Key: WFLY-5484
> URL: https://issues.jboss.org/browse/WFLY-5484
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, Web (Undertow)
> Reporter: Richard Janík
> Assignee: Paul Ferraro
> Priority: Blocker
> Fix For: 10.0.0.CR5
>
> Attachments: reproducer-jbeap-1282.zip
>
>
> See "Steps to Reproduce". Logging out from an application only works every second time, i.e. HttpServletRequest.logout() has to be called twice in order to have any effect.
> This doesn't occur when <single-sign-on/> is not enabled - logout() has the expected effect. The issue is security related, thus I'm adding our security team members as watchers.
[JBoss JIRA] (WFLY-5484) Calling HttpServletRequest.logout() with single sign-on enabled only works every second time
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/WFLY-5484?page=com.atlassian.jira.plugin.... ]
Richard Janík updated WFLY-5484:
--------------------------------
Attachment: reproducer-jbeap-1282.zip
> Calling HttpServletRequest.logout() with single sign-on enabled only works every second time
> --------------------------------------------------------------------------------------------
>
> Key: WFLY-5484
> URL: https://issues.jboss.org/browse/WFLY-5484
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, Web (Undertow)
> Reporter: Richard Janík
> Assignee: Paul Ferraro
> Priority: Blocker
> Fix For: 10.0.0.CR5
>
> Attachments: reproducer-jbeap-1282.zip
>
>
> See "Steps to Reproduce". Logging out from an application only works every second time, i.e. HttpServletRequest.logout() has to be called twice in order to have any effect.
> This doesn't occur when <single-sign-on/> is not enabled - logout() has the expected effect. The issue is security related, thus I'm adding our security team members as watchers.
[JBoss JIRA] (WFLY-4914) Low performance of CoreBridge
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-4914?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil resolved WFLY-4914.
-------------------------------
Fix Version/s: 10.0.0.CR5
Resolution: Done
Performance of the core bridge has been improved in Artemis 1.1.0.wildfly-010
> Low performance of CoreBridge
> -----------------------------
>
> Key: WFLY-4914
> URL: https://issues.jboss.org/browse/WFLY-4914
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.0.0.Alpha5
> Reporter: Ondřej Kalman
> Assignee: Clebert Suconic
> Fix For: 10.0.0.CR5
>
> Attachments: artemis_conf.zip, hornetq_conf.zip
>
>
> We are getting really bad results from our performance tests on core bridges.
> Bandwidth on EAP7 with AMQ Artemis is about 2631 msgs/sec.
> With a similar config on EAP6 with HornetQ, the bandwidth is over 20833 msgs/sec.
> I tried to use the Netty NIO connector instead of HTTP to get better performance, but it was slow anyway.
> The performance tests are run on two bare-metal nodes in our lab.
> I'm attaching the configs which we use for EAP7 and EAP6.
> We have two configs: one (standalone-full-ha-1.xml) for the node with the bridge and one (standalone-full-ha-2.xml) for the node which receives messages.
> Artemis 1.0.0
[JBoss JIRA] (WFLY-5115) java.lang.OutOfMemoryError: Direct buffer memory
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-5115?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil resolved WFLY-5115.
-------------------------------
Fix Version/s: 10.0.0.CR5
Resolution: Done
Upstream issue has been fixed in Artemis 1.1.0.wildfly-008
> java.lang.OutOfMemoryError: Direct buffer memory
> ------------------------------------------------
>
> Key: WFLY-5115
> URL: https://issues.jboss.org/browse/WFLY-5115
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.0.0.Beta1
> Reporter: Erich Duda
> Assignee: Jeff Mesnil
> Priority: Critical
> Fix For: 10.0.0.CR5
>
> Attachments: server1.xml, server2.xml
>
>
> {panel:title=Stacktrace}
> 16:02:52,112 ERROR [org.apache.activemq.artemis.core.client] (Thread-23 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=1d6a38c6-44e8-11e5-9786-154a71a87770-460139716)) AMQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) [rt.jar:1.8.0_51]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) [rt.jar:1.8.0_51]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) [rt.jar:1.8.0_51]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:437) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:817) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:825) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.writeBytes(ChannelBufferWrapper.java:575) [artemis-commons-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionContinuationMessage.encodeRest(SessionContinuationMessage.java:76) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionSendContinuationMessage.encodeRest(SessionSendContinuationMessage.java:100) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.PacketImpl.encode(PacketImpl.java:283) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:246) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:216) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendLargeMessageChunk(ActiveMQSessionContext.java:441) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSendServer(ClientProducerImpl.java:433) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSend(ClientProducerImpl.java:367) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:297) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:132) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl$2.run(BridgeImpl.java:711) [artemis-server-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105) [artemis-core-client-1.0.0.jar:1.0.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_51]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_51]
> {panel}
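One standard way to bound this kind of direct-buffer growth on the sending side is credit-based flow control, which Artemis exposes as a producer window. The sketch below is a toy model of the idea (an invented class, not Artemis code; the window and chunk sizes in the usage note are made up): a chunk may only go on the wire while credits remain, so outstanding buffer bytes never exceed the window.

```java
// Toy credit-based flow control for large-message chunks (NOT Artemis code).
class ChunkFlowControl {
    private long credits;

    ChunkFlowControl(long windowBytes) { this.credits = windowBytes; }

    /** Try to put one chunk on the wire; false when the window is exhausted. */
    boolean trySend(long chunkBytes) {
        if (chunkBytes > credits) return false;
        credits -= chunkBytes;
        return true;
    }

    /** The peer consumed a chunk; its bytes become available again. */
    void ack(long chunkBytes) { credits += chunkBytes; }
}
```

With a 64 KiB window and 16 KiB chunks, four sends succeed and the fifth fails until an ack returns credits, so at no point can more than 64 KiB of chunk data be pinned in direct buffers.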
[JBoss JIRA] (WFLY-5115) java.lang.OutOfMemoryError: Direct buffer memory
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-5115?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil reassigned WFLY-5115:
---------------------------------
Assignee: Jeff Mesnil (was: Andy Taylor)
> java.lang.OutOfMemoryError: Direct buffer memory
> ------------------------------------------------
>
> Key: WFLY-5115
> URL: https://issues.jboss.org/browse/WFLY-5115
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.0.0.Beta1
> Reporter: Erich Duda
> Assignee: Jeff Mesnil
> Priority: Critical
> Attachments: server1.xml, server2.xml
>
>
> {panel:title=Stacktrace}
> 16:02:52,112 ERROR [org.apache.activemq.artemis.core.client] (Thread-23 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=1d6a38c6-44e8-11e5-9786-154a71a87770-460139716)) AMQ214017: Caught unexpected Throwable: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) [rt.jar:1.8.0_51]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) [rt.jar:1.8.0_51]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) [rt.jar:1.8.0_51]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:437) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:817) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:825) [netty-all-4.0.26.Final.jar:4.0.26.Final]
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.writeBytes(ChannelBufferWrapper.java:575) [artemis-commons-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionContinuationMessage.encodeRest(SessionContinuationMessage.java:76) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionSendContinuationMessage.encodeRest(SessionSendContinuationMessage.java:100) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.PacketImpl.encode(PacketImpl.java:283) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:246) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:216) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendLargeMessageChunk(ActiveMQSessionContext.java:441) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSendServer(ClientProducerImpl.java:433) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSend(ClientProducerImpl.java:367) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:297) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:132) [artemis-core-client-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl$2.run(BridgeImpl.java:711) [artemis-server-1.0.0.jar:1.0.0]
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105) [artemis-core-client-1.0.0.jar:1.0.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_51]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_51]
> {panel}
[JBoss JIRA] (WFLY-5374) Server start-up fails on IBM JDK 1.8 (64-bit) on RHEL 6/7 in full/full-ha profile
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-5374?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil resolved WFLY-5374.
-------------------------------
Fix Version/s: 10.0.0.CR3
Resolution: Done
Upstream issue fixed in Artemis 1.1.0.wildfly-007
> Server start-up fails on IBM JDK 1.8 (64-bit) on RHEL 6/7 in full/full-ha profile
> ---------------------------------------------------------------------------------
>
> Key: WFLY-5374
> URL: https://issues.jboss.org/browse/WFLY-5374
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.0.0.CR1, 10.0.0.CR2
> Environment: IBM 1.8 - 64-bit:
> java version "1.8.0"
> Java(TM) SE Runtime Environment (build pxa6480sr1fp10-20150711_01(SR1 FP10))
> IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References 20150630_255633 (JIT enabled, AOT enabled)
> J9VM - R28_jvm.28_20150630_1742_B255633
> JIT - tr.r14.java_20150625_95081.01
> GC - R28_jvm.28_20150630_1742_B255633_CMPRSS
> J9CL - 20150630_255633)
> JCL - 20150711_01 based on Oracle jdk8u51-b15
> Reporter: Miroslav Novak
> Assignee: Clebert Suconic
> Priority: Blocker
> Fix For: 10.0.0.CR3
>
> Attachments: javacore.20150921.051514.4815.0002.txt
>
>
> EAP 7.0.0.DR10/WildFly 10 CR1 fails to start on IBM JDK 1.8 (64-bit) on RHEL 6/7. This is a regression against the previous version.
> Console log:
> {code}[hudson@messaging-20 bin]$ sh standalone.sh -c standalone-full-ha.xml
> =========================================================================
> JBoss Bootstrap Environment
> JBOSS_HOME: /tmp/dr10/jboss-eap-7.0
> JAVA: /qa/tools/opt/x86_64/ibm-java-x86_64-80_2015_07_28//bin/java
> JAVA_OPTS: -server -Xms64m -Xmx512m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
> =========================================================================
> 05:15:10,530 INFO [org.jboss.modules] (main) JBoss Modules version 1.4.4.Final
> 05:15:10,915 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
> 05:15:11,052 INFO [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: EAP 7.0.0.Alpha1 (WildFly Core 2.0.0.CR1) starting
> 05:15:12,492 INFO [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 30) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
> 05:15:12,726 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
> 05:15:12,760 INFO [org.xnio] (MSC service thread 1-1) XNIO version 3.3.2.Final
> 05:15:12,771 INFO [org.xnio.nio] (MSC service thread 1-1) XNIO NIO Implementation Version 3.3.2.Final
> 05:15:12,814 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 44) WFLYCLINF0001: Activating Infinispan subsystem.
> 05:15:12,839 INFO [org.wildfly.iiop.openjdk] (ServerService Thread Pool -- 45) WFLYIIOP0001: Activating IIOP Subsystem
> 05:15:12,842 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 66) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
> 05:15:12,874 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 68) WFLYWS0002: Activating WebServices Extension
> 05:15:12,858 INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 52) WFLYJSF0007: Activated the following JSF Implementations: [main]
> 05:15:12,880 INFO [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 49) WFLYCLJG0001: Activating JGroups subsystem.
> 05:15:12,886 INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 43) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
> 05:15:12,890 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 57) WFLYNAM0001: Activating Naming Subsystem
> 05:15:12,961 INFO [org.jboss.as.security] (ServerService Thread Pool -- 64) WFLYSEC0002: Activating Security Subsystem
> 05:15:12,973 INFO [org.jboss.as.security] (MSC service thread 1-5) WFLYSEC0001: Current PicketBox version=4.9.3.Final
> 05:15:12,990 INFO [org.jboss.as.connector] (MSC service thread 1-2) WFLYJCA0009: Starting JCA Subsystem (WildFly/IronJacamar 1.3.0.Final)
> 05:15:13,061 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0003: Undertow 1.3.0.CR1 starting
> 05:15:13,062 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 39) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
> 05:15:13,065 INFO [org.wildfly.extension.undertow] (MSC service thread 1-4) WFLYUT0003: Undertow 1.3.0.CR1 starting
> 05:15:13,078 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-5) WFLYJCA0018: Started Driver service with driver-name = h2
> 05:15:13,150 INFO [org.jboss.remoting] (MSC service thread 1-1) JBoss Remoting version 4.0.10.Final
> 05:15:13,218 INFO [org.jboss.as.naming] (MSC service thread 1-7) WFLYNAM0003: Starting Naming Service
> 05:15:13,218 INFO [org.jboss.as.mail.extension] (MSC service thread 1-2) WFLYMAIL0001: Bound mail session [java:jboss/mail/Default]
> 05:15:13,361 INFO [org.jboss.as.ejb3] (MSC service thread 1-1) WFLYEJB0481: Strict pool slsb-strict-max-pool is using a max instance size of 128 (per class), which is derived from thread worker pool sizing.
> 05:15:13,361 INFO [org.jboss.as.ejb3] (MSC service thread 1-6) WFLYEJB0482: Strict pool mdb-strict-max-pool is using a max instance size of 32 (per class), which is derived from the number of CPUs on this host.
> 05:15:13,463 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0014: Creating file handler for path '/tmp/dr10/jboss-eap-7.0/welcome-content' with options [directory-listing: 'false', follow-symlink: 'false', case-sensitive: 'true', safe-symlink-paths: '[]']
> 05:15:13,492 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0012: Started server default-server.
> 05:15:13,499 INFO [org.wildfly.extension.undertow] (MSC service thread 1-5) WFLYUT0018: Host default-host starting
> 05:15:13,598 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0006: Undertow HTTP listener default listening on 127.0.0.1:8080
> 05:15:13,598 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0006: Undertow AJP listener ajp listening on 127.0.0.1:8009
> 05:15:13,656 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 71) MODCLUSTER000001: Initializing mod_cluster version 1.3.1.Final
> 05:15:13,680 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 71) MODCLUSTER000032: Listening to proxy advertisements on /224.0.1.105:23364
> 05:15:13,826 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-5) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS]
> 05:15:13,928 INFO [org.wildfly.iiop.openjdk] (MSC service thread 1-4) WFLYIIOP0009: CORBA ORB Service started
> 05:15:13,938 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-5) WFLYDS0013: Started FileSystemDeploymentService for directory /tmp/dr10/jboss-eap-7.0/standalone/deployments
> 05:15:14,053 INFO [org.jboss.ws.common.management] (MSC service thread 1-2) JBWS022052: Starting JBoss Web Services - Stack CXF Server 5.1.0.Final
> 05:15:14,163 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=/tmp/dr10/jboss-eap-7.0/standalone/data/activemq/journal,bindingsDirectory=/tmp/dr10/jboss-eap-7.0/standalone/data/activemq/bindings,largeMessagesDirectory=/tmp/dr10/jboss-eap-7.0/standalone/data/activemq/largemessages,pagingDirectory=/tmp/dr10/jboss-eap-7.0/standalone/data/activemq/paging)
> 05:15:14,187 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221012: Using AIO Journal
> 05:15:14,296 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
> 05:15:14,302 WARN [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=446a359f-6041-11e5-895c-5dd24064fdad-953422266)) JGRP000015: the send buffer of socket DatagramSocket was set to 1MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
> 05:15:14,303 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
> 05:15:14,303 WARN [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=446a359f-6041-11e5-895c-5dd24064fdad-953422266)) JGRP000015: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
> 05:15:14,304 WARN [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=446a359f-6041-11e5-895c-5dd24064fdad-953422266)) JGRP000015: the send buffer of socket MulticastSocket was set to 1MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
> 05:15:14,305 WARN [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=446a359f-6041-11e5-895c-5dd24064fdad-953422266)) JGRP000015: the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
> 05:15:14,315 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
> 05:15:14,317 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
> Unhandled exception
> Type=Segmentation error vmState=0x00000000
> J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
> Handler1=00007F100C601D90 Handler2=00007F1007DD03F0 InaccessibleAddress=0000000000000000
> RDI=0000000002C82D00 RSI=0000000000000000 RAX=00007F0FDCB3977F RBX=0000000000000000
> RCX=0000000000000000 RDX=00007F0FDCB39780 R8=00007F0FDCB39780 R9=00007F100C667A58
> R10=00007F0FD03885C0 R11=0000000000200246 R12=0000000002C82D00 R13=00000000000001F4
> R14=00007F0FD02A9020 R15=0000000000000000
> RIP=00007F100C606C45 GS=0000 FS=0000 RSP=00007F0FDCB394C0
> EFlags=0000000000210246 CS=0033 RBP=00007F0FDCB39770 ERR=0000000000000004
> TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000000
> xmm0 00007f1008067fe0 (f: 134643680.000000, d: 6.902435e-310)
> xmm1 0000000002c82d00 (f: 46673152.000000, d: 2.305960e-316)
> xmm2 0000000002c82a78 (f: 46672504.000000, d: 2.305928e-316)
> xmm3 00000000029a16c8 (f: 43652808.000000, d: 2.156735e-316)
> xmm4 0000000000000005 (f: 5.000000, d: 2.470328e-323)
> xmm5 00000000029a1828 (f: 43653160.000000, d: 2.156753e-316)
> xmm6 0000000000000001 (f: 1.000000, d: 4.940656e-324)
> xmm7 0000000000000001 (f: 1.000000, d: 4.940656e-324)
> xmm8 4169e3bd00000000 (f: 0.000000, d: 1.357361e+07)
> xmm9 3fcb1f59ad7ad780 (f: 2910509056.000000, d: 2.118942e-01)
> xmm10 3ff0000000000000 (f: 0.000000, d: 1.000000e+00)
> xmm11 be88000000000000 (f: 0.000000, d: -1.788139e-07)
> xmm12 bcd2800000000000 (f: 0.000000, d: -1.026956e-15)
> xmm13 bc34000000000000 (f: 0.000000, d: -1.084202e-18)
> xmm14 3c66353ab386a94d (f: 3011946752.000000, d: 9.631153e-18)
> xmm15 4030a2b23f3baa00 (f: 1060874752.000000, d: 1.663553e+01)
> Module=/qa/tools/opt/x86_64/ibm-java-x86_64-80_2015_07_28/jre/lib/amd64/compressedrefs/libj9vm28.so
> Module_base_address=00007F100C58E000
> Target=2_80_20150630_255633 (Linux 3.10.0-229.1.2.el7.x86_64)
> CPU=amd64 (8 logical CPUs) (0x4dce1d000 RAM)
> ----------- Stack Backtrace -----------
> (0x00007F100C606C45 [libj9vm28.so+0x78c45])
> (0x00007F100C61BECA [libj9vm28.so+0x8deca])
> ---------------------------------------
> JVMDUMP039I Processing dump event "gpf", detail "" at 2015/09/21 05:15:14 - please wait.
> JVMDUMP032I JVM requested System dump using '/tmp/dr10/jboss-eap-7.0/bin/core.20150921.051514.4815.0001.dmp' in response to an event
> JVMDUMP010I System dump written to /tmp/dr10/jboss-eap-7.0/bin/core.20150921.051514.4815.0001.dmp
> JVMDUMP032I JVM requested Java dump using '/tmp/dr10/jboss-eap-7.0/bin/javacore.20150921.051514.4815.0002.txt' in response to an event
> JVMDUMP010I Java dump written to /tmp/dr10/jboss-eap-7.0/bin/javacore.20150921.051514.4815.0002.txt
> JVMDUMP032I JVM requested Snap dump using '/tmp/dr10/jboss-eap-7.0/bin/Snap.20150921.051514.4815.0003.trc' in response to an event
> JVMDUMP010I Snap dump written to /tmp/dr10/jboss-eap-7.0/bin/Snap.20150921.051514.4815.0003.trc
> JVMDUMP007I JVM Requesting JIT dump using '/tmp/dr10/jboss-eap-7.0/bin/jitdump.20150921.051514.4815.0004.dmp'
> JVMDUMP010I JIT dump written to /tmp/dr10/jboss-eap-7.0/bin/jitdump.20150921.051514.4815.0004.dmp
> JVMDUMP013I Processed dump event "gpf", detail "".
> {code}
> Java version:
> {code}java version "1.8.0"
> Java(TM) SE Runtime Environment (build pxa6480sr1fp10-20150711_01(SR1 FP10))
> IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References 20150630_255633 (JIT enabled, AOT enabled)
> J9VM - R28_jvm.28_20150630_1742_B255633
> JIT - tr.r14.java_20150625_95081.01
> GC - R28_jvm.28_20150630_1742_B255633_CMPRSS
> J9CL - 20150630_255633)
> JCL - 20150711_01 based on Oracle jdk8u51-b15
> {code}
[JBoss JIRA] (WFLY-5453) NPE during backup activation in collocated HA topology
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-5453?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil resolved WFLY-5453.
-------------------------------
Fix Version/s: 10.0.0.CR5
Resolution: Done
Fixed in Artemis 1.1.0.wildfly-007
> NPE during backup activation in collocated HA topology
> ------------------------------------------------------
>
> Key: WFLY-5453
> URL: https://issues.jboss.org/browse/WFLY-5453
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.0.0.CR2
> Reporter: Miroslav Novak
> Assignee: Clebert Suconic
> Priority: Blocker
> Fix For: 10.0.0.CR5
>
> Attachments: standalone-full-ha-1.xml, standalone-full-ha-2.xml
>
>
> If there are two EAP 7.0.0.DR11 (Artemis 1.1.0) servers in a collocated HA topology with a replicated journal, the backup does not activate after one server is killed. There is an NPE in the log of the 2nd EAP server:
> {code}
> 10:13:12,231 ERROR [org.apache.activemq.artemis.core.server] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) AMQ224000: Failure in initialisation: java.lang.NullPointerException
> at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.run(SharedNothingBackupActivation.java:235)
> at java.lang.Thread.run(Thread.java:745)
> 10:13:12,235 ERROR [stderr] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) java.lang.NullPointerException
> 10:13:12,235 ERROR [stderr] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.run(SharedNothingBackupActivation.java:235)
> 10:13:12,236 ERROR [stderr] (AMQ119000: Activation for server ActiveMQServerImpl::serverUUID=null) at java.lang.Thread.run(Thread.java:745)
> {code}
> Attaching the configuration of both servers.
[JBoss JIRA] (WFLY-5932) Invalidating a session of an SSO on a different node than where the session was created does not logout the user
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/WFLY-5932?page=com.atlassian.jira.plugin.... ]
Richard Janík updated WFLY-5932:
--------------------------------
Steps to Reproduce:
* Two servers with a distributable deployment capable of calling Session.invalidate() (FORM auth), clustered single-sign-on, user added
** A1 = 127.0.0.1:8080/deployment
** A2 = 127.0.0.2:8180/deployment
* Access A1, authenticate, access A2 (we still only have a single session - everything is distributable), invalidate the session on A2 (e.g. by calling 127.0.0.2:8180/deployment?invalidate=true), access A2:
** Expected: we need to authenticate and then receive a new session
** Actual result: we receive a new session but don't need to authenticate
Affects Version/s: 10.0.0.CR5
> Invalidating a session of an SSO on a different node than where the session was created does not logout the user
> ----------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-5932
> URL: https://issues.jboss.org/browse/WFLY-5932
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.CR5
> Reporter: Richard Janík
> Assignee: Paul Ferraro
> Priority: Critical
>
> See "Steps to Reproduce" for the description. An additional scenario with a failover where we don't need to authenticate with the last request (but where we should be required to):
> * Access A1, authenticate, fail A1 (e.g. shut down the server), access A2, invalidate the session on A2, access A2
> Scenarios where the SSO context is destroyed (where we need to authenticate with the last request as expected):
> * Access A1, authenticate, invalidate session on A1, access A1
> * Access A1, authenticate, access A2, invalidate session on A1, access A1
> Possibly related to JBEAP-1228, JBEAP-1282. Note that there is only ever a single session bound to an SSO. I'm not flagging this as a blocker, since the issue usually doesn't manifest thanks to sticky sessions on the load balancer.
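The expected behavior in the scenarios above reduces to one invariant, sketched here as a toy model (illustrative Java, not WildFly/Infinispan code; all names are invented): since both the session store and the SSO store are replicated across nodes, invalidating a session on *any* node must also tear down the SSO entry that references it. Otherwise the next request on that node finds a live SSO and skips authentication, which is the "actual result" reported.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Toy model of clustered SSO invalidation (NOT WildFly code).
// Both maps stand in for replicated stores visible from every node.
class ClusteredSso {
    private final Map<String, String> sessions = new HashMap<>(); // sessionId -> user
    private final Map<String, String> sso = new HashMap<>();      // ssoId -> sessionId

    void login(String ssoId, String sessionId, String user) {
        sessions.put(sessionId, user);
        sso.put(ssoId, sessionId);
    }

    /** Session.invalidate() as it should behave, wherever it is called. */
    void invalidate(String sessionId) {
        sessions.remove(sessionId);
        // The crucial step: also destroy the SSO referencing this session.
        sso.values().removeIf(sid -> Objects.equals(sid, sessionId));
    }

    /** A request must authenticate unless its SSO still maps to a live session. */
    boolean needsAuth(String ssoId) {
        String sid = sso.get(ssoId);
        return sid == null || !sessions.containsKey(sid);
    }
}
```

In this model, invalidating the session (on whichever node) makes the next request require authentication, matching the "expected" outcome in the steps to reproduce.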