[JBoss JIRA] (JGRP-2137) JGroups: one slow/stuck node slows/freezes entire cluster
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2137?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2137:
--------------------------------
This is the way the GossipRouter worked prior to 3.6.5.Final, when NIO was introduced. In 3.6.5+, you can start the GossipRouter with {{-nio true}} and it will use non-blocking NIO instead of blocking TCP.
I therefore recommend upgrading to at least 3.6.5, or better yet the latest stable version of the 3.6 line, e.g. 3.6.11.Final.
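To illustrate what non-blocking mode buys here (a minimal sketch of the general NIO pattern, not the GossipRouter's actual code): a non-blocking write returns immediately with however many bytes the OS accepted, and the remainder is parked on a per-connection queue to be retried on the next writability event, so the calling thread never stalls on a slow peer.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a non-blocking write path. In non-blocking mode, write()
// returns at once with the number of bytes the OS accepted; anything left over
// is queued for retry when the selector reports the channel writable, instead
// of blocking the router thread the way SocketOutputStream.socketWrite does.
public class NonBlockingWrite {
    private final Deque<ByteBuffer> pending = new ArrayDeque<>();

    public void send(Pipe.SinkChannel ch, ByteBuffer buf) throws Exception {
        ch.write(buf);              // may accept only part of the buffer
        if (buf.hasRemaining())
            pending.add(buf);       // retry on the next OP_WRITE event
    }

    public int pendingCount() { return pending.size(); }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();    // stands in for a TCP connection to a slow peer
        pipe.sink().configureBlocking(false);
        NonBlockingWrite w = new NonBlockingWrite();
        ByteBuffer big = ByteBuffer.allocate(8 << 20);
        // Keep writing until the OS buffer is full: the call still returns
        // immediately, and the unwritten remainder lands on the queue.
        while (w.pendingCount() == 0) {
            big.clear();
            w.send(pipe.sink(), big);
        }
        System.out.println("queued buffers: " + w.pendingCount());
    }
}
```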
> JGroups: one slow/stuck node slows/freezes entire cluster
> ---------------------------------------------------------
>
> Key: JGRP-2137
> URL: https://issues.jboss.org/browse/JGRP-2137
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.4
> Environment: Multi node cluster. Uses TUNNEL mode with GossipRouter, TCP.
> Reporter: Bharad S
> Assignee: Bela Ban
>
> We have a multi-node cluster with one node (say Node A) running the gossip router. We use TUNNEL mode, i.e., the other nodes in the cluster can talk to each other only via Node A. If one of the nodes in the cluster (say Node B) is slow in reading, or gets stuck while reading from the channel, it affects the entire cluster. Inter-node gossip also gets stuck and nodes fall out of the cluster.
> A thread dump on Node A indicates that the 'ConnectionHandler' for Node B is stuck (at SocketOutputStream.socketWrite) while holding a lock, thus blocking the ConnectionHandlers for all other nodes.
> --snip (from thread dump on Node A) --
> "gossip-handlers-129" #1088 daemon prio=5 os_prio=0 tid=0x00007f65d20ce800 nid=0x2353 runnable [0x00007f6557efd000]
> java.lang.Thread.State: RUNNABLE
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
> at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
> at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
> at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:857)
> at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:828)
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
> - locked <0x00000005f2445028> (a sun.security.ssl.AppOutputStream)
> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> - locked <0x00000005f248a210> (a java.io.BufferedOutputStream)
> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
> at org.jgroups.stack.GossipRouter.sendToMember(GossipRouter.java:607)
> - locked <0x00000005f248a1f0> (a java.io.DataOutputStream)
> at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:590)
> - locked <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
> at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
> at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
> at java.lang.Thread.run(Thread.java:745)
> --snip end--
> Other gossip-handler threads (meant for other nodes in the cluster) on Node A wait to acquire the lock on the ConnectionHandler map at the following place: GossipRouter.java, method sendToAllMembersInGroup
> --snip--
> "gossip-handlers-128"
> #1078 daemon prio=5 os_prio=0 tid=0x00007f65d20ce000 nid=0x2343 waiting
> for monitor entry [0x00007f654c258000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:583)
> - waiting to lock <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
> at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
> at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
> at java.lang.Thread.run(Thread.java:745)
> "gossip-handlers-127"
> #1073 daemon prio=5 os_prio=0 tid=0x00007f65d01a6800 nid=0x233c waiting
> for monitor entry [0x00007f6697afb000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:583)
> - waiting to lock <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
> at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
> at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
> at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
> at java.lang.Thread.run(Thread.java:745)
> --snip end--
> If Node B were to go down, it would quickly be taken out of the cluster and
> there would be no problem. But if it stays in the cluster and is slow, is
> there a way to avoid the rest of the cluster being affected? We'd
> appreciate any help/pointers. Thanks.
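One general way to keep a slow member from stalling the shared routing path (a sketch of the pattern only, not of what any JGroups version actually implements) is to give each member its own bounded send queue drained by a dedicated writer thread, so a stuck socket only fills that member's queue:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Each member gets a bounded queue and its own writer thread. The routing
// thread only ever does a non-blocking offer(), so a member whose socket has
// stalled fills (and overflows) its own queue without holding any shared lock.
public class PerMemberSender {
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    public void register(String member, Consumer<String> socketWrite) {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(4); // bounded: per-member backpressure
        queues.put(member, q);
        Thread writer = new Thread(() -> {
            try {
                while (true)
                    socketWrite.accept(q.take()); // only this thread can block on the socket
            } catch (InterruptedException ignored) { }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Returns false (instead of blocking) when the member's queue is full.
    public boolean send(String member, String msg) {
        return queues.get(member).offer(msg);
    }
}
```

In this scheme the trade-off is explicit: messages to a stuck member are eventually dropped (or counted, or trigger its removal), while traffic to healthy members keeps flowing.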
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JGRP-2137) JGroups: one slow/stuck node slows/freezes entire cluster
by Bharad S (JIRA)
Bharad S created JGRP-2137:
------------------------------
Summary: JGroups: one slow/stuck node slows/freezes entire cluster
Key: JGRP-2137
URL: https://issues.jboss.org/browse/JGRP-2137
Project: JGroups
Issue Type: Bug
Affects Versions: 3.6.4
Environment: Multi node cluster. Uses TUNNEL mode with GossipRouter, TCP.
Reporter: Bharad S
Assignee: Bela Ban
We have a multi-node cluster with one node (say Node A) running the gossip router. We use TUNNEL mode, i.e., the other nodes in the cluster can talk to each other only via Node A. If one of the nodes in the cluster (say Node B) is slow in reading, or gets stuck while reading from the channel, it affects the entire cluster. Inter-node gossip also gets stuck and nodes fall out of the cluster.
A thread dump on Node A indicates that the 'ConnectionHandler' for Node B is stuck (at SocketOutputStream.socketWrite) while holding a lock, thus blocking the ConnectionHandlers for all other nodes.
--snip (from thread dump on Node A) --
"gossip-handlers-129" #1088 daemon prio=5 os_prio=0 tid=0x00007f65d20ce800 nid=0x2353 runnable [0x00007f6557efd000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:857)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:828)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
- locked <0x00000005f2445028> (a sun.security.ssl.AppOutputStream)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
- locked <0x00000005f248a210> (a java.io.BufferedOutputStream)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.jgroups.stack.GossipRouter.sendToMember(GossipRouter.java:607)
- locked <0x00000005f248a1f0> (a java.io.DataOutputStream)
at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:590)
- locked <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
at java.lang.Thread.run(Thread.java:745)
--snip end--
Other gossip-handler threads (meant for other nodes in the cluster) on Node A wait to acquire the lock on the ConnectionHandler map at the following place: GossipRouter.java, method sendToAllMembersInGroup
--snip--
"gossip-handlers-128"
#1078 daemon prio=5 os_prio=0 tid=0x00007f65d20ce000 nid=0x2343 waiting
for monitor entry [0x00007f654c258000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:583)
- waiting to lock <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
at java.lang.Thread.run(Thread.java:745)
"gossip-handlers-127"
#1073 daemon prio=5 os_prio=0 tid=0x00007f65d01a6800 nid=0x233c waiting
for monitor entry [0x00007f6697afb000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.jgroups.stack.GossipRouter.sendToAllMembersInGroup(GossipRouter.java:583)
- waiting to lock <0x00000005d4aa1458> (a java.util.concurrent.ConcurrentHashMap)
at org.jgroups.stack.GossipRouter.route(GossipRouter.java:487)
at org.jgroups.stack.GossipRouter.access$800(GossipRouter.java:63)
at org.jgroups.stack.GossipRouter$ConnectionHandler.readLoop(GossipRouter.java:753)
at org.jgroups.stack.GossipRouter$ConnectionHandler.run(GossipRouter.java:706)
at java.lang.Thread.run(Thread.java:745)
--snip end--
If Node B were to go down, it would quickly be taken out of the cluster and
there would be no problem. But if it stays in the cluster and is slow, is
there a way to avoid the rest of the cluster being affected? We'd
appreciate any help/pointers. Thanks.
[JBoss JIRA] (WFLY-7692) http2-* attributes def values in Undertow for listeners and mod-cluster
by Chao Wang (JIRA)
[ https://issues.jboss.org/browse/WFLY-7692?page=com.atlassian.jira.plugin.... ]
Chao Wang moved JBEAP-7547 to WFLY-7692:
----------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-7692 (was: JBEAP-7547)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Web (Undertow)
(was: Web (Undertow))
Affects Version/s: 10.1.0.Final
(was: 7.1.0.DR7)
> http2-* attributes def values in Undertow for listeners and mod-cluster
> -----------------------------------------------------------------------
>
> Key: WFLY-7692
> URL: https://issues.jboss.org/browse/WFLY-7692
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.1.0.Final
> Reporter: Chao Wang
> Assignee: Chao Wang
> Priority: Minor
>
> For {{http-listener}}, {{https-listener}} and {{mod-cluster}} filter in Undertow subsystem, there are some http2 related attributes:
> {code}
> "http2-header-table-size" => {
> "type" => INT,
> "description" => "The size of the header table used for HPACK compression, in bytes. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression.",
> "expressions-allowed" => true,
> "nillable" => true,
> "unit" => "BYTES",
> "access-type" => "read-write",
> "storage" => "configuration",
> "restart-required" => "all-services"
> },
> "http2-initial-window-size" => {
> "type" => INT,
> "description" => "The flow control window size that controls how quickly the client can send data to the server",
> "expressions-allowed" => true,
> "nillable" => true,
> "unit" => "BYTES",
> "access-type" => "read-write",
> "storage" => "configuration",
> "restart-required" => "all-services"
> },
> "http2-max-concurrent-streams" => {
> "type" => INT,
> "description" => "The maximum number of HTTP/2 streams that can be active at any time on a single connection",
> "expressions-allowed" => true,
> "nillable" => true,
> "access-type" => "read-write",
> "storage" => "configuration",
> "restart-required" => "all-services"
> },
> "http2-max-frame-size" => {
> "type" => INT,
> "description" => "The max HTTP/2 frame size",
> "expressions-allowed" => true,
> "nillable" => true,
> "unit" => "BYTES",
> "access-type" => "read-write",
> "storage" => "configuration",
> "restart-required" => "all-services"
> },
> "http2-max-header-list-size" => {
> "type" => INT,
> "description" => "The maximum size of request headers the server is prepared to accept",
> "expressions-allowed" => true,
> "nillable" => true,
> "unit" => "BYTES",
> "access-type" => "read-write",
> "storage" => "configuration",
> "restart-required" => "all-services"
> },
> {code}
> By default, all of these attributes are undefined. This might be reasonable e.g. for {{http2-max-concurrent-streams}}, where undefined might mean the maximum is not restricted at all (is that actually true?). But for the other attributes this can be misleading, as the user does not know what the real, effective default is (e.g. 65535 is used for {{http2-initial-window-size}}). Thus I think we should provide default values here so the user knows what values are in effect.
> EDIT: please also pay some attention to the {{max-ajp-packet-size}} attribute available on {{ajp-listener}} and the {{mod-cluster}} filter - it has no default value set (undefined by default, although I believe some default is actually used - 8192?) and no unit is specified in its resource description (bytes, I believe it should be).
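For reference, these are the initial values RFC 7540 (section 6.5.2) prescribes when a SETTINGS parameter is left unspecified; the mapping onto the Undertow attribute names is my own assumption about which attribute backs which setting:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// HTTP/2 SETTINGS defaults from RFC 7540, section 6.5.2: the values a peer
// assumes when a setting is not sent. -1 marks "unlimited" in the spec.
// Pairing them with the Undertow attribute names is an assumption, not
// something taken from the WildFly resource descriptions.
public class Http2SpecDefaults {
    public static Map<String, Long> defaults() {
        Map<String, Long> d = new LinkedHashMap<>();
        d.put("http2-header-table-size", 4096L);     // SETTINGS_HEADER_TABLE_SIZE
        d.put("http2-initial-window-size", 65535L);  // SETTINGS_INITIAL_WINDOW_SIZE
        d.put("http2-max-frame-size", 16384L);       // SETTINGS_MAX_FRAME_SIZE
        d.put("http2-max-concurrent-streams", -1L);  // spec default: unlimited
        d.put("http2-max-header-list-size", -1L);    // spec default: unlimited
        return d;
    }
}
```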
[JBoss JIRA] (JGRP-2136) Merge installs the same view
by Bela Ban (JIRA)
Bela Ban created JGRP-2136:
------------------------------
Summary: Merge installs the same view
Key: JGRP-2136
URL: https://issues.jboss.org/browse/JGRP-2136
Project: JGroups
Issue Type: Bug
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 3.6.12, 4.0
The cluster is A,B with AUTH and ASYM_ENCRYPT. When a rogue member C (with an incorrect auth_value in AUTH) attempts to join, it will be rejected by AUTH. However, while the subsequent merge attempts also fail (as designed), this leads to spurious MergeView installations in A and B, e.g.:
{noformat}
** View=[belasmac-17416|1] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|2] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|1] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|3] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|2] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|4] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|3] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|5] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|4] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|6] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|5] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|7] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|6] (2) [belasmac-17416, belasmac-56188]
** MergeView::[belasmac-17416|8] (2) [belasmac-17416, belasmac-56188], 1 subgroups: [belasmac-17416|7] (2) [belasmac-17416, belasmac-56188]
{noformat}
This corrupts neither the security of the system (the rogue member cannot merge-join) nor its correctness, but we need to prevent the spurious views: systems like Infinispan might start a rebalance on every new view, regardless of whether it is the same as the previous one.
SOLUTION: the merge leader needs to see if the MergeView it is about to send out is the same as the current view, and simply drop it if that's the case.
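A minimal sketch of that check, with plain member lists standing in for the real org.jgroups.View API:

```java
import java.util.List;

// Simplified sketch of the proposed fix: the merge leader compares the
// membership of the MergeView it is about to send with the current view's
// membership (ignoring the view id, which always advances) and drops the
// MergeView when nothing changed. Strings stand in for org.jgroups.Address.
public class MergeViewFilter {
    public static boolean shouldInstall(List<String> currentMembers, List<String> mergedMembers) {
        return !currentMembers.equals(mergedMembers); // identical membership => redundant view
    }
}
```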
[JBoss JIRA] (WFLY-7196) The outcome of xa_commit call on non-existing transaction is silently ignored
by Flavia Rainone (JIRA)
[ https://issues.jboss.org/browse/WFLY-7196?page=com.atlassian.jira.plugin.... ]
Flavia Rainone reassigned WFLY-7196:
------------------------------------
Assignee: Flavia Rainone
> The outcome of xa_commit call on non-existing transaction is silently ignored
> ----------------------------------------------------------------------------
>
> Key: WFLY-7196
> URL: https://issues.jboss.org/browse/WFLY-7196
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Reporter: Tom Ross
> Assignee: Flavia Rainone
>
> This is from mail thread on transactional mailing list:
> {noformat}
> There is definitely a bug in EJBR.
> These two at least look wrong by inspection:
> https://github.com/wildfly/wildfly/blob/master/ejb3/src/main/java/org/jbo...
> https://github.com/wildfly/wildfly/blob/master/ejb3/src/main/java/org/jbo...
> The way they are implemented means that, when the transaction manager calls commit/prepare on an Xid and there is no transaction on the remote side, EJBR just ignores the problem, whereas it needs to return an XA error so the TM knows about it.
> You could simulate this with a test case that gets hold of the EJBR XAResource and invokes:
> xar.prepare(dummyXid) and does not get back XAER_NOTA
> That explains how the root transaction can prepare/commit without error as we can see here:
> 2016-09-12 16:04:02,303 TRACE [com.arjuna.ats.jta] (EJB async - 8) XAResourceRecord.topLevelPrepare for XAResourceRecord < resource:ResourceImpl{transactionKey=0:ffff0af7f6b6:6b296e38:57d14447:63f2e, ejbClientContext=org.jboss.ejb.client.EJBClientContext@18d6826a, nodeName='svc-2-presentation', state=State{transactionID=org.jboss.ejb.client.XidTransactionID@303ce194, suspended=false, participantCnt=0}}, txid:< formatId=131077, gtrid_length=46, bqual_length=36, tx_uid=0:ffff0af7f6b6:6b296e38:57d14447:63f2e, node_name=svc_2_presentation, branch_uid=0:ffff0af7f6b6:6b296e38:57d14447:63f4a, subordinatenodename=null, eis_name=unknown eis name >, heuristic: TwoPhaseOutcome.FINISH_OK com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord@1754f10f >, record id=0:ffff0af7f6b6:6b296e38:57d14447:63f4b
> 2016-09-12 16:04:02,364 TRACE [com.arjuna.ats.jta] (EJB async - 8) XAResourceRecord.topLevelCommit for XAResourceRecord < resource:ResourceImpl{transactionKey=0:ffff0af7f6b6:6b296e38:57d14447:63f2e, ejbClientContext=org.jboss.ejb.client.EJBClientContext@18d6826a, nodeName='svc-2-presentation', state=null}, txid:< formatId=131077, gtrid_length=46, bqual_length=36, tx_uid=0:ffff0af7f6b6:6b296e38:57d14447:63f2e, node_name=svc_2_presentation, branch_uid=0:ffff0af7f6b6:6b296e38:57d14447:63f4a, subordinatenodename=null, eis_name=unknown eis name >, heuristic: TwoPhaseOutcome.FINISH_OK com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord@1754f10f >, record id=0:ffff0af7f6b6:6b296e38:57d14447:63f4b
> Both those lines I indicated above have debug statements but they should be ERROR and return appropriate XA exceptions.
> What it does not explain is why the transaction is initially imported 45 minutes later. I guess it is stuck in some EJB processing queue.
> What I think needs curing first is the data integrity issue (i.e. silently ignoring if there is no imported transaction) in the EJB remoting transport. That should be relatively easy as EJBR can just return appropriate errors when it can't find the imported transaction. This part should be tackled urgently.
> After that the strange error where the import is processed late should be tackled but I am not familiar enough with the EJB remoting async architecture to suggest how to proceed with that.
> {noformat}
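The behaviour the thread asks for can be sketched as follows. SubordinateRegistry is a hypothetical stand-in for the EJB remoting transport's imported-transaction lookup, but XAER_NOTA itself is the standard "no such transaction" error code from javax.transaction.xa:

```java
import java.util.HashSet;
import java.util.Set;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Hypothetical stand-in for the transport's imported-transaction registry.
// The point of the sketch: prepare/commit on an unknown Xid must surface
// XAER_NOTA to the transaction manager instead of being silently ignored.
public class SubordinateRegistry {
    private final Set<Xid> imported = new HashSet<>();

    public void importTransaction(Xid xid) { imported.add(xid); }

    public int prepare(Xid xid) throws XAException {
        if (!imported.contains(xid))
            throw new XAException(XAException.XAER_NOTA); // no such transaction
        return XAResource.XA_OK;
    }
}
```

With this shape, the test case suggested in the mail (call prepare with a dummy Xid and expect XAER_NOTA back) becomes straightforward to write.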
[JBoss JIRA] (WFLY-7691) Custom pool not working when applied to Message-Driven Bean
by Evandro Pomatti (JIRA)
[ https://issues.jboss.org/browse/WFLY-7691?page=com.atlassian.jira.plugin.... ]
Evandro Pomatti updated WFLY-7691:
----------------------------------
Steps to Reproduce:
1) Create a JMS Queue:
{code:java}
jms-queue add --queue-address=customQueue --entries=java:jboss/jms/customQueue
{code}
2) Create a custom pool with a max-pool-size greater than the default (let's use 50):
{code:java}
/subsystem=ejb3/strict-max-bean-instance-pool=customPool/:add(max-pool-size=50,timeout=5,timeout-unit=MINUTES)
{code}
3) Create a WAR application with an MDB and assign the annotation @Pool("customPool") to the MDB and also @ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") to match the pool max size. Add a Thread.sleep to the onMessage() method to simulate.
{code:java}
@MessageDriven(name = "CustomMDB", activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/jms/customQueue"),
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") })
@Pool("customPool")
public class DummyListener implements MessageListener {
private static AtomicInteger count = new AtomicInteger();
@Override
public void onMessage(Message message) {
try {
System.out.println("Count: " + count.incrementAndGet());
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
{code}
@Pool annotation needs the following dependency:
{code:xml}
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.2.0.Final</version>
<scope>provided</scope>
</dependency>
{code}
4) Deploy the application and send 50 messages to the queue.
{code:java}
@Stateless
@Path("/enqueue")
public class DummyResource {
@Inject
@JMSConnectionFactory("java:/JmsXA")
private JMSContext context;
@Resource(mappedName = "java:jboss/jms/customQueue")
private Queue customQueue;
@GET
public void get() {
IntStream.range(0, 50).forEach(i -> context.createProducer().send(customQueue, "message"));
}
}
{code}
Only 15 messages will be dequeued at a time, not the 50 specified in the pool.
I also don't know why 15, since the default MDB pool is configured with a max size of 20.
was:
1) Create a JMS Queue:
{code:java}
jms-queue add --queue-address=customQueue --entries=java:jboss/jms/customQueue
{code}
2) Create a custom pool with a max-pool-size greater than the default (let's use 50):
{code:java}
/subsystem=ejb3/strict-max-bean-instance-pool=customPool/:add(max-pool-size=50,timeout=5,timeout-unit=MINUTES)
{code}
3) Create a WAR application with an MDB and assign the annotation @Pool("customPool") to the MDB and also @ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") to match the pool max size. Add a Thread.sleep to the onMessage() method to simulate.
{code:java}
@MessageDriven(name = "CustomMDB", activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/jms/customQueue"),
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") })
@Pool("customPool")
public class DummyListener implements MessageListener {
private static AtomicInteger count = new AtomicInteger();
@Override
public void onMessage(Message message) {
try {
System.out.println("Count: " + count.incrementAndGet());
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
{code}
@Pool annotation needs the following dependency:
{code:java}
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.2.0.Final</version>
<scope>provided</scope>
</dependency>
{code}
4) Deploy the application and send 50 messages to the queue.
@Stateless
@Path("/enqueue")
public class DummyResource {
@Inject
@JMSConnectionFactory("java:/JmsXA")
private JMSContext context;
@Resource(mappedName = "java:jboss/jms/customQueue")
private Queue customQueue;
@GET
public void get() {
IntStream.range(0, 50).forEach(i -> context.createProducer().send(customQueue, "message"));
}
}
Only 15 queue messages will be dequeue at a time, not the 50 that was specified in the pool.
I also don't know why 15, since the default MDB pool is configured to be 20.
> Custom pool not working when applied to Message-Driven Bean
> -----------------------------------------------------------
>
> Key: WFLY-7691
> URL: https://issues.jboss.org/browse/WFLY-7691
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Reporter: Evandro Pomatti
> Labels: mdb, pool
>
> I tried to configure a custom instance pool for an MDB in order to increase performance, but it seems the association is being ignored by WildFly.
> When I associate my custom pool, with its size increased to 50, with my MDB, only 15 messages keep getting dequeued at a time.
> I also don't know why 15, since the default MDB pool "mdb-strict-max-pool" has a configured max size of 20.
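One thing to watch when reproducing this: the counter in the listener counts total deliveries, not parallelism. A sketch of how to measure how many onMessage() invocations actually run at once, which is the number the pool limits (a plain executor stands in for the MDB instance pool here):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Tracks in-flight invocations and keeps the high-water mark, so the observed
// peak can be compared against the expected pool size (50, 20, or 15).
public class ConcurrencyProbe {
    private final AtomicInteger inFlight = new AtomicInteger();
    private final AtomicInteger peak = new AtomicInteger();

    public void onMessage(Runnable work) {
        int now = inFlight.incrementAndGet();
        peak.accumulateAndGet(now, Math::max); // remember the busiest moment
        try {
            work.run();                        // the real body would process the Message
        } finally {
            inFlight.decrementAndGet();
        }
    }

    public int peak() { return peak.get(); }
}
```

Printing `peak()` after draining the queue shows directly whether the ceiling is 15, 20, or 50.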
[JBoss JIRA] (WFLY-7691) Custom pool not working when applied to Message-Driven Bean
by Evandro Pomatti (JIRA)
[ https://issues.jboss.org/browse/WFLY-7691?page=com.atlassian.jira.plugin.... ]
Evandro Pomatti updated WFLY-7691:
----------------------------------
Steps to Reproduce:
1) Create a JMS Queue:
{code:java}
jms-queue add --queue-address=customQueue --entries=java:jboss/jms/customQueue
{code}
2) Create a custom pool with a max-pool-size greater than the default (let's use 50):
{code:java}
/subsystem=ejb3/strict-max-bean-instance-pool=customPool/:add(max-pool-size=50,timeout=5,timeout-unit=MINUTES)
{code}
3) Create a WAR application with an MDB and assign the annotation @Pool("customPool") to the MDB and also @ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") to match the pool max size. Add a Thread.sleep to the onMessage() method to simulate.
{code:java}
@MessageDriven(name = "CustomMDB", activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/jms/customQueue"),
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") })
@Pool("customPool")
public class DummyListener implements MessageListener {
private static AtomicInteger count = new AtomicInteger();
@Override
public void onMessage(Message message) {
try {
System.out.println("Count: " + count.incrementAndGet());
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
{code}
@Pool annotation needs the following dependency:
{code:java}
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.2.0.Final</version>
<scope>provided</scope>
</dependency>
{code}
4) Deploy the application and send 50 messages to the queue.
@Stateless
@Path("/enqueue")
public class DummyResource {
@Inject
@JMSConnectionFactory("java:/JmsXA")
private JMSContext context;
@Resource(mappedName = "java:jboss/jms/customQueue")
private Queue customQueue;
@GET
public void get() {
IntStream.range(0, 50).forEach(i -> context.createProducer().send(customQueue, "message"));
}
}
Only 15 messages will be dequeued at a time, not the 50 specified in the pool.
I also don't know why 15, since the default MDB pool is configured with a max size of 20.
was:
1) Create a JMS Queue:
{code:java}
jms-queue add --queue-address=customQueue --entries=java:jboss/jms/customQueue
{code}
2) Create a custom pool with a max-pool-size greater than the default (let's use 50):
{code:java}
/subsystem=ejb3/strict-max-bean-instance-pool=customPool/:add(max-pool-size=50,timeout=5,timeout-unit=MINUTES)
{code}
3) Create a WAR application with an MDB and assign the annotation @Pool("customPool")
{code}
to the MDB and also
{code:java}
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50")
{code}
to match the pool max size. Add a Thread.sleep to the onMessage() method to simulate.
{code:java}
@MessageDriven(name = "CustomMDB", activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/jms/customQueue"),
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") })
@Pool("customPool")
public class DummyListener implements MessageListener {
private static AtomicInteger count = new AtomicInteger();
@Override
public void onMessage(Message message) {
try {
System.out.println("Count: " + count.incrementAndGet());
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
{code}
@Pool annotation needs the following dependency:
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.2.0.Final</version>
<scope>provided</scope>
</dependency>
4) Deploy the application and send 50 messages to the queue.
@Stateless
@Path("/enqueue")
public class DummyResource {
@Inject
@JMSConnectionFactory("java:/JmsXA")
private JMSContext context;
@Resource(mappedName = "java:jboss/jms/customQueue")
private Queue customQueue;
@GET
public void get() {
IntStream.range(0, 50).forEach(i -> context.createProducer().send(customQueue, "message"));
}
}
Only 15 queue messages will be dequeue at a time, not the 50 that was specified in the pool.
I also don't know why 15, since the default MDB pool is configured to be 20.
> Custom pool not working when applied to Message-Driven Bean
> -----------------------------------------------------------
>
> Key: WFLY-7691
> URL: https://issues.jboss.org/browse/WFLY-7691
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Reporter: Evandro Pomatti
> Labels: mdb, pool
>
> I tried to configure a custom instance pool for an MDB in order to increase performance, but it seems the association is being ignored by WildFly.
> When I associate my custom pool, with its size increased to 50, with my MDB, only 15 messages keep getting dequeued at a time.
> I also don't know why 15, since the default MDB pool "mdb-strict-max-pool" has a configured max size of 20.
[JBoss JIRA] (WFLY-7691) Custom pool not working when applied to Message-Driven Bean
by Evandro Pomatti (JIRA)
[ https://issues.jboss.org/browse/WFLY-7691?page=com.atlassian.jira.plugin.... ]
Evandro Pomatti updated WFLY-7691:
----------------------------------
Steps to Reproduce:
1) Create a JMS Queue:
{code:java}
jms-queue add --queue-address=customQueue --entries=java:jboss/jms/customQueue
{code}
2) Create a custom pool with a max-pool-size greater than the default (let's use 50):
{code:java}
/subsystem=ejb3/strict-max-bean-instance-pool=customPool/:add(max-pool-size=50,timeout=5,timeout-unit=MINUTES)
{code}
3) Create a WAR application with an MDB and assign the annotation
{code:java}
@Pool("customPool")
{code}
to the MDB and also
{code:java}
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50")
{code}
to match the pool max size. Add a Thread.sleep to the onMessage() method to simulate.
{code:java}
@MessageDriven(name = "CustomMDB", activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/jms/customQueue"),
@ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "50") })
@Pool("customPool")
public class DummyListener implements MessageListener {
private static AtomicInteger count = new AtomicInteger();
@Override
public void onMessage(Message message) {
try {
System.out.println("Count: " + count.incrementAndGet());
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
{code}
The @Pool annotation requires the following dependency:
{code:xml}
<dependency>
    <groupId>org.jboss.ejb3</groupId>
    <artifactId>jboss-ejb3-ext-api</artifactId>
    <version>2.2.0.Final</version>
    <scope>provided</scope>
</dependency>
{code}
4) Deploy the application and send 50 messages to the queue:
{code:java}
import java.util.stream.IntStream;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Stateless
@Path("/enqueue")
public class DummyResource {

    @Inject
    @JMSConnectionFactory("java:/JmsXA")
    private JMSContext context;

    @Resource(mappedName = "java:jboss/jms/customQueue")
    private Queue customQueue;

    @GET
    public void get() {
        IntStream.range(0, 50).forEach(i -> context.createProducer().send(customQueue, "message"));
    }
}
{code}
Only 15 queue messages will be dequeued at a time, not the 50 that was specified in the pool.
I also don't know why 15, since the default MDB pool max size is configured to be 20.
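One possible explanation for the 15 (my assumption, not confirmed in this report): concurrent MDB deliveries are bounded by both the bean-instance pool and the resource adapter's JMS session count, and the HornetQ resource adapter's activation property is {{maxSession}} (singular) with a default of 15. If the plural {{maxSessions}} is not recognized, the RA default of 15 would cap delivery regardless of the pool size. A minimal model of that bound:

```java
// Hypothetical model: concurrent deliveries cannot exceed either the
// bean-instance pool size or the resource adapter's session count,
// so the effective bound is the minimum of the two.
class MdbConcurrencyModel {
    static int effectiveConcurrency(int maxPoolSize, int raSessions) {
        return Math.min(maxPoolSize, raSessions);
    }
}
```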
> Custom pool not working when applied to Message-Driven Bean
> -----------------------------------------------------------
>
> Key: WFLY-7691
> URL: https://issues.jboss.org/browse/WFLY-7691
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Reporter: Evandro Pomatti
> Labels: mdb, pool
>
> I tried to configure a custom pool for an MDB in order to increase performance, but it seems that the association is being ignored by WildFly.
> When I associate my custom pool, with its size increased to 50, with my MDB, only 15 messages get dequeued at a time.
> I also don't know why 15, since the default MDB pool "mdb-strict-max-pool" max size is configured to be 20.