[JBoss JIRA] (WFLY-6173) Classes not unloaded after undeployment
by Joey Wang (JIRA)
[ https://issues.jboss.org/browse/WFLY-6173?page=com.atlassian.jira.plugin.... ]
Joey Wang commented on WFLY-6173:
---------------------------------
Any updates, please? :)
> Classes not unloaded after undeployment
> ---------------------------------------
>
> Key: WFLY-6173
> URL: https://issues.jboss.org/browse/WFLY-6173
> Project: WildFly
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.2.0.Final, 10.0.0.Final
> Reporter: Joey Wang
> Assignee: Jason Greene
> Priority: Minor
> Attachments: memory-leak.zip
>
>
> I deployed a small web application with a single JSF page and one managed bean, accessed the page, and then undeployed the application. Monitoring with Java VisualVM showed that the classes of this application were never unloaded, and running with the '-XX:+TraceClassUnloading' JVM option confirmed that the classes were not unloaded.
> Checking the heap dump, I found one instance for each enum item (the managed bean has an enum-typed field that is always initialized when the managed bean is constructed) and one array instance containing these enum instances.
> Please refer to the attachment for the same application. I started to verify this classloader memory leak because we found that hot redeployment of our real application swallows some memory each time; after many redeployments the server ran short of memory.
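The mechanism behind this kind of leak can be sketched in plain Java (hypothetical names; this is not WildFly code): any server-scoped reference that survives undeployment keeps the referenced object strongly reachable, the object keeps its class alive, and the class keeps the whole deployment's ClassLoader alive.

```java
// Hypothetical sketch of a classloader leak: a registry owned by a parent
// (server) classloader keeps holding an application object after undeployment.
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Lives in the server classloader, so it survives undeployment.
    static final List<Object> SERVER_REGISTRY = new ArrayList<>();

    // Enum constants are strongly reachable via the enum class itself.
    enum Feature { A, B }

    // Simulated registration at deploy time; after "undeployment" the registry
    // still references the object, so its class and classloader stay reachable.
    static boolean registerAndCheck() {
        Object appScoped = Feature.A; // stands in for a managed-bean field
        SERVER_REGISTRY.add(appScoped);
        return !SERVER_REGISTRY.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println("still referenced: " + registerAndCheck());
    }
}
```

Until such a registration is removed, -XX:+TraceClassUnloading will never report the application's classes, exactly as observed above.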
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6514) DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6514?page=com.atlassian.jira.plugin.... ]
Stuart Douglas commented on WFLY-6514:
--------------------------------------
It was never the intention for session listeners to be constantly registered and unregistered dynamically through the manager; doing so will not have good performance characteristics even if there is no leak.
> DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
> ----------------------------------------------------------------------------------
>
> Key: WFLY-6514
> URL: https://issues.jboss.org/browse/WFLY-6514
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Juan AMAT
> Assignee: Paul Ferraro
> Priority: Blocker
>
> During performance testing of our app we noticed a continuous increase in CPU utilization while the load was constant.
> It turns out that for each session an Undertow SessionListener was registered at login and was never unregistered (neither on request.logout nor on session.invalidate).
> As a result, all operations on Undertow SessionListeners take more CPU each time a listener is added, since we loop over those listeners.
> First of all, we should register only one such SessionListener (if needed) per webapp.
> But even in the current implementation, the listener was not unregistered when request.logout was called.
> It is unregistered when session.invalidate is called, but the listener passed in is not the same instance that was registered, so nothing is removed.
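The two problems described above can be sketched with hypothetical types (not Undertow's actual API): registrations grow by one per login, and removal with a freshly constructed listener is a silent no-op because list removal relies on equals/identity.

```java
// Illustrative sketch, not Undertow code: per-login listener registration
// plus a failed unregistration with a different listener instance.
import java.util.ArrayList;
import java.util.List;

public class ListenerSketch {
    interface SessionListener { }

    static final List<SessionListener> LISTENERS = new ArrayList<>();

    // One new listener instance per login; every session event must then
    // iterate over all of them, so CPU cost grows with each registration.
    static int registerPerLogin(int logins) {
        int before = LISTENERS.size();
        for (int i = 0; i < logins; i++) {
            LISTENERS.add(new SessionListener() { });
        }
        return LISTENERS.size() - before;
    }

    // A freshly created listener is never equal to a registered one,
    // so remove() finds nothing and returns false: a silent no-op.
    static boolean unregisterFresh() {
        return LISTENERS.remove(new SessionListener() { });
    }

    public static void main(String[] args) {
        System.out.println("registered: " + registerPerLogin(3));
        System.out.println("removed: " + unregisterFresh());
    }
}
```

Registering a single shared listener per webapp, and unregistering the exact instance that was registered, avoids both the growth and the no-op removal.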
[JBoss JIRA] (WFLY-6514) DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6514?page=com.atlassian.jira.plugin.... ]
Stuart Douglas reassigned WFLY-6514:
------------------------------------
Assignee: Paul Ferraro (was: Stuart Douglas)
> DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
> ----------------------------------------------------------------------------------
>
> Key: WFLY-6514
> URL: https://issues.jboss.org/browse/WFLY-6514
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Juan AMAT
> Assignee: Paul Ferraro
> Priority: Blocker
>
> During performance testing of our app we noticed a continuous increase in CPU utilization while the load was constant.
> It turns out that for each session an Undertow SessionListener was registered at login and was never unregistered (neither on request.logout nor on session.invalidate).
> As a result, all operations on Undertow SessionListeners take more CPU each time a listener is added, since we loop over those listeners.
> First of all, we should register only one such SessionListener (if needed) per webapp.
> But even in the current implementation, the listener was not unregistered when request.logout was called.
> It is unregistered when session.invalidate is called, but the listener passed in is not the same instance that was registered, so nothing is removed.
[JBoss JIRA] (WFCORE-1469) Support match-comparison operation for if-command
by Thomas Darimont (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1469?page=com.atlassian.jira.plugi... ]
Thomas Darimont updated WFCORE-1469:
------------------------------------
Description:
Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
- equal (==)
- not equal (!=)
- less than (<)
- not less than (>=)
- greater than (>)
- not greater than (<=)
I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
where the left operand is a string to match against the right operand, which is a regex string.
This would enable a simple way to do feature-flag detection via partial regex matches.
The following sequence demonstrates this use case:
{code}
-Dfeatures="activemq jgroups"
{code}
{code}
if (result ~= ".*activemq.*") of /:resolve-expression(expression=${features})
echo configuring activemq
end-if
if (result ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
echo configuring jgroups
end-if
if (result ~= ".*postgres.*") of /:resolve-expression(expression=${features})
echo configuring postgres
end-if
{code}
Example:
{code}
-Dfeatures="feature1 feature2"
[default@local:9990 /] if (result ~= ".*feature1.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature1
[default@local:9990 /] end-if
configuring feature1
[default@local:9990 /] if (result ~= ".*feature2.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature2
[default@local:9990 /] end-if
configuring feature2
[default@local:9990 /] if (result ~= ".*feature3.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature3
[default@local:9990 /] end-if
[default@local:9990 /]
{code}
was:
Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
- equal (==)
- not equal (!=)
- less than (<)
- not less than (>=)
- greater than (>)
- not greater than (<=)
I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
where the left operand is a string to match against the right operand, which is a regex string.
This would enable a simple way to do feature-flag detection via partial regex matches.
The following sequence demonstrates this use case:
{code}
-Dfeatures="activemq jgroups"
{code}
{code}
if (features ~= ".*activemq.*") of /:resolve-expression(expression=${features})
echo configuring activemq
end-if
if (features ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
echo configuring jgroups
end-if
if (features ~= ".*postgres.*") of /:resolve-expression(expression=${features})
echo configuring postgres
end-if
{code}
Example:
{code}
-Dfeatures="feature1 feature2"
[default@local:9990 /] if (features ~= ".*feature1.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature1
[default@local:9990 /] end-if
configuring feature1
[default@local:9990 /] if (features ~= ".*feature2.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature2
[default@local:9990 /] end-if
configuring feature2
[default@local:9990 /] if (features ~= ".*feature3.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature3
[default@local:9990 /] end-if
[default@local:9990 /]
{code}
> Support match-comparison operation for if-command
> -------------------------------------------------
>
> Key: WFCORE-1469
> URL: https://issues.jboss.org/browse/WFCORE-1469
> Project: WildFly Core
> Issue Type: Feature Request
> Components: CLI
> Reporter: Thomas Darimont
> Assignee: Alexey Loubyansky
> Priority: Minor
>
> Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
> Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
> - equal (==)
> - not equal (!=)
> - less than (<)
> - not less than (>=)
> - greater than (>)
> - not greater than (<=)
> I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
> where the left operand is a string to match against the right operand, which is a regex string.
> This would enable a simple way to do feature-flag detection via partial regex matches.
> The following sequence demonstrates this use case:
> {code}
> -Dfeatures="activemq jgroups"
> {code}
> {code}
> if (result ~= ".*activemq.*") of /:resolve-expression(expression=${features})
> echo configuring activemq
> end-if
> if (result ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
> echo configuring jgroups
> end-if
> if (result ~= ".*postgres.*") of /:resolve-expression(expression=${features})
> echo configuring postgres
> end-if
> {code}
> Example:
> {code}
> -Dfeatures="feature1 feature2"
> [default@local:9990 /] if (result ~= ".*feature1.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature1
> [default@local:9990 /] end-if
> configuring feature1
> [default@local:9990 /] if (result ~= ".*feature2.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature2
> [default@local:9990 /] end-if
> configuring feature2
> [default@local:9990 /] if (result ~= ".*feature3.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature3
> [default@local:9990 /] end-if
> [default@local:9990 /]
> {code}
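A minimal sketch of the semantics the proposed {{~=}} operator would need, using java.util.regex. Note that Pattern.matches anchors the regex to the whole input, which is why the examples above wrap the patterns in {{.*...*}}.

```java
// Sketch of the proposed ~= operator: left operand is the resolved string,
// right operand is a regex; a partial match is expressed with .* wrappers.
import java.util.regex.Pattern;

public class MatchSketch {
    static boolean matches(String value, String regex) {
        // Pattern.matches requires the regex to cover the entire input.
        return Pattern.matches(regex, value);
    }

    public static void main(String[] args) {
        String features = "activemq jgroups"; // resolved from ${features}
        System.out.println(matches(features, ".*activemq.*")); // true
        System.out.println(matches(features, ".*jgroups.*"));  // true
        System.out.println(matches(features, ".*postgres.*")); // false
    }
}
```

With these semantics, each feature-flag branch in the CLI examples would fire exactly when the flag name occurs anywhere in the resolved property value.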
[JBoss JIRA] (WFLY-6168) Huge burst of java.lang.IllegalArgumentException: Segments [18] are not owned by <slave-node>:<server-name>
by Stijn de Witt (JIRA)
[ https://issues.jboss.org/browse/WFLY-6168?page=com.atlassian.jira.plugin.... ]
Stijn de Witt commented on WFLY-6168:
-------------------------------------
I'm seeing this issue on WildFly 10.0.0.Final.
I am (was) running two WildFly 10 nodes ('gears') on OpenShift.
I have set the web cache to use optimistic locking; that is about the only change I made to the default config that OpenShift's WildFly 10 cartridge ships with.
Below is a snippet from the logs. The servers had gotten into an infinite loop printing this exception until the logs exhausted the available cartridge space.
{{2016-04-12 14:59:48,168 WARN [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p16-t2) ISPN000209: Failed to retrieve transactions for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79] of cache routing from node 5708481e2d52714ecd000166-brautschloss.rhcloud.com: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 5708481e2d52714ecd000166-brautschloss.rhcloud.com, see cause for remote stack trace
at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:760)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$72(JGroupsTransport.java:599)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:30)
at org.jgroups.blocks.Request.checkCompletion(Request.java:169)
at org.jgroups.blocks.UnicastRequest.receiveResponse(UnicastRequest.java:83)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:398)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:684)
at org.jgroups.JChannel.up(JChannel.java:738)
at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:119)
at org.jgroups.stack.Protocol.up(Protocol.java:374)
at org.jgroups.protocols.FORK.up(FORK.java:114)
at org.jgroups.protocols.RSVP.up(RSVP.java:201)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:394)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1045)
at org.jgroups.protocols.AUTH.up(AUTH.java:148)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
at org.jgroups.protocols.FD.up(FD.java:260)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)
at org.jgroups.protocols.Discovery.up(Discovery.java:295)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1577)
at org.jgroups.protocols.TP$MyHandler.run(TP.java:1796)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Node 5708481e2d52714ecd000166-brautschloss.rhcloud.com is not a member
at org.infinispan.distribution.ch.impl.DefaultConsistentHash.getSegmentsForOwner(DefaultConsistentHash.java:115)
at org.infinispan.distribution.group.GroupingConsistentHash.getSegmentsForOwner(GroupingConsistentHash.java:67)
at org.infinispan.statetransfer.StateProviderImpl.getTransactionsForSegments(StateProviderImpl.java:163)
at org.infinispan.statetransfer.StateRequestCommand.perform(StateRequestCommand.java:67)
at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:92)
at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:34)
... 3 more
}}
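The failure mode can be illustrated with a toy consistent-hash view (illustrative code, not Infinispan's): asking for the segments of a node that is absent from the current view throws, and during a view change the requester and the responder can briefly disagree about membership, which is the situation the log above shows.

```java
// Toy consistent-hash view: members map to owned segments; requesting
// segments for a non-member fails, mirroring the exception in the log.
import java.util.List;
import java.util.Map;

public class SegmentsSketch {
    static List<Integer> segmentsForOwner(Map<String, List<Integer>> view, String node) {
        List<Integer> segments = view.get(node);
        if (segments == null) {
            // Same shape as the logged error: the requester still believes
            // the node owns segments, but this view no longer contains it.
            throw new IllegalArgumentException("Node " + node + " is not a member");
        }
        return segments;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> view =
                Map.of("node-a", List.of(0, 1), "node-b", List.of(2, 3));
        System.out.println(segmentsForOwner(view, "node-a"));
        try {
            segmentsForOwner(view, "node-c"); // stale member from an old view
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```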
> Huge burst of java.lang.IllegalArgumentException: Segments [18] are not owned by <slave-node>:<server-name>
> -----------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6168
> URL: https://issues.jboss.org/browse/WFLY-6168
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.CR4
> Reporter: Thiago Presa
> Assignee: Paul Ferraro
>
> We have a WildFly 10 CR4 cluster in domain mode with 2 slaves, and we're getting a huge burst of the following log entry:
> 10:57:31,065 WARN [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p13-t16) ISPN000209: Failed to retrieve transactions for segments [18] of cache dist from node <slave-host>:<server-name>: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from <slave-host>:<server-name>, see cause for remote stack trace
> at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:750)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$80(JGroupsTransport.java:589)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:30)
> at org.jgroups.blocks.Request.checkCompletion(Request.java:169)
> at org.jgroups.blocks.UnicastRequest.receiveResponse(UnicastRequest.java:83)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:398)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:684)
> at org.jgroups.JChannel.up(JChannel.java:738)
> at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:119)
> at org.jgroups.stack.Protocol.up(Protocol.java:374)
> at org.jgroups.protocols.FORK.up(FORK.java:114)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:394)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1045)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
> at org.jgroups.protocols.FD.up(FD.java:260)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:310)
> at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)
> at org.jgroups.protocols.Discovery.up(Discovery.java:295)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1577)
> at org.jgroups.protocols.TP$MyHandler.run(TP.java:1796)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Segments [18] are not owned by <slave-host>:<server-name>
> at org.infinispan.statetransfer.StateProviderImpl.getTransactionsForSegments(StateProviderImpl.java:166)
> at org.infinispan.statetransfer.StateRequestCommand.perform(StateRequestCommand.java:67)
> at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:92)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:34)
> ... 3 more
> Oddly, server-name is not the name of the server where this log entry is found. What may be causing this?
[JBoss JIRA] (WFLY-6515) Log message when stopping listeners doesn't point to port shifted by port offset definition
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6515?page=com.atlassian.jira.plugin.... ]
Stuart Douglas moved JBEAP-4191 to WFLY-6515:
---------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-6515 (was: JBEAP-4191)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Web (Undertow)
(was: Web (Undertow))
Target Release: (was: 7.backlog.GA)
Affects Version/s: (was: 7.0.0.CR1)
> Log message when stopping listeners doesn't point to port shifted by port offset definition
> -------------------------------------------------------------------------------------------
>
> Key: WFLY-6515
> URL: https://issues.jboss.org/browse/WFLY-6515
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Reporter: Stuart Douglas
> Assignee: Stuart Douglas
> Priority: Minor
>
> When stopping the server, the log message for an Undertow listener being stopped reports the configured port, which is not the actual port in use when the server was started with a port offset.
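A sketch of the arithmetic the message should use (names are illustrative): the effective port is the configured port plus the socket-binding port offset.

```java
// Illustrative only: compute the actual port a listener is bound to when
// a socket-binding port offset is in effect.
public class PortOffsetSketch {
    static int effectivePort(int configuredPort, int portOffset) {
        return configuredPort + portOffset;
    }

    public static void main(String[] args) {
        // e.g. a server started with a port offset of 100
        System.out.println(effectivePort(8080, 100)); // 8180
    }
}
```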
[JBoss JIRA] (WFLY-6514) DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
by Juan AMAT (JIRA)
Juan AMAT created WFLY-6514:
-------------------------------
Summary: DistributableSingleSignOn register an SessionIdChangeListener for each SSO session
Key: WFLY-6514
URL: https://issues.jboss.org/browse/WFLY-6514
Project: WildFly
Issue Type: Bug
Components: Web (Undertow)
Affects Versions: 10.0.0.Final
Reporter: Juan AMAT
Assignee: Stuart Douglas
Priority: Blocker
During performance testing of our app we noticed a continuous increase in CPU utilization while the load was constant.
It turns out that for each session an Undertow SessionListener was registered at login and was never unregistered (neither on request.logout nor on session.invalidate).
As a result, all operations on Undertow SessionListeners take more CPU each time a listener is added, since we loop over those listeners.
First of all, we should register only one such SessionListener (if needed) per webapp.
But even in the current implementation, the listener was not unregistered when request.logout was called.
It is unregistered when session.invalidate is called, but the listener passed in is not the same instance that was registered, so nothing is removed.
[JBoss JIRA] (JGRP-1957) S3_PING: Nodes never removed from .list file
by Mitchell Ackerman (JIRA)
[ https://issues.jboss.org/browse/JGRP-1957?page=com.atlassian.jira.plugin.... ]
Mitchell Ackerman commented on JGRP-1957:
-----------------------------------------
Unfortunately I seem to be running into the same or a similar issue, even though I've updated to JGroups 3.6.8 and am using the settings you suggest in this (and other) posts.
I'm running in AWS using S3_PING, JDK 1.8.0_66, JGroups 3.6.8, and Tomcat 8.0.28.
After terminating servers, mostly non-coordinators, I'm left with an S3 bucket containing lots of zombies (there are only 2 active members). Here is the file after the system has been stable for over an hour, followed by my JGroups config file. Any suggestions?
Thanks, Mitchell
ip-10-89-1-26-8729 72597f74-8a10-04fb-b397-22a3ed35da84 10.89.1.26:7800 F
ip-10-89-0-18-38996 a5325932-e9cd-b281-b367-e2d86845aa75 10.89.0.18:7800 F
ip-10-89-1-62-4868 ef73921a-2265-50a8-95d4-ebb8cae96944 10.89.1.62:7800 T
ip-10-89-1-27-11915 5a0b4a26-b542-56f2-801a-420b5d7dbf34 10.89.1.27:7800 F
ip-10-89-1-19-2542 c30c294d-69b0-b6ca-7010-bf89d1eb8f6f 10.89.1.19:7800 F
ip-10-89-0-62-56914 fa2262c3-9097-7101-b225-24d8a52d905e 10.89.0.62:7800 F
ip-10-89-0-28-32680 5d03124f-b061-becb-d793-6067bf0d7945 10.89.0.28:7800 F
ip-10-89-1-26-51248 07cc18aa-381b-fb5d-0ad6-0612f7a5e9bb 10.89.1.26:7800 F
ip-10-89-1-27-39755 1f9be940-2228-2181-ef80-4a83d319a2b3 10.89.1.27:7800 F
ip-10-89-0-28-41919 4ab543f9-712e-645d-2f20-05304c98a23b 10.89.0.28:7800 F
ip-10-89-1-27-10428 d5b0cb38-75e0-b3e1-c053-66b053b0fb05 10.89.1.27:7800 F
my JGroups config file is:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<TCP
bind_port="7800"
port_range="30"
recv_buf_size="20000000"
send_buf_size="1000000"
max_bundle_size="64000"
max_bundle_timeout="1000"
sock_conn_timeout="2000"
enable_diagnostics="false"
timer_type="new"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="1000"
timer.wheel_size="200"
timer.tick_time="50"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="100"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100000"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="10"
oob_thread_pool.max_threads="100"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="discard"
logical_addr_cache_expiration="1000"
logical_addr_cache_reaper_interval="10000"
/>
<S3_PING location="bob-s3-ping-dev" remove_all_files_on_view_change="true" remove_old_coords_on_view_change="true"/>
<MERGE3 max_interval="60000" min_interval="30000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="5"/>
<VERIFY_SUSPECT timeout="2000"/>
<pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
<UNICAST3/>
<pbcast.STABLE stability_delay="1500" desired_avg_gossip="50000" max_bytes="2m"/>
<pbcast.GMS print_local_addr="false" join_timeout="2500" max_bundling_time="50" view_bundling="true" max_join_attempts="${jgroups_max_join_attempts}"/>
<pbcast.STATE_TRANSFER />
<!-- top -->
<!-- /\ down -->
<!-- \/ up -->
</config>
> S3_PING: Nodes never removed from .list file
> --------------------------------------------
>
> Key: JGRP-1957
> URL: https://issues.jboss.org/browse/JGRP-1957
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.4
> Environment: JGroups client running on Mac OS X - Yosemite
> JDK 1.7.71
> Reporter: Nick Sawadsky
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 3.6.6
>
>
> I'm not 100% sure, but it seems like there might be a defect here.
> I'm using TCP, S3_PING, and MERGE3.
> I've set logical_addr_cache_max_size to 2 for testing purposes, although I don't think the value of this setting affects my test results.
> I start a single node, node A. Then I start a second node, node B.
> I then repeatedly shutdown and restart node B.
> Each time node B starts, a new row is added to the .list file stored in S3.
> But even if I continue this process for 15 minutes, old rows are never removed from the .list file, so it continues to grow in size.
> I've read the docs and mailing-list threads, so I'm aware that the list is not updated immediately when a member leaves. But I was expecting that on a view change, nodes no longer in the view would be marked for removal (line 2193 of TP.java), and that once logical_addr_cache_expiration was reached and the reaper kicked in, the next node join would purge the expired cache entries from the file.
> I dug into the code a bit, and what seems to be happening is that the MERGE3 protocol periodically generates a FIND_MBRS event. S3_PING retrieves the membership from the .list file, which includes expired nodes, and then all of these members are re-added to the logical address cache (line 157 of S3_PING.java, line 533 of Discovery.java, line 2263 of TP.java).
> So expired nodes are continually re-added to the logical address cache, which prevents them from ever being reaped.
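The feedback loop described above can be simulated in a few lines (simplified, hypothetical structure, not JGroups code): periodic rediscovery refreshes every listed entry's timestamp at an interval shorter than the expiration, so the reaper's age check never fires for a zombie that is still in the .list file.

```java
// Simplified model of the reported loop: a reaper removes entries older
// than EXPIRATION_MS, but periodic rediscovery re-adds every member found
// in the .list file, refreshing its timestamp before it can expire.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ReaperSketch {
    static final long EXPIRATION_MS = 1000;

    // Stands in for Discovery handling FIND_MBRS via S3_PING: every name
    // in the file is (re-)added, resetting its last-seen timestamp.
    static void rediscover(Map<String, Long> cache, Set<String> listFile, long now) {
        for (String member : listFile) {
            cache.put(member, now);
        }
    }

    // Stands in for the logical-address-cache reaper.
    static void reap(Map<String, Long> cache, long now) {
        cache.values().removeIf(ts -> now - ts > EXPIRATION_MS);
    }

    public static void main(String[] args) {
        Map<String, Long> cache = new HashMap<>();
        Set<String> listFile = Set.of("zombie-node"); // dead member still listed
        long t = 0;
        for (int i = 0; i < 5; i++) {
            rediscover(cache, listFile, t); // runs every 500 ms < EXPIRATION_MS
            t += 500;
            reap(cache, t);                 // entry is always "fresh"
        }
        System.out.println("zombie survives: " + cache.containsKey("zombie-node"));
    }
}
```

Once the rediscovery interval exceeds the expiration (or the file stops feeding expired names back in), the reaper removes the entry as expected.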
[JBoss JIRA] (WFCORE-1469) Support match-comparison operation for if-command
by Thomas Darimont (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1469?page=com.atlassian.jira.plugi... ]
Thomas Darimont updated WFCORE-1469:
------------------------------------
Git Pull Request: https://github.com/wildfly/wildfly-core/pull/1494
> Support match-comparison operation for if-command
> -------------------------------------------------
>
> Key: WFCORE-1469
> URL: https://issues.jboss.org/browse/WFCORE-1469
> Project: WildFly Core
> Issue Type: Feature Request
> Components: CLI
> Reporter: Thomas Darimont
> Assignee: Alexey Loubyansky
> Priority: Minor
>
> Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
> Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
> - equal (==)
> - not equal (!=)
> - less than (<)
> - not less than (>=)
> - greater than (>)
> - not greater than (<=)
> I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
> where the left operand is a string to match against the right operand, which is a regex string.
> This would enable a simple way to do feature-flag detection via partial regex matches.
> The following sequence demonstrates this use case:
> {code}
> -Dfeatures="activemq jgroups"
> {code}
> {code}
> if (features ~= ".*activemq.*") of /:resolve-expression(expression=${features})
> echo configuring activemq
> end-if
> if (features ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
> echo configuring jgroups
> end-if
> if (features ~= ".*postgres.*") of /:resolve-expression(expression=${features})
> echo configuring postgres
> end-if
> {code}
> Example:
> {code}
> -Dfeatures="feature1 feature2"
> [default@local:9990 /] if (features ~= ".*feature1.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature1
> [default@local:9990 /] end-if
> configuring feature1
> [default@local:9990 /] if (features ~= ".*feature2.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature2
> [default@local:9990 /] end-if
> configuring feature2
> [default@local:9990 /] if (features ~= ".*feature3.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature3
> [default@local:9990 /] end-if
> [default@local:9990 /]
> {code}
[JBoss JIRA] (WFCORE-1469) Support match-comparison operation for if-command
by Thomas Darimont (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1469?page=com.atlassian.jira.plugi... ]
Thomas Darimont updated WFCORE-1469:
------------------------------------
Description:
Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
- equal (==)
- not equal (!=)
- less than (<)
- not less than (>=)
- greater than (>)
- not greater than (<=)
I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
where the left operand is a string to match against the right operand, which is a regex string.
This would enable a simple way to do feature-flag detection via partial regex matches.
The following sequence demonstrates this use case:
{code}
-Dfeatures="activemq jgroups"
{code}
{code}
if (features ~= ".*activemq.*") of /:resolve-expression(expression=${features})
echo configuring activemq
end-if
if (features ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
echo configuring jgroups
end-if
if (features ~= ".*postgres.*") of /:resolve-expression(expression=${features})
echo configuring postgres
end-if
{code}
Example:
{code}
[default@local:9990 /] if (features ~= ".*feature1.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature1
[default@local:9990 /] end-if
configuring feature1
[default@local:9990 /] if (features ~= ".*feature2.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature2
[default@local:9990 /] end-if
configuring feature2
[default@local:9990 /] if (features ~= ".*feature3.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature3
[default@local:9990 /] end-if
[default@local:9990 /]
{code}
was:
Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
- equal (==)
- not equal (!=)
- less than (<)
- not less than (>=)
- greater than (>)
- not greater than (<=)
I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
where the left operand is a string to match against the right operand, which is a regex string.
This would enable a simple way to do feature-flag detection via partial regex matches.
The following sequence demonstrates this use case:
{code}
-Dfeatures="activemq jgroups"
{/code}
{code}
if (features ~= ".*activemq.*") of /:resolve-expression(expression=${features})
echo configuring activemq
end-if
if (features ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
echo configuring jgroups
end-if
if (features ~= ".*postgres.*") of /:resolve-expression(expression=${features})
echo configuring postgres
end-if
{/code}
Example:
{code}
[default@local:9990 /] if (features ~= ".*feature1.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature1
[default@local:9990 /] end-if
configuring feature1
[default@local:9990 /] if (features ~= ".*feature2.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature2
[default@local:9990 /] end-if
configuring feature2
[default@local:9990 /] if (features ~= ".*feature3.*") of /:resolve-expression(expression=${features})
[default@local:9990 /] echo configuring feature3
[default@local:9990 /] end-if
[default@local:9990 /]
{/code}
> Support match-comparison operation for if-command
> -------------------------------------------------
>
> Key: WFCORE-1469
> URL: https://issues.jboss.org/browse/WFCORE-1469
> Project: WildFly Core
> Issue Type: Feature Request
> Components: CLI
> Reporter: Thomas Darimont
> Assignee: Alexey Loubyansky
> Priority: Minor
>
> Conditional expressions are a great help for creating generic WildFly configurations with the CLI.
> Currently the {{comparison-operations}} for the {{if-expression}} are limited to:
> - equal (==)
> - not equal (!=)
> - less than (<)
> - not less than (>=)
> - greater than (>)
> - not greater than (<=)
> I'd propose an additional comparison operation: a regex-based match with the symbol {{~=}},
> where the left operand is a string to match against the right operand, which is a regex string.
> This would enable a simple way to do feature-flag detection via partial regex matches.
> The following sequence demonstrates this use case:
> {code}
> -Dfeatures="activemq jgroups"
> {code}
> {code}
> if (features ~= ".*activemq.*") of /:resolve-expression(expression=${features})
> echo configuring activemq
> end-if
> if (features ~= ".*jgroups.*") of /:resolve-expression(expression=${features})
> echo configuring jgroups
> end-if
> if (features ~= ".*postgres.*") of /:resolve-expression(expression=${features})
> echo configuring postgres
> end-if
> {code}
> Example:
> {code}
> [default@local:9990 /] if (features ~= ".*feature1.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature1
> [default@local:9990 /] end-if
> configuring feature1
> [default@local:9990 /] if (features ~= ".*feature2.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature2
> [default@local:9990 /] end-if
> configuring feature2
> [default@local:9990 /] if (features ~= ".*feature3.*") of /:resolve-expression(expression=${features})
> [default@local:9990 /] echo configuring feature3
> [default@local:9990 /] end-if
> [default@local:9990 /]
> {code}