[JBoss JIRA] (WFCORE-2691) Elytron modifiable realms should show existing identities in subsystem
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2691?page=com.atlassian.jira.plugi... ]
Brian Stansberry commented on WFCORE-2691:
------------------------------------------
[~honza889] AFAIK in the messaging subsystem case it is reading local resources. Those queues are part of the in-vm messaging broker. The biggest concern I have with this security realm stuff is that it introduces remote calls into the picture. That, and the potential for an extremely large number of resources. Granted, a messaging broker could have that problem as well.
Please start a wildfly-dev list thread on this. It is something that deserves a broadly visible discussion.
With /subsystem=messaging-activemq:read-resource(include-runtime=false,recursive=true) for core-address do you see the details of the core-address resources or just an empty placeholder? I expect the latter. We could look into eliminating even that.
For the JMX issue, I don't think include-runtime=false is relevant. A query for all mbeans will return all mbeans, runtime-only resources or not.
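That JMX behaviour can be demonstrated with the standard javax.management API (a minimal sketch against the local platform MBean server, not WildFly's model controller; the class name here is made up for illustration):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueryAllMBeans {
    public static void main(String[] args) {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // A null name pattern plus a null query expression matches every
        // registered MBean -- there is no include-runtime style filter
        // at the JMX level.
        Set<ObjectName> all = server.queryNames(null, null);
        if (all.isEmpty()) {
            throw new IllegalStateException("platform MBean server should never be empty");
        }
        System.out.println("query matched " + all.size() + " mbeans");
    }
}
```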
> Elytron modifiable realms should show existing identities in subsystem
> ----------------------------------------------------------------------
>
> Key: WFCORE-2691
> URL: https://issues.jboss.org/browse/WFCORE-2691
> Project: WildFly Core
> Issue Type: Bug
> Components: Security
> Affects Versions: 3.0.0.Beta15
> Reporter: Jan Kalina
> Assignee: Jan Kalina
> Priority: Blocker
> Labels: eap71_beta, filesystem-realm, security-realm
>
> Elytron {{filesystem-realm}} should load existing identities from the file system. The steps to reproduce result in:
> {noformat}
> [standalone@localhost:9990 /] /subsystem=elytron/filesystem-realm=realm/identity=user:read-identity
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0216: Management resource '[
> (\"subsystem\" => \"elytron\"),
> (\"filesystem-realm\" => \"realm\"),
> (\"identity\" => \"user\")
> ]' not found",
> "rolled-back" => true
> }
> [standalone@localhost:9990 /] /subsystem=elytron/filesystem-realm=realm/identity=user:add
> {
> "outcome" => "failed",
> "failure-description" => "WFLYELY01000: Identity with name [user] already exists.",
> "rolled-back" => true
> }
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ELY-1027) CS tool, Parameter --salt requires --iteration and vice versa
by Ilia Vassilev (JIRA)
[ https://issues.jboss.org/browse/ELY-1027?page=com.atlassian.jira.plugin.s... ]
Ilia Vassilev reassigned ELY-1027:
----------------------------------
Assignee: (was: Ilia Vassilev)
> CS tool, Parameter --salt requires --iteration and vice versa
> -------------------------------------------------------------
>
> Key: ELY-1027
> URL: https://issues.jboss.org/browse/ELY-1027
> Project: WildFly Elytron
> Issue Type: Bug
> Components: Credential Store
> Reporter: Hynek Švábek
>
> If I use only one of the parameters --salt or --iteration, that one is ignored and the resulting password is stored in clear text.
> {code}
> java -jar wildfly-elytron-tool.jar credential-store --add myalias --secret supersecretpassword --location="test.store" --uri "cr-store://test?modifiable=true;create=true;keyStoreType=JCEKS" --password mycspassword --summary --salt="abcdefgh"
> {code}
> The result of this command is:
> {code}
> Alias "myalias" has been successfully stored
> Credential store command summary:
> --------------------------------------
> /subsystem=elytron/credential-store=test:add(uri="cr-store://test?modifiable=true;create=true;keyStoreType=JCEKS",relative-to=jboss.server.data.dir,credential-reference={clear-text="mycspassword"})
> {code}
> *An error is expected here.*
> Please add this constraint: parameter --salt requires --iteration and vice versa.
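The requested constraint amounts to a mutual-requirement check on the two options. A minimal sketch of such a check (the class, method, and option names are hypothetical illustrations, not the actual elytron-tool code):

```java
import java.util.Map;

public class SaltIterationCheck {
    // Hypothetical validation helper: rejects the case where exactly one
    // of --salt / --iteration was supplied on the command line.
    static void validate(Map<String, String> options) {
        boolean hasSalt = options.containsKey("salt");
        boolean hasIteration = options.containsKey("iteration");
        if (hasSalt != hasIteration) {
            throw new IllegalArgumentException(
                "Parameter --salt requires --iteration and vice versa");
        }
    }

    public static void main(String[] args) {
        validate(Map.of("salt", "abcdefgh", "iteration", "1000")); // both given: ok
        validate(Map.of());                                        // neither given: ok
        try {
            validate(Map.of("salt", "abcdefgh"));                  // only one: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```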
[JBoss JIRA] (WFLY-8615) Unable to process received public key with ASYM_ENCRYPT
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-8615?page=com.atlassian.jira.plugin.... ]
Paul Ferraro moved JBEAP-10469 to WFLY-8615:
--------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-8615 (was: JBEAP-10469)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Clustering
(was: Clustering)
Affects Version/s: 11.0.0.Alpha1
(was: 7.1.0.DR16)
Affects Testing: (was: Regression)
> Unable to process received public key with ASYM_ENCRYPT
> -------------------------------------------------------
>
> Key: WFLY-8615
> URL: https://issues.jboss.org/browse/WFLY-8615
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Critical
>
> After starting a two server cluster with ASYM_ENCRYPT, the servers establish a view and then fail to send any more messages, because those can't be decrypted. One of the servers logs the following:
> {noformat}
> 15:29:42,058 WARN [org.jboss.as.clustering.jgroups.protocol.ASYM_ENCRYPT] (thread-14) node2: unable to process received public key
> {noformat}
> The servers throw ReplicationTimeoutExceptions after some timeout period.
> I'm using the following configuration for the Elytron key-store and ASYM_ENCRYPT:
> {noformat}
> /subsystem=elytron/key-store=jgroups-udp2:add(type=jks,path=/tmp/key3.keystore,credential-reference={clear-text=password}, required=true)
> /subsystem=jgroups/stack=udp2/protocol=ASYM_ENCRYPT:add(key-store=jgroups-udp2,key-alias=alias,credential-reference={clear-text=password})
> {noformat}
> and the following command to create the key stores:
> {noformat}
> keytool -genkeypair -alias alias -keypass password -storepass password -storetype jks -keystore key3.keystore -keyalg RSA
> {noformat}
[JBoss JIRA] (WFLY-8614) Clustering performance regression in read-heavy SYNC stress scenarios
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-8614?page=com.atlassian.jira.plugin.... ]
Paul Ferraro reassigned WFLY-8614:
----------------------------------
Assignee: Paul Ferraro (was: Radoslav Husar)
> Clustering performance regression in read-heavy SYNC stress scenarios
> ----------------------------------------------------------------------
>
> Key: WFLY-8614
> URL: https://issues.jboss.org/browse/WFLY-8614
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
>
> There is a performance regression in clustering stress tests that measure cluster performance under increasing load (number of concurrent clients). All tests use a 4-node EAP cluster and 5 load-generating nodes.
> During heavyread tests the majority of requests only read from the session and don't write to it. This simulates real client behaviour.
> Scenarios affected:
> stress-heavyread-session-dist-sync
> stress-heavyread-session-repl-sync
> There is a difference in the throughput graphs between 7.1.0.DR15 and 7.1.0.DR16 in both scenarios. This issue is probably causing the performance drop after the number of clients reaches 4000 (in the dist-sync scenario). There is also a significant number of sampling errors at that point (compared to no sampling errors in 7.1.0.DR15).
> See the performance report for heavyread-session-dist-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_14-28-37/str...
> and for heavyread-session-repl-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_14-40-09/str...
> There are results of another run of the heavyread-session-dist-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_16-10-26/str...
> When the number of sessions reaches 4400, clients start logging the following WARN message:
> {code}
> 2017/04/06 18:14:07:058 EDT [WARN ][Runner - 15] HOST dev220.mw.lab.eng.bos.redhat.com:rootProcess:c - Error sampling data: <org.jboss.smartfrog.loaddriver.RequestProcessingException: Invalid response code: 503 Content: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>503 Service Temporarily Unavailable</title>
> </head><body>
> <h1>Service Temporarily Unavailable</h1>
> <p>The server is temporarily unable to service your
> request due to maintenance downtime or capacity
> problems. Please try again later.</p>
> <hr>
> <address>Apache/2.2.26 (@VENDOR@) Server at dev224 Port 8080</address>
> </body></html>
> >
> {code}
> Link to client log:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-hea...
> At the same time the server starts logging:
> {code}
> 18:13:18,784 WARN [org.jgroups.protocols.pbcast.GMS] (thread-13) dev213: not member of view [dev212|4]; discarding it
> [JBossINF] 18:13:18,886 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-75) ISPN000136: Error executing command PrepareCommand, writing keys [SessionAccessMetaDataKey(fS0csQoHO532jD_tHe4XmfRoPQkCI0gg6xv0tvp-), SessionCreationMetaDataKey(fS0csQoHO532jD_tHe4XmfRoPQkCI0gg6xv0tvp-)]: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from dev212, see cause for remote stack trace
> [JBossINF] at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:821)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:648)
> [JBossINF] at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> [JBossINF] at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> [JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> [JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.RspListFuture.futureDone(RspListFuture.java:40)
> [JBossINF] at org.jgroups.blocks.Request.checkCompletion(Request.java:152)
> [JBossINF] at org.jgroups.blocks.GroupRequest.receiveResponse(GroupRequest.java:115)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:427)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:357)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:245)
> [JBossINF] at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:664)
> [JBossINF] at org.jgroups.JChannel.up(JChannel.java:738)
> [JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:120)
> [JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:380)
> [JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:114)
> [JBossINF] at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> [JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1037)
> [JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> [JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1077)
> [JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:792)
> [JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:433)
> [JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)
> [JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
> [JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
> [JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:325)
> [JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:292)
> [JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:296)
> [JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1657)
> [JBossINF] at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1872)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
> [JBossINF] at java.lang.Thread.run(Thread.java:745)
> [JBossINF] Caused by: org.infinispan.commons.CacheException: ISPN000332: Remote transaction GlobalTransaction:<dev213>:7915312:remote rolled back because originator is no longer in the cluster
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.verifyRemoteTransaction(TxInterceptor.java:518)
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:161)
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:145)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitPrepareCommand(TransactionSynchronizerInterceptor.java:39)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:229)
> [JBossINF] at org.infinispan.statetransfer.StateTransferInterceptor.visitPrepareCommand(StateTransferInterceptor.java:87)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
> [JBossINF] at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:79)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:335)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.perform(PrepareCommand.java:100)
> [JBossINF] at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:92)
> [JBossINF] at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:34)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [JBossINF] at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> [JBossINF] ... 1 more
> {code}
> Link to server log:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-hea...
[JBoss JIRA] (WFLY-8614) Clustering performance regression in read-heavy SYNC stress scenarios
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-8614?page=com.atlassian.jira.plugin.... ]
Paul Ferraro moved JBEAP-10468 to WFLY-8614:
--------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-8614 (was: JBEAP-10468)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Clustering
(was: Clustering)
Affects Version/s: 11.0.0.Alpha1
(was: 7.1.0.DR16)
> Clustering performance regression in read-heavy SYNC stress scenarios
> ----------------------------------------------------------------------
>
> Key: WFLY-8614
> URL: https://issues.jboss.org/browse/WFLY-8614
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Radoslav Husar
>
> There is a performance regression in clustering stress tests that measure cluster performance under increasing load (number of concurrent clients). All tests use a 4-node EAP cluster and 5 load-generating nodes.
> During heavyread tests the majority of requests only read from the session and don't write to it. This simulates real client behaviour.
> Scenarios affected:
> stress-heavyread-session-dist-sync
> stress-heavyread-session-repl-sync
> There is a difference in the throughput graphs between 7.1.0.DR15 and 7.1.0.DR16 in both scenarios. This issue is probably causing the performance drop after the number of clients reaches 4000 (in the dist-sync scenario). There is also a significant number of sampling errors at that point (compared to no sampling errors in 7.1.0.DR15).
> See the performance report for heavyread-session-dist-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_14-28-37/str...
> and for heavyread-session-repl-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_14-40-09/str...
> There are results of another run of the heavyread-session-dist-sync scenario:
> http://download.eng.brq.redhat.com/scratch/dcihak/2017-04-19_16-10-26/str...
> When the number of sessions reaches 4400, clients start logging the following WARN message:
> {code}
> 2017/04/06 18:14:07:058 EDT [WARN ][Runner - 15] HOST dev220.mw.lab.eng.bos.redhat.com:rootProcess:c - Error sampling data: <org.jboss.smartfrog.loaddriver.RequestProcessingException: Invalid response code: 503 Content: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>503 Service Temporarily Unavailable</title>
> </head><body>
> <h1>Service Temporarily Unavailable</h1>
> <p>The server is temporarily unable to service your
> request due to maintenance downtime or capacity
> problems. Please try again later.</p>
> <hr>
> <address>Apache/2.2.26 (@VENDOR@) Server at dev224 Port 8080</address>
> </body></html>
> >
> {code}
> Link to client log:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-hea...
> At the same time the server starts logging:
> {code}
> 18:13:18,784 WARN [org.jgroups.protocols.pbcast.GMS] (thread-13) dev213: not member of view [dev212|4]; discarding it
> [JBossINF] 18:13:18,886 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-75) ISPN000136: Error executing command PrepareCommand, writing keys [SessionAccessMetaDataKey(fS0csQoHO532jD_tHe4XmfRoPQkCI0gg6xv0tvp-), SessionCreationMetaDataKey(fS0csQoHO532jD_tHe4XmfRoPQkCI0gg6xv0tvp-)]: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from dev212, see cause for remote stack trace
> [JBossINF] at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:821)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:648)
> [JBossINF] at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> [JBossINF] at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> [JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> [JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.RspListFuture.futureDone(RspListFuture.java:40)
> [JBossINF] at org.jgroups.blocks.Request.checkCompletion(Request.java:152)
> [JBossINF] at org.jgroups.blocks.GroupRequest.receiveResponse(GroupRequest.java:115)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:427)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:357)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:245)
> [JBossINF] at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:664)
> [JBossINF] at org.jgroups.JChannel.up(JChannel.java:738)
> [JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:120)
> [JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:380)
> [JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:114)
> [JBossINF] at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> [JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1037)
> [JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> [JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1077)
> [JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:792)
> [JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:433)
> [JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)
> [JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
> [JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
> [JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:325)
> [JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:292)
> [JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:296)
> [JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1657)
> [JBossINF] at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1872)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
> [JBossINF] at java.lang.Thread.run(Thread.java:745)
> [JBossINF] Caused by: org.infinispan.commons.CacheException: ISPN000332: Remote transaction GlobalTransaction:<dev213>:7915312:remote rolled back because originator is no longer in the cluster
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.verifyRemoteTransaction(TxInterceptor.java:518)
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:161)
> [JBossINF] at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:145)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitPrepareCommand(TransactionSynchronizerInterceptor.java:39)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:229)
> [JBossINF] at org.infinispan.statetransfer.StateTransferInterceptor.visitPrepareCommand(StateTransferInterceptor.java:87)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
> [JBossINF] at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:79)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> [JBossINF] at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> [JBossINF] at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176)
> [JBossINF] at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:335)
> [JBossINF] at org.infinispan.commands.tx.PrepareCommand.perform(PrepareCommand.java:100)
> [JBossINF] at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:92)
> [JBossINF] at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:34)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [JBossINF] at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> [JBossINF] ... 1 more
> {code}
> Link to server log:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-hea...
[JBoss JIRA] (WFCORE-2620) Add ability to read computed runtime values of IO subsystem buffer-pool attributes
by Romain Pelisse (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2620?page=com.atlassian.jira.plugi... ]
Romain Pelisse updated WFCORE-2620:
-----------------------------------
Git Pull Request: (was: https://github.com/wildfly/wildfly-core/pull/2330)
> Add ability to read computed runtime values of IO subsystem buffer-pool attributes
> ----------------------------------------------------------------------------------
>
> Key: WFCORE-2620
> URL: https://issues.jboss.org/browse/WFCORE-2620
> Project: WildFly Core
> Issue Type: Enhancement
> Affects Versions: 3.0.0.Beta13
> Reporter: Romain Pelisse
> Assignee: Romain Pelisse
> Original Estimate: 2 days
> Remaining Estimate: 2 days
>
> In the IO subsystem there are some attributes which are calculated based on available system resources if they are not explicitly specified. These attributes are:
> * worker
> ** io-threads
> ** task-max-threads
> * buffer-pool
> ** buffer-size
> ** buffers-per-slice
> ** direct-buffers
> Currently these computed values are not visible to the user in the subsystem configuration, even with include-runtime=true.
> Showing these runtime values would definitely improve the UX.
> Worker attributes are covered by EAP7-616.
> This issue is about the buffer-pool attributes.
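The kind of computation involved can be sketched as follows (an illustrative heuristic only, deriving buffer settings from the JVM's available memory; the exact thresholds and formulas here are assumptions, not the actual IO-subsystem code):

```java
public class BufferPoolDefaults {
    // Illustrative sketch: compute unspecified buffer-pool attributes from
    // max heap size, similar in spirit to what the IO subsystem does at runtime.
    // All thresholds below are assumptions for demonstration purposes.
    public static void main(String[] args) {
        long maxMemory = Runtime.getRuntime().maxMemory();
        boolean directBuffers = maxMemory >= 64L * 1024 * 1024;            // assumption
        int bufferSize = maxMemory < 64L * 1024 * 1024 ? 512 : 16 * 1024;  // assumption
        int buffersPerSlice = maxMemory < 128L * 1024 * 1024 ? 10 : 20;    // assumption
        System.out.println("buffer-size=" + bufferSize
            + " buffers-per-slice=" + buffersPerSlice
            + " direct-buffers=" + directBuffers);
    }
}
```

Exposing values computed like these as runtime attributes would let read-resource(include-runtime=true) report what the server actually uses.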
[JBoss JIRA] (WFCORE-1536) NPE thrown during application redeployment, slaves taken offline
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1536?page=com.atlassian.jira.plugi... ]
RH Bugzilla Integration commented on WFCORE-1536:
-------------------------------------------------
Jiří Bílek <jbilek(a)redhat.com> changed the Status of [bug 1406562|https://bugzilla.redhat.com/show_bug.cgi?id=1406562] from ON_QA to VERIFIED
> NPE thrown during application redeployment, slaves taken offline
> ----------------------------------------------------------------
>
> Key: WFCORE-1536
> URL: https://issues.jboss.org/browse/WFCORE-1536
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Reporter: Matthew Casperson
> Assignee: ehsavoie Hugonnet
> Fix For: 3.0.0.Alpha2
>
>
> We have some development WildFly 10.0.0 servers running as slaves in a domain that frequently have WAR files redeployed. We have noticed that these slaves will often go offline after a redeployment of WAR files, with the following stack trace:
> {code}
> 2016-05-06 05:05:51,306 ERROR [org.jboss.as.controller.management-operation] (Host Controller Service Threads - 1012) WFLYCTL0190: Step handler org.jboss.as.domain.controller.operations.deployment.DeploymentFullReplaceHandler@3f68226b for operation {"operation" => "full-replace-deployment","name" => "whatever.war","enabled" => true,"content" => [{"hash" => bytes { 0x5d, 0x12, 0x18, 0x2b, 0x1c, 0x86, 0x71, 0x27, 0x08, 0x3d, 0xf1, 0x75, 0x08, 0x29, 0xa6, 0x49, 0x1f, 0x16, 0xe8, 0x22 }}],"operation-headers" => {"access-mechanism" => "NATIVE","domain-uuid" => "802ab616-dd2c-4081-a79c-c4d54e14c384","push-to-servers" => undefined},"address" => [],"runtime-name" => undefined} at address [] failed handling operation rollback -- java.lang.NullPointerException: java.lang.NullPointerException
> at org.jboss.as.repository.LocalDeploymentFileRepository.deleteDeployment(LocalDeploymentFileRepository.java:59)
> at org.jboss.as.host.controller.RemoteDomainConnectionService$RemoteFileRepository.deleteDeployment(RemoteDomainConnectionService.java:756)
> at org.jboss.as.domain.controller.operations.deployment.DeploymentFullReplaceHandler$1.handleResult(DeploymentFullReplaceHandler.java:181)
> at org.jboss.as.controller.AbstractOperationContext$Step.invokeResultHandler(AbstractOperationContext.java:1384)
> at org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1366)
> at org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1328)
> at org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1311)
> at org.jboss.as.controller.AbstractOperationContext$Step.access$300(AbstractOperationContext.java:1185)
> at org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:767)
> at org.jboss.as.controller.AbstractOperationContext.executeDoneStage(AbstractOperationContext.java:753)
> at org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:680)
> at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
> at org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1344)
> at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:392)
> at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:217)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler.internalExecute(TransactionalProtocolOperationHandler.java:247)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler.doExecute(TransactionalProtocolOperationHandler.java:185)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$1.run(TransactionalProtocolOperationHandler.java:138)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$1.run(TransactionalProtocolOperationHandler.java:134)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:81)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2$1.run(TransactionalProtocolOperationHandler.java:157)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2$1.run(TransactionalProtocolOperationHandler.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2.execute(TransactionalProtocolOperationHandler.java:153)
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$ManagementRequestContextImpl$1.doExecute(AbstractMessageHandler.java:363)
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$AsyncTaskRunner.run(AbstractMessageHandler.java:472)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> {code}
> This error usually occurs on only 2 of the 4 identically configured slaves; which slaves are affected appears random, but the failure happens frequently enough to reproduce.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (WFCORE-2691) Elytron modifiable realms should show existing identities in subsystem
by Jan Kalina (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2691?page=com.atlassian.jira.plugi... ]
Jan Kalina edited comment on WFCORE-2691 at 4/20/17 9:02 AM:
-------------------------------------------------------------
In ActiveMQServerResource it is not resolved - if I run
{{/subsystem=messaging-activemq:read-resource(include-runtime=false,recursive=true)}}
the list of all *core-address* children is still obtained, and the same happens on server boot
({{getChildren("core-address")}} is called).
So there is no existing example of how to do it better...
was (Author: honza889):
In ActiveMQServerResource it is not resolved - if I run
{{/subsystem=messaging-activemq:read-resource(include-runtime=false,recursive=true)}}
list of all *core-address* children is obtained. As well as on server boot.
({{getChildren("core-address")}} is called)
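For illustration, the pattern under discussion - a {{Resource}} that computes its children lazily instead of registering them in the management model - can be sketched in plain Java. This is a hypothetical, simplified analogue of what a filesystem-realm resource could do (the class name {{FileSystemRealmResource}}, the {{.xml}} file extension, and the method shape are assumptions for the sketch, not the actual WildFly Core API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Stream;

// Hypothetical sketch: like ActiveMQServerResource's getChildren("core-address"),
// child names are discovered on demand rather than stored in the model.
// Here the "identity" children come from listing files in the realm directory,
// so a very large identity set is never materialized in the management model.
public class FileSystemRealmResource {

    private final Path realmDir;

    public FileSystemRealmResource(Path realmDir) {
        this.realmDir = realmDir;
    }

    // Analogue of Resource.getChildrenNames("identity"): scan the realm
    // directory each time the names are requested (assumes one <name>.xml
    // file per identity, which is an assumption of this sketch).
    public Set<String> getChildrenNames(String childType) {
        if (!"identity".equals(childType)) {
            return Set.of();
        }
        try (Stream<Path> files = Files.list(realmDir)) {
            Set<String> names = new TreeSet<>();
            files.filter(Files::isRegularFile)
                 .map(p -> p.getFileName().toString())
                 .filter(n -> n.endsWith(".xml"))
                 .forEach(n -> names.add(n.substring(0, n.length() - 4)));
            return names;
        } catch (IOException e) {
            // In a real Resource implementation this would be reported to the
            // caller; for the sketch, an unreadable directory yields no children.
            return Set.of();
        }
    }
}
```

The trade-off Brian raises still applies to this shape: every {{read-resource(recursive=true)}} triggers the scan (or, in the real case, a potentially remote call), and the result set is unbounded.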
> Elytron modifiable realms should show existing identities in subsystem
> ----------------------------------------------------------------------
>
> Key: WFCORE-2691
> URL: https://issues.jboss.org/browse/WFCORE-2691
> Project: WildFly Core
> Issue Type: Bug
> Components: Security
> Affects Versions: 3.0.0.Beta15
> Reporter: Jan Kalina
> Assignee: Jan Kalina
> Priority: Blocker
> Labels: eap71_beta, filesystem-realm, security-realm
>
> Elytron {{filesystem-realm}} should load existing identities from file system. The steps to reproduce results in:
> {noformat}
> [standalone@localhost:9990 /] /subsystem=elytron/filesystem-realm=realm/identity=user:read-identity
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0216: Management resource '[
> (\"subsystem\" => \"elytron\"),
> (\"filesystem-realm\" => \"realm\"),
> (\"identity\" => \"user\")
> ]' not found",
> "rolled-back" => true
> }
> [standalone@localhost:9990 /] /subsystem=elytron/filesystem-realm=realm/identity=user:add
> {
> "outcome" => "failed",
> "failure-description" => "WFLYELY01000: Identity with name [user] already exists.",
> "rolled-back" => true
> }
> {noformat}
--
[JBoss JIRA] (WFCORE-2691) Elytron modifiable realms should show existing identities in subsystem
by Jan Kalina (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2691?page=com.atlassian.jira.plugi... ]
Jan Kalina commented on WFCORE-2691:
------------------------------------
In ActiveMQServerResource it is not resolved - if I run
{{/subsystem=messaging-activemq:read-resource(include-runtime=false,recursive=true)}}
the list of all *core-address* children is still obtained, and the same happens on server boot
({{getChildren("core-address")}} is called).
--