[JBoss JIRA] (WFCORE-1628) Create module using 'module add' CLI command with absolute-path in resource-root element
by Martin Simka (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1628?page=com.atlassian.jira.plugi... ]
Martin Simka reassigned WFCORE-1628:
------------------------------------
Assignee: Martin Simka (was: Alexey Loubyansky)
> Create module using 'module add' CLI command with absolute-path in resource-root element
> ----------------------------------------------------------------------------------------
>
> Key: WFCORE-1628
> URL: https://issues.jboss.org/browse/WFCORE-1628
> Project: WildFly Core
> Issue Type: Enhancement
> Components: CLI
> Reporter: Martin Simka
> Assignee: Martin Simka
>
> MODULES-218, included in WildFly 10/WildFly Core 2.0.10, allows absolute paths to be used for resource-roots in module.xml, but there is no way to create a module with absolute paths in resource-roots using the CLI command {{module add}}.
> {quote}
> resource-root's path previously only allowed resources within the module's directory, e.g.:
> <resource-root path="wildfly-controller-2.2.0.CR2.jar"/>
> With MODULES-218 it allows resources to exist anywhere on the filesystem, in the form of an absolute path, e.g.:
> <resource-root path="/Users/whoever/mymodules/wildfly-controller-2.2.0.CR2.jar"/>
> {quote}
> The {{module add}} command could have an optional argument {{--copy-resources=(true|false)}}, defaulting to {{true}}, which would specify what the new module will look like:
> * {{true}} – copy files to the created module's directory (no change from how it works now).
> * {{false}} – don't copy files; use absolute paths in {{resource-root}} instead.
> Another option could be to add an optional argument without a value, which would specify that files should not be copied to the created module's directory and that absolute paths should be used in {{resource-root}} instead. The name of this argument could be something like {{--use-absolute-paths-for-resources}}.
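> A sketch of how the first option might be invoked ({{--copy-resources}} is the argument proposed above, not an existing one; the module name and jar path are illustrative):
> {code}
> module add --name=com.example.controller \
>     --resources=/Users/whoever/mymodules/wildfly-controller-2.2.0.CR2.jar \
>     --copy-resources=false
> {code}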
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Miroslav Novak (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Miroslav Novak commented on WFLY-6781:
--------------------------------------
A HornetQ cluster does not provide HA. For HA, a replicated backup or a shared-store backup would have to be configured. A HornetQ cluster just load-balances messages across all active nodes: every node in the cluster holds part of the messages for a given queue, so shutting down one node in the cluster leads to "loss" of those messages. (Of course, if this node is started again and rejoins the HornetQ cluster, everything is ok again.)
In your case there is no HornetQ cluster, so all messages should be on node 1. I don't understand how SL.war could be disconnected if it's connected to node 1. Changing {{connection-ttl}} could at least show some warnings that something went wrong. Also, could you double-check the sources of RC.war and SL.war to make sure you do not ignore any exceptions?
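For illustration, a minimal standalone consumer that surfaces connection problems via an {{ExceptionListener}} instead of swallowing them (the host, queue name, and omitted credentials are assumptions; the JNDI names follow the default {{RemoteConnectionFactory}} export):
{code}
import java.util.Properties;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class QueueReader {
    public static void main(String[] args) throws Exception {
        // Remote JNDI setup for WildFly 8.x (credentials omitted for brevity).
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://node1:8080"); // host is illustrative
        InitialContext ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/exampleQueue"); // queue name is made up

        Connection conn = cf.createConnection();
        // Log connection failures instead of silently losing the consumer.
        conn.setExceptionListener(e -> System.err.println("JMS connection failed: " + e));

        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        conn.start();

        Message m = consumer.receive(5000); // null means a timeout, not success
        System.out.println(m == null ? "no message within 5 s" : "received: " + m);
        conn.close();
    }
}
{code}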
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: domain.Node1.xml, host.Node1.xml, host.Node2.xml, server.RC.Node1.AfterFailover.log, server.RC.Node1.BeforeFailover.log, server.RC.Node2.AfterFailover.log, server.RC.Node2.BeforeFailover.log, server.SL.Node1.AfterFailover.log, server.SL.Node1.BeforeFailover.log
>
>
> Following are the failover testing scenarios we tried and their outcomes:
> 1. Disabling the network on a VM – does not work on either Linux or Windows.
> 2. Powering off a VM using the VMware client – works on Linux but not on Windows.
> 3. Stopping services on a node with Ctrl+C – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console – works on both Linux and Windows.
> The JGroups subsystem configuration in our domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla edited comment on WFLY-6781 at 7/1/16 6:44 AM:
----------------------------------------------------------------
Thanks. I will set it to 60000 ms.
Earlier in your comment you also mentioned "no HornetQ cluster configured". Is that the reason why JMS is not able to handle the failover condition?
Also, to let you know, we have not implemented JMS replication for the cluster.
I would also like to remind you that failover by the power-off method works properly on Linux; it only has issues (probably JMS-related) on Windows.
-preeta
was (Author: kpreeta12):
Thanks. I will set it to 60000 ms.
Earlier in your comment you also mentioned "no HornetQ cluster configured". Is that the reason why JMS is not able to handle the failover condition?
Also, to let you know, we have not implemented JMS replication for the cluster.
-preeta
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: domain.Node1.xml, host.Node1.xml, host.Node2.xml, server.RC.Node1.AfterFailover.log, server.RC.Node1.BeforeFailover.log, server.RC.Node2.AfterFailover.log, server.RC.Node2.BeforeFailover.log, server.SL.Node1.AfterFailover.log, server.SL.Node1.BeforeFailover.log
>
>
> Following are the failover testing scenarios we tried and their outcomes:
> 1. Disabling the network on a VM – does not work on either Linux or Windows.
> 2. Powering off a VM using the VMware client – works on Linux but not on Windows.
> 3. Stopping services on a node with Ctrl+C – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console – works on both Linux and Windows.
> The JGroups subsystem configuration in our domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla edited comment on WFLY-6781 at 7/1/16 6:35 AM:
----------------------------------------------------------------
Thanks. I will set it to 60000 ms.
Earlier in your comment you also mentioned "no HornetQ cluster configured". Is that the reason why JMS is not able to handle the failover condition?
Also, to let you know, we have not implemented JMS replication for the cluster.
-preeta
was (Author: kpreeta12):
Thanks. I will set it to 60000 ms.
Earlier in your comment you also mentioned "no HornetQ cluster configured". Is that the reason why JMS is not able to handle the failover condition?
-preeta
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: domain.Node1.xml, host.Node1.xml, host.Node2.xml, server.RC.Node1.AfterFailover.log, server.RC.Node1.BeforeFailover.log, server.RC.Node2.AfterFailover.log, server.RC.Node2.BeforeFailover.log, server.SL.Node1.AfterFailover.log, server.SL.Node1.BeforeFailover.log
>
>
> Following are the failover testing scenarios we tried and their outcomes:
> 1. Disabling the network on a VM – does not work on either Linux or Windows.
> 2. Powering off a VM using the VMware client – works on Linux but not on Windows.
> 3. Stopping services on a node with Ctrl+C – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console – works on both Linux and Windows.
> The JGroups subsystem configuration in our domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla commented on WFLY-6781:
----------------------------------------
Thanks. I will set it to 60000 ms.
Earlier in your comment you also mentioned "no HornetQ cluster configured". Is that the reason why JMS is not able to handle the failover condition?
-preeta
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: domain.Node1.xml, host.Node1.xml, host.Node2.xml, server.RC.Node1.AfterFailover.log, server.RC.Node1.BeforeFailover.log, server.RC.Node2.AfterFailover.log, server.RC.Node2.BeforeFailover.log, server.SL.Node1.AfterFailover.log, server.SL.Node1.BeforeFailover.log
>
>
> Following are the failover testing scenarios we tried and their outcomes:
> 1. Disabling the network on a VM – does not work on either Linux or Windows.
> 2. Powering off a VM using the VMware client – works on Linux but not on Windows.
> 3. Stopping services on a node with Ctrl+C – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console – works on both Linux and Windows.
> The JGroups subsystem configuration in our domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2065) RoundTrip: latency is high compared to RoundTripTcp/RoundTripServer
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2065?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2065:
---------------------------
Fix Version/s: 4.1
(was: 4.0)
> RoundTrip: latency is high compared to RoundTripTcp/RoundTripServer
> -------------------------------------------------------------------
>
> Key: JGRP-2065
> URL: https://issues.jboss.org/browse/JGRP-2065
> Project: JGroups
> Issue Type: Task
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.1
>
>
> {{RoundTrip}} is a simple test between two members measuring round-trip latency. The sender sends a message, the receiver receives it and sends the response, and the sender unblocks when the response is received; then the sender sends the next message.
> The time for the request is then logged at the sender and a min/avg/max value is computed (probably changed to histograms later).
> {{RoundTrip}} uses a JGroups channel, configured with {{-props}}, e.g. {{-props ~/tcp.xml}}.
> {{RoundTripTcp}} does the same, but uses direct TCP sockets (no JGroups) for communication.
> {{RoundTripServer}} uses the client-server classes of JGroups for communication, but no channel is used.
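> For reference, a sketch of how the three tests might be launched (the {{org.jgroups.tests}} package and the classpath are assumptions; {{-props}} is the flag mentioned above):
> {code}
> java -cp jgroups.jar org.jgroups.tests.RoundTrip -props ~/tcp.xml
> java -cp jgroups.jar org.jgroups.tests.RoundTripTcp
> java -cp jgroups.jar org.jgroups.tests.RoundTripServer
> {code}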
> Round trip times (both processes on the same box, a Mac mini):
> * {{RoundTrip}} (with {{tcp.xml}} shipped with JGroups): *110 us*
> * {{RoundTripTcp}}: *46 us*
> * {{RoundTripServer}}: *49 us*
> Note that the client/server classes used by {{RoundTripServer}} are also used by the TCP transport (configured in {{tcp.xml}}).
> {{RoundTripServer}} is ~6% slower than {{RoundTripTcp}}, but that can be attributed to the additional work the former has to do (e.g. connection reaping). This is something we can focus on later.
> The big difference is the 110 us for {{RoundTrip}}. The goal is to find out what causes this and eliminate it. Since {{RoundTrip}} and {{RoundTripServer}} use the same underlying client/server classes in JGroups, let's compare these two.
> Tasks:
> * Remove all protocols other than TCP from the running stack (e.g. {{probe.sh remove-protocol=MFC}}). I already did this and the difference was negligible, but let's run this again
> * Try various bundlers (e.g. NoBundler)
> * Reduce threads in the threadpools, possibly disable (regular and OOB) thread pools (replace with DirectExecutor)
> * The default is 1 sender thread, but try with multiple threads
> * Measure perf between sending a message (in the bundler) and receiving it
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Miroslav Novak (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Miroslav Novak commented on WFLY-6781:
--------------------------------------
I don't see any kind of interesting failure or error. I'm not sure where the problem might be, as I'm not sure what SL.war is actually doing. How is it reading messages from the queue? I guess it's using {{RemoteConnectionFactory}} to do that, so it most likely does not use MDBs (message-driven beans).
What is interesting is the {{<connection-ttl>-1</connection-ttl>}} in the configuration of {{RemoteConnectionFactory}}. Could you set it to 60000 ms and try again? With -1, dead connections never time out.
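For illustration, a sketch of the suggested change in the messaging subsystem (placement and surrounding elements follow the default WildFly {{RemoteConnectionFactory}} definition; connector and entry names may differ in your profile):
{code}
<connection-factory name="RemoteConnectionFactory">
    <connectors>
        <connector-ref connector-name="http-connector"/>
    </connectors>
    <entries>
        <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
    </entries>
    <!-- 60 s instead of -1, so dead connections are eventually reaped -->
    <connection-ttl>60000</connection-ttl>
</connection-factory>
{code}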
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: domain.Node1.xml, host.Node1.xml, host.Node2.xml, server.RC.Node1.AfterFailover.log, server.RC.Node1.BeforeFailover.log, server.RC.Node2.AfterFailover.log, server.RC.Node2.BeforeFailover.log, server.SL.Node1.AfterFailover.log, server.SL.Node1.BeforeFailover.log
>
>
> Following are the failover testing scenarios we tried and their outcomes:
> 1. Disabling the network on a VM – does not work on either Linux or Windows.
> 2. Powering off a VM using the VMware client – works on Linux but not on Windows.
> 3. Stopping services on a node with Ctrl+C – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console – works on both Linux and Windows.
> The JGroups subsystem configuration in our domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6797) Is there a way to have more than one logging subsystem belonging to one profile
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6797?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-6797:
-----------------------------------
Description:
I have "ha" profile in my domain.xml which has many subsystems.
Below is the structure I have:-
<server-groups>
<server-group name="main-server-group" profile="ha">
<socket-binding-group ref="ha-sockets"/>
<deployments>
<deployment name="RC.war" runtime-name="RC.war"/>
</deployments>
</server-group>
<server-group name="other-server-group" profile="ha">
<socket-binding-group ref="standard-sockets"/>
<deployments>
<deployment name="SL.war" runtime-name="SL.war"/>
</deployments>
</server-group>
</server-groups>
There is a logging subsystem which needs to be configured differently for RC.war and SL.war. Both RC.war and SL.war belong to the same profile.
Is there a way to have two logging configurations defined in the "ha" profile, such that RC can point to one and SL to the other, although they belong to the same profile?
Thanks,
Preeta
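For context, WildFly's logging subsystem supports named logging profiles that individual deployments can select, which may address this. A minimal sketch (the profile name and handler are made up; schema version per WildFly 8.x):
{code}
<subsystem xmlns="urn:jboss:domain:logging:2.0">
    <!-- default handlers/loggers used by deployments without a profile ... -->
    <logging-profiles>
        <logging-profile name="rc-logging">
            <periodic-rotating-file-handler name="RC_FILE">
                <file relative-to="jboss.server.log.dir" path="rc.log"/>
                <suffix value=".yyyy-MM-dd"/>
            </periodic-rotating-file-handler>
            <root-logger>
                <handlers>
                    <handler name="RC_FILE"/>
                </handlers>
            </root-logger>
        </logging-profile>
    </logging-profiles>
</subsystem>
{code}
A deployment would then opt in with a MANIFEST.MF entry such as {{Logging-Profile: rc-logging}}.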
> Is there a way to have more than one logging subsystem belonging to one profile
> ------------------------------------------------------------------------------
>
> Key: WFLY-6797
> URL: https://issues.jboss.org/browse/WFLY-6797
> Project: WildFly
> Issue Type: Task
> Components: Domain Management
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Brian Stansberry
>
> I have "ha" profile in my domain.xml which has many subsystems.
> Below is the structure I have:-
> <server-groups>
> <server-group name="main-server-group" profile="ha">
> <socket-binding-group ref="ha-sockets"/>
> <deployments>
> <deployment name="RC.war" runtime-name="RC.war"/>
> </deployments>
> </server-group>
> <server-group name="other-server-group" profile="ha">
> <socket-binding-group ref="standard-sockets"/>
> <deployments>
> <deployment name="SL.war" runtime-name="SL.war"/>
> </deployments>
> </server-group>
> </server-groups>
> There is a logging subsystem which needs to be configured differently for RC.war and SL.war. Both RC.war and SL.war belong to the same profile.
>
> Is there a way to have two logging configurations defined in the "ha" profile, such that RC can point to one and SL to the other, although they belong to the same profile?
> Thanks,
> Preeta
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-710) Make ServerOperationResolver handle deployment-overlays similarly to deployments
by Kabir Khan (JIRA)
[ https://issues.jboss.org/browse/WFCORE-710?page=com.atlassian.jira.plugin... ]
Kabir Khan reassigned WFCORE-710:
---------------------------------
Assignee: Brian Stansberry (was: Kabir Khan)
> Make ServerOperationResolver handle deployment-overlays similarly to deployments
> --------------------------------------------------------------------------------
>
> Key: WFCORE-710
> URL: https://issues.jboss.org/browse/WFCORE-710
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Affects Versions: 2.0.0.Alpha2
> Reporter: Kabir Khan
> Assignee: Brian Stansberry
> Fix For: 3.0.0.Alpha4
>
>
> Currently in domain mode a
> {code}
> /deployment-overlay=xxx:add(...)
> {code}
> results in a deployment overlay on ALL servers.
> However, a deployment added with
> {code}
> /deployment=xxx:add(...)
> {code}
> is not pushed to the servers; that happens only when it is associated with a server group:
> {code}
> /server-group=zzz/deployment=xxx:add(...)
> {code}
> Similarly, an overlay added with
> {code}
> /deployment-overlay=xxx:add(...)
> {code}
> should not be pushed to the servers until we have a
> {code}
> /server-group=zzz/deployment-overlay=xxx:add(...)
> {code}
> which picks out the servers that should get the overlay.
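> A sketch of the proposed end-to-end flow ({{xxx}}/{{zzz}} as above; the server-group step is the behavior this issue requests, not something the resolver honors today):
> {code}
> # define the overlay (should no longer be pushed to all servers)
> /deployment-overlay=xxx:add
> # pick out the server group whose servers should get the overlay
> /server-group=zzz/deployment-overlay=xxx:add
> {code}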
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)