Subsystem exposing a port?
by Heiko Braun
Can someone give me a brief overview of how a subsystem would expose a port? How are services actually wired into the protocol multiplexing? Can you point me to some examples?
Regards, Heiko
10 years, 3 months
Re: [wildfly-dev] profile cloning tool
by Brian Stansberry
Following is a discussion that started out being about low-level details of
how to implement a particular feature and has morphed into a broader
design discussion, so it seemed better to have it on the dev list. Dev
list readers, sorry for the jump into the middle of something with no
background.
Heiko,
The only connection to you is about the record/replay format, yes. Not
just the format in a general sense, though, but also architecturally
where/how the conversion to that format is done. That depends on what
the format is and how it's used.
The fundamental format is the native DMR format. If the only use cases
the console has involve storing stuff for its own use, without user
involvement, then you guys could stick with that.
Once end users start dealing with recording and replay, we already have
a format, in the form of CLI scripts, and a replay device, the CLI.
These scripts are very commonly used. So for sure we are going to get
requests to record stuff in CLI script form, at least from CLI users.
If the console use cases also involve recording in the CLI format, then
I think we should aim for a common code base for converting to/from that
format. Maintain all the quoting and escaping stuff in one place. Deal
with quirks there. For example, the CLI uses the 'batch' commands to
encapsulate steps into a composite op. CLI users don't have to deal with
the complex nesting involved in a raw DMR composite op.
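To make that contrast concrete, here is a hand-written illustration (not
output from any tool) of the same two steps expressed as a raw DMR
composite op and as a CLI batch:

{
    "operation" => "composite",
    "address" => [],
    "steps" => [
        {
            "operation" => "add",
            "address" => [("system-property" => "foo")],
            "value" => "bar"
        },
        {
            "operation" => "write-attribute",
            "address" => [("subsystem" => "logging"), ("root-logger" => "ROOT")],
            "name" => "level",
            "value" => "DEBUG"
        }
    ]
}

versus

batch
/system-property=foo:add(value=bar)
/subsystem=logging/root-logger=ROOT:write-attribute(name=level, value=DEBUG)
run-batch

Converting between those two representations is where all the quoting,
escaping and batch-handling logic lives.
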
If that code is all maintained in one place, then the question is how
does the console use it? Compile it in? Ask the server to do conversion
for it? My mention of a generic op was about the latter.
On 9/24/14, 2:15 AM, Heiko Braun wrote:
>
>
> Apologies, I didn't really follow this thread. But one question comes to my mind:
>
> We always had requests for record/replay functionality. This ultimately boils down to the question of what format the recorded operations should use. It seems to be related to the questions Brian raised. Isn't that central data format part of both the profile cloning and the record/replay features? Do these two share anything else?
>
>
> My 2 cents.
>
> On 23 Sep 2014, at 21:16, Brian Stansberry <brian.stansberry(a)redhat.com> wrote:
>
>> I see. That's not the WFLY-1106 task though. WFLY-1106 is meant to be a live clone. Sorry if that wasn't clear.
>>
>> I've added Alexey and Heiko Braun in cc.
>>
>> I'm not comfortable with having the server directly create CLI scripts. Particularly writing a file. I think that belongs on the client side.
>>
>> Brainstorming a bit now...
>>
>> I also don't like returning CLI syntax as the response for anything but a single totally generic op. Not for something custom-purpose like cloning. Over time we'd end up with a bunch of those kinds of custom things, each added for some special purpose.
>>
>> I can, however, see a totally generic op being useful. We have, for example, console RFEs for outputting CLI scripts that correspond to what the console has done. Having a generic op the console could invoke to get back the necessary string *might* make sense.
>>
>> A facility like that could then be used by the CLI itself as part of a client-side solution to this kind of thing.
>>
>> Possibly whether to perform the live update could be a param to the clone op. Either way the return value of the op is the list of needed operations; the param simply controls whether the server applies them.
>>
>> If the live update isn't performed, the client can instead turn around and ask the server to translate the return value into CLI syntax.
>>
>> It would make more sense for the CLI to do that translation itself though. But the DMR->CLI logic it uses could be reused by the server for the generic op I mentioned.
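>>
>> Roughly the shape I'm imagining, purely as illustration (nothing like this exists yet; the op and parameter names are made up):
>>
>> /profile=full-ha:clone(to-profile=full-ha-copy, apply-now=false)
>> {
>>     "outcome" => "success",
>>     "result" => [
>>         {
>>             "operation" => "add",
>>             "address" => [("profile" => "full-ha-copy"), ("subsystem" => "logging")]
>>         },
>>         ... one entry per op needed to build up the new profile ...
>>     ]
>> }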
>>
>> On 9/23/14, 7:19 AM, Tom Fonteyne wrote:
>>> all customers are asking how to turn a profile into a CLI script. None
>>> that I know of are actually interested in a "live" clone.
>>>
>>> I started with a standalone client:
>>> https://github.com/tfonteyn/profilecloner
>>>
>>>
>>> On 23/09/14 13:11, Emanuel Muckenhuber wrote:
>>>> Hi,
>>>>
>>>> I am not so sure whether writing those operations to a file and
>>>> converting them to the CLI format makes much sense as part of the mgmt
>>>> API in general. We don't have this concept anywhere in the mgmt API at
>>>> the moment, and only the CLI really understands (implements) the way
>>>> commands are parsed on the client side.
>>>>
>>>> In particular I am concerned about having those workarounds for
>>>> generating CLI scripts as part of the mgmt API contracts. This really
>>>> should be a client-side thing (if required), but I don't really know
>>>> what failures you were running into.
>>>>
>>>> The ProfileCloneLiveHandler looks correct, and having nested classes
>>>> there seems to be totally fine. Speaking of composite operations, we
>>>> should make sure to also test a composite operation that clones a profile
>>>> and reads it as a 2nd step. I suspect this will cause some problems
>>>> with the ordering of some handlers.
>>>>
>>>> I hope this helps and thanks,
>>>> Emanuel
>>>>
>>>> On 23/09/14 10:04, Tom Fonteyne wrote:
>>>>>
>>>>> On 22/09/14 17:37, Brian Stansberry wrote:
>>>>>> On 9/22/14, 10:44 AM, Tom Fonteyne wrote:
>>>>>>> well... if it was not for some bugs in wildfly ... it would all work
>>>>>>> now :)
>>>>>>>
>>>>>>> 1. PathAddress.pathAddress(node.require(OP_ADDR)).toCLIStyleString()
>>>>>>> -> toCLIStyleString does not quote the value part
>>>>>>> --> temp workaround, but I'll log a JIRA and do a pull-request
>>>>>>>
>>>>>>
>>>>>> OK, but how is this relevant to this work?
>>>>> It matters when writing the profile to a CLI file. For example, without quoting:
>>>>>
>>>>> /profile=default/subsystem=undertow/server=default-server/host=default-host/location=/:add(handler="welcome-content")
>>>>>
>>>>>
>>>>> => this breaks, as the value "/" is seen as an address separator.
>>>>> It's no big deal, as I just build the string myself for now.
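>>>>>
>>>>> Roughly what my workaround looks like (simplified from memory, not the
>>>>> exact code; PathAddress/PathElement are the org.jboss.as.controller ones):
>>>>>
>>>>> // build a CLI-style address string, always quoting the value part
>>>>> static String toQuotedCliAddress(PathAddress address) {
>>>>>     StringBuilder sb = new StringBuilder();
>>>>>     for (PathElement pe : address) {
>>>>>         sb.append('/').append(pe.getKey()).append('=')
>>>>>           .append('"').append(pe.getValue()).append('"');
>>>>>     }
>>>>>     return sb.toString();
>>>>> }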
>>>>>
>>>>> https://github.com/tfonteyn/wildfly-core/tree/WFLY-1106
>>>>>
>>>>> modified:
>>>>> controller/src/main/java/org/jboss/as/controller/descriptions/ModelDescriptionConstants.java
>>>>>
>>>>>
>>>>> modified:
>>>>> controller/src/main/java/org/jboss/as/controller/logging/ControllerLogger.java
>>>>>
>>>>>
>>>>> modified:
>>>>> host-controller/src/main/java/org/jboss/as/domain/controller/operations/ProfileCloneHandler.java
>>>>>
>>>>>
>>>>> modified:
>>>>> host-controller/src/main/java/org/jboss/as/domain/controller/resources/ProfileResourceDefinition.java
>>>>>
>>>>>
>>>>> modified:
>>>>> host-controller/src/main/resources/org/jboss/as/domain/controller/resources/LocalDescriptions.properties
>>>>>
>>>>>
>>>>>
>>>>> ProfileCloneHandler.java
>>>>>
>>>>> Look at private ProfileCloneLiveHandler class.
>>>>>
>>>>> Note I used a single java file with two internal classes - I wasn't sure
>>>>> if there is a "contained" policy or if I should simply use 3 classes.
>>>>>
>>>>> All feedback is obviously very welcome.
>>>>>
>>>>> cheers
>>>>> Tom
>>>>>
>>>>>
>>>>>>
>>>>>>> 2. The describe information for the transaction subsystem used as-is
>>>>>>> does do the "add" but will in fact install broken xml
>>>>>>> -> will investigate, then log JIRA
>>>>>>>
>>>>>>> 3. The datasource describe can be added, but fails to marshal its
>>>>>>> config to XML due to issues with the "driver-name" child
>>>>>>> -> will investigate, then log JIRA
>>>>>>>
>>>>>>> 4. The JGroups subsystem is problematic due to its use of the
>>>>>>> "add-protocol" command. In CLI you would do:
>>>>>>>
>>>>>>> /profile="test"/subsystem="jgroups"/stack="udp":add()
>>>>>>> /profile="test"/subsystem="jgroups"/stack="udp":add-protocol(type="PING")
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> but the ProfileAddHandler sees this as the same address (which it
>>>>>>> is) and tells me "Duplicate resource"
>>>>>>> In my humble opinion we should change that to:
>>>>>>> /profile="test"/subsystem="jgroups"/stack="udp"/protocol=PING:add()
>>>>>>>
>>>>>>> Keep add-protocol for backwards compatibility, but let
>>>>>>> "describe" use
>>>>>>> a normal "add"
>>>>>>> -> will log JIRA
>>>>>>>
>>>>>>> Other than that (ahum), cloning to a new profile works.
>>>>>>
>>>>>> Do you have a link to what you're doing? Having all these issues with
>>>>>> "describe" sounds a bit odd. You wouldn't be able to boot a managed
>>>>>> domain server if describe doesn't work.
>>>>>>
>>>>>>> Cloning to a file fully works.
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Tom
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 18/09/14 14:44, Emanuel Muckenhuber wrote:
>>>>>>>>
>>>>>>>> On 18/09/14 15:34, Tom Fonteyne wrote:
>>>>>>>>> OK - part one is done and functioning well.
>>>>>>>>> I can now write the profile to a file as CLI commands.
>>>>>>>>>
>>>>>>>>> The second part, the live cloning option, is still to do.
>>>>>>>>>
>>>>>>>>> But I have a small issue left. The cloning to the file works
>>>>>>>>> fine, and
>>>>>>>>> yet despite setting
>>>>>>>>>
>>>>>>>>> context.getResult().set(SUCCESS);
>>>>>>>>
>>>>>>>> The outcome of the operation will be a success if the operation
>>>>>>>> completes. The context.getResult() is the response i.e. a simple
>>>>>>>> value
>>>>>>>> for metrics or a more complex structure for the read-resource
>>>>>>>> operation.
>>>>>>>>
>>>>>>>>> context.stepCompleted();
>>>>>>>>>
>>>>>>>>> the operation says:
>>>>>>>>>
>>>>>>>>> [domain@localhost:9990 /]
>>>>>>>>> /profile=full-ha:clone(file=/tmp/q2,profile-name=qq)
>>>>>>>>> {
>>>>>>>>> "outcome" => "failed",
>>>>>>>>> "result" => "success",
>>>>>>>>> "failure-description" => "WFLYCTL0159: Operation handler
>>>>>>>>> failed to
>>>>>>>>> complete",
>>>>>>>>> "rolled-back" => true
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> I presume that I need some sort of commit? I did look at some other
>>>>>>>>> handlers, of course, but I seem to be missing it.
>>>>>>>>>
>>>>>>>>
>>>>>>>> This just means there is a context.stepCompleted(); call missing for
>>>>>>>> one of the stepHandlers and therefore the operation gets rolled back.
>>>>>>>> This is easy to miss in particular for nested handlers.
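>>>>>>>>
>>>>>>>> The pattern, roughly, is the following (typed from memory, so treat it
>>>>>>>> as a sketch rather than tested code):
>>>>>>>>
>>>>>>>> context.addStep(new OperationStepHandler() {
>>>>>>>>     @Override
>>>>>>>>     public void execute(OperationContext ctx, ModelNode op) throws OperationFailedException {
>>>>>>>>         // ... the nested work ...
>>>>>>>>         ctx.stepCompleted();   // every nested handler must complete its own step
>>>>>>>>     }
>>>>>>>> }, OperationContext.Stage.MODEL);
>>>>>>>> context.stepCompleted();       // and so must the outer handler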
>>>>>>>>
>>>>>>>> Emanuel
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 17/09/14 16:19, Emanuel Muckenhuber wrote:
>>>>>>>>>>
>>>>>>>>>> On 17/09/14 16:44, Tom Fonteyne wrote:
>>>>>>>>>>>
>>>>>>>>>>> I started on doing this but could use some doc's for those steps.
>>>>>>>>>>>
>>>>>>>>>>> I have so far:
>>>>>>>>>>> - added "clone" with attributes
>>>>>>>>>>> - registered the ProfileCloneHandler
>>>>>>>>>>> - ProfileCloneHandler sets up the describe op.
>>>>>>>>>>> - must now execute it....
>>>>>>>>>>>
>>>>>>>>>>> public void execute(OperationContext context, ModelNode operation)
>>>>>>>>>>> throws OperationFailedException {
>>>>>>>>>>> <snip>
>>>>>>>>>>> ModelNode profileDescription = new ModelNode();
>>>>>>>>>>> ModelNode describeProfile = new ModelNode();
>>>>>>>>>>> describeProfile.get(OP).set(ModelDescriptionConstants.DESCRIBE);
>>>>>>>>>>> describeProfile.get(OP_ADDR).set(address.toModelNode());
>>>>>>>>>>>
>>>>>>>>>>> // is this correct ? will it execute the describe (and block while
>>>>>>>>>>> doing
>>>>>>>>>>> so)
>>>>>>>>>>> context.addStep(profileDescription, describeProfile,
>>>>>>>>>>> ProfileDescribeHandler.INSTANCE, OperationContext.Stage.MODEL);
>>>>>>>>>>>
>>>>>>>>>>> // I presume I can now take profileDescription and create file
>>>>>>>>>>> and/or
>>>>>>>>>>> new profile...
>>>>>>>>>>> doBla();
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> You would need to add another nested step to have access to the
>>>>>>>>>> result
>>>>>>>>>> of the describe operations. The context.addStep() is not
>>>>>>>>>> blocking and
>>>>>>>>>> just adds it to a queue of handlers which are executed after the
>>>>>>>>>> current one finishes.
>>>>>>>>>>
>>>>>>>>>> In this step you should be able to get the resolved operations
>>>>>>>>>> using
>>>>>>>>>> profileDescription.get("result").asList() and would then need to
>>>>>>>>>> get
>>>>>>>>>> the operation handlers from the mgmt resource registration and add
>>>>>>>>>> those as another step. So there are going to be a lot of nested
>>>>>>>>>> steps ;)
>>>>>>>>>>
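>>>>>>>>>> Sketched out it would look something like the following (untested,
>>>>>>>>>> the ordering of nested steps is something you will need to check, and
>>>>>>>>>> profileDescription has to be final to be visible here):
>>>>>>>>>>
>>>>>>>>>> context.addStep(new OperationStepHandler() {
>>>>>>>>>>     @Override
>>>>>>>>>>     public void execute(OperationContext ctx, ModelNode op) throws OperationFailedException {
>>>>>>>>>>         for (ModelNode describedOp : profileDescription.get("result").asList()) {
>>>>>>>>>>             // rewrite the address to the new profile name, look up the
>>>>>>>>>>             // handler in the mgmt resource registration, then addStep() it
>>>>>>>>>>         }
>>>>>>>>>>         ctx.stepCompleted();
>>>>>>>>>>     }
>>>>>>>>>> }, OperationContext.Stage.MODEL);
>>>>>>>>>>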
>>>>>>>>>>> // is this the last step before leaving execute, or should this be
>>>>>>>>>>> done
>>>>>>>>>>> to actually kick off the describe .. and should doBla() be done
>>>>>>>>>>> *after*
>>>>>>>>>>> this ?
>>>>>>>>>>>
>>>>>>>>>>> context.stepCompleted();
>>>>>>>>>>>
>>>>>>>>>>> any good doc on StepHandler and related would be gladly
>>>>>>>>>>> welcomed :)
>>>>>>>>>>>
>>>>>>>>>>> Kind regards
>>>>>>>>>>> Tom
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 16/09/14 16:23, Brian Stansberry wrote:
>>>>>>>>>>>> This time really bringing him into the loop.
>>>>>>>>>>>>
>>>>>>>>>>>> On 9/16/14, 10:10 AM, Brian Stansberry wrote:
>>>>>>>>>>>>> Bringing Emanuel Muckenhuber into the loop.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 8/26/14, 9:32 AM, Brian Stansberry wrote:
>>>>>>>>>>>>>> On 8/26/14, 7:43 AM, Tom Fonteyne wrote:
>>>>>>>>>>>>>>> Hi Brian,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I've been digging into the relevant source (what else does
>>>>>>>>>>>>>>> one do
>>>>>>>>>>>>>>> on a
>>>>>>>>>>>>>>> bank holiday Monday in the UK in the bleeping torrential
>>>>>>>>>>>>>>> rain...)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I looked at:
>>>>>>>>>>>>>>> org.jboss.as.domain.controller.resources.ProfileResourceDefinition
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> to add the "clone" command
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yep.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> (I do need to look at commands that take
>>>>>>>>>>>>>>> arguments as well)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> org.jboss.as.controller.operations.global.WriteAttributeHandler
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I don't think this is relevant. You'd add a step that
>>>>>>>>>>>>>> invokes the
>>>>>>>>>>>>>> ProfileDescribeHandler and then take the output (a list of
>>>>>>>>>>>>>> ops) and
>>>>>>>>>>>>>> change the address of each op to the new profile name, and
>>>>>>>>>>>>>> then add
>>>>>>>>>>>>>> each
>>>>>>>>>>>>>> of those steps.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> org.jboss.as.domain.controller.operations.ProfileDescribeHandler
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> on how to do this stephandler
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I wanted to start getting the command "working" but without
>>>>>>>>>>>>>>> doing
>>>>>>>>>>>>>>> anything
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Idiot questions:
>>>>>>>>>>>>>>> - is the above the right way of tackling this ?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I presume the correct place to add a handler is in
>>>>>>>>>>>>>>> "org.jboss.as.domain.controller.operations"
>>>>>>>>>>>>>>> but that would mean clone is only on profile level (and
>>>>>>>>>>>>>>> not for
>>>>>>>>>>>>>>> example on datasource level)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think that's fine to start. If you get a clone of an entire
>>>>>>>>>>>>>> profile
>>>>>>>>>>>>>> working, you'll learn a lot, we'll have something useful,
>>>>>>>>>>>>>> and it
>>>>>>>>>>>>>> should be straightforward enough to further advance that
>>>>>>>>>>>>>> work to
>>>>>>>>>>>>>> make it
>>>>>>>>>>>>>> more generally useful.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -> hopefully I'm wrong, but would a clone need to be
>>>>>>>>>>>>>>> added to
>>>>>>>>>>>>>>> every
>>>>>>>>>>>>>>> single level ??
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What I want to do later is add a 'copy' op to the root
>>>>>>>>>>>>>> resource,
>>>>>>>>>>>>>> which
>>>>>>>>>>>>>> will take a source and target address. It can use the existing
>>>>>>>>>>>>>> subsystem
>>>>>>>>>>>>>> level describe ops to get anything related to subsystems. That
>>>>>>>>>>>>>> would
>>>>>>>>>>>>>> get
>>>>>>>>>>>>>> more data than is needed if, say, cloning only a single
>>>>>>>>>>>>>> datasource
>>>>>>>>>>>>>> was
>>>>>>>>>>>>>> the goal, but that's ok. I don't want to add any further
>>>>>>>>>>>>>> requirement to
>>>>>>>>>>>>>> subsystem authors to support this, so we'll rely on the
>>>>>>>>>>>>>> existing
>>>>>>>>>>>>>> 'describe' op.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ... as a global operation would be out as "describe"
>>>>>>>>>>>>>>> does
>>>>>>>>>>>>>>> not
>>>>>>>>>>>>>>> exist other than on profile (and sub-profile)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes, a 'describe'-like function for resources outside
>>>>>>>>>>>>>> subsystems
>>>>>>>>>>>>>> is a
>>>>>>>>>>>>>> prerequisite for doing more than the profile clone op. But we
>>>>>>>>>>>>>> can do
>>>>>>>>>>>>>> that within core, without adding any requirement on subsystem
>>>>>>>>>>>>>> authors. I
>>>>>>>>>>>>>> don't know exactly when we'll be able to provide that for you,
>>>>>>>>>>>>>> which is
>>>>>>>>>>>>>> one reason I think starting with just the full profile clone
>>>>>>>>>>>>>> makes
>>>>>>>>>>>>>> sense.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - how do I make WildFly use my newly built wildfly-core?
>>>>>>>>>>>>>>> I tried updating the pom to the new core version, but it
>>>>>>>>>>>>>>> seems to
>>>>>>>>>>>>>>> refuse to use the local copy
>>>>>>>>>>>>>>> Note; I'm *not* good at maven :/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> You need the following in your command to maven:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -Dversion.org.wildfly.core=1.0.0.Alpha6-SNAPSHOT
>>>>>>>>>>>>>> -Dskip-enforce=true
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> e.g.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ./build.sh -Dversion.org.wildfly.core=1.0.0.Alpha6-SNAPSHOT
>>>>>>>>>>>>>> -Dskip-enforce=true
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The first one is a general thing you can do to change a version
>>>>>>>>>>>>>> from
>>>>>>>>>>>>>> what's defined in the pom. If you look in the root pom.xml,
>>>>>>>>>>>>>> there's a
>>>>>>>>>>>>>> long properties section where the version of everything is
>>>>>>>>>>>>>> defined. So
>>>>>>>>>>>>>> that -D just overrides one of those property values.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The -Dskip-enforce=true turns off a validation check that bans
>>>>>>>>>>>>>> transitive
>>>>>>>>>>>>>> dependencies. That validation just causes trouble when you are
>>>>>>>>>>>>>> working
>>>>>>>>>>>>>> with snapshots.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - do I make any sense ?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - is there any dev doc I can read on this subject ?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This doc is really about adding subsystems, but it's the only
>>>>>>>>>>>>>> docs
>>>>>>>>>>>>>> about
>>>>>>>>>>>>>> writing handlers, etc:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> https://docs.jboss.org/author/display/WFLY9/Extending+WildFly
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The biggest thing is to really understand the OperationContext
>>>>>>>>>>>>>> interface, which is extensively javadoced.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, you need to understand the addStep methods,
>>>>>>>>>>>>>> since
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>> essence of what you are doing is adding steps and passing data
>>>>>>>>>>>>>> back and
>>>>>>>>>>>>>> forth between their handlers. A trick to understand is the
>>>>>>>>>>>>>> variant(s)
>>>>>>>>>>>>>> of addStep that let you pass in a 'response' ModelNode. For
>>>>>>>>>>>>>> example,
>>>>>>>>>>>>>> you
>>>>>>>>>>>>>> could
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1) create a ModelNode.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2) add a step that will trigger use of ProfileDescribeHandler,
>>>>>>>>>>>>>> passing
>>>>>>>>>>>>>> in the ModelNode from 1) as the response.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 3) Add a step that will take the 'describe' results from 2,
>>>>>>>>>>>>>> munge the
>>>>>>>>>>>>>> addresses, and then add steps to execute all those ops. This 3)
>>>>>>>>>>>>>> step
>>>>>>>>>>>>>> will have a ref to the ModelNode from 1), allowing it to see
>>>>>>>>>>>>>> the
>>>>>>>>>>>>>> result
>>>>>>>>>>>>>> of the work done by 2).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - I will be out on holiday next week, so take your time
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Enjoy!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> cheers
>>>>>>>>>>>>>>> Tom
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 18/08/14 16:45, Brian Stansberry wrote:
>>>>>>>>>>>>>>>> On 8/18/14, 10:30 AM, Tom Fonteyne wrote:
>>>>>>>>>>>>>>>>>> Do you have any interest in looking into that?
>>>>>>>>>>>>>>>>> sure - if I can I will :)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> But you'll need to send me details on where to hook up in
>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>> code/modules
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Great. Just a warning -- this will be a fairly complex task.
>>>>>>>>>>>>>>>> But
>>>>>>>>>>>>>>>> hopefully not crazily so. :) And hopefully interesting.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Seems JIRA works, so long as you don't need to log in.
>>>>>>>>>>>>>>>> There are two JIRAs:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> https://issues.jboss.org/browse/WFLY-1106
>>>>>>>>>>>>>>>> https://issues.jboss.org/browse/WFLY-1706
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The second is really more of a duplicate of the former and
>>>>>>>>>>>>>>>> I'll
>>>>>>>>>>>>>>>> probably close it as such.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This would be part of WildFly Core (I'll move the issues to
>>>>>>>>>>>>>>>> WFCORE at
>>>>>>>>>>>>>>>> some point.)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What's needed is an implementation of the
>>>>>>>>>>>>>>>> OperationStepHandler
>>>>>>>>>>>>>>>> interface in the WildFly Core controller module. We can chat
>>>>>>>>>>>>>>>> off-list
>>>>>>>>>>>>>>>> about details.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Doing this for profiles (or children of standalone server
>>>>>>>>>>>>>>>> subsystems)
>>>>>>>>>>>>>>>> should be fairly straightforward, because the "describe"
>>>>>>>>>>>>>>>> operation
>>>>>>>>>>>>>>>> that I mentioned exists. Moving to other resources outside
>>>>>>>>>>>>>>>> of a
>>>>>>>>>>>>>>>> profile/subsystem will require some work from my team, to
>>>>>>>>>>>>>>>> provide
>>>>>>>>>>>>>>>> some
>>>>>>>>>>>>>>>> sort of analogue to the "describe" operation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> To show you what I mean about the describe operation, here I
>>>>>>>>>>>>>>>> run it
>>>>>>>>>>>>>>>> from the CLI against a standalone server's datasource
>>>>>>>>>>>>>>>> subsystem.
>>>>>>>>>>>>>>>> The
>>>>>>>>>>>>>>>> result is a list, where each element in the list is the
>>>>>>>>>>>>>>>> equivalent of
>>>>>>>>>>>>>>>> a low level operation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [standalone@localhost:9990 /] /subsystem=datasources:describe
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>> "outcome" => "success",
>>>>>>>>>>>>>>>> "result" => [
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>> "operation" => "add",
>>>>>>>>>>>>>>>> "address" => [("subsystem" => "datasources")]
>>>>>>>>>>>>>>>> },
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>> "deployment-name" => undefined,
>>>>>>>>>>>>>>>> "driver-name" => "h2",
>>>>>>>>>>>>>>>> "driver-module-name" => "com.h2database.h2",
>>>>>>>>>>>>>>>> "module-slot" => undefined,
>>>>>>>>>>>>>>>> "driver-class-name" => undefined,
>>>>>>>>>>>>>>>> "driver-datasource-class-name" => undefined,
>>>>>>>>>>>>>>>> "driver-xa-datasource-class-name" =>
>>>>>>>>>>>>>>>> "org.h2.jdbcx.JdbcDataSource",
>>>>>>>>>>>>>>>> "xa-datasource-class" => undefined,
>>>>>>>>>>>>>>>> "driver-major-version" => undefined,
>>>>>>>>>>>>>>>> "driver-minor-version" => undefined,
>>>>>>>>>>>>>>>> "jdbc-compliant" => undefined,
>>>>>>>>>>>>>>>> "operation" => "add",
>>>>>>>>>>>>>>>> "address" => [
>>>>>>>>>>>>>>>> ("subsystem" => "datasources"),
>>>>>>>>>>>>>>>> ("jdbc-driver" => "h2")
>>>>>>>>>>>>>>>> ]
>>>>>>>>>>>>>>>> },
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>> "connection-url" =>
>>>>>>>>>>>>>>>> "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE",
>>>>>>>>>>>>>>>> "driver-class" => undefined,
>>>>>>>>>>>>>>>> "datasource-class" => undefined,
>>>>>>>>>>>>>>>> "jndi-name" =>
>>>>>>>>>>>>>>>> "java:jboss/datasources/ExampleDS",
>>>>>>>>>>>>>>>> "driver-name" => "h2",
>>>>>>>>>>>>>>>> "new-connection-sql" => undefined,
>>>>>>>>>>>>>>>> "url-delimiter" => undefined,
>>>>>>>>>>>>>>>> "url-selector-strategy-class-name" => undefined,
>>>>>>>>>>>>>>>> "use-java-context" => true,
>>>>>>>>>>>>>>>> "jta" => undefined,
>>>>>>>>>>>>>>>> "max-pool-size" => undefined,
>>>>>>>>>>>>>>>> "min-pool-size" => undefined,
>>>>>>>>>>>>>>>> "initial-pool-size" => undefined,
>>>>>>>>>>>>>>>> "pool-prefill" => undefined,
>>>>>>>>>>>>>>>> "pool-use-strict-min" => undefined,
>>>>>>>>>>>>>>>> "capacity-incrementer-class" => undefined,
>>>>>>>>>>>>>>>> "capacity-decrementer-class" => undefined,
>>>>>>>>>>>>>>>> "user-name" => "sa",
>>>>>>>>>>>>>>>> "password" => "sa",
>>>>>>>>>>>>>>>> "security-domain" => undefined,
>>>>>>>>>>>>>>>> "reauth-plugin-class-name" => undefined,
>>>>>>>>>>>>>>>> "flush-strategy" => undefined,
>>>>>>>>>>>>>>>> "allow-multiple-users" => undefined,
>>>>>>>>>>>>>>>> "connection-listener-class" => undefined,
>>>>>>>>>>>>>>>> "connection-properties" => undefined,
>>>>>>>>>>>>>>>> "prepared-statements-cache-size" => undefined,
>>>>>>>>>>>>>>>> "share-prepared-statements" => undefined,
>>>>>>>>>>>>>>>> "track-statements" => undefined,
>>>>>>>>>>>>>>>> "allocation-retry" => undefined,
>>>>>>>>>>>>>>>> "allocation-retry-wait-millis" => undefined,
>>>>>>>>>>>>>>>> "blocking-timeout-wait-millis" => undefined,
>>>>>>>>>>>>>>>> "idle-timeout-minutes" => undefined,
>>>>>>>>>>>>>>>> "query-timeout" => undefined,
>>>>>>>>>>>>>>>> "use-try-lock" => undefined,
>>>>>>>>>>>>>>>> "set-tx-query-timeout" => undefined,
>>>>>>>>>>>>>>>> "transaction-isolation" => undefined,
>>>>>>>>>>>>>>>> "check-valid-connection-sql" => undefined,
>>>>>>>>>>>>>>>> "exception-sorter-class-name" => undefined,
>>>>>>>>>>>>>>>> "stale-connection-checker-class-name" => undefined,
>>>>>>>>>>>>>>>> "valid-connection-checker-class-name" => undefined,
>>>>>>>>>>>>>>>> "background-validation-millis" => undefined,
>>>>>>>>>>>>>>>> "background-validation" => undefined,
>>>>>>>>>>>>>>>> "use-fast-fail" => undefined,
>>>>>>>>>>>>>>>> "validate-on-match" => undefined,
>>>>>>>>>>>>>>>> "spy" => undefined,
>>>>>>>>>>>>>>>> "use-ccm" => undefined,
>>>>>>>>>>>>>>>> "enabled" => true,
>>>>>>>>>>>>>>>> "connectable" => undefined,
>>>>>>>>>>>>>>>> "statistics-enabled" => undefined,
>>>>>>>>>>>>>>>> "tracking" => undefined,
>>>>>>>>>>>>>>>> "reauth-plugin-properties" => undefined,
>>>>>>>>>>>>>>>> "exception-sorter-properties" => undefined,
>>>>>>>>>>>>>>>> "stale-connection-checker-properties" => undefined,
>>>>>>>>>>>>>>>> "valid-connection-checker-properties" => undefined,
>>>>>>>>>>>>>>>> "connection-listener-property" => undefined,
>>>>>>>>>>>>>>>> "capacity-incrementer-properties" => undefined,
>>>>>>>>>>>>>>>> "capacity-decrementer-properties" => undefined,
>>>>>>>>>>>>>>>> "operation" => "add",
>>>>>>>>>>>>>>>> "address" => [
>>>>>>>>>>>>>>>> ("subsystem" => "datasources"),
>>>>>>>>>>>>>>>> ("data-source" => "ExampleDS")
>>>>>>>>>>>>>>>> ]
>>>>>>>>>>>>>>>> },
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>> "connection-url" =>
>>>>>>>>>>>>>>>> "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE",
>>>>>>>>>>>>>>>> "driver-class" => undefined,
>>>>>>>>>>>>>>>> "datasource-class" => undefined,
>>>>>>>>>>>>>>>> "jndi-name" =>
>>>>>>>>>>>>>>>> "java:jboss/datasources/ExampleDS",
>>>>>>>>>>>>>>>> "driver-name" => "h2",
>>>>>>>>>>>>>>>> "new-connection-sql" => undefined,
>>>>>>>>>>>>>>>> "url-delimiter" => undefined,
>>>>>>>>>>>>>>>> "url-selector-strategy-class-name" => undefined,
>>>>>>>>>>>>>>>> "use-java-context" => true,
>>>>>>>>>>>>>>>> "jta" => undefined,
>>>>>>>>>>>>>>>> "max-pool-size" => undefined,
>>>>>>>>>>>>>>>> "min-pool-size" => undefined,
>>>>>>>>>>>>>>>> "initial-pool-size" => undefined,
>>>>>>>>>>>>>>>> "pool-prefill" => undefined,
>>>>>>>>>>>>>>>> "pool-use-strict-min" => undefined,
>>>>>>>>>>>>>>>> "capacity-incrementer-class" => undefined,
>>>>>>>>>>>>>>>> "capacity-decrementer-class" => undefined,
>>>>>>>>>>>>>>>> "user-name" => "sa",
>>>>>>>>>>>>>>>> "password" => "sa",
>>>>>>>>>>>>>>>> "security-domain" => undefined,
>>>>>>>>>>>>>>>> "reauth-plugin-class-name" => undefined,
>>>>>>>>>>>>>>>> "flush-strategy" => undefined,
>>>>>>>>>>>>>>>> "allow-multiple-users" => undefined,
>>>>>>>>>>>>>>>> "connection-listener-class" => undefined,
>>>>>>>>>>>>>>>> "connection-properties" => undefined,
>>>>>>>>>>>>>>>> "prepared-statements-cache-size" => undefined,
>>>>>>>>>>>>>>>> "share-prepared-statements" => undefined,
>>>>>>>>>>>>>>>> "track-statements" => undefined,
>>>>>>>>>>>>>>>> "allocation-retry" => undefined,
>>>>>>>>>>>>>>>> "allocation-retry-wait-millis" => undefined,
>>>>>>>>>>>>>>>> "blocking-timeout-wait-millis" => undefined,
>>>>>>>>>>>>>>>> "idle-timeout-minutes" => undefined,
>>>>>>>>>>>>>>>> "query-timeout" => undefined,
>>>>>>>>>>>>>>>> "use-try-lock" => undefined,
>>>>>>>>>>>>>>>> "set-tx-query-timeout" => undefined,
>>>>>>>>>>>>>>>> "transaction-isolation" => undefined,
>>>>>>>>>>>>>>>> "check-valid-connection-sql" => undefined,
>>>>>>>>>>>>>>>> "exception-sorter-class-name" => undefined,
>>>>>>>>>>>>>>>> "stale-connection-checker-class-name" => undefined,
>>>>>>>>>>>>>>>> "valid-connection-checker-class-name" => undefined,
>>>>>>>>>>>>>>>> "background-validation-millis" => undefined,
>>>>>>>>>>>>>>>> "background-validation" => undefined,
>>>>>>>>>>>>>>>> "use-fast-fail" => undefined,
>>>>>>>>>>>>>>>> "validate-on-match" => undefined,
>>>>>>>>>>>>>>>> "spy" => undefined,
>>>>>>>>>>>>>>>> "use-ccm" => undefined,
>>>>>>>>>>>>>>>> "enabled" => true,
>>>>>>>>>>>>>>>> "connectable" => undefined,
>>>>>>>>>>>>>>>> "statistics-enabled" => undefined,
>>>>>>>>>>>>>>>> "tracking" => undefined,
>>>>>>>>>>>>>>>> "reauth-plugin-properties" => undefined,
>>>>>>>>>>>>>>>> "exception-sorter-properties" => undefined,
>>>>>>>>>>>>>>>> "stale-connection-checker-properties" => undefined,
>>>>>>>>>>>>>>>> "valid-connection-checker-properties" => undefined,
>>>>>>>>>>>>>>>> "connection-listener-property" => undefined,
>>>>>>>>>>>>>>>> "capacity-incrementer-properties" => undefined,
>>>>>>>>>>>>>>>> "capacity-decrementer-properties" => undefined,
>>>>>>>>>>>>>>>> "operation" => "add",
>>>>>>>>>>>>>>>> "address" => [
>>>>>>>>>>>>>>>> ("subsystem" => "datasources"),
>>>>>>>>>>>>>>>> ("data-source" => "ExampleDS")
>>>>>>>>>>>>>>>> ]
>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>> ]
>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 18/08/14 16:04, Brian Stansberry wrote:
>>>>>>>>>>>>>>>>>> Cool. That can be quite helpful. It's nice to see folks
>>>>>>>>>>>>>>>>>> writing
>>>>>>>>>>>>>>>>>> tooling based on the management API. :)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> There's a JIRA out there for doing this on the server side
>>>>>>>>>>>>>>>>>> instead of
>>>>>>>>>>>>>>>>>> the client side. So a user invokes an op and the server
>>>>>>>>>>>>>>>>>> handles it.
>>>>>>>>>>>>>>>>>> For sure that will be done for EAP 7. Do you have any
>>>>>>>>>>>>>>>>>> interest in
>>>>>>>>>>>>>>>>>> looking into that?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The jboss.org site is down for maintenance, or I'd send
>>>>>>>>>>>>>>>>>> you the
>>>>>>>>>>>>>>>>>> link
>>>>>>>>>>>>>>>>>> to the JIRA.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The server-side can utilize an internal operation called
>>>>>>>>>>>>>>>>>> "describe" to
>>>>>>>>>>>>>>>>>> get the exact ops needed to create a profile. From there
>>>>>>>>>>>>>>>>>> it's
>>>>>>>>>>>>>>>>>> just a
>>>>>>>>>>>>>>>>>> matter of changing the addresses to point to the target.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 8/18/14, 7:04 AM, Tom Fonteyne wrote:
>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I decided to stick my head into DMR code and as a result
>>>>>>>>>>>>>>>>>>> ended up
>>>>>>>>>>>>>>>>>>> writing a Profile Clone tool
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It connects to a running domain controller, reads the
>>>>>>>>>>>>>>>>>>> desired
>>>>>>>>>>>>>>>>>>> origin
>>>>>>>>>>>>>>>>>>> profile and spits out a file with CLI commands.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It's not limited to profiles, it can basically read any
>>>>>>>>>>>>>>>>>>> root
>>>>>>>>>>>>>>>>>>> element
>>>>>>>>>>>>>>>>>>> (not
>>>>>>>>>>>>>>>>>>> all make sense of course) so you can also for example
>>>>>>>>>>>>>>>>>>> clone
>>>>>>>>>>>>>>>>>>> socket-binding-group
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I've tested it with full-ha with added datasources and
>>>>>>>>>>>>>>>>>>> security
>>>>>>>>>>>>>>>>>>> domains.
>>>>>>>>>>>>>>>>>>> Will it handle any profile ? Well, it should... but if you
>>>>>>>>>>>>>>>>>>> have
>>>>>>>>>>>>>>>>>>> one
>>>>>>>>>>>>>>>>>>> that
>>>>>>>>>>>>>>>>>>> breaks then please let me know.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> There is a caveat with hornetq connection factories though:
>>>>>>>>>>>>>>>>>>> double
>>>>>>>>>>>>>>>>>>> check
>>>>>>>>>>>>>>>>>>> if the right connector is set.
>>>>>>>>>>>>>>>>>>> Correct manually in the output if needed !
>>>>>>>>>>>>>>>>>>> Check the entries:
>>>>>>>>>>>>>>>>>>> /profile=.*/subsystem=messaging/hornetq-server=.*/connection-factory=.*
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> and see/correct the connector attribute:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> connector={\"in-vm\" => undefined}
>>>>>>>>>>>>>>>>>>> or
>>>>>>>>>>>>>>>>>>> connector={\"netty\" => undefined}
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The reason is that the Cloner class does not set undefined
>>>>>>>>>>>>>>>>>>> values
>>>>>>>>>>>>>>>>>>> which
>>>>>>>>>>>>>>>>>>> is logical,
>>>>>>>>>>>>>>>>>>> but the hornetq connector must be defined with an
>>>>>>>>>>>>>>>>>>> "undefined"
>>>>>>>>>>>>>>>>>>> which is
>>>>>>>>>>>>>>>>>>> not logical...
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Get the binary from here:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> http://zen.usersys.redhat.com/downloads/eap/apps/profilecloner.jar
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> http://zen.usersys.redhat.com/downloads/eap/apps/profilecloner.sh
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The sh file is bare-bones and expects JBOSS_HOME to be
>>>>>>>>>>>>>>>>>>> set. Or
>>>>>>>>>>>>>>>>>>> without
>>>>>>>>>>>>>>>>>>> the sh:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> java -cp
>>>>>>>>>>>>>>>>>>> $JBOSS_HOME/bin/client/jboss-cli-client.jar:profilecloner.jar
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> org.jboss.tfonteyne.profilecloner.Main
>>>>>>>>>>>>>>>>>>> --controller=<host> --username=<user>
>>>>>>>>>>>>>>>>>>> --password=<password>
>>>>>>>>>>>>>>>>>>> --port=<number> --file=<name>
>>>>>>>>>>>>>>>>>>> rootelement from to [rootelement from to] ....
>>>>>>>>>>>>>>>>>>> where "rootelement from to" is for example:
>>>>>>>>>>>>>>>>>>> socket-binding-group full-ha-sockets
>>>>>>>>>>>>>>>>>>> full-ha-sockets-copy
>>>>>>>>>>>>>>>>>>> profile full-ha full-ha-copy
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> example:
>>>>>>>>>>>>>>>>>>> java -cp
>>>>>>>>>>>>>>>>>>> $JBOSS_HOME/bin/client/jboss-cli-client.jar:profilecloner.jar
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> org.jboss.tfonteyne.profilecloner.Main
>>>>>>>>>>>>>>>>>>> --controller=localhost
>>>>>>>>>>>>>>>>>>> --user=admin --password=secret --file=output.cli profile
>>>>>>>>>>>>>>>>>>> full-ha
>>>>>>>>>>>>>>>>>>> mycopy
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The sources are here:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> https://github.com/tfonteyn/profilecloner
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Is the code any good ? Can engineering use it to stick it
>>>>>>>>>>>>>>>>>>> into the
>>>>>>>>>>>>>>>>>>> product ?
>>>>>>>>>>>>>>>>>>> I dunno... it works but was built by trial and error.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> PLEASE send me feedback !
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>>>>>>> Tom
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>
>>
>>
>> --
>> Brian Stansberry
>> Senior Principal Software Engineer
>> JBoss by Red Hat
>
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
10 years, 3 months
Re: [wildfly-dev] [undertow-dev] single-sign-on and reauthenticate=false
by Tomaž Cerar
This is a discussion that is more appropriate for the wildfly-dev mailing list.
(CCed now)
On Tue, Sep 23, 2014 at 5:29 PM, Mattias Nilsson Grip <
mattias.nilsson.grip(a)redpill-linpro.com> wrote:
> Hi,
>
> I see in a commit message from February "Drop superfluous re-authenticate
> attribute of <single-sign-on/>."
>
> Looks like re-authenticate=true is still the default behaviour? In
> previous JBoss versions it was possible to use re-authenticate=false to do
> single-sign-on for two web applications in different security domains
> without the need to reauthenticate. What is the proper way to do that now?
> Should we configure an identity provider?
>
> Regards,
> Mattias
> _______________________________________________
> undertow-dev mailing list
> undertow-dev(a)lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/undertow-dev
>
10 years, 3 months
JAR scanning
by Thomas Segismont
Hi everyone,
In RHQ I need a way to list my domain classes at runtime (when preparing
the CLI environment for execution, but that's another story).
In the past, for the same issue in another project, I did something
like looking for the persistence.xml file in the classpath and then
scanning the JAR where it was found.
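
For what it's worth, that approach looked roughly like this (an illustrative
sketch rather than the actual code, and I haven't checked it against the
JBoss Modules class loaders we would have here):

import java.io.IOException;
import java.net.JarURLConnection;
import java.net.URL;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

static void scanDomainClasses() throws IOException {
    Enumeration<URL> urls = Thread.currentThread().getContextClassLoader()
            .getResources("META-INF/persistence.xml");
    while (urls.hasMoreElements()) {
        // e.g. jar:file:/path/app.jar!/META-INF/persistence.xml
        URL url = urls.nextElement();
        JarURLConnection conn = (JarURLConnection) url.openConnection();
        conn.setUseCaches(false); // don't disturb the shared JAR cache
        try (JarFile jar = conn.getJarFile()) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (name.endsWith(".class")) {
                    // turn "com/acme/Foo.class" into a class name and inspect it
                }
            }
        }
    }
}
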
I wonder if there's something I could re-use from our EAP6.3-alpha base.
Any idea?
Thanks,
Thomas
10 years, 3 months
Changes to WildFly pull request testing
by Tomaž Cerar
Hi guys,
As most of you have already noticed, we started testing wildfly-core pull
requests on both Linux and Windows build agents a few weeks back.
Since then we have ironed out the few remaining problems, added a few more
Windows agents, and started working on enabling this kind of testing for
WildFly pull requests as well.
As of today, all pull requests (or retests) for WildFly will also be tested
on both platforms.
How does all this work?
We now have 3 separate CI jobs for each repository acting as parts of a
build chain:
- one for the Linux build
- one for the Windows build
- one aggregate job
A build is first started for a given feature branch (pull request number) on
the aggregate job, which then uses a chained build to start the Linux and
Windows jobs simultaneously while the aggregate job waits in the queue for
both of them to complete.
When both jobs complete successfully, the aggregate job runs and
immediately completes with a success status,
which is posted as the result to the pull request.
If either job fails, the aggregate job fails with a message like
"Snapshot dependency failed: WildFly :: Pull Request :: Windows"
- in this example meaning the Windows job had test failures.
On pull requests you open you will now see 3 different messages posted,
each one prefixed with the platform the job ran on.
This way you can tell whether the job/tests failed or passed only on Linux
or only on Windows.
For those of you who have rights to manually start pull request jobs on
brones directly, just a few pieces of advice:
if you want to test only one platform, open the build for the platform you
want to test, find the proper feature branch (PR number), and run it.
If you want to test both platforms, do that via the aggregate job.
For an easier overview, the pull request jobs have been moved to a
sub-project for both the WildFly and WildFly Core projects in TeamCity.
As many of you have asked what kind of OS/HW config the build agents
have:
- Linux agents have 4 GB RAM and 2 vCPUs and are running CentOS 6.5 x64
- Windows agents have 4 GB RAM and 2-4 vCPUs and are running Windows Server
2012 R2 x64
In the following weeks a few more Windows agents will be added to the build
pool; how many will be decided based on how heavily the current ones are
utilized.
If you see any problems let me know.
cheers,
tomaz
10 years, 3 months
Automatically resolving expressions in the CLI
by Edward Wertz
I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI.
From my understanding, there are two variations of the problem.
* Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource'
* Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls'
I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and have gotten it working for the standalone and domain sides of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions, such as the 'profile=full' section of the domain tree. The results wouldn't be reliable.
The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable.
I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate those values into the existing framework successfully, other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem.
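
For reference, the existing operation can be used like this (the expression
and result here are just an illustration):

[standalone@localhost:9990 /] :resolve-expression(expression=${jboss.node.name})
{
    "outcome" => "success",
    "result" => "myhost"
}

i.e. one expression at a time, which is what makes it laborious.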
Thanks,
Joe Wertz
10 years, 3 months