[JBoss JIRA] (WFLY-6278) Requesting a session with an unexpected character causes request to fail
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6278?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-6278:
-------------------------------
Priority: Blocker (was: Critical)
> Requesting a session with an unexpected character causes request to fail
> ------------------------------------------------------------------------
>
> Key: WFLY-6278
> URL: https://issues.jboss.org/browse/WFLY-6278
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Blocker
>
> The root cause of the problem is that the distributed web session code optimizes the marshalling of the session identifier by using a URL-safe Base64 codec. Because this marshalling happens transparently, when Cache.get(...) goes remote (since a session ID containing an invalid character will never be found locally), the resulting IllegalArgumentException goes uncaught and propagates back to the client.
> To prevent this, we need to validate that the requested session ID can be serialized, and if not, respond as if the session was not found.
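A minimal sketch of that guard, assuming Java 8's java.util.Base64 URL-safe decoder stands in for the codec the session code uses (the class and method names here are illustrative, not WildFly's actual code):

    import java.util.Base64;

    public class SessionIdValidator {

        // Returns the decoded cache key, or null when the requested ID contains
        // a character outside the URL-safe Base64 alphabet. Callers treat null
        // as "session not found" instead of letting the request fail.
        static byte[] tryDecode(String requestedSessionId) {
            try {
                return Base64.getUrlDecoder().decode(requestedSessionId);
            } catch (IllegalArgumentException e) {
                // The same exception that previously escaped from Cache.get(...),
                // now caught before the lookup ever goes remote.
                return null;
            }
        }
    }

With a check like this in front of the cache lookup, a request carrying a malformed session ID simply gets a fresh session rather than an error response.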
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6279) Requesting a session with an unexpected character causes request to fail
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-6279:
----------------------------------
Summary: Requesting a session with an unexpected character causes request to fail
Key: WFLY-6279
URL: https://issues.jboss.org/browse/WFLY-6279
Project: WildFly
Issue Type: Bug
Components: Clustering, Web (Undertow)
Affects Versions: 10.0.0.Final
Reporter: Paul Ferraro
Assignee: Paul Ferraro
Priority: Blocker
The root cause of the problem is that the distributed web session code optimizes the marshalling of the session identifier by using a URL-safe Base64 codec. Because this marshalling happens transparently, when Cache.get(...) goes remote (since a session ID containing an invalid character will never be found locally), the resulting IllegalArgumentException goes uncaught and propagates back to the client.
To prevent this, we need to validate that the requested session ID can be serialized, and if not, respond as if the session was not found.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-433) git backend for loading/storing the configuration XML for wildfly
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin... ]
Brian Stansberry edited comment on WFCORE-433 at 2/24/16 4:36 PM:
------------------------------------------------------------------
[~jastrachan]
Thanks for the input. I'll brain-dump here...
First, to get this out of the way: while I don't think it will be an emphasis outside of the WildFly project, I think it's important that WildFly's existing domain mode (DC etc.) can be made to work reasonably well in a Kubernetes environment, for a few reasons: 1) it's a reasonable bet some customers will demand it, so it's better to be prepared; 2) Thomas Diesler, Heiko Braun and Harald Pehl were able to make it work fairly easily in a prototype a year ago, so something already exists; and 3) the PetSets proposal (https://github.com/smarterclayton/kubernetes/blob/petset/docs/proposals/p...) sounds like it will address one of the main pain points of running a DC.
BUT, I don't expect using a DC to be the direction Red Hat's overall efforts go. That's because it's specific to WildFly/EAP containers, while the goal in our OpenShift offerings is to do things in a *consistent* way. Let Kubernetes control container lifecycle, and have a common approach for how containers interact with externalized configuration, and, hopefully, for how users update that configuration.
It's important that there's an agreed-upon solution to the issues in the last paragraph across the middleware cloud efforts. That's something that needs to get sorted out amongst the various players in cloud enablement, xpaas and the various middleware projects. WildFly/EAP putting effort into something that ends up not being standard is a poor use of resources; we already have something that's specific to WildFly.
Enough, blah blah blah, now to dig into the architecture you outline. Each item below is an element in the architecture. I'm basically just regurgitating here.
1) We have immutable pods running WildFly/EAP containers (standalone server) where it's the container and not WildFly itself that cares about git. Container start pulls from git to put the right files on disk and then starts WF. This is easy for WildFly; the cloud enablement guys do the work creating the container images. ;) Pods are immutable so there are no concerns about writes to git from these pods. (BTW WF supports a mode where a user can tweak a normally persistent config setting, e.g. a log level, but the change is not persisted to standalone.xml. So a semi-immutable pod is possible.)
AIUI for this piece of the architecture this same basic approach could be used, swapping in ConfigMap for git, with the container and not WF/EAP itself being concerned with using the ConfigMap.
2) Some process needs to take user config changes and write them to git or ConfigMap. This isn't the WF/EAP Admin Console, which is a browser-based client. It's a separate controller type from what's used for pods, as there should only be one (or some coordination to avoid write conflicts is needed). PetSets? And it needs to understand how to write to git/ConfigMap. This sounds like a WF/EAP server running in --admin-only mode (so it doesn't launch unneeded services to handle normal user requests, with possible negative impacts like joining JGroups clusters with the type 1) pods). For this part, new functionality in WF is needed: the ability to update git/ConfigMap when the config is persisted (see the sketch after this comment).
We use a WF process for this because it understands our supported management API and knows how to take changes and persist them.
3) The management client. AIUI we are not planning on using WF's HAL Admin Console in the cloud; the goal is to have an overall management console, not container-specific ones. Is this changing? HAL is currently not enabled when the server is --admin-only, so to use HAL this would have to be addressed somehow.
4) Something that reacts to changes and does all the new ReplicationController, rolling upgrade stuff you outline. This sounds like a general function of the xpaas, not something WF/EAP does. There would need to be some sort of information exposed by the type 1) and type 2) containers though so the coordination stuff knows which type 1) pods are related to which type 2).
I suppose this functionality can be run in the type 2) container, but it should be general purpose logic.
Comparing all this to using a DC, it seems quite similar, as the type 2) element is quite similar to a DC. Basic differences:
1) Writes are not directly pushed to servers; instead they only get picked up when a new pod starts. I think it would be pretty simple to add that kind of semantic to servers managed by a DC.
2) Type 1) pods don't need a running type 2) pod to start; they just need to be able to pull git/ConfigMap. That's pretty important. Hmmm, is that true though? Something needs to provide the remote git repo / ConfigMap. (Time to read up more.)
3) A DC provides a central point for coordinating reads of the servers, which is nice. It's also non-standard.
4) The DC keeps a config history, which you can use to revert to a previous config. It's not as simple as using git though.
Re: lifecycle hooks, the EAP images cloud enablement produces for OpenShift use those. WF 10 / EAP 7 are much better in terms of how graceful shutdown works though, which makes it easier to write good hooks. I talked about this some with Ales Justin a couple weeks ago at the cloud enablement meetings.
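To make item 2) concrete, here is a rough sketch of that missing piece. Every name below is invented for illustration (WildFly's real persistence SPI differs), and the git/ConfigMap details sit behind a single assumed abstraction:

    import java.nio.file.Path;

    // Hypothetical stand-in for WildFly's configuration persister SPI.
    interface ConfigPersister {
        void persist(Path configFile) throws Exception;
    }

    // Hypothetical abstraction over the external store: a git repo or a ConfigMap.
    interface ExternalConfigStore {
        void publish(Path configFile) throws Exception;
    }

    // Decorator: whenever the admin-only server persists standalone.xml,
    // it also publishes the result to the external store.
    class PublishingPersister implements ConfigPersister {
        private final ConfigPersister delegate;
        private final ExternalConfigStore store;

        PublishingPersister(ConfigPersister delegate, ExternalConfigStore store) {
            this.delegate = delegate;
            this.store = store;
        }

        @Override
        public void persist(Path configFile) throws Exception {
            delegate.persist(configFile);  // normal write to disk first
            store.publish(configFile);     // then push to git / update the ConfigMap
        }
    }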
> git backend for loading/storing the configuration XML for wildfly
> -----------------------------------------------------------------
>
> Key: WFCORE-433
> URL: https://issues.jboss.org/browse/WFCORE-433
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Reporter: James Strachan
> Assignee: Jason Greene
>
> When working with WildFly in a cloud/PaaS environment (like OpenShift, fabric8, Docker, Heroku et al.), it'd be great to have a git repository for the configuration folder, so that writes work something like:
> * git pull
> * write the configuration file (say, standalone.xml)
> * git commit -a -m "some comment"
> * git push
> (with a handler to deal with conflicts, such as last write wins).
> Then an optional periodic 'git pull' and a configuration reload if there is a change.
> This would mean that folks could run a number of WildFly containers on Docker / OpenShift / fabric8 and have a shared git repository (e.g. the git repo in OpenShift or fabric8) to configure the whole group. Folks could then reuse the WildFly management console within cloud environments (as the management console would, under the covers, be loading/saving from/to git).
> Folks could then benefit from git tooling for versioning and audit logs of changes to the XML, along with the benefits of branching and tagging.
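As one hypothetical way a Java process could implement that write cycle, here is a sketch using the JGit API (the GitConfigStore name, repository layout and conflict policy are assumptions drawn from the description above, not an existing WildFly feature):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.eclipse.jgit.api.Git;

    public class GitConfigStore {

        // Mirrors the pull / write / commit / push cycle described above.
        public static void save(Path repoDir, String relativeConfig, byte[] xml) throws Exception {
            try (Git git = Git.open(repoDir.toFile())) {
                git.pull().call();                                  // git pull
                Files.write(repoDir.resolve(relativeConfig), xml);  // write standalone.xml
                git.add().addFilepattern(relativeConfig).call();
                git.commit().setMessage("some comment").call();     // git commit -a -m "..."
                git.push().call();                                  // git push
            }
        }
    }

A real implementation would inspect the result of pull() for merge conflicts and apply whatever policy (e.g. last write wins) was agreed on; that handling is elided here.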
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-433) git backend for loading/storing the configuration XML for wildfly
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin... ]
Brian Stansberry commented on WFCORE-433:
-----------------------------------------
[~jastrachan]
Thanks for the input. I'll brain-dump here...
First, to get this out of the way: while I don't think it will be an emphasis outside of the WildFly project, I think it's important that WildFly's existing domain mode (DC etc.) can be made to work reasonably well in a Kubernetes environment, for a few reasons: 1) it's a reasonable bet some customers will demand it, so it's better to be prepared; 2) Thomas Diesler, Heiko Braun and Harald Pehl were able to make it work fairly easily in a prototype a year ago, so something already exists; and 3) the PetSets proposal (https://github.com/smarterclayton/kubernetes/blob/petset/docs/proposals/p...) sounds like it will address one of the main pain points of running a DC.
BUT, I don't expect using a DC to be the direction Red Hat's overall efforts go. That's because it's specific to WildFly/EAP containers, while the goal in our OpenShift offerings is to do things in a *consistent* way. Let Kubernetes control container lifecycle, and have a common approach for how containers interact with externalized configuration, and, hopefully, for how users update that configuration.
It's important that there's an agreed-upon solution to the issues in the last paragraph across the middleware cloud efforts. That's something that needs to get sorted out amongst the various players in cloud enablement, xpaas and the various middleware projects. WildFly/EAP putting effort into something that ends up not being standard is a poor use of resources; we already have something that's specific to WildFly.
Enough, blah blah blah, now to dig into the architecture you outline. Each item below is an element in the architecture. I'm basically just regurgitating here.
1) We have immutable pods running WildFly/EAP containers (standalone server) where it's the container and not WildFly itself that cares about git. Container start pulls from git to put the right files on disk and then starts WF. This is easy for WildFly; the cloud enablement guys do the work creating the container images. ;) Pods are immutable so there are no concerns about writes to git from these pods. (BTW WF supports a mode where a user can tweak a normally persistent config setting, e.g. a log level, but the change is not persisted to standalone.xml. So a semi-immutable pod is possible.)
AIUI for this piece of the architecture this same basic approach could be used, swapping in ConfigMap for git, with the container and not WF/EAP itself being concerned with using the ConfigMap.
2) Some process needs to take user config changes and write them to git or ConfigMap. This isn't the WF/EAP Admin Console, which is a browser-based client. It's a separate controller type from what's used for pods, as there should only be one (or some coordination to avoid write conflicts is needed). PetSets? And it needs to understand how to write to git/ConfigMap. This sounds like a WF/EAP server running in --admin-only mode (so it doesn't launch unneeded services to handle normal user requests, with possible negative impacts like joining JGroups clusters with the type 1) pods). For this part, new functionality in WF is needed: the ability to update git/ConfigMap when the config is persisted.
We use a WF process for this because it understands our supported management API and knows how to take changes and persist them.
3) The management client. AIUI we are not planning on using WF's HAL Admin Console in the cloud; the goal is to have an overall management console, not container-specific ones. Is this changing? HAL is currently not enabled when the server is --admin-only, so to use HAL this would have to be addressed somehow.
4) Something that reacts to changes and does all the new ReplicationController, rolling upgrade stuff you outline. This sounds like a general function of the xpaas, not something WF/EAP does. There would need to be some sort of information exposed by the type 1) and type 2) containers though so the coordination stuff knows which type 1) pods are related to which type 2).
I suppose this functionality can be run in the type 2) container, but it should be general purpose logic.
Comparing all this to using a DC, it seems quite similar, as the type 2) element is quite similar to a DC. Basic differences:
1) Writes are not directly pushed to servers; instead they only get picked up when a new pod starts. I think it would be pretty simple to add that kind of semantic to servers managed by a DC.
2) Type 1) pods don't need a running type 2) pod to start; they just need to be able to pull git/ConfigMap. That's pretty important. Hmmm, is that true though? Something needs to provide the remote git repo / ConfigMap. (Time to read up more.)
3) A DC provides a central point for coordinating reads of the servers, which is nice. It's also non-standard.
Re: lifecycle hooks, the EAP images cloud enablement produces for OpenShift use those. WF 10 / EAP 7 are much better in terms of how graceful shutdown works though, which makes it easier to write good hooks. I talked about this some with Ales Justin a couple weeks ago at the cloud enablement meetings.
> git backend for loading/storing the configuration XML for wildfly
> -----------------------------------------------------------------
>
> Key: WFCORE-433
> URL: https://issues.jboss.org/browse/WFCORE-433
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Reporter: James Strachan
> Assignee: Jason Greene
>
> When working with WildFly in a cloud/PaaS environment (like OpenShift, fabric8, Docker, Heroku et al.), it'd be great to have a git repository for the configuration folder, so that writes work something like:
> * git pull
> * write the configuration file (say, standalone.xml)
> * git commit -a -m "some comment"
> * git push
> (with a handler to deal with conflicts, such as last write wins).
> Then an optional periodic 'git pull' and a configuration reload if there is a change.
> This would mean that folks could run a number of WildFly containers on Docker / OpenShift / fabric8 and have a shared git repository (e.g. the git repo in OpenShift or fabric8) to configure the whole group. Folks could then reuse the WildFly management console within cloud environments (as the management console would, under the covers, be loading/saving from/to git).
> Folks could then benefit from git tooling for versioning and audit logs of changes to the XML, along with the benefits of branching and tagging.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6278) Requesting a session with an unexpected character causes request to fail
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-6278:
----------------------------------
Summary: Requesting a session with an unexpected character causes request to fail
Key: WFLY-6278
URL: https://issues.jboss.org/browse/WFLY-6278
Project: WildFly
Issue Type: Bug
Components: Clustering, Web (Undertow)
Affects Versions: 10.0.0.Final
Reporter: Paul Ferraro
Assignee: Paul Ferraro
Priority: Critical
The root cause of the problem is that the distributed web session code optimizes the marshalling of the session identifier by using a URL-safe Base64 codec. Because this marshalling happens transparently, when Cache.get(...) goes remote (since a session ID containing an invalid character will never be found locally), the resulting IllegalArgumentException goes uncaught and propagates back to the client.
To prevent this, we need to validate that the requested session ID can be serialized, and if not, respond as if the session was not found.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2020) Remove demos and tests packages from final artifact
by Mathieu Lachance (JIRA)
Mathieu Lachance created JGRP-2020:
--------------------------------------
Summary: Remove demos and tests packages from final artifact
Key: JGRP-2020
URL: https://issues.jboss.org/browse/JGRP-2020
Project: JGroups
Issue Type: Enhancement
Affects Versions: 3.6.6
Environment: Wildfly 10.0.0.Final
Reporter: Mathieu Lachance
Assignee: Bela Ban
Would it be possible to remove the (I assume) unnecessary demos and tests packages from the final built artifact?
Thanks,
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6274) Please improve deployment logging
by David Lloyd (JIRA)
[ https://issues.jboss.org/browse/WFLY-6274?page=com.atlassian.jira.plugin.... ]
David Lloyd commented on WFLY-6274:
-----------------------------------
Unless someone has a quick fix for this, I think this falls squarely under the deployment revamp work in my queue.
> Please improve deployment logging
> ---------------------------------
>
> Key: WFLY-6274
> URL: https://issues.jboss.org/browse/WFLY-6274
> Project: WildFly
> Issue Type: Feature Request
> Reporter: Rick Wagner
> Assignee: David Lloyd
>
> WildFly attempts to deploy malformed artifacts and does not provide useful feedback to the user. (In fact, it provides feedback that hints that things are OK, when really they are not.)
> As an example, deploying a flawed XML descriptor (targeted at no particular WildFly subsystem) produces this:
> 08:30:37,675 INFO [org.jboss.as.server.deployment] (MSC service thread 1-11) JBAS015876: Starting deployment of "bogus.xml" (runtime-name: "bogus.xml")
> 08:30:38,085 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS015859: Deployed "bogus.xml" (runtime-name : "bogus.xml")
> If the user is attempting to make use of a WildFly subsystem (say, WildFly-Camel) but has an unacceptable descriptor, no useful feedback is provided. The user will think their application is 'deployed' (as logged) when really it has been ignored.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6274) Please improve deployment logging
by David Lloyd (JIRA)
[ https://issues.jboss.org/browse/WFLY-6274?page=com.atlassian.jira.plugin.... ]
David Lloyd reassigned WFLY-6274:
---------------------------------
Assignee: David Lloyd (was: Jason Greene)
> Please improve deployment logging
> ---------------------------------
>
> Key: WFLY-6274
> URL: https://issues.jboss.org/browse/WFLY-6274
> Project: WildFly
> Issue Type: Feature Request
> Reporter: Rick Wagner
> Assignee: David Lloyd
>
> WildFly attempts to deploy malformed artifacts and does not provide useful feedback to the user. (In fact, it provides feedback that hints that things are OK, when really they are not.)
> As an example, deploying a flawed XML descriptor (targeted at no particular WildFly subsystem) produces this:
> 08:30:37,675 INFO [org.jboss.as.server.deployment] (MSC service thread 1-11) JBAS015876: Starting deployment of "bogus.xml" (runtime-name: "bogus.xml")
> 08:30:38,085 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS015859: Deployed "bogus.xml" (runtime-name : "bogus.xml")
> If the user is attempting to make use of a WildFly subsystem (say, WildFly-Camel) but has an unacceptable descriptor, no useful feedback is provided. The user will think their application is 'deployed' (as logged) when really it has been ignored.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-521) Setting the locale to English in CLI commands on non-English systems does not produce English output
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFCORE-521?page=com.atlassian.jira.plugin... ]
RH Bugzilla Integration commented on WFCORE-521:
------------------------------------------------
Vladimir Dosoudil <dosoudil(a)redhat.com> changed the Status of [bug 1128132|https://bugzilla.redhat.com/show_bug.cgi?id=1128132] from MODIFIED to ON_QA
> Setting the locale to English in CLI commands on non-English systems does not produce English output
> ---------------------------------------------------------------------------------------------------
>
> Key: WFCORE-521
> URL: https://issues.jboss.org/browse/WFCORE-521
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Affects Versions: 1.0.0.Alpha15
> Environment: Tested on MacOS running in German
> Reporter: Tom Fonteyne
> Assignee: Romain Pelisse
> Priority: Minor
> Fix For: 1.0.0.Alpha16
>
>
> A German (or French, etc.) system must be used to reproduce.
> It is likely this is not limited to MacOS, but I do not have a non-English Linux system available.
> On an out-of-the-box install of WildFly/EAP, without configuration, the log file is in German, as expected.
> Using these CLI commands:
> :read-operation-description(name=stop-servers,locale=de_DE) -> German
> :read-operation-description(name=stop-servers,locale=en_US) -> German
> :read-operation-description(name=stop-servers,locale=fr_FR) -> French
> So we cannot get the CLI to produce English output.
> When configuring JAVA_OPTS in domain.conf with:
> JAVA_OPTS="$JAVA_OPTS -Duser.language=en -Duser.country=DE -Duser.encoding=utf-8"
> the log is now in English (works as expected), and:
> :read-operation-description(name=stop-servers,locale=de_DE) -> German
> :read-operation-description(name=stop-servers,locale=en_US) -> English
> So it seems we have a bug where the locale set when starting the domain takes precedence over the locale set in the CLI command (but only when English is requested).
> I presume this is because English is the default locale.
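The reporter's hypothesis matches standard java.util.ResourceBundle lookup: when the English text lives in the base bundle, a request for en_US first falls back through the default locale's chain (de_DE, de) before it ever reaches the base bundle, so the German bundle wins. A self-contained illustration (the bundle classes below are made up, not WildFly's real message bundles):

    import java.util.ListResourceBundle;
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class LocaleFallbackDemo {
        // Base bundle: plays the role of the default (English) descriptions.
        public static class Messages extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {{"stop-servers", "Stops all servers"}};
            }
        }
        // German bundle, found via the default-locale fallback.
        public static class Messages_de extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {{"stop-servers", "Stoppt alle Server"}};
            }
        }

        public static void main(String[] args) {
            Locale.setDefault(Locale.GERMANY);  // simulate a German system
            String base = LocaleFallbackDemo.class.getName() + "$Messages";
            // Candidates tried: Messages_en_US, Messages_en, then the default
            // locale's Messages_de_DE, Messages_de (found), so en_US prints German.
            System.out.println(ResourceBundle.getBundle(base, Locale.US)
                    .getString("stop-servers"));
            // Passing ResourceBundle.Control.getNoFallbackControl(
            //     ResourceBundle.Control.FORMAT_DEFAULT) as a third argument
            // skips the default-locale fallback and yields the English base bundle.
        }
    }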
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)