[JBoss JIRA] (JGRP-2470) JDBC_PING can face a split-brain issue when restarting a coordinator node
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/JGRP-2470?page=com.atlassian.jira.plugin... ]
Radoslav Husar updated JGRP-2470:
---------------------------------
Summary: JDBC_PING can face a split-brain issue when restarting a coordinator node (was: JDBC_PING can face a split-brain issue when restarting a cooridnator node)
> JDBC_PING can face a split-brain issue when restarting a coordinator node
> -------------------------------------------------------------------------
>
> Key: JGRP-2470
> URL: https://issues.redhat.com/browse/JGRP-2470
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.1.9, 4.0.22
> Reporter: Masafumi Miura
> Assignee: Radoslav Husar
> Priority: Major
> Fix For: 4.2.4
>
>
> After [the change|https://github.com/belaban/JGroups/commit/215cdb6] for JGRP-2199, JDBC_PING deletes all entries from the table during the shutdown of the coordinator node.
> This behavior can cause a split-brain when restarting a coordinator node: when all entries are lost, as in the following scenario, the restarting node cannot find any information about existing nodes in the table and therefore does not form a cluster with them.
> 0. node1 and node2 form a cluster; node1 is the coordinator.
> 1. A restart of node1 is triggered.
> 2. Node1 removes its own entry from the table.
> 3. Node2 becomes the new coordinator.
> 4. Node2 updates its entry in the table.
> 5. Node1 clears all entries from the table.
> 6. Node1 starts again.
> 7. Node1 does not join the existing cluster because there is no node information in the table.
> Note: If step 5 happens before step 4, the split-brain issue does not occur. However, since steps 4 and 5 run on different nodes, they can execute in parallel, so their order is undefined. For example, if the shutdown of node1 takes a long time, the chance of hitting this issue is high.
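> The race between steps 4 and 5 can be sketched as follows. This is an illustrative Python simulation of the discovery table (all names are hypothetical), not JGroups code:
> ```python
> # Model the JDBC_PING table as a set of node entries and replay the
> # restart scenario with the two possible orderings of steps 4 and 5.
> def restart_scenario(clear_before_update):
>     table = {"node1", "node2"}   # step 0: both nodes registered
>     table.discard("node1")       # step 2: node1 removes its own entry
>     # node2 becomes coordinator (step 3); steps 4 and 5 race:
>     if clear_before_update:
>         table.clear()            # step 5 first: node1 clears the table
>         table.add("node2")       # step 4: node2 re-inserts its entry
>     else:
>         table.add("node2")       # step 4 first: node2 updates its entry
>         table.clear()            # step 5: node1 then wipes it out
>     # steps 6/7: node1 restarts and reads the table
>     return "joins existing cluster" if table else "forms its own cluster (split-brain)"
> 
> print(restart_scenario(clear_before_update=True))   # step 5 before 4: no problem
> print(restart_scenario(clear_before_update=False))  # step 4 before 5: split-brain
> ```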
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (WFWIP-315) XP OpenShift image http management interface secured with no user by default
by Jean Francois Denise (Jira)
[ https://issues.redhat.com/browse/WFWIP-315?page=com.atlassian.jira.plugin... ]
Jean Francois Denise commented on WFWIP-315:
--------------------------------------------
[~mchoma], do you have an extensions directory or a custom config that would be copied to the server during s2i?
Are you provisioning some galleon layers during s2i?
> XP OpenShift image http management interface secured with no user by default
> ----------------------------------------------------------------------------
>
> Key: WFWIP-315
> URL: https://issues.redhat.com/browse/WFWIP-315
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Martin Choma
> Assignee: Jeff Mesnil
> Priority: Critical
> Attachments: standalone-openshift.cd19.xml, standalone-openshift.xp.xml
>
>
> In one test with the XP image [1] I am experiencing a failing readiness probe.
> {code}
> sh-4.2$ python -d /opt/eap/bin/probes/runner.py --debug -c READY --loglevel=DEBUG probe.eap.dmr.EapProbe probe.eap.dmr.HealthCheckProbe
> DEBUG:__main__:Starting probe runner with args: Namespace(check=[<Status.READY: 8>], debug=True, logfile=None, loglevel='DEBUG', probes=['probe.eap.dmr.EapProbe', 'probe.eap.dmr.HealthCheckProbe'])
> INFO:__main__:Loading probe: probe.eap.dmr.EapProbe
> DEBUG:probe.eap.dmr.EapProbe:Configuration set as follows: host=localhost, port=9990, user=eapadmin, password=***
> INFO:__main__:Loading probe: probe.eap.dmr.HealthCheckProbe
> DEBUG:probe.eap.dmr.HealthCheckProbe:Configuration set as follows: host=localhost, port=9990, user=eapadmin, password=***
> INFO:__main__:Probes will fail for the following states: [HARD_FAILURE, FAILURE, NOT_READY]
> INFO:__main__:Running probes
> INFO:__main__.ProbeRunner:Running the following probes: [probe.eap.dmr.EapProbe, probe.eap.dmr.HealthCheckProbe]
> INFO:__main__.ProbeRunner:Running probe: probe.eap.dmr.EapProbe
> INFO:probe.eap.dmr.EapProbe:Executing the following tests: [probe.eap.dmr.ServerStatusTest, probe.eap.dmr.BootErrorsTest, probe.eap.dmr.DeploymentTest]
> INFO:probe.eap.dmr.EapProbe:Sending probe request to http://localhost:9990/management
> DEBUG:probe.eap.dmr.EapProbe:Probe request = {
> "operation": "composite",
> "json.pretty": 1,
> "steps": [
> {
> "operation": "read-attribute",
> "name": "server-state"
> },
> {
> "operation": "read-boot-errors",
> "address": {
> "core-service": "management"
> }
> },
> {
> "operation": "read-attribute",
> "name": "status",
> "address": {
> "deployment": "*"
> }
> }
> ],
> "address": []
> }
> INFO:urllib3.connectionpool:Starting new HTTP connection (1): localhost
> DEBUG:urllib3.connectionpool:"POST /management HTTP/1.1" 403 188
> DEBUG:probe.eap.dmr.EapProbe:Probe response: <Response [403]>
> ERROR:probe.eap.dmr.EapProbe:Unexpected failure sending probe request
> Traceback (most recent call last):
> File "/s2i-output/server/bin/probes/probe/api.py", line 142, in execute
> results = self.sendRequest(request)
> File "/s2i-output/server/bin/probes/probe/dmr.py", line 97, in sendRequest
> self.failUnusableResponse(response, request, url)
> File "/s2i-output/server/bin/probes/probe/dmr.py", line 108, in failUnusableResponse
> unusable = not respDict or not respDict["outcome"] or respDict["outcome"] != "failed" or not respDict["result"]
> KeyError: 'result'
> INFO:__main__.ProbeRunner:Probe probe.eap.dmr.EapProbe returned statuses [FAILURE]
> DEBUG:__main__.ProbeRunner:Probe probe.eap.dmr.EapProbe returned messages "Error sending probe request: 'result'"
> {code}
> Note the {{Response [403]}}, which makes me think this is related to the legacy-security/Elytron switch.
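> Separately from the 403 itself, the {{KeyError: 'result'}} in the traceback happens because the expression in {{failUnusableResponse}} evaluates {{respDict["result"]}} whenever the outcome is "failed". A small Python sketch, reconstructed from the traceback with a hypothetical response body:
> ```python
> # Hypothetical 403 response body: outcome "failed", no "result" key.
> resp = {"outcome": "failed"}
> 
> def unusable_raises(respDict):
>     # The expression from the traceback: short-circuiting reaches
>     # respDict["result"] whenever outcome == "failed", raising KeyError.
>     return (not respDict or not respDict["outcome"]
>             or respDict["outcome"] != "failed" or not respDict["result"])
> 
> def unusable_safe(respDict):
>     # dict.get avoids the KeyError when "result" is missing.
>     return (not respDict or respDict.get("outcome") != "failed"
>             or not respDict.get("result"))
> 
> try:
>     unusable_raises(resp)
> except KeyError as e:
>     print("KeyError:", e)   # matches the KeyError: 'result' in the log
> print(unusable_safe(resp))  # True: response unusable, but no exception
> ```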
> When I look at the CD19 standalone-openshift.xml, I see that the management interface is unsecured by default. Once ADMIN_PASSWORD and ADMIN_USERNAME are applied, it is secured by the legacy security ManagementRealm pointing to mgmt-users.properties.
> In contrast, in XP images it is secured by default via {{management-http-authentication}}, which points to {{mgmt-users.properties}}; that file is empty by default. Once ADMIN_PASSWORD and ADMIN_USERNAME are applied, the file is filled with that user.
> {code}
> <management-interfaces>
> <http-interface http-authentication-factory="management-http-authentication" console-enabled="false">
> <http-upgrade enabled="true" sasl-authentication-factory="management-sasl-authentication"/>
> <socket-binding http="management-http"/>
> </http-interface>
> </management-interfaces>
> {code}
> I think both approaches should be consistent (no matter whether legacy or Elytron), e.g. unsecured by default and secured when ADMIN_PASSWORD and ADMIN_USERNAME are specified (as in the CD19 case).
> [1] docker-registry.upshift.redhat.com/kwills/eap-xp1-openjdk8-openshift-rhel...
[JBoss JIRA] (DROOLS-5061) DMN codegen DMNContext and DMNResult
by Matteo Mortari (Jira)
[ https://issues.redhat.com/browse/DROOLS-5061?page=com.atlassian.jira.plug... ]
Matteo Mortari updated DROOLS-5061:
-----------------------------------
Description:
A codegen facility is required to statically generate strongly typed code for the inputs and outputs of a DMN model, also taking into account the data types defined inside the DMN model as ItemDefinition.
Surface and interface common to:
* BPMN integration: when invoking DMN from BPMN, the result of the evaluation is a Map rather than the domain model
* Kogito/OpenAPI/Swagger: when using the code-generated version, annotations shall describe to the Swagger API the expected input/output types of the REST request for DMN
** for Quarkus-based applications, use the RegisterForReflection annotation
** use MP validation for allowedValues
* DMN Editor: when importing a Java class in the editor, it shall use the user-supplied Java class
Documentation:
* direct programmatic use of the generated classes is DISCOURAGED
* if anything, programmatic use will eventually be stabilized by making use of the interfaces available in kie-dmn-core
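As an illustration of the Map-versus-typed distinction described above, here is a hypothetical sketch in Python (the actual facility generates Java; the class and field names below are invented, not the generated names):
```python
# Today: DMN inputs are passed as an untyped map; a typo in a key
# only surfaces at evaluation time.
loose_context = {"Credit Score": 700, "Monthy Income": 3000}  # typo goes unnoticed

# With codegen: one class per ItemDefinition, so the shape of the
# input is checked up front and tooling can introspect it.
from dataclasses import dataclass

@dataclass
class LoanApplicantContext:   # hypothetical generated class
    credit_score: int
    monthly_income: int

typed_context = LoanApplicantContext(credit_score=700, monthly_income=3000)
print(typed_context.monthly_income)  # 3000
```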
was:
A codegen facility is required to generate statically code strongly typed for the input/output of a DMN model, considering also the data types defined inside the DMN model as ItemDefinition.
Surface and interface common to:
* BPMN integration: when invoking DMN from BPMN, the return of the evaluation is a Map and not the domain model
* Kogito/OpenAPI/Swagger: when using the codegenerated version, annotation shall explain the swagger API the expected input/output type of the REST request for DMN
** for Quarkus based, use RegisterForReflection annotation
** use MP validation for allowedvalues
* DMN Editor: when importing a Java Class in the editor, it shall use the user-supplied Java Class
> DMN codegen DMNContext and DMNResult
> ------------------------------------
>
> Key: DROOLS-5061
> URL: https://issues.redhat.com/browse/DROOLS-5061
> Project: Drools
> Issue Type: Epic
> Components: dmn engine
> Reporter: Matteo Mortari
> Assignee: Luca Molteni
> Priority: Major
>
[JBoss JIRA] (DROOLS-5276) Better error message when a DMN model doesn't have a name
by Toshiya Kobayashi (Jira)
Toshiya Kobayashi created DROOLS-5276:
-----------------------------------------
Summary: Better error message when a DMN model doesn't have a name
Key: DROOLS-5276
URL: https://issues.redhat.com/browse/DROOLS-5276
Project: Drools
Issue Type: Bug
Reporter: Toshiya Kobayashi
Assignee: Mario Fusco
If a DMN model doesn't have a name, Kogito start-up fails with the following exception. We should log a more informative error message.
{noformat}
2020-04-27 18:22:51,063 ERROR [io.qua.dep.dev.DevModeMain] (main) Failed to start Quarkus: java.lang.RuntimeException: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[error]: Build step org.kie.kogito.quarkus.deployment.KogitoAssetsProcessor#generateModel threw an exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.base/java.lang.StringLatin1.charAt(StringLatin1.java:47)
at java.base/java.lang.String.charAt(String.java:693)
at org.drools.core.util.StringUtils.capitalize(StringUtils.java:1292)
at org.kie.kogito.codegen.decision.DMNRestResourceGenerator.<init>(DMNRestResourceGenerator.java:69)
at org.kie.kogito.codegen.decision.DecisionCodegen.generate(DecisionCodegen.java:158)
at org.kie.kogito.codegen.decision.DecisionCodegen.generate(DecisionCodegen.java:59)
at org.kie.kogito.codegen.ApplicationGenerator.lambda$generateComponents$10(ApplicationGenerator.java:226)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.kie.kogito.codegen.ApplicationGenerator.generateComponents(ApplicationGenerator.java:227)
at org.kie.kogito.codegen.ApplicationGenerator.generate(ApplicationGenerator.java:208)
at org.kie.kogito.quarkus.deployment.KogitoAssetsProcessor.generateModel(KogitoAssetsProcessor.java:180)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.quarkus.deployment.ExtensionLoader$2.execute(ExtensionLoader.java:931)
at io.quarkus.builder.BuildContext.run(BuildContext.java:277)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2027)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1551)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1442)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)
{noformat}
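The root cause is visible in the top stack frames: {{StringUtils.capitalize}} indexes character 0 of the empty model name. A minimal Python analogue of the failure, plus a hypothetical guard that would produce the kind of informative message this issue asks for:
```python
def capitalize(s):
    # Analogue of StringUtils.capitalize: indexing char 0 of ""
    # raises IndexError (Java: StringIndexOutOfBoundsException).
    return s[0].upper() + s[1:]

def rest_resource_name(model_name):
    # Hypothetical guard: fail early with a clear message instead of
    # letting capitalize() blow up on an unnamed model.
    if not model_name:
        raise ValueError("DMN model has no name; please set a name on the model")
    return capitalize(model_name) + "Resource"

try:
    capitalize("")
except IndexError as e:
    print("unnamed model ->", type(e).__name__)  # the current, cryptic failure

print(rest_resource_name("loan"))  # LoanResource
```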
[JBoss JIRA] (WFWIP-315) XP OpenShift image http management interface secured with no user by default
by Martin Choma (Jira)
[ https://issues.redhat.com/browse/WFWIP-315?page=com.atlassian.jira.plugin... ]
Martin Choma commented on WFWIP-315:
------------------------------------
I can see what you describe when playing with podman locally, after start (no OpenShift):
{code}
<management-interfaces>
<http-interface console-enabled="false">
<http-upgrade enabled="true" sasl-authentication-factory="management-sasl-authentication"/>
<socket-binding http="management-http"/>
</http-interface>
</management-interfaces>
{code}
But on OpenShift I see this after start.
{code}
...
<security-realm name="ManagementRealm">
<authentication>
<local default-user="$local" skip-group-loading="true"/>
<properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
</authentication>
<authorization map-groups-to-roles="false">
<properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/>
</authorization>
</security-realm>
...
<management-interfaces>
<http-interface security-realm="ManagementRealm" console-enabled="false">
<http-upgrade enabled="true"/>
<socket-binding http="management-http"/>
</http-interface>
</management-interfaces>
{code}
Now I don't understand what is going on. Is it possible the scripts return the legacy configuration?
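One quick way to tell which style a generated configuration ended up with is to inspect the {{http-interface}} attributes. A hedged helper sketch using Python's ElementTree, ignoring XML namespaces for brevity (attribute names are taken from the snippets above):
```python
import xml.etree.ElementTree as ET

def mgmt_http_auth_style(xml_text):
    """Classify the management http-interface as legacy, Elytron, or unsecured."""
    root = ET.fromstring(xml_text)
    http = root.find(".//http-interface") if root.tag != "http-interface" else root
    if http is None:
        return "no http-interface"
    if "security-realm" in http.attrib:
        return "legacy (security-realm=%s)" % http.attrib["security-realm"]
    if "http-authentication-factory" in http.attrib:
        return "elytron (%s)" % http.attrib["http-authentication-factory"]
    return "unsecured"

legacy = """<management-interfaces>
  <http-interface security-realm="ManagementRealm" console-enabled="false"/>
</management-interfaces>"""
print(mgmt_http_auth_style(legacy))  # legacy (security-realm=ManagementRealm)
```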
[JBoss JIRA] (WFWIP-315) XP OpenShift image http management interface secured with no user by default
by Martin Choma (Jira)
[ https://issues.redhat.com/browse/WFWIP-315?page=com.atlassian.jira.plugin... ]
Martin Choma edited comment on WFWIP-315 at 4/27/20 5:18 AM:
-------------------------------------------------------------
[~jdenise]
I can see what you describe when playing with podman locally, after start (no OpenShift):
{code}
<management-interfaces>
<http-interface console-enabled="false">
<http-upgrade enabled="true" sasl-authentication-factory="management-sasl-authentication"/>
<socket-binding http="management-http"/>
</http-interface>
</management-interfaces>
{code}
But on OpenShift I see this after start.
{code}
...
<security-realm name="ManagementRealm">
<authentication>
<local default-user="$local" skip-group-loading="true"/>
<properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
</authentication>
<authorization map-groups-to-roles="false">
<properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/>
</authorization>
</security-realm>
...
<management-interfaces>
<http-interface security-realm="ManagementRealm" console-enabled="false">
<http-upgrade enabled="true"/>
<socket-binding http="management-http"/>
</http-interface>
</management-interfaces>
{code}
Now I don't understand what is going on. Is it possible the scripts return the legacy configuration?