[JBoss JIRA] (WFCORE-1220) Try closing the channel if java.lang.Error prevents sending error responses to the client
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1220?page=com.atlassian.jira.plugi... ]
Brian Stansberry commented on WFCORE-1220:
------------------------------------------
I have a number of "WIP" branches that have been sitting inactive for a long time and I'm linking them to the related issues in case the work on them is useful. For this one:
https://github.com/bstansberry/wildfly-core/tree/WFCORE-1220
> Try closing the channel if java.lang.Error prevents sending error responses to the client
> -----------------------------------------------------------------------------------------
>
> Key: WFCORE-1220
> URL: https://issues.jboss.org/browse/WFCORE-1220
> Project: WildFly Core
> Issue Type: Sub-task
> Components: Management
> Reporter: Brian Stansberry
> Labels: domain-mode
>
> Beyond the basic work on WFCORE-1134, look into how to react further when Errors occur and the server or HC cannot even send an error response to the caller. If we get to this point, the task has already failed to handle a problem and now we can't notify the remote side either. Most likely this is an OOME situation, although any Error here means a major issue.
> Options:
> 1) Try to close the channel to disconnect this process from the remote end so it doesn't disrupt the remote end. If this is an intra-HC or HC-server connection, a successful close will remove this process from normal domain control. If this is a server, the HC still has some control over the server via the ProcessController, e.g. to handle a 'kill' or 'destroy' management op. If this is a slave HC, the slave is disconnected from the domain. Either a server or a slave HC may try to reconnect, although it's likely better if that fails and the user just restarts the process.
> If the remote side is an end user client (e.g. CLI) then a successful close will be noticed by the client. The user can reconnect or take action to terminate this process.
> 2) Commit suicide via SystemExiter.exit. But I'm not certain complete termination of the process is how we want to deal with problems with management requests. Probably some sort of configurable policy would be better.
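A minimal sketch of option 1, with a hook for option 2. The names here ({{ErrorResponseFallback}}, {{FailurePolicy}}, {{handleSendFailure}}) are hypothetical, not the actual WildFly management classes; this is only the rough shape of "close the channel, then defer to a configurable policy":
{code}
// Illustrative only; not WildFly API.
import java.io.Closeable;

final class ErrorResponseFallback {

    /** Hypothetical configurable policy; one implementation could call SystemExiter.exit. */
    interface FailurePolicy {
        void onUnrecoverableError(Error error);
    }

    static void handleSendFailure(Closeable channel, Error error, FailurePolicy policy) {
        try {
            // Option 1: close the channel so the remote end (master HC, or an end-user
            // client such as the CLI) notices the disconnect instead of being disrupted.
            channel.close();
        } catch (Throwable ignored) {
            // Nothing more can safely be done with this channel.
        } finally {
            // Option 2 (or any other configured reaction) is delegated to the policy.
            policy.onUnrecoverableError(error);
        }
    }
}
{code}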
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-3041) [DMN Designer] Can not edit node after diagram cleared
by Michael Anstis (JIRA)
[ https://issues.jboss.org/browse/DROOLS-3041?page=com.atlassian.jira.plugi... ]
Michael Anstis updated DROOLS-3041:
-----------------------------------
Story Points: 2
Sprint: 2018 Week 39-41
> [DMN Designer] Can not edit node after diagram cleared
> ------------------------------------------------------
>
> Key: DROOLS-3041
> URL: https://issues.jboss.org/browse/DROOLS-3041
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.12.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Labels: drools-tools
> Attachments: DMCommunity Challenge - March 2017.dmn, Screenshot from 2018-09-26 13-18-33.png
>
>
> If the user clears the diagram and then adds a new node, they are unable to edit this node.
> h2. Acceptance test
> h3. New diagram
> - Add nodes
> - Clear
> - Add new decision node
> - Put some expression inside
> h3. Non empty existing diagram
> - Clear
> - Add new decision node
> - Put some expression inside
> - Save and reopen
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-3041) [DMN Designer] Can not edit node after diagram cleared
by Michael Anstis (JIRA)
[ https://issues.jboss.org/browse/DROOLS-3041?page=com.atlassian.jira.plugi... ]
Michael Anstis commented on DROOLS-3041:
----------------------------------------
{{DecisionNavigatorObserver.onCanvasClear(..)}} observes the {{CanvasClearEvent}}.
This causes {{DecisionNavigatorPresenter.removeAllElements()}} to be invoked, which (unsurprisingly) removes all elements from the Decision Navigator. Consequently, when {{DecisionNavigatorObserver.onNestedElementAdded(..)}} executes in response to the Expression Type being changed, no active parent node is found by {{DecisionNavigatorTreePresenter.getActiveParent()}}, leading to the NPE.
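A defensive sketch of one possible fix, using hypothetical types and method names rather than the real DecisionNavigator* classes: treat a null active parent (the state left behind by the {{CanvasClearEvent}}) as a signal to rebuild the navigator instead of dereferencing it.
{code}
// Hypothetical shapes only; the real presenters are the DMN Designer's DecisionNavigator* classes.
final class NavigatorGuardSketch {

    interface TreePresenter {
        Object getActiveParent();                   // returns null once the canvas has been cleared
        void rebuildFromDiagram();                  // repopulate the navigator from the diagram
        void addChild(Object parent, Object child);
    }

    private final TreePresenter tree;

    NavigatorGuardSketch(TreePresenter tree) {
        this.tree = tree;
    }

    void onNestedElementAdded(Object nestedElement) {
        Object activeParent = tree.getActiveParent();
        if (activeParent == null) {
            // Canvas was cleared: recover instead of throwing the NPE described above.
            tree.rebuildFromDiagram();
            return;
        }
        tree.addChild(activeParent, nestedElement);
    }
}
{code}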
> [DMN Designer] Can not edit node after diagram cleared
> ------------------------------------------------------
>
> Key: DROOLS-3041
> URL: https://issues.jboss.org/browse/DROOLS-3041
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.12.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Labels: drools-tools
> Attachments: DMCommunity Challenge - March 2017.dmn, Screenshot from 2018-09-26 13-18-33.png
>
>
> If the user clears the diagram and then adds a new node, they are unable to edit this node.
> h2. Acceptance test
> h3. New diagram
> - Add nodes
> - Clear
> - Add new decision node
> - Put some expression inside
> h3. Non empty existing diagram
> - Clear
> - Add new decision node
> - Put some expression inside
> - Save and reopen
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-11089) Uploading content from HAL in SSL doesn't work
by Claudio Miranda (JIRA)
[ https://issues.jboss.org/browse/WFLY-11089?page=com.atlassian.jira.plugin... ]
Claudio Miranda updated WFLY-11089:
-----------------------------------
Attachment: https_webcons.jks
> Uploading content from HAL in SSL doesn't work
> ----------------------------------------------
>
> Key: WFLY-11089
> URL: https://issues.jboss.org/browse/WFLY-11089
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow), Web Console
> Affects Versions: 14.0.0.Final
> Reporter: ehsavoie Hugonnet
> Assignee: Stuart Douglas
> Attachments: application-roles.properties, https_webcons.jks, localhost.har, standalone.xml, truststore.jks, windows2008-1.gsslab.rdu2.redhat.com.jks
>
>
> When uploading a file from the web console to WildFly over an SSL-secured connection, the content is not stored; it is replaced by the Base64 DMR operation.
> Looking at the request, I didn't see the file part of the multipart request: the parsing returns only 1 part, while the request on management-upload seems to have 2 parts (see attached har file).
> It doesn't happen with the jboss-cli (SSL or not), nor in HAL over HTTP.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-11089) Uploading content from HAL in SSL doesn't work
by Claudio Miranda (JIRA)
[ https://issues.jboss.org/browse/WFLY-11089?page=com.atlassian.jira.plugin... ]
Claudio Miranda commented on WFLY-11089:
----------------------------------------
If the truststore is commented out of the authentication section of the ManagementRealm, the upload works.
If the Elytron settings are used instead of the ManagementRealm for HTTPS, it also works.
The http management interface:
{code}
<management-interfaces>
<http-interface ssl-context="sslctx_https_webcons" security-realm="ManagementRealm">
<http-upgrade enabled="true"/>
<socket-binding http="management-http" https="management-https"/>
</http-interface>
</management-interfaces>
{code}
The Elytron settings:
{code}
<tls>
<key-stores>
<key-store name="ks_https_webcons">
<credential-reference clear-text="admin123"/>
<implementation type="JKS"/>
<file path="https_webcons.jks" relative-to="jboss.server.config.dir" />
</key-store>
</key-stores>
<key-managers>
<key-manager name="ekm_https_webcons" algorithm="SunX509" key-store="ks_https_webcons">
<credential-reference clear-text="admin123"/>
</key-manager>
</key-managers>
<server-ssl-contexts>
<server-ssl-context name="sslctx_https_webcons" protocols="TLSv1.2" key-manager="ekm_https_webcons"/>
</server-ssl-contexts>
</tls>
{code}
> Uploading content from HAL in SSL doesn't work
> ----------------------------------------------
>
> Key: WFLY-11089
> URL: https://issues.jboss.org/browse/WFLY-11089
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow), Web Console
> Affects Versions: 14.0.0.Final
> Reporter: ehsavoie Hugonnet
> Assignee: Stuart Douglas
> Attachments: application-roles.properties, https_webcons.jks, localhost.har, standalone.xml, truststore.jks, windows2008-1.gsslab.rdu2.redhat.com.jks
>
>
> When uploading a file from the web console to WildFly over an SSL-secured connection, the content is not stored; it is replaced by the Base64 DMR operation.
> Looking at the request, I didn't see the file part of the multipart request: the parsing returns only 1 part, while the request on management-upload seems to have 2 parts (see attached har file).
> It doesn't happen with the jboss-cli (SSL or not), nor in HAL over HTTP.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2296) DNS_PING is dropping port values with SRV based service discovery
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2296?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2296:
--------------------------------
[~ethompson] Pull from master
> DNS_PING is dropping port values with SRV based service discovery
> -----------------------------------------------------------------
>
> Key: JGRP-2296
> URL: https://issues.jboss.org/browse/JGRP-2296
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.11
> Environment: JGroups version 4.0.11.Final
> Used in Keycloak 4.4.0
> Deployed as Jboss based Docker container from jboss/keycloak into AWS ECS
> Reporter: Eric Thompson
> Assignee: Bela Ban
> Priority: Blocker
> Fix For: 4.0.16
>
>
> Using DNS_PING in JGroups 4.0.11 with SRV records, the port from the SRV record is being dropped (set to zero) and the default (7600) is used instead.
> I am using this Jgroups config:
> {code}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
> <channels default="ee">
> <channel name="ee" stack="tcp" cluster="ejb"/>
> </channels>
> <stacks>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp">
> <property name="external_addr">${env.EXTERNAL_ADDR}</property>
> </transport>
> <protocol type="dns.DNS_PING">
> <property name="dns_query">
> jgroups.${env.DNS_NAME}.svc.cluster.local
> </property>
> <property name="dns_record_type">
> SRV
> </property>
> </protocol>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG3"/>
> </stack>
> </stacks>
> </subsystem>
> {code}
> I have these service discovery DNS entries
> {code}
> $ dig jgroups.dev.auth.example.com.svc.cluster.local SRV
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.58.amzn1 <<>> jgroups.dev.auth.example.com.svc.cluster.local SRV
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16690
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0
> ;; QUESTION SECTION:
> ;jgroups.dev.auth.example.com.svc.cluster.local. IN SRV
> ;; ANSWER SECTION:
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32921 9ec82e3f-3a0e-4e30-b785-17879c63cd7d.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32923 60b5a820-9678-4bd2-84c6-00061a52bde0.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32915 9d9d78d0-8919-4b91-9df8-2e4e65afedae.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32917 161f3d66-f1e3-46f4-a44f-ebda925a25c6.jgroups.dev.auth.example.com.svc.cluster.local.
> ;; Query time: 2 msec
> ;; SERVER: 10.42.3.2#53(10.42.3.2)
> ;; WHEN: Fri Sep 21 01:45:44 2018
> ;; MSG SIZE rcvd: 481
> {code}
> But I get this in the logs when running Keycloak in standalone cluster:
> {code}
> 17:45:10,121 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Performing initial discovery
> 17:45:10,154 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Entries collected from DNS: [10.42.3.56:0, 10.42.3.56:0, 10.42.3.44:0, 10.42.3.44:0]
> 17:45:10,155 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.56:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.56:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.44:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.44:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Performing discovery of the following hosts [10.42.3.56:7600, 10.42.3.44:7600, e200a617bf7a]
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to 10.42.3.56:7600
> 17:45:10,160 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to 10.42.3.44:7600
> 17:45:10,160 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-10,ejb,e200a617bf7a) Received discovery from: e200a617bf7a, IP: 10.42.3.44:7600
> 17:45:10,161 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to e200a617bf7a
> 17:45:10,162 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-11,ejb,e200a617bf7a) Received discovery from: e200a617bf7a, IP: 10.42.3.44:7600
> {code}
> As you can see, it is resolving the DNS addresses but discarding the ports.
> To be clear, in this example 32923 is the port (e.g.:
> 1 1 32923 60b5a820-9678-4bd2-84c6-00061a52bde0.jgroups.dev.auth.example.com.svc.cluster.local).
> These are dynamic ports mapped to port 7600 in order to put more Keycloak containers on each instance.
> {code}
> $ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
> f67e39f8f403 datadog/agent:latest-jmx "/init" 8 hours ago Up 8 hours (healthy) 8125/udp, 8126/tcp ecs-auth-service-dev-26-datadog-agent-a2b7f783ddd0ba9cf601
> bbb12f0c43a5 233747045000.dkr.ecr.us-east-2.amazonaws.com/ops/keycloak:latest "/opt/jboss/tools/do…" 8 hours ago Up 8 hours 0.0.0.0:32923->7600/tcp, 0.0.0.0:32922->8080/tcp ecs-auth-service-dev-26-keycloak-f4bd8f8dca9fd4cd4f00
> 932cad7c4fb9 datadog/agent:latest-jmx "/init" 8 hours ago Up 8 hours (healthy) 8125/udp, 8126/tcp ecs-auth-service-dev-26-datadog-agent-baa38a98ccaddea6f501
> e200a617bf7a 233747045000.dkr.ecr.us-east-2.amazonaws.com/ops/keycloak:latest "/opt/jboss/tools/do…" 8 hours ago Up 8 hours 0.0.0.0:32921->7600/tcp, 0.0.0.0:32920->8080/tcp ecs-auth-service-dev-26-keycloak-e6f398e6cc8db5b5f101
> 73bc0b863c73 amazon/amazon-ecs-agent:latest "/agent" 2 days ago Up 2 days ecs-agent
> {code}
> This seems like it might be where ports are getting lost:
> https://github.com/belaban/JGroups/blob/07060c3ba6e52ad4aad3ac799c2bc95ff...
> I don't see the port number being extracted from the SRV entry and appended to the IP returned from resolveAEntries.
> Let me know if I am missing any details. This is a major blocker for development.
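For reference, a standalone JNDI-DNS sketch (not the actual DNS_PING implementation) of what SRV-based discovery needs to do: an SRV answer has the form {{priority weight port target}}, so the port in the third field has to be paired with the resolved target address rather than being replaced with 0.
{code}
// Plain JNDI DNS lookup, illustrative only; DNS_PING uses its own resolver classes.
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class SrvLookupSketch {

    public static List<InetSocketAddress> resolveSrv(String query) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        Attributes attrs = new InitialDirContext(env).getAttributes(query, new String[]{"SRV"});
        Attribute srv = attrs.get("SRV");
        List<InetSocketAddress> members = new ArrayList<>();
        for (int i = 0; srv != null && i < srv.size(); i++) {
            // Each record looks like "1 1 32923 60b5a820-....svc.cluster.local."
            String[] fields = ((String) srv.get(i)).trim().split("\\s+");
            int port = Integer.parseInt(fields[2]);                     // keep the SRV port, e.g. 32923
            for (InetAddress addr : InetAddress.getAllByName(fields[3])) {
                members.add(new InetSocketAddress(addr, port));         // pair the address with that port
            }
        }
        return members;
    }

    public static void main(String[] args) throws Exception {
        resolveSrv("jgroups.dev.auth.example.com.svc.cluster.local").forEach(System.out::println);
    }
}
{code}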
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2296) DNS_PING is dropping port values with SRV based service discovery
by Eric Thompson (JIRA)
[ https://issues.jboss.org/browse/JGRP-2296?page=com.atlassian.jira.plugin.... ]
Eric Thompson commented on JGRP-2296:
-------------------------------------
[~belaban] Should I pull from master or is there another branch/PR?
> DNS_PING is dropping port values with SRV based service discovery
> -----------------------------------------------------------------
>
> Key: JGRP-2296
> URL: https://issues.jboss.org/browse/JGRP-2296
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.11
> Environment: JGroups version 4.0.11.Final
> Used in Keycloak 4.4.0
> Deployed as Jboss based Docker container from jboss/keycloak into AWS ECS
> Reporter: Eric Thompson
> Assignee: Bela Ban
> Priority: Blocker
> Fix For: 4.0.16
>
>
> Using DNS_PING in JGroups 4.0.11 with SRV records, the port from the SRV record is being dropped (set to zero) and the default (7600) is used instead.
> I am using this Jgroups config:
> {code}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
> <channels default="ee">
> <channel name="ee" stack="tcp" cluster="ejb"/>
> </channels>
> <stacks>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp">
> <property name="external_addr">${env.EXTERNAL_ADDR}</property>
> </transport>
> <protocol type="dns.DNS_PING">
> <property name="dns_query">
> jgroups.${env.DNS_NAME}.svc.cluster.local
> </property>
> <property name="dns_record_type">
> SRV
> </property>
> </protocol>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG3"/>
> </stack>
> </stacks>
> </subsystem>
> {code}
> I have these service discovery DNS entries
> {code}
> $ dig jgroups.dev.auth.example.com.svc.cluster.local SRV
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.58.amzn1 <<>> jgroups.dev.auth.example.com.svc.cluster.local SRV
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16690
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0
> ;; QUESTION SECTION:
> ;jgroups.dev.auth.example.com.svc.cluster.local. IN SRV
> ;; ANSWER SECTION:
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32921 9ec82e3f-3a0e-4e30-b785-17879c63cd7d.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32923 60b5a820-9678-4bd2-84c6-00061a52bde0.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32915 9d9d78d0-8919-4b91-9df8-2e4e65afedae.jgroups.dev.auth.example.com.svc.cluster.local.
> jgroups.dev.auth.example.com.svc.cluster.local. 10 IN SRV 1 1 32917 161f3d66-f1e3-46f4-a44f-ebda925a25c6.jgroups.dev.auth.example.com.svc.cluster.local.
> ;; Query time: 2 msec
> ;; SERVER: 10.42.3.2#53(10.42.3.2)
> ;; WHEN: Fri Sep 21 01:45:44 2018
> ;; MSG SIZE rcvd: 481
> {code}
> But I get this in the logs when running Keycloak in standalone cluster:
> {code}
> 17:45:10,121 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Performing initial discovery
> 17:45:10,154 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Entries collected from DNS: [10.42.3.56:0, 10.42.3.56:0, 10.42.3.44:0, 10.42.3.44:0]
> 17:45:10,155 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.56:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.56:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.44:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Discovered IP Address with port 0 (10.42.3.44:0). Replacing with default Transport port: 7600
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) Performing discovery of the following hosts [10.42.3.56:7600, 10.42.3.44:7600, e200a617bf7a]
> 17:45:10,159 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to 10.42.3.56:7600
> 17:45:10,160 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to 10.42.3.44:7600
> 17:45:10,160 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-10,ejb,e200a617bf7a) Received discovery from: e200a617bf7a, IP: 10.42.3.44:7600
> 17:45:10,161 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-3,null,null) e200a617bf7a: sending discovery request to e200a617bf7a
> 17:45:10,162 DEBUG [org.jgroups.protocols.dns.DNS_PING] (thread-11,ejb,e200a617bf7a) Received discovery from: e200a617bf7a, IP: 10.42.3.44:7600
> {code}
> As you can see, it is resolving the DNS addresses but discarding the ports.
> To be clear, in this example 32923 is the port (e.g.:
> 1 1 32923 60b5a820-9678-4bd2-84c6-00061a52bde0.jgroups.dev.auth.example.com.svc.cluster.local).
> These are dynamic ports mapped to port 7600 in order to put more Keycloak containers on each instance.
> {code}
> $ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
> f67e39f8f403 datadog/agent:latest-jmx "/init" 8 hours ago Up 8 hours (healthy) 8125/udp, 8126/tcp ecs-auth-service-dev-26-datadog-agent-a2b7f783ddd0ba9cf601
> bbb12f0c43a5 233747045000.dkr.ecr.us-east-2.amazonaws.com/ops/keycloak:latest "/opt/jboss/tools/do…" 8 hours ago Up 8 hours 0.0.0.0:32923->7600/tcp, 0.0.0.0:32922->8080/tcp ecs-auth-service-dev-26-keycloak-f4bd8f8dca9fd4cd4f00
> 932cad7c4fb9 datadog/agent:latest-jmx "/init" 8 hours ago Up 8 hours (healthy) 8125/udp, 8126/tcp ecs-auth-service-dev-26-datadog-agent-baa38a98ccaddea6f501
> e200a617bf7a 233747045000.dkr.ecr.us-east-2.amazonaws.com/ops/keycloak:latest "/opt/jboss/tools/do…" 8 hours ago Up 8 hours 0.0.0.0:32921->7600/tcp, 0.0.0.0:32920->8080/tcp ecs-auth-service-dev-26-keycloak-e6f398e6cc8db5b5f101
> 73bc0b863c73 amazon/amazon-ecs-agent:latest "/agent" 2 days ago Up 2 days ecs-agent
> {code}
> This seems like it might be where ports are getting lost:
> https://github.com/belaban/JGroups/blob/07060c3ba6e52ad4aad3ac799c2bc95ff...
> I don't see the port number being extracted from the SRV entry and appended to the IP returned from resolveAEntries.
> Let me know if I am missing any details. This is a major blocker for development.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFCORE-4116) JDK 10: illegal reflective access by ClassReflectionIndex
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-4116?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFCORE-4116:
-------------------------------------
Summary: JDK 10: illegal reflective access by ClassReflectionIndex (was: JDK 10: illegal reflective access)
> JDK 10: illegal reflective access by ClassReflectionIndex
> ---------------------------------------------------------
>
> Key: WFCORE-4116
> URL: https://issues.jboss.org/browse/WFCORE-4116
> Project: WildFly Core
> Issue Type: Bug
> Components: Server
> Affects Versions: 6.0.0.Final, 6.0.2.Final
> Reporter: Thorsten Moeller
> Assignee: Jason Greene
>
> Using WF14 and Oracle JDK 10.0.2, there is an illegal reflective access warning while WF boots:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.jboss.as.server.deployment.reflect.ClassReflectionIndex (jar:file:/Users/.../wildfly-14.0.0.Final/modules/system/layers/base/org/jboss/as/server/main/wildfly-server-6.0.1.Final.jar!/) to method java.lang.Object.finalize()
> WARNING: Please consider reporting this to the maintainers of org.jboss.as.server.deployment.reflect.ClassReflectionIndex
> {code}
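For context, this class of warning comes from deep reflection on a member of a java.base package that is exported but not opened. A minimal standalone reproduction of the same warning (not WildFly's actual code) is calling setAccessible(true) on java.lang.Object.finalize() from the unnamed module:
{code}
// Reproduces the same kind of JDK 9/10 warning; illustrative only.
import java.lang.reflect.Method;

public class IllegalReflectiveAccessDemo {

    public static void main(String[] args) throws Exception {
        // Object.finalize() is protected, so invoking it reflectively requires setAccessible(true).
        Method finalize = Object.class.getDeclaredMethod("finalize");
        // Under the default --illegal-access=permit, the first such call prints the
        // "illegal reflective access" warning unless java.base/java.lang is opened to
        // this module, e.g. via --add-opens java.base/java.lang=ALL-UNNAMED.
        finalize.setAccessible(true);
        System.out.println("finalize() made accessible: " + finalize.canAccess(new Object()));
    }
}
{code}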
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-10531) Wildfly leaks ActiveMQ connections
by gunter zeilinger (JIRA)
[ https://issues.jboss.org/browse/WFLY-10531?page=com.atlassian.jira.plugin... ]
gunter zeilinger commented on WFLY-10531:
-----------------------------------------
Is there any progress on solving this issue? For us, it's a blocker for upgrading from WildFly 12!
> Wildfly leaks ActiveMQ connections
> ----------------------------------
>
> Key: WFLY-10531
> URL: https://issues.jboss.org/browse/WFLY-10531
> Project: WildFly
> Issue Type: Bug
> Affects Versions: 13.0.0.Final
> Environment: openjdk 8 / openjdk 9, Linux
> Reporter: Marcel Šebek
> Assignee: Jeff Mesnil
> Attachments: WFLY-10531-ear-1.0.ear, WFLY10531.zip
>
>
> After upgrading our application from WildFly 12 to 13, the app started to crash after a while (hours or days, depending on circumstances). It crashes on
> IJ000453: Unable to get managed connection for java:/JmsXA
> and other errors (it simply cannot perform all the jobs it contains). I found that when shutting down a server that has been running for a while, I can see a bunch of these messages in the log:
> WARN [org.jboss.jca.core.connectionmanager.pool.strategy.PoolByCri] (ServerService Thread Pool -- 117) [:::] IJ000615: Destroying active connection in pool: ActiveMQConnectionDefinition (org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection@2f37f69)
> Basically, the longer the server was running, the more of these messages are shown. I cannot find a way to reproduce the issue. When the server runs for a short time but with some load, no connection is leaked (or just one, rarely). On the other hand, it leaks connections even without any particularly high load (just a few requests and @Schedule jobs) when running for a longer time.
> It may also be a bug in our application, which just happens to have a more serious impact with the new WildFly version.
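One common application-side pattern that produces exactly these symptoms (hypothetical code, not taken from the reporter's application) is a JMS connection obtained from the pooled java:/JmsXA factory that is never closed: each call consumes a slot in the JCA pool until IJ000453 appears, and the leaked connections are destroyed on shutdown with IJ000615. Try-with-resources avoids it:
{code}
// Illustrative EJB snippet, not the reporter's code.
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class JmsSenderSketch {

    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    // Leaky variant: the pooled connection is never returned, so each call consumes
    // one pool slot until IJ000453 ("Unable to get managed connection") shows up.
    public void sendLeaky(Queue queue, String text) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createProducer(queue).send(session.createTextMessage(text));
        // BUG: connection.close() is missing.
    }

    // Safe variant: JMS 2.0 Connection and Session are AutoCloseable, so the connection
    // goes back to the pool even if send() throws.
    public void sendSafely(Queue queue, String text) throws JMSException {
        try (Connection connection = connectionFactory.createConnection();
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            session.createProducer(queue).send(session.createTextMessage(text));
        }
    }
}
{code}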
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)