[JBoss JIRA] (JGRP-2086) FD_SOCK keeps trying to create a new socket to the killed server
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2086?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2086:
--------------------------------
OK, these steps reproduce the issue:
* The view is {A,B,C,D,E,F}
* There's a network partition creating subclusters {A,B,C} and {D,E,F}
* As a result, the ABC side removes information about D,E,F from its cache ({{FD_SOCK.cache}}), and vice versa
* The partition heals and cluster \{A,B,C,D,E,F\} forms in a MergeView
* However, right after the merge, D, E and F crash
* C tries to get the IP address and port of D (in order to connect to it), but because neither D, E, nor F can provide this information, C remains in an endless loop until FD_ALL kicks in
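The pinger never gives up while the unreachable member is still in the view: it keeps asking the cluster for the suspect's ping address once a second. A minimal sketch of the bounded-retry behaviour that would break the loop (hypothetical names, not the actual FD_SOCK code):

{code:java}
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.Optional;

// Illustrative only: stop re-fetching a member's ping address after a few
// attempts and let the caller suspect it, instead of retrying forever.
final class PingAddressFetcher {
    private static final int MAX_FETCH_ATTEMPTS = 3;
    private static final long RETRY_INTERVAL_MS = 1000; // matches the 1s "retrying" cadence in the log

    private final Map<String, InetSocketAddress> cache; // FD_SOCK-style address cache

    PingAddressFetcher(Map<String, InetSocketAddress> cache) {
        this.cache = cache;
    }

    Optional<InetSocketAddress> fetch(String member) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_FETCH_ATTEMPTS; attempt++) {
            // A real implementation would also ask the other members / the coordinator here.
            InetSocketAddress addr = cache.get(member);
            if (addr != null)
                return Optional.of(addr);
            Thread.sleep(RETRY_INTERVAL_MS);
        }
        return Optional.empty(); // caller should now suspect the member rather than loop
    }
}
{code}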
> FD_SOCK keeps trying to create a new socket to the killed server
> ----------------------------------------------------------------
>
> Key: JGRP-2086
> URL: https://issues.jboss.org/browse/JGRP-2086
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.3
> Environment: JDG 6.6.0 (jgroups-3.6.3.Final-redhat-4.jar)
> Reporter: Osamu Nagano
> Assignee: Bela Ban
> Fix For: 3.6.11, 4.0
>
>
> In most cases FD_SOCK can detect a killed server immediately. But for an unknown reason, FD_SOCK keeps trying to create a new socket to the killed server. As a consequence, installing a new cluster view is delayed until FD_ALL is triggered.
> m04_n007_server.log shows the behaviour. There are 28 nodes (4 machines (m03, ..., m06) with 7 nodes (n001, ..., n007) on each), and all nodes on m03 are killed at the same time at 15:07:34,543. FD_SOCK keeps trying to connect to a killed node, saying "socket address for m03_n001/clustered could not be fetched, retrying".
> {noformat}
> [n007] 15:07:39,543 TRACE [org.jgroups.protocols.FD_SOCK] (Timer-8,shared=udp) m04_n007/clustered: broadcasting SUSPECT message (suspected_mbrs=[m03_n005/clustered, m03_n007/clustered])
> [n007] 15:07:39,544 TRACE [org.jgroups.protocols.FD_SOCK] (INT-20,shared=udp) m04_n007/clustered: received SUSPECT message from m04_n007/clustered: suspects=[m03_n005/clustered, m03_n007/clustered]
> [n007] 15:07:39,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:40,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:41,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:42,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:43,547 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> ...
> [n007] 15:10:53,700 DEBUG [org.jgroups.protocols.FD_ALL] (Timer-26,shared=udp) haven't received a heartbeat from m03_n005/clustered for 200059 ms, adding it to suspect list
> {noformat}
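> Note the timeline: the m03 nodes are killed at 15:07:34,543 and FD_SOCK starts retrying within seconds, yet the new view has to wait for FD_ALL, which only suspects m03_n005 at 15:10:53,700, once its heartbeat timeout (about 200000 ms here, matching the logged 200059 ms) expires, i.e. more than three minutes later.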
> From the TRACE log, you can see that FD_SOCK's address cache has only 23 members.
> {noformat}
> [n007] 14:40:50,471 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: got cache from m03_n005/clustered: cache is {
> m04_n006/clustered=172.20.66.34:9945,
> m05_n005/clustered=172.20.66.35:9938,
> m06_n004/clustered=172.20.66.36:9931,
> m03_n007/clustered=172.20.66.33:9952,
> m05_n001/clustered=172.20.66.35:9910,
> m06_n005/clustered=172.20.66.36:9938,
> m05_n006/clustered=172.20.66.35:9945,
> m03_n005/clustered=172.20.66.33:9938,
> m05_n004/clustered=172.20.66.35:9931,
> m04_n003/clustered=172.20.66.34:9924,
> m04_n007/clustered=172.20.66.34:9952,
> m05_n002/clustered=172.20.66.35:9917,
> m05_n003/clustered=172.20.66.35:9924,
> m04_n004/clustered=172.20.66.34:9931,
> m06_n001/clustered=172.20.66.36:9910,
> m06_n007/clustered=172.20.66.36:9952,
> m04_n005/clustered=172.20.66.34:9938,
> m04_n001/clustered=172.20.66.34:9910,
> m05_n007/clustered=172.20.66.35:9952,
> m06_n002/clustered=172.20.66.36:9917,
> m06_n006/clustered=172.20.66.36:9945,
> m04_n002/clustered=172.20.66.34:9917,
> m06_n003/clustered=172.20.66.36:9924}
> {noformat}
> Meanwhile, pingable_mbrs has all 28 members, taken from the currently available cluster view.
> {noformat}
> [n007] 14:40:50,472 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n005/clustered, pingable_mbrs=[
> m03_n005/clustered,
> m03_n007/clustered,
> m03_n001/clustered,
> m03_n002/clustered,
> m03_n003/clustered,
> m03_n004/clustered,
> m03_n006/clustered,
> m06_n001/clustered,
> m06_n002/clustered,
> m06_n003/clustered,
> m06_n004/clustered,
> m06_n005/clustered,
> m06_n006/clustered,
> m06_n007/clustered,
> m05_n001/clustered,
> m05_n002/clustered,
> m05_n003/clustered,
> m05_n004/clustered,
> m05_n005/clustered,
> m05_n006/clustered,
> m05_n007/clustered,
> m04_n001/clustered,
> m04_n002/clustered,
> m04_n003/clustered,
> m04_n004/clustered,
> m04_n005/clustered,
> m04_n006/clustered,
> m04_n007/clustered]
> {noformat}
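> Comparing the two lists: the cache is missing exactly the five m03 members other than m03_n005 and m03_n007 (23 + 5 = 28), and ping_dest m03_n001 is one of them, so its socket address can never be fetched. The consistency check this suggests is a simple set difference (illustrative sketch, not JGroups code):
> {code:java}
> import java.util.Arrays;
> import java.util.LinkedHashSet;
> import java.util.Set;
> 
> // Illustrative only: find view members that have no FD_SOCK cache entry;
> // any such member picked as ping_dest can never be connected to.
> public final class CacheViewDiff {
>     static Set<String> missingFromCache(Set<String> view, Set<String> cache) {
>         Set<String> missing = new LinkedHashSet<>(view);
>         missing.removeAll(cache);
>         return missing;
>     }
> 
>     public static void main(String[] args) {
>         Set<String> view = new LinkedHashSet<>(Arrays.asList(
>                 "m03_n001", "m03_n005", "m03_n007", "m04_n007"));
>         Set<String> cache = new LinkedHashSet<>(Arrays.asList(
>                 "m03_n005", "m03_n007", "m04_n007"));
>         System.out.println(missingFromCache(view, cache)); // prints [m03_n001]
>     }
> }
> {code}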
[JBoss JIRA] (WFLY-7133) Automatically define resource adapter for deployed RARs
by Guillermo González de Agüero (JIRA)
Guillermo González de Agüero created WFLY-7133:
--------------------------------------------------
Summary: Automatically define resource adapter for deployed RARs
Key: WFLY-7133
URL: https://issues.jboss.org/browse/WFLY-7133
Project: WildFly
Issue Type: Feature Request
Components: JCA
Affects Versions: 10.1.0.Final
Reporter: Guillermo González de Agüero
Assignee: Jesper Pedersen
Deploying a RAR file enables the resource adapter for the server, but in order to configure it, you must manually add it to the Resource Adapters subsystem. This is error prone (you can enter an invalid "archive name" for the resource adapter and it doesn't complain) and not very user friendly.
From a user perspective, and from what can be read in the documentation (https://docs.jboss.org/author/display/WFLY10/Resource+adapters), it seems like the resource adapter is not enabled unless you explicitly define it.
It would be much simpler if RA deployments were detected and registered automatically, as is done for deployed JDBC drivers.
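For comparison, the manual step this asks to automate looks roughly like the following management operation (a sketch using the standard ModelControllerClient API; the archive name "my-adapter.rar" is a made-up example). Note that the add succeeds even when the archive name matches no deployment, which is the "doesn't complain" problem described above:

{code:java}
import java.net.InetAddress;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

// Sketch: manually register a deployed RAR in the resource-adapters subsystem,
// i.e. the step this feature request wants to happen automatically.
public class AddResourceAdapter {
    public static void main(String[] args) throws Exception {
        try (ModelControllerClient client =
                ModelControllerClient.Factory.create(InetAddress.getByName("localhost"), 9990)) {
            ModelNode op = new ModelNode();
            op.get("operation").set("add");
            op.get("address").add("subsystem", "resource-adapters");
            op.get("address").add("resource-adapter", "my-adapter.rar"); // hypothetical name
            op.get("archive").set("my-adapter.rar"); // must match the deployment name exactly
            System.out.println(client.execute(op).get("outcome"));
        }
    }
}
{code}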
[JBoss JIRA] (WFLY-7131) Wildfly 10.0.0.Final Messaging System issue
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-7131?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-7131:
-----------------------------------
Summary: Wildfly 10.0.0.Final Messaging System issue (was: Wildfly 10.0.Final Messaging System issue)
> Wildfly 10.0.0.Final Messaging System issue
> -------------------------------------------
>
> Key: WFLY-7131
> URL: https://issues.jboss.org/browse/WFLY-7131
> Project: WildFly
> Issue Type: Component Upgrade
> Components: JMS
> Affects Versions: 10.0.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Jeff Mesnil
> Priority: Critical
> Attachments: standalone-full-WL-10.0.Final.xml, standalone-full-WL-8.2.xml, standalone-full.xml
>
>
> WildFly 8.2 was based on the HornetQ JMS message broker, while WildFly 10 uses the ActiveMQ Artemis JMS message broker.
> The HornetQ code base was donated to the Apache ActiveMQ community late last year and now resides as a subproject under the ActiveMQ umbrella, named 'Artemis': http://activemq.apache.org/artemis
> http://hornetq.blogspot.in/2015/06/hornetq-apache-donation-and-apache.html
> The issue we are facing is that after we upgraded our application from WildFly 8.2 to WildFly 10.0.0.Final, messages are not consistently getting into the queue. Below is how we configured the messaging subsystem in standalone-full.xml:
> Let me know if this is good. I am also attaching the entire standalone-full.xml with this case.
> <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
>     <server name="default">
>         <security enabled="false"/>
>         <statistics enabled="true"/>
>         <security-setting name="#">
>             <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
>             <role name="jmsrole" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
>         </security-setting>
>         <address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
>         <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="remote-http"/>
>         <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http"/>
>         <in-vm-connector name="in-vm" server-id="0"/>
>         <http-acceptor name="http-acceptor" http-listener="default"/>
>         <http-acceptor name="http-acceptor-throughput" http-listener="default">
>             <param name="batch-delay" value="50"/>
>             <param name="direct-deliver" value="false"/>
>         </http-acceptor>
>         <in-vm-acceptor name="in-vm" server-id="0"/>
>         <jms-queue name="testQueue" entries="queue/test jboss/exported/jms/queue/test"/>
>         <jms-queue name="ISEEOutboundQueue" entries="/ISEEOutboundQueue java:jboss/exported/jms/queue/ISEEOutboundQueue"/>
>         <jms-queue name="ISEEInboundQueue" entries="/ISEEInboundQueue java:jboss/exported/jms/queue/ISEEInboundQueue"/>
>         <jms-queue name="BEEEAuthorizationsQueue" entries="/BEEEAuthorizationsQueue java:jboss/exported/jms/queue/BEEEAuthorizationsQueue"/>
>         <jms-queue name="BEEERequisitionsQueue" entries="/BEEERequisitionsQueue java:jboss/exported/jms/queue/BEEERequisitionsQueue"/>
>         <jms-queue name="BEEEInboundQueue" entries="/BEEEInboundQueue java:jboss/exported/jms/queue/BEEEInboundQueue"/>
>         <jms-topic name="testTopic" entries="topic/test java:jboss/exported/jms/topic/test"/>
>         <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
>         <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory /RemoteConnectionFactory" connectors="http-connector"/>
>         <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>
>     </server>
> </subsystem>
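> To rule out lookup problems on the client side, a minimal remote JMS 2.0 client against this configuration would look roughly like this (a sketch; host and port are placeholders, the queue is reached via its java:jboss/exported entry, and no credentials are passed because security is disabled above):
> {code:java}
> import java.util.Properties;
> 
> import javax.jms.ConnectionFactory;
> import javax.jms.JMSContext;
> import javax.jms.Queue;
> import javax.naming.Context;
> import javax.naming.InitialContext;
> 
> // Sketch: send one message to ISEEOutboundQueue through the exported
> // RemoteConnectionFactory, to check whether messages reach the queue at all.
> public class QueueSmokeTest {
>     public static void main(String[] args) throws Exception {
>         Properties env = new Properties();
>         env.put(Context.INITIAL_CONTEXT_FACTORY,
>                 "org.jboss.naming.remote.client.InitialContextFactory");
>         env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080"); // placeholder host/port
>         Context ctx = new InitialContext(env);
>         ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
>         Queue queue = (Queue) ctx.lookup("jms/queue/ISEEOutboundQueue");
>         try (JMSContext jms = cf.createContext()) {
>             jms.createProducer().send(queue, "smoke test");
>             System.out.println("sent");
>         }
>         ctx.close();
>     }
> }
> {code}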
[JBoss JIRA] (WFLY-7131) Wildfly 10.0.Final Messaging System issue
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-7131?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-7131:
-----------------------------------
Issue Type: Component Upgrade (was: Bug)
[JBoss JIRA] (WFLY-7131) Wildfly 10.0.Final Messaging System issue
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-7131?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-7131:
-----------------------------------
Priority: Critical (was: Blocker)
[JBoss JIRA] (WFCORE-234) Inconsistent synchronization in ConfigurationFile
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-234?page=com.atlassian.jira.plugin... ]
Brian Stansberry updated WFCORE-234:
------------------------------------
Fix Version/s: (was: 3.0.0.Alpha9)
> Inconsistent synchronization in ConfigurationFile
> -------------------------------------------------
>
> Key: WFCORE-234
> URL: https://issues.jboss.org/browse/WFCORE-234
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Affects Versions: 1.0.0.Alpha11
> Reporter: Brian Stansberry
>
> ConfigurationFile synchronizes on itself in some places and not in others. This may cause problems, particularly with the history dir.
> The case that comes to mind is that successfulBoot is synchronized, but the methods called by ConfigurationFilePersistenceResource are not. The latter are called with the controller lock held, but successfulBoot is not, so two threads can interact with the files concurrently if an operation executes immediately after boot.
> The deployment scanner schedules such an op, so this is possible; currently the op is scheduled for 200 ms after the deployment-scanner add runs during boot.
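> Schematically, the hazard is a partially synchronized class; both paths mutate the same history-dir state, but only one holds the monitor (hypothetical bodies, not the real ConfigurationFile code):
> {code:java}
> // Illustrative only: one path synchronized, the other not, both touching
> // the same state, as described for successfulBoot vs. the persistence resource.
> final class ConfigFileLike {
>     private int snapshotCount; // stand-in for history-dir state
> 
>     // Called at the end of boot, without the controller lock: synchronized.
>     synchronized void successfulBoot() {
>         snapshotCount++;
>     }
> 
>     // Called with the controller lock held, but NOT synchronized on this
>     // object, so it can interleave with successfulBoot(): a data race.
>     void commit() {
>         snapshotCount++;
>     }
> }
> {code}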
[JBoss JIRA] (WFCORE-301) Configuration of individual contexts for http management interface.
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-301?page=com.atlassian.jira.plugin... ]
Brian Stansberry commented on WFCORE-301:
-----------------------------------------
What is the status of this?
> Configuration of individual contexts for http management interface.
> -------------------------------------------------------------------
>
> Key: WFCORE-301
> URL: https://issues.jboss.org/browse/WFCORE-301
> Project: WildFly Core
> Issue Type: Sub-task
> Components: Domain Management
> Reporter: Darran Lofthouse
> Assignee: Darran Lofthouse
> Labels: affects_elytron
> Fix For: 3.0.0.Alpha9
>
>
> At the moment all management requests are handled over the '/management' context; we also have a '/console' context to serve up the files for the admin console.
> The '/management' context is secured using standard HTTP mechanisms. This decision was taken so that clients could be written in different languages, and all they would need to know is how to use standard authentication mechanisms. Due to problems where web browsers could run malicious scripts, cross origin resource sharing is completely disabled for this context.
> We need to start to open up the handling of cross origin requests for a couple of reasons:
> - Enabling Keycloak SSO support.
> - Alternative console distribution options.
> The '/management' context is going to be retained as-is for legacy clients, possibly even switched off by default.
> A new context can then be added using non-browser based authentication; this could be Keycloak SSO or a form of Digest authentication where the response is handled by the console and not the web browser. Either way, as the browser is bypassed, it is no longer at risk of sending malicious cross origin requests.