[JBoss JIRA] (WFLY-315) Avoid running out of threads when connecting to the DC from a slave to pull down missing data
by Emanuel Muckenhuber (JIRA)
[ https://issues.jboss.org/browse/WFLY-315?page=com.atlassian.jira.plugin.s... ]
Emanuel Muckenhuber commented on WFLY-315:
------------------------------------------
But yeah, all blocking/cancellable tasks have to be executed outside of the IO (remoting) threads. This should already be the case; if you spot something like that, it's definitely a bug and has to be fixed.
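To make the pattern concrete, here is a minimal sketch of what "outside of the IO threads" means in practice (hypothetical code, not WildFly's actual handlers): the callback invoked on a Remoting IO thread only enqueues the work, and the blocking part runs on a separate worker pool.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadingHandler {
    // Hypothetical worker pool; the real code would use the host's managed executor.
    private final ExecutorService workerPool = Executors.newCachedThreadPool();

    /** Called on an IO (remoting) thread; must never block. */
    public void onMessage(final byte[] request) {
        workerPool.execute(new Runnable() {
            public void run() {
                handleBlocking(request); // free to block on locks, tx completion, etc.
            }
        });
    }

    /** Runs on a worker thread, where blocking is safe. */
    private void handleBlocking(byte[] request) {
        // ... acquire the controller lock, await tx.complete, etc.
    }
}
{code}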
> Avoid running out of threads when connecting to the DC from a slave to pull down missing data
> ---------------------------------------------------------------------------------------------
>
> Key: WFLY-315
> URL: https://issues.jboss.org/browse/WFLY-315
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public (Everyone can see)
> Components: Domain Management
> Reporter: Kabir Khan
> Assignee: Emanuel Muckenhuber
> Priority: Blocker
> Fix For: 9.0.0.CR1
>
>
> For WFLY-259, when a slave connects to the DC to pull down missing data, it does this either by getting a lock for the DC, or by joining the permit of the existing DC lock if the request to update a slave's server-config was executed as part of a composite obtaining a lock on the DC.
> As it works at present, there is a thread per slave which is blocked until the transaction completes. The DC threads are a finite resource, so a large number of slaves trying to pull down data will cause deadlock.
[JBoss JIRA] (WFLY-315) Avoid running out of threads when connecting to the DC from a slave to pull down missing data
by Emanuel Muckenhuber (JIRA)
[ https://issues.jboss.org/browse/WFLY-315?page=com.atlassian.jira.plugin.s... ]
Emanuel Muckenhuber edited comment on WFLY-315 at 3/21/14 9:40 AM:
-------------------------------------------------------------------
But yeah, all blocking/cancellable tasks have to be executed outside of the IO (remoting) threads. This should already be the case; if you spot something like that, it's definitely a bug and has to be fixed.
was (Author: emuckenhuber):
But yeah all blocking/cancellable task have to be executed outside of the IO (remoting) threads. This should be the case already, if you spot something like that it's definitely a bug and has to be fixed.
[JBoss JIRA] (WFLY-315) Avoid running out of threads when connecting to the DC from a slave to pull down missing data
by Emanuel Muckenhuber (JIRA)
[ https://issues.jboss.org/browse/WFLY-315?page=com.atlassian.jira.plugin.s... ]
Emanuel Muckenhuber commented on WFLY-315:
------------------------------------------
Yeah, I mean most of this relies on unbounded thread pools so that operations don't deadlock. We should really get rid of those at some point though.
We do throttle (1) mainly because when I was running a lot of concurrent requests with an unlimited thread pool, the server ran out of memory pretty quickly (thanks to ModelNode.clone()). (2) are executed by a single thread, because it's kind of pointless to have multiple registrations each use a thread when they cannot be executed concurrently. For (3) we have a limited pool on the DC; however, proxied requests cannot be limited, because they are all blocking (awaiting the tx.complete from the DC). Also, inbound requests for fetching data and files are unlimited to avoid deadlocks. This is all a huge mess that needs to be cleaned up at some point, but that would require some fundamental changes.
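As a rough illustration of point (2): since registrations cannot run concurrently anyway, they can be funnelled through a single-threaded executor so that pending requests queue up instead of each tying up a thread. This is a hypothetical sketch, not the actual WildFly code:
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RegistrationSerializer {
    // One thread drains the queue; exactly one registration runs at a time.
    private final ExecutorService registrations = Executors.newSingleThreadExecutor();

    public Future<?> register(Runnable registrationTask) {
        return registrations.submit(registrationTask);
    }
}
{code}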
[JBoss JIRA] (WFLY-315) Avoid running out of threads when connecting to the DC from a slave to pull down missing data
by David Lloyd (JIRA)
[ https://issues.jboss.org/browse/WFLY-315?page=com.atlassian.jira.plugin.s... ]
David Lloyd commented on WFLY-315:
----------------------------------
You mean, a thread pool with unlimited queue size? We do need to establish a finite size on the thread pool itself to avoid memory issues, as each thread has a potentially large stack and there are often OS limits on process/thread count. If you set up a standard ThreadPoolExecutor with a very large core size, it will spawn as many threads as your maximum concurrently executing tasks, which could be very high if you have lots of blocking tasks.
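For reference, the ThreadPoolExecutor behaviour described here, as a small illustration (the sizes are arbitrary, not a recommendation):
{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    // A huge core size means the pool creates a new thread for every task
    // submitted while all existing threads are busy; with many blocking
    // tasks this is effectively unbounded thread creation under load.
    static final ThreadPoolExecutor effectivelyUnbounded = new ThreadPoolExecutor(
            1000000, 1000000, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());

    // A small, fixed thread count with an unlimited queue caps thread (and
    // stack memory) usage; excess tasks wait in the queue instead.
    static final ThreadPoolExecutor boundedThreads = new ThreadPoolExecutor(
            16, 16, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());
}
{code}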
[JBoss JIRA] (WFLY-315) Avoid running out of threads when connecting to the DC from a slave to pull down missing data
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-315?page=com.atlassian.jira.plugin.s... ]
Brian Stansberry commented on WFLY-315:
---------------------------------------
My work on mgmt op cancellation has led to a general look at thread management issues (to allow interruption of request handling without screwing up connections, we have to isolate IO tasks from the cancellable thread, so: more async tasks, more thread complication).
I'm thinking we should go for a single thread pool (the host or server's main pool) with an unlimited size. We can use other techniques to throttle certain kinds of requests. For example, stick tasks in a queue and then only let a certain number of the general-purpose threads concurrently process the queue (a sketch of this follows the list below).
Tasks we might throttle:
1) End user requests. I believe this case was the original reason for a limited pool anyway. Note that I don't think there's any limit on the number of HTTP/REST requests, so this limit is pretty incomplete as it stands.
2) Slave registration requests. These will block anyway if executed concurrently, so why have > 1 thread doing them? I'm referring to the initial request -- once the message exchange starts there's no reason to throttle the other requests, as the initial request throttle will naturally do this.
3) Master->Slave and Slave->server requests. Why throttle these? It can just lead to deadlock issues. If we're throttling end user requests we shouldn't need to worry about how many requests a controlling process is sending to its controlled process.
4) Post-registration slave->master requests. Same as 3 above.
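Here is a sketch of the queue-based throttle suggested above (hypothetical code, not anything in WildFly today): tasks of a throttled category go into a queue, and at most a fixed number of the shared pool's threads drain that queue concurrently. Idle categories consume no threads at all.
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

public final class ThrottledExecutor implements Executor {
    private final Executor sharedPool;  // the host/server's single unlimited pool
    private final int maxWorkers;       // concurrency cap for this request category
    private final Queue<Runnable> queue = new ConcurrentLinkedQueue<Runnable>();
    private final AtomicInteger active = new AtomicInteger();

    public ThrottledExecutor(Executor sharedPool, int maxWorkers) {
        this.sharedPool = sharedPool;
        this.maxWorkers = maxWorkers;
    }

    public void execute(Runnable task) {
        queue.add(task);
        // Start another drainer only while we are below the cap.
        for (;;) {
            int current = active.get();
            if (current >= maxWorkers) {
                break; // enough drainers already running
            }
            if (active.compareAndSet(current, current + 1)) {
                sharedPool.execute(new Runnable() {
                    public void run() {
                        drain();
                    }
                });
                break;
            }
        }
    }

    private void drain() {
        try {
            Runnable task;
            while ((task = queue.poll()) != null) {
                task.run();
            }
        } finally {
            // (Race note: a production version must re-check the queue after
            // decrementing, or a task enqueued at that instant may sit unprocessed.)
            active.decrementAndGet();
        }
    }
}
{code}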
The other aspect of this we should look into is why these "slave pulls down missing data" requests are transactional. It's not obvious to me why they can't return and release the lock immediately. But I'm probably missing something.
[JBoss JIRA] (WFLY-3131) isSensitiveValue of class SensitiveVaultExpressionConstraint uses incorrect index in java.lang.String.substring method
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-3131?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on WFLY-3131:
-----------------------------------------------
Kabir Khan <kkhan(a)redhat.com> changed the Status of [bug 1077838|https://bugzilla.redhat.com/show_bug.cgi?id=1077838] from POST to MODIFIED
> isSensitiveValue of class SensitiveVaultExpressionConstraint uses incorrect index in java.lang.String.substring method
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-3131
> URL: https://issues.jboss.org/browse/WFLY-3131
> Project: WildFly
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Domain Management
> Affects Versions: 8.0.0.Final
> Environment: All
> Reporter: Jay Kumar SenSharma
> Assignee: Jay Kumar SenSharma
>
> The isSensitiveValue(ModelNode value) method of class "org.jboss.as.controller.access.constraint.SensitiveVaultExpressionConstraint" seems to be using an incorrect index in the java.lang.String.substring method, which causes the following exception in the logs while executing the following kind of CLI command:
> {code}
> [standalone@localhost:9990 /] /subsystem=logging/periodic-rotating-file-handler=FILE:write-attribute(name=formatter, value="%d{HH:mm:ss,SSS} %-5p [%c] (${jboss.node.name} %t) %s%E%n")
> {
> "outcome" => "failed",
> "failure-description" => "JBAS014749: Operation handler failed: String index out of range: -15",
> "rolled-back" => true
> }
> {code}
> The Exception can be seen as following in the WildFly Logs:
> {code}
> 21:58:04,821 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 25) JBAS014612: Operation ("write-attribute") failed - address: ([
> ("subsystem" => "logging"),
> ("periodic-rotating-file-handler" => "FILE")
> ]): java.lang.StringIndexOutOfBoundsException: String index out of range: -15
> at java.lang.String.substring(String.java:1911) [rt.jar:1.7.0_51]
> at org.jboss.as.controller.access.constraint.SensitiveVaultExpressionConstraint$Factory.isSensitiveValue(SensitiveVaultExpressionConstraint.java:128) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.access.constraint.SensitiveVaultExpressionConstraint$Factory.isSensitiveAction(SensitiveVaultExpressionConstraint.java:89) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.access.constraint.SensitiveVaultExpressionConstraint$Factory.getRequiredConstraint(SensitiveVaultExpressionConstraint.java:81) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.access.rbac.DefaultPermissionFactory.getRequiredPermissions(DefaultPermissionFactory.java:201) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.access.permission.ManagementPermissionAuthorizer.authorize(ManagementPermissionAuthorizer.java:100) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.access.management.DelegatingConfigurableAuthorizer.authorize(DelegatingConfigurableAuthorizer.java:98) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.OperationContextImpl.getBasicAuthorizationResponse(OperationContextImpl.java:1153) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.OperationContextImpl.authorize(OperationContextImpl.java:1055) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.OperationContextImpl.authorize(OperationContextImpl.java:1015) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.OperationContextImpl.getResourceRegistration(OperationContextImpl.java:265) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.operations.global.WriteAttributeHandler.execute(WriteAttributeHandler.java:72) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:591) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:469) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.AbstractOperationContext.completeStepInternal(AbstractOperationContext.java:273) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:268) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:272) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:146) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.doExecute(ModelControllerClientOperationHandler.java:174) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.access$300(ModelControllerClientOperationHandler.java:105) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:125) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:121) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.7.0_51]
> at javax.security.auth.Subject.doAs(Subject.java:415) [rt.jar:1.7.0_51]
> at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:94) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1.execute(ModelControllerClientOperationHandler.java:121) [wildfly-controller-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$2$1.doExecute(AbstractMessageHandler.java:283) [wildfly-protocol-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$AsyncTaskRunner.run(AbstractMessageHandler.java:504) [wildfly-protocol-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
> {code}
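A hypothetical, simplified reconstruction of the failure mode (the real constraint code differs, but the arithmetic is the same): the value contains a '}' before the "${" that starts the candidate expression, so an end index found by a plain indexOf('}') lands before the begin index, and substring reports the negative length in the exception message.
{code}
public class SubstringBugDemo {
    public static void main(String[] args) {
        String value = "%d{HH:mm:ss,SSS} %-5p [%c] (${jboss.node.name} %t) %s%E%n";
        int begin = value.indexOf("${");  // start of ${jboss.node.name}
        int end = value.indexOf('}');     // finds the '}' of %d{...}, which is BEFORE begin
        try {
            value.substring(begin, end);  // end < begin => negative length
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println(e.getMessage()); // "String index out of range: -<n>"
        }
        // The guard a fix needs: search for '}' starting after the "${", and
        // only take the substring when the bounds are sane.
        int safeEnd = value.indexOf('}', begin);
        if (begin >= 0 && safeEnd > begin) {
            System.out.println(value.substring(begin, safeEnd + 1)); // ${jboss.node.name}
        }
    }
}
{code}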
[JBoss JIRA] (WFLY-3149) Windows service on 64 bit systems cannot be stopped
by Mohan Potturi (JIRA)
Mohan Potturi created WFLY-3149:
-----------------------------------
Summary: Windows service on 64 bit systems cannot be stopped
Key: WFLY-3149
URL: https://issues.jboss.org/browse/WFLY-3149
Project: WildFly
Issue Type: Bug
Security Level: Public (Everyone can see)
Affects Versions: 8.0.0.Final
Environment: Windows 2008 Server, Windows 7 professional - both 64 bit systems using JDK 1.7.0_09
Reporter: Mohan Potturi
The Windows service cannot be stopped. It says 'stopping' in the Windows services user interface window. The only way to stop it is to actually kill the Java process. It works flawlessly on 32-bit systems though.
[JBoss JIRA] (JGRP-1809) New discovery protocol to be used in conjunction with SHARED_LOOPBACK
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1809?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1809:
---------------------------
Description:
Sometimes tests in the testsuite fail because nodes running over SHARED_LOOPBACK fail to discover each other and therefore don't form a cluster. This can happen because discovery requests sent via PING are discarded on a full thread pool.
Because the default test stack doesn't have a MERGE protocol, members won't get merged back.
h5. GOAL
* Create a discovery protocol which always discovers other members registered with the same cluster
h5. SOLUTION
* Instead of using SHARED_LOOPBACK to send and receive discovery requests, query SHARED_LOOPBACK directly for discovery information. E.g. if we have members A,B,C registered with cluster "demo" in SHARED_LOOPBACK, then a node D joining can fetch (via SHARED_LOOPBACK_PING) the PingData of A, B and C directly from SHARED_LOOPBACK.
* This way, we won't have any failed test cases (run over SHARED_LOOPBACK) which fail because no other members were discovered. Also, discovery (even for the first node started in a cluster) is very fast, and we don't need the timeout at all.
* Also, we don't need to add a MERGE protocol to the stack as merging is not needed
was:
Sometimes tests in the testsuite fail because the nodes run over SHARED_LOOPBACK fail to discover each other and therefore don't form a cluster. This can happen because discovery requests sent via PING are discarded on a full thread pool.
Because the default test stack doesn't have a MERGE protocol, members won't get merged back.
h5. GOAL
* Create a discovery protocol which always discovers other members registered with the same cluster
h5. SOLUTION
* Instead of using SHARED_LOOPBACK to send and receive discovery requests, query SHARED_LOOPBACK directly for discovery information. E.g. if we have members A,B,C registered with cluster "demo" in SHARED_LOOPBACK, then a node D joining can fetch (via SHARED_LOOPBACK_PING) the PingData of A, B and C directly from SHARED_LOOPBACK.
This way, we won't have any failed test cases (run over SHARED_LOOPBACK) which fail because no other members were discovered. Also, discovery (even for the first node started in a cluster) is very fast, and we don't need the timeout at all.
[JBoss JIRA] (JGRP-1809) New discovery protocol to be used in conjunction with SHARED_LOOPBACK
by Bela Ban (JIRA)
Bela Ban created JGRP-1809:
------------------------------
Summary: New discovery protocol to be used in conjunction with SHARED_LOOPBACK
Key: JGRP-1809
URL: https://issues.jboss.org/browse/JGRP-1809
Project: JGroups
Issue Type: Enhancement
Reporter: Bela Ban
Assignee: Bela Ban
Priority: Minor
Fix For: 3.5
Sometimes tests in the testsuite fail because nodes running over SHARED_LOOPBACK fail to discover each other and therefore don't form a cluster. This can happen because discovery requests sent via PING are discarded on a full thread pool.
Because the default test stack doesn't have a MERGE protocol, members won't get merged back.
h5. GOAL
* Create a discovery protocol which always discovers other members registered with the same cluster
h5. SOLUTION
* Instead of using SHARED_LOOPBACK to send and receive discovery requests, query SHARED_LOOPBACK directly for discovery information. E.g. if we have members A,B,C registered with cluster "demo" in SHARED_LOOPBACK, then a node D joining can fetch (via SHARED_LOOPBACK_PING) the PingData of A, B and C directly from SHARED_LOOPBACK.
This way, we won't have any failed test cases (run over SHARED_LOOPBACK) which fail because no other members were discovered. Also, discovery (even for the first node started in a cluster) is very fast, and we don't need the timeout at all.
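The gist of the idea, as a hand-wavy sketch (not JGroups' actual SHARED_LOOPBACK / SHARED_LOOPBACK_PING classes): every in-process member registers with a static registry keyed by cluster name, and discovery simply reads that registry instead of exchanging PING messages that a full thread pool might drop.
{code}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public final class LoopbackRegistry {
    // cluster name -> members currently registered in this JVM
    private static final ConcurrentMap<String, List<String>> CLUSTERS =
            new ConcurrentHashMap<String, List<String>>();

    public static void register(String cluster, String member) {
        List<String> members = CLUSTERS.get(cluster);
        if (members == null) {
            List<String> fresh = new CopyOnWriteArrayList<String>();
            List<String> prev = CLUSTERS.putIfAbsent(cluster, fresh);
            members = (prev != null) ? prev : fresh;
        }
        members.add(member);
    }

    // Discovery is a plain read: no request/response round-trip, no timeout,
    // and nothing for a saturated thread pool to drop.
    public static List<String> discover(String cluster) {
        List<String> members = CLUSTERS.get(cluster);
        return members != null ? members : Collections.<String>emptyList();
    }
}
{code}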