[JBoss JIRA] (WFCORE-2145) Confusing transitions of ControlledProcessState
by ehsavoie Hugonnet (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2145?page=com.atlassian.jira.plugi... ]
ehsavoie Hugonnet moved JBEAP-8046 to WFCORE-2145:
--------------------------------------------------
Project: WildFly Core (was: JBoss Enterprise Application Platform)
Key: WFCORE-2145 (was: JBEAP-8046)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Domain Management
(was: Domain Management)
Affects Version/s: 3.0.0.Alpha16
(was: 7.1.0.DR9)
> Confusing transitions of ControlledProcessState
> -----------------------------------------------
>
> Key: WFCORE-2145
> URL: https://issues.jboss.org/browse/WFCORE-2145
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Affects Versions: 3.0.0.Alpha16
> Reporter: ehsavoie Hugonnet
> Assignee: ehsavoie Hugonnet
>
> Starting a server into normal mode emits these transitions for the process running state:
> - starting -> suspended -> normal
> Similarly when transitioning to admin-only mode:
> - starting -> suspended -> admin-only
> Stopping from normal mode:
> - normal -> suspending -> suspended -> stopping
> Starting into suspended mode emits:
> - starting -> stopping -> stopped
> (and the {{stopped}} state represents that the server is actually in {{suspended}} mode)
> I find it confusing that:
> - During a normal start, it transitions through a state named {{suspended}} even though the server is not suspended.
> - During a stop, it is even more confusing because it goes through an extra state, {{suspending}}, which doesn't occur during start.
> - The state to represent a suspended server is named {{stopped}} and not {{suspended}}.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
9 years, 3 months
[JBoss JIRA] (JGRP-2143) TP: use only one thread per member to pass up regular messages
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2143?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2143:
---------------------------
Description:
This applies only to _regular_ messages; OOB and internal messages are processed by passing them to the thread pool directly when they're received.
The processing of a message received from B is as follows:
* A regular message (or message batch) is assigned a thread from the thread pool and passed up to the reliability protocol, e.g. NAKACK2 or UNICAST3.
* There it is added to the table for B.
* The thread sees if another thread is already delivering messages from B to the application. If not, it grabs as many consecutive (ordered) messages from the table as it can and delivers them to the application. Otherwise, it returns and can be assigned other tasks.
The problem here is that more than one thread may be passing up messages from a given sender B; only at the NAKACK2 or UNICAST3 level will a single thread be selected to deliver the messages to the application.
This causes higher thread pool usage than required, with all of its drawbacks, e.g. more context switching, higher contention on adding messages to the table for B, and possibly exhaustion of the thread pool.
An example of where service is denied or delayed:
* We have a cluster of \{A,B,C,D\}
* A receives 10 messages from B, 4 from C and 1 from D
* The thread pool's max size is 20
* The 10 messages from B are processed; all 10 threads add their messages to the table, but only 1 delivers them to the application and the other 9 return to the pool
* 4 messages from C are added to C's table, 1 thread delivers them and 3 return
* The 1 message from D is added to D's table and the same thread is used to deliver the message up the stack to the application
So while we receive 15 messages, effectively only 3 threads are needed to deliver them to the application: as these are regular messages, they need to be delivered in _sender order_.
The 9 threads which process messages from B only add them to B's table and then return immediately. This causes increased context switching, plus more contention on B's table (which is synchronized), and possibly exhaustion of the thread pool. For example, if the pool's max size were only 10, then processing the first 10 messages from B would exhaust the pool, and the other messages from C and D would be processed in newly spawned threads.
SOLUTION
* (Only applicable to _regular_ messages)
* When a message (or batch) from sender P is received, we check if another thread is already passing up messages from P. If not, we pass the message up by grabbing a thread from the thread pool. This will add the message to P's table and deliver as many messages (from the table) as possible to the application.
* If there's currently a thread delivering P's message, we simply add the message (or batch) to a queue for P and return.
* When the delivery thread returns, it checks the queue for P and delivers all queued messages, or returns if the queue is empty.
* (The queue is actually a MessageBatch, and new messages are simply appended to it. On delivery, the batch is cleared)
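The handoff described in these steps can be sketched roughly as follows. This is an illustrative sketch for a single sender, not the actual JGroups implementation: the class name is hypothetical, a plain List stands in for the MessageBatch, and `delivered` stands in for passing messages up the stack.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch for one sender: the first receiver grabs a pool thread
// and becomes the delivery thread; later receivers just append to the pending
// batch (a plain List here, standing in for the MessageBatch) and return.
class PerSenderQueue {
    private final List<String> pending = new ArrayList<>();
    private final AtomicBoolean delivering = new AtomicBoolean(false);
    private final ExecutorService pool;
    final List<String> delivered = Collections.synchronizedList(new ArrayList<>());

    PerSenderQueue(ExecutorService pool) { this.pool = pool; }

    void receive(String msg) {
        synchronized (pending) { pending.add(msg); }
        // Grab a pool thread only if nobody is delivering for this sender yet
        if (delivering.compareAndSet(false, true))
            pool.execute(this::deliverLoop);
    }

    private void deliverLoop() {
        for (;;) {
            List<String> batch;
            synchronized (pending) {
                if (pending.isEmpty()) {   // drained: give up the delivery role
                    delivering.set(false);
                    return;
                }
                batch = new ArrayList<>(pending);
                pending.clear();           // "on delivery, the batch is cleared"
            }
            delivered.addAll(batch);       // stands in for passing up the stack
        }
    }
}
```

With this in place, any number of concurrent receives for the same sender result in at most one active delivery task at a time.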
The effects of this for regular messages are:
* Fewer threads: the thread pool only has a max of <cluster-members> threads for regular messages where <cluster-members> is the number of members in the cluster from whom we are concurrently receiving messages. E.g. for a cluster \{A,B,C,D\}, if we're receiving messages at the same time from all members, then the max size is 4.
** Of course, OOB and internal messages, plus timer tasks will add to this number.
* Less contention on the table for a given member: instead of 10 threads all adding their messages to B's table (contention on the table lock) and then CASing a boolean, only 1 thread ever adds and removes messages to/from the table. This means uncontended (= fast) lock acquisition for regular messages (of course, if we use OOB messages, then we do have contention).
* Appending to a batch is much faster than adding to a table
* The downside is that we're actually storing messages twice: once in the batch for P and once in P's table. But these are arrays of pointers, so not a lot of memory is required.
Example: for the 10 messages from B above, 9 will be appended to a batch in B's queue and 1 thread will be grabbed from the pool to deliver the first message. When that thread is done, it will grab the batch of 9 messages, add it to the table and deliver it as well.
This is similar to the bulkhead pattern \[1\].
\[1\] http://stackoverflow.com/questions/30391809/what-is-bulkhead-pattern-used...
was:
This applies only to _regular_ messages; OOB and internal messages are processed by passing them to the thread pool directly when they're received.
The processing of a message received from B is as follows:
* A regular message (or message batch) is assigned a thread from the thread pool and passed up to the reliability protocol, e.g. NAKACK2 or UNICAST3.
* There is is added to the table for B.
* The thread sees if another thread is already delivering messages from B to the application. If not, it grabs as many consecutive (ordered) messages from the table as it can and delivers them to the application. Otherwise, it returns and can be assigned other tasks.
The problem here is that more than one thread may be passing up messages from a given sender B; only at the NAKACK2 or UNICAST3 level will a single thread be selected to deliver the messages to the application.
This causes higher thread pool usage than required, with all of its drawbacks, e.g. more context switching, higher contention on adding messages to the table for B, and possibly exhaustion of the thread pool.
An example of where service is denied or delayed:
* We have a cluster of \{A,B,C,D\}
* A receives 10 messages from B, 4 from C and 1 from D
* The thread pool's max size is 20
* The 10 messages from B are processed; all 10 threads add their messages to the table, but only 1 delivers them to the application and the other 9 return to the pool
* 4 messages from C are added to C's table, 1 thread delivers them and 3 return
* The 1 message from D is added to D's table and the same thread is used to deliver the message up the stack to the application
So while we receive 15 messages, effectively only 3 threads are needed to deliver them to the application: as these are regular messages, they need to be delivered in _sender order_.
The 9 threads which process messages from B are only adding them to B's table and then return immediately. This causes increased context switching, plus more contention on B's table (which is synchronized), and possibly exhaustion of the thread pool. For example, if the pool's max size was only 10, then processing the first 10 messages from B would exhaust the table, and the other messages from C and D would be processed in newly spawned threads.
SOLUTION
* (Only applicable to _regular_ messages)
* When a message (or batch) from sender P is received, we check if another thread is already passing up messages from B. If not, we pass the message up by grabbing a thread from the thread pool. This will add the message to P's table and deliver as many messages (from from the table) as possible to the application.
* If there's currently a thread delivering P's message, we simply add the message (or batch) to a queue for P and return.
* When the delivery thread returns, it checks the queue for P and delivers all queued messages, or returns if the queue is empty.
* (The queue is actually a MessageBatch, and new messages are simply appended to it. On delivery, the batch is cleared)
The effects of this for regular messages are
* Fewer threads: the thread pool only has a max of <cluster-members> threads for regular messages where <cluster-members> is the number of members in the cluster from whom we are concurrently receiving messages. E.g. for a cluster \{A,B,C,D\}, if we're receiving messages at the same time from all members, then the max size is 4.
** Of course, OOB and internal messages, plus timer tasks will add to this number.
* Less contention on the table for a given member: instead of 10 threads all adding their messages to B's table (contention on the table lock) and then CASing a boolean, only 1 thread ever adds and removes messages to/from the table. This means uncontended (= fast) lock acquisition for regular messages (of course, if we use OOB messages, then we do have contention).
* Appending to a batch is much faster then adding to a table
* The downside is that we're storing messages actually twice: once in the batch for P and once in P's table. But these are arrays of pointers, so not a lot of memory required.
Example: the 10 threads for messages from B above, will create a batch of 9 messages in B's queue and grab 1 thread from the pool to deliver its message. When the thread is done, it will grab the message batch of 9 and also add it to the table and deliver it.
This is similar to the bulkhead pattern \[1\].
\[1\] http://stackoverflow.com/questions/30391809/what-is-bulkhead-pattern-used...
> TP: use only one thread per member to pass up regular messages
> --------------------------------------------------------------
>
> Key: JGRP-2143
> URL: https://issues.jboss.org/browse/JGRP-2143
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Labels: CR3
> Fix For: 4.0
>
>
> This applies only to _regular_ messages; OOB and internal messages are processed by passing them to the thread pool directly when they're received.
> The processing of a message received from B is as follows:
> * A regular message (or message batch) is assigned a thread from the thread pool and passed up to the reliability protocol, e.g. NAKACK2 or UNICAST3.
> * There it is added to the table for B.
> * The thread sees if another thread is already delivering messages from B to the application. If not, it grabs as many consecutive (ordered) messages from the table as it can and delivers them to the application. Otherwise, it returns and can be assigned other tasks.
> The problem here is that more than one thread may be passing up messages from a given sender B; only at the NAKACK2 or UNICAST3 level will a single thread be selected to deliver the messages to the application.
> This causes higher thread pool usage than required, with all of its drawbacks, e.g. more context switching, higher contention on adding messages to the table for B, and possibly exhaustion of the thread pool.
> An example of where service is denied or delayed:
> * We have a cluster of \{A,B,C,D\}
> * A receives 10 messages from B, 4 from C and 1 from D
> * The thread pool's max size is 20
> * The 10 messages from B are processed; all 10 threads add their messages to the table, but only 1 delivers them to the application and the other 9 return to the pool
> * 4 messages from C are added to C's table, 1 thread delivers them and 3 return
> * The 1 message from D is added to D's table and the same thread is used to deliver the message up the stack to the application
> So while we receive 15 messages, effectively only 3 threads are needed to deliver them to the application: as these are regular messages, they need to be delivered in _sender order_.
> The 9 threads which process messages from B only add them to B's table and then return immediately. This causes increased context switching, plus more contention on B's table (which is synchronized), and possibly exhaustion of the thread pool. For example, if the pool's max size were only 10, then processing the first 10 messages from B would exhaust the pool, and the other messages from C and D would be processed in newly spawned threads.
> SOLUTION
> * (Only applicable to _regular_ messages)
> * When a message (or batch) from sender P is received, we check if another thread is already passing up messages from P. If not, we pass the message up by grabbing a thread from the thread pool. This will add the message to P's table and deliver as many messages (from the table) as possible to the application.
> * If there's currently a thread delivering P's message, we simply add the message (or batch) to a queue for P and return.
> * When the delivery thread returns, it checks the queue for P and delivers all queued messages, or returns if the queue is empty.
> * (The queue is actually a MessageBatch, and new messages are simply appended to it. On delivery, the batch is cleared)
> The effects of this for regular messages are:
> * Fewer threads: the thread pool only has a max of <cluster-members> threads for regular messages where <cluster-members> is the number of members in the cluster from whom we are concurrently receiving messages. E.g. for a cluster \{A,B,C,D\}, if we're receiving messages at the same time from all members, then the max size is 4.
> ** Of course, OOB and internal messages, plus timer tasks will add to this number.
> * Less contention on the table for a given member: instead of 10 threads all adding their messages to B's table (contention on the table lock) and then CASing a boolean, only 1 thread ever adds and removes messages to/from the table. This means uncontended (= fast) lock acquisition for regular messages (of course, if we use OOB messages, then we do have contention).
> * Appending to a batch is much faster than adding to a table
> * The downside is that we're actually storing messages twice: once in the batch for P and once in P's table. But these are arrays of pointers, so not a lot of memory is required.
> Example: for the 10 messages from B above, 9 will be appended to a batch in B's queue and 1 thread will be grabbed from the pool to deliver the first message. When that thread is done, it will grab the batch of 9 messages, add it to the table and deliver it as well.
> This is similar to the bulkhead pattern \[1\].
> \[1\] http://stackoverflow.com/questions/30391809/what-is-bulkhead-pattern-used...
--
[JBoss JIRA] (JGRP-2126) Table.removeMany() creates unneeded temporary list
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2126?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-2126.
----------------------------
Resolution: Done
> Table.removeMany() creates unneeded temporary list
> --------------------------------------------------
>
> Key: JGRP-2126
> URL: https://issues.jboss.org/browse/JGRP-2126
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Labels: CR3
> Fix For: 4.0
>
>
> When a thread acquires the CAS in NAKACK2 or UNICAST3 to deliver messages, it calls Table.removeMany(), which removes messages that satisfy a condition and returns them as a list. Next, a MessageBatch is created from that list and passed up.
> The creation of the temp list is unnecessary; instead, create a properly sized MessageBatch and have Table.removeMany() add the messages directly to the batch.
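A rough sketch of the idea (hypothetical class and method shapes; JGroups' real Table and MessageBatch APIs differ): removeMany() hands each matching element straight to a sink, here a Consumer standing in for MessageBatch.add(), so no temporary list is allocated.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative buffer: remove consecutive head elements matching a filter and
// feed them directly into a sink instead of returning an intermediate list.
class Buffer<T> {
    private final Deque<T> elements = new ArrayDeque<>();

    synchronized void add(T el) { elements.add(el); }

    // Removes consecutive head elements satisfying the filter, feeding them
    // directly into the sink; returns how many were removed.
    synchronized int removeMany(Predicate<T> filter, Consumer<T> sink) {
        int removed = 0;
        while (!elements.isEmpty() && filter.test(elements.peek())) {
            sink.accept(elements.poll());
            removed++;
        }
        return removed;
    }
}
```

A pre-sized batch passed as the sink receives the messages with no extra allocation or copy step.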
--
[JBoss JIRA] (JGRP-2143) TP: use only one thread per member to pass up regular messages
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2143?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2143:
---------------------------
Labels: CR3 (was: )
> TP: use only one thread per member to pass up regular messages
> --------------------------------------------------------------
>
> Key: JGRP-2143
> URL: https://issues.jboss.org/browse/JGRP-2143
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Labels: CR3
> Fix For: 4.0
>
>
> This applies only to _regular_ messages; OOB and internal messages are processed by passing them to the thread pool directly when they're received.
> The processing of a message received from B is as follows:
> * A regular message (or message batch) is assigned a thread from the thread pool and passed up to the reliability protocol, e.g. NAKACK2 or UNICAST3.
> * There it is added to the table for B.
> * The thread sees if another thread is already delivering messages from B to the application. If not, it grabs as many consecutive (ordered) messages from the table as it can and delivers them to the application. Otherwise, it returns and can be assigned other tasks.
> The problem here is that more than one thread may be passing up messages from a given sender B; only at the NAKACK2 or UNICAST3 level will a single thread be selected to deliver the messages to the application.
> This causes higher thread pool usage than required, with all of its drawbacks, e.g. more context switching, higher contention on adding messages to the table for B, and possibly exhaustion of the thread pool.
> An example of where service is denied or delayed:
> * We have a cluster of \{A,B,C,D\}
> * A receives 10 messages from B, 4 from C and 1 from D
> * The thread pool's max size is 20
> * The 10 messages from B are processed; all 10 threads add their messages to the table, but only 1 delivers them to the application and the other 9 return to the pool
> * 4 messages from C are added to C's table, 1 thread delivers them and 3 return
> * The 1 message from D is added to D's table and the same thread is used to deliver the message up the stack to the application
> So while we receive 15 messages, effectively only 3 threads are needed to deliver them to the application: as these are regular messages, they need to be delivered in _sender order_.
> The 9 threads which process messages from B only add them to B's table and then return immediately. This causes increased context switching, plus more contention on B's table (which is synchronized), and possibly exhaustion of the thread pool. For example, if the pool's max size were only 10, then processing the first 10 messages from B would exhaust the pool, and the other messages from C and D would be processed in newly spawned threads.
> SOLUTION
> * (Only applicable to _regular_ messages)
> * When a message (or batch) from sender P is received, we check if another thread is already passing up messages from P. If not, we pass the message up by grabbing a thread from the thread pool. This will add the message to P's table and deliver as many messages (from the table) as possible to the application.
> * If there's currently a thread delivering P's message, we simply add the message (or batch) to a queue for P and return.
> * When the delivery thread returns, it checks the queue for P and delivers all queued messages, or returns if the queue is empty.
> * (The queue is actually a MessageBatch, and new messages are simply appended to it. On delivery, the batch is cleared)
> The effects of this for regular messages are:
> * Fewer threads: the thread pool only has a max of <cluster-members> threads for regular messages where <cluster-members> is the number of members in the cluster from whom we are concurrently receiving messages. E.g. for a cluster \{A,B,C,D\}, if we're receiving messages at the same time from all members, then the max size is 4.
> ** Of course, OOB and internal messages, plus timer tasks will add to this number.
> * Less contention on the table for a given member: instead of 10 threads all adding their messages to B's table (contention on the table lock) and then CASing a boolean, only 1 thread ever adds and removes messages to/from the table. This means uncontended (= fast) lock acquisition for regular messages (of course, if we use OOB messages, then we do have contention).
> * Appending to a batch is much faster than adding to a table
> * The downside is that we're actually storing messages twice: once in the batch for P and once in P's table. But these are arrays of pointers, so not a lot of memory is required.
> Example: for the 10 messages from B above, 9 will be appended to a batch in B's queue and 1 thread will be grabbed from the pool to deliver the first message. When that thread is done, it will grab the batch of 9 messages, add it to the table and deliver it as well.
> This is similar to the bulkhead pattern \[1\].
> \[1\] http://stackoverflow.com/questions/30391809/what-is-bulkhead-pattern-used...
--
[JBoss JIRA] (JGRP-2150) More efficient message adding and draining
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2150?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2150:
---------------------------
Labels: CR3 (was: )
> More efficient message adding and draining
> ------------------------------------------
>
> Key: JGRP-2150
> URL: https://issues.jboss.org/browse/JGRP-2150
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Labels: CR3
> Fix For: 4.0
>
>
> In NAKACK2, UNICAST3 and in MaxOneThreadPerSenderPolicy, we have a pattern where one or more producers add messages (to a table in NAKACK2 and UNICAST3, or to a MessageBatch in MaxOneThreadPerSenderPolicy) and then only *a single thread* removes and delivers messages up the stack.
> This requires synchronization around (1) determining which thread will remove messages, (2) adding messages to the table (or batch) and (3) removing messages from the table or batch.
> Unit tests DrainTest and MessageBatchDrainTest show how a simple AtomicInteger can be used to do this.
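The counter-based idea can be sketched roughly like this (illustrative names, not the actual JGroups classes or tests): producers bump an atomic counter after enqueueing, and whoever moves it from 0 becomes the sole drainer, so no lock or separate boolean CAS is needed to pick the delivery thread.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Illustrative sketch: producers enqueue and then bump a counter; the thread
// that moves the counter from 0 becomes the single drainer for this round.
class CounterDrain<T> {
    private final Queue<T> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger count = new AtomicInteger();

    // Returns true if the caller became the drainer for this round.
    boolean add(T msg, Consumer<T> deliver) {
        queue.add(msg);
        if (count.getAndIncrement() != 0)
            return false;                // another thread is already draining
        int remaining;
        do {
            T m = queue.poll();
            if (m != null) {
                deliver.accept(m);
                remaining = count.decrementAndGet();
            } else
                remaining = count.get(); // an enqueue is still in flight; retry
        } while (remaining != 0);
        return true;
    }
}
```

The drainer keeps going until the counter balances to 0, so messages added while it is draining are picked up in the same round instead of spawning another delivery thread.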
--
[JBoss JIRA] (WFCORE-2139) domain-organization attribute causes error when starting domain
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2139?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFCORE-2139:
-------------------------------------
Fix Version/s: 3.0.0.Beta1
> domain-organization attribute causes error when starting domain
> ---------------------------------------------------------------
>
> Key: WFCORE-2139
> URL: https://issues.jboss.org/browse/WFCORE-2139
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Reporter: Claudio Miranda
> Assignee: Ken Wills
> Priority: Blocker
> Fix For: 3.0.0.Beta1
>
>
> There is an error when starting/reloading a domain with the domain-organization attribute set.
> {code}
> :write-attribute(name=domain-organization,value="my test organization")
> {
> "outcome" => "success",
> "result" => undefined,
> "server-groups" => undefined
> }
> reload --host=master
> {code}
> The server log throws
> {code}
> [Host Controller] 16:06:42,629 ERROR [org.jboss.as.controller] (Controller Boot Thread)
> [Host Controller]
> [Host Controller] OPVDX001: Validation error in domain.xml ---------------------------------------
> [Host Controller] |
> [Host Controller] | 1: <?xml version='1.0' encoding='UTF-8'?>
> [Host Controller] | 2:
> [Host Controller] | 3: <domain xmlns="urn:jboss:domain:5.0" domain-organization="my test organization">
> [Host Controller] | ^^^^ 'domain-organization' isn't an allowed attribute for the 'domain'
> [Host Controller] | element
> [Host Controller] |
> [Host Controller] | Attributes allowed here are: name
> [Host Controller] |
> [Host Controller] | 4:
> [Host Controller] | 5: <extensions>
> [Host Controller] | 6: <extension module="org.jboss.as.clustering.infinispan"/>
> [Host Controller] |
> [Host Controller] | The primary underlying error message was:
> [Host Controller] | > ParseError at [row,col]:[3,1]
> [Host Controller] | > Message: WFLYCTL0197: Unexpected attribute 'domain-organization'
> [Host Controller] | > encountered
> [Host Controller] |
> [Host Controller] |-------------------------------------------------------------------------------
> [Host Controller]
> [Host Controller] 16:06:42,630 ERROR [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0033: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration
> [Host Controller] at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:143)
> [Host Controller] at org.jboss.as.host.controller.DomainModelControllerService.boot(DomainModelControllerService.java:738)
> [Host Controller] at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:314)
> [Host Controller] at java.lang.Thread.run(Thread.java:745)
> [Host Controller]
> [Host Controller] 16:06:42,630 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0034: Host Controller boot has failed in an unrecoverable manner; exiting. See previous messages for details.
> [Host Controller] 16:06:42,631 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0178: Aborting with exit code 99
> {code}
--
[JBoss JIRA] (WFCORE-2139) domain-organization attribute causes error when starting domain
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2139?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFCORE-2139:
-------------------------------------
Priority: Blocker (was: Major)
> domain-organization attribute causes error when starting domain
> ---------------------------------------------------------------
>
> Key: WFCORE-2139
> URL: https://issues.jboss.org/browse/WFCORE-2139
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Reporter: Claudio Miranda
> Assignee: Ken Wills
> Priority: Blocker
> Fix For: 3.0.0.Beta1
>
>
> There is an error when starting/reloading a domain with the domain-organization attribute set.
> {code}
> :write-attribute(name=domain-organization,value="my test organization")
> {
> "outcome" => "success",
> "result" => undefined,
> "server-groups" => undefined
> }
> reload --host=master
> {code}
> The server log throws
> {code}
> [Host Controller] 16:06:42,629 ERROR [org.jboss.as.controller] (Controller Boot Thread)
> [Host Controller]
> [Host Controller] OPVDX001: Validation error in domain.xml ---------------------------------------
> [Host Controller] |
> [Host Controller] | 1: <?xml version='1.0' encoding='UTF-8'?>
> [Host Controller] | 2:
> [Host Controller] | 3: <domain xmlns="urn:jboss:domain:5.0" domain-organization="my test organization">
> [Host Controller] | ^^^^ 'domain-organization' isn't an allowed attribute for the 'domain'
> [Host Controller] | element
> [Host Controller] |
> [Host Controller] | Attributes allowed here are: name
> [Host Controller] |
> [Host Controller] | 4:
> [Host Controller] | 5: <extensions>
> [Host Controller] | 6: <extension module="org.jboss.as.clustering.infinispan"/>
> [Host Controller] |
> [Host Controller] | The primary underlying error message was:
> [Host Controller] | > ParseError at [row,col]:[3,1]
> [Host Controller] | > Message: WFLYCTL0197: Unexpected attribute 'domain-organization'
> [Host Controller] | > encountered
> [Host Controller] |
> [Host Controller] |-------------------------------------------------------------------------------
> [Host Controller]
> [Host Controller] 16:06:42,630 ERROR [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0033: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration
> [Host Controller] at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:143)
> [Host Controller] at org.jboss.as.host.controller.DomainModelControllerService.boot(DomainModelControllerService.java:738)
> [Host Controller] at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:314)
> [Host Controller] at java.lang.Thread.run(Thread.java:745)
> [Host Controller]
> [Host Controller] 16:06:42,630 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0034: Host Controller boot has failed in an unrecoverable manner; exiting. See previous messages for details.
> [Host Controller] 16:06:42,631 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0178: Aborting with exit code 99
> {code}
--
[JBoss JIRA] (WFCORE-2139) domain-organization attribute causes error when starting domain
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-2139?page=com.atlassian.jira.plugi... ]
Brian Stansberry reassigned WFCORE-2139:
----------------------------------------
Assignee: Ken Wills (was: Brian Stansberry)
> domain-organization attribute causes error when starting domain
> ---------------------------------------------------------------
>
> Key: WFCORE-2139
> URL: https://issues.jboss.org/browse/WFCORE-2139
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Reporter: Claudio Miranda
> Assignee: Ken Wills
> Fix For: 3.0.0.Beta1
>
>
> There is an error when starting/reloading a domain with the domain-organization attribute set.
> {code}
> :write-attribute(name=domain-organization,value="my test organization")
> {
> "outcome" => "success",
> "result" => undefined,
> "server-groups" => undefined
> }
> reload --host=master
> {code}
> The server log throws
> {code}
> [Host Controller] 16:06:42,629 ERROR [org.jboss.as.controller] (Controller Boot Thread)
> [Host Controller]
> [Host Controller] OPVDX001: Validation error in domain.xml ---------------------------------------
> [Host Controller] |
> [Host Controller] | 1: <?xml version='1.0' encoding='UTF-8'?>
> [Host Controller] | 2:
> [Host Controller] | 3: <domain xmlns="urn:jboss:domain:5.0" domain-organization="my test organization">
> [Host Controller] | ^^^^ 'domain-organization' isn't an allowed attribute for the 'domain'
> [Host Controller] | element
> [Host Controller] |
> [Host Controller] | Attributes allowed here are: name
> [Host Controller] |
> [Host Controller] | 4:
> [Host Controller] | 5: <extensions>
> [Host Controller] | 6: <extension module="org.jboss.as.clustering.infinispan"/>
> [Host Controller] |
> [Host Controller] | The primary underlying error message was:
> [Host Controller] | > ParseError at [row,col]:[3,1]
> [Host Controller] | > Message: WFLYCTL0197: Unexpected attribute 'domain-organization'
> [Host Controller] | > encountered
> [Host Controller] |
> [Host Controller] |-------------------------------------------------------------------------------
> [Host Controller]
> [Host Controller] 16:06:42,630 ERROR [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0033: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration
> [Host Controller] at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:143)
> [Host Controller] at org.jboss.as.host.controller.DomainModelControllerService.boot(DomainModelControllerService.java:738)
> [Host Controller] at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:314)
> [Host Controller] at java.lang.Thread.run(Thread.java:745)
> [Host Controller]
> [Host Controller] 16:06:42,630 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0034: Host Controller boot has failed in an unrecoverable manner; exiting. See previous messages for details.
> [Host Controller] 16:06:42,631 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0178: Aborting with exit code 99
> {code}
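Until this is fixed, a host controller in this state cannot boot, so the attribute has to be removed from domain.xml by hand before reloading. A minimal recovery sketch, assuming GNU sed; the file path and attribute value below are stand-ins for illustration, not the actual installation layout:

```shell
# Hypothetical recovery sketch: strip the rejected attribute from domain.xml
# so the host controller can parse the configuration and boot again.
CONFIG=./domain.xml

# Stand-in for the broken configuration written by the persister.
cat > "$CONFIG" <<'EOF'
<?xml version='1.0' encoding='UTF-8'?>
<domain xmlns="urn:jboss:domain:5.0" domain-organization="my test organization">
</domain>
EOF

# Keep a backup, then remove the domain-organization="..." attribute
# from the <domain> element.
cp "$CONFIG" "$CONFIG.bak"
sed -i 's/ domain-organization="[^"]*"//' "$CONFIG"
```

After this, `reload --host=master` should no longer hit the OPVDX001 validation error (the root cause, the persister writing an attribute the parser rejects, still needs the actual fix).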
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
9 years, 3 months
[JBoss JIRA] (WFLY-7850) Configuration of JDBC persistence-store is too complicated
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-7850?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil moved JBEAP-8050 to WFLY-7850:
------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-7850 (was: JBEAP-8050)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: JMS
(was: JMS)
(was: User Experience)
Affects Version/s: (was: 7.1.0.DR9)
> Configuration of JDBC persistence-store is too complicated
> ----------------------------------------------------------
>
> Key: WFLY-7850
> URL: https://issues.jboss.org/browse/WFLY-7850
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Reporter: Jeff Mesnil
> Assignee: Jeff Mesnil
> Priority: Critical
>
> Configuring a JDBC persistence-store consists of several steps:
> # Deploy the JDBC driver
> # Configure the data source
> # Create an EAP module with the correct SQLProviderFactory implementation
> # Configure Artemis
> To perform step 3 you need to know:
> * which SQLProviderFactory should be used,
> * where it can be downloaded (and who will compile it?),
> * how to create an EAP module,
> * which dependencies the module should have.
> Additionally, in domain mode you must repeat all these steps for each EAP instance.
> Suggestion for improvement:
> SQLProviderFactories for all supported databases should be shipped with EAP, and the correct implementation should be chosen based on the database metadata.
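The suggested improvement, choosing the implementation from database metadata, could look roughly like the sketch below. The provider keys and matching rules here are assumptions for illustration, not Artemis's actual API; in a real implementation the product name would come from java.sql.DatabaseMetaData#getDatabaseProductName() on the configured data source.

```java
// A minimal sketch (assumed names, not the real Artemis factory API):
// select a bundled SQL provider from the JDBC database product name,
// so users no longer have to build and wire an EAP module themselves.
public class SqlProviderChooser {

    /**
     * Maps the product name that DatabaseMetaData#getDatabaseProductName()
     * would report to a hypothetical bundled provider key.
     */
    static String chooseProvider(String databaseProductName) {
        String name = databaseProductName.toLowerCase();
        if (name.contains("oracle")) {
            return "oracle";
        } else if (name.contains("postgresql")) {
            return "postgresql";
        } else if (name.contains("microsoft sql server")) {
            return "mssql";
        } else if (name.contains("mysql")) {
            return "mysql";
        }
        // Fall back to a generic ANSI SQL provider for unknown databases.
        return "generic";
    }

    public static void main(String[] args) {
        System.out.println(chooseProvider("PostgreSQL"));
        System.out.println(chooseProvider("SomeOtherDB"));
    }
}
```

With something like this in place, steps 3 and 4 of the list above would collapse into picking the data source, and the per-instance module creation in domain mode would disappear.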
[JBoss JIRA] (WFLY-7850) Simplify Configuration of JDBC persistence-store
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFLY-7850?page=com.atlassian.jira.plugin.... ]
Jeff Mesnil updated WFLY-7850:
------------------------------
Summary: Simplify Configuration of JDBC persistence-store (was: Configuration of JDBC persistence-store is too complicated)
> Simplify Configuration of JDBC persistence-store
> ------------------------------------------------
>
> Key: WFLY-7850
> URL: https://issues.jboss.org/browse/WFLY-7850
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Reporter: Jeff Mesnil
> Assignee: Jeff Mesnil
> Priority: Critical
>
> Configuring a JDBC persistence-store consists of several steps:
> # Deploy the JDBC driver
> # Configure the data source
> # Create an EAP module with the correct SQLProviderFactory implementation
> # Configure Artemis
> To perform step 3 you need to know:
> * which SQLProviderFactory should be used,
> * where it can be downloaded (and who will compile it?),
> * how to create an EAP module,
> * which dependencies the module should have.
> Additionally, in domain mode you must repeat all these steps for each EAP instance.
> Suggestion for improvement:
> SQLProviderFactories for all supported databases should be shipped with EAP, and the correct implementation should be chosen based on the database metadata.