[JBoss JIRA] (WFLY-6789) 'xa-datasource-properties' is not found among the supported properties
by Stefano Maestri (JIRA)
[ https://issues.jboss.org/browse/WFLY-6789?page=com.atlassian.jira.plugin.... ]
Stefano Maestri reassigned WFLY-6789:
-------------------------------------
Assignee: Stefano Maestri (was: Jesper Pedersen)
> 'xa-datasource-properties' is not found among the supported properties
> ----------------------------------------------------------------------
>
> Key: WFLY-6789
> URL: https://issues.jboss.org/browse/WFLY-6789
> Project: WildFly
> Issue Type: Bug
> Components: JCA
> Affects Versions: 10.0.0.Final
> Reporter: Kylin Soong
> Assignee: Stefano Maestri
> Priority: Critical
>
> WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
> {code}
> throws an exception:
> {code}
> 'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
> {code}
> Executing without xa-datasource-properties, as in previous versions,
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
> {code}
> fails with:
> {code}
> {
> "outcome" => "failed",
> "failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
> "rolled-back" => true
> }
> {code}
> Creating the datasource with the xa-data-source add command is a workaround:
> {code}
> xa-data-source add --name=MariaDBXADS --driver-name=mariadb-xa --jndi-name=java:jboss/datasources/MariaDBXADS --user-name=jdv_user --password=jdv_pass --use-java-context=true --xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost]
> {code}
> but most users expect the tree-structure CLI and the xa-data-source command to be compatible.
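For reference, a tree-structure alternative that may work (a sketch only, not verified on 10.0.0.Final): each XA datasource property is addressable as an xa-datasource-properties child resource, so the datasource add and the per-property adds can be grouped in a batch, which runs as a single composite operation and so should satisfy the WFLYJCA0069 at-least-one-property check:
{code}
batch
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
/subsystem=datasources/xa-data-source=MariaDBXADS/xa-datasource-properties=DatabaseName:add(value=products)
/subsystem=datasources/xa-data-source=MariaDBXADS/xa-datasource-properties=PortNumber:add(value=3306)
/subsystem=datasources/xa-data-source=MariaDBXADS/xa-datasource-properties=ServerName:add(value=localhost)
run-batch
{code}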
[JBoss JIRA] (WFLY-6773) Provide ability to start/stop Data Source creation without restart of server
by Kylin Soong (JIRA)
[ https://issues.jboss.org/browse/WFLY-6773?page=com.atlassian.jira.plugin.... ]
Kylin Soong commented on WFLY-6773:
-----------------------------------
Another inconvenience caused by the server restart needed to make sure a data source is actually removed or added. Execute the following CLI commands:
{code}
xa-data-source remove --name=MariaDBXADS // the output hints that a reload is necessary
:reload() // after the reload MariaDBXADS should be gone; checking standalone.xml confirms it was removed
/subsystem=datasources/jdbc-driver=mariadb-xa:remove() // with MariaDBXADS removed, removing the driver should succeed, but it fails:
{
"outcome" => "failed",
"failure-description" => "WFLYCTL0171: Removing services has lead to unsatisfied dependencies:
Service jboss.jdbc-driver.mariadb-xa was depended upon by service org.wildfly.data-source.MariaDBXADS, service jboss.driver-demander.java:jboss/datasources/MariaDBXADS",
"rolled-back" => true,
"response-headers" => {"process-state" => "reload-required"}
}
{code}
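For reference, the kind of explicit restart header discussed in the proposal quoted below looks like this in the CLI (illustrative only; per that discussion it does not currently take effect for datasources, which are managed under the "all-services" paradigm):
{code}
// standard operation header that "resource-services" style resources honor;
// shown only to illustrate the mechanism being requested
/subsystem=datasources/xa-data-source=MariaDBXADS:remove(){allow-resource-service-restart=true}
{code}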
> Provide ability to start/stop Data Source creation without restart of server
> ----------------------------------------------------------------------------
>
> Key: WFLY-6773
> URL: https://issues.jboss.org/browse/WFLY-6773
> Project: WildFly
> Issue Type: Enhancement
> Components: JCA
> Affects Versions: 10.0.0.Final
> Reporter: Ramesh Reddy
> Assignee: Stefano Maestri
>
> Currently in WF 10, when a data source is created or removed using the CLI, the "reload-status" is set to "requires restart", based on how the resource is expected to be managed in WF.
> h3. Why do we need it?
> In Teiid, it is common practice to add and remove data sources dynamically, and restarting the server severely affects performance and usability.
> * Delete/re-add is part of Teiid's workflow for managing data sources, and when we have to restart we lose the whole state of the virtual database and must re-establish the runtime status. For example, all existing sessions are killed.
> * Runtime environments are often shared; a restart could kill other users' tasks in flight, leaving them hanging with errors.
> * Every time a data source starts we fetch metadata from the source, which is a very expensive operation.
> * The multi-source feature lets users dynamically bring sources in and out as they show up on their dashboard; it would not be possible to support this feature across restarts.
> * This is a change of behavior from earlier versions of EAP; our users and customers rely on this capability.
> As far as the Teiid project is concerned, we consider this a regression in WF and thus a bug.
> h3. Proposed Solution
> [~brian.stansberry] pointed to the WF management practices described in this document: https://docs.jboss.org/author/display/WFLY10/Admin+Guide#AdminGuide-Apply...
> Based on this, the conclusion is that Data Sources are developed under the "all-services" paradigm rather than the "resource-services" paradigm, where an explicit header from the client stating whether or not to restart could avoid having to "reload" the server when a DS is added or removed. We understand the nature of service dependencies in WF and how this can affect other dependent services, but we verified that Teiid, as designed, will not have those side effects. Since these sources are effectively defined exclusively for Teiid, they should not interfere with others. Also, since the request is explicit, it should not affect current behavior.
> h3. Workarounds Considered
> Since this is highly dependent on configuration-based data source creation, we could opt for deployment-based data source creation (-ds.xml); however, GSS is quick to dismiss this as it is not a supported feature.
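For reference, a minimal deployment-style descriptor of the kind mentioned above (a sketch; the connection details are placeholders, the namespace follows typical WildFly quickstarts, and as noted this path is not supported by GSS):
{code}
<?xml version="1.0" encoding="UTF-8"?>
<!-- deployed as e.g. mariadb-ds.xml -->
<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
    <datasource jndi-name="java:jboss/datasources/MariaDBDS" pool-name="MariaDBDS">
        <connection-url>jdbc:mariadb://localhost:3306/products</connection-url>
        <driver>mariadb</driver>
        <security>
            <user-name>jdv_user</user-name>
            <password>jdv_pass</password>
        </security>
    </datasource>
</datasources>
{code}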
[JBoss JIRA] (WFLY-6789) 'xa-datasource-properties' is not found among the supported properties
by Kylin Soong (JIRA)
[ https://issues.jboss.org/browse/WFLY-6789?page=com.atlassian.jira.plugin.... ]
Kylin Soong updated WFLY-6789:
------------------------------
Description:
WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
{code}
throws an exception:
{code}
'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
{code}
Executing without xa-datasource-properties, as in previous versions,
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
{code}
fails with:
{code}
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
{code}
Creating the datasource with the xa-data-source add command is a workaround:
{code}
xa-data-source add --name=MariaDBXADS --driver-name=mariadb-xa --jndi-name=java:jboss/datasources/MariaDBXADS --user-name=jdv_user --password=jdv_pass --use-java-context=true --xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost]
{code}
but most users expect the tree-structure CLI and the xa-data-source command to be compatible.
was:
WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
{code}
throws an exception:
{code}
'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
{code}
Executing without xa-datasource-properties, as in previous versions,
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
{code}
fails with:
{code}
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
{code}
> 'xa-datasource-properties' is not found among the supported properties
> ----------------------------------------------------------------------
>
> Key: WFLY-6789
> URL: https://issues.jboss.org/browse/WFLY-6789
> Project: WildFly
> Issue Type: Bug
> Components: JCA
> Affects Versions: 10.0.0.Final
> Reporter: Kylin Soong
> Assignee: Jesper Pedersen
> Priority: Critical
>
> WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
> {code}
> throws an exception:
> {code}
> 'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
> {code}
> Executing without xa-datasource-properties, as in previous versions,
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
> {code}
> fails with:
> {code}
> {
> "outcome" => "failed",
> "failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
> "rolled-back" => true
> }
> {code}
> Creating the datasource with the xa-data-source add command is a workaround:
> {code}
> xa-data-source add --name=MariaDBXADS --driver-name=mariadb-xa --jndi-name=java:jboss/datasources/MariaDBXADS --user-name=jdv_user --password=jdv_pass --use-java-context=true --xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost]
> {code}
> but most users expect the tree-structure CLI and the xa-data-source command to be compatible.
[JBoss JIRA] (WFLY-6789) 'xa-datasource-properties' is not found among the supported properties
by Kylin Soong (JIRA)
[ https://issues.jboss.org/browse/WFLY-6789?page=com.atlassian.jira.plugin.... ]
Kylin Soong updated WFLY-6789:
------------------------------
Description:
WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
{code}
throws an exception:
{code}
'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
{code}
Executing without xa-datasource-properties, as in previous versions,
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
{code}
fails with:
{code}
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
{code}
was:
WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not accepted among the XA datasource properties. For example, executing
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
{code}
throws an exception:
{code}
'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
{code}
Executing without xa-datasource-properties, as in previous versions,
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
{code}
fails with:
{code}
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
{code}
> 'xa-datasource-properties' is not found among the supported properties
> ----------------------------------------------------------------------
>
> Key: WFLY-6789
> URL: https://issues.jboss.org/browse/WFLY-6789
> Project: WildFly
> Issue Type: Bug
> Components: JCA
> Affects Versions: 10.0.0.Final
> Reporter: Kylin Soong
> Assignee: Jesper Pedersen
> Priority: Critical
>
> WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not registered as an XA datasource attribute in the management resource definition. For example, executing
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
> {code}
> throws an exception:
> {code}
> 'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
> {code}
> Executing without xa-datasource-properties, as in previous versions,
> {code}
> /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
> {code}
> fails with:
> {code}
> {
> "outcome" => "failed",
> "failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
> "rolled-back" => true
> }
> {code}
[JBoss JIRA] (WFLY-6789) 'xa-datasource-properties' is not found among the supported properties
by Kylin Soong (JIRA)
Kylin Soong created WFLY-6789:
---------------------------------
Summary: 'xa-datasource-properties' is not found among the supported properties
Key: WFLY-6789
URL: https://issues.jboss.org/browse/WFLY-6789
Project: WildFly
Issue Type: Bug
Components: JCA
Affects Versions: 10.0.0.Final
Reporter: Kylin Soong
Assignee: Jesper Pedersen
Priority: Critical
WildFly 10 requires xa-datasource-properties, but xa-datasource-properties is not accepted among the XA datasource properties. For example, executing
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true, xa-datasource-properties=[DatabaseName=>products, PortNumber=>3306, ServerName=>localhost])
{code}
throws an exception:
{code}
'xa-datasource-properties' is not found among the supported properties: [allocation-retry, allocation-retry-wait-millis, allow-multiple-users, background-validation, background-validation-millis, blocking-timeout-wait-millis, capacity-decrementer-class, capacity-decrementer-properties, capacity-incrementer-class, capacity-incrementer-properties, check-valid-connection-sql, connectable, connection-listener-class, connection-listener-property, driver-name, enabled, enlistment-trace, exception-sorter-class-name, exception-sorter-properties, flush-strategy, idle-timeout-minutes, initial-pool-size, interleaving, jndi-name, max-pool-size, mcp, min-pool-size, new-connection-sql, no-recovery, no-tx-separate-pool, pad-xid, password, pool-fair, pool-prefill, pool-use-strict-min, prepared-statements-cache-size, query-timeout, reauth-plugin-class-name, reauth-plugin-properties, recovery-password, recovery-plugin-class-name, recovery-plugin-properties, recovery-security-domain, recovery-username, same-rm-override, security-domain, set-tx-query-timeout, share-prepared-statements, spy, stale-connection-checker-class-name, stale-connection-checker-properties, statistics-enabled, track-statements, tracking, transaction-isolation, url-delimiter, url-property, url-selector-strategy-class-name, use-ccm, use-fast-fail, use-java-context, use-try-lock, user-name, valid-connection-checker-class-name, valid-connection-checker-properties, validate-on-match, wrap-xa-resource, xa-datasource-class, xa-resource-timeout]
{code}
Executing without xa-datasource-properties, as in previous versions,
{code}
/subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
{code}
fails with:
{code}
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
{code}
[JBoss JIRA] (WFLY-6781) Wildfly cluster's failover functionality doesn't work as expected
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6781?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla commented on WFLY-6781:
----------------------------------------
The cluster we have is a domain-managed cluster: domain.xml and host.xml are configured on Node1, and only host.xml is configured on Node2. The jgroups subsystem is defined in domain.xml for the "ha" profile.
Regarding our application, we have two components, RC.war and SL.war. JMS is configured on SL. Only RC is clustered; SL is not.
Node 1 has two server instances, RC and SL:
<servers>
<server name="server-host1-RC" group="main-server-group" auto-start="true">
<jvm name="default">
<heap size="2048m" max-size="2048m"/>
<permgen size="512m" max-size="512m"/>
<jvm-options>
<!--<option value="-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"/>-->
<option value="-XX:CompileCommand=exclude,com/newscale/bfw/signon/filters,AuthenticationFilter"/>
<option value="-XX:CompileCommand=exclude,org/apache/xml/dtm/ref/sax2dtm/SAX2DTM,startElement"/>
<option value="-XX:CompileCommand=exclude,org/exolab/castor/xml/Marshaller,marshal"/>
<option value="-XX:CompileCommand=exclude,org/exolab/castor/xml/Marshaller,marshal"/>
<option value="-XX:CompileCommand=exclude,org/apache/xpath/compiler/XPathParser,UnionExpr"/>
</jvm-options>
</jvm>
<socket-bindings socket-binding-group="ha-sockets" port-offset="0"/>
</server>
<server name="server-host1-SL" group="other-server-group" auto-start="true">
<jvm name="default">
<heap size="2048m" max-size="2048m"/>
<permgen size="512m" max-size="512m"/>
<jvm-options>
<option value="-server"/>
</jvm-options>
</jvm>
<socket-bindings socket-binding-group="standard-sockets" port-offset="0"/>
</server>
</servers>
Node 2 has only one server instance, which runs RC:
<servers>
<server name="server-host2-RC" group="main-server-group" auto-start="true">
<jvm name="default">
<heap size="2048m" max-size="2048m"/>
<permgen size="512m" max-size="512m"/>
<jvm-options>
<!--<option value="-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"/>-->
<option value="-XX:CompileCommand=exclude,com/newscale/bfw/signon/filters,AuthenticationFilter"/>
<option value="-XX:CompileCommand=exclude,org/apache/xml/dtm/ref/sax2dtm/SAX2DTM,startElement"/>
<option value="-XX:CompileCommand=exclude,org/exolab/castor/xml/Marshaller,marshal"/>
<option value="-XX:CompileCommand=exclude,org/exolab/castor/xml/Marshaller,marshal"/>
<option value="-XX:CompileCommand=exclude,org/apache/xpath/compiler/XPathParser,UnionExpr"/>
</jvm-options>
</jvm>
<socket-bindings socket-binding-group="ha-sockets" port-offset="0"/>
</server>
</servers>
When I say it's not working as expected when we test failover, I mean that the communication between RC and SL is broken.
RC communicates remotely with SL using the URL below:
http-remoting://<ip address of SL which is Node1>:6080/
Just a note: everything works properly in production as long as we don't disable the network, power off, etc.
Let me know if you need any other info.
Thanks,
Preeta
> Wildfly cluster's failover functionality doesn't work as expected
> -----------------------------------------------------------------
>
> Key: WFLY-6781
> URL: https://issues.jboss.org/browse/WFLY-6781
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Paul Ferraro
> Priority: Blocker
>
> Following are the testing scenarios we ran and their outcomes:
> 1. Disabling the network on a VM to test failover – not working on either Linux or Windows.
> 2. Powering off a VM using the VMware client to test failover – works on Linux but not on Windows.
> 3. Using Ctrl+C to stop services on a node to test failover – works on both Linux and Windows.
> 4. Stopping the server running on a node/VM using the Admin Console to test failover – works on both Linux and Windows.
> The jgroups subsystem configuration we have in domain.xml is below:
> <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
> <stack name="udp">
> <transport type="UDP" socket-binding="jgroups-udp"/>
> <protocol type="PING"/>
> <protocol type="MERGE3"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
> <protocol type="FD_ALL"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> <stack name="tcp">
> <transport type="TCP" socket-binding="jgroups-tcp"/>
> <protocol type="MPING" socket-binding="jgroups-mping"/>
> <protocol type="MERGE2"/>
> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
> <protocol type="FD"/>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS"/>
> <protocol type="MFC"/>
> <protocol type="FRAG2"/>
> <protocol type="RSVP"/>
> </stack>
> </subsystem>
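Scenarios 1 and 2 depend on the failure-detection protocols in the stacks above (FD_ALL/FD plus VERIFY_SUSPECT) timing out, so their timeouts are worth checking; a sketch in the same subsystem syntax, with illustrative values that are assumptions rather than anything taken from this issue:
{code}
<protocol type="FD_ALL">
    <!-- illustrative: heartbeat every 3s, suspect a member after 15s of silence -->
    <property name="interval">3000</property>
    <property name="timeout">15000</property>
</protocol>
{code}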
[JBoss JIRA] (JGRP-2086) FD_SOCK keeps trying to create a new socket to the killed server
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2086?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2086:
---------------------------
Fix Version/s: 3.6.11
4.0
> FD_SOCK keeps trying to create a new socket to the killed server
> ----------------------------------------------------------------
>
> Key: JGRP-2086
> URL: https://issues.jboss.org/browse/JGRP-2086
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.3
> Environment: JDG 6.6.0 (jgroups-3.6.3.Final-redhat-4.jar)
> Reporter: Osamu Nagano
> Assignee: Bela Ban
> Fix For: 3.6.11, 4.0
>
>
> In most cases FD_SOCK can detect a killed server immediately. But for an unknown reason, FD_SOCK keeps trying to create a new socket to the killed server. As a consequence, installing a new cluster view is delayed until FD_ALL is triggered.
> m04_n007_server.log shows the behaviour. There are 28 nodes (4 machines (m03, ..., m06) with 7 nodes (n001, ..., n007) on each), and all nodes on m03 are killed at the same time at 15:07:34,543. FD_SOCK keeps trying to connect to a killed node, saying "socket address for m03_n001/clustered could not be fetched, retrying".
> {noformat}
> [n007] 15:07:39,543 TRACE [org.jgroups.protocols.FD_SOCK] (Timer-8,shared=udp) m04_n007/clustered: broadcasting SUSPECT message (suspected_mbrs=[m03_n005/clustered, m03_n007/clustered])
> [n007] 15:07:39,544 TRACE [org.jgroups.protocols.FD_SOCK] (INT-20,shared=udp) m04_n007/clustered: received SUSPECT message from m04_n007/clustered: suspects=[m03_n005/clustered, m03_n007/clustered]
> [n007] 15:07:39,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:40,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:41,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:42,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:43,547 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> ...
> [n007] 15:10:53,700 DEBUG [org.jgroups.protocols.FD_ALL] (Timer-26,shared=udp) haven't received a heartbeat from m03_n005/clustered for 200059 ms, adding it to suspect list
> {noformat}
> From the TRACE log, you can see that the FD_SOCK address cache has only 23 members.
> {noformat}
> [n007] 14:40:50,471 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: got cache from m03_n005/clustered: cache is {
> m04_n006/clustered=172.20.66.34:9945,
> m05_n005/clustered=172.20.66.35:9938,
> m06_n004/clustered=172.20.66.36:9931,
> m03_n007/clustered=172.20.66.33:9952,
> m05_n001/clustered=172.20.66.35:9910,
> m06_n005/clustered=172.20.66.36:9938,
> m05_n006/clustered=172.20.66.35:9945,
> m03_n005/clustered=172.20.66.33:9938,
> m05_n004/clustered=172.20.66.35:9931,
> m04_n003/clustered=172.20.66.34:9924,
> m04_n007/clustered=172.20.66.34:9952,
> m05_n002/clustered=172.20.66.35:9917,
> m05_n003/clustered=172.20.66.35:9924,
> m04_n004/clustered=172.20.66.34:9931,
> m06_n001/clustered=172.20.66.36:9910,
> m06_n007/clustered=172.20.66.36:9952,
> m04_n005/clustered=172.20.66.34:9938,
> m04_n001/clustered=172.20.66.34:9910,
> m05_n007/clustered=172.20.66.35:9952,
> m06_n002/clustered=172.20.66.36:9917,
> m06_n006/clustered=172.20.66.36:9945,
> m04_n002/clustered=172.20.66.34:9917,
> m06_n003/clustered=172.20.66.36:9924}
> {noformat}
> Meanwhile pingable_mbrs has all 28 members, taken from the currently available cluster view.
> {noformat}
> [n007] 14:40:50,472 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n005/clustered, pingable_mbrs=[
> m03_n005/clustered,
> m03_n007/clustered,
> m03_n001/clustered,
> m03_n002/clustered,
> m03_n003/clustered,
> m03_n004/clustered,
> m03_n006/clustered,
> m06_n001/clustered,
> m06_n002/clustered,
> m06_n003/clustered,
> m06_n004/clustered,
> m06_n005/clustered,
> m06_n006/clustered,
> m06_n007/clustered,
> m05_n001/clustered,
> m05_n002/clustered,
> m05_n003/clustered,
> m05_n004/clustered,
> m05_n005/clustered,
> m05_n006/clustered,
> m05_n007/clustered,
> m04_n001/clustered,
> m04_n002/clustered,
> m04_n003/clustered,
> m04_n004/clustered,
> m04_n005/clustered,
> m04_n006/clustered,
> m04_n007/clustered]
> {noformat}
[JBoss JIRA] (JGRP-2086) FD_SOCK keeps trying to create a new socket to the killed server
by Osamu Nagano (JIRA)
Osamu Nagano created JGRP-2086:
----------------------------------
Summary: FD_SOCK keeps trying to create a new socket to the killed server
Key: JGRP-2086
URL: https://issues.jboss.org/browse/JGRP-2086
Project: JGroups
Issue Type: Bug
Affects Versions: 3.6.3
Environment: JDG 6.6.0 (jgroups-3.6.3.Final-redhat-4.jar)
Reporter: Osamu Nagano
Assignee: Bela Ban
In most cases FD_SOCK can detect a killed server immediately. But for an unknown reason, FD_SOCK keeps trying to create a new socket to the killed server. As a consequence, installing a new cluster view is delayed until FD_ALL is triggered.
m04_n007_server.log shows the behaviour. There are 28 nodes (4 machines (m03, ..., m06) with 7 nodes (n001, ..., n007) on each), and all nodes on m03 are killed at the same time at 15:07:34,543. FD_SOCK keeps trying to connect to a killed node, saying "socket address for m03_n001/clustered could not be fetched, retrying".
{noformat}
[n007] 15:07:39,543 TRACE [org.jgroups.protocols.FD_SOCK] (Timer-8,shared=udp) m04_n007/clustered: broadcasting SUSPECT message (suspected_mbrs=[m03_n005/clustered, m03_n007/clustered])
[n007] 15:07:39,544 TRACE [org.jgroups.protocols.FD_SOCK] (INT-20,shared=udp) m04_n007/clustered: received SUSPECT message from m04_n007/clustered: suspects=[m03_n005/clustered, m03_n007/clustered]
[n007] 15:07:39,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
[n007] 15:07:40,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
[n007] 15:07:41,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
[n007] 15:07:42,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
[n007] 15:07:43,547 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
...
[n007] 15:10:53,700 DEBUG [org.jgroups.protocols.FD_ALL] (Timer-26,shared=udp) haven't received a heartbeat from m03_n005/clustered for 200059 ms, adding it to suspect list
{noformat}
From the TRACE log, you can see that the FD_SOCK address cache has only 23 members.
{noformat}
[n007] 14:40:50,471 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: got cache from m03_n005/clustered: cache is {
m04_n006/clustered=172.20.66.34:9945,
m05_n005/clustered=172.20.66.35:9938,
m06_n004/clustered=172.20.66.36:9931,
m03_n007/clustered=172.20.66.33:9952,
m05_n001/clustered=172.20.66.35:9910,
m06_n005/clustered=172.20.66.36:9938,
m05_n006/clustered=172.20.66.35:9945,
m03_n005/clustered=172.20.66.33:9938,
m05_n004/clustered=172.20.66.35:9931,
m04_n003/clustered=172.20.66.34:9924,
m04_n007/clustered=172.20.66.34:9952,
m05_n002/clustered=172.20.66.35:9917,
m05_n003/clustered=172.20.66.35:9924,
m04_n004/clustered=172.20.66.34:9931,
m06_n001/clustered=172.20.66.36:9910,
m06_n007/clustered=172.20.66.36:9952,
m04_n005/clustered=172.20.66.34:9938,
m04_n001/clustered=172.20.66.34:9910,
m05_n007/clustered=172.20.66.35:9952,
m06_n002/clustered=172.20.66.36:9917,
m06_n006/clustered=172.20.66.36:9945,
m04_n002/clustered=172.20.66.34:9917,
m06_n003/clustered=172.20.66.36:9924}
{noformat}
Meanwhile pingable_mbrs has all 28 members, taken from the currently available cluster view.
{noformat}
[n007] 14:40:50,472 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n005/clustered, pingable_mbrs=[
m03_n005/clustered,
m03_n007/clustered,
m03_n001/clustered,
m03_n002/clustered,
m03_n003/clustered,
m03_n004/clustered,
m03_n006/clustered,
m06_n001/clustered,
m06_n002/clustered,
m06_n003/clustered,
m06_n004/clustered,
m06_n005/clustered,
m06_n006/clustered,
m06_n007/clustered,
m05_n001/clustered,
m05_n002/clustered,
m05_n003/clustered,
m05_n004/clustered,
m05_n005/clustered,
m05_n006/clustered,
m05_n007/clustered,
m04_n001/clustered,
m04_n002/clustered,
m04_n003/clustered,
m04_n004/clustered,
m04_n005/clustered,
m04_n006/clustered,
m04_n007/clustered]
{noformat}
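For context, the knobs involved on a JGroups 3.6 stack would be FD_SOCK's coordinator cache fetch and FD_ALL's heartbeat timeout; a sketch with illustrative values (assumptions, not taken from this report):
{code}
<!-- illustrative: bound how long the FD_SOCK pinger spends fetching the
     coordinator's socket-address cache, and how quickly FD_ALL suspects a
     member that has stopped sending heartbeats -->
<FD_SOCK get_cache_timeout="1000" num_tries="3"/>
<FD_ALL interval="4000" timeout="20000"/>
{code}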
[JBoss JIRA] (WFLY-6402) EJBs accessible too early (spec violation)
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-6402?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on WFLY-6402:
-----------------------------------------------
Brad Maxwell <bmaxwell(a)redhat.com> changed the Status of [bug 1350355|https://bugzilla.redhat.com/show_bug.cgi?id=1350355] from POST to ASSIGNED
> EJBs accessible too early (spec violation)
> ------------------------------------------
>
> Key: WFLY-6402
> URL: https://issues.jboss.org/browse/WFLY-6402
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Fedor Gavrilov
> Labels: downstream_dependency
> Attachments: auto-test-reproducer.zip
>
>
> {code}
> EJB 3.1 spec, section 4.8.1:
> "If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
> {code}
> WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
> This Jira ticket covers two PRs on WFLY:
> https://github.com/wildfly/wildfly/pull/8824 (already merged)
> and https://github.com/wildfly/wildfly/pull/8989
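For illustration, a minimal shape of the scenario (not from the ticket; the class and member names are hypothetical):
{code}
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Per EJB 3.1 section 4.8.1, no external client request may reach any EJB in
// the application before this eager singleton's initialization completes.
@Singleton
@Startup
public class BootstrapBean {

    private volatile boolean ready;

    @PostConstruct
    void init() {
        // long-running startup work; the bug reported here is that other EJBs
        // in the same application could already receive external calls while
        // this method was still running
        ready = true;
    }

    public boolean isReady() {
        return ready;
    }
}
{code}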