[JBoss JIRA] (JBJCA-1253) Multiple JDBC drivers versions - wrong driver is used
by Enrique González Martínez (JIRA)
Enrique González Martínez created JBJCA-1253:
------------------------------------------------
Summary: Multiple JDBC drivers versions - wrong driver is used
Key: JBJCA-1253
URL: https://issues.jboss.org/browse/JBJCA-1253
Project: IronJacamar
Issue Type: Bug
Reporter: Enrique González Martínez
Assignee: Enrique González Martínez
When multiple versions of the same JDBC driver are configured, only one of them is actually used. The driver class is cached under the same string key, e.g. "jdbc:mysql", and this cache is shared among every MCF (the field LocalManagedConnectionFactory::driverCache is static).
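To illustrate the failure mode, here is a simplified sketch; only the LocalManagedConnectionFactory::driverCache name comes from the actual code, everything else is illustrative:
{code}
import java.sql.Driver;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the problem: the cache is static, so it is shared by
// every managed connection factory in the server, and it is keyed only by the
// URL prefix (e.g. "jdbc:mysql"), not by the configured driver class/version.
public class DriverCacheSketch {

    private static final Map<String, Driver> driverCache = new ConcurrentHashMap<>();

    public Driver getDriver(String url, String driverClass, ClassLoader cl) throws Exception {
        String key = url.substring(0, url.indexOf(':', "jdbc:".length())); // e.g. "jdbc:mysql"
        Driver cached = driverCache.get(key);
        if (cached != null) {
            // A second MCF configured with a different driver version still
            // gets the driver that was loaded by the first MCF.
            return cached;
        }
        Driver driver = (Driver) Class.forName(driverClass, true, cl)
                .getDeclaredConstructor().newInstance();
        driverCache.put(key, driver);
        return driver;
    }
}
{code}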
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (JBASMP-68) execute-commands does not work for module command
by Václav Chalupa (JIRA)
[ https://issues.jboss.org/browse/JBASMP-68?page=com.atlassian.jira.plugin.... ]
Václav Chalupa commented on JBASMP-68:
--------------------------------------
Is any progress planned on this issue for the WildFly Maven plugin?
> execute-commands does not work for module command
> -------------------------------------------------
>
> Key: JBASMP-68
> URL: https://issues.jboss.org/browse/JBASMP-68
> Project: JBoss AS Maven Plugins
> Issue Type: Bug
> Affects Versions: 7.6.Final
> Reporter: Alfio Gloria
> Assignee: James Perkins
>
> I'm trying to remove a JBoss module using Maven.
> {code:xml}
> <configuration>
> <jbossHome>${project.build.directory}/server/jboss72</jbossHome>
> <execute-commands>
> <commands>
> <command>module remove --slot=main --name=system.layers.base.org.jboss.weld.core</command>
> </commands>
> </execute-commands>
> </configuration>
> {code}
> execute-commands fails with the following stack trace:
> {code}
> Failed to execute goal org.jboss.as.plugins:jboss-as-maven-plugin:7.5.Final:execute-commands (install-patched-weld) on project tools: Execution install-patched-weld of goal org.jboss.as.plugins:jboss-as-maven-plugin:7.5.Final:execute-commands failed: Command execution failed for command 'module remove --slot=main --name=system.layers.base.org.jboss.weld.core'. JBOSS_HOME environment variable is not set. -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.jboss.as.plugins:jboss-as-maven-plugin:7.5.Final:execute-commands (install-patched-weld) on project tools: Execution install-patched-weld of goal org.jboss.as.plugins:jboss-as-maven-plugin:7.5.Final:execute-commands failed: Command execution failed for command 'module remove --slot=main --name=system.layers.base.org.jboss.weld.core'. JBOSS_HOME environment variable is not set.
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.PluginExecutionException: Execution install-patched-weld of goal org.jboss.as.plugins:jboss-as-maven-plugin:7.5.Final:execute-commands failed: Command execution failed for command 'module remove --slot=main --name=system.layers.base.org.jboss.weld.core'. JBOSS_HOME environment variable is not set.
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:115)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> Caused by: java.lang.IllegalArgumentException: Command execution failed for command 'module remove --slot=main --name=system.layers.base.org.jboss.weld.core'. JBOSS_HOME environment variable is not set.
> at org.jboss.as.plugin.cli.Commands.executeCommands(Commands.java:180)
> at org.jboss.as.plugin.cli.Commands.execute(Commands.java:134)
> at org.jboss.as.plugin.cli.ExecuteCommands.execute(ExecuteCommands.java:71)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> ... 20 more
> Caused by: org.jboss.as.cli.CommandLineException: JBOSS_HOME environment variable is not set.
> at org.jboss.as.cli.handlers.module.ASModuleHandler.getModulesDir(ASModuleHandler.java:362)
> at org.jboss.as.cli.handlers.module.ASModuleHandler.removeModule(ASModuleHandler.java:326)
> at org.jboss.as.cli.handlers.module.ASModuleHandler.doHandle(ASModuleHandler.java:214)
> at org.jboss.as.cli.handlers.CommandHandlerWithHelp.handle(CommandHandlerWithHelp.java:86)
> at org.jboss.as.cli.impl.CommandContextImpl.handle(CommandContextImpl.java:581)
> at org.jboss.as.plugin.cli.Commands.executeCommands(Commands.java:176)
> ... 23 more
> {code}
> (tested with the latest snapshot too)
> The problem comes from jboss-as-cli, which needs JBOSS_HOME to be set in order to add or remove modules. This is not a requirement for other commands such as deploy.
> There are some points of confusion:
> # JBOSS_HOME is set in .bashrc, but IDEs do not read .bashrc, so it can work in some cases and not in others;
> # usually there is more than one installation, or jboss-as-maven-plugin downloads the server itself;
> # the jbossHome config parameter is not taken into account;
> # what if I want to perform operations on more than one server at the same time?
> At the moment I have worked around it by patching jboss-as-cli to use a system property instead of the environment variable; I don't know how to solve this without touching jboss-as-cli.
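> For illustration, this is roughly the kind of fallback I mean (the {{jboss.home.dir}} property name here is made up for the example, not an existing jboss-as-cli option):
> {code}
> import java.io.File;
>
> // Simplified sketch: resolve the modules directory from a system property
> // first, and only fall back to the JBOSS_HOME environment variable, instead
> // of hard-requiring the environment variable.
> public class ModulesDirResolver {
>     public static File getModulesDir() {
>         String home = System.getProperty("jboss.home.dir");
>         if (home == null) {
>             home = System.getenv("JBOSS_HOME");
>         }
>         if (home == null) {
>             throw new IllegalStateException("Neither jboss.home.dir nor JBOSS_HOME is set");
>         }
>         return new File(home, "modules");
>     }
> }
> {code}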
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (JGRP-1910) MERGE3: Do not lose any members from view during a series of merges
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1910?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1910 at 3/11/15 9:49 AM:
---------------------------------------------------------
OK, so the problem is this: if we have a split like the one below:
{noformat}
0: [0|9] (3) [0, 2, 1]
1: [0|9] (3) [0, 2, 1]
2: [0|9] (3) [0, 2, 1]
3: [3|9] (2) [3, 4]
4: [3|9] (2) [3, 4]
5: [5|10] (3) [5, 6, 7]
6: [5|10] (3) [5, 6, 7]
7: [5|10] (3) [5, 6, 7]
8: [9|9] (2) [9, 8]
9: [9|9] (2) [9, 8]
{noformat}
then members send {{INFO}} messages (in {{MERGE3}}) with their view-id, e.g. 7 sends {{\[5|10\]}}.
Every member multicasts an {{INFO}} message at a random interval in \[{{MERGE3.min_interval..MERGE3.max_interval}}\].
The coordinators of the subclusters (0, 3, 5 and 9) check every {{MERGE3.check_interval}} ms if they've received different view-ids and initiate a merge if so.
The problem with your test is that it drops {{INFO}} messages (these are not retransmitted, as {{MERGE3}} sits below {{UNICASTX}} and {{NAKACKX}}), so when a coordinator starts the view-id check, it may not have received all view-ids.
Example:
{noformat}
5: I will be the merge leader. Starting the merge task. Coords: [5], views: {5=[5|10] (3) [5, 6, 7], 1=[0|9] (3) [0, 2, 1]}
9: I will be the merge leader. Starting the merge task. Coords: [9, 3], views: {4=[3|9] (2) [3, 4], 9=[9|9] (2) [9, 8], 3=[3|9] (2) [3, 4], 7=[5|11] (6) [5, 0, 2, 1, 6, 7]}
{noformat}
Here we can see that 5 and 9 were both merge leaders and started merges at almost the same time. Depending on which merge leader is able to contact more members, more than one merge will be needed to end up with a fully merged view.
E.g. if 5 succeeds in contacting itself, 3 and 0 *before 9 contacts them*, then a merge including 0, 3 and 5 will ensue. 9 will not merge with anyone else, as it started a merge of its own and therefore rejects 5's merge request.
The problem can be mitigated (but not eliminated altogether) by reducing {{min_interval}} and {{max_interval}} and increasing {{check_interval}} in {{MERGE3}}: this way, coordinators are more likely to receive {{INFO}} messages from everyone despite the message drops, which reduces the chance of multiple merge leaders being chosen.
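For illustration, the corresponding {{MERGE3}} fragment in the protocol stack could look like this (the values are only an example, not a recommendation):
{code:xml}
<!-- Illustrative values only: multicast INFO messages more often,
     but run the coordinator's view-id check less often -->
<MERGE3 min_interval="1000"
        max_interval="3000"
        check_interval="12000"/>
{code}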
> MERGE3: Do not lose any members from view during a series of merges
> -------------------------------------------------------------------
>
> Key: JGRP-1910
> URL: https://issues.jboss.org/browse/JGRP-1910
> Project: JGroups
> Issue Type: Bug
> Reporter: Radim Vansa
> Assignee: Bela Ban
> Fix For: 3.6.3
>
> Attachments: SplitMergeFailFastTest.java, SplitMergeTest.java
>
>
> When the connection between nodes is re-established, MERGE3 should merge the cluster back together. This often involves not a single MergeView but a series of such events. The problematic property of this protocol is that some of those views can lack certain members, even though those members are reachable.
> This causes problems in Infinispan, since the cache cannot be fully rebalanced before another merge arrives, and all owners of a certain segment can be gradually removed from (and added back to) the view. This is not detected as a partition but as crashed nodes -> losing all owners means data loss.
> Removing members from the view should be the role of the FDx protocols, not MERGEx.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (JGRP-1910) MERGE3: Do not lose any members from view during a series of merges
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1910?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1910 at 3/11/15 9:34 AM:
---------------------------------------------------------
No, I haven't yet had a chance to complete the PlusCal manual.
What do you mean? If you're C, have view {{\[A|5\] \[A,B,C\]}} and then get view {{\[B|6\] \[B,C\]}}, on what grounds would you make C reject it?
> MERGE3: Do not lose any members from view during a series of merges
> -------------------------------------------------------------------
>
> Key: JGRP-1910
> URL: https://issues.jboss.org/browse/JGRP-1910
> Project: JGroups
> Issue Type: Bug
> Reporter: Radim Vansa
> Assignee: Bela Ban
> Fix For: 3.6.3
>
> Attachments: SplitMergeFailFastTest.java, SplitMergeTest.java
>
>
> When the connection between nodes is re-established, MERGE3 should merge the cluster back together. This often involves not a single MergeView but a series of such events. The problematic property of this protocol is that some of those views can lack certain members, even though those members are reachable.
> This causes problems in Infinispan, since the cache cannot be fully rebalanced before another merge arrives, and all owners of a certain segment can be gradually removed from (and added back to) the view. This is not detected as a partition but as crashed nodes -> losing all owners means data loss.
> Removing members from the view should be the role of the FDx protocols, not MERGEx.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (JGRP-1910) MERGE3: Do not lose any members from view during a series of merges
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1910?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-1910:
--------------------------------
No, I haven't yet had a chance to complete the PlusCal manual.
What do you mean? If you're C, have view {{\[A|5\] \[A,B,C\]}} and then get view {{\[B|6\] \[B,C\]}}, on what grounds would you make C reject it?
> MERGE3: Do not lose any members from view during a series of merges
> -------------------------------------------------------------------
>
> Key: JGRP-1910
> URL: https://issues.jboss.org/browse/JGRP-1910
> Project: JGroups
> Issue Type: Bug
> Reporter: Radim Vansa
> Assignee: Bela Ban
> Fix For: 3.6.3
>
> Attachments: SplitMergeFailFastTest.java, SplitMergeTest.java
>
>
> When the connection between nodes is re-established, MERGE3 should merge the cluster back together. This often involves not a single MergeView but a series of such events. The problematic property of this protocol is that some of those views can lack certain members, even though those members are reachable.
> This causes problems in Infinispan, since the cache cannot be fully rebalanced before another merge arrives, and all owners of a certain segment can be gradually removed from (and added back to) the view. This is not detected as a partition but as crashed nodes -> losing all owners means data loss.
> Removing members from the view should be the role of the FDx protocols, not MERGEx.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (WFLY-4416) Cannot obtain DOMImplementationRegistry instance
by Thomas Diesler (JIRA)
[ https://issues.jboss.org/browse/WFLY-4416?page=com.atlassian.jira.plugin.... ]
Thomas Diesler commented on WFLY-4416:
--------------------------------------
Yes, same on 9.x
https://github.com/wildfly/wildfly/pull/7249
> Cannot obtain DOMImplementationRegistry instance
> ------------------------------------------------
>
> Key: WFLY-4416
> URL: https://issues.jboss.org/browse/WFLY-4416
> Project: WildFly
> Issue Type: Bug
> Components: XML Frameworks
> Affects Versions: 8.2.0.Final
> Reporter: Thomas Diesler
> Assignee: Jason Greene
>
> {code}
> testDOMImplementationRegistry(org.jboss.as.test.smoke.xml.DOMImplementationRegistryTestCase) Time elapsed: 0.09 sec <<< ERROR!
> java.lang.ClassNotFoundException: com.sun.org.apache.xerces.internal.dom.DOMXSImplementationSourceImpl from [Module "deployment.dom-registry-test:main" from Service Module Loader]
> at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:213)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:459)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:408)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:389)
> at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:134)
> at org.w3c.dom.bootstrap.DOMImplementationRegistry.newInstance(DOMImplementationRegistry.java:182)
> at org.jboss.as.test.smoke.xml.DOMImplementationRegistryTestCase.testDOMImplementationRegistry(DOMImplementationRegistryTestCase.java:52)
> {code}
> CrossRef: https://github.com/wildfly-extras/wildfly-camel/issues/391
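> For reference, the failing call boiled down to a standalone sketch (class and method names are taken from the stack trace above; the test harness around it is omitted):
> {code}
> import org.w3c.dom.bootstrap.DOMImplementationRegistry;
>
> public class DOMRegistrySketch {
>     public static void main(String[] args) throws Exception {
>         // Inside a deployment this throws ClassNotFoundException, because the
>         // JDK-internal Xerces class (DOMXSImplementationSourceImpl) is not
>         // visible to the deployment's module class loader.
>         DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance();
>         System.out.println("Obtained registry: " + registry);
>     }
> }
> {code}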
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)