[JBoss JIRA] (WFLY-5850) Can't deploy an EJB to a WildFly 10 server migrated from EAP 6.4
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-5850?page=com.atlassian.jira.plugin.... ]
Brian Stansberry commented on WFLY-5850:
----------------------------------------
There is logic in WildFly to introduce the bean validation extension and subsystem at boot if they aren't present and the ee subsystem xsd uses a version that predates the split-out bean validation subsystem.
It looks like that logic just doesn't cover enough xsd versions.
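For reference, a sketch of the two entries a clean WildFly 10 configuration carries for the split-out subsystem (names as in a default standalone.xml); if the boot-time logic doesn't kick in, adding them to the migrated configuration by hand should be a viable workaround, assuming the standard module and namespace names:
{code:title=bean validation entries (sketch)}
<!-- Sketch: extension and subsystem entries for the split-out bean validation
     subsystem, as found in a default WildFly 10 standalone.xml. These are what
     the boot-time logic is meant to introduce for old ee schema versions. -->
<extensions>
    <extension module="org.wildfly.extension.bean-validation"/>
</extensions>
<profile>
    <subsystem xmlns="urn:jboss:domain:bean-validation:1.0"/>
</profile>
{code}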
> Can't deploy an EJB to a WildFly 10 server migrated from EAP 6.4
> ----------------------------------------------------------------
>
> Key: WFLY-5850
> URL: https://issues.jboss.org/browse/WFLY-5850
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Reporter: Ladislav Thon
> Assignee: Jason Greene
> Priority: Critical
>
> I just tried to migrate a {{standalone.xml}} configuration from a clean EAP 6.4.0 to WildFly 10.0.0.CR4 and then deploy the {{server-side}} part of the {{ejb-remote}} quickstart:
> {code}
> git clone git@github.com:jboss-developer/jboss-eap-quickstarts.git
> cd jboss-eap-quickstarts/
> git checkout -b 6.4.x origin/6.4.x
> cd ejb-remote/server-side/
> mvn clean package -s ../../settings.xml
> cd target
> unzip .../jboss-eap-6.4.0.zip
> unzip .../wildfly-10.0.0.CR4.zip
> cp jboss-eap-6.4/standalone/configuration/standalone.xml wildfly-10.0.0.CR4/standalone/configuration/test.xml
> ./wildfly-10.0.0.CR4/bin/standalone.sh -c test.xml --admin-only
> # on a separate console
> ./wildfly-10.0.0.CR4/bin/jboss-cli.sh -c --controller=localhost:9999
> /subsystem=threads:remove
> /extension=org.jboss.as.threads:remove
> /subsystem=web:migrate
> shutdown
> # on the original console
> cp jboss-ejb-remote-server-side.jar wildfly-10.0.0.CR4/standalone/deployments/
> ./wildfly-10.0.0.CR4/bin/standalone.sh -c test.xml
> {code}
> What I get is this horrible stack trace:
> {code}
> 15:35:50,913 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC000001: Failed to start service jboss.deployment.unit."jboss-ejb-remote-server-side.jar".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."jboss-ejb-remote-server-side.jar".POST_MODULE: WFLYSRV0153: Failed to process phase POST_MODULE of deployment "jboss-ejb-remote-server-side.jar"
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:154) [wildfly-server-2.0.0.CR8.jar:2.0.0.CR8]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.6.Final.jar:1.2.6.Final]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.6.Final.jar:1.2.6.Final]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_66]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_66]
> Caused by: javax.validation.ValidationException: Unable to create a Configuration, because no Bean Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
> at javax.validation.Validation$GenericBootstrapImpl.configure(Validation.java:271)
> at org.hibernate.validator.internal.cdi.ValidationExtension.<init>(ValidationExtension.java:109)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [rt.jar:1.8.0_66]
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [rt.jar:1.8.0_66]
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [rt.jar:1.8.0_66]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422) [rt.jar:1.8.0_66]
> at java.lang.Class.newInstance(Class.java:442) [rt.jar:1.8.0_66]
> at org.jboss.as.weld.deployment.WeldPortableExtensions.tryRegisterExtension(WeldPortableExtensions.java:53)
> at org.jboss.as.weld.deployment.processors.WeldPortableExtensionProcessor.loadAttachments(WeldPortableExtensionProcessor.java:121)
> at org.jboss.as.weld.deployment.processors.WeldPortableExtensionProcessor.deploy(WeldPortableExtensionProcessor.java:81)
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:147) [wildfly-server-2.0.0.CR8.jar:2.0.0.CR8]
> ... 5 more
> 15:35:50,921 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "jboss-ejb-remote-server-side.jar")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.unit.\"jboss-ejb-remote-server-side.jar\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"jboss-ejb-remote-server-side.jar\".POST_MODULE: WFLYSRV0153: Failed to process phase POST_MODULE of deployment \"jboss-ejb-remote-server-side.jar\"
> Caused by: javax.validation.ValidationException: Unable to create a Configuration, because no Bean Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath."}}
> {code}
> When I deploy the same JAR to a clean install of WildFly 10.0.0.CR4, it works just fine. This suggests that something (probably the EJB subsystem?) doesn't correctly parse/serialize the legacy configuration. Or something like that.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-1265) Member can not join cluster after JVM high load
by kostd kostd (JIRA)
[ https://issues.jboss.org/browse/JGRP-1265?page=com.atlassian.jira.plugin.... ]
kostd kostd commented on JGRP-1265:
-----------------------------------
We have seen this issue in our customer's production environment.
Affects version: 3.4.5
Node environment: Linux (RHEL, kernel 2.6.32-431.1.2.el6.x86_64), WildFly 8.2.0.Final, Hibernate 4.3.7.Final, Infinispan 6.0.2.Final, JGroups 3.4.5.Final
Each node's JVM has a 40 GB heap. While a heap dump is being created on the coordinator node (the issue is not reproduced after a full GC, only during heap-dump creation), the second node receives a truncated cluster view message.
After the dump is created, no new, fully rebuilt cluster view message arrives, and we end up in a situation where one node thinks both nodes are in the cluster while the other node thinks it is the only member.
In the logs from both hosts we can see that some ISPN000094 messages are missing:
{code}
N1 -- the first node of cluster, ip1
N2 -- the second one, ip2
heap on each node is 40Gb
dump creation takes about ~30 s.
12:xx N1 started first and isCoordinator at startup moment
13:5x N2 begin starting
//both nodes can see each other:
13:59:42,354 INFO N1 [JGroupsTransport] (Incoming-16,shared=tcp-for-l2) ISPN000094: Received new cluster view: [ip1[0]/hibernate|3] (2) [ip1[0]/hibernate, ip2[0]/hibernate]
13:59:42,422 INFO N2 [JGroupsTransport] (ServerService Thread Pool -- 57) ISPN000094: Received new cluster view: [ip1[0]/hibernate|3] (2) [ip1[0]/hibernate, ip2[0]/hibernate]
14:07 N2 started
14:30 heap-dump on N1!!!
//N2 receives a truncated cluster view while N1 is creating the heap dump
14:31:20,390 INFO N2 [JGroupsTransport] (Incoming-4,shared=tcp-for-l2) ISPN000094: Received new cluster view: [ip2[0]/hibernate|4] (1) [ip2[0]/hibernate]
// hibernate|5 is missing!!! Why? Was the dump created during state transfer?
15:01 heap-dump on N1!!!
//N2 receives a truncated cluster view while N1 is creating the heap dump
15:01:21,928 INFO N2 [JGroupsTransport] (Incoming-6,shared=tcp-for-l2) ISPN000094: Received new cluster view: [ip2[0]/hibernate|6] (1) [ip2[0]/hibernate]
// hibernate|7 is missing!!!
19:25 heap-dump on N1!!!
//N2 receives a truncated cluster view while N1 is creating the heap dump
19:26:16,928 INFO N2 [JGroupsTransport] (Incoming-19,shared=tcp-for-l2) ISPN000094: Received new cluster view: [ip2[0]/hibernate|8] (1) [ip2[0]/hibernate]
//hibernate|9 is missing!!!
19:42:31,221 INFO N1 [JGroupsTransport] (Incoming-3,shared=tcp-for-l2) ISPN000094: Received new cluster view: [ip1[0]/hibernate|10] (1) [ip1[0]/hibernate]
{code}
{code:title=configuration}
-- hibernate-l2 cache:
<cache-container name="hibernate" default-cache="local-query" module="org.hibernate">
<transport lock-timeout="60000" stack="tcp-for-l2"/>
<local-cache name="local-query">
<transaction mode="NONE" locking="OPTIMISTIC"/>
<eviction strategy="LIRS" max-entries="500000"/>
<expiration max-idle="3600000" lifespan="3600000" interval="60000"/>
<!-- TASK-64293 -->
<locking isolation="READ_COMMITTED"/>
</local-cache>
<invalidation-cache name="entity" mode="SYNC">
<transaction mode="NON_XA" locking="OPTIMISTIC"/>
<eviction strategy="LIRS" max-entries="500000"/>
<expiration max-idle="3600000" lifespan="3600000" interval="60000"/>
<!-- TASK-64293 -->
<locking isolation="READ_COMMITTED"/>
</invalidation-cache>
<replicated-cache name="timestamps" mode="ASYNC">
<transaction mode="NONE" locking="OPTIMISTIC"/>
<eviction strategy="NONE"/>
<!-- TASK-64293 -->
<locking isolation="READ_COMMITTED"/>
</replicated-cache>
</cache-container>
-- jgroups subsystem:
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
<stack name="tcp-for-l2">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="TCPPING">
<property name="initial_hosts">${argus.jgroups-l2.tcpping.initial_hosts}</property>
<property name="port_range">0</property>
</protocol>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
</subsystem>
{code}
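As a side note, the ~30 s stop-the-world pause during heap-dump creation is well above the default FD suspicion window (roughly timeout * max_tries), so the paused coordinator is bound to be suspected and dropped from the view each time. A mitigation sketch for the {{tcp-for-l2}} stack, with purely illustrative values (the property names are the standard JGroups 3.4.x FD / VERIFY_SUSPECT ones); it only avoids the exclusion, it does not explain the missing view messages:
{code:title=failure detection tuning (sketch)}
<!-- Illustrative values only: widen the failure-detection window so a ~30 s
     heap-dump or GC pause does not get the paused member suspected.
     FD suspects a member after roughly timeout * max_tries without traffic. -->
<protocol type="FD">
    <property name="timeout">10000</property>
    <property name="max_tries">6</property>
</protocol>
<protocol type="VERIFY_SUSPECT">
    <!-- final check before the suspicion is acted upon -->
    <property name="timeout">5000</property>
</protocol>
{code}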
> Member can not join cluster after JVM high load
> -----------------------------------------------
>
> Key: JGRP-1265
> URL: https://issues.jboss.org/browse/JGRP-1265
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.11
> Environment: linux, kernel 2.6.18
> Reporter: Victor N
> Assignee: Bela Ban
> Fix For: 2.12
>
> Attachments: jgroups-tcp.xml
>
>
> In our production system I can see that a node disappears from the cluster if its server is heavily loaded. That by itself is expected, but the node never comes back to the cluster even after its server is working normally again, without load. I can easily reproduce the problem in 2 cases:
> 1) by taking a memory dump on the node: jmap -dump:format=b,file=dump.hprof <pid>
> Since we have 8-16 GB of RAM, this operation takes a long time and blocks the JVM, so other members exclude this node from the view.
> 2) GC (garbage collection) - if the JVM is constantly doing GC (and can barely work)
> In both situations the stuck node never reappears in the cluster (even after 1 h). Below are more details.
> We have 12 nodes in our cluster; the problematic node is "gate5".
> View on gate5: [gate11.mydomain|869] [gate11.mydomain, gate2.mydomain, gate6.mydomain, gate7.mydomain, gate12.mydomain, gate4.mydomain, gate3.mydomain, gate10.mydomain, gate8.mydomain, gate9.mydomain, gate14.mydomain, gate5.mydomain]
> View on gate11 (coordinator): [gate11.mydomain|870] [gate11.mydomain, gate2.mydomain, gate6.mydomain, gate7.mydomain, gate12.mydomain, gate4.mydomain, gate3.mydomain, gate10.mydomain, gate8.mydomain, gate9.mydomain, gate14.mydomain]
> The coordinator (gate11) is sending GET_MBRS_REQ periodically - I see the requests on gate5, but I do NOT see any response to them!
> All JGroups threads are alive, not dead (I took stack traces).
> Another strange thing is that the problematic gate5 sends messages to other nodes and even receives messages from SOME of them! How is that possible? I double-checked that ALL other nodes have view_id=870 (without gate5).
> The only explanation I have is a race condition that occurs (as always) under high load.
> In normal situations, such as a temporary network failure, everything works perfectly and gate5 rejoins the cluster.
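(A partial mitigation sketch, not from the original report: since the excluded node ends up as a singleton partition, a shorter MERGE2 polling interval makes partition coordinators look for each other more often, so a recovered node is merged back sooner. Values are illustrative only; min_interval/max_interval are standard MERGE2 properties. This does not address the missing GET_MBRS_RSP, which is the actual bug here.)
{code:title=merge tuning (sketch, plain JGroups config as in jgroups-tcp.xml)}
<!-- Illustrative only: poll for other partition coordinators more frequently
     so a stuck singleton view is merged back sooner once the node recovers. -->
<MERGE2 min_interval="5000" max_interval="15000"/>
{code}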
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-2845) On WildFly clean shutdown: failed marshalling rsp (UnsuccessfulResponse)
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-2845?page=com.atlassian.jira.plugin.... ]
Radoslav Husar commented on WFLY-2845:
--------------------------------------
Is this still an issue?
> On WildFly clean shutdown: failed marshalling rsp (UnsuccessfulResponse)
> ------------------------------------------------------------------------
>
> Key: WFLY-2845
> URL: https://issues.jboss.org/browse/WFLY-2845
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 8.0.0.CR1
> Environment: RHEL6 x86_64; Oracle JDK 7
> Reporter: Michal Karm Babacek
> Assignee: Paul Ferraro
> Labels: clustering
> Fix For: 10.1.0.Final
>
>
> I've got two standalone-ha instances running on one box. Each of these instances has the {{<distributable/>}} [clusterbench|https://github.com/Karm/clusterbench/tree/simplified-and-pure] app deployed. When I start shutting the instances down via a CLI command, one of them writes the following to its log:
> {noformat}
> 08:59:33,341 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-4,shared=udp) ISPN000094: Received new cluster view: [jboss-eap-8.0/web|4] (1) [jboss-eap-8.0/web]
> 08:59:33,368 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000082: Stopping the RpcDispatcher
> 08:59:33,379 ERROR [org.jgroups.blocks.RequestCorrelator] (remote-thread-0) failed marshalling rsp (UnsuccessfulResponse): java.lang.NullPointerException
> 08:59:33,380 ERROR [org.jgroups.blocks.RequestCorrelator] (remote-thread-1) failed marshalling rsp (UnsuccessfulResponse): java.lang.NullPointerException
> 08:59:33,392 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015950: WildFly 8.0.0.Final-SNAPSHOT "WildFly" stopped in 558ms
> {noformat}
> The problem is non-deterministic and does not always occur.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-1926) ForkChannel needs to call init() and start() correctly on all added protocols
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1926?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1926:
---------------------------
Attachment: bla.java
Modified version which uses RaftHandle to modify state. Works with FORK.
> ForkChannel needs to call init() and start() correctly on all added protocols
> -----------------------------------------------------------------------------
>
> Key: JGRP-1926
> URL: https://issues.jboss.org/browse/JGRP-1926
> Project: JGroups
> Issue Type: Bug
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.6.4
>
> Attachments: bla.java, bla.java
>
>
> The attached program doesn't work because `RAFT.init()` tries to find `GMS` on the main stack, but `GMS` hasn't yet been created. Same issue with adding an `AddressGenerator` in `RAFT.init()`.
> Is `start()` called at all on protocols added by `FORK` on top of the main channel? Is `init()` called at the right time?
> The attached bla program reproduces the issue (in jgroups-raft).
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)