[JBoss JIRA] (WFLY-6702) Infinispan Singleton silently dies in wildfly 9 cluster setup
by Divey Gupta (JIRA)
[ https://issues.jboss.org/browse/WFLY-6702?page=com.atlassian.jira.plugin.... ]
Divey Gupta updated WFLY-6702:
------------------------------
Description:
I am using WildFly 9 in a 3-node cluster (standalone-full-ha.xml) and use a singleton service for some of our operations. Sometimes, under heavy load/traffic, the singleton service silently dies without any error or exception; there is no exception such as "Failed to get quorum.."
Even when the load (number of concurrent requests) on WildFly drops, it does not recover, i.e. the singleton is not reactivated on any node. The only way to start the singleton again is to restart WildFly manually.
Below is the Infinispan and JGroups configuration from my standalone-full-ha.xml.
{code:xml}
<stack name="tcp">
    <transport socket-binding="jgroups-tcp" type="TCP"/>
    <protocol type="TCPPING">
        <property name="initial_hosts">10.0.1.32[7600],10.0.1.38[7600],10.0.1.39[7600]</property>
        <property name="port_range">0</property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol socket-binding="jgroups-tcp-fd" type="FD_SOCK"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS">
        <property name="join_timeout">5000</property>
    </protocol>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
    <protocol type="RSVP"/>
</stack>
{code}
....
{code:xml}
<subsystem xmlns="urn:jboss:domain:infinispan:3.0">
    <cache-container aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server" name="server">
        <transport lock-timeout="120000"/>
        <replicated-cache mode="ASYNC" name="default">
            <state-transfer enabled="true" timeout="300000"/>
            <transaction locking="OPTIMISTIC" mode="BATCH"/>
        </replicated-cache>
    </cache-container>
    <cache-container default-cache="session" module="org.wildfly.clustering.web.infinispan" name="web">
        <transport lock-timeout="120000"/>
        <replicated-cache mode="ASYNC" name="session">
            <state-transfer enabled="true" timeout="300000"/>
            <locking isolation="READ_COMMITTED"/>
            <transaction locking="OPTIMISTIC" mode="BATCH"/>
        </replicated-cache>
    </cache-container>
</subsystem>
{code}
Following is the Java snippet we use to activate and start the singleton in the cluster:
{code}
public class SingletonServiceActivator implements ServiceActivator {

    public static final ServiceName SINGLETON_SERVICE_NAME =
            ServiceName.JBOSS.append("ha", "singleton");
    private static final String CONTAINER_NAME = "server";
    private static final String CACHE_NAME = "default";

    @Override
    public void activate(ServiceActivatorContext context) throws ServiceRegistryException {
        int quorum = 2;
        InjectedValue<ServerEnvironment> env = new InjectedValue<>();
        SingletonServiceClient srv = new SingletonServiceClient(env);
        ServiceController<?> factoryService = context.getServiceRegistry()
                .getRequiredService(SingletonServiceBuilderFactory.SERVICE_NAME.append(CONTAINER_NAME, CACHE_NAME));
        SingletonServiceBuilderFactory factory = (SingletonServiceBuilderFactory) factoryService.getValue();
        SingletonElectionPolicy policy = new SimpleSingletonElectionPolicy(0);
        factory.createSingletonServiceBuilder(SINGLETON_SERVICE_NAME, srv)
                .requireQuorum(quorum)
                .electionPolicy(policy)
                .build(new DelegatingServiceContainer(context.getServiceTarget(), context.getServiceRegistry()))
                .addDependency(ServerEnvironmentService.SERVICE_NAME, ServerEnvironment.class, env)
                .setInitialMode(ServiceController.Mode.ACTIVE)
                .install();
    }

    public final class SingletonServiceClient extends AbstractService<Serializable> {

        private final Value<ServerEnvironment> env;

        public SingletonServiceClient(Value<ServerEnvironment> env) {
            this.env = env;
        }

        @Override
        public void start(StartContext startContext) {
            log("SingletonService started");
            // do work
        }

        @Override
        public void stop(StopContext stopContext) {
            log("SingletonService stopped"); // THIS NEVER GETS CALLED
            // stop
        }
    }
}
{code}
Is there something wrong in the config, or in the way I am activating and starting the singleton?
I suspected a connectivity issue between the cluster nodes that prevents it from reaching the quorum needed to start the singleton. As an experiment I changed the quorum to 1, but I still sometimes see the issue under heavy load.
I would really appreciate any help or suggestions on this issue.
Also, is there a way to monitor the state of the singleton from application code, and to trigger it from application code?
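One way to approach the monitoring question is a periodic watchdog. The sketch below is hypothetical and uses only plain JDK types: in a real WildFly deployment the "is it up?" probe would query the MSC service registry for the singleton's ServiceController and inspect its state, and the restart hook would re-trigger the service; the names `SingletonWatchdog` and `ensureRunning` are illustrative, not part of any WildFly API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical watchdog sketch. The probe (AtomicBoolean) and restart hook
// (Runnable) stand in for a real MSC ServiceController state check and a
// real re-activation call, so the pattern stays self-contained.
final class SingletonWatchdog {

    /**
     * Checks whether the singleton is up and, if not, invokes the restart
     * hook. Returns true when a restart was triggered, false otherwise.
     */
    public static boolean ensureRunning(AtomicBoolean running, Runnable restart) {
        if (running.get()) {
            return false; // healthy, nothing to do
        }
        restart.run();    // e.g. re-install or re-activate the service
        return true;
    }

    public static void main(String[] args) {
        AtomicBoolean up = new AtomicBoolean(false);
        boolean restarted = ensureRunning(up, () -> up.set(true));
        System.out.println("restarted=" + restarted + " up=" + up.get());
    }
}
```

Scheduling such a check (e.g. from an EJB timer) would at least surface the silent death in the logs; whether application code can safely re-install the MSC singleton service is not documented for WildFly 9, so treat the restart hook as an assumption.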
> Infinispan Singleton silently dies in wildfly 9 cluster setup
> -------------------------------------------------------------
>
> Key: WFLY-6702
> URL: https://issues.jboss.org/browse/WFLY-6702
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 9.0.1.Final
> Environment: linux,64gb RAM
> Reporter: Divey Gupta
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: cluster, infinispan, singleton, wildfly
>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6703) Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6703?page=com.atlassian.jira.plugin.... ]
Radoslav Husar closed WFLY-6703.
--------------------------------
Resolution: Rejected
Ah, I accidentally had activemq in a server profile, which adds
{code} 10: WELD%AbstractBuiltInBean%org.wildfly.extension.messaging-activemq:main.additionalClasses%HttpSession{code}
to the index.
> Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6703
> URL: https://issues.jboss.org/browse/WFLY-6703
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 10.0.0.Final
> Reporter: Radoslav Husar
> Assignee: Tomas Remes
>
> {noformat}
> 13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> Expected hash: 1931672237
> Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
> 0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
> 1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
> 2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
> 3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
> 4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
> 5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
> 6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
> 7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
> 8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
> 9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
> 10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
> at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
> at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
> at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
> at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
> 13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
> 13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
> 13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6703) Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6703?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-6703:
---------------------------------
Description:
{noformat}
13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
Expected hash: 1931672237
Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
{noformat}
was:
13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
Expected hash: 1931672237
Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
> Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6703
> URL: https://issues.jboss.org/browse/WFLY-6703
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 10.0.0.Final
> Reporter: Radoslav Husar
> Assignee: Tomas Remes
>
> {noformat}
> 13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> Expected hash: 1931672237
> Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
> 0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
> 1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
> 2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
> 3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
> 4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
> 5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
> 6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
> 7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
> 8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
> 9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
> 10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
> at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
> at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
> at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
> at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
> 13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
> 13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
> 13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
> {noformat}
--
[JBoss JIRA] (WFLY-6703) Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6703?page=com.atlassian.jira.plugin.... ]
Radoslav Husar commented on WFLY-6703:
--------------------------------------
Hm, also reproducible on 10.0.0.Final.
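For context on why the check fires even with "identical applications": the error above prints a numbered index (0..12) and a single hash, which suggests the consistency check compares an order-sensitive hash over the ordered list of bean identifiers, so two nodes that discover the same beans in a different order would report different hashes. The following is a minimal, self-contained sketch of that failure mode; the class and method names are illustrative assumptions, not Weld's actual implementation.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of an order-sensitive identifier-index hash.
// List.hashCode() combines element hashes in order, so the same set of
// identifiers indexed in a different order yields a different hash --
// which is exactly the symptom WELD-000227 reports across nodes.
public class IndexHashSketch {
    static int indexHash(List<String> beanIds) {
        return beanIds.hashCode(); // order-sensitive by definition
    }

    public static void main(String[] args) {
        List<String> node1 = Arrays.asList(
                "WELD%SessionBean%LocalStatefulSB",
                "WELD%ManagedBean%SessionScopedCdiSerialBean");
        List<String> node2 = Arrays.asList(
                "WELD%ManagedBean%SessionScopedCdiSerialBean",
                "WELD%SessionBean%LocalStatefulSB");
        // Same beans, different discovery order: hashes disagree
        System.out.println(indexHash(node1) == indexHash(node2)); // false
    }
}
```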
> Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6703
> URL: https://issues.jboss.org/browse/WFLY-6703
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 10.0.0.Final
> Reporter: Radoslav Husar
> Assignee: Tomas Remes
>
> 13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> Expected hash: 1931672237
> Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
> 0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
> 1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
> 2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
> 3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
> 4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
> 5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
> 6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
> 7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
> 8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
> 9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
> 10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
> at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
> at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
> at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
> at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
> 13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
> 13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
> 13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
--
[JBoss JIRA] (WFLY-6703) Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6703?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-6703:
---------------------------------
Affects Version/s: 10.0.0.Final
> Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6703
> URL: https://issues.jboss.org/browse/WFLY-6703
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 10.0.0.Final
> Reporter: Radoslav Husar
> Assignee: Tomas Remes
>
> 13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> Expected hash: 1931672237
> Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
> 0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
> 1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
> 2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
> 3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
> 4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
> 5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
> 6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
> 7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
> 8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
> 9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
> 10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
> at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
> at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
> at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
> at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
> 13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
> 13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
> 13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
--
[JBoss JIRA] (WFLY-6671) ajp connection hangs if a post HTTP request header contains 'Transfer-Encoding: chunked'
by river shen (JIRA)
[ https://issues.jboss.org/browse/WFLY-6671?page=com.atlassian.jira.plugin.... ]
river shen commented on WFLY-6671:
----------------------------------
Thanks for the quick response.
Does the attached war work as expected with Apache 2.4.16?
> ajp connection hangs if a post HTTP request header contains 'Transfer-Encoding: chunked'
> -----------------------------------------------------------------------------------------
>
> Key: WFLY-6671
> URL: https://issues.jboss.org/browse/WFLY-6671
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Environment: Apache HTTP server 2.2.22 with mod_jk
> Reporter: river shen
> Assignee: Stuart Douglas
> Attachments: service-1.0-SNAPSHOT.war, src.zip, stacks.txt, standalone.xml, workers.properties
>
>
> When upgrading from JBoss 7 to WildFly 10, we observed the following behavior:
> If an HTTP POST contains 'Transfer-Encoding: chunked' and 'Content-Type: application/octet-stream' in its headers, a servlet that handles it will hang forever (until the client drops the connection) if it calls HttpServletRequest.getInputStream() and tries to read the whole content of the returned InputStream. The InputStream's read() method blocks forever at the end of the stream instead of returning -1.
> It only happens when the request is routed by the Apache web server through AJP; it does not happen if the client talks to WildFly directly through its 8080 HTTP port.
> We have attached a minimal web application that reproduces this issue.
> Also attached are the standalone.xml and the Apache configuration file.
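The blocking pattern described above boils down to a drain loop that reads the request body until read() returns -1. Here is a self-contained sketch of that loop (the attached war's actual code may differ); per the report, over AJP with a chunked request body the final read() blocks indefinitely instead of returning -1, so the loop never exits.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the servlet's read pattern: drain the request body until
// end-of-stream. With a well-behaved stream, drain() returns the total
// byte count; per WFLY-6671, over AJP the last read() hangs at EOF.
public class DrainBody {
    static int drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // reported to hang here at EOF over AJP
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a well-behaved stream: drain() terminates and counts bytes
        System.out.println(drain(new ByteArrayInputStream(new byte[10]))); // 10
    }
}
```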
--
[JBoss JIRA] (WFLY-6703) Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6703?page=com.atlassian.jira.plugin.... ]
Radoslav Husar edited comment on WFLY-6703 at 6/13/16 11:48 AM:
----------------------------------------------------------------
No modifications, just current vanilla WildFly master (10.1.0.Final-SNAPSHOT); commit hash b81f09dc10943e77f87cd321551e1a9b3f0eec9b as of now.
Just start up 2 nodes, the second with a port offset, deploy clusterbench, start 1 client (no delay between requests), and shut down one server (Ctrl+C). Possibly not even intermittent; it happens almost always.
was (Author: rhusar):
No modifications, just vanilla WildFly master (10.1.0.Final-SNAPSHOT) probably commit f47d481da4bab239153e3a9fc29df6044faf3867. Started up 2 nodes, deployed clusterbench, fail one server.
> Failover intermittently fails with WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6703
> URL: https://issues.jboss.org/browse/WFLY-6703
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Reporter: Radoslav Husar
> Assignee: Tomas Remes
>
> 13:57:41,855 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/session: org.jboss.weld.exceptions.IllegalStateException: WELD-000227: Bean identifier index inconsistency detected - the distributed container probably does not work with identical applications
> Expected hash: 1931672237
> Current index: BeanIdentifierIndex [hash=1185198536, indexed=13]
> 0: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-ejb.jar%HttpSession
> 1: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war%HttpSession
> 2: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-granular.war%HttpSession
> 3: WELD%AbstractBuiltInBean%/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war%HttpSession
> 4: WELD%AbstractBuiltInBean%clusterbench-ee7.ear%HttpSession
> 5: WELD%AbstractBuiltInBean%com.sun.jsf-impl:main.additionalClasses%HttpSession
> 6: WELD%AbstractBuiltInBean%org.hibernate.validator.cdi:main.additionalClasses%HttpSession
> 7: WELD%AbstractBuiltInBean%org.jberet.jberet-core:main.additionalClasses%HttpSession
> 8: WELD%AbstractBuiltInBean%org.jboss.as.jsf:main.additionalClasses%HttpSession
> 9: WELD%AbstractBuiltInBean%org.jboss.resteasy.resteasy-cdi:main.additionalClasses%HttpSession
> 10: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-default.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 11: WELD%ManagedBean%clusterbench-ee7.ear|/content/clusterbench-ee7.ear/clusterbench-ee7-web-passivating.war|org.jboss.test.clusterbench.web.cdi.SessionScopedCdiSerialBean|null|false
> 12: WELD%SessionBean%LocalStatefulSB%org.jboss.test.clusterbench.ejb.stateful.LocalStatefulSB
> at org.jboss.weld.context.http.HttpSessionContextImpl.checkBeanIdentifierIndexConsistency(HttpSessionContextImpl.java:101)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:47)
> at org.jboss.weld.context.http.HttpSessionContextImpl.associate(HttpSessionContextImpl.java:23)
> at org.jboss.weld.servlet.HttpContextLifecycle.requestInitialized(HttpContextLifecycle.java:237)
> at org.jboss.weld.servlet.WeldInitialListener.requestInitialized(WeldInitialListener.java:152)
> at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:792)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 13:57:41,941 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel server: [node1|2] (1) [node1]
> 13:57:41,942 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel web: [node1|2] (1) [node1]
> 13:57:41,943 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel hibernate: [node1|2] (1) [node1]
> 13:57:41,944 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-6,ee,node1) ISPN000094: Received new cluster view for channel ejb: [node1|2] (1) [node1]
--