[JBoss JIRA] (JBJCA-1321) Statement.cancel() is not invoked until the statement is completed
by Lin Gao (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1321?page=com.atlassian.jira.plugin... ]
Lin Gao commented on JBJCA-1321:
--------------------------------
PRs have been sent: https://github.com/ironjacamar/ironjacamar/pull/526, https://github.com/ironjacamar/ironjacamar/pull/527
The status should be changed after the PRs are merged.
> Statement.cancel() is not invoked until the statement is completed
> ------------------------------------------------------------------
>
> Key: JBJCA-1321
> URL: https://issues.jboss.org/browse/JBJCA-1321
> Project: IronJacamar
> Issue Type: Bug
> Components: JDBC
> Affects Versions: WildFly/IronJacamar 1.3.4.Final, 1.2.7.Final
> Reporter: lorenzo benvenuti
> Assignee: Lin Gao
> Priority: Critical
> Fix For: WildFly/IronJacamar 1.3.5.Final, 1.2.8.Final
>
>
> Hi,
> In our application we use the {{Statement.cancel()}} method to stop long-running queries; in WildFly 9.0.2 this does not work, because {{cancel()}} is synchronized using a lock that is not released until the query has finished executing. In {{WrappedStatement}}:
> {code:java}
> public void cancel() throws SQLException
> {
>    if (doLocking)
>       lock();
>    try
>    {
>       /* ... */
> {code}
> This behaviour seems to have changed in version 1.2.5.Final of ironjacamar-jdbc; in 1.2.4.Final, {{WrappedStatement.cancel}} does not try to obtain the lock.
> Perhaps I'm missing something, but it seems strange that you have to wait for a statement to complete in order to cancel it.
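> A minimal, self-contained sketch of the pattern (illustrative only, not IronJacamar code): because cancel() must take the same lock that is held for the whole execution, it cannot run until the "query" finishes.
> {code:java}
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.locks.ReentrantLock;
>
> // Hypothetical wrapper mimicking the locking pattern shown above.
> public class LockingWrapperDemo
> {
>    private final ReentrantLock lock = new ReentrantLock(true);
>
>    public void execute() throws InterruptedException
>    {
>       lock.lock();
>       try
>       {
>          TimeUnit.SECONDS.sleep(10); // stands in for a long-running query
>       }
>       finally
>       {
>          lock.unlock();
>       }
>    }
>
>    public void cancel()
>    {
>       lock.lock(); // blocks until execute() has released the lock
>       try
>       {
>          System.out.println("cancel() only ran after the query completed");
>       }
>       finally
>       {
>          lock.unlock();
>       }
>    }
>
>    public static void main(String[] args) throws Exception
>    {
>       LockingWrapperDemo demo = new LockingWrapperDemo();
>       Thread worker = new Thread(() -> {
>          try { demo.execute(); } catch (InterruptedException ignored) { }
>       });
>       worker.start();
>       Thread.sleep(500);
>       demo.cancel(); // returns only after the 10-second "query" is done
>       worker.join();
>    }
> }
> {code}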
> Thank you,
> lorenzo
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JBJCA-1321) Statement.cancel() is not invoked until the statement is completed
by Lin Gao (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1321?page=com.atlassian.jira.plugin... ]
Lin Gao reopened JBJCA-1321:
----------------------------
> Statement.cancel() is not invoked until the statement is completed
> ------------------------------------------------------------------
>
> Key: JBJCA-1321
> URL: https://issues.jboss.org/browse/JBJCA-1321
> Project: IronJacamar
> Issue Type: Bug
> Components: JDBC
> Affects Versions: WildFly/IronJacamar 1.3.4.Final, 1.2.7.Final
> Reporter: lorenzo benvenuti
> Assignee: Lin Gao
> Priority: Critical
> Fix For: WildFly/IronJacamar 1.3.5.Final, 1.2.8.Final
>
>
> Hi,
> In our application we use the {{Statement.cancel()}} method to stop long-running queries; in WildFly 9.0.2 this does not work, because {{cancel()}} is synchronized using a lock that is not released until the query has finished executing. In {{WrappedStatement}}:
> {code:java}
> public void cancel() throws SQLException
> {
>    if (doLocking)
>       lock();
>    try
>    {
>       /* ... */
> {code}
> This behaviour seems to have changed in version 1.2.5.Final of ironjacamar-jdbc; in 1.2.4.Final, {{WrappedStatement.cancel}} does not try to obtain the lock.
> Perhaps I'm missing something, but it seems strange that you have to wait for a statement to complete in order to cancel it.
> Thank you,
> lorenzo
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6671) ajp connection hangs if a post HTTP request header contains 'Transfer-Encoding: chunked'
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6671?page=com.atlassian.jira.plugin.... ]
Stuart Douglas commented on WFLY-6671:
--------------------------------------
It worked when using mod_proxy_ajp (as that is what I had set up locally). When I have some time I will also test it with mod_jk, but I probably won't have time until next week.
> ajp connection hangs if a post HTTP request header contains 'Transfer-Encoding: chunked'
> -----------------------------------------------------------------------------------------
>
> Key: WFLY-6671
> URL: https://issues.jboss.org/browse/WFLY-6671
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Environment: Apache HTTP server 2.2.22 with mod_jk
> Reporter: river shen
> Assignee: Stuart Douglas
> Attachments: service-1.0-SNAPSHOT.war, src.zip, stacks.txt, standalone.xml, workers.properties
>
>
> When upgrading from JBoss 7 to WildFly 10, we observed the following behavior:
> If an HTTP POST contains 'Transfer-Encoding: chunked' and 'Content-Type: application/octet-stream' in its headers, a servlet handling it will hang forever (until the client drops the connection) if it calls HttpServletRequest.getInputStream() and tries to read the whole content of the returned InputStream. The InputStream's read() method blocks forever at the end of the stream instead of returning -1.
> It only happens when the request is routed by the Apache web server through AJP; it does not happen if the client talks to WildFly directly through its 8080 HTTP port.
> We have attached a minimal web application that reproduces this issue.
> Also attached are the standalone.xml and the Apache configuration file.
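> For illustration, a read loop of the following shape shows the pattern that blocks (a sketch only; the attached application is the actual reproducer and may differ):
> {code:java}
> import java.io.IOException;
> import java.io.InputStream;
> import javax.servlet.ServletException;
> import javax.servlet.annotation.WebServlet;
> import javax.servlet.http.HttpServlet;
> import javax.servlet.http.HttpServletRequest;
> import javax.servlet.http.HttpServletResponse;
>
> @WebServlet("/upload")
> public class ChunkedReadServlet extends HttpServlet
> {
>    @Override
>    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
>          throws ServletException, IOException
>    {
>       long total = 0;
>       byte[] buffer = new byte[8192];
>       InputStream in = req.getInputStream();
>       int read;
>       // With the behaviour described above, read() never returns -1 when the
>       // chunked request arrives over AJP, so this loop never terminates.
>       while ((read = in.read(buffer)) != -1)
>       {
>          total += read;
>       }
>       resp.getWriter().println("read " + total + " bytes");
>    }
> }
> {code}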
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-1106) Better handling of subsequent changes once the server is placed into reload-required
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1106?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFCORE-1106:
-------------------------------------
Git Pull Request: https://github.com/wildfly/wildfly-core/pull/1566 (was: https://github.com/wildfly/wildfly-core/pull/1475)
> Better handling of subsequent changes once the server is placed into reload-required
> ------------------------------------------------------------------------------------
>
> Key: WFCORE-1106
> URL: https://issues.jboss.org/browse/WFCORE-1106
> Project: WildFly Core
> Issue Type: Enhancement
> Components: Domain Management
> Reporter: Brian Stansberry
> Assignee: Brian Stansberry
>
> When the handler for a configuration change operation determines that it cannot effect the change to the current runtime services, it places the process into "reload-required" state. From the moment this occurs until the reload is performed, the configuration model is inconsistent with the runtime services.
> This can lead to problems when, prior to reload, the user makes further configuration changes. Those changes can succeed in Stage.MODEL, since the change is valid given the current state of the configuration model, but then when the handler attempts to update the runtime the changes fail because the runtime services are in a different state. Some common scenarios:
> 1) User removes a resource, triggering reload required. Then they re-add the resource, which fails with a DuplicateServiceException since the service from the original version of the resource hasn't been removed yet.
> 2) User makes some other config change to a resource which can't be effected immediately, so the server is put into reload-required. The user then adds another resource that depends on the services from the first resource, and that add fails because the runtime service from the first resource is not in the expected state.
> A naive fix would be, once the process goes into the reload-required state, to stop making any further runtime changes for steps that alter the persistent config. (Runtime changes for ops that don't touch persistent config would be ok, e.g. reload itself, or runtime-only ops like popping a message off a JMS queue.)
> The problem with the naive approach is config changes that could take immediate effect no longer will. This could break existing scripts, or just be annoying in general. For example, a server is in reload-required state but is still running. Then the user wants to add a logger category or change the level of an existing one in order to get some diagnostic info. The logging change would not affect the runtime until the reload is done, forcing a reload to get the diagnostic data.
> Stuart Douglas had an excellent suggestion today of looking into tying this in to capabilities and requirements. So, for example:
> 1) An op targeted at resource foo=bar causes the process to go into reload-required.
> 2) The kernel detects this and finds the registration for the foo=bar resource type, and sees that the resource provides capability org.wildfly.foo.bar.
> 3) The kernel records in the capability registry that org.wildfly.foo.bar is now "reload-required".
> 4) Thereafter, for any op that changes the model and then adds a runtime step, the kernel:
> a) finds the registration for the resource type associated with that op's target address
> b) finds any capabilities provided by the resource type
> c) looks for direct or transitive requirements for those capabilities that are "reload-required"
> d) if found, the runtime step is not executed, and instead the "server-requires-reload" response-header is added.
> The effect is that the set of ops whose runtime changes are skipped is reduced to those associated with capabilities that put the server into reload-required. Unrelated ops, e.g. the logging changes mentioned above, are unaffected. (A rough sketch of this bookkeeping follows the fine points below.)
> Some fine points:
> 1) The restart-required and reload-required states need to be tracked separately. The information regarding any restart-required capabilities needs to survive a reload.
> 2) The information that a capability is reload/restart-required needs to survive the removal of the capability. This allows the remove+add scenario to work. The remove op removes the capability, but the fact it is still present in the runtime is tracked, so when the add comes in no runtime changes are made.
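> A rough sketch of the bookkeeping this implies (purely illustrative; the class and method names are hypothetical and do not correspond to actual WildFly Core code):
> {code:java}
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.HashSet;
> import java.util.Map;
> import java.util.Set;
>
> // Hypothetical tracker for capabilities whose runtime no longer matches the
> // persistent config. Ops touching such capabilities (directly or transitively)
> // would skip their runtime step and get the server-requires-reload header instead.
> public class ReloadRequiredCapabilityTracker
> {
>    private final Set<String> reloadRequired = new HashSet<>();   // survives removal of the capability
>    private final Set<String> restartRequired = new HashSet<>();  // must also survive a reload
>    private final Map<String, Set<String>> requirements = new HashMap<>(); // capability -> required capabilities
>
>    // Step 3: record that a capability's runtime state is now stale.
>    public void markReloadRequired(String capability)
>    {
>       reloadRequired.add(capability);
>    }
>
>    public void markRestartRequired(String capability)
>    {
>       restartRequired.add(capability);
>    }
>
>    public void recordRequirement(String dependent, String required)
>    {
>       requirements.computeIfAbsent(dependent, k -> new HashSet<>()).add(required);
>    }
>
>    // Step 4: should the runtime step for an op targeting a resource that
>    // provides these capabilities be skipped?
>    public boolean requiresReload(Set<String> capabilitiesOfTargetResource)
>    {
>       for (String capability : capabilitiesOfTargetResource)
>       {
>          if (isStale(capability, new HashSet<>()))
>          {
>             return true;
>          }
>       }
>       return false;
>    }
>
>    // Direct or transitive requirement on a reload/restart-required capability.
>    private boolean isStale(String capability, Set<String> visited)
>    {
>       if (!visited.add(capability))
>       {
>          return false;
>       }
>       if (reloadRequired.contains(capability) || restartRequired.contains(capability))
>       {
>          return true;
>       }
>       for (String required : requirements.getOrDefault(capability, Collections.<String>emptySet()))
>       {
>          if (isStale(required, visited))
>          {
>             return true;
>          }
>       }
>       return false;
>    }
> }
> {code}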
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6706) Domain mode stack trace on session invalidate
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-6706?page=com.atlassian.jira.plugin.... ]
Brian Stansberry commented on WFLY-6706:
----------------------------------------
Very strange. It's very odd to have this kind of behavior difference between a standalone server and a domain, as they run the same code. The differences are how the process is launched (which could affect VM settings and system properties) and a slightly different location for the server's writable data (data, log, tmp dirs), but it's hard to see how those are relevant here.
[~swd847] does this ring any bells for you?
> Domain mode stack trace on session invalidate
> ---------------------------------------------
>
> Key: WFLY-6706
> URL: https://issues.jboss.org/browse/WFLY-6706
> Project: WildFly
> Issue Type: Feature Request
> Components: Domain Management, Web (Undertow)
> Affects Versions: 9.0.2.Final
> Reporter: Robert Smith
> Assignee: Brian Stansberry
>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6702) Infinispan Singleton silently dies in wildfly 9 cluster setup
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6702?page=com.atlassian.jira.plugin.... ]
Paul Ferraro closed WFLY-6702.
------------------------------
Resolution: Out of Date
This is due to WFLY-4748, which was fixed in WF10.
> Infinispan Singleton silently dies in wildfly 9 cluster setup
> -------------------------------------------------------------
>
> Key: WFLY-6702
> URL: https://issues.jboss.org/browse/WFLY-6702
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 9.0.1.Final
> Environment: linux,64gb RAM
> Reporter: Divey Gupta
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: cluster, infinispan, singleton, wildfly
>
> I am using WildFly 9 in a 3-node cluster setup (standalone-full-ha.xml) and use a singleton service for some of our operations. Sometimes (during heavy load/traffic) we see that the singleton service silently dies without giving any error or exception. There was no exception such as "Failed to get quorum..".
> When the load (number of concurrent requests) on WildFly decreases, it still does not recover, i.e. the singleton is not reactivated on any node. The only way to start the singleton again is to restart WildFly manually.
> Following is my standalone-full-ha.xml config for Infinispan and JGroups.
> {code:xml}
> <stack name="tcp">
>     <transport socket-binding="jgroups-tcp" type="TCP"/>
>     <protocol type="TCPPING">
>         <property name="initial_hosts">10.0.1.32[7600],10.0.1.38[7600],10.0.1.39[7600]</property>
>         <property name="port_range">0</property>
>     </protocol>
>     <protocol type="MERGE2"/>
>     <protocol socket-binding="jgroups-tcp-fd" type="FD_SOCK"/>
>     <protocol type="FD"/>
>     <protocol type="VERIFY_SUSPECT"/>
>     <protocol type="pbcast.NAKACK2"/>
>     <protocol type="UNICAST3"/>
>     <protocol type="pbcast.STABLE"/>
>     <protocol type="pbcast.GMS">
>         <property name="join_timeout">5000</property>
>     </protocol>
>     <protocol type="MFC"/>
>     <protocol type="FRAG2"/>
>     <protocol type="RSVP"/>
> </stack>
> {code}
> ....
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:infinispan:3.0">
>     <cache-container aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server" name="server">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="default">
>             <state-transfer enabled="true" timeout="300000"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
>     <cache-container default-cache="session" module="org.wildfly.clustering.web.infinispan" name="web">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="session">
>             <state-transfer enabled="true" timeout="300000"/>
>             <locking isolation="READ_COMMITTED"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
> </subsystem>
> {code}
> Following is a Java code snippet that we use to activate and start the singleton in the cluster:
> {code:java}
> public class SingletonServiceActivator implements ServiceActivator {
>
>     public static final ServiceName SINGLETON_SERVICE_NAME =
>             ServiceName.JBOSS.append("ha", "singleton");
>     private static final String CONTAINER_NAME = "server";
>     private static final String CACHE_NAME = "default";
>
>     @Override
>     public void activate(ServiceActivatorContext context) throws ServiceRegistryException {
>         int quorum = 2;
>         InjectedValue<ServerEnvironment> env = new InjectedValue<>();
>         SingletonServiceClient srv = new SingletonServiceClient(env);
>         ServiceController<?> factoryService = context.getServiceRegistry().getRequiredService(
>                 SingletonServiceBuilderFactory.SERVICE_NAME.append(CONTAINER_NAME, CACHE_NAME));
>         SingletonServiceBuilderFactory factory = (SingletonServiceBuilderFactory) factoryService.getValue();
>         SingletonElectionPolicy policy = new SimpleSingletonElectionPolicy(0);
>         factory.createSingletonServiceBuilder(SINGLETON_SERVICE_NAME, srv)
>                 .requireQuorum(quorum)
>                 .electionPolicy(policy)
>                 .build(new DelegatingServiceContainer(context.getServiceTarget(), context.getServiceRegistry()))
>                 .addDependency(ServerEnvironmentService.SERVICE_NAME, ServerEnvironment.class, env)
>                 .setInitialMode(ServiceController.Mode.ACTIVE)
>                 .install();
>     }
>
>     public final class SingletonServiceClient extends AbstractService<Serializable> {
>
>         private final Value<ServerEnvironment> env;
>
>         public SingletonServiceClient(Value<ServerEnvironment> env) {
>             this.env = env;
>         }
>
>         @Override
>         public void start(StartContext startContext) {
>             log("SingletonService started");
>             // do work
>         }
>
>         @Override
>         public void stop(StopContext stopContext) {
>             log("SingletonService stopped"); // THIS NEVER GETS CALLED
>             // stop
>         }
>     }
> }
> {code}
> Is there something wrong in the config or in the way I am trying to activate and start the singleton?
> I thought there could be some connectivity issue between the nodes in the cluster, because of which it is unable to get the desired quorum to start the singleton. Just to experiment, I changed the quorum to 1, but I still sometimes see this issue during heavy load.
> I would really appreciate some help or suggestions on this issue.
> Also, is there a way to monitor the state of the singleton from application code and trigger it from our application code?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6702) Infinispan Singleton silently dies in wildfly 9 cluster setup
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6702?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-6702:
-------------------------------
Forum Reference: https://developer.jboss.org/thread/270363
> Infinispan Singleton silently dies in wildfly 9 cluster setup
> -------------------------------------------------------------
>
> Key: WFLY-6702
> URL: https://issues.jboss.org/browse/WFLY-6702
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 9.0.1.Final
> Environment: linux,64gb RAM
> Reporter: Divey Gupta
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: cluster, infinispan, singleton, wildfly
>
> I am using WildFly 9 in a 3-node cluster setup (standalone-full-ha.xml) and use a singleton service for some of our operations. Sometimes (during heavy load/traffic) we see that the singleton service silently dies without giving any error or exception. There was no exception such as "Failed to get quorum..".
> When the load (number of concurrent requests) on WildFly decreases, it still does not recover, i.e. the singleton is not reactivated on any node. The only way to start the singleton again is to restart WildFly manually.
> Following is my standalone-full-ha.xml config for Infinispan and JGroups.
> {code:xml}
> <stack name="tcp">
>     <transport socket-binding="jgroups-tcp" type="TCP"/>
>     <protocol type="TCPPING">
>         <property name="initial_hosts">10.0.1.32[7600],10.0.1.38[7600],10.0.1.39[7600]</property>
>         <property name="port_range">0</property>
>     </protocol>
>     <protocol type="MERGE2"/>
>     <protocol socket-binding="jgroups-tcp-fd" type="FD_SOCK"/>
>     <protocol type="FD"/>
>     <protocol type="VERIFY_SUSPECT"/>
>     <protocol type="pbcast.NAKACK2"/>
>     <protocol type="UNICAST3"/>
>     <protocol type="pbcast.STABLE"/>
>     <protocol type="pbcast.GMS">
>         <property name="join_timeout">5000</property>
>     </protocol>
>     <protocol type="MFC"/>
>     <protocol type="FRAG2"/>
>     <protocol type="RSVP"/>
> </stack>
> {code}
> ....
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:infinispan:3.0">
>     <cache-container aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server" name="server">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="default">
>             <state-transfer enabled="true" timeout="300000"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
>     <cache-container default-cache="session" module="org.wildfly.clustering.web.infinispan" name="web">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="session">
>             <state-transfer enabled="true" timeout="300000"/>
>             <locking isolation="READ_COMMITTED"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
> </subsystem>
> {code}
> Following is a Java code snippet that we use to activate and start the singleton in the cluster:
> {code:java}
> public class SingletonServiceActivator implements ServiceActivator {
>
>     public static final ServiceName SINGLETON_SERVICE_NAME =
>             ServiceName.JBOSS.append("ha", "singleton");
>     private static final String CONTAINER_NAME = "server";
>     private static final String CACHE_NAME = "default";
>
>     @Override
>     public void activate(ServiceActivatorContext context) throws ServiceRegistryException {
>         int quorum = 2;
>         InjectedValue<ServerEnvironment> env = new InjectedValue<>();
>         SingletonServiceClient srv = new SingletonServiceClient(env);
>         ServiceController<?> factoryService = context.getServiceRegistry().getRequiredService(
>                 SingletonServiceBuilderFactory.SERVICE_NAME.append(CONTAINER_NAME, CACHE_NAME));
>         SingletonServiceBuilderFactory factory = (SingletonServiceBuilderFactory) factoryService.getValue();
>         SingletonElectionPolicy policy = new SimpleSingletonElectionPolicy(0);
>         factory.createSingletonServiceBuilder(SINGLETON_SERVICE_NAME, srv)
>                 .requireQuorum(quorum)
>                 .electionPolicy(policy)
>                 .build(new DelegatingServiceContainer(context.getServiceTarget(), context.getServiceRegistry()))
>                 .addDependency(ServerEnvironmentService.SERVICE_NAME, ServerEnvironment.class, env)
>                 .setInitialMode(ServiceController.Mode.ACTIVE)
>                 .install();
>     }
>
>     public final class SingletonServiceClient extends AbstractService<Serializable> {
>
>         private final Value<ServerEnvironment> env;
>
>         public SingletonServiceClient(Value<ServerEnvironment> env) {
>             this.env = env;
>         }
>
>         @Override
>         public void start(StartContext startContext) {
>             log("SingletonService started");
>             // do work
>         }
>
>         @Override
>         public void stop(StopContext stopContext) {
>             log("SingletonService stopped"); // THIS NEVER GETS CALLED
>             // stop
>         }
>     }
> }
> {code}
> Is there something wrong in the config or in the way I am trying to activate and start the singleton?
> I thought there could be some connectivity issue between the nodes in the cluster, because of which it is unable to get the desired quorum to start the singleton. Just to experiment, I changed the quorum to 1, but I still sometimes see this issue during heavy load.
> I would really appreciate some help or suggestions on this issue.
> Also, is there a way to monitor the state of the singleton from application code and trigger it from our application code?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6702) Infinispan Singleton silently dies in wildfly 9 cluster setup
by Divey Gupta (JIRA)
[ https://issues.jboss.org/browse/WFLY-6702?page=com.atlassian.jira.plugin.... ]
Divey Gupta commented on WFLY-6702:
-----------------------------------
[~rhusar] We are going to upgrade to WildFly 10 in our next major release, but currently we are using WildFly 9.0.0.2 in our builds, which we can't change until the next release.
I would like to know whether you have seen this before and know of any workaround, or whether there is anything we can change in our configuration. It doesn't even print any logs when the singleton dies, so it's hard to get a stack trace.
> Infinispan Singleton silently dies in wildfly 9 cluster setup
> -------------------------------------------------------------
>
> Key: WFLY-6702
> URL: https://issues.jboss.org/browse/WFLY-6702
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 9.0.1.Final
> Environment: linux,64gb RAM
> Reporter: Divey Gupta
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: cluster, infinispan, singleton, wildfly
>
> I am using WildFly 9 in a 3-node cluster setup (standalone-full-ha.xml) and use a singleton service for some of our operations. Sometimes (during heavy load/traffic) we see that the singleton service silently dies without giving any error or exception. There was no exception such as "Failed to get quorum..".
> When the load (number of concurrent requests) on WildFly decreases, it still does not recover, i.e. the singleton is not reactivated on any node. The only way to start the singleton again is to restart WildFly manually.
> Following is my standalone-full-ha.xml config for Infinispan and JGroups.
> {code:xml}
> <stack name="tcp">
>     <transport socket-binding="jgroups-tcp" type="TCP"/>
>     <protocol type="TCPPING">
>         <property name="initial_hosts">10.0.1.32[7600],10.0.1.38[7600],10.0.1.39[7600]</property>
>         <property name="port_range">0</property>
>     </protocol>
>     <protocol type="MERGE2"/>
>     <protocol socket-binding="jgroups-tcp-fd" type="FD_SOCK"/>
>     <protocol type="FD"/>
>     <protocol type="VERIFY_SUSPECT"/>
>     <protocol type="pbcast.NAKACK2"/>
>     <protocol type="UNICAST3"/>
>     <protocol type="pbcast.STABLE"/>
>     <protocol type="pbcast.GMS">
>         <property name="join_timeout">5000</property>
>     </protocol>
>     <protocol type="MFC"/>
>     <protocol type="FRAG2"/>
>     <protocol type="RSVP"/>
> </stack>
> {code}
> ....
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:infinispan:3.0">
>     <cache-container aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server" name="server">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="default">
>             <state-transfer enabled="true" timeout="300000"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
>     <cache-container default-cache="session" module="org.wildfly.clustering.web.infinispan" name="web">
>         <transport lock-timeout="120000"/>
>         <replicated-cache mode="ASYNC" name="session">
>             <state-transfer enabled="true" timeout="300000"/>
>             <locking isolation="READ_COMMITTED"/>
>             <transaction locking="OPTIMISTIC" mode="BATCH"/>
>         </replicated-cache>
>     </cache-container>
> </subsystem>
> {code}
> Following is a Java code snippet that we use to activate and start the singleton in the cluster:
> {code:java}
> public class SingletonServiceActivator implements ServiceActivator {
>
>     public static final ServiceName SINGLETON_SERVICE_NAME =
>             ServiceName.JBOSS.append("ha", "singleton");
>     private static final String CONTAINER_NAME = "server";
>     private static final String CACHE_NAME = "default";
>
>     @Override
>     public void activate(ServiceActivatorContext context) throws ServiceRegistryException {
>         int quorum = 2;
>         InjectedValue<ServerEnvironment> env = new InjectedValue<>();
>         SingletonServiceClient srv = new SingletonServiceClient(env);
>         ServiceController<?> factoryService = context.getServiceRegistry().getRequiredService(
>                 SingletonServiceBuilderFactory.SERVICE_NAME.append(CONTAINER_NAME, CACHE_NAME));
>         SingletonServiceBuilderFactory factory = (SingletonServiceBuilderFactory) factoryService.getValue();
>         SingletonElectionPolicy policy = new SimpleSingletonElectionPolicy(0);
>         factory.createSingletonServiceBuilder(SINGLETON_SERVICE_NAME, srv)
>                 .requireQuorum(quorum)
>                 .electionPolicy(policy)
>                 .build(new DelegatingServiceContainer(context.getServiceTarget(), context.getServiceRegistry()))
>                 .addDependency(ServerEnvironmentService.SERVICE_NAME, ServerEnvironment.class, env)
>                 .setInitialMode(ServiceController.Mode.ACTIVE)
>                 .install();
>     }
>
>     public final class SingletonServiceClient extends AbstractService<Serializable> {
>
>         private final Value<ServerEnvironment> env;
>
>         public SingletonServiceClient(Value<ServerEnvironment> env) {
>             this.env = env;
>         }
>
>         @Override
>         public void start(StartContext startContext) {
>             log("SingletonService started");
>             // do work
>         }
>
>         @Override
>         public void stop(StopContext stopContext) {
>             log("SingletonService stopped"); // THIS NEVER GETS CALLED
>             // stop
>         }
>     }
> }
> {code}
> Is there something wrong in the config or in the way I am trying to activate and start the singleton?
> I thought there could be some connectivity issue between the nodes in the cluster, because of which it is unable to get the desired quorum to start the singleton. Just to experiment, I changed the quorum to 1, but I still sometimes see this issue during heavy load.
> I would really appreciate some help or suggestions on this issue.
> Also, is there a way to monitor the state of the singleton from application code and trigger it from our application code?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)