[JBoss JIRA] (JGRP-1886) CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1886?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-1886.
----------------------------
Resolution: Done
> CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
> ---------------------------------------------------------------------------------
>
> Key: JGRP-1886
> URL: https://issues.jboss.org/browse/JGRP-1886
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.6
>
>
> When using distributed locks, we currently use {{Owner}} as the identity that holds a lock. An {{Owner}} consists of the node's address and the thread's ID.
> This has the same semantics as {{ReentrantLock}}, which makes the thread that called {{Lock.lock()}} the owner.
> However, in some scenarios this is too strong, and some applications want only the node itself to be the lock owner. This is needed in cases where thread T1 locks a lock but thread T2 needs to unlock it.
> This means, however, that all threads in a given node can lock or unlock the same lock.
> h4. Solution
> * Add property {{use_thread_id_for_lock_owner}} to {{CENTRAL_LOCK}} (default: false)
> * When set, {{getOwner()}} sets the thread-id to -1
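Not part of the issue text: a minimal client-side sketch of the node-as-owner behaviour, assuming the JGroups {{LockService}} API and a stack file (placeholder name {{lock.xml}}) whose {{CENTRAL_LOCK}} carries the new {{use_thread_id_for_lock_owner}} property.
{code}
import java.util.concurrent.locks.Lock;
import org.jgroups.JChannel;
import org.jgroups.blocks.locking.LockService;

public class NodeOwnedLockDemo {
    public static void main(String[] args) throws Exception {
        // "lock.xml" is a placeholder for a stack that includes CENTRAL_LOCK with
        // use_thread_id_for_lock_owner="false", i.e. the node, not node:thread, owns the lock.
        JChannel ch = new JChannel("lock.xml");
        ch.connect("lock-cluster");
        LockService svc = new LockService(ch);

        Lock lock = svc.getLock("my-lock");
        lock.lock();                              // acquired by the current thread (T1)

        // With node-level ownership, another thread of the same node (T2) may
        // release the lock; with node:thread ownership this unlock would not
        // release the lock held by T1.
        Thread t2 = new Thread(lock::unlock);
        t2.start();
        t2.join();

        ch.close();
    }
}
{code}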
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (JGRP-1886) CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1886?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1886:
---------------------------
Description:
When using distributed locks, we currently use {{Owner}} as the identity that holds a lock. An {{Owner}} consists of the node's address and the thread's ID.
This has the same semantics as {{ReentrantLock}}, which makes the thread that called {{Lock.lock()}} the owner.
However, in some scenarios this is too strong, and some applications want only the node itself to be the lock owner. This is needed in cases where thread T1 locks a lock but thread T2 needs to unlock it.
This means, however, that all threads in a given node can lock or unlock the same lock.
h4. Solution
* Add property {{use_thread_id_for_lock_owner}} to {{CENTRAL_LOCK}} (default: false)
* When set, {{getOwner()}} sets the thread-id to -1
was:
When using distributed locks, we currently use {{Owner}} as the identity that holds a lock. An {{Owner}} consists of the node's address and the thread's ID.
This has the same semantics as {{ReentrantLock}}, which makes the thread that called {{Lock.lock()}} the owner.
However, in some scenarios this is too strong, and some applications want only the node itself to be the lock owner. This is needed in cases where thread T1 locks a lock but thread T2 needs to unlock it.
This means, however, that all threads in a given node can lock or unlock the same lock.
h4. Solution
* Add property {{use_thread_id_for_lock_owner}} to {{CENTRAL_LOCK}} (default: false)
* When set, {{getOwner()}} sets the thread-id to 0
> CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
> ---------------------------------------------------------------------------------
>
> Key: JGRP-1886
> URL: https://issues.jboss.org/browse/JGRP-1886
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.6
>
>
> When using distributed locks, we currently use {{Owner}} as the identity that holds a lock. An {{Owner}} consists of the node's address and the thread's ID.
> This has the same semantics as {{ReentrantLock}}, which makes the thread that called {{Lock.lock()}} the owner.
> However, in some scenarios this is too strong, and some applications want only the node itself to be the lock owner. This is needed in cases where thread T1 locks a lock but thread T2 needs to unlock it.
> This means, however, that all threads in a given node can lock or unlock the same lock.
> h4. Solution
> * Add property {{use_thread_id_for_lock_owner}} to {{CENTRAL_LOCK}} (default: false)
> * When set, {{getOwner()}} sets the thread-id to -1
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (JBMETA-379) Missing param-name in a web.xml causes NullPointerException during deployment
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/JBMETA-379?page=com.atlassian.jira.plugin... ]
RH Bugzilla Integration commented on JBMETA-379:
------------------------------------------------
baranowb <bbaranow(a)redhat.com> changed the Status of [bug 1125421|https://bugzilla.redhat.com/show_bug.cgi?id=1125421] from POST to MODIFIED
> Missing param-name in a web.xml causes NullPointerException during deployment
> ------------------------------------------------------------------------------
>
> Key: JBMETA-379
> URL: https://issues.jboss.org/browse/JBMETA-379
> Project: JBoss Metadata
> Issue Type: Bug
> Components: web
> Affects Versions: 8.0.0.Final
> Reporter: Jay Kumar SenSharma
> Assignee: Jean-Frederic Clere
> Fix For: 8.0.1.Final
>
> Attachments: ContextParamNullDemo.war
>
>
> - While deploying a WAR, if the web.xml contains a context-param with a <param-value> but a missing <param-name>, deployment fails with a NullPointerException as follows (a stand-alone check for this malformed input is sketched after the log below):
> {code}
> 00:12:09,583 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0027: Starting deployment of "ContextParamNullDemo.war" (runtime-name: "ContextParamNullDemo.war")
> 00:12:09,591 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC000001: Failed to start service jboss.deployment.unit."ContextParamNullDemo.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."ContextParamNullDemo.war".PARSE: WFLYSRV0153: Failed to process phase PARSE of deployment "ContextParamNullDemo.war"
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:163) [wildfly-server-9.0.0.Alpha1-SNAPSHOT.jar:9.0.0.Alpha1-SNAPSHOT]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]
> Caused by: java.lang.NullPointerException
> at org.jboss.as.jsf.deployment.JSFVersionProcessor.deploy(JSFVersionProcessor.java:91)
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:156) [wildfly-server-9.0.0.Alpha1-SNAPSHOT.jar:9.0.0.Alpha1-SNAPSHOT]
> ... 5 more
> 00:12:09,598 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 1) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "ContextParamNullDemo.war")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.unit.\"ContextParamNullDemo.war\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"ContextParamNullDemo.war\".PARSE: WFLYSRV0153: Failed to process phase PARSE of deployment \"ContextParamNullDemo.war\"
> Caused by: java.lang.NullPointerException"}}
> {code}
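Not part of the report: a small stand-alone check, using only the JDK DOM API, that flags <context-param> entries whose <param-name> is missing. The file path and class name are placeholders; this only illustrates the malformed input, it is not the JBoss Metadata fix.
{code}
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class WebXmlParamCheck {
    public static void main(String[] args) throws Exception {
        // Path is illustrative; point it at the web.xml inside the failing WAR.
        File webXml = new File(args.length > 0 ? args[0] : "WEB-INF/web.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .parse(webXml);

        NodeList params = doc.getElementsByTagName("context-param");
        for (int i = 0; i < params.getLength(); i++) {
            Element param = (Element) params.item(i);
            if (param.getElementsByTagName("param-name").getLength() == 0) {
                // This is the shape of input that triggers the NPE during the PARSE phase.
                System.out.println("context-param #" + i + " is missing <param-name>");
            }
        }
    }
}
{code}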
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFCORE-133) CLI -connect option throws full stacktrace exception
by Joe Wertz (JIRA)
Joe Wertz created WFCORE-133:
--------------------------------
Summary: CLI -connect option throws full stacktrace exception
Key: WFCORE-133
URL: https://issues.jboss.org/browse/WFCORE-133
Project: WildFly Core
Issue Type: Bug
Components: CLI
Affects Versions: 1.0.0.Alpha8
Reporter: Joe Wertz
Assignee: Joe Wertz
Priority: Minor
Fix For: 1.0.0.Alpha9
The full stack trace is printed for exceptions that occur before the CLI connects to a server.
Instead, only the exception messages should be printed.
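Not part of the issue text: one way to get the desired behaviour, sketched generically (the connect failure is simulated), is to walk the cause chain and print only the messages instead of the full stack trace.
{code}
public final class ErrorReporting {

    /** Prints only the messages of an exception and its causes, never the stack trace. */
    static void printMessagesOnly(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            String msg = cur.getMessage();
            System.err.println(msg != null ? msg : cur.getClass().getSimpleName());
        }
    }

    public static void main(String[] args) {
        try {
            // Placeholder for the pre-connection work that fails before the CLI is connected.
            throw new java.io.IOException("The controller is not available",
                    new java.net.ConnectException("Connection refused"));
        } catch (Exception e) {
            printMessagesOnly(e);   // instead of e.printStackTrace()
        }
    }
}
{code}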
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-1149) Naming lookup intermittently fails on IBM JDK due to org.jboss.remoting3.NotOpenException: Endpoint is not open.
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-1149?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated WFLY-1149:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=923836
> Naming lookup intermittently fails on IBM JDK due to org.jboss.remoting3.NotOpenException: Endpoint is not open.
> ----------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-1149
> URL: https://issues.jboss.org/browse/WFLY-1149
> Project: WildFly
> Issue Type: Bug
> Components: Naming, Test Suite
> Environment: IBM JDK 6 (build 20110203_074623)
> IBM JDK 7 (build 20120809_118929)
> Reporter: Ivo Studensky
> Attachments: endpoint_is_not_open_2012-11-26.xml, failed_with_status_cancelled_2012-11-26.xml, test_output_with_trace_logging_in_EndpointCache.xml
>
>
> RemoteNamingTestCase intermittently fails when running on IBM JDK. According to the logs, the remoting channel had been closed before the endpoint tried to connect to it. Unfortunately, whenever I tried to debug this issue the tests always passed. A sketch of the client-side lookup follows the log snippets below.
> test.log snippet:
> {noformat}
> 13:16:31,115 DEBUG [org.xnio.nio] (Remoting "config-based-naming-client-endpoint" read-1) Started channel thread 'Remoting "config-based-naming-client-endpoint" read-1', selector sun.nio.ch.EPollSelectorImpl@345642e1
> 13:16:31,115 DEBUG [org.xnio.nio] (Remoting "config-based-naming-client-endpoint" write-1) Started channel thread 'Remoting "config-based-naming-client-endpoint" write-1', selector sun.nio.ch.EPollSelectorImpl@1dc68cf2
> 13:16:31,121 DEBUG [org.jboss.naming.remote.client.InitialContextFactory] (main) jboss.naming.client.connect.options. has the following options {org.xnio.Options.SASL_POLICY_NOPLAINTEXT=>false}
> 13:16:31,191 ERROR [org.jboss.naming.remote.protocol.v1.RemoteNamingStoreV1] (Remoting "config-based-naming-client-endpoint" task-1) Channel end notification received, closing channel Channel ID d1f17196 (outbound) of Remoting connection fd3dcedc to /127.0.0.1:4447
> 13:16:31,204 DEBUG [org.jboss.naming.remote.client.HaRemoteNamingStore] (main) Failed to connect to server remote://127.0.0.1:4447: org.jboss.remoting3.NotOpenException: Endpoint is not open
> at org.jboss.remoting3.EndpointImpl.resourceUntick(EndpointImpl.java:182)
> at org.jboss.remoting3.EndpointImpl.doConnect(EndpointImpl.java:261)
> at org.jboss.remoting3.EndpointImpl.doConnect(EndpointImpl.java:251)
> at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:349)
> at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:333)
> at org.jboss.naming.remote.client.EndpointCache$EndpointWrapper.connect(EndpointCache.java:105)
> at org.jboss.naming.remote.client.HaRemoteNamingStore.failOverSequence(HaRemoteNamingStore.java:179)
> at org.jboss.naming.remote.client.HaRemoteNamingStore.namingOperation(HaRemoteNamingStore.java:117)
> at org.jboss.naming.remote.client.HaRemoteNamingStore.lookup(HaRemoteNamingStore.java:223)
> at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:79)
> at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:83)
> at javax.naming.InitialContext.lookup(InitialContext.java:422)
> at org.jboss.as.test.integration.naming.remote.simple.RemoteNamingTestCase.testRemoteLookup(RemoteNamingTestCase.java:74)
> {noformat}
> server.log snippet:
> {noformat}
> 13:16:31,025 INFO [org.jboss.as.server] (management-handler-thread - 3) JBAS018559: Deployed "test.jar"
> 13:16:31,163 DEBUG [org.jboss.naming.remote.server.RemoteNamingService] (Remoting "thinkpax" task-3) Channel Opened - Channel ID 51f17196 (inbound) of Remoting connection b9da2788 to /127.0.0.1:46866
> 13:16:31,176 DEBUG [org.jboss.naming.remote.server.RemoteNamingService] (Remoting "thinkpax" task-4) Chosen version 0x01
> 13:16:31,189 DEBUG [org.jboss.naming.remote.server.RemoteNamingService] (Remoting "thinkpax" read-1) Channel Channel ID 51f17196 (inbound) of Remoting connection b9da2788 to /127.0.0.1:46866 closed.
> 13:16:31,193 INFO [org.jboss.as.naming] (Remoting "thinkpax" task-1) JBAS011806: Channel end notification received, closing channel Channel ID 51f17196 (inbound) of Remoting connection b9da2788 to null
> {noformat}
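An editorial aside, not from the report: the client side of the failing lookup looks roughly like the sketch below, with the factory class, provider URL and connect option taken from the client log above. The JNDI name and the crude retry loop are illustrative only and are not the fix.
{code}
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteLookupSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                  "org.jboss.naming.remote.client.InitialContextFactory"); // from the client log
        props.put(Context.PROVIDER_URL, "remote://127.0.0.1:4447");        // from the client log
        props.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT",
                  "false");                                                // from the client log

        // Retry a few times: the failure is intermittent ("Endpoint is not open"),
        // so a fresh InitialContext sometimes succeeds where the previous one failed.
        NamingException last = null;
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                Context ctx = new InitialContext(props);
                Object bound = ctx.lookup("test/service");  // placeholder JNDI name
                System.out.println("Lookup succeeded on attempt " + attempt + ": " + bound);
                ctx.close();
                return;
            } catch (NamingException e) {
                last = e;
            }
        }
        throw last;
    }
}
{code}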
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (JGRP-1886) CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
by Bela Ban (JIRA)
Bela Ban created JGRP-1886:
------------------------------
Summary: CENTRAL_LOCK: provide option to make a node the lock owner instead of node:thread
Key: JGRP-1886
URL: https://issues.jboss.org/browse/JGRP-1886
Project: JGroups
Issue Type: Feature Request
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 3.6
When using distributed locks, we currently use {{Owner}} as the identity that holds a lock. An {{Owner}} consists of the node's address and the thread's ID.
This has the same semantics as {{ReentrantLock}}, which makes the thread that called {{Lock.lock()}} the owner.
However, in some scenarios this is too strong, and some applications want only the node itself to be the lock owner. This is needed in cases where thread T1 locks a lock but thread T2 needs to unlock it.
This means, however, that all threads in a given node can lock or unlock the same lock.
h4. Solution
* Add property {{use_thread_id_for_lock_owner}} to {{CENTRAL_LOCK}} (default: false)
* When set, {{getOwner()}} sets the thread-id to 0
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (JGRP-1851) RSVP: add option to not block caller
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1851?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-1851.
----------------------------
Resolution: Done
> RSVP: add option to not block caller
> ------------------------------------
>
> Key: JGRP-1851
> URL: https://issues.jboss.org/browse/JGRP-1851
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 3.6
>
>
> In RSVP we have a {{resend_interval}} at which empty messages are sent to trigger retransmission, and a {{timeout}}, which is the maximum time a caller will block until all acks have been received.
> In the {{down()}} method, the caller blocks until all acks have been received or a timeout occurred. Then the resend task is stopped and the entry is removed from the {{ids}} map.
> In some cases we may want to not block the caller, so that the calling thread returns immediately while resending nevertheless continues until all acks have been received or the timeout occurred. This is controlled by a new property, {{block}}.
> An example of where this is useful is {{RpcDispatcher.callRemoteMethodsWithFuture()}}: here we want the call to return immediately with a future and then, later, potentially block on it. However, if the RSVP flag is set, sending the message will block in RSVP until all acks have been received. This breaks the semantics of {{callRemoteMethodsWithFuture()}}.
> Another example is async RPCs (mode={{GET_NONE}}); here we'd block if RSVP is set even though we shouldn't.
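Not part of the issue text: a rough sketch of the {{callRemoteMethodsWithFuture()}} scenario described above, assuming the JGroups 3.x API. With the RSVP flag set and the protocol's new non-blocking option enabled, the future should be returned immediately while RSVP keeps resending in the background.
{code}
import java.util.concurrent.Future;
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.blocks.MethodCall;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.ResponseMode;
import org.jgroups.blocks.RpcDispatcher;
import org.jgroups.util.RspList;

public class RsvpFutureDemo {
    public int add(int a, int b) { return a + b; }    // RPC target method

    public static void main(String[] args) throws Exception {
        JChannel ch = new JChannel();                  // default stack; RSVP must be present in it
        RpcDispatcher disp = new RpcDispatcher(ch, new RsvpFutureDemo());
        ch.connect("rsvp-demo");

        MethodCall call = new MethodCall("add", new Object[]{1, 2},
                                         new Class[]{int.class, int.class});
        RequestOptions opts = new RequestOptions(ResponseMode.GET_ALL, 5000)
                .setFlags(Message.Flag.RSVP);

        // With blocking RSVP (the old behaviour), this call would not hand back the
        // future until all acks arrived; with the non-blocking option it returns
        // immediately and the caller can block later, on the future itself.
        Future<RspList<Object>> future =
                disp.callRemoteMethodsWithFuture(null, call, opts);
        System.out.println("returned immediately, result: " + future.get());

        disp.stop();
        ch.close();
    }
}
{code}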
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)