[JBoss JIRA] (WFLY-465) Attribute check-for-live-server must be set on live server for failback
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-465?page=com.atlassian.jira.plugin.s... ]
Jason Greene updated WFLY-465:
------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
> Attribute check-for-live-server must be set on live server for failback
> --------------------------------------------------------------------
>
> Key: WFLY-465
> URL: https://issues.jboss.org/browse/WFLY-465
> Project: WildFly
> Issue Type: Task
> Security Level: Public(Everyone can see)
> Components: JMS
> Reporter: Miroslav Novak
> Assignee: Jeff Mesnil
> Fix For: 8.2.0.CR1
>
> Attachments: standalone-full-ha-backup.xml, standalone-full-ha-live.xml
>
>
> When there is a live/backup pair with a replicated journal, it should be sufficient to set the attribute "check-for-live-server" in the messaging subsystem only on the backup to force the backup server to shut down when the live server comes alive again. The problem is that this does not happen.
> Failback only succeeds (the backup shuts itself down) when the attribute "check-for-live-server" is also set on the live server.
> The HornetQ project documentation does not make clear where "check-for-live-server" should be set:
> http://docs.jboss.org/hornetq/2.3.0.CR1/docs/user-manual/html_single/inde...
> This issue was hit with EAP 6.1.0.DR2 (HQ 2.3.0.CR1).
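Per the report, failback only works when the attribute is set on the live server as well. A minimal sketch of the relevant fragment, assuming the HornetQ messaging subsystem schema (namespace version and surrounding configuration are illustrative; see the attached standalone-full-ha-live.xml for the full profile):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <!-- Reported workaround: set this on the live server too,
             not only on the backup, for failback to succeed. -->
        <check-for-live-server>true</check-for-live-server>
        <shared-store>false</shared-store>
        <!-- remaining journal/replication configuration elided -->
    </hornetq-server>
</subsystem>
```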
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
11 years, 11 months
[JBoss JIRA] (WFLY-3110) Don't copy the contents to all hosts when assigning a deployment to a server-group
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-3110?page=com.atlassian.jira.plugin.... ]
Jason Greene updated WFLY-3110:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
> Don't copy the contents to all hosts when assigning a deployment to a server-group
> ----------------------------------------------------------------------------------
>
> Key: WFLY-3110
> URL: https://issues.jboss.org/browse/WFLY-3110
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Domain Management
> Affects Versions: 8.0.0.Final
> Reporter: Emanuel Muckenhuber
> Assignee: Emanuel Muckenhuber
> Fix For: 8.2.0.CR1
>
>
> When assigning a deployment to a server-group, the actual deployment contents are also copied to all hosts, regardless of whether the deployment is used locally or not. This should be changed so that the contents are only retrieved if they are actually needed by a local server.
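The intended check can be sketched as follows (a minimal sketch with invented names, not WildFly's domain management code): a host only pulls the deployment content if one of its server groups is the assignment target.

```java
import java.util.List;

// Hypothetical model: only pull deployment content to a host when a
// local server actually belongs to the target server group.
public class LazyContentPull {
    record Host(String name, List<String> serverGroups) {}

    static boolean needsContent(Host host, String targetGroup) {
        return host.serverGroups().contains(targetGroup);
    }

    public static void main(String[] args) {
        Host a = new Host("host-a", List.of("main-server-group"));
        Host b = new Host("host-b", List.of("other-server-group"));
        for (Host h : List.of(a, b)) {
            if (needsContent(h, "main-server-group")) {
                System.out.println("pulling content to " + h.name());
            } else {
                System.out.println("skipping " + h.name());
            }
        }
    }
}
```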
[JBoss JIRA] (WFLY-2275) StackOverflowError on DataSource resource injection
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-2275?page=com.atlassian.jira.plugin.... ]
Jason Greene updated WFLY-2275:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
> StackOverflowError on DataSource resource injection
> ---------------------------------------------------
>
> Key: WFLY-2275
> URL: https://issues.jboss.org/browse/WFLY-2275
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Naming
> Affects Versions: 8.0.0.Beta1
> Reporter: Thomas Diesler
> Assignee: Thomas Diesler
> Fix For: 8.2.0.CR1
>
> Attachments: standalone-osgi.xml
>
>
> {code}
> @Resource(name="java:comp/DefaultDataSource")
> public DataSource ds;
> {code}
> leads to
> {code}
> 10:59:21,901 INFO [org.jboss.osgi.framework] (MSC service thread 1-3) JBOSGI011001: Bundle installed: resource-injection.jar:0.0.0
> 10:59:21,937 INFO [org.jboss.as.arquillian] (MSC service thread 1-3) Arquillian deployment detected: ArquillianConfig[service=jboss.arquillian.config."resource-injection.jar",unit=resource-injection.jar,tests=[org.jboss.test.osgi.example.resource.ResourceInjectionTestCase]]
> 10:59:21,993 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC000001: Failed to start service jboss.deployment.unit."resource-injection.jar".Activate: org.jboss.msc.service.StartException in service jboss.deployment.unit."resource-injection.jar".Activate: Failed to start service
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1900) [jboss-msc-1.2.0.Beta2.jar:1.2.0.Beta2]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_25]
> at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
> Caused by: java.lang.StackOverflowError
> at java.util.Vector.<init>(Vector.java:127) [rt.jar:1.7.0_25]
> at java.util.Vector.<init>(Vector.java:144) [rt.jar:1.7.0_25]
> at java.util.Vector.<init>(Vector.java:153) [rt.jar:1.7.0_25]
> at javax.naming.NameImpl.<init>(NameImpl.java:273) [rt.jar:1.7.0_25]
> at javax.naming.NameImpl.<init>(NameImpl.java:277) [rt.jar:1.7.0_25]
> at javax.naming.CompositeName.<init>(CompositeName.java:231) [rt.jar:1.7.0_25]
> at org.jboss.as.naming.util.NameParser.parse(NameParser.java:49)
> at org.jboss.as.naming.NamingContext.parseName(NamingContext.java:496)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:188)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:184)
> at org.jboss.as.naming.deployment.ContextNames$BindInfo$1$1.getReference(ContextNames.java:302)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:140)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:81)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:202)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:188)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:184)
> at org.jboss.as.naming.deployment.ContextNames$BindInfo$1$1.getReference(ContextNames.java:302)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:140)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:81)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:202)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:188)
> at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:184)
> at org.jboss.as.naming.deployment.ContextNames$BindInfo$1$1.getReference(ContextNames.java:302)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:140)
> at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:81)
> {code}
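The repeated lookup/getReference frames in the trace suggest a self-referential binding: resolving the name triggers another lookup of the same name. A minimal standalone reproduction of that mechanism (illustrative code, not the WildFly naming implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Model of the failure mode in the trace above: a binding whose
// reference resolves by looking up its own name recurses until
// the stack overflows.
public class SelfReferentialLookup {
    static final Map<String, Supplier<Object>> store = new HashMap<>();

    static Object lookup(String name) {
        Supplier<Object> ref = store.get(name);
        if (ref == null) throw new RuntimeException("NameNotFound: " + name);
        return ref.get(); // resolving the reference may trigger another lookup
    }

    public static void main(String[] args) {
        // @Resource(name = "java:comp/DefaultDataSource") ends up binding
        // an entry that resolves via a lookup of the very same name.
        store.put("java:comp/DefaultDataSource",
                  () -> lookup("java:comp/DefaultDataSource"));
        try {
            lookup("java:comp/DefaultDataSource");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError reproduced");
        }
    }
}
```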
[JBoss JIRA] (WFLY-84) Jasper using wrong ProtectionDomain for compiled JSP
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-84?page=com.atlassian.jira.plugin.sy... ]
Jason Greene updated WFLY-84:
-----------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
> Jasper using wrong ProtectionDomain for compiled JSP
> ----------------------------------------------------
>
> Key: WFLY-84
> URL: https://issues.jboss.org/browse/WFLY-84
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Web (Undertow)
> Reporter: David Lloyd
> Assignee: Remy Maucherat
> Fix For: 8.2.0.CR1
>
>
> Compiled JSPs loaded via JasperLoader appear to be using a different ProtectionDomain than the rest of the WAR deployment. I think it should probably be using a PD which contains the permissions from the deployment's ClassLoader, and probably the CodeSource from the deployment unit from which the JSP file originated. This will ensure that permissions set via deployment descriptor and/or the management model will take proper effect.
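The proposed construction can be sketched with the standard java.security API (illustrative path and permissions, not WildFly code): build the compiled JSP's ProtectionDomain from the deployment unit's CodeSource plus the permissions configured for the deployment.

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.security.CodeSource;
import java.security.Permissions;
import java.security.ProtectionDomain;

// Sketch: a ProtectionDomain carrying the CodeSource of the deployment
// unit the JSP originated from, and the deployment's configured permissions.
public class JspProtectionDomain {
    public static void main(String[] args) throws MalformedURLException {
        // CodeSource of the originating deployment unit (path is illustrative).
        CodeSource deploymentSource = new CodeSource(
                new URL("file:/deployments/example.war"),
                (java.security.cert.Certificate[]) null);

        // Permissions as granted via deployment descriptor / management model.
        Permissions perms = new Permissions();
        perms.add(new RuntimePermission("getClassLoader"));
        perms.setReadOnly();

        ProtectionDomain pd = new ProtectionDomain(deploymentSource, perms);
        System.out.println(pd.getCodeSource().getLocation());
    }
}
```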
[JBoss JIRA] (WFLY-261) Add way to properly parse JNDI urls in AS codebase
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-261?page=com.atlassian.jira.plugin.s... ]
Jason Greene updated WFLY-261:
------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
> Add way to properly parse JNDI urls in AS codebase
> --------------------------------------------------
>
> Key: WFLY-261
> URL: https://issues.jboss.org/browse/WFLY-261
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public(Everyone can see)
> Components: Naming
> Reporter: Bartosz Baranowski
> Assignee: Bartosz Baranowski
> Fix For: 8.2.0.CR1
>
>
> This is a follow-up to AS7-2138. The original code threw a NamingException when no URL context was available. AS7-2138 introduced a fallback to NamingContext for cases where AS7 does not provide a custom handler for the URL scheme (like EJB). However, the fallback did not fail with a NamingException when it could not locate a Context. This essentially made the lookup descend deep into AS7 service lookups, which hides the real cause of the failure.
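The desired fail-fast behavior can be sketched as follows (hypothetical helper with invented names, not the AS7 code): if neither a scheme-specific URL context nor the fallback can resolve the name, throw a NamingException immediately instead of descending into server internals.

```java
import java.util.Map;
import java.util.function.Function;
import javax.naming.NamingException;

// Sketch: dispatch JNDI URLs to scheme handlers; fail fast with
// NamingException when no handler can resolve the URL.
public class JndiUrlParsing {
    // Scheme handlers the server provides (the "ejb" handler is illustrative).
    static final Map<String, Function<String, Object>> urlContexts =
            Map.of("ejb", name -> "ejb-proxy:" + name);

    static Object lookup(String url) throws NamingException {
        int colon = url.indexOf(':');
        String scheme = colon > 0 ? url.substring(0, colon) : null;
        Function<String, Object> handler =
                scheme == null ? null : urlContexts.get(scheme);
        if (handler != null) {
            return handler.apply(url);
        }
        // Fail fast so the real cause of the failure is not hidden.
        throw new NamingException("No URL context for: " + url);
    }

    public static void main(String[] args) {
        try {
            System.out.println(lookup("ejb:app/module/Bean"));
            lookup("unknown:foo");
        } catch (NamingException e) {
            System.out.println(e.getMessage());
        }
    }
}
```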
[JBoss JIRA] (WFLY-3421) Rehashing on view change can result in premature session/ejb expiration
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-3421?page=com.atlassian.jira.plugin.... ]
Jason Greene updated WFLY-3421:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.1.0.Final)
(was: 9.0.0.Alpha1)
> Rehashing on view change can result in premature session/ejb expiration
> -----------------------------------------------------------------------
>
> Key: WFLY-3421
> URL: https://issues.jboss.org/browse/WFLY-3421
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Clustering
> Affects Versions: 8.1.0.CR2
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Critical
> Fix For: 8.2.0.CR1
>
>
> Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs previously owned by the leaving node schedules expiration of those sessions. However, a view change can also change ownership of any session/ejb, and we are not currently handling this properly. If a session/ejb changes ownership, the previously scheduled expiration is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions this issue is not apparent, since subsequent requests are directed to the previous owner, which cancels the expiration on the old owner and reschedules it on the new owner properly. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the ejb client receives updated affinity information prior to subsequent requests.
> There are at least 2 ways to address this:
> # When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
> # We can allow the scheduler to expose the set of keys that are currently scheduled and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
> Option 1 adds an additional cluster-wide RPC per request.
> Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends 1 RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
> Option 2 is the least invasive solution of the two.
> EDIT: There is a 3rd option: modify the expiration tasks so that they skip expiration if the session/ejb is not owned by the current node. This prevents the premature expiration issue, but we need some additional strategy to reschedule the session/ejb expiration on the current owner.
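Option 2 can be sketched as follows (a minimal sketch with hypothetical names, not the actual clustering code): the scheduler exposes its scheduled keys and, on topology change, cancels entries the local node no longer owns.

```java
import java.util.ArrayList;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.*;
import java.util.function.Predicate;

// Sketch of option 2: the scheduler exposes its scheduled keys so that,
// on a view change, entries this node no longer owns can be cancelled.
public class ExpirationScheduler {
    private final Map<String, ScheduledFuture<?>> tasks = new ConcurrentHashMap<>();
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    void schedule(String id, long delayMillis) {
        tasks.put(id, executor.schedule(() -> tasks.remove(id),
                delayMillis, TimeUnit.MILLISECONDS));
    }

    // The new API option 2 calls for: expose currently scheduled keys.
    Set<String> scheduledKeys() { return tasks.keySet(); }

    // On topology change: cancel every entry the local node no longer owns.
    void onTopologyChange(Predicate<String> locallyOwned) {
        for (String id : new ArrayList<>(tasks.keySet())) {
            if (!locallyOwned.test(id)) {
                ScheduledFuture<?> f = tasks.remove(id);
                if (f != null) f.cancel(false);
            }
        }
    }

    public static void main(String[] args) {
        ExpirationScheduler s = new ExpirationScheduler();
        s.schedule("session-1", 60_000);
        s.schedule("session-2", 60_000);
        // After rehashing, suppose only session-1 is still owned locally.
        s.onTopologyChange(id -> id.equals("session-1"));
        System.out.println(s.scheduledKeys());
        s.executor.shutdownNow();
    }
}
```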