[JBoss JIRA] (WFCORE-761) Not possible to overlay non-existing file in WAR
by Lin Gao (JIRA)
[ https://issues.jboss.org/browse/WFCORE-761?page=com.atlassian.jira.plugin... ]
Lin Gao updated WFCORE-761:
---------------------------
Git Pull Request: https://github.com/wildfly/wildfly-core/pull/1332
> Not possible to overlay non-existing file in WAR
> ------------------------------------------------
>
> Key: WFCORE-761
> URL: https://issues.jboss.org/browse/WFCORE-761
> Project: WildFly Core
> Issue Type: Bug
> Components: Server
> Reporter: Bartosz Baranowski
> Assignee: Jason Greene
> Priority: Critical
> Labels: downstream_dependency
>
> It is either a bug in how deployments are treated or in how overlay/VFS works.
> Steps to reproduce:
> 1. Deploy an unexploded WAR with a JAR inside.
> 2. Add an overlay that adds a non-existing file to the JAR (see the sketch just below).
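> For illustration, a minimal sketch of step 2 through the native management API, roughly mirroring what the failing test's OverlayUtils.setupOverlay does. This is not the test's actual code: the host/port, the overlay name "test-overlay", and the byte content are assumptions; only the archive paths come from the exception below.
> {noformat}
> import java.nio.charset.StandardCharsets;
>
> import org.jboss.as.controller.client.ModelControllerClient;
> import org.jboss.dmr.ModelNode;
>
> public class OverlaySetupSketch {
>     public static void main(String[] args) throws Exception {
>         try (ModelControllerClient client =
>                  ModelControllerClient.Factory.create("localhost", 9990)) {
>
>             // /deployment-overlay=test-overlay:add
>             ModelNode addOverlay = new ModelNode();
>             addOverlay.get("operation").set("add");
>             addOverlay.get("address").add("deployment-overlay", "test-overlay");
>             client.execute(addOverlay);
>
>             // Register overlay content under a path that does not yet exist
>             // inside the nested jar; this is the step that fails for an
>             // unexploded WAR.
>             ModelNode addContent = new ModelNode();
>             addContent.get("operation").set("add");
>             addContent.get("address")
>                       .add("deployment-overlay", "test-overlay")
>                       .add("content", "WEB-INF/lib/overlayed.jar/META-INF/x/file.txt");
>             addContent.get("content").get("bytes")
>                       .set("overlay".getBytes(StandardCharsets.UTF_8));
>             client.execute(addContent);
>
>             // Link the overlay to the deployment.
>             ModelNode link = new ModelNode();
>             link.get("operation").set("add");
>             link.get("address")
>                 .add("deployment-overlay", "test-overlay")
>                 .add("deployment", "shell.war");
>             client.execute(link);
>         }
>     }
> }
> {noformat}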
> Result: the following exception:
> Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: JBAS018776: Failed to get content for deployment overlay WEB-INF/lib/overlayed.jar//META-INF/x/file.txt at WEB-INF/lib/overlayed.jar//META-INF/x/file.txt
> Caused by: java.io.FileNotFoundException: /content/shell.war/WEB-INF/lib/overlayed.jar/META-INF/x/file.txt"}}
> at org.jboss.as.test.integration.management.ManagementOperations.executeOperation(ManagementOperations.java:67)
> at org.jboss.as.test.integration.management.ManagementOperations.executeOperation(ManagementOperations.java:37)
> at org.jboss.as.test.integration.deployment.deploymentoverlay.jar.OverlayUtils.setupOverlay(OverlayUtils.java:76)
> at org.jboss.as.test.integration.deployment.deploymentoverlay.war.OverlayNonExistingResourceTestCase.testOverlay(OverlayNonExistingResourceTestCase.java:67)
> Expectation:
> It should work. It actually does work if the WAR is really exploded, or if the
> 'if (exploded)' branch in the overlay processor is removed and everything is handled via: https://github.com/stuartwdouglas/wildfly-core/blob/a75af9118c4062fafb899...
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6699) Custom principal is lost during remote ejb authentication
by Vlado Pakan (JIRA)
[ https://issues.jboss.org/browse/WFLY-6699?page=com.atlassian.jira.plugin.... ]
Vlado Pakan closed WFLY-6699.
-----------------------------
Resolution: Won't Fix
Downstream issue fixed by customer implementation.
> Custom principal is lost during remote ejb authentication
> ---------------------------------------------------------
>
> Key: WFLY-6699
> URL: https://issues.jboss.org/browse/WFLY-6699
> Project: WildFly
> Issue Type: Bug
> Components: EJB, Security
> Affects Versions: 10.0.0.Final
> Reporter: Vlado Pakan
> Assignee: Vlado Pakan
> Original Estimate: 1 week, 2 days
> Remaining Estimate: 1 week, 2 days
>
> A custom principal (instead of the JBoss-provided SimplePrincipal class) is used to store the authenticated username in a custom login module. The custom principal class is lost when attempting to retrieve it from the Subject within a secured EJB. The custom principal is lost only when it is used to store the username; other instances of the custom principal class are passed along successfully when they store something besides the username (SSN, CustomerID, etc.).
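> For illustration, a minimal custom principal of the kind the report describes; the class name and shape are assumptions, not taken from the issue:
> {noformat}
> import java.io.Serializable;
> import java.security.Principal;
>
> // Illustrative only: a principal used over remote EJB must be serializable
> // so it can travel from the standalone client to the server.
> public class CustomPrincipal implements Principal, Serializable {
>     private static final long serialVersionUID = 1L;
>
>     private final String name;
>
>     public CustomPrincipal(String name) {
>         this.name = name;
>     }
>
>     @Override
>     public String getName() {
>         return name;
>     }
> }
> {noformat}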
> It looks like this is happening due to a change (introduced in 6.4.6) in the org.jboss.as.security.service.SimpleSecurityManager.authenticate method:
> {noformat}
> @@ -445,8 +408,11 @@ public class SimpleSecurityManager implements ServerSecurityManager
>              {
>                  auditPrincipal = unauthenticatedIdentity.asPrincipal();
>                  subject.getPrincipals().add(auditPrincipal);
>                  authenticated = true;
> +            }
> +            else
> +            {
>                  subject.getPrincipals().add(principal);
>              }
> {noformat}
> This change was associated with bz-921217.
> This only happens when the EJB is accessed from a remote standalone client. If the EJB is accessed from a secured web app (locally), then the custom principal is not lost.
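> A sketch of how the symptom can be observed on the server side; the bean name is made up, but PolicyContext with the standard JACC subject key is a common way to read the caller Subject inside a secured EJB:
> {noformat}
> import java.security.Principal;
> import javax.ejb.Stateless;
> import javax.security.auth.Subject;
> import javax.security.jacc.PolicyContext;
>
> @Stateless
> public class PrincipalInspectorBean {
>
>     // Lists the principal classes attached to the caller Subject. Per the
>     // report, a remote standalone client would see SimplePrincipal here
>     // instead of the custom class; a local web caller sees the custom class.
>     public String principalClasses() throws Exception {
>         Subject subject = (Subject) PolicyContext
>                 .getContext("javax.security.auth.Subject.container");
>         StringBuilder sb = new StringBuilder();
>         for (Principal p : subject.getPrincipals()) {
>             sb.append(p.getClass().getName()).append(' ');
>         }
>         return sb.toString().trim();
>     }
> }
> {noformat}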
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2086) FD_SOCK keeps trying to create a new socket to the killed server
by Osamu Nagano (JIRA)
[ https://issues.jboss.org/browse/JGRP-2086?page=com.atlassian.jira.plugin.... ]
Osamu Nagano commented on JGRP-2086:
------------------------------------
This test scenario kills a set of servers rather than crashing a host, so FD_HOST is not triggered. I can't reproduce it either, but the user has reported it several times. What kind of information should be collected to investigate this issue? Maybe we can ask them for it.
> FD_SOCK keeps trying to create a new socket to the killed server
> ----------------------------------------------------------------
>
> Key: JGRP-2086
> URL: https://issues.jboss.org/browse/JGRP-2086
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.3
> Environment: JDG 6.6.0 (jgroups-3.6.3.Final-redhat-4.jar)
> Reporter: Osamu Nagano
> Assignee: Bela Ban
> Fix For: 3.6.11, 4.0
>
>
> In most cases FD_SOCK detects a killed server immediately. But for an unknown reason, FD_SOCK keeps trying to create a new socket to the killed server. As a consequence, installing a new cluster view is delayed until FD_ALL is triggered.
> m04_n007_server.log shows the behaviour. There are 28 nodes (4 machines (m03, ..., m06) with 7 nodes (n001, ..., n007) on each), and all nodes on m03 are killed at the same time, at 15:07:34,543. FD_SOCK keeps trying to connect to a killed node, saying "socket address for m03_n001/clustered could not be fetched, retrying".
> {noformat}
> [n007] 15:07:39,543 TRACE [org.jgroups.protocols.FD_SOCK] (Timer-8,shared=udp) m04_n007/clustered: broadcasting SUSPECT message (suspected_mbrs=[m03_n005/clustered, m03_n007/clustered])
> [n007] 15:07:39,544 TRACE [org.jgroups.protocols.FD_SOCK] (INT-20,shared=udp) m04_n007/clustered: received SUSPECT message from m04_n007/clustered: suspects=[m03_n005/clustered, m03_n007/clustered]
> [n007] 15:07:39,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:40,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:41,546 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> [n007] 15:07:42,546 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n001/clustered, pingable_mbrs=[m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n006/clustered, m06_n001/clustered, m06_n002/clustered, m06_n003/clustered, m06_n004/clustered, m06_n005/clustered, m06_n006/clustered, m06_n007/clustered, m05_n001/clustered, m05_n002/clustered, m05_n003/clustered, m05_n004/clustered, m05_n005/clustered, m05_n006/clustered, m05_n007/clustered, m04_n001/clustered, m04_n002/clustered, m04_n003/clustered, m04_n004/clustered, m04_n005/clustered, m04_n006/clustered, m04_n007/clustered]
> [n007] 15:07:43,547 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: socket address for m03_n001/clustered could not be fetched, retrying
> ...
> [n007] 15:10:53,700 DEBUG [org.jgroups.protocols.FD_ALL] (Timer-26,shared=udp) haven't received a heartbeat from m03_n005/clustered for 200059 ms, adding it to suspect list
> {noformat}
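> Until the retry loop itself is understood, the window shown above (about 200 s before FD_ALL fires) can only be narrowed by tuning. A hedged sketch against the JGroups 3.6 programmatic API; the stack is minimal and the timeout values are illustrative, not taken from the user's configuration:
> {noformat}
> import org.jgroups.JChannel;
> import org.jgroups.protocols.*;
> import org.jgroups.protocols.pbcast.GMS;
> import org.jgroups.protocols.pbcast.NAKACK2;
> import org.jgroups.protocols.pbcast.STABLE;
>
> public class FdTuningSketch {
>     public static void main(String[] args) throws Exception {
>         JChannel ch = new JChannel(
>                 new UDP(),
>                 new PING(),
>                 new MERGE3(),
>                 new FD_SOCK(),
>                 // Lower FD_ALL's timeout so it takes over sooner when
>                 // FD_SOCK is stuck retrying; the values are examples only.
>                 new FD_ALL().setValue("timeout", 30000L)
>                             .setValue("interval", 5000L),
>                 new VERIFY_SUSPECT(),
>                 new NAKACK2(),
>                 new UNICAST3(),
>                 new STABLE(),
>                 new GMS());
>         ch.connect("clustered");
>         ch.close();
>     }
> }
> {noformat}
> This only shortens failure detection when FD_SOCK stalls; it does not explain why the FD_SOCK address cache below is missing members.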
> From the TRACE log, you can see that the FD_SOCK address cache has only 23 members.
> {noformat}
> [n007] 14:40:50,471 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: got cache from m03_n005/clustered: cache is {
> m04_n006/clustered=172.20.66.34:9945,
> m05_n005/clustered=172.20.66.35:9938,
> m06_n004/clustered=172.20.66.36:9931,
> m03_n007/clustered=172.20.66.33:9952,
> m05_n001/clustered=172.20.66.35:9910,
> m06_n005/clustered=172.20.66.36:9938,
> m05_n006/clustered=172.20.66.35:9945,
> m03_n005/clustered=172.20.66.33:9938,
> m05_n004/clustered=172.20.66.35:9931,
> m04_n003/clustered=172.20.66.34:9924,
> m04_n007/clustered=172.20.66.34:9952,
> m05_n002/clustered=172.20.66.35:9917,
> m05_n003/clustered=172.20.66.35:9924,
> m04_n004/clustered=172.20.66.34:9931,
> m06_n001/clustered=172.20.66.36:9910,
> m06_n007/clustered=172.20.66.36:9952,
> m04_n005/clustered=172.20.66.34:9938,
> m04_n001/clustered=172.20.66.34:9910,
> m05_n007/clustered=172.20.66.35:9952,
> m06_n002/clustered=172.20.66.36:9917,
> m06_n006/clustered=172.20.66.36:9945,
> m04_n002/clustered=172.20.66.34:9917,
> m06_n003/clustered=172.20.66.36:9924}
> {noformat}
> Meanwhile, pingable_mbrs has all 28 members, which come from the currently installed cluster view.
> {noformat}
> [n007] 14:40:50,472 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,m04_n007/clustered) m04_n007/clustered: ping_dest is m03_n005/clustered, pingable_mbrs=[
> m03_n005/clustered,
> m03_n007/clustered,
> m03_n001/clustered,
> m03_n002/clustered,
> m03_n003/clustered,
> m03_n004/clustered,
> m03_n006/clustered,
> m06_n001/clustered,
> m06_n002/clustered,
> m06_n003/clustered,
> m06_n004/clustered,
> m06_n005/clustered,
> m06_n006/clustered,
> m06_n007/clustered,
> m05_n001/clustered,
> m05_n002/clustered,
> m05_n003/clustered,
> m05_n004/clustered,
> m05_n005/clustered,
> m05_n006/clustered,
> m05_n007/clustered,
> m04_n001/clustered,
> m04_n002/clustered,
> m04_n003/clustered,
> m04_n004/clustered,
> m04_n005/clustered,
> m04_n006/clustered,
> m04_n007/clustered]
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)