[JBoss JIRA] (ELY-646) Unable to setup CLIENT_CERT authentication with elytron.
by Jan Kalina (JIRA)
[ https://issues.jboss.org/browse/ELY-646?page=com.atlassian.jira.plugin.sy... ]
Jan Kalina commented on ELY-646:
--------------------------------
Note: The pull request in the header is sufficient to fix the problem, but to keep the subsystem tests green, the following subsystem pull request is also needed:
https://github.com/wildfly-security/elytron-subsystem/pull/240
(It changes the exception that is thrown when client-auth was not provided.)
> Unable to setup CLIENT_CERT authentication with elytron.
> --------------------------------------------------------
>
> Key: ELY-646
> URL: https://issues.jboss.org/browse/ELY-646
> Project: WildFly Elytron
> Issue Type: Bug
> Components: SSL
> Reporter: Martin Choma
> Assignee: Jan Kalina
> Priority: Blocker
>
> Following Zach's notes on [How to setup 2 way TLS|https://gitlab.cee.redhat.com/zrhoads/kbase/blob/master/eap71.elytron...] I am unable to set it up properly. The browser never asks the user to select a client certificate, and the user gets access to the application without a certificate.
> The log shows the following:
> 1. The server sends a certificate request:
> {code}
> 13:55:33,309 INFO [stdout] (default task-1) *** CertificateRequest
> 13:55:33,309 INFO [stdout] (default task-1) Cert Types: RSA, DSS, ECDSA
> 13:55:33,309 INFO [stdout] (default task-1) Cert Authorities:
> 13:55:33,310 INFO [stdout] (default task-1) <CN=client>
> {code}
> 2. The client responds with an empty certificate chain, without ever being asked for one:
> {code}
> 13:55:33,432 INFO [stdout] (default task-2) *** Certificate chain
> 13:55:33,432 INFO [stdout] (default task-2) <Empty>
> 13:55:33,432 INFO [stdout] (default task-2) ***
> {code}
> I am attaching:
> * server.log - server log with -Djavax.net.debug=all turned on.
> * 2wayTLS.pcap - Wireshark recording of port 8443
> * secured-app - tested application
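The handshake above (the server sends a CertificateRequest, the client answers with an empty chain, and the connection still proceeds) is what standard JSSE does when client authentication is merely wanted rather than needed. Below is a minimal plain-JSSE sketch of that distinction, independent of Elytron; the keystore/truststore paths and passwords are placeholders. Elytron's server-ssl-context exposes the same choice through its want-client-auth and need-client-auth attributes.
{code}
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.TrustManagerFactory;

public class ClientAuthSketch {

    public static void main(String[] args) throws Exception {
        // Server key material (placeholder path and password).
        KeyStore serverKeys = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("server.keystore")) {
            serverKeys.load(in, "secret".toCharArray());
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(serverKeys, "secret".toCharArray());

        // Trust store containing the client CA (the CN=client authority seen in the log).
        KeyStore trusted = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("server.truststore")) {
            trusted.load(in, "secret".toCharArray());
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trusted);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        try (SSLServerSocket server =
                     (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(8443)) {
            // want: a CertificateRequest is sent, but an empty client chain is still accepted
            // (the symptom described in this issue).
            // need: the handshake is rejected unless the client presents a certificate.
            server.setNeedClientAuth(true);   // not merely setWantClientAuth(true)
            server.accept().close();          // sketch only: accept one connection and close it
        }
    }
}
{code}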
[JBoss JIRA] (WFLY-7240) CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-7240?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-7240:
---------------------------------
Fix Version/s: 11.0.0.Alpha1
> CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
> -----------------------------------------------------------------------------------------------
>
> Key: WFLY-7240
> URL: https://issues.jboss.org/browse/WFLY-7240
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final, 10.1.0.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 11.0.0.Alpha1
>
>
> One manifestation of the issue:
> # start 2 nodes with an SLSB, using the TUNNEL transport
> # start both nodes, creating 2 clusters (or partitions)
> # start the EJB client
> # start the GossipRouter and wait for the merge
> # the EJB client keeps talking only to the node it already knows; it never receives a topology update
> This is because org.wildfly.clustering.server.registry.CacheRegistry#topologyChanged does not handle cluster merges, and thus all entries from a given partition are lost forever.
> The most notable effects are missing client mappings and broken session stickiness.
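To make concrete what handling a merge could involve for a registry backed by an Infinispan cache, here is a hedged sketch (an illustration using the public Infinispan listener API, not the actual CacheRegistry fix): listen for the merge view and re-publish this node's own entry so that members arriving from the other partition see it again. The cache layout and entry type are hypothetical.
{code}
import java.util.Map;

import org.infinispan.Cache;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.Merged;
import org.infinispan.notifications.cachemanagerlistener.event.MergeEvent;
import org.infinispan.remoting.transport.Address;

/**
 * Illustrative only: re-register this node's registry entry after a cluster merge,
 * so that an entry dropped while the partitions were separate becomes visible again.
 */
@Listener
public class MergeAwareRegistry {

    private final Cache<Address, Map<String, String>> registryCache; // hypothetical cache layout
    private final Address localAddress;
    private final Map<String, String> localEntry;                    // e.g. this node's client mappings

    public MergeAwareRegistry(Cache<Address, Map<String, String>> registryCache,
                              Address localAddress,
                              Map<String, String> localEntry) {
        this.registryCache = registryCache;
        this.localAddress = localAddress;
        this.localEntry = localEntry;
        registryCache.getCacheManager().addListener(this);
    }

    @Merged
    public void onMerge(MergeEvent event) {
        // After a merge, entries contributed by the losing partition(s) may have been
        // discarded, so unconditionally re-publish this node's own entry.
        registryCache.put(localAddress, localEntry);
    }
}
{code}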
[JBoss JIRA] (WFLY-7240) CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-7240?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-7240:
---------------------------------
Description:
One manifestation of the issue:
# start 2 nodes with an SLSB, using the TUNNEL transport
# start both nodes, creating 2 clusters (or partitions)
# start the EJB client
# start the GossipRouter and wait for the merge
# the EJB client keeps talking only to the node it already knows; it never receives a topology update
This is because org.wildfly.clustering.server.registry.CacheRegistry#topologyChanged does not handle cluster merges and thus all entries from a given partition are lost forever.
The most notable effects are missing client mappings and broken session stickiness.
was:
One manifestation of the issue:
# start 2 nodes with an SLSB, using the TUNNEL transport
# start both nodes, creating 2 clusters (or partitions)
# start the EJB client
# start the GossipRouter and wait for the merge
# the EJB client keeps talking only to the node it already knows; it never receives a topology update
This is because org.wildfly.clustering.server.registry.CacheRegistry#topologyChanged does not handle cluster merges.
> CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
> -----------------------------------------------------------------------------------------------
>
> Key: WFLY-7240
> URL: https://issues.jboss.org/browse/WFLY-7240
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final, 10.1.0.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 11.0.0.Alpha1
>
>
> One manifestation of the issue:
> # start 2 nodes with an SLSB, using the TUNNEL transport
> # start both nodes, creating 2 clusters (or partitions)
> # start the EJB client
> # start the GossipRouter and wait for the merge
> # the EJB client keeps talking only to the node it already knows; it never receives a topology update
> This is because org.wildfly.clustering.server.registry.CacheRegistry#topologyChanged does not handle cluster merges, and thus all entries from a given partition are lost forever.
> The most notable effects are missing client mappings and broken session stickiness.
[JBoss JIRA] (WFLY-7242) CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
by Radoslav Husar (JIRA)
Radoslav Husar created WFLY-7242:
------------------------------------
Summary: CacheRegistry is missing entries (e.g. client mappings) following a merge after a cluster split
Key: WFLY-7242
URL: https://issues.jboss.org/browse/WFLY-7242
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 10.0.0.Final, 10.1.0.Final
Reporter: Radoslav Husar
Assignee: Radoslav Husar
Priority: Critical
Fix For: 11.0.0.Alpha1
One manifestation of the issue:
# start 2 nodes with an SLSB, using the TUNNEL transport
# start both nodes, creating 2 clusters (or partitions)
# start the EJB client
# start the GossipRouter and wait for the merge
# the EJB client keeps talking only to the node it already knows; it never receives a topology update
This is because org.wildfly.clustering.server.registry.CacheRegistry#topologyChanged does not handle cluster merges and thus all entries from a given partition are lost forever.
The most notable effects are missing client mappings and broken session stickiness.
[JBoss JIRA] (DROOLS-1313) Memory Leak - but is this a supported scenario for Dynamic rule management
by Bill Tuminaro (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1313?page=com.atlassian.jira.plugi... ]
Bill Tuminaro updated DROOLS-1313:
----------------------------------
Attachment: SimpleTest2_dump2.PNG
SimpleTest2_dump1.PNG
SimpleTest2_dump3.PNG
SAVE_SimpleTest2.java
> Memory Leak - but is this a supported scenario for Dynamic rule management
> --------------------------------------------------------------------------
>
> Key: DROOLS-1313
> URL: https://issues.jboss.org/browse/DROOLS-1313
> Project: Drools
> Issue Type: Bug
> Components: core engine
> Affects Versions: 6.3.0.Final
> Reporter: Bill Tuminaro
> Assignee: Mario Fusco
> Attachments: SAVE_SimpleTest.java, SAVE_SimpleTest2.java, SimpleTest2_dump1.PNG, SimpleTest2_dump2.PNG, SimpleTest2_dump3.PNG, SimpleTestDump1.PNG, SimpleTestDump2.PNG, SimpleTestDump3.PNG
>
>
> I have a reproducer that shows a clear memory leak, based on heap dumps I created and reviewed with the Eclipse Memory Analyzer tool (http://www.eclipse.org/mat/).
> However, I am not sure this is a supported scenario. If it is a supported approach, this needs to get fixed; otherwise we need to use another approach.
> The attached source does this:
> +*Initialize stuff*+
> - Create a new ReleaseId
> - Create a new KieFileSystem
> - Generate and write the PomXML for the ReleaseId created above
> - Create a new KieModuleModel
> - Create a new KieBaseModel
> - Write the ModuleModel XML to the KieFileSystem
> - Write 2 rules into the KieFileSystem
> +*1st build and dump*+
> - Create a new KieBuilder
> - Call buildAll() on the KieBuilder
> - Create a new KieContainer
> - Create a new KieSession from the KieContainer
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestFirstDump.dmp); see SimpleTestDump1.png. As you can see, we have 2 classloaders for each class created for these rules. This is not the leak yet; I am just curious whether this is expected.
> +*2nd build and dump*+
> - Delete 2 rules from the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Add 2 new rules to the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestSecondDump.dmp); see SimpleTestDump2.png.
> - Rule_120_Triggered_Part_1_ 0 is not there
> - Another classloader and more instances of Rule_Internal_rule_0_DefaultConsequenceInvoker are present (I think this is the leak)
> +*3rd build and dump*+
> - Delete 1 rule from the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Add 2 new rules to the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestThirdDump.dmp); see SimpleTestDump3.png.
> - Rule_120_Triggered_Part_1_ 0 is STILL not there
> - TWO more classloaders and instances of Rule_Internal_rule_0_DefaultConsequenceInvoker are present (I think this is the leak)
> - Another classloader and instances of Rule_120_Triggered_part_10DefaultConsequenceInvoker are present (I think this is also part of the leak)
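For readers without the attachment, here is a condensed, hedged sketch of the build-and-update pattern the steps above describe, using the public KIE API. The group/artifact coordinates, resource paths and DRL content are placeholders, and the incremental step uses InternalKieBuilder.createFileSet(), which is one common way such an incrementalBuild() helper is implemented.
{code}
import org.drools.compiler.kie.builder.impl.InternalKieBuilder;
import org.kie.api.KieServices;
import org.kie.api.builder.KieBuilder;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class IncrementalUpdateSketch {

    private static final String RULE_A = "package com.sample\nrule \"A\" when then end\n";
    private static final String RULE_B = "package com.sample\nrule \"B\" when then end\n";

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        ReleaseId releaseId = ks.newReleaseId("com.sample", "dynamic-rules", "1.0.0"); // placeholder GAV

        // Initialize: file system, pom, two rules.
        KieFileSystem kfs = ks.newKieFileSystem();
        kfs.generateAndWritePomXML(releaseId);
        kfs.write("src/main/resources/com/sample/ruleA.drl", RULE_A);
        kfs.write("src/main/resources/com/sample/ruleB.drl", RULE_B);

        // 1st build: full build, then container and session.
        KieBuilder kieBuilder = ks.newKieBuilder(kfs);
        kieBuilder.buildAll();
        KieContainer kieContainer = ks.newKieContainer(releaseId);
        KieSession kieSession = kieContainer.newKieSession();

        // 2nd build: delete a rule, rebuild only the changed resource,
        // then update the SAME container and ReleaseId in place.
        kfs.delete("src/main/resources/com/sample/ruleA.drl");
        ((InternalKieBuilder) kieBuilder)
                .createFileSet("src/main/resources/com/sample/ruleA.drl")
                .build();
        kieContainer.updateToVersion(releaseId);

        kieSession.dispose();
    }
}
{code}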
[JBoss JIRA] (DROOLS-1313) Memory Leak - but is this a supported scenario for Dynamic rule management
by Bill Tuminaro (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1313?page=com.atlassian.jira.plugi... ]
Bill Tuminaro commented on DROOLS-1313:
---------------------------------------
I wrote another reproducer (SAVE_SimpleTest2.java) that uses the approach outlined in the org.drools.compiler.integrationtests.incrementalcompilation.IncrementalCompilationTest.java code.
I am really confused now: this approach seems to generate even more classloaders, classes and instances.
Can you shed some light on the correct way to dynamically modify the rules in a module AND not keep consuming memory/heap space?
I have attached the code and 3 MAT screenshots.
-BillT
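For comparison, the IncrementalCompilationTest-style approach generally builds a complete new KieModule under a bumped ReleaseId and then moves the container to it with updateToVersion(), rather than rebuilding in place under the same id. Here is a hedged sketch of that pattern; the coordinates, paths and rule bodies are placeholders, and this is not the test's exact code.
{code}
import org.kie.api.KieServices;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

public class VersionBumpSketch {

    private static void buildModule(KieServices ks, ReleaseId releaseId, String... drls) {
        KieFileSystem kfs = ks.newKieFileSystem();
        kfs.generateAndWritePomXML(releaseId);
        int i = 0;
        for (String drl : drls) {
            kfs.write("src/main/resources/com/sample/rule" + (i++) + ".drl", drl);
        }
        ks.newKieBuilder(kfs).buildAll(); // also deploys the module into the KieRepository
    }

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        String ruleV1 = "package com.sample\nrule \"R1\" when then end\n";
        String ruleV2 = "package com.sample\nrule \"R2\" when then end\n";

        ReleaseId v1 = ks.newReleaseId("com.sample", "dynamic-rules", "1.0.0"); // placeholder GAV
        buildModule(ks, v1, ruleV1);
        KieContainer kieContainer = ks.newKieContainer(v1);

        // Later: build a new module under a bumped version and swap the container over to it.
        ReleaseId v2 = ks.newReleaseId("com.sample", "dynamic-rules", "1.1.0");
        buildModule(ks, v2, ruleV2);
        kieContainer.updateToVersion(v2);
    }
}
{code}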
> Memory Leak - but is this a supported scenario for Dynamic rule management
> --------------------------------------------------------------------------
>
> Key: DROOLS-1313
> URL: https://issues.jboss.org/browse/DROOLS-1313
> Project: Drools
> Issue Type: Bug
> Components: core engine
> Affects Versions: 6.3.0.Final
> Reporter: Bill Tuminaro
> Assignee: Mario Fusco
> Attachments: SAVE_SimpleTest.java, SimpleTestDump1.PNG, SimpleTestDump2.PNG, SimpleTestDump3.PNG
>
>
> I have a reproducer that shows a clear memory leak, based on heap dumps I created and reviewed with the Eclipse Memory Analyzer tool (http://www.eclipse.org/mat/).
> However, I am not sure this is a supported scenario. If it is a supported approach, this needs to get fixed; otherwise we need to use another approach.
> The attached source does this:
> +*Initialize stuff*+
> - Create a new ReleaseId
> - Create a new KieFileSystem
> - Generate and write the PomXML for the ReleaseId created above
> - Create a new KieModuleModel
> - Create a new KieBaseModel
> - Write the ModuleModel XML to the KieFileSystem
> - Write 2 rules into the KieFileSystem
> +*1st build and dump*+
> - Create a new KieBuilder
> - Call buildAll() on the KieBuilder
> - Create a new KieContainer
> - Create a new KieSession from the KieContainer
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestFirstDump.dmp); see SimpleTestDump1.png. As you can see, we have 2 classloaders for each class created for these rules. This is not the leak yet; I am just curious whether this is expected.
> +*2nd build and dump*+
> - Delete 2 rules from the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Add 2 new rules to the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestSecondDump.dmp); see SimpleTestDump2.png.
> - Rule_120_Triggered_Part_1_ 0 is not there
> - Another classloader and more instances of Rule_Internal_rule_0_DefaultConsequenceInvoker are present (I think this is the leak)
> +*3rd build and dump*+
> - Delete 1 rule from the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Add 2 new rules to the KieFileSystem created above
> - Call incrementalBuild() on the KieBuilder created above
> - Call updateToVersion() on the KieContainer created above, using the SAME ReleaseId created above
> - Print out the rules in the KieContainer for the package used in my rules
> - Create a Java heap dump (SimpleTestThirdDump.dmp); see SimpleTestDump3.png.
> - Rule_120_Triggered_Part_1_ 0 is STILL not there
> - TWO more classloaders and instances of Rule_Internal_rule_0_DefaultConsequenceInvoker are present (I think this is the leak)
> - Another classloader and instances of Rule_120_Triggered_part_10DefaultConsequenceInvoker are present (I think this is also part of the leak)
[JBoss JIRA] (DROOLS-1314) Compilation of spreadsheet fails with specific condition
by Alessandro Lazarotti (JIRA)
Alessandro Lazarotti created DROOLS-1314:
--------------------------------------------
Summary: Compilation of spreadsheet fails with specific condition
Key: DROOLS-1314
URL: https://issues.jboss.org/browse/DROOLS-1314
Project: Drools
Issue Type: Bug
Components: build, decision tables
Affects Versions: 6.4.0.Final
Reporter: Alessandro Lazarotti
Assignee: Michael Anstis
Priority: Critical
Fix For: 6.5.0.Final
A reproducer is attached, with the problematic spreadsheet named SampleNG.xml.
Compilation of this fails with the following error:
{code}
java.lang.RuntimeException: Error while creating KieBase[Message [id=1, level=ERROR, path=dtables/SampleNG.xls, line=8, column=0
text=Unable to Analyse Expression checktest == AAA:
[Error: unable to resolve method using strict-mode: com.sample.DecisionTableTest$Message.AAA()]
[Near : {... checktest == AAA ....}]
{code}
This happens because the following DRL is generated:
rule "HelloWorld_12"
when
m:Message(checktest in (AAA), status == "Message.HELLO")
...
i.e. the double quotes around the value ("AAA") specified in the cell are removed.
If the D column does not exist, as in Sample.xml (also included in the reproducer), or the rule template for the D column is changed to "status == $param" (see SampleOK.xml), this does not happen.
[JBoss JIRA] (WFCORE-1760) Extension initialization handling makes use of PersistentResourceDefinition overly difficult
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1760?page=com.atlassian.jira.plugi... ]
Brian Stansberry resolved WFCORE-1760.
--------------------------------------
Fix Version/s: 3.0.0.Alpha9
Assignee: Tomaz Cerar
Resolution: Done
Tomaz solved this problem in a much much much simpler way with WFCORE-1831. :)
> Extension initialization handling makes use of PersistentResourceDefinition overly difficult
> --------------------------------------------------------------------------------------------
>
> Key: WFCORE-1760
> URL: https://issues.jboss.org/browse/WFCORE-1760
> Project: WildFly Core
> Issue Type: Enhancement
> Components: Domain Management
> Reporter: Brian Stansberry
> Assignee: Tomaz Cerar
> Fix For: 3.0.0.Alpha9
>
>
> PersistentResourceXMLBuilder.build() requires a PersistentResourceDefinition as an input. This is a problem because the resulting PersistentResourceXMLDescription is needed to initialize parsers in Extension.initializeParsers(), which is called *before* Extension.initialize(), and Extension.initialize() is when the PersistentResourceDefinition would normally be constructed.
> An Extension implementation could overcome this by maintaining internal state: construct the PersistentResourceDefinition in initializeParsers() and store it in an instance field for use in initialize(), or vice versa. That gets messy, though, as the Extension impl now has to worry about the order in which the two methods are called and track whether both have been called so it can drop the cached object.
> One possibility is to have the ExtensionContext and ExtensionParsingContext offer an attachment API, with the lifecycle of attachments documented as being scoped to a single overall extension initialization. That could work, but it isn't very elegant.
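To illustrate the instance-field workaround described above, here is a hedged sketch. The subsystem name, namespace, the trivial MySubsystemDefinition and the anonymous parser are made up for the example, and the exact PersistentResourceXMLDescription builder signature varies between WildFly Core versions.
{code}
import java.util.Collection;
import java.util.Collections;

import org.jboss.as.controller.AttributeDefinition;
import org.jboss.as.controller.Extension;
import org.jboss.as.controller.ExtensionContext;
import org.jboss.as.controller.ModelVersion;
import org.jboss.as.controller.PathElement;
import org.jboss.as.controller.PersistentResourceDefinition;
import org.jboss.as.controller.PersistentResourceXMLDescription;
import org.jboss.as.controller.PersistentResourceXMLParser;
import org.jboss.as.controller.SubsystemRegistration;
import org.jboss.as.controller.descriptions.NonResolvingResourceDescriptionResolver;
import org.jboss.as.controller.parsing.ExtensionParsingContext;

public class MySubsystemExtension implements Extension {

    static final String SUBSYSTEM_NAME = "my-subsystem";
    static final String NAMESPACE = "urn:example:my-subsystem:1.0";

    // Shared between the two callbacks; whichever runs first constructs it.
    private volatile PersistentResourceDefinition rootDefinition;

    private PersistentResourceDefinition rootDefinition() {
        PersistentResourceDefinition def = rootDefinition;
        if (def == null) {
            rootDefinition = def = new MySubsystemDefinition();
        }
        return def;
    }

    @Override
    public void initializeParsers(ExtensionParsingContext context) {
        // Runs before initialize(), yet already needs the resource definition
        // to build the PersistentResourceXMLDescription; hence the caching.
        final PersistentResourceDefinition definition = rootDefinition();
        context.setSubsystemXmlMapping(SUBSYSTEM_NAME, NAMESPACE, new PersistentResourceXMLParser() {
            @Override
            public PersistentResourceXMLDescription getParserDescription() {
                return PersistentResourceXMLDescription.builder(definition).build();
            }
        });
    }

    @Override
    public void initialize(ExtensionContext context) {
        SubsystemRegistration subsystem =
                context.registerSubsystem(SUBSYSTEM_NAME, ModelVersion.create(1, 0, 0));
        subsystem.registerSubsystemModel(rootDefinition());
    }

    /** Minimal placeholder resource definition with no attributes or children. */
    static class MySubsystemDefinition extends PersistentResourceDefinition {

        MySubsystemDefinition() {
            super(PathElement.pathElement("subsystem", SUBSYSTEM_NAME),
                  new NonResolvingResourceDescriptionResolver());
        }

        @Override
        public Collection<AttributeDefinition> getAttributes() {
            return Collections.emptyList();
        }
    }
}
{code}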
[JBoss JIRA] (WFLY-6784) Add possibility to enable websocket compression via management model
by Kabir Khan (JIRA)
[ https://issues.jboss.org/browse/WFLY-6784?page=com.atlassian.jira.plugin.... ]
Kabir Khan commented on WFLY-6784:
----------------------------------
[~swd847] [~iweiss] The PR has been reverted since it caused TCK failures. [~smarlow] can provide more details.
> Add possibility to enable websocket compression via management model
> --------------------------------------------------------------------
>
> Key: WFLY-6784
> URL: https://issues.jboss.org/browse/WFLY-6784
> Project: WildFly
> Issue Type: Feature Request
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Radim Hatlapatka
> Assignee: Ingo Weiss
> Priority: Critical
> Labels: downstream_dependency
> Fix For: 11.0.0.Alpha1
>
> Original Estimate: 2 days
> Time Spent: 2 days
> Remaining Estimate: 0 minutes
>
> In EAP 6, WebSocket compression was enabled by default, allowing the use of pre-deflate compression when requested by the client.
> There is support for it in Undertow, but there is no option to enable it in WildFly 10. This option should be added to WildFly and should probably be set to true by default, as that would be consistent with the default behaviour when using WebSockets with EAP 6.4.
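For reference, a hedged sketch of what the Undertow-level support looks like when configuring a JSR-356 deployment programmatically, assuming Undertow's PerMessageDeflateHandshake extension and WebSocketDeploymentInfo.addExtension(); it illustrates the capability a management model attribute would toggle, not the WildFly subsystem change requested here.
{code}
import io.undertow.servlet.Servlets;
import io.undertow.servlet.api.DeploymentInfo;
import io.undertow.websockets.extensions.PerMessageDeflateHandshake;
import io.undertow.websockets.jsr.WebSocketDeploymentInfo;

public class WebSocketCompressionSketch {

    public static DeploymentInfo deploymentWithCompressedWebSockets() {
        // Registering the permessage-deflate extension handshake is what allows
        // compressed frames for clients that request the extension.
        WebSocketDeploymentInfo webSockets = new WebSocketDeploymentInfo();
        webSockets.addExtension(new PerMessageDeflateHandshake());

        return Servlets.deployment()
                .setDeploymentName("websocket-compression-sketch.war")
                .setContextPath("/ws")
                .setClassLoader(WebSocketCompressionSketch.class.getClassLoader())
                .addServletContextAttribute(WebSocketDeploymentInfo.ATTRIBUTE_NAME, webSockets);
    }
}
{code}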
[JBoss JIRA] (WFLY-6784) Add possibility to enable websocket compression via management model
by Kabir Khan (JIRA)
[ https://issues.jboss.org/browse/WFLY-6784?page=com.atlassian.jira.plugin.... ]
Kabir Khan reopened WFLY-6784:
------------------------------
> Add possibility to enable websocket compression via management model
> --------------------------------------------------------------------
>
> Key: WFLY-6784
> URL: https://issues.jboss.org/browse/WFLY-6784
> Project: WildFly
> Issue Type: Feature Request
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Radim Hatlapatka
> Assignee: Ingo Weiss
> Priority: Critical
> Labels: downstream_dependency
> Fix For: 11.0.0.Alpha1
>
> Original Estimate: 2 days
> Time Spent: 2 days
> Remaining Estimate: 0 minutes
>
> In EAP 6, WebSocket compression was enabled by default, allowing the use of pre-deflate compression when requested by the client.
> There is support for it in Undertow, but there is no option to enable it in WildFly 10. This option should be added to WildFly and should probably be set to true by default, as that would be consistent with the default behaviour when using WebSockets with EAP 6.4.