[JBoss JIRA] (JBRULES-1058) nested accessors with Sets - "not contains" is not a valid operator for MVEL
by Rahul Vanimisetty (JIRA)
[ https://issues.jboss.org/browse/JBRULES-1058?page=com.atlassian.jira.plug... ]
Rahul Vanimisetty commented on JBRULES-1058:
--------------------------------------------
I have an MVEL expression that requires the "not contains" operator. The expression is
input.shipment.sHIPMENTDESC not contains 'GINGER'
I get an MVEL compilation error when I try the above in a JUnit test. The stack trace is below:
at org.mvel2.compiler.AbstractParser.procTypedNode(AbstractParser.java:1505)
at org.mvel2.compiler.AbstractParser.createPropertyToken(AbstractParser.java:1405)
at org.mvel2.compiler.AbstractParser.nextToken(AbstractParser.java:893)
at org.mvel2.compiler.ExpressionCompiler._compile(ExpressionCompiler.java:126)
at org.mvel2.compiler.ExpressionCompiler.compile(ExpressionCompiler.java:67)
at org.mvel2.MVEL.compileExpression(MVEL.java:810)
at org.mvel2.MVEL.compileExpression(MVEL.java:819)
at org.mvel2.MVEL.compileExpression(MVEL.java:723)
What is the solution to this problem? Can "not contains" be made a valid operator? The workaround I found is to use
!(input.shipment.sHIPMENTDESC contains 'GINGER').
But life is easier when the "not contains" operator is available. Please advise.
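For reference, the predicate the MVEL expression encodes can be sketched in plain Java. This is a minimal illustration only; the shipment accessor is a hypothetical stand-in for the reporter's model, which is not shown in the issue:

```java
import java.util.Objects;

public class NotContainsSketch {
    // Equivalent of: input.shipment.sHIPMENTDESC not contains 'GINGER'
    // (negating String.contains, which is what the workaround above does by hand).
    static boolean descLacksGinger(String shipmentDesc) {
        return !Objects.requireNonNull(shipmentDesc).contains("GINGER");
    }

    public static void main(String[] args) {
        System.out.println(descLacksGinger("FRESH GINGER ROOT")); // false
        System.out.println(descLacksGinger("DRIED MANGO"));       // true
    }
}
```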
> nested accessors with Sets - "not contains" is not a valid operator for MVEL
> ----------------------------------------------------------------------------
>
> Key: JBRULES-1058
> URL: https://issues.jboss.org/browse/JBRULES-1058
> Project: JBRULES
> Issue Type: Bug
> Components: drools-compiler
> Affects Versions: 4.0.0.GA
> Reporter: Mark McNally
> Assignee: Edson Tirelli
> Fix For: 4.0.1
>
>
> Following does not work:
> rule StateMatch
> when
> $ca:CandidateAssociation(nurseDetails.stateLicensures excludes patientDetails.state )
> then
> retract( $ca );
> end
>
>
> public class CandidateAssociation {
> private PatientDetails patientDetails;
> private NurseDetails nurseDetails;
> private int overlapHours;
>
> public CandidateAssociation( PatientDetails patientDetails, NurseDetails nurseDetails) {
> super();
> this.patientDetails = patientDetails;
> this.nurseDetails = nurseDetails;
> overlapHours = patientDetails.getNumberOverlapHourCnt(nurseDetails);
> }
> [...]
> }
>
> public class NurseDetails {
> private Set stateLicensures = new HashSet();
> [...]
> }
> public class PatientDetails {
> private String state;
> [...]
> }
> Edson suggested that the problem is that "not contains" is not a valid operator for MVEL.
> Also noticed that the following workaround did not work:
> rule State
> dialect "mvel"
> when
> $ca:CandidateAssociation( eval ( ! nurseDetails.stateLicensures.contains( patientDetails.state ) ) )
> then
> retract( $ca );
> end
>
> This produced this Exception:
> org.drools.rule.InvalidRulePackage: Unable to determine the used declarations : [Rule name=State, agendaGroup=MAIN, salience=0, no-loop=false]
> at org.drools.rule.Package.checkValidity(Package.java:408)
> at org.drools.common.AbstractRuleBase.addPackage(AbstractRuleBase.java:288)
> at [...]
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
8 years, 1 month
[JBoss JIRA] (WFLY-10134) ee8.preview.mode property is racy
by David Lloyd (JIRA)
David Lloyd created WFLY-10134:
----------------------------------
Summary: ee8.preview.mode property is racy
Key: WFLY-10134
URL: https://issues.jboss.org/browse/WFLY-10134
Project: WildFly
Issue Type: Bug
Components: EE
Reporter: David Lloyd
Priority: Critical
The {{ee8-temp}} tests set the {{ee8.preview.mode}} property in the server management model, relying on system properties to get parsed and set before extensions which use Java EE 8 APIs are loaded. This assumption appears to be invalid.
System properties are installed by the boot controller thread, and extensions are loaded in server service threads. In testing with the latest jboss-modules, I've observed cases where the controller thread does not add system properties until after some extension loading has happened in the server service threads. I haven't untangled why this happens only with the most recent jboss-modules in play, but the race exists regardless.
Setting the {{ee8.preview.mode}} in {{arquillian.xml}} appears to work around the issue.
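The ordering problem can be sketched with a latch: the extension-loading threads must not read the property before the controller thread has installed it. The class and thread names below are illustrative, not WildFly's actual internals:

```java
import java.util.concurrent.CountDownLatch;

public class PropertyRaceSketch {
    // Released once the boot controller has installed system properties.
    static final CountDownLatch propertiesInstalled = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        Thread controller = new Thread(() -> {
            System.setProperty("ee8.preview.mode", "true");
            propertiesInstalled.countDown();
        }, "boot-controller");

        Thread extensionLoader = new Thread(() -> {
            try {
                // Without this wait, the read below races with the write above.
                propertiesInstalled.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println(System.getProperty("ee8.preview.mode")); // "true"
        }, "server-service");

        extensionLoader.start();
        controller.start();
        controller.join();
        extensionLoader.join();
    }
}
```

Without the `await()`, the extension loader may observe a null property, which is the failure mode described above; setting the property in {{arquillian.xml}} sidesteps the race entirely.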
--
[JBoss JIRA] (WFLY-10133) JBoss Web migrate op does not create security realms when invoked from embedded process
by Eduardo Martins (JIRA)
[ https://issues.jboss.org/browse/WFLY-10133?page=com.atlassian.jira.plugin... ]
Eduardo Martins updated WFLY-10133:
-----------------------------------
Description:
When migrating a legacy JBoss Web subsystem configuration that includes an SSL connector, the security realm referenced by the resulting Undertow subsystem configuration is not created if the migrate op is invoked by a standalone embedded process.
The concrete issue is that the migrate op logic is, as expected, different depending on whether the process type in context relates to a managed domain configuration, but it wrongly treats any ProcessType other than STANDALONE or SELF_CONTAINED as managed domain, applying the wrong logic to process types such as EMBEDDED_SERVER.
was:
When migrating a legacy JBoss Web subsystem configuration that includes an SSL connector, the security realm referenced by the resulting Undertow subsystem configuration is not created if the migrate op is invoked by a standalone embedded process.
The concrete issue is that the migrate op logic is, as expected, different depending on whether the process type in context relates to a managed domain configuration, but it wrongly treats any ProcessType other than STANDALONE or SELF_CONTAINED as managed domain.
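A minimal sketch of the faulty check versus the fix. The ProcessType values are modeled after the ones named in the description, and the method name isManagedDomain is illustrative, not WildFly's actual API:

```java
public class ProcessTypeSketch {
    // Modeled after the process types named in the issue description.
    enum ProcessType { STANDALONE, SELF_CONTAINED, EMBEDDED_SERVER, HOST_CONTROLLER }

    // Buggy: anything that is not STANDALONE or SELF_CONTAINED is treated as
    // managed domain, so EMBEDDED_SERVER wrongly takes the domain-mode path.
    static boolean isManagedDomainBuggy(ProcessType type) {
        return type != ProcessType.STANDALONE && type != ProcessType.SELF_CONTAINED;
    }

    // Fixed: only genuinely domain-mode process types take the domain path.
    static boolean isManagedDomainFixed(ProcessType type) {
        return type == ProcessType.HOST_CONTROLLER;
    }

    public static void main(String[] args) {
        System.out.println(isManagedDomainBuggy(ProcessType.EMBEDDED_SERVER)); // true (wrong)
        System.out.println(isManagedDomainFixed(ProcessType.EMBEDDED_SERVER)); // false (right)
    }
}
```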
> JBoss Web migrate op does not create security realms when invoked from embedded process
> ---------------------------------------------------------------------------------------
>
> Key: WFLY-10133
> URL: https://issues.jboss.org/browse/WFLY-10133
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Reporter: Eduardo Martins
> Assignee: Eduardo Martins
>
> When migrating a legacy JBoss Web subsystem configuration that includes an SSL connector, the security realm referenced by the resulting Undertow subsystem configuration is not created if the migrate op is invoked by a standalone embedded process.
> The concrete issue is that the migrate op logic is, as expected, different depending on whether the process type in context relates to a managed domain configuration, but it wrongly treats any ProcessType other than STANDALONE or SELF_CONTAINED as managed domain, applying the wrong logic to process types such as EMBEDDED_SERVER.
--
[JBoss JIRA] (WFLY-10133) JBoss Web migrate op does not create security realms when invoked from embedded process
by Eduardo Martins (JIRA)
Eduardo Martins created WFLY-10133:
--------------------------------------
Summary: JBoss Web migrate op does not create security realms when invoked from embedded process
Key: WFLY-10133
URL: https://issues.jboss.org/browse/WFLY-10133
Project: WildFly
Issue Type: Bug
Components: Web (Undertow)
Reporter: Eduardo Martins
Assignee: Stuart Douglas
When migrating a legacy JBoss Web subsystem configuration that includes an SSL connector, the security realm referenced by the resulting Undertow subsystem configuration is not created if the migrate op is invoked by a standalone embedded process.
The concrete issue is that the migrate op logic is, as expected, different depending on whether the process type in context relates to a managed domain configuration, but it wrongly treats any ProcessType other than STANDALONE or SELF_CONTAINED as managed domain.
--
[JBoss JIRA] (WFLY-10133) JBoss Web migrate op does not create security realms when invoked from embedded process
by Eduardo Martins (JIRA)
[ https://issues.jboss.org/browse/WFLY-10133?page=com.atlassian.jira.plugin... ]
Eduardo Martins reassigned WFLY-10133:
--------------------------------------
Assignee: Eduardo Martins (was: Stuart Douglas)
> JBoss Web migrate op does not create security realms when invoked from embedded process
> ---------------------------------------------------------------------------------------
>
> Key: WFLY-10133
> URL: https://issues.jboss.org/browse/WFLY-10133
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Reporter: Eduardo Martins
> Assignee: Eduardo Martins
>
> When migrating a legacy JBoss Web subsystem configuration that includes an SSL connector, the security realm referenced by the resulting Undertow subsystem configuration is not created if the migrate op is invoked by a standalone embedded process.
> The concrete issue is that the migrate op logic is, as expected, different depending on whether the process type in context relates to a managed domain configuration, but it wrongly treats any ProcessType other than STANDALONE or SELF_CONTAINED as managed domain.
--
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2260:
--------------------------------
OK, I'll try to set up a scenario like this. I'm not sure the ForkChannel plays a role, as the physical addresses for the destinations FC1 and FC2 are the same even though FC1 != FC2.
I'll post my findings here.
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node. Its send_table still contains an entry for the dead node with state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed.
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
> <channels default="ee">
> <channel name="ee" stack="main">
> <fork name="shared-dispatcher"/>
> <fork name="group-topology"/>
> </channel>
> </channels>
> <stacks>
> <stack name="main">
> <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
> <protocol type="PING"/>
> <protocol type="MERGE3">
> <property name="min_interval">
> 1000
> </property>
> <property name="max_interval">
> 5000
> </property>
> </protocol>
> <protocol type="FD_SOCK"/>
> <protocol type="FD_ALL2">
> <property name="interval">
> 3000
> </property>
> <property name="timeout">
> 8000
> </property>
> </protocol>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS">
> <property name="join_timeout">
> 100
> </property>
> </protocol>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG3"/>
> </stack>
> </stacks>
> </subsystem>
> {code}
--
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Rich DiCroce (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Rich DiCroce commented on JGRP-2260:
------------------------------------
I don't have a reproducer for you. Try setting up a scenario like this:
* Start 2 nodes (A and B)
* Send a message from A to B
* Stop B
* Send another message from A to B
I'm pretty sure the ForkChannel and MessageDispatcher don't have anything to do with this. It looks more like a race that occurs if a message gets sent to a single node after the destination node has left the view.
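The reopen race described in the issue can be sketched as a tiny state machine. Connection, closeConnection and getSenderEntry here are simplified stand-ins for the UNICAST3 internals, not the actual JGroups code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Unicast3RaceSketch {
    enum State { OPEN, CLOSING }

    static class Connection { volatile State state = State.OPEN; }

    static final Map<String, Connection> sendTable = new ConcurrentHashMap<>();

    // View change: mark connections of departed members for closing.
    static void closeConnection(String addr) {
        Connection c = sendTable.get(addr);
        if (c != null) c.state = State.CLOSING;
    }

    // Sending a message: looks up (or creates) the entry and reopens it.
    // This is the step that cancels the pending close.
    static Connection getSenderEntry(String addr) {
        Connection c = sendTable.computeIfAbsent(addr, a -> new Connection());
        c.state = State.OPEN;
        return c;
    }

    public static void main(String[] args) {
        getSenderEntry("B");   // A has talked to B before
        closeConnection("B");  // B leaves the view -> CLOSING
        getSenderEntry("B");   // message to B sent right after the view change
        // The entry is back to OPEN, so it will never be reaped and
        // retransmission to the dead node continues indefinitely.
        System.out.println(sendTable.get("B").state); // OPEN
    }
}
```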
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node. Its send_table still contains an entry for the dead node with state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed.
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
> <channels default="ee">
> <channel name="ee" stack="main">
> <fork name="shared-dispatcher"/>
> <fork name="group-topology"/>
> </channel>
> </channels>
> <stacks>
> <stack name="main">
> <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
> <protocol type="PING"/>
> <protocol type="MERGE3">
> <property name="min_interval">
> 1000
> </property>
> <property name="max_interval">
> 5000
> </property>
> </protocol>
> <protocol type="FD_SOCK"/>
> <protocol type="FD_ALL2">
> <property name="interval">
> 3000
> </property>
> <property name="timeout">
> 8000
> </property>
> </protocol>
> <protocol type="VERIFY_SUSPECT"/>
> <protocol type="pbcast.NAKACK2"/>
> <protocol type="UNICAST3"/>
> <protocol type="pbcast.STABLE"/>
> <protocol type="pbcast.GMS">
> <property name="join_timeout">
> 100
> </property>
> </protocol>
> <protocol type="UFC"/>
> <protocol type="MFC"/>
> <protocol type="FRAG3"/>
> </stack>
> </stacks>
> </subsystem>
> {code}
--