[JBoss JIRA] (DROOLS-2286) [DMN engine] Java Object in DMNContext not working properly with Filter function.
by Thomas Mantegazzi (JIRA)
Thomas Mantegazzi created DROOLS-2286:
-----------------------------------------
Summary: [DMN engine] Java Object in DMNContext not working properly with Filter function.
Key: DROOLS-2286
URL: https://issues.jboss.org/browse/DROOLS-2286
Project: Drools
Issue Type: Bug
Components: dmn engine
Affects Versions: 7.5.0.Final, 7.4.1.Final
Reporter: Thomas Mantegazzi
Assignee: Edson Tirelli
Attachments: FilterJohns.dmn, FilterOnObjectListBug.java
When trying to evaluate a FEEL expression like:
{code:java}
personList[name = "John"]
{code}
by inserting a _Java Object_ into the _DMNContext_, the _DMN engine_ doesn't seem to be able to fetch the _name_ field from the object. This doesn't happen if we insert a Map instead of a _Java Object_.
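For reference, a minimal sketch of the two ways of populating the context (the _Person_ class and all variable names here are illustrative, not taken from the attached reproducer):
{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FilterContextSketch {

    // Simple bean; the filter personList[name = "John"] should match on "name".
    public static class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        // Variant 1: a list of Java beans -- the filter returns no matches (the bug).
        List<Person> beans = Arrays.asList(new Person("John"), new Person("Paul"));

        // Variant 2: a list of Maps -- the filter works as expected.
        Map<String, Object> john = new HashMap<>();
        john.put("name", "John");
        Map<String, Object> paul = new HashMap<>();
        paul.put("name", "Paul");
        List<Map<String, Object>> maps = Arrays.asList(john, paul);

        // With a DMNContext (org.kie.dmn.api.core.DMNContext):
        //   dmnContext.set("personList", beans);  // name = "John" never matches
        //   dmnContext.set("personList", maps);   // name = "John" matches
    }
}
{code}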
While trying to debug, the problem seems to happen in the following method of _FilterExpressionNode_:
{code:java}
private void evaluateExpressionInContext(EvaluationContext ctx, List results, Object v) {
    try {
        ctx.enterFrame();
        // handle it as a predicate
        ctx.setValue( "item", v );
        // if it is a Map, need to add all string keys as variables in the context
        if( v instanceof Map ) {
            Set<Map.Entry> set = ((Map) v).entrySet();
            for( Map.Entry ce : set ) {
                if( ce.getKey() instanceof String ) {
                    ctx.setValue( (String) ce.getKey(), ce.getValue() );
                }
            }
        }
        Object r = this.filter.evaluate( ctx );
        if( r instanceof Boolean && ((Boolean)r) == Boolean.TRUE ) {
            results.add( v );
        }
    } finally {
        ctx.exitFrame();
    }
}
{code}
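Reading the method above, only the Map branch copies the item's entries into the evaluation context, which would explain why Maps work and Java beans don't. Purely as an illustration of that reading (a sketch, not the engine's actual fix), bean properties could be exposed the same way using {{java.beans.Introspector}}:
{code:java}
// Illustration only: expose bean getters as context variables, mirroring
// the existing Map branch. Not a patch against the actual engine code.
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.function.BiConsumer;

public class BeanPropertiesSketch {
    static void exposeBeanProperties(Object v, BiConsumer<String, Object> setValue) throws Exception {
        for (PropertyDescriptor pd : Introspector.getBeanInfo(v.getClass(), Object.class)
                                                 .getPropertyDescriptors()) {
            if (pd.getReadMethod() != null) {
                // e.g. getName() becomes the context variable "name"
                setValue.accept(pd.getName(), pd.getReadMethod().invoke(v));
            }
        }
    }
}
{code}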
Also attached are Java and DMN test files that showcase the issue.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroups JDBC_PING is not clearing crashed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2245:
---------------------------
Description:
1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
2) In this situation, JGroups does not clear logical_addr_cache and gets confused when we restart the cluster nodes.
3) logical_addr_cache_max_size and the eviction did not work because the cache keeps getting updated from the ping and never gets marked as removable.
I think the issue is that the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
{code:java}
// remove all files which are not from the current members
protected void handleView(View new_view, View old_view, boolean coord_changed) {
    if(is_coord) {
        if(coord_changed) {
            if(remove_all_data_on_view_change)
                removeAll(cluster_name);
            else if(remove_old_coords_on_view_change) {
                Address old_coord=old_view != null? old_view.getCreator() : null;
                if(old_coord != null)
                    remove(cluster_name, old_coord);
            }
        }
        if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
            writeAll();
            if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
                startInfoWriter();
        }
    }
    else if(coord_changed) // I'm no longer the coordinator
        remove(cluster_name, local_addr);
}
{code}
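If that reading is right, restricting the rewrite to members of the new view would keep the rows deleted for crashed members from being re-inserted. A sketch of the idea only, not a tested patch ({{writeAllFor()}} is a hypothetical helper, not an existing JGroups method):
{code:java}
// Sketch only: replace the unconditional writeAll() with a write that is
// limited to the members of the new view.
if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
    writeAllFor(new_view.getMembers());   // hypothetical helper
    if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
        startInfoWriter();
}
{code}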
4) Because of the crashed members (non-existing IP addresses), we are getting a lot of socket timeouts: sendToMembers of TP tries to send messages to the old crashed members and writes error logs during startup.
> JGroups JDBC_PING is not clearing crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Priority: Critical
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups does not clear logical_addr_cache and gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and the eviction did not work because the cache keeps getting updated from the ping and never gets marked as removable.
> I think the issue is that the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we are getting a lot of socket timeouts: sendToMembers of TP tries to send messages to the old crashed members and writes error logs during startup.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2232) Using NATIVE_S3_PING, old members don't seem to get removed
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2232?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2232:
--------------------------------
OK, this issue has been fixed and tested with {{FILE_PING}}. Todo: test with {{NATIVE_S3_PING}}. This can be done once {{4.0.10.Final}} has been released.
> Using NATIVE_S3_PING, old members don't seem to get removed
> ------------------------------------------------------------
>
> Key: JGRP-2232
> URL: https://issues.jboss.org/browse/JGRP-2232
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.7
> Environment: Spring Boot / Boxfuse / AWS
> Reporter: Jesper Blomquist
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 4.0.10
>
>
> According to http://www.jgroups.org/manual4/index.html#FILE_PING, old members should be removed if a reaper task is running (which seems to be the default), but this does not happen for us.
> Both the S3 file and the "logical address cache" keep growing. We have a cluster of about 10 members (they come and go due to auto scaling). Nothing is ever marked as "removable" and nothing is ever older than 60 secs (same as the reaper interval?!).
> Below is a (truncated) dump from JMX of the logical address cache (everything always has the same age):
> 967 elements:
> {noformat}
> i-0704cf3786731075b-10202: 25c4cd46-6e4d-d198-88f5-bfa65b4bfb4e: 10.0.82.106:7800 (20 secs old)
> i-08fb0ad436efed1b2-18812: f4fef542-42ab-2c7b-a1f1-10ad90112e27: 10.0.118.75:7800 (20 secs old)
> i-0b9f077af97ef256f-11379: 47aea44c-9f2d-4200-d606-2f4c2844efc8: 10.0.85.52:7800 (20 secs old)
> i-06e220104b9e0069a-55132: b86864f0-8961-4565-c935-dc03137a16da: 10.0.67.5:7800 (20 secs old)
> i-0d3bbedeca8c7eb7d-33369: 9b37f425-7da5-d3ee-cfd5-5d1b4d2266b9: 10.0.116.207:7800 (20 secs old)
> i-074806dc606197fc9-46262: 99a2f550-5628-5d2c-1167-38268f804139: 10.0.109.149:7800 (20 secs old)
> i-0bbd38020b6184cb1-22367: e46e3ed5-0c75-1230-94aa-deb1cd1a9bf1: 10.0.124.183:7800 (20 secs old)
> i-0ff325c578faf6ad9-2376: c4c48178-cdbf-530a-155f-bba1f01a65e2: 10.0.100.143:7800 (20 secs old)
> i-03d819b23eb1357ba-64126: b89f5117-8ebf-df14-ece1-adba632c0245: 10.0.67.163:7800 (20 secs old)
> i-09e9907ee27aef2a0-37490: 8ee85310-39c7-0617-0fc8-3d4f002a1894: 10.0.108.234:7800 (20 secs old)
> i-0da90751a5093a880-28069: ecd33ad7-f261-5b71-8deb-b9fe5b4ed05d: 10.0.77.132:7800 (20 secs old)
> i-03213f181d96c70d3-57318: d962cfd0-8c5e-4129-334f-ffa10309ec30: 10.0.112.182:7800 (20 secs old)
> ...
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2232) Using NATIVE_S3_PING, old members don't seem to get removed
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2232?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-2232.
----------------------------
Resolution: Done
> Using NATIVE_S3_PING, old members don't seem to get removed
> ------------------------------------------------------------
>
> Key: JGRP-2232
> URL: https://issues.jboss.org/browse/JGRP-2232
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.7
> Environment: Spring Boot / Boxfuse / AWS
> Reporter: Jesper Blomquist
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 4.0.10
>
>
> According to http://www.jgroups.org/manual4/index.html#FILE_PING, old members should be removed if a reaper task is running (which seems to be the default), but this does not happen for us.
> Both the S3 file and the "logical address cache" keep growing. We have a cluster of about 10 members (they come and go due to auto scaling). Nothing is ever marked as "removable" and nothing is ever older than 60 secs (same as the reaper interval?!).
> Below is a (truncated) dump from JMX of the logical address cache (everything always has the same age):
> 967 elements:
> {noformat}
> i-0704cf3786731075b-10202: 25c4cd46-6e4d-d198-88f5-bfa65b4bfb4e: 10.0.82.106:7800 (20 secs old)
> i-08fb0ad436efed1b2-18812: f4fef542-42ab-2c7b-a1f1-10ad90112e27: 10.0.118.75:7800 (20 secs old)
> i-0b9f077af97ef256f-11379: 47aea44c-9f2d-4200-d606-2f4c2844efc8: 10.0.85.52:7800 (20 secs old)
> i-06e220104b9e0069a-55132: b86864f0-8961-4565-c935-dc03137a16da: 10.0.67.5:7800 (20 secs old)
> i-0d3bbedeca8c7eb7d-33369: 9b37f425-7da5-d3ee-cfd5-5d1b4d2266b9: 10.0.116.207:7800 (20 secs old)
> i-074806dc606197fc9-46262: 99a2f550-5628-5d2c-1167-38268f804139: 10.0.109.149:7800 (20 secs old)
> i-0bbd38020b6184cb1-22367: e46e3ed5-0c75-1230-94aa-deb1cd1a9bf1: 10.0.124.183:7800 (20 secs old)
> i-0ff325c578faf6ad9-2376: c4c48178-cdbf-530a-155f-bba1f01a65e2: 10.0.100.143:7800 (20 secs old)
> i-03d819b23eb1357ba-64126: b89f5117-8ebf-df14-ece1-adba632c0245: 10.0.67.163:7800 (20 secs old)
> i-09e9907ee27aef2a0-37490: 8ee85310-39c7-0617-0fc8-3d4f002a1894: 10.0.108.234:7800 (20 secs old)
> i-0da90751a5093a880-28069: ecd33ad7-f261-5b71-8deb-b9fe5b4ed05d: 10.0.77.132:7800 (20 secs old)
> i-03213f181d96c70d3-57318: d962cfd0-8c5e-4129-334f-ffa10309ec30: 10.0.112.182:7800 (20 secs old)
> ...
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFCORE-3562) Deployment disable-all doesn't function correctly in domain
by Marek Kopecký (JIRA)
[ https://issues.jboss.org/browse/WFCORE-3562?page=com.atlassian.jira.plugi... ]
Marek Kopecký edited comment on WFCORE-3562 at 1/31/18 8:46 AM:
----------------------------------------------------------------
I'm able to reproduce this issue with three deployments too. I attached the deployments that I used.
This is not a regression against EAP 7.1, because the legacy deploy commands don't work correctly on EAP 7.1 either.
*These are my experiments:*
*New way:*
{noformat}
deployment deploy-file --server-groups=main-server-group ~/erase15/app01.war
deployment deploy-file --server-groups=main-server-group ~/erase15/app02.war
deployment deploy-file --server-groups=other-server-group,main-server-group ~/erase15/app03.war
deployment disable --server-groups=main-server-group app01.war
deployment info --server-group=main-server-group
deployment disable-all --all-relevant-server-groups
deployment info --server-group=main-server-group
{noformat}
WF master results (error occurs):
{noformat}
[domain@localhost:9990 /] deployment deploy-file --server-groups=main-server-group ~/erase15/app01.war
[domain@localhost:9990 /] deployment deploy-file --server-groups=main-server-group ~/erase15/app02.war
[domain@localhost:9990 /] deployment deploy-file --server-groups=other-server-group,main-server-group ~/erase15/app03.war
[domain@localhost:9990 /] deployment disable --server-groups=main-server-group app01.war
[domain@localhost:9990 /] deployment info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] deployment disable-all --all-relevant-server-groups
org.jboss.as.cli.operation.OperationFormatException: None of the server groups is specified or references specified deployment.
[domain@localhost:9990 /] deployment info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /]
{noformat}
*Legacy way:*
{noformat}
deploy ~/erase15/app01.war --server-groups=main-server-group
deploy ~/erase15/app02.war --server-groups=main-server-group
deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
undeploy app01.war --keep-content --server-groups=main-server-group
deployment-info --server-group=main-server-group
undeploy * --keep-content --server-groups=main-server-group,other-server-group
deployment-info --server-group=main-server-group
{noformat}
EAP 7.1 results (error occurs):
{noformat}
[domain@localhost:9990 /] deploy ~/erase15/app01.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app02.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
[domain@localhost:9990 /] undeploy app01.war --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] undeploy * --keep-content --server-groups=main-server-group,other-server-group
{"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-2" => "WFLYCTL0216: Management resource '[
(\"server-group\" => \"other-server-group\"),
(\"deployment\" => \"app01.war\")
]' not found"}}
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /]
{noformat}
WF master results (error occurs):
{noformat}
[domain@localhost:9990 /] deploy ~/erase15/app01.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app02.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
[domain@localhost:9990 /] undeploy app01.war --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] undeploy * --keep-content --server-groups=main-server-group,other-server-group
Undeploy failed: {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-2" => "WFLYCTL0216: Management resource '[
(\"server-group\" => \"other-server-group\"),
(\"deployment\" => \"app01.war\")
]' not found"}}
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /]
{noformat}
was (Author: mkopecky):
I'm able to reproduce this issue with three deployments too. I attached the deployments that I used.
This is not a regression against EAP 7.1: the legacy deploy commands work correctly, but the new deployment command doesn't work in this scenario.
*These are my experiments:*
*New way:*
{noformat}
deployment deploy-file --server-groups=main-server-group ~/erase15/app01.war
deployment deploy-file --server-groups=main-server-group ~/erase15/app02.war
deployment deploy-file --server-groups=other-server-group,main-server-group ~/erase15/app03.war
deployment disable --server-groups=main-server-group app01.war
deployment info --server-group=main-server-group
deployment disable-all --all-relevant-server-groups
deployment info --server-group=main-server-group
{noformat}
WF master results (error occurs):
{noformat}
[domain@localhost:9990 /] deployment deploy-file --server-groups=main-server-group ~/erase15/app01.war
[domain@localhost:9990 /] deployment deploy-file --server-groups=main-server-group ~/erase15/app02.war
[domain@localhost:9990 /] deployment deploy-file --server-groups=other-server-group,main-server-group ~/erase15/app03.war
[domain@localhost:9990 /] deployment disable --server-groups=main-server-group app01.war
[domain@localhost:9990 /] deployment info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] deployment disable-all --all-relevant-server-groups
org.jboss.as.cli.operation.OperationFormatException: None of the server groups is specified or references specified deployment.
[domain@localhost:9990 /] deployment info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /]
{noformat}
*Legacy way:*
{noformat}
deploy ~/erase15/app01.war --server-groups=main-server-group
deploy ~/erase15/app02.war --server-groups=main-server-group
deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
undeploy app01.war --keep-content --server-groups=main-server-group
deployment-info --server-group=main-server-group
undeploy * --keep-content --server-groups=main-server-group
deployment-info --server-group=main-server-group
{noformat}
EAP 7.1 results (error doesn't occur):
{noformat}
[mkopecky@dhcp-10-40-5-4 bin]$ ./jboss-cli.sh -c
[domain@localhost:9990 /] deploy ~/erase15/app01.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app02.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
[domain@localhost:9990 /] undeploy app01.war --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] undeploy * --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war added
app03.war app03.war added
[domain@localhost:9990 /]
{noformat}
WF master results (error doesn't occur):
{noformat}
[domain@localhost:9990 /] deploy ~/erase15/app01.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app02.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy ~/erase15/app03.war --server-groups=other-server-group,main-server-group
[domain@localhost:9990 /] undeploy app01.war --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war enabled
app03.war app03.war enabled
[domain@localhost:9990 /] undeploy * --keep-content --server-groups=main-server-group
[domain@localhost:9990 /] deployment-info --server-group=main-server-group
NAME RUNTIME-NAME STATE
app01.war app01.war added
app02.war app02.war added
app03.war app03.war added
[domain@localhost:9990 /]
{noformat}
> Deployment disable-all doesn't function correctly in domain
> ---------------------------------------------------------
>
> Key: WFCORE-3562
> URL: https://issues.jboss.org/browse/WFCORE-3562
> Project: WildFly Core
> Issue Type: Bug
> Components: CLI
> Reporter: Vratislav Marek
> Assignee: Jean-Francois Denise
> Attachments: app01.war, app02.war, app03.war
>
>
> Domain
> {noformat}
> [domain@localhost:9990 /] deployment disable-all --all-relevant-server-groups
> org.jboss.as.cli.operation.OperationFormatException: None of the server groups is specified or references specified deployment.
> [domain@localhost:9990 /]
> {noformat}
> {noformat}
> [domain@localhost:9990 /] undeploy * --keep-content --all-relevant-server-groups
> org.jboss.as.cli.operation.OperationFormatException: None of the server groups is specified or references specified deployment.
> [domain@localhost:9990 /]
> {noformat}
> {noformat}
> [domain@localhost:9990 /] deployment disable-all --server-groups=main-server-group
> {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-4" => "WFLYCTL0216: Management resource '[
> (\"server-group\" => \"main-server-group\"),
> (\"deployment\" => \"cli-test-app2-deploy-all.war\")
> ]' not found"}}
> [domain@localhost:9990 /]
> {noformat}
> {noformat}
> [domain@localhost:9990 /] undeploy * --keep-content --server-groups=main-server-group
> {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-4" => "WFLYCTL0216: Management resource '[
> (\"server-group\" => \"main-server-group\"),
> (\"deployment\" => \"cli-test-app2-deploy-all.war\")
> ]' not found"}}
> [domain@localhost:9990 /]
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2263) Unexpected results in GDST when using enumerations with commas
by Toni Rikkola (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2263?page=com.atlassian.jira.plugi... ]
Toni Rikkola updated DROOLS-2263:
---------------------------------
Summary: Unexpected results in GDST when using enumerations with commas (was: [GSS] (6.4.z) Unexpected results in GDST when using enumerations with commas)
> Unexpected results in GDST when using enumerations with commas
> --------------------------------------------------------------
>
> Key: DROOLS-2263
> URL: https://issues.jboss.org/browse/DROOLS-2263
> Project: Drools
> Issue Type: Bug
> Components: Guided Decision Table Editor
> Reporter: Toni Rikkola
> Assignee: Toni Rikkola
>
> When using enumerations where the values themselves contain a comma, the rules generated by a GDST are unexpected, as the "contains in" operator splits those values in the enumerations. Example enumeration:
> {noformat}
> fact: person
> field: city
> context: ['paris','london','new york,boston']
> {noformat}
> Note the 'new york,boston' value in particular.
> The code generated will be:
> {noformat}
> rule "Row 1 personGDT"
> dialect "mvel"
> when
> p : person( city in ( "new york", "boston" ) )
> then
> end
> {noformat}
> Basically "paris" and "new york,boston" will be treated by the DSL parser as 3 strings in the DRL generation and will produce someting simiular to
> p : person( city in ( "paris", "new york", "boston" ) )
> But what the customer expects is the following
> p : person( city in ( "paris", "new york,boston" ) )
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-232) Deployment fails when Dependencies is empty in a jars manifest
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-232?page=com.atlassian.jira.plugin.s... ]
RH Bugzilla Integration commented on WFLY-232:
----------------------------------------------
Anup Kumar Dey <andey(a)redhat.com> changed the Status of [bug 1539985|https://bugzilla.redhat.com/show_bug.cgi?id=1539985] from NEW to ASSIGNED
> Deployment fails when Dependencies is empty in a jars manifest
> --------------------------------------------------------------
>
> Key: WFLY-232
> URL: https://issues.jboss.org/browse/WFLY-232
> Project: WildFly
> Issue Type: Enhancement
> Reporter: Gábor Farkas
> Assignee: Stuart Douglas
> Fix For: 8.0.0.Alpha1
>
> Attachments: empty-manifest.ear
>
>
> I have a WAR, which produces the following error on deployment:
> Caused by: java.lang.IllegalArgumentException: Empty module specification
> at org.jboss.modules.ModuleIdentifier.fromString(ModuleIdentifier.java:169) [jboss-modules.jar:1.1.1.GA]
> at org.jboss.as.server.deployment.module.ManifestDependencyProcessor.deploy(ManifestDependencyProcessor.java:83) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
> ... 5 more
> I found that my WAR contains a jar in WEB-INF/lib that has an empty Dependencies entry defined in its manifest. More exactly, it's "Dependencies: ", so the colon is followed by a space character before the newline characters. The ManifestDependencyProcessor parses this as one dependency whose name is the empty string.
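> A quick illustration of that parse (a sketch of the behaviour, not the actual ManifestDependencyProcessor code): a value consisting of a single space yields one entry that trims to the empty string, and an empty module name is exactly what ModuleIdentifier.fromString() rejects:
> {code:java}
> public class EmptyDependenciesSketch {
>     public static void main(String[] args) {
>         String dependencies = " ";           // manifest value after "Dependencies:"
>         for (String entry : dependencies.split(",")) {
>             String name = entry.trim();      // -> "" (the empty module name)
>             System.out.println("[" + name + "]");
>             // org.jboss.modules.ModuleIdentifier.fromString(name) then throws
>             // IllegalArgumentException: Empty module specification
>         }
>     }
> }
> {code}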
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFCORE-3563) Could not enable application deployment on two server groups in domain
by Marek Kopecký (JIRA)
[ https://issues.jboss.org/browse/WFCORE-3563?page=com.atlassian.jira.plugi... ]
Marek Kopecký commented on WFCORE-3563:
---------------------------------------
[~jdenise]: I confirm [~vmarek]'s previous comment; this issue is also valid for EAP 7.1.
> Could not enable application deployment on two server groups in domain
> ----------------------------------------------------------------------
>
> Key: WFCORE-3563
> URL: https://issues.jboss.org/browse/WFCORE-3563
> Project: WildFly Core
> Issue Type: Bug
> Components: CLI
> Reporter: Vratislav Marek
> Assignee: Jean-Francois Denise
>
> {noformat}
> [domain@localhost:9990 /] deployment enable --server-groups=other-server-group,main-server-group cli-test-app-deploy-all.ear
> {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-1" => "WFLYCTL0212: Duplicate resource [
> (\"server-group\" => \"other-server-group\"),
> (\"deployment\" => \"cli-test-app-deploy-all.ear\")
> ]"}}
> [domain@localhost:9990 /]
> {noformat}
> {noformat}
> [domain@localhost:9990 /] deploy --name=cli-test-app-deploy-all.ear --server-groups=other-server-group,main-server-group
> {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-1" => "WFLYCTL0212: Duplicate resource [
> (\"server-group\" => \"other-server-group\"),
> (\"deployment\" => \"cli-test-app-deploy-all.ear\")
> ]"}}
> [domain@localhost:9990 /]
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)