[JBoss JIRA] (DROOLS-2419) Validator fails schema validation for definition xml element with prefix
by Matteo Mortari (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2419?page=com.atlassian.jira.plugi... ]
Matteo Mortari updated DROOLS-2419:
-----------------------------------
Sprint: 2018 Week 13-14
> Validator fails schema validation for definition xml element with prefix
> ------------------------------------------------------------------------
>
> Key: DROOLS-2419
> URL: https://issues.jboss.org/browse/DROOLS-2419
> Project: Drools
> Issue Type: Bug
> Components: dmn engine
> Reporter: Matteo Mortari
> Assignee: Matteo Mortari
> Attachments: UsingSemanticNS.dmn
>
>
> Given the attached model and the following test
> {code:java}
> @Test
> public void testTEMP() {
>     List<DMNMessage> validate = validator.validate(getReader("UsingSemanticNS.dmn"), VALIDATE_SCHEMA, VALIDATE_MODEL, VALIDATE_COMPILATION);
>     assertThat(ValidatorUtil.formatMessages(validate), validate.size(), is(0));
> }
> {code}
> fails with:
> {code:java}
> java.lang.AssertionError: DMNMessage{ severity=ERROR, type=FAILED_XML_VALIDATION, message='Failed XML validation of DMN file: cvc-elt.1: Cannot find the declaration of element 'semantic:definitions'.', sourceId='null', exception='SAXParseException : cvc-elt.1: Cannot find the declaration of element 'semantic:definitions'.', feelEvent=''}
> Expected: is <0>
> but: was <1>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.junit.Assert.assertThat(Assert.java:956)
> at org.kie.dmn.validation.ValidatorTest.testTEMP(ValidatorTest.java:92)
> ...
> {code}
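For context, the root element of the attached model presumably looks like the following minimal sketch (the id and model namespace are illustrative; the schema namespace is the DMN 1.1 one). Such a document is namespace-valid XML, so schema validation should resolve the prefixed {{semantic:definitions}} to the same DMN {{definitions}} element declaration as an unprefixed root:
{code:xml}
<semantic:definitions
    xmlns:semantic="http://www.omg.org/spec/DMN/20151101/dmn.xsd"
    id="_example" name="UsingSemanticNS"
    namespace="http://example.org/dmn">
  <!-- decisions, input data, etc. elided -->
</semantic:definitions>
{code}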
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2419) Validator fails schema validation for definition xml element with prefix
by Matteo Mortari (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2419?page=com.atlassian.jira.plugi... ]
Matteo Mortari reassigned DROOLS-2419:
--------------------------------------
Assignee: Matteo Mortari (was: Fedor Gavrilov)
> Validator fails schema validation for definition xml element with prefix
> ------------------------------------------------------------------------
>
> Key: DROOLS-2419
> URL: https://issues.jboss.org/browse/DROOLS-2419
> Project: Drools
> Issue Type: Bug
> Components: dmn engine
> Reporter: Matteo Mortari
> Assignee: Matteo Mortari
> Attachments: UsingSemanticNS.dmn
>
>
> Given the attached model and the following test
> {code:java}
> @Test
> public void testTEMP() {
>     List<DMNMessage> validate = validator.validate(getReader("UsingSemanticNS.dmn"), VALIDATE_SCHEMA, VALIDATE_MODEL, VALIDATE_COMPILATION);
>     assertThat(ValidatorUtil.formatMessages(validate), validate.size(), is(0));
> }
> {code}
> fails with:
> {code:java}
> java.lang.AssertionError: DMNMessage{ severity=ERROR, type=FAILED_XML_VALIDATION, message='Failed XML validation of DMN file: cvc-elt.1: Cannot find the declaration of element 'semantic:definitions'.', sourceId='null', exception='SAXParseException : cvc-elt.1: Cannot find the declaration of element 'semantic:definitions'.', feelEvent=''}
> Expected: is <0>
> but: was <1>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.junit.Assert.assertThat(Assert.java:956)
> at org.kie.dmn.validation.ValidatorTest.testTEMP(ValidatorTest.java:92)
> ...
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2425) [DMN Designer] Undo of relation column deletion is not proper
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2425?page=com.atlassian.jira.plugi... ]
Jozef Marko commented on DROOLS-2425:
-------------------------------------
I can still reproduce this: if I delete a decision table column and then undo the deletion, some columns that were previously completely visible are no longer visible.
> [DMN Designer] Undo of relation column deletion is not proper
> -------------------------------------------------------------
>
> Key: DROOLS-2425
> URL: https://issues.jboss.org/browse/DROOLS-2425
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.8.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Priority: Minor
> Attachments: Screenshot from 2018-03-27 12-12-05.png, Screenshot from 2018-03-27 12-14-47.png, Screenshot from 2018-03-27 12-14-55.png
>
>
> This issue was spotted during DROOLS-2392 review.
> In special cases, undoing the deletion of a relation column does not work properly.
> h2. Acceptance test
> # Steps to reproduce fixed (/)
> # Deletion is undone properly
> ## Deletion of con. entry (/)
> ## Deletion of Dec. table row (/)
> ## Deletion of Dec. table column (x)
> ## Deletion of Relation column
> ## Deletion of Relation row
> ## Deletion of Function parameter
> # Clear of expression type is undone properly
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-10170) A Timer will hang forever if the database connection is not available
by Wolf-Dieter Fink (JIRA)
Wolf-Dieter Fink created WFLY-10170:
---------------------------------------
Summary: A Timer will hang forever if the database connection is not available
Key: WFLY-10170
URL: https://issues.jboss.org/browse/WFLY-10170
Project: WildFly
Issue Type: Bug
Components: EJB
Affects Versions: 12.0.0.Final
Reporter: Wolf-Dieter Fink
Assignee: Jörg Bäsner
Fix For: 13.0.0.Beta1
Having a Timer annotated like:
{code}
@Schedule(second = "*/15", minute = "*", hour = "*", persistent = true)
public void timeoutMethod() {
    // timeout logic
}
{code}
and a server configuration like:
{code:xml}
<timer-service thread-pool-name="default" default-data-store="clustered-store">
    <data-stores>
        <database-data-store name="clustered-store" datasource-jndi-name="java:jboss/datasources/ExampleDS"/>
    </data-stores>
</timer-service>
{code}
will lead to a hanging Timer if the database connection is not available; only a restart of the server will recover the Timer.
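For reference, a complete minimal bean using such a persistent timer might look like the following sketch (the class name is illustrative):
{code:java}
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class ExampleTimerBean {

    // Fires every 15 seconds; with persistent = true each timer instance is
    // stored in the database-data-store configured above, so a missing
    // database connection affects timer processing.
    @Schedule(second = "*/15", minute = "*", hour = "*", persistent = true)
    public void timeoutMethod() {
        // business logic
    }
}
{code}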
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2260:
--------------------------------
Can you close this issue?
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node. Its send_table still contains an entry for the dead node with state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed (see the sketch after the configuration below).
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>     <channels default="ee">
>         <channel name="ee" stack="main">
>             <fork name="shared-dispatcher"/>
>             <fork name="group-topology"/>
>         </channel>
>     </channels>
>     <stacks>
>         <stack name="main">
>             <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
>             <protocol type="PING"/>
>             <protocol type="MERGE3">
>                 <property name="min_interval">1000</property>
>                 <property name="max_interval">5000</property>
>             </protocol>
>             <protocol type="FD_SOCK"/>
>             <protocol type="FD_ALL2">
>                 <property name="interval">3000</property>
>                 <property name="timeout">8000</property>
>             </protocol>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS">
>                 <property name="join_timeout">100</property>
>             </protocol>
>             <protocol type="UFC"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG3"/>
>         </stack>
>     </stacks>
> </subsystem>
> {code}
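To make the suspected interleaving concrete, here is a minimal sketch of the race described above (hypothetical types and method shapes, not the actual JGroups sources):
{code:java}
enum State { OPEN, CLOSING, CLOSED }

class SenderEntry {
    volatile State state = State.OPEN;
}

class ConnectionTable {
    // View-change thread: a member left, so mark its connection for closing.
    void closeConnection(SenderEntry e) {
        e.state = State.CLOSING; // would be reaped after conn_close_timeout
    }

    // Send path: look up the entry for the destination of an outgoing message.
    SenderEntry getSenderEntry(SenderEntry e) {
        if (e.state != State.OPEN)
            e.state = State.OPEN; // reopens a CLOSING entry, so it is never reaped
        return e;
    }
}
{code}
If the send path runs between the view change and the reaper, the entry is flipped back to OPEN and retransmission to the dead member continues indefinitely.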
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2260:
--------------------------------
You're right regarding the default values for {{conn_expiry_timeout}} and {{conn_close_timeout}}: both have sensible default values (2 and 4 minutes, respectively) and I recommend leaving them as they are.
Can you create issues for the Infinispan and WildFly teams to fix their changing of the default values? IIRC, both should be using {{UNICAST3}} by now.
Cheers,
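For illustration, pinning both attributes explicitly in the subsystem notation quoted below would look like this (values in milliseconds, matching the 2- and 4-minute defaults mentioned above, in that order):
{code:xml}
<protocol type="UNICAST3">
    <property name="conn_expiry_timeout">120000</property>
    <property name="conn_close_timeout">240000</property>
</protocol>
{code}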
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node. Its send_table still contains an entry for the dead node with state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed.
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>     <channels default="ee">
>         <channel name="ee" stack="main">
>             <fork name="shared-dispatcher"/>
>             <fork name="group-topology"/>
>         </channel>
>     </channels>
>     <stacks>
>         <stack name="main">
>             <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
>             <protocol type="PING"/>
>             <protocol type="MERGE3">
>                 <property name="min_interval">1000</property>
>                 <property name="max_interval">5000</property>
>             </protocol>
>             <protocol type="FD_SOCK"/>
>             <protocol type="FD_ALL2">
>                 <property name="interval">3000</property>
>                 <property name="timeout">8000</property>
>             </protocol>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS">
>                 <property name="join_timeout">100</property>
>             </protocol>
>             <protocol type="UFC"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG3"/>
>         </stack>
>     </stacks>
> </subsystem>
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2260:
--------------------------------
Re the {{conn_expiry_timeout}} being 0: yes, connections will not get reaped, which leads to memory leaks. This is usually not an issue, as many clusters have a fixed number of nodes; if there is not a lot of membership churn, connections not being reaped is harmless.
I've added a comment to the {{conn_expiry_timeout}} attribute, and it is 2 minutes anyway by default.
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node. Its send_table still contains an entry for the dead node with state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed.
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>     <channels default="ee">
>         <channel name="ee" stack="main">
>             <fork name="shared-dispatcher"/>
>             <fork name="group-topology"/>
>         </channel>
>     </channels>
>     <stacks>
>         <stack name="main">
>             <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
>             <protocol type="PING"/>
>             <protocol type="MERGE3">
>                 <property name="min_interval">1000</property>
>                 <property name="max_interval">5000</property>
>             </protocol>
>             <protocol type="FD_SOCK"/>
>             <protocol type="FD_ALL2">
>                 <property name="interval">3000</property>
>                 <property name="timeout">8000</property>
>             </protocol>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS">
>                 <property name="join_timeout">100</property>
>             </protocol>
>             <protocol type="UFC"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG3"/>
>         </stack>
>     </stacks>
> </subsystem>
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2423) [DMN Designer] Clear command caches context entries
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2423?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2423:
--------------------------------
Description:
This issue was spotted during the review of DROOLS-2392; however, a relation between the two is improbable.
If the user clears the top-level context entry and then selects the same context entry, exactly the same context entry appears again. A context entry with default values should appear instead.
h2. Acceptance test
# check scenario described in [PR comments|https://github.com/kiegroup/kie-wb-common/pull/1548] (/)
# check scenario from DROOLS-2424 (/)
# check scenario from DROOLS-2425 (x)
# Steps to reproduce fixed - Clear the context entry at depth:
-- 0 (/)
-- 1 (/)
-- 2 (/)
-- 3 (/)
was:
This issue was spotted during the review of DROOLS-2392; however, a relation between the two is improbable.
If the user clears the top-level context entry and then selects the same context entry, exactly the same context entry appears again. A context entry with default values should appear instead.
h2. Acceptance test
# check scenario described in [PR comments|https://github.com/kiegroup/kie-wb-common/pull/1548] (/)
# check scenario from DROOLS-2424 (/)
# check scenario from DROOLS-2425
# Steps to reproduce fixed - Clear the context entry at depth:
-- 0 (/)
-- 1 (/)
-- 2 (/)
-- 3 (/)
> [DMN Designer] Clear command caches context entries
> ---------------------------------------------------
>
> Key: DROOLS-2423
> URL: https://issues.jboss.org/browse/DROOLS-2423
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.8.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Priority: Minor
> Attachments: Screenshot from 2018-03-27 11-31-01.png, Screenshot from 2018-03-27 11-31-34.png, Screenshot from 2018-03-27 11-32-52.png
>
>
> This issue was spotted during the review of DROOLS-2392; however, a relation between the two is improbable.
> If the user clears the top-level context entry and then selects the same context entry, exactly the same context entry appears again. A context entry with default values should appear instead.
> h2. Acceptance test
> # check scenario described in [PR comments|https://github.com/kiegroup/kie-wb-common/pull/1548] (/)
> # check scenario from DROOLS-2424 (/)
> # check scenario from DROOLS-2425 (x)
> # Steps to reproduce fixed - Clear the context entry at depth:
> -- 0 (/)
> -- 1 (/)
> -- 2 (/)
> -- 3 (/)
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2425) [DMN Designer] Undo of relation column deletion is not proper
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2425?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2425:
--------------------------------
Description:
This issue was spotted during DROOLS-2392 review.
In special cases, undoing the deletion of a relation column does not work properly.
h2. Acceptance test
# Steps to reproduce fixed (/)
# Deletion is undone properly
## Deletion of con. entry (/)
## Deletion of Dec. table row (/)
## Deletion of Dec. table column (x)
## Deletion of Relation column
## Deletion of Relation row
## Deletion of Function parameter
# Clear of expression type is undone properly
was:
This issue was spotted during DROOLS-2392 review.
In special cases, undoing the deletion of a relation column does not work properly.
h2. Acceptance test
# Steps to reproduce fixed (/)
# Deletion is undone properly
## Deletion of con. entry
## Deletion of Dec. table row
## Deletion of Dec. table column
## Deletion of Relation column
## Deletion of Relation row
## Deletion of Function parameter
# Clear of expression type is undone properly
> [DMN Designer] Undo of relation column deletion is not proper
> -------------------------------------------------------------
>
> Key: DROOLS-2425
> URL: https://issues.jboss.org/browse/DROOLS-2425
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.8.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Priority: Minor
> Attachments: Screenshot from 2018-03-27 12-12-05.png, Screenshot from 2018-03-27 12-14-47.png, Screenshot from 2018-03-27 12-14-55.png
>
>
> This issue was spotted during DROOLS-2392 review.
> In special cases, undoing the deletion of a relation column does not work properly.
> h2. Acceptance test
> # Steps to reproduce fixed (/)
> # Deletion is undone properly
> ## Deletion of con. entry (/)
> ## Deletion of Dec. table row (/)
> ## Deletion of Dec. table column (x)
> ## Deletion of Relation column
> ## Deletion of Relation row
> ## Deletion of Function parameter
> # Clear of expression type is undone properly
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2425) [DMN Designer] Undo of relation column deletion is not proper
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2425?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2425:
--------------------------------
Description:
This issue was spotted during DROOLS-2392 review.
In special cases, undoing the deletion of a relation column does not work properly.
h2. Acceptance test
# Steps to reproduce fixed (/)
# Deletion is undone properly
## Deletion of con. entry
## Deletion of Dec. table row
## Deletion of Dec. table column
## Deletion of Relation column
## Deletion of Relation row
## Deletion of Function parameter
# Clear of expression type is undone properly
was:
This issue was spotted during DROOLS-2392 review.
In special cases, undoing the deletion of a relation column does not work properly.
h2. Acceptance test
# Steps to reproduce fixed
# Deletion is undone properly
## Deletion of con. entry
## Deletion of Dec. table row
## Deletion of Dec. table column
## Deletion of Relation column
## Deletion of Relation row
## Deletion of Function parameter
# Clear of expression type is undone properly
> [DMN Designer] Undo of relation column deletion is not proper
> -------------------------------------------------------------
>
> Key: DROOLS-2425
> URL: https://issues.jboss.org/browse/DROOLS-2425
> Project: Drools
> Issue Type: Bug
> Components: DMN Editor
> Affects Versions: 7.8.0.Final
> Reporter: Jozef Marko
> Assignee: Michael Anstis
> Priority: Minor
> Attachments: Screenshot from 2018-03-27 12-12-05.png, Screenshot from 2018-03-27 12-14-47.png, Screenshot from 2018-03-27 12-14-55.png
>
>
> This issue was spotted during DROOLS-2392 review.
> In special cases, undoing the deletion of a relation column does not work properly.
> h2. Acceptance test
> # Steps to reproduce fixed (/)
> # Deletion is undone properly
> ## Deletion of con. entry
> ## Deletion of Dec. table row
> ## Deletion of Dec. table column
> ## Deletion of Relation column
> ## Deletion of Relation row
> ## Deletion of Function parameter
> # Clear of expression type is undone properly
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)