[JBoss JIRA] (WFLY-9975) infinispan cache-container[jndi-name] fails validation in WildFly 12
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-9975?page=com.atlassian.jira.plugin.... ]
Paul Ferraro closed WFLY-9975.
------------------------------
Resolution: Rejected
jndi-name is not a valid attribute in the 5.0 version of the Infinispan subsystem schema.
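For reference, a minimal sketch of a cache-container declaration that validates against the urn:jboss:domain:infinispan:5.0 schema simply omits the attribute; the container and cache names below are illustrative, not taken from the reporter's standalone.xml:
{code:xml}
<subsystem xmlns="urn:jboss:domain:infinispan:5.0">
    <cache-container name="example" default-cache="default">
        <local-cache name="default"/>
    </cache-container>
</subsystem>
{code}
Any jndi-name attribute carried over from an older configuration has to be removed before the file will parse under the 5.0 schema.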
> infinispan cache-container[jndi-name] fails validation in WildFly 12
> --------------------------------------------------------------------
>
> Key: WFLY-9975
> URL: https://issues.jboss.org/browse/WFLY-9975
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 12.0.0.Final
> Environment: ALL
> Reporter: Pratik Parikh
> Assignee: Paul Ferraro
> Priority: Blocker
>
> WildFly 12 fails to recognize the jndi-name attribute and emits the following exception:
> OPVDX001: Validation error in standalone.xml -----------------------------------
> |
> | 336:
> | 337:
> | 338:
> | ^^^^ 'jndi-name' isn't an allowed attribute for the 'cache-container'
> | element
> |
> | Attributes allowed here are:
> | aliases jndi-name name
> | default-cache module statistics-enabled
> |
> | 339:
> | 340:
> | 341:
> |
> | 'jndi-name' is allowed on elements:
> | - server > profile > {urn:jboss:domain:ee:4.0}subsystem > concurrent > context-services > context-service
> | - server > profile > {urn:jboss:domain:ee:4.0}subsystem > concurrent > managed-thread-factories > managed-thread-factory
> | - server > profile > {urn:jboss:domain:ee:4.0}subsystem > concurrent > managed-executor-services > managed-executor-service
> | - server > profile > {urn:jboss:domain:ee:4.0}subsystem > concurrent > managed-scheduled-executor-services > managed-scheduled-executor-service
> | - server > profile > {urn:jboss:domain:infinispan:5.0}subsystem > cache-container
> | - server > profile > {urn:jboss:domain:infinispan:5.0}subsystem > cache-container > local-cache
> | - server > profile > {urn:jboss:domain:infinispan:5.0}subsystem > cache-container > replicated-cache
> | - server > profile > {urn:jboss:domain:infinispan:5.0}subsystem > cache-container > invalidation-cache
> | - server > profile > {urn:jboss:domain:mail:3.0}subsystem > mail-session
> | - server > profile > {urn:jboss:domain:transactions:4.0}subsystem > commit-markable-resources > commit-markable-resource
> |
> |
> | The primary underlying error message was:
> | > ParseError at [row,col]:[338,4]
> | > Message: WFLYCTL0197: Unexpected attribute 'jndi-name' encountered
> |
> |-------------------------------------------------------------------------------
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Rich DiCroce (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Rich DiCroce closed JGRP-2260.
------------------------------
Resolution: Explained
> UNICAST3 doesn't remove dead nodes from its tables
> --------------------------------------------------
>
> Key: JGRP-2260
> URL: https://issues.jboss.org/browse/JGRP-2260
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Environment: WildFly 12.0.0.Final
> Reporter: Rich DiCroce
> Assignee: Bela Ban
>
> Scenario: 2 WildFly instances clustered together. A ForkChannel is defined, with a MessageDispatcher on top. I start both nodes, then stop the second one. 6-7 minutes after stopping the second node, I start getting log spam on the first node:
> {quote}
> 12:47:04,519 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:06,522 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> 12:47:08,524 WARN [org.jgroups.protocols.UDP] (TQ-Bundler-4,ee,RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null)) JGRP000032: RCD_GP (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null): no physical address for RCD_NMS (flags=0), site-id=DEFAULT, rack-id=null, machine-id=null), dropping message
> {quote}
> After some debugging, I discovered that the reason is that UNICAST3 is still trying to retransmit to the dead node: its send_table still contains an entry for the dead node in state OPEN.
> After looking at the source code for UNICAST3, I have a theory about what's happening.
> * When a node leaves the cluster, down(Event) gets invoked with a view change, which calls closeConnection(Address) for each node that left. That sets the connection state to CLOSING.
> * Suppose that immediately after the view change is handled, a message with the dead node as its destination gets passed to down(Message). That invokes getSenderEntry(Address), which finds the connection... and sets the state back to OPEN.
> Consequently, the connection is never closed or removed from the table, so retransmit attempts continue forever even though they will never succeed.
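> A simplified sketch of that interleaving, written as standalone Java rather than the actual UNICAST3 source (only names already mentioned above are reused; the types are hypothetical stand-ins):
> {code:java}
> // Hypothetical model of the suspected race; not JGroups code.
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> public class Unicast3RaceSketch {
>     enum State { OPEN, CLOSING, CLOSED }
>
>     static final class SenderEntry {
>         volatile State state = State.OPEN;
>     }
>
>     private final Map<String, SenderEntry> send_table = new ConcurrentHashMap<>();
>
>     // Step 1: view change reports the member as left -> connection is marked
>     // CLOSING; actual removal from the table is deferred.
>     void closeConnection(String member) {
>         SenderEntry entry = send_table.get(member);
>         if (entry != null)
>             entry.state = State.CLOSING;
>     }
>
>     // Step 2: a message addressed to that member is sent right afterwards.
>     // The existing entry is found and flipped back to OPEN, so it is never
>     // reaped and retransmission to the dead node continues forever.
>     SenderEntry getSenderEntry(String dest) {
>         SenderEntry entry = send_table.computeIfAbsent(dest, d -> new SenderEntry());
>         entry.state = State.OPEN;
>         return entry;
>     }
> }
> {code}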
> This issue is easily reproducible for me, although unfortunately I can't give you the application in question. But if you have fixes you want to try, I'm happy to drop in a patched JAR and see if the issue still happens.
> This is my JGroups subsystem configuration:
> {code:xml}
> <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>     <channels default="ee">
>         <channel name="ee" stack="main">
>             <fork name="shared-dispatcher"/>
>             <fork name="group-topology"/>
>         </channel>
>     </channels>
>     <stacks>
>         <stack name="main">
>             <transport type="UDP" socket-binding="jgroups" site="${gp.site:DEFAULT}"/>
>             <protocol type="PING"/>
>             <protocol type="MERGE3">
>                 <property name="min_interval">1000</property>
>                 <property name="max_interval">5000</property>
>             </protocol>
>             <protocol type="FD_SOCK"/>
>             <protocol type="FD_ALL2">
>                 <property name="interval">3000</property>
>                 <property name="timeout">8000</property>
>             </protocol>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS">
>                 <property name="join_timeout">100</property>
>             </protocol>
>             <protocol type="UFC"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG3"/>
>         </stack>
>     </stacks>
> </subsystem>
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2260) UNICAST3 doesn't remove dead nodes from its tables
by Rich DiCroce (JIRA)
[ https://issues.jboss.org/browse/JGRP-2260?page=com.atlassian.jira.plugin.... ]
Rich DiCroce commented on JGRP-2260:
------------------------------------
Created ISPN-9038 and WFLY-10171. Will close this issue.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-1597) Implement profile for integration with Signavio's DMN modeler
by Michael Biarnes Kiefer (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1597?page=com.atlassian.jira.plugi... ]
Michael Biarnes Kiefer updated DROOLS-1597:
-------------------------------------------
Fix Version/s: 7.8.0.Final
(was: 7.7.0.Final)
> Implement profile for integration with Signavio's DMN modeler
> -------------------------------------------------------------
>
> Key: DROOLS-1597
> URL: https://issues.jboss.org/browse/DROOLS-1597
> Project: Drools
> Issue Type: Enhancement
> Components: dmn engine
> Affects Versions: 7.1.0.Beta2
> Reporter: Edson Tirelli
> Assignee: Edson Tirelli
> Fix For: 7.8.0.Final
>
>
> Signavio implements a number of extensions to the DMN standard. As they are a Red Hat partner, we will need to implement a profile in the runtime engine that enables and supports those extensions.
> A short list of extensions is as follows. Details will be added to individual tickets:
> * Support additional FEEL functions and alternate names for existing functions
> * Support the Multi Instance Decision node
> * Support character '?' for interpolation of values in a DT cell
> * Support constraints on List inputs in DT cells
> * Support model composition through BKMs
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)