[JBoss JIRA] (LOGTOOL-85) Suppress all warnings in generated sources
by Vlad Arkhipov (JIRA)
Vlad Arkhipov created LOGTOOL-85:
------------------------------------
Summary: Suppress all warnings in generated sources
Key: LOGTOOL-85
URL: https://issues.jboss.org/browse/LOGTOOL-85
Project: Log Tool
Issue Type: Enhancement
Security Level: Public (Everyone can see)
Affects Versions: 1.2.0.Final
Reporter: Vlad Arkhipov
Assignee: James Perkins
Generated logger/message implementations may cause the compiler to emit warnings (for example, if your interface method has generic parameters). This becomes a problem if you compile with the -Xlint:all and -Werror flags.
I suppose there is no real reason to make the processor honour generics, so these warnings can be safely suppressed?
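To make the failure mode concrete, here is a minimal sketch of an interface that could trigger it, written against the jboss-logging annotations (the annotation package and the generic bound here are illustrative; exact package names vary between releases). The generated implementation has to deal with the erased type parameter, which can produce unchecked/raw-type warnings that -Werror then escalates into build failures:
{code}
import java.io.Serializable;

import org.jboss.logging.Logger;
import org.jboss.logging.annotations.LogMessage;
import org.jboss.logging.annotations.Message;
import org.jboss.logging.annotations.MessageLogger;

// Illustrative only: a message-logger interface with a generic method.
// The processor-generated implementation class has to implement this
// method, and its handling of <T> is what can trip -Xlint:all.
@MessageLogger(projectCode = "EXMPL")
public interface ExampleLogger {

    @LogMessage(level = Logger.Level.INFO)
    @Message(id = 1, value = "Processing item %s")
    <T extends Serializable> void processingItem(T item);
}
{code}
Emitting {{@SuppressWarnings("all")}} (or at least {{"unchecked"}}) on the generated classes, as the summary suggests, would keep such builds green without changing runtime behaviour.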
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (DROOLS-583) Drools WB jcr2vfs: make the migration tool work again
by Petr Široký (JIRA)
Petr Široký created DROOLS-583:
----------------------------------
Summary: Drools WB jcr2vfs: make the migration tool work again
Key: DROOLS-583
URL: https://issues.jboss.org/browse/DROOLS-583
Project: Drools
Issue Type: Bug
Security Level: Public (Everyone can see)
Affects Versions: 6.2.0.Beta1
Reporter: Petr Široký
Assignee: Petr Široký
The current 6.2.0 SNAPSHOTs of the jcr2vfs tool are broken. There seems to be an incompatibility between Drools 5.x and 6.x.
{code}
Asset [Bankruptcy history] with format [brl] is being migrated...
java.lang.ClassCastException: org.drools.core.base.accumulators.BigDecimalSumAccumulateFunction cannot be cast to org.drools.runtime.rule.AccumulateFunction
at org.drools.compiler.PackageBuilderConfiguration.loadAccumulateFunction(PackageBuilderConfiguration.java:530)
at org.drools.compiler.PackageBuilderConfiguration.buildAccumulateFunctionsMap(PackageBuilderConfiguration.java:479)
at org.drools.compiler.PackageBuilderConfiguration.init(PackageBuilderConfiguration.java:194)
at org.drools.compiler.PackageBuilderConfiguration.<init>(PackageBuilderConfiguration.java:165)
{code}
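The trace points at a classloading/API mismatch: the builder loads an accumulate function by name, but the class it finds on the classpath implements the 6.x interface, not the legacy 5.x one it casts to. A hedged reconstruction of that failing shape (illustrative only, not the actual PackageBuilderConfiguration source):
{code}
// Illustrative sketch of the failure, not the real Drools code.
public class AccumulateFunctionLoadSketch {

    static org.drools.runtime.rule.AccumulateFunction load(String className) throws Exception {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        // With 6.x jars on the classpath this resolves the 6.x class, e.g.
        // org.drools.core.base.accumulators.BigDecimalSumAccumulateFunction ...
        Class<?> clazz = cl.loadClass(className);
        Object fn = clazz.newInstance();
        // ... which implements the 6.x AccumulateFunction interface, so casting
        // it to the legacy 5.x type throws the ClassCastException shown above.
        return (org.drools.runtime.rule.AccumulateFunction) fn;
    }
}
{code}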
Another very serious issue is that the Jenkins drools-wb CI build succeeds even when the jcr2vfs tests fail. This needs to be fixed so that when the tests fail, the build fails too.
I will try to fix the tool in time for 6.2.0.Final.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-2770) CDI Decorator should be enabled on WebSocket endpoint
by Jozef Hartinger (JIRA)
[ https://issues.jboss.org/browse/WFLY-2770?page=com.atlassian.jira.plugin.... ]
Jozef Hartinger commented on WFLY-2770:
---------------------------------------
Plus, if the bean is not enabled for some reason (e.g. it is a disabled alternative, or it is specialized), we should probably not be applying decorators to it?
> CDI Decorator should be enabled on WebSocket endpoint
> -----------------------------------------------------
>
> Key: WFLY-2770
> URL: https://issues.jboss.org/browse/WFLY-2770
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public (Everyone can see)
> Components: CDI / Weld, Web (Undertow)
> Affects Versions: 8.0.0.CR1
> Reporter: Antoine Sabot-Durand
> Assignee: Stuart Douglas
> Attachments: websocket-decorator.zip
>
>
> Defining a CDI decorator on a WebSocket endpoint doesn't work. I know it's not strictly required by the WebSocket spec (it only mentions interceptors), but as it works in GlassFish 4, it would be nice and less confusing to add this feature to WildFly.
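For reference, a minimal sketch of what the attached reproducer presumably exercises (all names here are hypothetical; in a real application each type would be a public class in its own file, and the decorator would still need to be enabled in beans.xml or via @Priority):
{code}
import javax.decorator.Decorator;
import javax.decorator.Delegate;
import javax.inject.Inject;
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

interface Greeter {
    String greet(String name);
}

// The WebSocket endpoint is a CDI bean in an EE 7 container ...
@ServerEndpoint("/greet")
class GreeterEndpoint implements Greeter {
    @OnMessage
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// ... so a decorator on its interface should wrap it. This is what
// reportedly works in GlassFish 4 but not in WildFly.
@Decorator
abstract class LoggingGreeter implements Greeter {
    @Inject
    @Delegate
    Greeter delegate;

    @Override
    public String greet(String name) {
        System.out.println("greet(" + name + ")"); // cross-cutting behaviour
        return delegate.greet(name);
    }
}
{code}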
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-2770) CDI Decorator should be enabled on WebSocket endpoint
by Jozef Hartinger (JIRA)
[ https://issues.jboss.org/browse/WFLY-2770?page=com.atlassian.jira.plugin.... ]
Jozef Hartinger commented on WFLY-2770:
---------------------------------------
You may end up using the Bean object for a subclass of the component class.
> CDI Decorator should be enabled on WebSocket endpoint
> -----------------------------------------------------
>
> Key: WFLY-2770
> URL: https://issues.jboss.org/browse/WFLY-2770
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public (Everyone can see)
> Components: CDI / Weld, Web (Undertow)
> Affects Versions: 8.0.0.CR1
> Reporter: Antoine Sabot-Durand
> Assignee: Stuart Douglas
> Attachments: websocket-decorator.zip
>
>
> Defining a CDI decorator on a WebSocket endpoint doesn't work. I know it's not strictly required by the WebSocket spec (it only mentions interceptors), but as it works in GlassFish 4, it would be nice and less confusing to add this feature to WildFly.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-3711) Topology updates of EJBClient ClusterContexts not being processed correctly after failover
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/WFLY-3711?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz commented on WFLY-3711:
-------------------------------------------
There were three problems described here:
- cluster contexts not getting closed on client
- unnecessary ClusterNodeManagers being created as a result of cluster topology updates
- no differentiation between cluster topology message types
The first issue is addressed by the PR attached to this issue (6643).
The second problem is addressed by EJBCLIENT-110.
The third issue requires protocol changes, and it may be that such changes are not absolutely necessary. Cluster topology updates arriving at an EJBClient may be sent directly, and contain full cluster topologies, or via gossip, and contain only changes to the cluster topology. Direct receipt happens, for example, when a node joins a cluster and immediately sends a full cluster topology update to the client, if it is connected; another example is when a client connects to a node, at which point a full topology update is again sent to the client. Indirect or gossip information, on the other hand, is received from members of the same cluster, who just note which (other) nodes have just joined or left the cluster; in other words, the changes or delta of the topology. Although delta updates and complete topology updates are not differentiated (and they almost certainly should be), it looks very much as though the current implementation will eventually end up with the same ClusterContext membership.
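For the second problem, the essence of the fix is to make node-manager registration idempotent, so that a topology update only creates managers (and their EJBReceivers/channels) for genuinely new nodes. A hedged sketch of the idea, with hypothetical names rather than the actual EJBClient code:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only, not the real ClusterContext implementation.
public class ClusterContextSketch {

    private final Map<String, Object> nodeManagers = new ConcurrentHashMap<>();

    public void onTopologyUpdate(Iterable<String> nodeNames) {
        for (String name : nodeNames) {
            // computeIfAbsent creates a manager at most once per node,
            // instead of once per received topology update.
            nodeManagers.computeIfAbsent(name, this::createNodeManager);
        }
    }

    private Object createNodeManager(String name) {
        // Placeholder for the expensive part: registering an EJBReceiver
        // and opening a channel to the remote node.
        return new Object();
    }
}
{code}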
> Topology updates of EJBClient ClusterContexts not being processed correctly after failover
> ------------------------------------------------------------------------------------------
>
> Key: WFLY-3711
> URL: https://issues.jboss.org/browse/WFLY-3711
> Project: WildFly
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Clustering
> Affects Versions: 9.0.0.Beta1
> Reporter: Richard Achmatowicz
> Assignee: Richard Achmatowicz
>
> ClusterContexts are used by EJBClient to keep track of the current set of nodes in a cluster, so that if an EJBClient invocation fails on one node, it may failover to another node in the same cluster. The ClusterContext is made up of ClusterNodeManagers which are responsible for setting up the connections between the EJBClient and the nodes in the cluster.
> Cluster topology updates are sent to registered EJBClients whenever the cluster topology changes (nodes join or leave a cluster). These topology updates are processed on the client side by ClusterTopologyUpdateHandler and are used to update the current contents of the associated ClusterContext held on the client.
> The current implementation of the handling of topology updates does not correctly handle the addition/removal of ClusterNodeManagers from the cluster context - namely, rather than check whether or not a new ClusterNodeManager really needs to be added, ClusterNodeManagers are added for every node in the received topology update, leading to many unnecessary EJBReceivers and their channels being created.
> The logs show that up to 18 cluster node manager instances may be created, have their EJBReceivers registered and channels to the remote node created, when only one node has been added to the cluster.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-3711) Topology updates of EJBClient ClusterContexts not being processed correctly after failover
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/WFLY-3711?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz commented on WFLY-3711:
-------------------------------------------
Attached PR which implements the above change.
> Topology updates of EJBClient ClusterContexts not being processed correctly after failover
> ------------------------------------------------------------------------------------------
>
> Key: WFLY-3711
> URL: https://issues.jboss.org/browse/WFLY-3711
> Project: WildFly
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Clustering
> Affects Versions: 9.0.0.Beta1
> Reporter: Richard Achmatowicz
> Assignee: Richard Achmatowicz
>
> ClusterContexts are used by EJBClient to keep track of the current set of nodes in a cluster, so that if an EJBClient invocation fails on one node, it may failover to another node in the same cluster. The ClusterContext is made up of ClusterNodeManagers which are responsible for setting up the connections between the EJBClient and the nodes in the cluster.
> Cluster topology updates are sent to registered EJBClients whenever the cluster topology changes (nodes join or leave a cluster). These topology updates are processed on the client side by ClusterTopologyUpdateHandler and are used to update the current contents of the associated ClusterContext held on the client.
> The current implementation of the handling of topology updates does not correctly handle the addition/removal of ClusterNodeManagers from the cluster context - namely, rather than check whether or not a new ClusterNodeManager really needs to be added, ClusterNodeManagers are added for every node in the received topology update, leading to many unnecessary EJBReceivers and their channels being created.
> The logs show that up to 18 cluster node manager instances may be created, have their EJBReceivers registered and channels to the remote node created, when only one node has been added to the cluster.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-3711) Topology updates of EJBClient ClusterContexts not being processed correctly after failover
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/WFLY-3711?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz updated WFLY-3711:
--------------------------------------
Git Pull Request: https://github.com/wildfly/wildfly/pull/6643
> Topology updates of EJBClient ClusterContexts not being processed correctly after failover
> ------------------------------------------------------------------------------------------
>
> Key: WFLY-3711
> URL: https://issues.jboss.org/browse/WFLY-3711
> Project: WildFly
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Clustering
> Affects Versions: 9.0.0.Beta1
> Reporter: Richard Achmatowicz
> Assignee: Richard Achmatowicz
>
> ClusterContexts are used by EJBClient to keep track of the current set of nodes in a cluster, so that if an EJBClient invocation fails on one node, it may failover to another node in the same cluster. The ClusterContext is made up of ClusterNodeManagers which are responsible for setting up the connections between the EJBClient and the nodes in the cluster.
> Cluster topology updates are sent to registered EJBClients whenever the cluster topology changes (nodes join or leave a cluster). These topology updates are processed on the client side by ClusterTopologyUpdateHandler and are used to update the current contents of the associated ClusterContext held on the client.
> The current implementation of the handling of topology updates does not correctly handle the addition/removal of ClusterNodeManagers from the cluster context - namely, rather than check whether or not a new ClusterNodeManager really needs to be added, ClusterNodeManagers are added for every node in the received topology update, leading to many unnecessary EJBReceivers and their channels being created.
> The logs show that up to 18 cluster node manager instances may be created, have their EJBReceivers registered and channels to the remote node created, when only one node has been added to the cluster.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (JGRP-1873) UNICAST2: unilateral connection close of receiver can lead to missing seqnos in sender
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1873?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1873:
---------------------------
Description:
In {{UNICAST2}}, if we have a connection between sender A and receiver B, and B closes the connection (but not A), then A can end up with missing messages in its send table.
Example:
* A sends messages to B
* A has an entry for B in its send-table: {{B: 10|20}} (lowest sent=10, highest sent=20)
* B has an entry for A in its recv-table: {{A: 10|20}} (lowest received=10, highest received=20)
* B now gets a view that doesn't contain A and closes its connection to A
** This results in B's connection to A getting removed
* A now sends message {{A::21}}
* B doesn't find an entry in its recv-table for A and sends {{GET-FIRST-SEQNO}} to A
* A receives the request and sends messages {{A::11 first}} - {{A::21}} to B. These messages are sent unreliably, so they can get dropped. Let's assume (for this example) that some of them are dropped.
* B does receive {{A::11 first}} and creates an entry for A in its recv-table: {{A: 11|21}} (next to be received is {{A::12}})
* Now a spurious {{STABLE(A::15)}} message by B is received by A
** This can happen when B sent the {{STABLE}} message *before* its connection to A was removed, but the message was delayed, e.g. by garbage collection
** Note that the connection ID ({{conn-id}}) is the same, so A will _not_ reject the {{STABLE}} message by B
* A receives the {{STABLE}} message and purges elements up to 15, so its new entry for B is: {{B: 15|21}}
* When B asks A for retransmission of messages {{A::12}} - {{A::21}}, A can only retransmit messages 16-21, but *not* {{A::12}} - {{A::15}}!
Depending on which messages from A (which it sent unreliably on reception of {{GET-FIRST-SEQNO}}) were received by B, there would be never-ending retransmission requests from B to A for all or some messages in {{A[12..15]}}, e.g.
{noformat}
WARN [org.jgroups.protocols.UNICAST2] A: (requester=B) message B::13 not found in
retransmission table of B: [15 | 15 | 22] (X elements, Y missing)
{noformat}
h5. Reordering of STABLE messages
In the worst case, as {{STABLE}} messages are not sent reliably and can therefore get dropped or reordered, if A gets another {{STABLE(10)}} message after the {{STABLE(15)}} message, the error message above would look like this:
{noformat}
WARN [org.jgroups.protocols.UNICAST2] A: (requester=B) message B::13 not found in
retransmission table of B: [10 | 10 | 22] (X elements, Y missing)
{noformat}
Note that, with https://issues.jboss.org/browse/JGRP-1872 fixed, this cannot occur anymore.
h5. Solution
There's no real solution but to upgrade to {{UNICAST3}}: when {{UNICAST3}} receives a view, it doesn't _remove_ receive (and send) connections immediately, but merely marks them as _closed_. The connection will only be removed after {{conn_close_timeout}} ms. If B therefore gets further messages from A, it will simply mark the receive connection as _open_ and doesn't need to send a {{GET-FIRST-SEQNO}} message to A as it still has all of A's messages.
We could think of a connection establishment and teardown protocol used by all of the unicast protocols, which establishes connections similarly to TCP: senders would block until a connection is established, new conn-ids would be created, and the current send- and receive-seqnos would be exchanged. This could also be used as a second line of defense, to re-establish the connection when a sender doesn't find messages requested for retransmission by a receiver. As an alternative, we could create a new protocol which syncs a receive table with a sender, e.g. https://issues.jboss.org/browse/JGRP-1875.
To mitigate the above issue, {{FD_ALL}} rather than {{FD}} should be used, so that members suspect each other more or less at the same time. This is not the case with {{FD}}, where multiple hung (or GC'ing) members take N * timeout time to suspect. With {{FD_ALL}}, chances are that A and B suspect each other and later both establish a new connection.
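As a concrete picture of the {{UNICAST3}} close-marking behaviour described under Solution, here is a hedged sketch (hypothetical names, much simplified relative to the real JGroups implementation):
{code}
// Illustrative sketch of mark-closed-then-reap, not the actual UNICAST3 code.
public class ReceiveConnectionSketch {

    private volatile boolean closed;
    private volatile long closeTimeMs;

    // On a view change that excludes the peer: mark closed, don't remove.
    void markClosed() {
        closed = true;
        closeTimeMs = System.currentTimeMillis();
    }

    // A message from the peer simply re-opens the connection; the receive
    // table is still intact, so no GET-FIRST-SEQNO round trip is needed.
    void onMessageFromPeer() {
        closed = false;
    }

    // A reaper removes the connection only after conn_close_timeout ms.
    boolean shouldRemove(long connCloseTimeoutMs) {
        return closed && System.currentTimeMillis() - closeTimeMs >= connCloseTimeoutMs;
    }
}
{code}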
> UNICAST2: unilateral connection close of receiver can lead to missing seqnos in sender
> --------------------------------------------------------------------------------------
>
> Key: JGRP-1873
> URL: https://issues.jboss.org/browse/JGRP-1873
> Project: JGroups
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.5
>
>
> In {{UNICAST2}}, if we have a connection between sender A and receiver B, and B closes the connection (but not A), then A can end up with missing messages in its send table.
> Example:
> * A sends messages to B
> * A has an entry for B in its send-table: {{B: 10|20}} (lowest sent=10, highest sent=20)
> * B has an entry for A in its recv-table: {{A: 10|20}} (lowest received=10, highest received=20)
> * B now gets a view that doesn't contain A and closes its connection to A
> ** This results in B's connection to A getting removed
> * A now sends message {{A::21}}
> * B doesn't find an entry in its recv-table for A and sends {{GET-FIRST-SEQNO}} to A
> * A receives the request and sends messages {{A::11 first}} - {{A::21}} to B. These messages are sent unreliably, so they can get dropped. Let's assume (for this example) that some of them are dropped.
> * B does receive {{A::11 first}} and creates an entry for A in its recv-table: {{A: 11|21}} (next to be received is {{A::12}})
> * Now a spurious {{STABLE(A::15)}} message by B is received by A
> ** This can happen when B sent the {{STABLE}} message *before* its connection to A was removed, but the message was delayed, e.g. by garbage collection
> ** Note that the connection ID ({{conn-id}}) is the same, so A will _not_ reject the {{STABLE}} message by B
> * A receives the {{STABLE}} message and purges elements up to 15, so its new entry for B is: {{B: 15|21}}
> * When B asks A for retransmission of messages {{A::12}} - {{A::21}}, A can only retransmit messages 16-21, but *not* {{A::12}} - {{A::15}}!
> Depending on which messages from A (which it sent unreliably on reception of {{GET-FIRST-SEQNO}}) were received by B, there would be never-ending retransmission requests from B to A for all or some messages in {{A[12..15]}}, e.g.
> {noformat}
> WARN [org.jgroups.protocols.UNICAST2] A: (requester=B) message B::13 not found in
> retransmission table of B: [15 | 15 | 22] (X elements, Y missing)
> {noformat}
> h5. Reordering of STABLE messages
> In the worst case, as {{STABLE}} messages are not sent reliably and can therefore get dropped or reordered, if A gets another {{STABLE(10)}} message after the {{STABLE(15)}} message, the error message above would look like this:
> {noformat}
> WARN [org.jgroups.protocols.UNICAST2] A: (requester=B) message B::13 not found in
> retransmission table of B: [10 | 10 | 22] (X elements, Y missing)
> {noformat}
> Note that, with https://issues.jboss.org/browse/JGRP-1872 fixed, this cannot occur anymore.
> h5. Solution
> There's no real solution but to upgrade to {{UNICAST3}}: when {{UNICAST3}} receives a view, it doesn't _remove_ receive (and send) connections immediately, but merely marks them as _closed_. The connection will only be removed after {{conn_close_timeout}} ms. If B therefore gets further messages from A, it will simply mark the receive connection as _open_ and doesn't need to send a {{GET-FIRST-SEQNO}} message to A as it still has all of A's messages.
> We could think of a connection establishment and teardown protocol used by all of the unicast protocols, which establishes connections similar to TCP. Senders would block until a connection is established etc and new conn-ids would be created, plus the current send- and receive- seqnos would be exchanged. This could also be used as a second line of defense, to re-establish the connection when a sender doesn't find messages requested for retransmission by a receiver. As an alternative, we could create a new protocol which syncs a receive table with a sender, e.g. https://issues.jboss.org/browse/JGRP-1875.
> To mitigate the above issue, {{FD_ALL}} rather than {{FD}} should be used, so that members suspect each other more or less at the same time. This is not the case with FD, where multiple hung (or GC'ing) members take N * timeout time to suspect. With {{FD_ALL}}, chances are that A and B suspect each other and later, both establish a new connection.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (WFLY-2510) Add CLI command to display information about current connection.
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-2510?page=com.atlassian.jira.plugin.... ]
Brian Stansberry updated WFLY-2510:
-----------------------------------
Assignee: Claudio Miranda (was: Darran Lofthouse)
> Add CLI command to display information about current connection.
> ----------------------------------------------------------------
>
> Key: WFLY-2510
> URL: https://issues.jboss.org/browse/WFLY-2510
> Project: WildFly
> Issue Type: Enhancement
> Security Level: Public (Everyone can see)
> Components: CLI
> Reporter: Darran Lofthouse
> Assignee: Claudio Miranda
> Labels: management_security
> Fix For: Awaiting Volunteers
>
>
> It would be useful to have a single command that displays information about the current connection.
> Definitely information about SSL, certificates, the authenticated user, the remote address, etc.
> It could also obtain other common info from the server.
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)