[JBoss JIRA] (JGRP-2029) SEQUENCER locks itself from receiving messages
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2029?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2029:
---------------------------
Fix Version/s: 3.6.11
(was: 3.6.10)
> SEQUENCER locks itself from receiving messages
> ----------------------------------------------
>
> Key: JGRP-2029
> URL: https://issues.jboss.org/browse/JGRP-2029
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.1
> Environment: two-node cluster environment; we are using Infinispan 7.1.1 with JGroups 3.6.1-FINAL inside.
> Reporter: Sean Guo
> Assignee: Bela Ban
> Fix For: 3.6.11, 4.0
>
> Attachments: filtered_threads.txt, jgroups-tcp.xml
>
>
> I attached the filtered thread dump and the jgroups-tcp.xml from one node.
> If I read the code correctly, all the threads reading messages from the socket (INT-1...INT-4) are blocked, waiting for the lock owned by thread #489.
> Please check whether this is a bug or a configuration problem. Currently we have to remove SEQUENCER from the JGroups protocol stack.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2029) SEQUENCER locks itself from receiving messages
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2029?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2029:
--------------------------------
You have a somewhat unusual configuration, e.g.
* SEQUENCER is not where it is supposed to be (below FRAG2)
* FLUSH is in the config; this combination has never been tested. I suggest removing FLUSH.
* PEER_LOCK: not recommended; CENTRAL_LOCK is the recommended lock protocol.
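A stack along the lines suggested above might look like the sketch below. This is an illustration only: the surrounding protocols and attribute values are taken from a stock jgroups-tcp.xml, not from the attached config, and should be adapted to the actual environment.

```xml
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <!-- SEQUENCER below FRAG2, as suggested -->
    <SEQUENCER/>
    <FRAG2 frag_size="60000"/>
    <!-- CENTRAL_LOCK instead of PEER_LOCK; FLUSH removed entirely -->
    <CENTRAL_LOCK/>
</config>
```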
> SEQUENCER locks itself from receiving messages
> ----------------------------------------------
>
> Key: JGRP-2029
> URL: https://issues.jboss.org/browse/JGRP-2029
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.1
> Environment: two-node cluster environment; we are using Infinispan 7.1.1 with JGroups 3.6.1-FINAL inside.
> Reporter: Sean Guo
> Assignee: Bela Ban
> Fix For: 3.6.11, 4.0
>
> Attachments: filtered_threads.txt, jgroups-tcp.xml
>
>
> I attached the filtered thread dump and the jgroups-tcp.xml from one node.
> If I read the code correctly, all the threads reading messages from the socket (INT-1...INT-4) are blocked, waiting for the lock owned by thread #489.
> Please check whether this is a bug or a configuration problem. Currently we have to remove SEQUENCER from the JGroups protocol stack.
--
[JBoss JIRA] (JGRP-2069) UNICAST3: bypass or remove when running over TCP
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2069?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2069:
---------------------------
Fix Version/s: 3.6.11
(was: 3.6.10)
> UNICAST3: bypass or remove when running over TCP
> ------------------------------------------------
>
> Key: JGRP-2069
> URL: https://issues.jboss.org/browse/JGRP-2069
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 3.6.11, 4.0
>
>
> When running over TCP as the transport, UNICAST3 is still required: while TCP retransmits messages reliably and also provides sender-FIFO ordering, the receiver's thread pool might be exhausted and thus a message might get rejected.
> However, *if* the regular and OOB thread pools are disabled, we could actually bypass (or completely remove) UNICAST3. Note that if messages get dropped by a protocol further up the stack, there will be no retransmission.
> SOLUTION:
> * Document this behavior
> * Emit an INFO message (or automatically bypass UNICAST3) when run over a TCP transport and both OOB and regular pools are disabled
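The precondition for bypassing UNICAST3 (both pools disabled on the transport) could be expressed in the config roughly as follows. The attribute names follow the 3.6-era transport configuration and should be verified against the version in use:

```xml
<!-- Sketch: TCP transport with both thread pools disabled, so messages
     are delivered on the receiver thread and cannot be rejected by a
     full pool (attribute names per the 3.6 transport config) -->
<TCP bind_port="7800"
     thread_pool.enabled="false"
     oob_thread_pool.enabled="false"/>
```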
--
[JBoss JIRA] (WFCORE-1614) Requesting CLI Equivalent of Remote Echo / set -x in non-interactive mode (from within scripts)
by Jean-Francois Denise (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1614?page=com.atlassian.jira.plugi... ]
Jean-Francois Denise commented on WFCORE-1614:
----------------------------------------------
That is a good point; I am looking at echoing only the executed branch.
> Requesting CLI Equivalent of Remote Echo / set -x in non-interactive mode (from within scripts)
> -----------------------------------------------------------------------------------------------
>
> Key: WFCORE-1614
> URL: https://issues.jboss.org/browse/WFCORE-1614
> Project: WildFly Core
> Issue Type: Feature Request
> Components: CLI
> Reporter: Jean-Francois Denise
> Assignee: Jean-Francois Denise
> Fix For: 3.0.0.Alpha3
>
> Attachments: test.cli
>
>
> We are proposing here to add a CLI option (a command-line option and an XML element) to make the CLI echo each command and its options in non-interactive mode. This will help match a given command to its output.
> For example, the "ls -l" command output would be:
> [standalone@localhost:9990 /] ls -l
> ATTRIBUTE VALUE TYPE
> launch-type STANDALONE STRING
> management-major-version 5 INT
> management-micro-version 0 INT
> management-minor-version 0 INT
> ...
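A hypothetical invocation of the proposed option might look like the following. The option name {{--echo-command}} is an illustration only; the feature request does not fix the final name:

```
# Run a CLI script non-interactively; each command from test.cli would be
# echoed before its output (option name is a placeholder, not final).
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=test.cli --echo-command
```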
--
[JBoss JIRA] (WFLY-6776) Unclear error message when creating multiple thread pools of the same type for a workmanager
by Lin Gao (JIRA)
[ https://issues.jboss.org/browse/WFLY-6776?page=com.atlassian.jira.plugin.... ]
Lin Gao commented on WFLY-6776:
-------------------------------
{{long-running-threads}}/{{short-running-threads}} in a JCA workmanager are implemented via {{BoundedQueueThreadPoolResourceDefinition}} from the {{threads}} subsystem of wildfly-core.
Checking for duplicate {{long-running-threads}}/{{short-running-threads}} within the same workmanager therefore requires changes in wildfly-core; in particular, the constructor of {{BoundedQueueThreadPoolResourceDefinition}} is currently private, which I think needs to be changed.
> Unclear error message when creating multiple thread pools of the same type for a workmanager
> --------------------------------------------------------------------------------------------
>
> Key: WFLY-6776
> URL: https://issues.jboss.org/browse/WFLY-6776
> Project: WildFly
> Issue Type: Bug
> Components: JCA
> Affects Versions: 10.0.0.Final
> Reporter: Lin Gao
> Assignee: Lin Gao
>
> When there is already a long-running thread pool for a work manager and you try to create another one:
> {{/subsystem=jca/workmanager=default/long-running-threads=custom:add(max-threads=30, queue-length=30)}}
> you only get an opaque error message:
> {{"failure-description" => "WFLYCTL0086: Failed to persist configuration change: WFLYCTL0084: Failed to marshal configuration",}} along with an equally unhelpful {{java.lang.IllegalArgumentException}} in the server log.
> It should be more obvious that the error is that you cannot create two long-running thread pools for the same workmanager.
--
[JBoss JIRA] (JGRP-2075) SYM/ASYM_ENCRYPT: don't use WeakHashMap for old ciphers
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2075?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2075:
---------------------------
Fix Version/s: 3.6.11
(was: 3.6.10)
> SYM/ASYM_ENCRYPT: don't use WeakHashMap for old ciphers
> -------------------------------------------------------
>
> Key: JGRP-2075
> URL: https://issues.jboss.org/browse/JGRP-2075
> Project: JGroups
> Issue Type: Task
> Reporter: Bela Ban
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 3.6.11, 4.0
>
>
> Currently we use a WeakHashMap, but should not, for the reasons outlined below. We could replace it with a LazyRemovalCache. Andrew's email refers to SecretKeys, but this probably also applies to Ciphers.
> Andrew Haley's email:
> {quote}
> TL/DR: Please don't use WeakReferences, SoftReferences, etc. to cache
> any data which might point to native memory. In particular, never do
> this with instances of java.security.Key. Instead, implement either
> some kind of ageing strategy or a fixed-size cache.
> ...
> This is a warning to anybody who might cache crypto keys.
> A customer has been having problems with the exhaustion of native
> memory before the Java heap is full. It was fun trying to track down
> the cause, but it's now happened several times to several customers,
> and it's a serious problem for real-world usage in app servers.
> PKCS#11 is a standard way to communicate between applications and
> crypto libraries. There is a Java crypto provider which supports
> PKCS#11. Some of our customers must use this provider in order to get
> FIPS certification.
> The problem is this:
> A crypto key is a buffer in memory, allocated by the PKCS#11 native
> library. It's accessed via a handle which is stored as an integer
> field in a Java object. This Java object is a PhantomReference, so
> when the garbage collector detects that a crypto key is no longer
> reachable it is closed and the associated native memory is freed.
> Modern garbage collectors don't much bother to process objects in the
> old generation because it's not usually worthwhile. Thus, crypto keys
> don't get recycled very quickly. They can pile up in the old
> generation. This isn't a problem for the Java heap because the
> objects containing the references to crypto keys are very small.
> Unfortunately, the native side of a crypto key is much bigger, maybe
> up to a thousand times bigger. So if we have 4000 stale crypto keys
> in the heap that's not a problem, a few kbytes. But the native memory
> may be a megabyte.
> This problem is made even worse by Tomcat because it uses
> SoftReferences to cache crypto keys. SoftReferences are processed
> lazily, and maybe not at all until the Java heap runs out of memory.
> Unfortunately it doesn't, but the machine runs out of native memory
> instead.
> We could solve this simply by making instances of PKCS#11 keys really
> big Java objects by padding with dummy fields. Then, the GC would
> collect them quickly. This does work but it seriously impacts
> performance. Also, we could tweak the garbage collectors to clear out
> stale references more enthusiastically, but this impacts performance
> even more. There are some controls with the G1 collector which
> process SoftReferences more aggressively and these help, but again at
> the cost of performance.
> Finally: the Shenandoah collector we're working on handles this
> problem much better than the older collectors, but it's some
> way off.
> {quote}
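The "fixed-size cache" alternative Andrew mentions can be sketched in plain Java with an access-ordered {{LinkedHashMap}}. This is an illustration of the idea only, not the actual LazyRemovalCache used in JGroups:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a bounded, access-ordered (LRU) cache: when capacity is
// exceeded, the least recently used entry is evicted deterministically,
// so native resources behind the values can be released without
// depending on the garbage collector ever processing weak/soft refs.
public class BoundedCipherCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCipherCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU eviction
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // In a real key/cipher cache one would also explicitly destroy
        // the evicted value here (e.g. a javax.security.auth.Destroyable)
        // so the native memory is freed immediately.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedCipherCache<String, String> cache = new BoundedCipherCache<>(2);
        cache.put("k1", "cipher1");
        cache.put("k2", "cipher2");
        cache.put("k3", "cipher3"); // evicts k1, the least recently used
        System.out.println(cache.containsKey("k1")); // false
        System.out.println(cache.containsKey("k3")); // true
    }
}
```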
--