[JBoss JIRA] (JBEE-122) Concurrency Utilities for Java EE (JSR-236)
by Shelly McGowan (JIRA)
[ https://issues.jboss.org/browse/JBEE-122?page=com.atlassian.jira.plugin.s... ]
Shelly McGowan commented on JBEE-122:
-------------------------------------
No rename necessary. Was intended for guidance only.
Prior to release, org.jboss:jboss-parent should be bumped to version 10 (trivial, but keep in sync with AS8 master)
Regarding the artifactId, jboss-conc-api_1.0_spec would be preferable to associate the Spec Version with this API set.
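For illustration, the dependency with the suggested artifactId would look as follows (groupId and version copied from the original proposal below; a sketch of the naming, not final coordinates):
{code}
<dependency>
    <groupId>org.jboss.spec.javax.enterprise.concurrent</groupId>
    <artifactId>jboss-conc-api_1.0_spec</artifactId>
    <version>1.0.0.Final-SNAPSHOT</version>
</dependency>
{code}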
> Concurrency Utilities for Java EE (JSR-236)
> --------------------------------------------
>
> Key: JBEE-122
> URL: https://issues.jboss.org/browse/JBEE-122
> Project: JBoss JavaEE Spec APIs
> Issue Type: Sub-task
> Reporter: Shelly McGowan
> Fix For: JavaEE 7 Spec APIs 1.0.0.Beta1
>
>
> These APIs are required as part of our Java EE 7 implementation.
> They should be added to the org.jboss.spec project:
> {code}
> <dependency>
> <groupId>org.jboss.spec.javax.enterprise.concurrent</groupId>
> <artifactId>jboss-concurrent-api_1.0_spec</artifactId>
> <version>1.0.0.Final-SNAPSHOT</version>
> </dependency>
> {code}
> Project can be created in http://github.com/jboss/jboss-concurrent-api_spec
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (JGRP-1605) API breakage
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1605?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1605:
---------------------------
Description:
API changes to be done in 4.0, which break code:
* MessageDispatcher: remove MessageListener
* Merge AsyncRequestHandler and RequestHandler, OR make them 2 separate interfaces, i.e. AsyncRH doesn't extend RH
* Remove @Deprecated methods, properties or classes
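The second bullet's "two separate interfaces" option could look roughly like the sketch below. Message and Response are simplified stand-ins, not the real JGroups classes, and the handler signatures mirror the existing API only loosely:

```java
// Sketch of the "two separate interfaces" option for JGRP-1605.
// Message and Response are simplified stand-ins, NOT the real JGroups classes.
class Message {
    final String payload;
    Message(String payload) { this.payload = payload; }
}

interface Response {
    void send(Object reply, boolean isException);
}

// Synchronous handler: returns the reply directly.
interface RequestHandler {
    Object handle(Message msg) throws Exception;
}

// Asynchronous handler: deliberately does NOT extend RequestHandler;
// the reply is delivered later through the Response callback.
interface AsyncRequestHandler {
    void handle(Message msg, Response response) throws Exception;
}

public class Main {
    public static void main(String[] args) throws Exception {
        RequestHandler sync = msg -> "echo:" + msg.payload;
        System.out.println(sync.handle(new Message("ping"))); // echo:ping

        AsyncRequestHandler async =
            (msg, rsp) -> rsp.send("echo:" + msg.payload, false);
        async.handle(new Message("ping"),
                     (reply, isException) -> System.out.println(reply)); // echo:ping
    }
}
```

Keeping the two interfaces unrelated means callers that only ever reply asynchronously are not forced to provide a meaningless synchronous handle() implementation.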
was:
API changes to be done in 4.0, which break code:
* MessageDispatcher: remove MessageListener
* Merge AsyncRequestHandler and RequestHandler, OR make them 2 separate interfaces, i.e. AsyncRH doesn't extend RH
> API breakage
> ------------
>
> Key: JGRP-1605
> URL: https://issues.jboss.org/browse/JGRP-1605
> Project: JGroups
> Issue Type: Task
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0
>
>
> API changes to be done in 4.0, which break code:
> * MessageDispatcher: remove MessageListener
> * Merge AsyncRequestHandler and RequestHandler, OR make them 2 separate interfaces, i.e. AsyncRH doesn't extend RH
> * Remove @Deprecated methods, properties or classes
[JBoss JIRA] (JGRP-1564) TP: passing messages up in batches (part I)
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1564?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1564 at 2/28/13 6:45 AM:
---------------------------------------------------------
The first part is done. A quick perf test showed:
h4. MPerf (fast.xml)
(requests/sec/node, 1000)
||Nodes||2||4||6||8||
|old|111|143|107|101|
|new|113|148|117|115|
h4. UnicastTestRpc (fast.xml)
||Node||2||
|old|111|
|new|111|
h4. UPerf (fast.xml)
(requests/sec/node)
||Node||4||8||
|old|6'818|5'352|
|new|7'607|6'211|
was (Author: belaban):
The first part is done. A quick perf test showed:
MPerf (fast.xml):
-----------------
(requests/sec/node, 1000)
||Nodes||2||4||6||8||
|old|111|143|107|101|
|new|113|148|117|115|
UnicastTestRpc (fast.xml):
--------------------------
||Node||2||
|old|111|
|new|111|
UPerf (fast.xml):
-----------------
(requests/sec/node)
||Node||4||8||
|old|6'818|5'352|
|new|7'607|6'211|
> TP: passing messages up in batches (part I)
> -------------------------------------------
>
> Key: JGRP-1564
> URL: https://issues.jboss.org/browse/JGRP-1564
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.3
>
>
> When B receives a batch of 5 messages from A (unicast or multicast), then B uses the *same thread* to send the 5 messages up (this isn't the case for OOB messages).
> It would be more efficient to either have different threads passing the 5 messages up, or use a new *message batch event type* to pass all 5 messages up in one go.
> The advantage of different threads is that all 5 threads add their message to the window, but only 1 removes them and passes them up, rather than each thread adding and removing its own message (fewer lock acquisitions).
> We could try moving the unmarshalling of messages and message batches into TP.receive(). If a batch was received, that code could unmarshal the 5 messages and pass them to corresponding thread pools to send them up.
> The unmarshalling shouldn't take long, so TP.receive() should return quickly.
> This approach would allow us to send OOB messages in message batches, too (currently not allowed).
> The advantage of a message batch is that we pass *one* event up the stack, passing only *once* through all protocols from TP to UNICAST/2 and NAKACK/2, and not 5 times. Also, adding 5 messages to the window under the same lock is more efficient than acquiring the lock 5 times. Ditto for removal.
> The disadvantage is that we now need to handle a different event type (all protocols under UNICAST/NAKACK), e.g. ENCRYPT, SIZE, FRAG(2) (if placed under UNICAST/NAKACK), COMPRESS etc. However, we could add another up(Batch) method, which by default (in Protocol):
> - removes all messages for a given protocol P (by P.ID)
> and calls up(Event.MSG, msg) for all messages in the batch
> - calls up_prot.up(batch) if the batch is not empty
> This would allow for all current protocols to continue working and only the protocols which don't check for headers and/or need special processing (such as UNICAST and NAKACK) would have to implement up(Batch).
> This solution would be better than introducing another event type MSG_BATCH, as not every protocol overriding up(Event) calls super.up(Event).
> However, this solution is not symmetric, i.e. messages are batched at the transport level, and should be unbatched at the transport level of the receiver(s) as well...
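The default up(Batch) behavior proposed in the description can be sketched as below. Msg, MessageBatch and Protocol are simplified stand-ins for the real JGroups classes; only the control flow from the comment is modeled (remove this protocol's messages by ID, deliver them one by one, pass the non-empty remainder up):

```java
import java.util.ArrayList;
import java.util.List;

class Msg {
    final short protId;   // id of the protocol whose header this message carries
    final String payload;
    Msg(short protId, String payload) { this.protId = protId; this.payload = payload; }
}

class MessageBatch {
    final List<Msg> msgs = new ArrayList<>();
    // remove and return all messages carrying a header for protocol `id`
    List<Msg> removeAll(short id) {
        List<Msg> removed = new ArrayList<>();
        msgs.removeIf(m -> { if (m.protId != id) return false; removed.add(m); return true; });
        return removed;
    }
    boolean isEmpty() { return msgs.isEmpty(); }
}

class Protocol {
    Protocol up_prot;
    final short id;
    Protocol(short id) { this.id = id; }

    // existing single-message path
    void up(Msg msg) { if (up_prot != null) up_prot.up(msg); }

    // default batch path per the comment: pull out this protocol's messages,
    // deliver them one by one, then pass the remaining batch further up
    void up(MessageBatch batch) {
        for (Msg m : batch.removeAll(id))
            up(m);
        if (!batch.isEmpty() && up_prot != null)
            up_prot.up(batch);
    }
}

public class Main {
    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        Protocol p1 = new Protocol((short) 1);
        // stand-in for the protocol above p1, recording what reaches it
        p1.up_prot = new Protocol((short) 99) {
            @Override void up(Msg m) { seen.add(m.payload); }
            @Override void up(MessageBatch b) {
                for (Msg m : b.msgs) seen.add("batched:" + m.payload);
            }
        };
        MessageBatch batch = new MessageBatch();
        batch.msgs.add(new Msg((short) 1, "a"));
        batch.msgs.add(new Msg((short) 2, "b"));
        batch.msgs.add(new Msg((short) 1, "c"));
        p1.up(batch);
        System.out.println(seen); // [a, c, batched:b]
    }
}
```

With this default, a protocol that never inspects headers inherits correct behavior for free; only header-aware protocols like UNICAST and NAKACK would need their own up(Batch).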
[JBoss JIRA] (JGRP-1564) TP: passing messages up in batches (part I)
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1564?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1564 at 2/28/13 6:44 AM:
---------------------------------------------------------
With batching part II implemented, the results for MPerf were more or less the same. UPerf changed slightly:
h4. UPerf (fast.xml):
(requests/sec/node)
||Node||2||4||6||8||
|batch I|n/a|7'607|n/a|6'211|
|batch II|8'012 |{color:green}8'052{color}|7'377|{color:green}6'937{color}|
On the Red Hat cluster-XX lab:
||Node||2||4||6||8||
|batch II|11'376|15'958|15'932|14'925|
Compared to the home cluster, in the Red Hat lab every process ran on a separate physical box (no sharing of bandwidth, etc.).
was (Author: belaban):
With batch part II implemented, the results for MPerf were more or less the same. UPerf changed slightly:
h4. UPerf (fast.xml):
(requests/sec/node)
||Node||2||4||6||8||
|batch I|n/a|7'607|n/a|6'211|
|batch II|8012 |{color:green}8052{color}|7377|{color:green}6937{color}|
On the Red Hat cluster-XX lab:
||Node||2||4||6||8||
|batch II|11376|15958|15932|14925|
Compared to the home cluster, in the Red Hat lab every process ran on a separate physical box (no sharing of bandwidth, etc.).
> TP: passing messages up in batches (part I)
> -------------------------------------------
>
> Key: JGRP-1564
> URL: https://issues.jboss.org/browse/JGRP-1564
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.3
>
>