[Design the new POJO MicroContainer] - Re: Locks on MainDeployerImpl
by alesj
"alesj" wrote : "adrian(a)jboss.org" wrote :
| | then they might end up processing each other's stuff but they won't miss one.
| I wasn't thinking about missing one.
| I had this scenario in mind (#t = thread #):
|
| 1t - addDeployment
| 1t - process
|
| Just as 1t finishes the undeploy part of process, but before it enters processToDeploy, 2t calls addDeployment - but it's really a re-deploy.
| That means 2t adds its previous context to the undeploy list, which wouldn't be
| processed until 2t calls process, yet its deploy would be picked up by 1t's processToDeploy.
| In that case 2t's re-deploy would run its deploy but never its undeploy,
| which can lead to whatever. :-)
I probably have to handle undeploys in process()
(and not in addDeployment),
the same way I handle deploys in processToDeploy.
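A rough sketch of the fix described above (all names here are hypothetical simplifications, not the real MainDeployerImpl API): addDeployment only records work, and process() itself drains the undeploy queue before the deploy queue, all under one lock, so a re-deploy's undeploy can never be left behind while another thread's processToDeploy picks up its deploy.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch only: addDeployment just queues work; process() drains undeploys
// before deploys under the same lock, so no interleaving can run a
// re-deploy's deploy without its undeploy having been processed first.
public class DeployerSketch {
    private final Deque<String> toUndeploy = new ArrayDeque<>();
    private final Deque<String> toDeploy = new ArrayDeque<>();
    private final List<String> log = new ArrayList<>(); // records actions in order

    // A re-deploy queues the previous context for undeploy plus the new deploy.
    public synchronized void addDeployment(String name, boolean redeploy) {
        if (redeploy) {
            toUndeploy.add(name);
        }
        toDeploy.add(name);
    }

    // Undeploys are handled here, not in addDeployment.
    public synchronized void process() {
        while (!toUndeploy.isEmpty()) {
            log.add("undeploy " + toUndeploy.poll());
        }
        while (!toDeploy.isEmpty()) {
            log.add("deploy " + toDeploy.poll());
        }
    }

    public synchronized List<String> getLog() {
        return new ArrayList<>(log);
    }

    public static void main(String[] args) {
        DeployerSketch d = new DeployerSketch();
        d.addDeployment("app.ear", false); // initial deploy (1t)
        d.process();
        d.addDeployment("app.ear", true);  // re-deploy (2t)
        d.process();
        System.out.println(d.getLog());
        // [deploy app.ear, undeploy app.ear, deploy app.ear]
    }
}
```

Whichever thread calls process() first will see both halves of the re-deploy, in undeploy-then-deploy order.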
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4211792#4211792
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4211792
17 years, 1 month
[Design of Messaging on JBoss (Messaging/JBoss)] - Re: Strings Experiments...
by trustin
This is the microbenchmark code I'm using:
import java.nio.ByteBuffer;
|
| import org.jboss.netty.buffer.ChannelBuffer;
| import org.jboss.netty.buffer.ChannelBuffers;
|
| public class Test {
|     public static void main(String[] args) throws Exception {
|         long startTime, endTime;
|
|         ChannelBuffer buf = ChannelBuffers.dynamicBuffer(1048576);
|         byte[] data = new byte[256];
|
|         startTime = System.nanoTime();
|         for (int j = 0; j < 100; j ++) {
|             buf.clear();
|             for (int i = 0; i < 1048576 / data.length; i ++) {
|                 buf.writeBytes(data);
|             }
|         }
|         endTime = System.nanoTime();
|
|         System.out.println(endTime - startTime);
|
|         startTime = System.nanoTime();
|         for (int j = 0; j < 100; j ++) {
|             buf.clear();
|             for (int i = 0; i < 1048576 / 2; i ++) {
|                 buf.writeShort((short) 0);
|             }
|         }
|         endTime = System.nanoTime();
|
|         System.out.println(endTime - startTime);
|
|         ByteBuffer buf2 = ByteBuffer.allocate(1048576);
|         startTime = System.nanoTime();
|         for (int j = 0; j < 100; j ++) {
|             buf2.clear();
|             for (int i = 0; i < 1048576 / 2; i ++) {
|                 buf2.putShort((short) 0);
|             }
|         }
|         endTime = System.nanoTime();
|
|         System.out.println(endTime - startTime);
|
|         ChannelBuffer buf3 = ChannelBuffers.buffer(1048576);
|         startTime = System.nanoTime();
|         for (int j = 0; j < 100; j ++) {
|             buf3.clear();
|             for (int i = 0; i < 1048576 / 2; i ++) {
|                 buf3.writeShort((short) 0);
|             }
|         }
|         endTime = System.nanoTime();
|
|         System.out.println(endTime - startTime);
|     }
| }
The following is my result:
41097167
| 348230930
| 444482977
| 73107347
It looks like the bounds checks in Netty's DynamicChannelBuffer and in ByteBuffer are what slow down the put operations. DynamicChannelBuffer uses a byte array rather than a ByteBuffer as its internal data store, which is why it does not perform worse than ByteBuffer. It's interesting that ByteBuffer performs even worse than DynamicChannelBuffer, even though DynamicChannelBuffer grows its capacity on demand while ByteBuffer does nothing extra in this test case.
With a non-dynamic ChannelBuffer - just a wrapper around a byte array - no bounds check is done, so it performs pretty well; the only remaining overhead seems to be advancing the buffer position.
I think it is difficult to remove the bounds check from DynamicChannelBuffer, because without it the buffer can't tell when to expand itself. I would suggest using a non-dynamic ChannelBuffer whenever the length of the buffer is known in advance.
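To make the bounds-check/position overhead concrete, here is a minimal sketch using only the JDK (no Netty): ByteBuffer.putShort does a bounds check and position update on every call, while writing into a bare byte[] with an explicit index does neither. Both paths below produce identical bytes, so the comparison can be checked directly.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: the same short values written via ByteBuffer.putShort
// (per-call bounds check + position update) and via raw byte[] indexing.
public class ShortWriteSketch {

    // Write n big-endian shorts via ByteBuffer.
    static byte[] viaByteBuffer(int n, short value) {
        ByteBuffer buf = ByteBuffer.allocate(n * 2);
        for (int i = 0; i < n; i++) {
            buf.putShort(value); // bounds check + position update each call
        }
        return buf.array();
    }

    // Write n big-endian shorts directly into a byte[].
    static byte[] viaArray(int n, short value) {
        byte[] out = new byte[n * 2];
        int pos = 0;
        for (int i = 0; i < n; i++) {
            out[pos++] = (byte) (value >>> 8); // high byte first (big-endian)
            out[pos++] = (byte) value;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] a = viaByteBuffer(4, (short) 0x1234);
        byte[] b = viaArray(4, (short) 0x1234);
        System.out.println(java.util.Arrays.equals(a, b)); // both encodings match
    }
}
```

A non-dynamic ChannelBuffer wrapping a byte array is essentially the second path, which is why it comes out fastest in the benchmark above.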
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4211774#4211774
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4211774
17 years, 1 month
[Design of Messaging on JBoss (Messaging/JBoss)] - Re: Strings Experiments...
by timfox
Removing the + i gives:
| spentTime UTF = 3696
| spentTime UTF = 3587
| spentTime UTF = 3699
| spentTime UTF = 3672
| spentTime UTF = 3806
| spentTime PutSimpleString = 161
| spentTime PutSimpleString = 142
| spentTime PutSimpleString = 133
| spentTime PutSimpleString = 148
| spentTime PutSimpleString = 135
| spentTime putString = 2972
| spentTime putString = 2922
| spentTime putString = 2967
| spentTime putString = 2942
| spentTime putString = 2905
| spentTime putStringNewWay = 974
| spentTime putStringNewWay = 937
| spentTime putStringNewWay = 977
| spentTime putStringNewWay = 955
| spentTime putStringNewWay = 960
|
I also removed putNewString(), since I didn't see the point of it.
Also, you need to test the performance of reading. There's no point writing stuff you never read!
The fact that putString is slower than putting the shorts in one by one suggests to me that the overhead is in the Netty ChannelBufferWrapper implementation.
If you look at the Netty ChannelBufferWrapper implementations you will see they do a lot of work on every put. So we need to minimise the number of puts.
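The batching idea can be sketched like this, using only the JDK (the method names are illustrative, not the actual JBoss Messaging or Netty API): instead of one putShort per character, encode the whole string into a scratch byte[] and hand it over in a single bulk put, so the per-put overhead is paid once per string.

```java
import java.nio.ByteBuffer;

// Sketch of "minimise the number of puts": encode a string's chars into one
// scratch array and issue a single bulk put, instead of one putShort per char.
public class BatchedStringPut {

    // One put per character: each putShort pays its own per-call overhead.
    static void putStringCharByChar(ByteBuffer buf, String s) {
        buf.putInt(s.length());
        for (int i = 0; i < s.length(); i++) {
            buf.putShort((short) s.charAt(i));
        }
    }

    // Batched: encode into a local byte[] first, then one bulk put.
    static void putStringBatched(ByteBuffer buf, String s) {
        buf.putInt(s.length());
        byte[] scratch = new byte[s.length() * 2];
        int pos = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            scratch[pos++] = (byte) (c >>> 8); // big-endian, matching putShort
            scratch[pos++] = (byte) c;
        }
        buf.put(scratch); // a single put call for the whole string
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.allocate(64);
        ByteBuffer b = ByteBuffer.allocate(64);
        putStringCharByChar(a, "hello");
        putStringBatched(b, "hello");
        // Both encodings are byte-for-byte identical.
        System.out.println(java.util.Arrays.equals(a.array(), b.array()));
    }
}
```

Reading can be batched the same way: one bulk get into a byte[], then decode the chars locally.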
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4211759#4211759
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4211759
17 years, 1 month