As mentioned above, I'd like to see if we can come to agreement on a couple common
transport protocol configs so in AS 5 we can use the same singleton_name values for the
stacks JBM uses and the stacks other AS services use. The benefit is that we simplify our
users' lives quite a bit by opening fewer sockets. Managing multiple multicast sockets
is a significant problem for users when it comes to maintaining cluster isolation; the
fewer sockets, the better.
If users want to optimize away from the defaults, that's easy to do; just change the
singleton_name to something different.
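For example, something like this (just a sketch, with the stack name invented and the
other attributes elided) would put a stack's transport back on its own socket rather
than the shared one:

| <!-- Sketch: give this stack's transport its own socket -->
| <UDP
|     singleton_name="my-own-stack"
|     ...
| />

Any stack whose transport has a unique singleton_name gets its own sockets and thread
pools.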
So, I'd like to see if as a default we can use a common UDP protocol config for the
"jbm-control" stack and the general "udp" stack. All other AS
services by default use the "udp" stack.
Might as well discuss whether we want a common TCP protocol between the
"jbm-data" stack and the general "tcp" stack. That's not a
particular priority for me; just something to think about.
I've compared the UDP protocol configs between the "jbm-control" stack and
the "udp" stack. Very similar. Here are the differences:
mcast_addr:
"udp" = ${jgroups.udp.mcast_addr:228.11.11.11}
"jbm-control" =
${jboss.messaging.controlchanneludpaddress,jboss.partition.udpGroup:228.7.7.7}
Basically, JBM has added its own system property, defaulting to the regular AS one if
JBM's isn't set. How important is using that property? Could it go in a
commented-out UDP config in the jbm-control stack with a different singleton name, e.g.:
| <stack name="jbm-control"
|        description="Stack optimized for the JBoss Messaging Control Channel">
|   <config>
|     <!-- Shared transport protocol used with other AS services -->
|     <UDP
|         singleton_name="udp"
|         mcast_addr="${jgroups.udp.mcast_addr:228.11.11.11}"
|         mcast_port="${jgroups.udp.mcast_port:45688}"
|         ...
|     />
|     <!-- Uncomment this and comment out the above if you don't want
|          to share a transport protocol with other AS services -->
|     <!--
|     <UDP
|         singleton_name="jbm-control"
|         mcast_addr="${jboss.messaging.controlchanneludpaddress,jboss.partition.udpGroup:228.7.7.7}"
|         mcast_port="${jboss.messaging.controlchanneludpport:45568}"
|         ...
|     />
|     -->
mcast_port:
"udp" = ${jgroups.udp.mcast_port:45688}
"jbm-control" = ${jboss.messaging.controlchanneludpport:45568}
See mcast_addr discussion above. No matter what, I just noticed that "udp" and
"jbm-control" are using the same port; that needs to change if we don't use
the same singleton_name. I'll change the "udp" one in a minute.
loopback:
"udp" = true
"jbm-control" = false.
We found that FLUSH behaves badly if loopback=false and the interface the channel is using
doesn't properly support multicast. So we changed the AS to loopback=true. In that
situation the channel doesn't work correctly either way, but with loopback=true nodes just
don't see each other, clusters don't form, and people can debug the problem using the
techniques discussed for years on our wiki and in the JGroups docs. With loopback=false,
you get weird, cryptic errors from FLUSH. See
http://lists.jboss.org/pipermail/jboss-development/2008-March/011595.html .
ip_ttl:
"udp" = ${jgroups.udp.ip_ttl:2}
"jbm-control" = ${jboss.messaging.ipttl:8}
Different system property, different value. I can't see any reason why we should use
a different system property by default. I prefer the "2" value (limit mcast
propagation) but if there is a reason for "8" I'd happily switch to it if it
lets us have a shared config. :)
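If there is a reason to keep a JBM-specific property here, one option is the same
comma-separated fallback syntax the jbm-control mcast_addr already uses, so JBM's
property wins when set and we otherwise share the AS default (a sketch; I haven't tried
this exact line):

| ip_ttl="${jboss.messaging.ipttl,jgroups.udp.ip_ttl:2}"

In the default case both stacks would then resolve to the same ttl value.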
enable_bundling:
"udp" = true
"jbm-control" = false
Need Bela's input here. I imagine JBM is concerned about latency, which is why they
chose "false". I need to perf test http session replication with bundling on
and off and see the difference. If it's not huge and JBM really needs
'false', I'm personally comfortable with 'false' as a default.
thread_pool.max_threads:
"udp" = 25
"jbm-control" = 200
The "udp" value is too low. I'd be happy to use the JBM value.
(BTW, both "udp" and "jbm-control" have
thread_pool.min_threads="1" and thread_pool.keep_alive_time="5000").
thread_pool.queue_enabled and thread_pool.queue_max_size:
"udp" = false and 100
"jbm-control" = true and 1000
Need Bela's input here. The "udp" stack values came long ago from a JGroups
stacks.xml. I'd imagine the JBM values would be more performant.
Looking at that list, I don't see any show stoppers. Comparing the TCP protocol config
in "tcp" and "jbm-data" shows basically the same set of differences.
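To make the proposal concrete, here's a rough sketch of the shared UDP transport
protocol I'm suggesting, using my preferred values from the list above. The
enable_bundling and thread_pool queue settings are just placeholders pending Bela's
input, and the remaining attributes are elided:

| <UDP
|     singleton_name="udp"
|     mcast_addr="${jgroups.udp.mcast_addr:228.11.11.11}"
|     mcast_port="${jgroups.udp.mcast_port:45688}"
|     ip_ttl="${jgroups.udp.ip_ttl:2}"
|     loopback="true"
|     enable_bundling="true"
|     thread_pool.min_threads="1"
|     thread_pool.max_threads="200"
|     thread_pool.keep_alive_time="5000"
|     thread_pool.queue_enabled="true"
|     thread_pool.queue_max_size="1000"
|     ...
| />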
Comments?
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4144055#...