----- Original Message -----
| From: "Bela Ban" <bban(a)redhat.com>
| To: infinispan-dev(a)lists.jboss.org
| Sent: Friday, May 3, 2013 12:05:41 PM
| Subject: Re: [infinispan-dev] [infinispan-internal] Message flow tracer/analyzer
|
| Re JGroups: most of the kernel and protocols are white boxes, so it
| should be possible to insert hooks in the right places, to get the data
| you want, probably at a smaller cost.
There's no place to hook into packet reception in the unicast/multicast receiver
threads, or to identify how a message sent down the stack gets looped back. And a
failed handover to the OOB/incoming thread pool can have custom rejection policies,
but I probably haven't tested that properly, as there's another check I haven't
adapted (Util.verifyRejectionPolicy()) - shame on me :-(
Also, I'd have to extend multiple protocols (overriding the up method) and add more
checks (such as "does this message have a header for this protocol, and is its type
not ...?") before calling super.up() - whereas now I only have to put the Byteman hook
in the right place, after those checks are already done.
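Just to illustrate what such an extension would look like (a rough sketch against the
JGroups 3.x API; the class name is made up, and MFT's actual hooks don't live in the
protocols at all):

import org.jgroups.Event;
import org.jgroups.Header;
import org.jgroups.Message;

// One extension per traced protocol: intercept up(), inspect the header,
// and only then let the real protocol process the message.
public class TracingUNICAST extends org.jgroups.protocols.UNICAST2 {
    @Override
    public Object up(Event evt) {
        if (evt.getType() == Event.MSG) {
            Message msg = (Message) evt.getArg();
            Header hdr = msg.getHeader(id); // this protocol's own header id
            if (hdr != null) {
                // "does this message have a header for this protocol, and is
                // its type not ...?"
                log.trace("intercepted " + hdr + " from " + msg.getSrc());
            }
        }
        return super.up(evt);
    }
}

Multiply that by every protocol of interest and it gets unwieldy quickly.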
No - although I agree that JGroups is widely extensible, it can't be designed to
offer the absolute freedom MFT requires.
Actually, the only way I'd use JGroups' native pluggable system is to tag all messages
with a header carrying a unique ID, instead of combining ucast/mcast seqnos with
addresses or just using timestamps (matching messages without any kind of seqno is far
from perfect).
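Roughly like this (a sketch only, assuming the JGroups 3.x Header/Streamable API;
the name and the id scheme are made up):

import java.io.DataInput;
import java.io.DataOutput;
import java.util.concurrent.atomic.AtomicLong;
import org.jgroups.Header;

// Carries a cluster-wide unique id so that messages can be matched in
// traces without relying on ucast/mcast seqnos or timestamps.
public class TraceHeader extends Header {
    private static final AtomicLong SEQ = new AtomicLong();
    private long uniqueId;

    public TraceHeader() {} // required for deserialization

    public static TraceHeader next(short nodeId) {
        TraceHeader hdr = new TraceHeader();
        // node-unique prefix + local counter => no collisions across nodes
        hdr.uniqueId = ((long) nodeId << 48) | SEQ.incrementAndGet();
        return hdr;
    }

    @Override
    public int size() { return 8; } // one long

    @Override
    public void writeTo(DataOutput out) throws Exception { out.writeLong(uniqueId); }

    @Override
    public void readFrom(DataInput in) throws Exception { uniqueId = in.readLong(); }
}

The class would still have to be registered with ClassConfigurator and put on every
message low in the stack via Message.putHeader().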
|
| But kudos to you, this framework is certainly already useful! Do you
| have a more detailed doc on what the numbers mean?
More detailed than
https://github.com/rvansa/message-flow-tracer/blob/master/README.txt ?
If anything is missing there, just let me know.
Radim
|
| On 5/3/13 10:36 AM, Radim Vansa wrote:
| >
| >
| > ----- Original Message -----
| > | From: "Galder Zamarreño" <galder(a)redhat.com>
| > | To: "Radim Vansa" <rvansa(a)redhat.com>
| > | Cc: "infinispan-internal Internal" <infinispan-internal(a)redhat.com>
| > | Sent: Thursday, May 2, 2013 7:49:53 PM
| > | Subject: Re: [infinispan-internal] Message flow tracer/analyzer
| > |
| > | Did you look into Twitter's Zipkin? It looks very well suited for this
| > | kind of stuff:
| > |
| > | http://engineering.twitter.com/2012/06/distributed-systems-tracing-with-z...
| > |
| > | It could be used in combination with Byteman, and it gets you a nice
| > | web user interface :)
| >
| > Thanks for pointing me to that, Galder, I should probably read more blogs :)
| >
| > This looks very similar and integrating with the nice GUI could be cool,
| > but I am not sure whether it's worth the effort (read: I probably won't
| > have time for this).
| > The main difference is that with Byteman I don't have to modify the code to
| > pass the Zipkin header along with the request. Not relying on Byteman
| > would be cool, though (I don't believe that the few accesses to a
| > ConcurrentHashMap and to ThreadLocal variables slow the system down 3x by
| > themselves; Byteman must introduce a lot of its own overhead). Eventually
| > I may end up interpreting the rules at the JGroups/ISPN source level and
| > inserting the calls directly into the source code. I know Byteman can do
| > that at the bytecode level, but there are some bugs that prevent me from
| > doing so (and Andrew Dinn noted that these may end up as "won't fix",
| > since fixing them might break other stuff).
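| >
| > (The bookkeeping behind those hooks amounts to little more than this -
| > a sketch with illustrative names, not the actual MFT classes:)
| >
| > import java.util.concurrent.ConcurrentHashMap;
| >
| > // A ThreadLocal holds the span of the currently executing thread; a
| > // ConcurrentHashMap transfers spans across thread handovers, keyed by
| > // the object (message, runnable, ...) being handed over.
| > public final class SpanRegistry {
| >     private static final ThreadLocal<Span> CURRENT = new ThreadLocal<>();
| >     private static final ConcurrentHashMap<Object, Span> HANDOVERS =
| >         new ConcurrentHashMap<>();
| >
| >     static class Span { /* events recorded for one control flow */ }
| >
| >     // hook: the current thread enqueues work for another thread
| >     public static void handOver(Object key) {
| >         Span span = CURRENT.get();
| >         if (span != null)
| >             HANDOVERS.put(key, span);
| >     }
| >
| >     // hook: another thread picks that work up
| >     public static void takeOver(Object key) {
| >         Span span = HANDOVERS.remove(key);
| >         if (span != null)
| >             CURRENT.set(span);
| >     }
| >
| >     // hook: the control flow on this thread is done
| >     public static void finish() {
| >         CURRENT.remove();
| >     }
| > }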
| >
| > Nevertheless, I will probably rename data -> annotation, control flow ->
| > span, message flow -> trace in order to be consistent with Zipkin naming.
| >
| > |
| > | p.s. It's written in Scala.
| >
| > Oh, <irony>great!</irony> ;-)
| >
| > Radim
| >
| >
| > |
| > | On May 2, 2013, at 2:48 PM, Radim Vansa <rvansa(a)redhat.com> wrote:
| > |
| > | > Good news, everyone,
| > | >
| > | > in the last two weeks I've been working on a tool that could help us
| > | > profile Infinispan performance, analyze it, and probably debug some
| > | > stuff as well. While trace logs are the most useful option, they impact
| > | > performance to almost unusable levels, still don't provide enough
| > | > information, have low precision, etc.
| > | > The idea is to analyze the behaviour request by request and track down
| > | > the consequences of each request (put/get/whatever). Currently I have
| > | > a working prototype (I believe already useful) which is able to track
| > | > all messages back to the initial request, record which threads execute
| > | > what, etc. It's Byteman-based - no trace logs or code changes required.
| > | > However, according to my initial testing it reduces overall performance
| > | > 2-3x.
| > | >
| > | > The code is located at https://github.com/rvansa/message-flow-tracer ;
| > | > please look at the README for details on what it can do, and ping me
| > | > if you have any questions/feedback.
| > | >
| > | > Radim
| > | >
| > | > PS: short demo output at http://pastebin.com/raw.php?i=SBQFuG3a
| > | >
| > | > -----------------------------------------------------------
| > | > Radim Vansa
| > | > Quality Assurance Engineer
| > | > JBoss Datagrid
| > | > tel. +420532294559 ext. 62559
| > | >
| > | > Red Hat Czech, s.r.o.
| > | > Brno, Purkyňova 99/71, PSČ 612 45
| > | > Czech Republic
| > | >
| > | >
| > |
| > |
| > | --
| > | Galder Zamarreño
| > | galder(a)redhat.com
| > | twitter.com/galderz
| > |
| > | Project Lead, Escalante
| > | http://escalante.io
| > |
| > | Engineer, Infinispan
| > | http://infinispan.org
| > |
| > |
| >
| > _______________________________________________
| > infinispan-dev mailing list
| > infinispan-dev(a)lists.jboss.org
| > https://lists.jboss.org/mailman/listinfo/infinispan-dev
| >
|
| --
| Bela Ban, JGroups lead (http://www.jgroups.org)
| _______________________________________________
| infinispan-dev mailing list
| infinispan-dev(a)lists.jboss.org
| https://lists.jboss.org/mailman/listinfo/infinispan-dev