It's hard to believe the slowdown happens because of Byteman; that should be verified first. It might be down to how the metrics are being collected (just guessing, as I haven't looked).
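One quick way to check: run the same workload with the tracing rules swapped for no-op rules on the same injection points, something like this (the class/method here are just an example, not the tool's actual rules):

  RULE no-op overhead probe
  CLASS org.jgroups.JChannel
  METHOD send
  AT ENTRY
  IF FALSE
  DO debug("never reached")
  ENDRULE

With the condition always false the action never runs, but Byteman's injected trigger code still executes on every call. If the slowdown persists, it's the instrumentation itself; if it disappears, it's the metric collection in the rule actions.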
Great initiative! This looks super useful.
----- Original Message -----
| From: "Galder Zamarreño" <galder@redhat.com>
| To: "Radim Vansa" <rvansa@redhat.com>
| Cc: "infinispan-internal Internal" <infinispan-internal@redhat.com>
| Sent: Thursday, May 2, 2013 7:49:53 PM
| Subject: Re: [infinispan-internal] Message flow tracer/analyzer
|
| Did you look into Twitter's Zipkin? It looks very suited for doing this kind
| of stuff:
| http://engineering.twitter.com/2012/06/distributed-systems-tracing-with-zipkin.html
|
| It could be used in combination with Byteman, and it gets you a nice web
| user interface :)
Thanks for pointing me to that, Galder; I should probably read more blogs :)
This looks very similar, and integrating with the nice GUI could be cool, but I am not sure whether it's worth the effort (read: I probably won't have time for this).
The main difference is that with Byteman I don't have to modify the code to pass the Zipkin header along with the request. Not relying on Byteman would be cool: I don't believe that the few accesses to ConcurrentHashMap and ThreadLocal variables slow the system down by a factor of 3 on their own, so Byteman must introduce a lot of overhead of its own. Eventually I may end up interpreting the rules at the JGroups/ISPN source level and inserting the calls directly into the source code. I know Byteman can do that at the bytecode level, but there are some bugs that prevent me from doing so (and Andrew Dinn noted that these may end up being "won't fix", as fixing them may break other stuff).
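For illustration, the instrumentation is roughly rules of this shape (the class, method and helper names below are made up for the example, not the actual ones from the repo):

  RULE trace message send
  CLASS org.jgroups.JChannel
  METHOD send
  HELPER org.example.mft.TracerHelper
  AT ENTRY
  IF TRUE
  DO traceSend($1)
  ENDRULE

and the helper it calls is conceptually just the ThreadLocal/ConcurrentHashMap bookkeeping mentioned above (again a sketch, not the real API; each node keeps its own records and the analyzer joins them afterwards):

  import java.util.concurrent.ConcurrentHashMap;

  public class TracerHelper {
      // the control flow currently executing on this thread
      private static final ThreadLocal<String> CURRENT_FLOW = new ThreadLocal<>();
      // links each in-flight message to the control flow that produced it
      private static final ConcurrentHashMap<Object, String> FLOW_BY_MESSAGE =
              new ConcurrentHashMap<>();

      public void traceSend(Object msg) {
          String flow = CURRENT_FLOW.get();
          if (flow != null) {
              FLOW_BY_MESSAGE.put(msg, flow);
          }
      }
  }

That per-event work is a couple of thread-local and map accesses, which is why I doubt it alone accounts for a 3x slowdown.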
Nevertheless, I will probably rename data -> annotation, control flow -> span, message flow -> trace in order to be consistent with Zipkin naming.
|
| p.s. It's written in Scala.
Oh, <irony>great!</irony> ;-)
Radim
|
| On May 2, 2013, at 2:48 PM, Radim Vansa <rvansa@redhat.com> wrote:
|
| > Good news, everyone,
| >
| > in the last two weeks I've been working on a tool that could help us
| > profile Infinispan performance, analyze it, and probably debug some
| > stuff as well. Trace logs are the most useful source, but they impact
| > performance to almost unusable levels, still don't provide enough
| > information, have low timestamp precision, etc.
| > The idea is to analyze the behaviour request by request and track down
| > the consequences of each request (put/get/whatever). Currently I have a
| > working prototype (already useful, I believe) which is able to track
| > all messages stemming from the initial request, record which threads
| > execute, etc. It's Byteman based; no trace logs or code changes are
| > required. However, according to my initial testing it reduces overall
| > performance by a factor of 2-3.
| >
| > The code is located at https://github.com/rvansa/message-flow-tracer ;
| > please look at the README for details on what it can do, and ping me if
| > you have any questions/feedback.
| >
| > Radim
| >
| > PS: short demo output at http://pastebin.com/raw.php?i=SBQFuG3a
| >
| > -----------------------------------------------------------
| > Radim Vansa
| > Quality Assurance Engineer
| > JBoss Datagrid
| > tel. +420532294559 ext. 62559
| >
| > Red Hat Czech, s.r.o.
| > Brno, Purkyňova 99/71, PSČ 612 45
| > Czech Republic
| >
| >
|
|
| --
| Galder Zamarreño
| galder@redhat.com
| twitter.com/galderz
|
| Project Lead, Escalante
| http://escalante.io
|
| Engineer, Infinispan
| http://infinispan.org
|
|
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev