From emmanuel at hibernate.org Fri May 6 14:37:22 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 6 May 2016 20:37:22 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: Message-ID: Is the router a software component of all nodes in the cluster? Does the router then redirect all requests to the same cache-container for all tenants? How is the isolation done then? Or does each tenant effectively have a different cache container and thus be "physically" isolated? Or is that config dependent (from an endpoint to the cache-container) and some tenants could share the same cache container. In which case will they see the same data? Finally I think the design should allow for "dynamic" tenant configuration, meaning that I don't have to change the config manually when I add a new customer / tenant. That's all, and sorry for the naive questions :) > On 29 avr. 2016, at 17:29, Sebastian Laskawiec wrote: > > Dear Community, > > Please have a look at the design of Multi tenancy support for Infinispan [1]. I would be more than happy to get some feedback from you. > > Highlights: > The implementation will be based on a Router (which will be built based on Netty) > Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint > The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST > Memcached will be out of scope > The router will support SSL+SNI > Thanks > Sebastian > > [1] https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160506/c6c52356/attachment.html From slaskawi at redhat.com Mon May 9 00:47:34 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 9 May 2016 06:47:34 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: Message-ID: Hey Emmanuel! Comments inlined. There is one more thing to discuss - how SNI [1] for the Hotrod server fits into the Router design. Obviously there is some overlap, and the support for SSL+SNI also needs to be implemented in the Router [2] (it potentially needs to decrypt an encrypted "switch-to-tenant" command). Moreover, if the client sends its SNI Host Name with the request - we can connect it to the proper CacheContainer even without the "switch-to-tenant" command. Of course there is some overhead here as well - if someone has only one Hot Rod server and wants to use SNI - he would need to configure a Router, which would always send everything to a single server. Thanks Sebastian [1] https://github.com/infinispan/infinispan/pull/4279 [2] https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#implementation-details On Fri, May 6, 2016 at 8:37 PM, Emmanuel Bernard wrote: > Is the router a software component of all nodes in the cluster? > Yes > Does the router then redirect all requests to the same cache-container for > all tenants? How is the isolation done then? > Each tenant has its own Cache Container, so they are fully isolated. As a matter of fact this is how it is done now - you can run multiple Hot Rod servers in one node (but each of them is attached to a different port). The router takes this concept one step further and offers "one entry point" for all embedded Hot Rod servers. > Or does each tenant have effectively different cache containers and thus > be "physically" isolated? > Or is that config dependent (from an endpoint to the cache-container) and > some tenants could share the same cache container.
In which case will they > see the same data? > All tenants operate on their own Cache Containers, so they will not see each other's data. However if you create 2 CacheContainers with the same cluster name (//subsystem/cache-container/transport/@cluster) they should see each other's data. I think this should be the recommended way for handling this kind of thing. > > Finally I think the design should allow for "dynamic" tenant > configuration. Meaning that I don't have to change the config manually when > I add a new customer / tenant. > I totally agree. @Tristan - could you please tell me how dynamic reconfiguration via CLI works? I probably should fit into that with the router configuration (I assume all existing Protocol Server and Endpoint configuration supports it). > > That's all, and sorry for the naive questions :) > No problem - they were very good questions. > > On 29 avr. 2016, at 17:29, Sebastian Laskawiec > wrote: > > Dear Community, > > Please have a look at the design of Multi tenancy support for Infinispan > [1]. I would be more than happy to get some feedback from you.
> > Highlights: > > - The implementation will be based on a Router (which will be built > based on Netty) > - Multiple Hot Rod and REST servers will be attached to the router > which in turn will be attached to the endpoint > - The router will operate on a binary protocol when using Hot Rod > clients and path-based routing when using REST > - Memcached will be out of scope > - The router will support SSL+SNI > > Thanks > Sebastian > > [1] > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160509/98cf2771/attachment-0001.html From rvansa at redhat.com Mon May 9 06:55:29 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 9 May 2016 12:55:29 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: Message-ID: <57306CA1.3040405@redhat.com> As for the questions: * Is SSL required for SNI? I can imagine that multi-tenancy would make sense even in situations when the connection does not need to be encrypted. Moreover, if we plan to eventually have HR clients with async API (and using async I/O), SSL is even more PITA. Btw., do we have any numbers how much SSL affects perf? (that's a question for QA, though) * I don't think that dynamic switching of tenants would make sense, since that would require to invalidate all RemoteCache instances, near caches, connection pools, everything. So it's the same as starting from scratch. R. 
On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote: > Dear Community, > > Please have a look at the design of Multi tenancy support for > Infinispan [1]. I would be more than happy to get some feedback from you. > > Highlights: > > * The implementation will be based on a Router (which will be built > based on Netty) > * Multiple Hot Rod and REST servers will be attached to the router > which in turn will be attached to the endpoint > * The router will operate on a binary protocol when using Hot Rod > clients and path-based routing when using REST > * Memcached will be out of scope > * The router will support SSL+SNI > > Thanks > Sebastian > > [1] > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Mon May 9 07:52:56 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 9 May 2016 13:52:56 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: <57306CA1.3040405@redhat.com> References: <57306CA1.3040405@redhat.com> Message-ID: Hey Radim! Comments inlined. Thanks Sebastian On Mon, May 9, 2016 at 12:55 PM, Radim Vansa wrote: > As for the questions: > * Is SSL required for SNI? I can imagine that multi-tenancy would make > sense even in situations when the connection does not need to be > encrypted. Moreover, if we plan to eventually have HR clients with async > API (and using async I/O), SSL is even more PITA. Btw., do we have any > numbers how much SSL affects perf? (that's a question for QA, though) > Unfortunately no. SNI is an extension of TLS [2] which is an upgrade of SSL. In Java SNI Host names are specified in SSLParameters [3]. 
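For illustration, this is roughly what the client side looks like with plain JDK classes - a minimal sketch, not Infinispan client code, and the tenant host name is made up:

```java
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLParameters;

public class SniClientSketch {

    // Build SSLParameters that carry the tenant's SNI host name; a client
    // would apply these to its SSLSocket/SSLEngine before the handshake.
    static SSLParameters sniParameters(String tenantHost) {
        SSLParameters params = new SSLParameters();
        List<SNIServerName> names = new ArrayList<>();
        names.add(new SNIHostName(tenantHost));
        params.setServerNames(names);
        return params;
    }

    public static void main(String[] args) {
        SSLParameters params = sniParameters("tenant1.example.com");
        SNIHostName name = (SNIHostName) params.getServerNames().get(0);
        System.out.println(name.getAsciiName()); // prints "tenant1.example.com"
    }
}
```

A router terminating TLS can then read this name from the ClientHello during the handshake and pick the matching tenant.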
Of course SSL slows things down a bit, that's why we also need a "switch-to-tenant" command which would be used by the clients who do not want SSL. However if someone uses SNI and SSL (and only then) we can switch him to the proper tenant automatically (because we have enough information at that point). > > * I don't think that dynamic switching of tenants would make sense, > since that would require invalidating all RemoteCache instances, near > caches, connection pools, everything. So it's the same as starting from > scratch. > Frankly I also have mixed feelings about this feature. I think it would be much nicer if we switched to another tenant by doing a disconnect/connect sequence (and not switching dynamically). > > R. > > > > > > On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote: > > Dear Community, > > > > Please have a look at the design of Multi tenancy support for > > Infinispan [1]. I would be more than happy to get some feedback from you. > > > > Highlights: > > > > * The implementation will be based on a Router (which will be built > > based on Netty) > > * Multiple Hot Rod and REST servers will be attached to the router > > which in turn will be attached to the endpoint > > * The router will operate on a binary protocol when using Hot Rod > > clients and path-based routing when using REST > > * Memcached will be out of scope > > * The router will support SSL+SNI > > > > Thanks > > Sebastian > > > > [1] > > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server [2] https://tools.ietf.org/html/rfc6066#page-6 [3] https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#getServerNames-- > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list >
infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160509/5568d4c4/attachment.html From rvansa at redhat.com Mon May 9 09:30:18 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 9 May 2016 09:30:18 -0400 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <57306CA1.3040405@redhat.com> Message-ID: <573090EA.5060600@redhat.com> On 05/09/2016 07:52 AM, Sebastian Laskawiec wrote: > Hey Radim! > > Comments inlined. > > Thanks > Sebastian > > On Mon, May 9, 2016 at 12:55 PM, Radim Vansa > wrote: > > As for the questions: > * Is SSL required for SNI? I can imagine that multi-tenancy would make > sense even in situations when the connection does not need to be > encrypted. Moreover, if we plan to eventually have HR clients with > async > API (and using async I/O), SSL is even more PITA. Btw., do we have any > numbers how much SSL affects perf? (that's a question for QA, though) > > > Unfortunately no. SNI is an extension of TLS [2] which is an upgrade > of SSL. In Java SNI Host names are specified in SSLParameters [3]. > > Of course SSL slows things down a bit, that's why we also need a > "switch-to-tenant" command which would be used by the clients who do > not want SSL. However if someone uses SNI and SSL (and only then) we > can switch him to proper tenant automatically (because we have enough > information at that point). So you can initiate connection with SSL (+SNI) and then downgrade it to plain-text? > > * I don't think that dynamic switching of tenants would make sense, > since that would require to invalidate all RemoteCache instances, near > caches, connection pools, everything. So it's the same as starting > from > scratch. > > > Frankly I also have a mixed feelings about this feature. 
I think it > would be much nicer if we switched to another tenant by doing > disconnect/connect sequence (and not switching dynamically). > > > R. > > > > > > On 04/29/2016 05:29 PM, Sebastian Laskawiec wrote: > > Dear Community, > > > > Please have a look at the design of Multi tenancy support for > > Infinispan [1]. I would be more than happy to get some feedback > from you. > > > > Highlights: > > > > * The implementation will be based on a Router (which will be > built > > based on Netty) > > * Multiple Hot Rod and REST servers will be attached to the router > > which in turn will be attached to the endpoint > > * The router will operate on a binary protocol when using Hot Rod > > clients and path-based routing when using REST > > * Memcached will be out of scope > > * The router will support SSL+SNI > > > > Thanks > > Sebastian > > > > [1] > > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > [2] https://tools.ietf.org/html/rfc6066#page-6 > [3] > https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#getServerNames-- > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Mon May 9 10:50:44 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 9 May 2016 17:50:44 +0300 Subject: [infinispan-dev] Weekly Infinispan IRC meeting 2016-05-09 Message-ID: Hi everyone Here are the logs from our weekly 
meeting on #infinispan: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2016/infinispan.2016-05-09-14.07.log.html Cheers Dan From galder at redhat.com Mon May 9 11:06:06 2016 From: galder at redhat.com (Galder Zamarreño) Date: Mon, 9 May 2016 17:06:06 +0200 Subject: [infinispan-dev] Infispector Message-ID: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> Hi all, I've just noticed [1], @Thomas, it appears this is your baby? Could you explain in more detail what you are trying to achieve with this? Do you have a video to show what exactly it does? Also, who's [2]? Curious to know who's working on this stuff :) The reason I'm interested in finding out a bit more about [1] is that we have several efforts in the distributed monitoring/tracing area and want to make sure we're not duplicating the same effort. 1. Radim's Message Flow Tracer [3]: This is a tool for tracing messages and control flow in JGroups/Infinispan using Byteman. 2. Zipkin effort [4]: The idea is to have cluster-wide tracing for Infinispan that uses Zipkin to capture and visualize where time is spent within Infinispan. This includes both JGroups and other components that could be time consuming, e.g. persistence. This will be a main task for Infinispan 9. This effort will use a lot of the interception points Radim has developed in [3] to tie together messages related to a request/tx around the cluster. Does your effort fall within any of these spaces?
Cheers, [1] https://github.com/infinispan/infispector [2] https://github.com/mciz [3] https://github.com/rvansa/message-flow-tracer [4] https://issues.jboss.org/browse/ISPN-6346 -- Galder Zamarre?o Infinispan, Red Hat From slaskawi at redhat.com Tue May 10 04:21:04 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 10 May 2016 10:21:04 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: <573090EA.5060600@redhat.com> References: <57306CA1.3040405@redhat.com> <573090EA.5060600@redhat.com> Message-ID: On Mon, May 9, 2016 at 3:30 PM, Radim Vansa wrote: > So you can initiate connection with SSL (+SNI) and then downgrade it to > plain-text? > No, that's not possible. SNI Host Name is used to match proper certificate from KeyStore. After successful handshake, you communicate further with SSL/TLS. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160510/2c6512b9/attachment-0001.html From ttarrant at redhat.com Tue May 10 04:59:02 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 10 May 2016 10:59:02 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: Message-ID: <5731A2D6.1020300@redhat.com> Not sure I like the introduction of another component at the front. My original idea for allowing the client to choose the container was: - with TLS: use SNI to choose the container - without TLS: enhance the PING operation of the Hot Rod protocol to also take the server name. This would need to be a requirement when exposing multiple containers over the same endpoint. From a client API perspective, there would be no difference between the above two approaches: just specify the server name and depending on the transport, select the right one. 
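For the TLS case, the JDK can already surface the requested name to the server during the handshake: an SNIMatcher registered through SSLParameters.setSNIMatchers sees the client's SNI value and can both validate it and record the tenant it maps to. A minimal sketch - the host-to-container mapping here is invented:

```java
import java.util.HashMap;
import java.util.Map;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIMatcher;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.StandardConstants;

// Inspects the SNI host name the client sent in its ClientHello and records
// which cache container it maps to; unknown names fail the handshake.
public class TenantSniMatcher extends SNIMatcher {

    private final Map<String, String> tenantToContainer;
    private volatile String selectedContainer;

    public TenantSniMatcher(Map<String, String> tenantToContainer) {
        super(StandardConstants.SNI_HOST_NAME);
        this.tenantToContainer = tenantToContainer;
    }

    @Override
    public boolean matches(SNIServerName serverName) {
        String host = ((SNIHostName) serverName).getAsciiName();
        selectedContainer = tenantToContainer.get(host);
        return selectedContainer != null;
    }

    public String selectedContainer() {
        return selectedContainer;
    }

    public static void main(String[] args) {
        Map<String, String> tenants = new HashMap<>();
        tenants.put("tenant1.example.com", "cc1"); // hypothetical mapping
        TenantSniMatcher matcher = new TenantSniMatcher(tenants);
        matcher.matches(new SNIHostName("tenant1.example.com"));
        System.out.println(matcher.selectedContainer()); // prints "cc1"
    }
}
```

The server would install it with sslParameters.setSNIMatchers(Collections.singletonList(matcher)) before the handshake, so connections for unknown tenant names are rejected up front.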
Tristan On 29/04/2016 17:29, Sebastian Laskawiec wrote: > Dear Community, > > Please have a look at the design of Multi tenancy support for Infinispan > [1]. I would be more than happy to get some feedback from you. > > Highlights: > > * The implementation will be based on a Router (which will be built > based on Netty) > * Multiple Hot Rod and REST servers will be attached to the router > which in turn will be attached to the endpoint > * The router will operate on a binary protocol when using Hot Rod > clients and path-based routing when using REST > * Memcached will be out of scope > * The router will support SSL+SNI > > Thanks > Sebastian > > [1] > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From tsykora at redhat.com Tue May 10 09:41:13 2016 From: tsykora at redhat.com (Tomas Sykora) Date: Tue, 10 May 2016 09:41:13 -0400 (EDT) Subject: [infinispan-dev] Infispector In-Reply-To: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> References: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> Message-ID: <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> Hello Galder, and all! It?s nice to communicate again via infinispan-dev after a while :) TL;DR: I can see some intersections with zipkin.io initiative goals but InfiSpector seems to be much more ?easier to handle and contribute to at this moment? -- that suits more our student-related use case. Let?s continue with the discussion :) Firstly, a short introduction into the context. Red Hat is running Research & Development laboratory here in Brno at 2 biggest local universities: Masaryk University, Faculty of Informatics (FI MU) and Brno University of Technology, Faculty of Information Technologies (FIT VUT). 
The aim is to better and sooner reach out to students, get them involved in interesting projects, show them open source, git, git workflows and many other things (project specific). A year ago I got excited about this idea and started to think about whether I could deliver such a project. And I did. The team faces one big challenge, and that is the time constraint. Students are working on _several_ projects during their studies to fulfill courses' requirements to pass the semester. It's hard for them to find additional time to be coding something else on top of that. The team managed it, but slowly - that's understandable, though. Designing the InfiSpector infrastructure took us some time (Kafka, Druid, NodeJS) + evaluation of these technologies + proofs of concept. All 5 team members are 2nd year students of bachelor studies at FIT VUT Brno. Marek Ciz (https://github.com/mciz), also my very good friend from my home town :) His primary domain is Druid, Kafka and infrastructure. Vratislav Hais (https://github.com/vratislavhais) Primary domain is front-end. Jan Fitz (https://github.com/janfitz) Primary domain is front-end and graphic design (also designed our logo). Tomas Veskrna -- starting Patrik Cigas -- starting At this moment we are very close to getting real data to be monitored via the web UI. It's a matter of 1-2 months considering there is an examination period happening now at the University. ******************* What is InfiSpector and what we want to achieve: * We missed a graphical representation of Infinispan nodes' communication, so we want -- To be able to spot possible issues AT THE FIRST LOOK (e.g. wait, this should be the coordinator, how is it possible he sends/receives only 10 % of all messages?)
-- To demonstrate nicely what's happening inside an ISPN cluster for newcomers (to see how Infinispan nodes talk to each other to better understand concepts) -- To be using nice communication diagrams that describe flows like (130 messages from node1 to node5 -- click to see them in detail, filter them out in more detail) * We aimed for a NON-invasive solution -- No changes in Infinispan internal code -- Just add a custom JGroups protocol, gather data and send them where you want [0] * Provide historical recording of JGroups communication * Help to analyze communication recordings from a big data point of view -- No need to manually go through gigabytes of text trace logs Simplified InfiSpector architecture: Infinispan Cluster (JGroups with our protocol) ---> Apache Kafka ---> Druid Database (using Kafka Firehose to ingest the Kafka Topic) <---> NodeJS server back-end <---> front-end (AngularJS) What comes out of the custom JGroups protocol is a short JSON document [1] with a timestamp, sending and receiving node, length of the message and the message itself. Other stuff can be added easily. We will be able to easily answer queries like: How many messages were sent from node1 to node3 during the "last" 60 seconds? What are these messages? How many of them were PutKeyValueCommands? Filter out heartbeats (or even ignore them completely), etc. We don't have any video recording yet but we are very close to that point. From a UI perspective we will be using these 2 charts: [2], [3].
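To make the document shape concrete, here is a dependency-free sketch of the formatting step that produces a record like [1] - the field values below are illustrative, and the real protocol of course also captures the full message body:

```java
public class MessageDoc {

    // Mirrors the JSON document shape shown in [1]; naive escaping, sketch only
    static String toJson(String direction, String src, String dest,
                         int length, long timestamp, String message) {
        return String.format(
            "{\"direction\":\"%s\",\"src\":\"%s\",\"dest\":\"%s\"," +
            "\"length\":%d,\"timestamp\":%d,\"message\":\"%s\"}",
            direction, src, dest, length, timestamp,
            message.replace("\"", "\\\""));
    }

    public static void main(String[] args) {
        // Values modeled on the sample document in [1]
        System.out.println(toJson("receiving/up", "tsykora-19569", "tsykora-27916",
                                  182, 1460302055376L, "SingleRpcCommand{...}"));
    }
}
```

One such document per intercepted message is what gets published to the Kafka topic and ingested by Druid.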
and not targeting at performance analysis directly. Just adding one protocol at protocol stack and you are good to go. We were thinking about putting Kafka and Druid somewhere into the cloud (later) so users don?t need to start all that big infrastructure at their local machines. I am very open to anything that will help us as a community to achieve our common goal -- to be able to graphically monitor Infinispan communication. Additionally I would be _personally_ looking for something that is easily achievable and is suitable for students to quickly learn new things and quickly make meaningful contributions. Thanks! Tomas [0] Achieved by custom JGroups protocol -- JGROUPS_TO_KAFKA protocol has been implemented. This can be added at the end of JGroups stack and every single message that goes through that is sent to Apache Kafka. [1] { "direction": "receiving/up", "src": "tsykora-19569", "dest": "tsykora-27916", "length": 182, "timestamp": 1460302055376, "message": "SingleRpcCommand{cacheName='___defaultcache', command=PutKeyValueCommand{key=f6d52117-8a27-475e-86a7-002a54324615, value=tsykora-19569, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=60000, maxIdle=-1, version=null}, successful=true}}" } [2] http://bl.ocks.org/NPashaP/9796212 [3] http://bl.ocks.org/mbostock/1046712 [4] https://issues.jboss.org/browse/ISPN-6346 ----- Original Message ----- > From: "Galder Zamarre?o" > To: "infinispan -Dev List" , "Tomas Sykora" > Sent: Monday, May 9, 2016 5:06:06 PM > Subject: Infispector > > Hi all, > > I've just noticed [1], @Thomas, it appears this is your baby? Could you > explain in more detail what you are trying to achieve with this? Do you have > a video to show what exactly it does? > > Also, who's [2]? 
Curious to know who's working on this stuff :) > > The reason I'm interested in finding out a bit more about [1] is because we > have several efforts in the distributed monitoring/tracing area and want to > make sure we're not duplicating same effort. > > 1. Radim's Message Flow Tracer [3]: This is a project to tool for tracing > messages and control flow in JGroups/Infinispan using Byteman. > > 2. Zipkin effort [4]: The idea behind is to have a way to have Infinispan > cluster-wide tracing that uses Zipkin to capture and visualize where time is > spent within Infinispan. This includes both JGroups and other components > that could be time consuming, e.g. persistence. This will be main task for > Infinispan 9. This effort will use a lot of interception points Radim has > developed in [3] to tie together messages related to a request/tx around the > cluster. > > Does your effort fall within any of these spaces? > > Cheers, > > [1] https://github.com/infinispan/infispector > [2] https://github.com/mciz > [3] https://github.com/rvansa/message-flow-tracer > [4] https://issues.jboss.org/browse/ISPN-6346 > -- > Galder Zamarre?o > Infinispan, Red Hat > > From rvansa at redhat.com Tue May 10 09:53:49 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 10 May 2016 09:53:49 -0400 Subject: [infinispan-dev] Infispector In-Reply-To: <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> References: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> Message-ID: <5731E7ED.6060700@redhat.com> To complement this, MFT is a tool that won't offer any sleek charts or visualisations. It's tricky to use and understand - it's intended for developers as a tool for problem analysis. But it gets more in depth than InfiSpector, linking the information from different nodes, JFR events and so forth. R. On 05/10/2016 09:41 AM, Tomas Sykora wrote: > Hello Galder, and all! 
> It?s nice to communicate again via infinispan-dev after a while :) > > TL;DR: I can see some intersections with zipkin.io initiative goals but InfiSpector seems to be much more ?easier to handle and contribute to at this moment? -- that suits more our student-related use case. Let?s continue with the discussion :) > > Firstly, a short introduction into the context. Red Hat is running Research & Development laboratory here in Brno at 2 biggest local universities: Masaryk University, Faculty of Informatics (FI MU) and Brno University of Technology, Faculty of Information Technologies (FIT VUT). > The aim is to better and sooner reach out to students, get them involved into interesting project, show them open-source, git, git workflows and many other things (project specific). An year ago I got excited about this idea and started to think whether I can deliver such a project. And I did. > > Team faces one big challenge and this is a time constraint. Students are working on _several_ projects during their studies to fulfill courses? requirements to pass the semester. It?s hard for them to find additional time to be coding even something else. Team managed that but slowly, that?s understandable though. Designing InfiSpector infrastructure took us some time (Kafka, Druid, NodeJS) + evaluation of these technologies + proof of concepts. > > All 5 team members are 2nd year students of bachelor studies at FIT VUT Brno. > Marek Ciz (https://github.com/mciz), also my very good friend from my home town :) His primary domain is Druid, Kafka and infrastructure. > Vratislav Hais (https://github.com/vratislavhais) Primary domain is front-end. > Jan Fitz (https://github.com/janfitz) Primary domain is front-end and graphic design (also designed our logo). > Tomas Veskrna -- starting > Patrik Cigas -- starting > > At this moment we are very close to getting real data to be monitored via web UI. 
It?s a matter of 1-2 months considering there is an examination period happening now at the University. > > ******************* > What is InfiSpector and what we want to achieve: > > * We missed graphical representation of Infinispan nodes communication so we want > -- To be able to spot possible issues AT THE FIRST LOOK (e.g. wait, this should be coordinator, how is that possible he sends/receives only 10 % of all messages?) > -- To demonstrate nicely what?s happening inside of ISPN cluster for newcomers (to see how Infinispan nodes talk to each other to better understand concepts) > -- To be using nice communication diagrams that describes flows like (130 messages from node1 to node5 -- click to see them in detail, filter out in more detail) > * We aimed for NON-invasive solution > -- No changes in Infinispan internal code > -- Just add custom JGroups protocol, gather data and send them where you want [0] > * Provide historical recording of JGroups communication > * Help to analyze communication recording from big data point of view > -- No need to manually go through gigabytes of text trace logs > > Simplified InfiSpector architecture: > > Infinispan Cluster (JGroups with our protocol) ---> Apache Kafka ---> Druid Database (using Kafka Firehose to inject Kafka Topic) <---> NodeJS server back-end <---> front-end (AngularJS) > > What is coming out from custom JGroup protocol is a short JSON document [1] with a timestamp, sending and receiving node, length of a message and the message itself. Other stuff can be added easily. > > We will be able to easily answer queries like: > How many messages were sent from node1 to node3 during ?last? 60 seconds? > What are these messages? > How many of them were PutKeyValueCommands? > Filter out Heart beats (or even ignore them completely), etc. > > We don?t have any video recording yet but we are very close to that point. From UI perspective we will be using these 2 charts: [2], [3]. 
> > > Talking about Infinispan 9 plans -- [4] was reported a month ago by you Galder and we are working on InfiSpector actively let?s say 5 months -- it looks like I should have advertised InfiSpector more, sooner, but I was waiting for at least first working demo to start with blogging and videos :) It?s good that you?ve noticed and that we are having this conversation right now. > > To be honest I find http://zipkin.io/ initiative to be quite similar. However, InfiSpector seems to be much more ?easier? and not targeting at performance analysis directly. Just adding one protocol at protocol stack and you are good to go. We were thinking about putting Kafka and Druid somewhere into the cloud (later) so users don?t need to start all that big infrastructure at their local machines. > > I am very open to anything that will help us as a community to achieve our common goal -- to be able to graphically monitor Infinispan communication. > Additionally I would be _personally_ looking for something that is easily achievable and is suitable for students to quickly learn new things and quickly make meaningful contributions. > > Thanks! > Tomas > > [0] Achieved by custom JGroups protocol -- JGROUPS_TO_KAFKA protocol has been implemented. This can be added at the end of JGroups stack and every single message that goes through that is sent to Apache Kafka. 
> [1] > { > "direction": "receiving/up", > "src": "tsykora-19569", > "dest": "tsykora-27916", > "length": 182, > "timestamp": 1460302055376, > "message": "SingleRpcCommand{cacheName='___defaultcache', command=PutKeyValueCommand{key=f6d52117-8a27-475e-86a7-002a54324615, value=tsykora-19569, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=60000, maxIdle=-1, version=null}, successful=true}}" > } > [2] http://bl.ocks.org/NPashaP/9796212 > [3] http://bl.ocks.org/mbostock/1046712 > [4] https://issues.jboss.org/browse/ISPN-6346 > > > > > ----- Original Message ----- >> From: "Galder Zamarreño" >> To: "infinispan -Dev List" , "Tomas Sykora" >> Sent: Monday, May 9, 2016 5:06:06 PM >> Subject: Infispector >> >> Hi all, >> >> I've just noticed [1], @Thomas, it appears this is your baby? Could you >> explain in more detail what you are trying to achieve with this? Do you have >> a video to show what exactly it does? >> >> Also, who's [2]? Curious to know who's working on this stuff :) >> >> The reason I'm interested in finding out a bit more about [1] is because we >> have several efforts in the distributed monitoring/tracing area and want to >> make sure we're not duplicating the same effort. >> >> 1. Radim's Message Flow Tracer [3]: This is a tool for tracing >> messages and control flow in JGroups/Infinispan using Byteman. >> >> 2. Zipkin effort [4]: The idea behind it is to have Infinispan >> cluster-wide tracing that uses Zipkin to capture and visualize where time is >> spent within Infinispan. This includes both JGroups and other components >> that could be time consuming, e.g. persistence. This will be a main task for >> Infinispan 9. This effort will use a lot of interception points Radim has >> developed in [3] to tie together messages related to a request/tx around the >> cluster. >> >> Does your effort fall within any of these spaces? 
>> >> Cheers, >> >> [1] https://github.com/infinispan/infispector >> [2] https://github.com/mciz >> [3] https://github.com/rvansa/message-flow-tracer >> [4] https://issues.jboss.org/browse/ISPN-6346 >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Wed May 11 04:38:58 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 11 May 2016 10:38:58 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: <5731A2D6.1020300@redhat.com> References: <5731A2D6.1020300@redhat.com> Message-ID: Hey Tristan! If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server). Yes, that could also work, but I'm not convinced that we won't lose some configuration flexibility. Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9 - how, for example, would one use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc2)? Both are tied to the hotrod-connector. I think using these kinds of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we almost end up with the router I proposed. The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenant()) statements in the Hot Rod as well as the REST server. 
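The routing logic being extracted could look roughly like the following (a toy Python model, purely illustrative - the tenant names, the REST path scheme and the `TenantRouter` class shape are invented here; the real router would be built on Netty as described in the design document):

```python
class TenantRouter:
    """Toy model of the proposed router: one entry point in front of many
    per-tenant backends, keeping multi-tenancy logic out of the protocol
    servers themselves."""

    def __init__(self):
        self._backends = {}  # tenant name -> backend (stand-in for a ProtocolServer)

    def register(self, tenant, backend):
        self._backends[tenant] = backend

    def route_rest(self, path):
        # Path-based routing for REST, e.g. /rest/<tenant>/<cache>/<key>
        parts = path.lstrip("/").split("/")
        if len(parts) < 2 or parts[0] != "rest":
            raise ValueError("unroutable path: %s" % path)
        return self._backends[parts[1]]

    def route_sni(self, server_name):
        # SNI-based routing for Hot Rod: the TLS server name selects the tenant.
        return self._backends[server_name.split(".")[0]]


router = TenantRouter()
router.register("cc1", "hotrod-server-for-cc1")
router.register("cc2", "hotrod-server-for-cc2")

rest_backend = router.route_rest("/rest/cc1/myCache/someKey")
sni_backend = router.route_sni("cc2.infinispan.example.com")
print(rest_backend, sni_backend)
```

The point of the sketch is only that the dispatch decision lives in one component, so neither the Hot Rod nor the REST server needs per-tenant branching.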
Extracting this also has the additional advantage that we limit changes in those modules (actually there will probably be 2 changes: #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in a multi tenant configuration) and #2 collect Netty handlers from the ProtocolServer). To sum it up - the router's implementation seems to be more complicated, but in the long run I think it might be worth it. I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach @Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well. Thanks Sebastian On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant wrote: > Not sure I like the introduction of another component at the front. > > My original idea for allowing the client to choose the container was: > > - with TLS: use SNI to choose the container > - without TLS: enhance the PING operation of the Hot Rod protocol to > also take the server name. This would need to be a requirement when > exposing multiple containers over the same endpoint. > > From a client API perspective, there would be no difference between the > above two approaches: just specify the server name and depending on the > transport, select the right one. > > Tristan > > On 29/04/2016 17:29, Sebastian Laskawiec wrote: > > Dear Community, > > > > Please have a look at the design of Multi tenancy support for Infinispan > > [1]. I would be more than happy to get some feedback from you. 
> > > > Highlights: > > > > * The implementation will be based on a Router (which will be built > > based on Netty) > > * Multiple Hot Rod and REST servers will be attached to the router > > which in turn will be attached to the endpoint > > * The router will operate on a binary protocol when using Hot Rod > > clients and path-based routing when using REST > > * Memcached will be out of scope > > * The router will support SSL+SNI > > > > Thanks > > Sebastian > > > > [1] > > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gabovantonnikolaevich at gmail.com Wed May 11 06:00:57 2016 From: gabovantonnikolaevich at gmail.com (Anton Gabov) Date: Wed, 11 May 2016 13:00:57 +0300 Subject: [infinispan-dev] GSOC 2016. Smart HTTP/2-based protocol for Infinispan. Community Bonding Message-ID: Hello everybody! My name is Anton. I'm participating in the project "Infinispan. Smart HTTP/2-based protocol for Infinispan" (GSoC 2016). Here is a link to the proposal: https://summerofcode.withgoogle.com/projects/#6140413916217344. I'm a newbie in Infinispan. Currently, I'm trying to deploy an Infinispan server to my virtual server + play with configurations :) I'm also reading documentation concerning Infinispan, the Hot Rod protocol and HTTP/2. It's a really difficult project and I'll do my best! I hope I can count on your support! 
May I ask questions in IRC chat (#infinispan), if I have them? Best Wishes, Gabov Anton. From slaskawi at redhat.com Wed May 11 07:01:13 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 11 May 2016 13:01:13 +0200 Subject: [infinispan-dev] GSOC 2016. Smart HTTP/2-based protocol for Infinispan. Community Bonding In-Reply-To: References: Message-ID: Hey Anton! Great to have you! Of course feel free to ask us anything on IRC (Freenode server, #infinispan channel). I recommend starting with the Infinispan simple tutorials: - http://infinispan.org/tutorials/ - https://github.com/infinispan/infinispan-simple-tutorials/ If you'd like to have a look at some web apps, you can dig into some of our integration tests (some of them assemble a war at runtime and deploy it): - https://github.com/infinispan/infinispan/tree/master/integrationtests Our documentation can be found here: - http://infinispan.org/documentation/ Finally, here's a guide on how to set up your development environment: - http://infinispan.org/docs/9.0.x/contributing/contributing.html Thanks Sebastian On Wed, May 11, 2016 at 12:00 PM, Anton Gabov < gabovantonnikolaevich at gmail.com> wrote: > Hello everybody! > > My name is Anton. I'm participating in project "Infinispan. Smart > HTTP/2-based protocol for Infinispan" (GSoC 2016). Here is link to the > proposal https://summerofcode.withgoogle.com/projects/#6140413916217344. > > I'm newbie in Infinispan. Currently, I'm trying to deploy Infinispan > server to my virtual server + play with configurations :) > Also, read documentations concerning to Infinispan, Hot Rod protocol and > HTTP/2. > > It's really difficult project and I'll do my best! > I hope I can count on your support! > > May I ask questions in IRC chat (#infinispan), if I have them? > > Best Wishes, > Gabov Anton. 
> _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From cristian.malinescu at gmail.com Thu May 12 09:26:50 2016 From: cristian.malinescu at gmail.com (Cristian Malinescu) Date: Thu, 12 May 2016 09:26:50 -0400 Subject: [infinispan-dev] HDFS FileStore Message-ID: Hello folks - I would like to implement for my own project a custom cache store for Infinispan using HDFS, using as a baseline one of the already implemented file stores - SoftIndex and SingleFile. I thought it would be beneficial if I start and do it directly as a contribution to the Infinispan code base. Is someone interested in taking on this subject so we can start brainstorming about how this task should be approached? I'd like to be sure it gets done smoothly, according to the project's community house rules, so we don't encounter hassle at the point when we can look at merging into the baseline, avoid potentially doing double work for the same feature, etc. Kind regards Cristian Malinescu https://github.com/Cristian-Malinescu https://www.linkedin.com/in/cristianmalinescu P.S. I already went through http://infinispan.org/docs/8.2.x/contributing/contributing.html so theoretically I can just start and place a pull request on GitHub, but I wanted to be sure you guys are also aware of this plan so we keep in sync and all opinions are taken into consideration and addressed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160512/f0a5e4da/attachment.html From gustavo at infinispan.org Thu May 12 10:52:36 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 12 May 2016 15:52:36 +0100 Subject: [infinispan-dev] HDFS FileStore In-Reply-To: References: Message-ID: Hi Cristian! An HDFS cache store [1] looks interesting, and given the append-only nature of HDFS, I'd say the SoftIndex store is probably better to look at than the SingleFile store, since it employs some append-only techniques plus eventual compactions. It'd be interesting to have a design document so that we can have a starting point; we usually publish such designs at [2]. Cheers, Gustavo [1] https://issues.jboss.org/browse/ISPN-2940 [2] https://github.com/infinispan/infinispan/wiki On Thu, May 12, 2016 at 2:26 PM, Cristian Malinescu < cristian.malinescu at gmail.com> wrote: > Hello folks - I would like to implement for my own project a custom cache > store for Infinispan using HDFS and using as base line one of the already > implemented file stores - SoftIndex and SingleFile. > I thought it would be beneficiary if I start and do it directly as > contribution to the Infinispan code base, is someone interested to take on > this subject and we start brainstorming about how should this task being > approached to be sure it gets done smooth, accordingly to the project's > community house rules so we don't encounter hassle at the point when we can > look at merging in the baseline, avoid potentially double work for same > feature etc. 
> > Kind regards > Cristian Malinescu > > https://github.com/Cristian-Malinescu > https://www.linkedin.com/in/cristianmalinescu > > > P.S I went already trough > http://infinispan.org/docs/8.2.x/contributing/contributing.html > so theoretically I can just start and place a pull request on GitHub but I > wanted to be sure you guys are also aware of this plan so we keep in sync > and all opinions are taken in consideration and addressed. > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160512/e5870729/attachment.html From smarlow at redhat.com Thu May 12 11:23:08 2016 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 12 May 2016 11:23:08 -0400 Subject: [infinispan-dev] WildFly NoSQL client integration and Infinispan remote/JDG as a NoSQL client... Message-ID: <25fb6402-015c-dccc-c8c4-641d69e80131@redhat.com> Hi, Could you bring answers to the discussion [1] about using Infinispan as a remote NoSQL store in WildFly. Perhaps the WildFly standalone.xml subsystem configuration might define a "testdb" profile that any application deployment can use to remotely access the remote Infinispan server running on "testhostmachine" via configuration: " " Does this match at all with how you thought a WildFly application server might use a remote Infinispan server? Are there any concerns about marshalling, since the remote server (testhostmachine) may be a WildFly application server that doesn't have the same application deployments? Mostly, I'd like to discuss the above on [1] but here is fine also (we can link to this mailing list from [1], if we talk here). 
Scott [1] http://lists.jboss.org/pipermail/wildfly-dev/2016-May/004966.html From slaskawi at redhat.com Fri May 13 09:51:05 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 13 May 2016 15:51:05 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey guys! Any last call on this? I'm going to start the implementation on Monday. Thanks Sebastian On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec wrote: > Hey Tristan! > > If I understood you correctly, you're suggesting to enhance the > ProtocolServer to support multiple EmbeddedCacheManagers (probably with > shared transport and by that I mean started on the same Netty server). > > Yes, that also could work but I'm not convinced if we won't loose some > configuration flexibility. > > Let's consider a configuration file - > https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how > for example use authentication for CacheContainer cc1 (and not for cc2) and > encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I > think using this kind of different options makes sense in terms of multi > tenancy. And please note that if we start a new Netty server for each > CacheContainer - we almost ended up with the router I proposed. > > The second argument for using a router is extracting the routing logic > into a separate module. Otherwise we would probably end up with several > if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting > this has also additional advantage that we limit changes in those modules > (actually there will be probably 2 changes #1 we should be able to start a > ProtocolServer without starting a Netty server (the Router will do it in > multi tenant configuration) and #2 collect Netty handlers from > ProtocolServer). > > To sum it up - the router's implementation seems to be more complicated > but in the long run I think it might be worth it. 
> > I also wrote the summary of the above here: > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach > > @Galder - you wrote a huge part of the Hot Rod server - I would love to > hear your opinion as well. > > Thanks > Sebastian > > > > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant > wrote: > >> Not sure I like the introduction of another component at the front. >> >> My original idea for allowing the client to choose the container was: >> >> - with TLS: use SNI to choose the container >> - without TLS: enhance the PING operation of the Hot Rod protocol to >> also take the server name. This would need to be a requirement when >> exposing multiple containers over the same endpoint. >> >> From a client API perspective, there would be no difference between the >> above two approaches: just specify the server name and depending on the >> transport, select the right one. >> >> Tristan >> >> On 29/04/2016 17:29, Sebastian Laskawiec wrote: >> > Dear Community, >> > >> > Please have a look at the design of Multi tenancy support for Infinispan >> > [1]. I would be more than happy to get some feedback from you. 
>> > Highlights: >> > >> > * The implementation will be based on a Router (which will be built >> > based on Netty) >> > * Multiple Hot Rod and REST servers will be attached to the router >> > which in turn will be attached to the endpoint >> > * The router will operate on a binary protocol when using Hot Rod >> > clients and path-based routing when using REST >> > * Memcached will be out of scope >> > * The router will support SSL+SNI >> > >> > Thanks >> > Sebastian >> > >> > [1] >> > >> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > From cristian.malinescu at gmail.com Fri May 13 10:07:57 2016 From: cristian.malinescu at gmail.com (Cristian Malinescu) Date: Fri, 13 May 2016 10:07:57 -0400 Subject: [infinispan-dev] HDFS FileStore In-Reply-To: References: Message-ID: Hi Gustavo - thanks for the guidance! Have some questions - 1. ISPN-2940 - says the idea isn't new and it didn't get a 'Go' at that moment. If we proceed with this work, does it mean a reopening of the item? 2. I couldn't see any design docs for the SingleFile and SoftIndexFile store subsystems - frankly, I couldn't find design docs for any of the pluggable cache store modules. I want to start from one of them to keep consistency and compatibility in style for ease of adoption. 3. 
Was the HDFS store idea abandoned because just using HBase would pretty much offer the same with the advantage of offloading on HBase the need for compaction due to the append-only nature of HDFS? Cheers Cris On Thu, May 12, 2016 at 10:52 AM, Gustavo Fernandes wrote: > Hi Cristian! > > A HDFS cache store [1] looks interesting, and given the append-only nature > of HDFS, I'd say probably the SoftIndex is better to look at than the > SingleFile store since it employs some techniques of append only plus > eventual compactations. > It'd be interesting to have a design document so that we can have a > starting point; we usually publish such designs at [2]. > > Cheers, > Gustavo > > [1] https://issues.jboss.org/browse/ISPN-2940 > [2] https://github.com/infinispan/infinispan/wiki > > On Thu, May 12, 2016 at 2:26 PM, Cristian Malinescu < > cristian.malinescu at gmail.com> wrote: > >> Hello folks - I would like to implement for my own project a custom cache >> store for Infinispan using HDFS and using as base line one of the already >> implemented file stores - SoftIndex and SingleFile. >> I thought it would be beneficiary if I start and do it directly as >> contribution to the Infinispan code base, is someone interested to take on >> this subject and we start brainstorming about how should this task being >> approached to be sure it gets done smooth, accordingly to the project's >> community house rules so we don't encounter hassle at the point when we can >> look at merging in the baseline, avoid potentially double work for same >> feature etc. 
>> >> Kind regards >> Cristian Malinescu >> >> https://github.com/Cristian-Malinescu >> https://www.linkedin.com/in/cristianmalinescu >> >> >> P.S I went already trough >> http://infinispan.org/docs/8.2.x/contributing/contributing.html >> so theoretically I can just start and place a pull request on GitHub but >> I wanted to be sure you guys are also aware of this plan so we keep in sync >> and all opinions are taken in consideration and addressed. >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Sun May 15 17:27:00 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Sun, 15 May 2016 22:27:00 +0100 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hi Sebastian, the design seems to assume that what people want is to have multiple cache containers, one per tenant. Did you consider the tradeoffs compared to a solution in which you have a single cache container to manage all caches, but isolate tenants by having each one access only the subset of caches it owns? I haven't thought about all implications, but it seems desirable that all caches - from all tenants - could be managed as a whole. For example, in the future one might want to know how the memory consumption is being balanced across different tenants and caches, and have some smart policies around such concepts. Where would such logic live? 
It seems like there is a need for a global coordination of resources across all caches, and so far this has been the CacheManager. You could change this, but then an higher level component will be needed to orchestrate the various CacheManager instances at server level. Similarly, different Caches will need to share some resources; I would expect for example that when you want to run "Infinispan as a Service", you'd want to also provide the option of enabling some popular CacheStores in an easy way for the end user (like a checkbox, as simple as "enable JDBC cachestore" or even higher level "enable persistent backup"). Taking for example the JDBC CacheStore, I think you'd not want to create a new database instance dynamically for each instance but rather have them all share the same, so adding a tenant-id to the key, but also having the JDBC connection pool to this database shared across all tenants. I realize that this alternative approach will have you face some other issues - like adding tenant-aware capabilities to some CacheStore implementations - but sharing and managing the resources is crucial to implement multi-tenancy: if we don't, why would you not rather start separate instances of the Infinispan server? Thanks, Sanne On 13 May 2016 at 14:51, Sebastian Laskawiec wrote: > Hey guys! > > Any last call on this? I'm going to start the implementation on Monday. > > Thanks > Sebastian > > On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec > wrote: >> >> Hey Tristan! >> >> If I understood you correctly, you're suggesting to enhance the >> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >> shared transport and by that I mean started on the same Netty server). >> >> Yes, that also could work but I'm not convinced if we won't loose some >> configuration flexibility. 
>> >> Let's consider a configuration file - >> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how for >> example use authentication for CacheContainer cc1 (and not for cc2) and >> encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I >> think using this kind of different options makes sense in terms of multi >> tenancy. And please note that if we start a new Netty server for each >> CacheContainer - we almost ended up with the router I proposed. >> >> The second argument for using a router is extracting the routing logic >> into a separate module. Otherwise we would probably end up with several >> if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting >> this has also additional advantage that we limit changes in those modules >> (actually there will be probably 2 changes #1 we should be able to start a >> ProtocolServer without starting a Netty server (the Router will do it in >> multi tenant configuration) and #2 collect Netty handlers from >> ProtocolServer). >> >> To sum it up - the router's implementation seems to be more complicated >> but in the long run I think it might be worth it. >> >> I also wrote the summary of the above here: >> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach >> >> @Galder - you wrote a huge part of the Hot Rod server - I would love to >> hear your opinion as well. >> >> Thanks >> Sebastian >> >> >> >> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant >> wrote: >>> >>> Not sure I like the introduction of another component at the front. >>> >>> My original idea for allowing the client to choose the container was: >>> >>> - with TLS: use SNI to choose the container >>> - without TLS: enhance the PING operation of the Hot Rod protocol to >>> also take the server name. This would need to be a requirement when >>> exposing multiple containers over the same endpoint. 
>>> >>> From a client API perspective, there would be no difference between the >>> above two approaches: just specify the server name and depending on the >>> transport, select the right one. >>> >>> Tristan >>> >>> On 29/04/2016 17:29, Sebastian Laskawiec wrote: >>> > Dear Community, >>> > >>> > Please have a look at the design of Multi tenancy support for >>> > Infinispan >>> > [1]. I would be more than happy to get some feedback from you. >>> > >>> > Highlights: >>> > >>> > * The implementation will be based on a Router (which will be built >>> > based on Netty) >>> > * Multiple Hot Rod and REST servers will be attached to the router >>> > which in turn will be attached to the endpoint >>> > * The router will operate on a binary protocol when using Hot Rod >>> > clients and path-based routing when using REST >>> > * Memcached will be out of scope >>> > * The router will support SSL+SNI >>> > >>> > Thanks >>> > Sebastian >>> > >>> > [1] >>> > >>> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> >>> -- >>> Tristan Tarrant >>> Infinispan Lead >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Sun May 15 18:46:39 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Sun, 15 May 2016 23:46:39 +0100 Subject: [infinispan-dev] WildFly NoSQL client integration and Infinispan remote/JDG as a NoSQL client... 
In-Reply-To: <25fb6402-015c-dccc-c8c4-641d69e80131@redhat.com> References: <25fb6402-015c-dccc-c8c4-641d69e80131@redhat.com> Message-ID: Hi Scott, I don't think that having a default "testdb" would be useful if it assumes that the user started an instance of Infinispan Server on a "testhostmachine": very likely end users would want to at least change the hostname; one might as well add the whole section at that point. It could be more interesting if the user could lookup - eg via JNDI or some connection URL - a reference to a client which is exposing the same API be it a remote or a local CacheManager instance; in this case you could have a local CacheManager instance started by default within WildFly and have applications consume this. But is it really useful for people to have a default, predefined testdb? I wonder if it shouldn't rather be very easy for an application to define what it needs, e.g. I'd allow applications to include a "META-INF/caches.xml" to list the Caches needed by the application, have WildFly create (and manage) these and provide a way for the application to lookup the client, or have the client injected. Thanks, Sanne On 12 May 2016 at 16:23, Scott Marlow wrote: > Hi, > > Could you bring answers to the discussion [1] about using Infinispan as > a remote NoSQL store in WildFly. > > Perhaps the WildFly standalone.xml subsystem configuration might define > a "testdb" profile that any application deployment can use to remotely > access the remote Infinispan server running on "testhostmachine" via > configuration: > > " > > jndi-name="java:jboss/infinispan/test" database="testdb"> > > > > > port-offset="${jboss.socket.binding.port-offset:0}"> > > > > > " > > Does this match at all with how you thought a WildFly application server > might use a remote Infinispan server? > > Are there any concerns about marshalling, since the remote server > (testhostmachine) may be a WildFly application server that doesn't have > the same application deployments? 
> > Mostly, I'd like to discuss the above on [1] but here is fine also (we > can link to this mailing list from [1], if we talk here). > > Scott > > [1] http://lists.jboss.org/pipermail/wildfly-dev/2016-May/004966.html > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Mon May 16 01:02:48 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 16 May 2016 07:02:48 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey Sanne! Comments inlined. Thanks Sebastian On Sun, May 15, 2016 at 11:27 PM, Sanne Grinovero wrote: > Hi Sebastian, > > the design seems to assume that what people want is to have multiple > cache containers, one per tenant. > Did you consider the tradeoffs comparing to a solution in which you > have a single cache container to manage all caches, but isolate > tenants by having each one access only the subset of caches it is > owning? > This approach was the first I crossed out from my list, mainly due to isolation, name clashes (but they are easy to solve - just prefix the cache name with the tenant) and configuration on the Cache Manager level (some tenants might want to use different authentication settings, marshallers etc). > I haven't thought about all implications, but it seems desirable that > all caches - from all tenants - could be managed as a whole. For > example in future one might want to know how the memory consumption is > being balanced across different tenants and caches, and have some > smart policies around such concepts. Were would such logic live? It > seems like there is a need for a global coordination of resources > across all caches, and so far this has been the CacheManager. > Our roadmap contains a health check endpoint. 
We might aggregate them and create some scaling policies based on aggregated data from all CacheManagers. Regarding the memory consumption, I've seen it implemented the other way around (you can measure how much memory you have by `cat /sys/fs/cgroup/memory/memory.limit_in_bytes` and use it for constructing -Xmx). This way your container will never go beyond the memory limit. I believe this is not an ideal approach but definitely the easiest. > You could change this, but then a higher-level component will be > needed to orchestrate the various CacheManager instances at server > level. > Yes, the Router could do that. But I would assume that's the health check feature and not multi tenancy. > Similarly, different Caches will need to share some resources; I would > expect for example that when you want to run "Infinispan as a > Service", you'd want to also provide the option of enabling some > popular CacheStores in an easy way for the end user (like a checkbox, > as simple as "enable JDBC cachestore" or even higher level "enable > persistent backup"). Taking for example the JDBC CacheStore, I think you'd not want to > create a new database instance dynamically for each instance but > rather have them all share the same, so adding a tenant-id to the key, > but also having the JDBC connection pool to this database shared > across all tenants. > Those points seem to be valid but again we assume a similar configuration for all clients in a hosted Infinispan service. This may not always be true (as I pointed out - settings at the CacheManager level) and I would prefer to have configuration flexibility here. We may also address some of the resource consumption/performance issues at the Cloud layer, e.g. add a MySQL DB to each Infinispan pod - this way all DB connections will be local to the machine which runs the containers.
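The cgroup-based -Xmx sizing mentioned above can be sketched as follows. This is an illustrative sketch, not part of the proposal: the 75% heap ratio and the 512 MB fallback are assumptions, and only the cgroup v1 path from the mail is used.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: derive a JVM heap size from the cgroup v1 memory limit, leaving
// headroom for non-heap memory so the container stays under its budget.
public class CgroupHeap {
    static final Path LIMIT =
            Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");

    // Reserve 25% for metaspace, thread stacks, NIO buffers etc.
    // The ratio is an illustrative assumption.
    static long heapBytes(long containerLimitBytes) {
        return containerLimitBytes * 3 / 4;
    }

    public static void main(String[] args) throws Exception {
        long limit = Files.isReadable(LIMIT)
                ? Long.parseLong(Files.readAllLines(LIMIT).get(0).trim())
                : 512L * 1024 * 1024;  // fallback when not running in a cgroup
        System.out.println("-Xmx" + heapBytes(limit) / (1024 * 1024) + "m");
    }
}
```

A startup script would then pass the printed flag to the server JVM, so the heap follows whatever limit the container runtime imposes.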
I realize that this alternative approach will have you face some other > issues - like adding tenant-aware capabilities to some CacheStore > implementations - but sharing and managing the resources is crucial to > implement multi-tenancy: if we don't, why would you not rather start > separate instances of the Infinispan server? > I think Tristan had a similar question about starting Infinispan server instances and maybe I didn't emphasize it enough in the design. The goal of adding a router is to allow configuring and starting a CacheManager without starting a Netty server. The Netty server will be started only in the Router and it will "borrow" handlers from a given Protocol Server. However, point granted for Cache Store resource utilization - if all our tenants want to have a JDBC Cache Store then we might create lots of connections to the database. But please note that we will have some problems at the Cache Store level one way or another (if we decided to implement multi tenancy by sharing the same CacheManager then we would need to solve the data isolation problem). The router approach at least guarantees us isolation, which is the #1 priority for me in multi tenancy. I'm just thinking - maybe we should enhance the Cloud Cache Store (the name fits ideally here) to deal with such situations and recommend it to our clients as the best tool for storing multi tenant data? > > Thanks, > Sanne > > > > On 13 May 2016 at 14:51, Sebastian Laskawiec wrote: > > Hey guys! > > > > Any last call on this? I'm going to start the implementation on Monday. > > > > Thanks > > Sebastian > > > > On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec < > slaskawi at redhat.com> > > wrote: > >> > >> Hey Tristan! > >> > >> If I understood you correctly, you're suggesting to enhance the > >> ProtocolServer to support multiple EmbeddedCacheManagers (probably with > >> shared transport and by that I mean started on the same Netty server).
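The router idea described above - one entry point dispatching to per-tenant backends that no longer run their own server - can be sketched minimally like this. The `TenantHandler` interface and all names are hypothetical illustrations, not the actual Infinispan Router API.

```java
import java.util.Map;

// Sketch of a tenant-aware router: one network-facing component holds a map
// from tenant id (taken e.g. from SNI or a "switch-to-tenant" command) to a
// backend that was started without its own server.
public class TenantRouter {
    interface TenantHandler { String handle(String request); }

    private final Map<String, TenantHandler> backends;

    TenantRouter(Map<String, TenantHandler> backends) {
        this.backends = backends;
    }

    // Route a request to the backend owning the tenant's cache container.
    String dispatch(String tenant, String request) {
        TenantHandler h = backends.get(tenant);
        if (h == null) {
            throw new IllegalArgumentException("unknown tenant: " + tenant);
        }
        return h.handle(request);
    }

    public static void main(String[] args) {
        TenantRouter router = new TenantRouter(Map.<String, TenantHandler>of(
                "cc1", req -> "cc1:" + req,
                "cc2", req -> "cc2:" + req));
        System.out.println(router.dispatch("cc1", "GET k"));
    }
}
```

In the real design the backends would be ProtocolServer instances whose Netty handlers the router "borrows", keeping per-tenant isolation while sharing one endpoint.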
> >> > >> Yes, that also could work but I'm not convinced we won't lose some > >> configuration flexibility. > >> > >> Let's consider a configuration file - > >> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9 - how, for > >> example, would one use authentication for CacheContainer cc1 (and not for cc2) and > >> encryption for cc1 (and not for cc2)? Both are tied to the > hotrod-connector. I > >> think using these kinds of different options makes sense in terms of multi > >> tenancy. And please note that if we start a new Netty server for each > >> CacheContainer - we almost end up with the router I proposed. > >> > >> The second argument for using a router is extracting the routing logic > >> into a separate module. Otherwise we would probably end up with several > >> if(isMultiTenant()) statements in the Hot Rod as well as the REST server. > Extracting > >> this also has the additional advantage that we limit changes in those > modules > >> (actually there will probably be 2 changes: #1 we should be able to > start a > >> ProtocolServer without starting a Netty server (the Router will do it in a > >> multi tenant configuration) and #2 collect Netty handlers from the > >> ProtocolServer). > >> > >> To sum it up - the router's implementation seems to be more complicated, > >> but in the long run I think it might be worth it. > >> > >> I also wrote a summary of the above here: > >> > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach > >> > >> @Galder - you wrote a huge part of the Hot Rod server - I would love to > >> hear your opinion as well. > >> > >> Thanks > >> Sebastian > >> > >> > >> > >> On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant > >> wrote: > >>> > >>> Not sure I like the introduction of another component at the front.
> >>> > >>> My original idea for allowing the client to choose the container was: > >>> > >>> - with TLS: use SNI to choose the container > >>> - without TLS: enhance the PING operation of the Hot Rod protocol to > >>> also take the server name. This would need to be a requirement when > >>> exposing multiple containers over the same endpoint. > >>> > >>> From a client API perspective, there would be no difference between > the > >>> above two approaches: just specify the server name and depending on the > >>> transport, select the right one. > >>> > >>> Tristan > >>> > >>> On 29/04/2016 17:29, Sebastian Laskawiec wrote: > >>> > Dear Community, > >>> > > >>> > Please have a look at the design of Multi tenancy support for > >>> > Infinispan > >>> > [1]. I would be more than happy to get some feedback from you. > >>> > > >>> > Highlights: > >>> > > >>> > * The implementation will be based on a Router (which will be built > >>> > based on Netty) > >>> > * Multiple Hot Rod and REST servers will be attached to the router > >>> > which in turn will be attached to the endpoint > >>> > * The router will operate on a binary protocol when using Hot Rod > >>> > clients and path-based routing when using REST > >>> > * Memcached will be out of scope > >>> > * The router will support SSL+SNI > >>> > > >>> > Thanks > >>> > Sebastian > >>> > > >>> > [1] > >>> > > >>> > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > >>> > > >>> > > >>> > _______________________________________________ > >>> > infinispan-dev mailing list > >>> > infinispan-dev at lists.jboss.org > >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > > >>> > >>> -- > >>> Tristan Tarrant > >>> Infinispan Lead > >>> JBoss, a division of Red Hat > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > > > > > > 
_______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Mon May 16 04:57:36 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 16 May 2016 10:57:36 +0200 Subject: [infinispan-dev] HDFS FileStore In-Reply-To: References: Message-ID: <57398B80.8040809@redhat.com> On 05/13/2016 04:07 PM, Cristian Malinescu wrote: > Hi Gustavo - thanks for the guidance! > Have some questions - > 1. ISPN-2940 - says the > idea isn't new and it didn't get a 'Go' at that moment. If we proceed > with this work, does it mean a reopening of the item? > 2. Couldn't see any design docs for either the SingleFile or SoftIndexFile > store(s) subsystems - in fairness, couldn't find design docs for any of the > pluggable > cache store modules. I want to start from one of them to keep > consistency and compatibility in style for ease of adoption. A design doc was not needed when the author had no need to discuss the design prior to implementation. The SingleFileStore design is rather simple: * an in-memory key-position_in_file map * place data in any unoccupied spot in the file, or grow the file * keep a list of unoccupied spots in a size-based tree SoftIndexFileStore has its design described in the javadoc for the main class (SoftIndexFileStore.java). If you have any questions wrt SIFS, I am the one to answer them. Radim > 3.
Was the HDFS store idea abandoned because just using HBase would > pretty much offer the same, with the advantage of offloading to HBase > the need for compaction due to the append-only nature of HDFS? > > Cheers > Cris > > On Thu, May 12, 2016 at 10:52 AM, Gustavo Fernandes > > wrote: > > Hi Cristian! > > A HDFS cache store [1] looks interesting, and given the > append-only nature of HDFS, I'd say the SoftIndex store is probably > better to look at than the SingleFile store, since it employs some > techniques of append-only writes plus eventual compactions. > It'd be interesting to have a design document so that we can have > a starting point; we usually publish such designs at [2]. > > Cheers, > Gustavo > > [1] https://issues.jboss.org/browse/ISPN-2940 > [2] https://github.com/infinispan/infinispan/wiki > > On Thu, May 12, 2016 at 2:26 PM, Cristian Malinescu > > wrote: > > Hello folks - I would like to implement for my own project a > custom cache store for Infinispan using HDFS, using as a base > line one of the already implemented file stores - SoftIndex > and SingleFile. > I thought it would be beneficial if I start and do it > directly as a contribution to the Infinispan code base. Is > someone interested to take on this subject so we can start > brainstorming about how this task should be approached, to > be sure it gets done smoothly, according to the project's > community house rules, so we don't encounter hassle at the > point when we can look at merging into the baseline, avoid > potentially double work for the same feature etc. > > Kind regards > Cristian Malinescu > > https://github.com/Cristian-Malinescu > https://www.linkedin.com/in/cristianmalinescu > > > P.S. I already went through > http://infinispan.org/docs/8.2.x/contributing/contributing.html > so theoretically I can just start and place a pull request on > GitHub, but I wanted to be sure you guys are also aware of this > plan so we keep in sync and all opinions are taken into > consideration and addressed.
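Radim's three-bullet description of SingleFileStore earlier in this thread (an in-memory key-to-offset map, reuse of unoccupied spots, free slots kept in a size-based tree) can be sketched roughly as below. This is a deliberately simplified illustration, not the actual implementation; notably, it does not split an oversized free slot and return the remainder to the free list.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the SingleFileStore allocation scheme: writes reuse the smallest
// fitting hole (found via the size-keyed tree) before growing the file.
public class SingleFileAllocatorSketch {
    static class Slot {
        final long offset; final int size;
        Slot(long offset, int size) { this.offset = offset; this.size = size; }
    }

    private final Map<String, Slot> positions = new HashMap<>();       // key -> position in file
    private final TreeMap<Integer, Deque<Long>> freeBySize = new TreeMap<>(); // size -> free offsets
    private long fileEnd = 0;

    long write(String key, int size) {
        // Smallest free slot that is large enough; else append to the file.
        Map.Entry<Integer, Deque<Long>> e = freeBySize.ceilingEntry(size);
        long offset;
        if (e != null) {
            offset = e.getValue().pollFirst();
            if (e.getValue().isEmpty()) freeBySize.remove(e.getKey());
        } else {
            offset = fileEnd;   // no hole fits: grow the file
            fileEnd += size;
        }
        positions.put(key, new Slot(offset, size));
        return offset;
    }

    void remove(String key) {
        Slot s = positions.remove(key);
        if (s != null) {
            freeBySize.computeIfAbsent(s.size, k -> new ArrayDeque<>()).addLast(s.offset);
        }
    }

    public static void main(String[] args) {
        SingleFileAllocatorSketch store = new SingleFileAllocatorSketch();
        store.write("a", 100);  // appended at offset 0
        store.write("b", 50);   // appended at offset 100
        store.remove("a");      // offset 0 becomes a free 100-byte slot
        System.out.println(store.write("c", 80)); // reuses offset 0
    }
}
```

The size-keyed `TreeMap` is what makes "find the smallest fitting hole" a log-time operation instead of a scan over all free slots.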
> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From gustavo at infinispan.org Mon May 16 06:51:12 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 16 May 2016 11:51:12 +0100 Subject: [infinispan-dev] HDFS FileStore In-Reply-To: References: Message-ID: Hi, answers inline: On Fri, May 13, 2016 at 3:07 PM, Cristian Malinescu < cristian.malinescu at gmail.com> wrote: > Hi Gustavo - thanks for the guidance! > Have some questions - > 1. ISPN-2940 - says the idea > isn't new and it didn't get a 'Go' at that moment. If we proceed with this > work, does it mean a reopening of the item? > At the time, ISPN-2940 was incorporated as an add-on to [1], but doing [1] at this point is debatable. > 2. Couldn't see any design docs for either the SingleFile or SoftIndexFile > store(s) subsystems - in fairness, couldn't find design docs for any of the > pluggable > cache store modules. I want to start from one of them to keep > consistency and compatibility in style for ease of adoption. > Sure, but HDFS is a slightly different filesystem: distributed, append-only and not POSIX compliant, so I'm not sure to what extent it could be based on the other two file stores. > 3. Was the HDFS store idea abandoned because just using HBase would pretty > much offer the same, with the advantage of offloading to HBase the need > for compaction due to the append-only nature of HDFS?
> > At the end of the day, when using the HBase Cachestore [2], data will be stored in HDFS, but with some caveats: * the data format will be whatever format HBase uses * requires HBase OTOH, a pure HDFS cache store is an interesting proposal for the cases where installing and maintaining HBase is not desirable, and it gives freedom to choose a highly interoperable storage like Apache Parquet [3] [1] https://issues.jboss.org/browse/ISPN-2941 [2] https://github.com/infinispan/infinispan-cachestore-hbase [3] https://parquet.apache.org/ > Cheers > Cris > > On Thu, May 12, 2016 at 10:52 AM, Gustavo Fernandes < > gustavo at infinispan.org> wrote: > >> Hi Cristian! >> >> A HDFS cache store [1] looks interesting, and given the append-only >> nature of HDFS, I'd say probably the SoftIndex is better to look at than >> the SingleFile store since it employs some techniques of append only plus >> eventual compactations. >> It'd be interesting to have a design document so that we can have a >> starting point; we usually publish such designs at [2]. >> >> Cheers, >> Gustavo >> >> [1] https://issues.jboss.org/browse/ISPN-2940 >> [2] https://github.com/infinispan/infinispan/wiki >> >> On Thu, May 12, 2016 at 2:26 PM, Cristian Malinescu < >> cristian.malinescu at gmail.com> wrote: >> >>> Hello folks - I would like to implement for my own project a custom >>> cache store for Infinispan using HDFS and using as base line one of the >>> already implemented file stores - SoftIndex and SingleFile. >>> I thought it would be beneficiary if I start and do it directly as >>> contribution to the Infinispan code base, is someone interested to take on >>> this subject and we start brainstorming about how should this task being >>> approached to be sure it gets done smooth, accordingly to the project's >>> community house rules so we don't encounter hassle at the point when we can >>> look at merging in the baseline, avoid potentially double work for same >>> feature etc. 
>>> >>> Kind regards >>> Cristian Malinescu >>> >>> https://github.com/Cristian-Malinescu >>> https://www.linkedin.com/in/cristianmalinescu >>> >>> >>> P.S I went already trough >>> http://infinispan.org/docs/8.2.x/contributing/contributing.html >>> so theoretically I can just start and place a pull request on GitHub but >>> I wanted to be sure you guys are also aware of this plan so we keep in sync >>> and all opinions are taken in consideration and addressed. >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160516/3c01f656/attachment.html From ttarrant at redhat.com Mon May 16 07:29:08 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 16 May 2016 13:29:08 +0200 Subject: [infinispan-dev] Infinispan documentation Message-ID: <5739AF04.8010400@redhat.com> Hi all, just a heads up on my documentation overhaul plan (incidentally the WildFly guys are also currently discussing their own documentation issues too on wildfly-dev). First of all, I've issued a PR [1] which performs an initial overhaul of the documentation source files by removing the meaningless "chapter-xx" prefix, removing obsolete/duplicated sections, replacing mentions of JBoss AS 7 with WildFly and rearranging some sections so that the flow is more accurate (i.e. 
all cachestores as children of persistence, simple-cache as a child of local-cache, total-order as a child of transactions, etc). The following are issues / tasks I would like to tackle in the near future: - Single page vs multiple pages While having a single page might be useful in some situations (offline?) it is cumbersome to navigate and hurts our SEO. Try searching for "infinispan transactions" and Google still shows the old wiki page. - Collapsible Table of Contents Our current TOC is very large (67 lines) and it is always expanded. This means that readers need to scroll to find what they are looking for. I think that providing an expandable/collapsible tree would be ideal. - Merging the guides By introducing a multi-page approach, the reason to split our different guides (getting started, user, server, faq) becomes less of a necessity. - Versioning Currently the documentation for each version is available under docs/major.minor/. I would like to have semantic names for our main docs instead (i.e. "stable" and "dev"). Unfortunately this means that searching - Alternative formats Asciidoc makes it easy to produce alternative formats and I think we should generate PDF, EPUB and single HTML as well, available as separate downloads. I've been playing with the "webhelp" style available in docbook and I got [2] (be warned, it's a partial upload) which has many of the advantages I'm looking for (and more, like an integrated search), but some shortcomings as well. In particular, each web page is ~145KB, of which 132KB is the sidebar, which I guess could be extracted into an iframe.
Comments and suggestions are obviously welcome Tristan [1] https://github.com/infinispan/infinispan/pull/4345 [2] http://www.dataforte.net/infinispan/configuring_cache.html -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Mon May 16 08:31:26 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 16 May 2016 14:31:26 +0200 Subject: [infinispan-dev] Infinispan documentation In-Reply-To: <5739AF04.8010400@redhat.com> References: <5739AF04.8010400@redhat.com> Message-ID: Hey Tristan! Comments inlined. Thanks Sebastian On Mon, May 16, 2016 at 1:29 PM, Tristan Tarrant wrote: > Hi all, > > just a heads up on my documentation overhaul plan (incidentally the > WildFly guys are also currently discussing their own documentation > issues too on wildfly-dev). > > First of all, I've issued a PR [1] which performs an initial overhaul of > the documentation source files by removing the meaningless "chapter-xx" > prefix, removing obsolete/duplicated sections, replacing mentions of > JBoss AS 7 with WildFly and rearranging some sections so that the flow > is more accurate (i.e. all cachestores as children of persistence, > simple-cache as a child of local-cache, total-order as a child of > transactions, etc). > > The following are issues / tasks I would like to tackle in the near future: > > - Single page vs multiple pages > > While having a single page might be useful in some situation (offline?) > it is cumbersome to navigate and hurts our SEO. Try searching for > "infinispan transactions" and Google still shows the old wiki page. > Can we have both, just like Weld [3][4][5]? A multiple-page guide could be more friendly to Google and a single page could be useful for ctrl+f lovers. > - Collapsible Table of Contents > > Our current TOC is very large (67 lines) and it is always expanded. This > means that readers need to scroll to find what they are looking for. I > think that providing an expandable/collapsible tree would be ideal.
+1. But maybe we could rearrange/remove some of them? > - Merging the guides > > By introducing a multi-page approach, the reason to split our different > guides (getting started, user, server, faq) becomes less of a necessity. > +1000 > - Versioning > > Currently the documentation for each version is available under > docs/major.minor/. I would like to have semantic names for our main docs > instead (i.e. "stable" and "dev"). Unfortunately this means that searching > I agree, but having the version is also very useful. Just a couple of days ago I was searching through the Weld 2.2.SP1 documentation looking for some configuration parameters which were removed in the latest version. I assume many of our users do the same with Infinispan. > - Alternative formats > > Asciidoc makes it easy to produce alternative formats and I think we > should generate PDF, EPUB and single HTML as well available as separate > downloads. > +1 for PDF and single-page HTML. I'm not sure about EPUB (at least I haven't heard of anyone using this format for reading a manual). > > I've been playing with the "webhelp" style available in docbook and I > got [2] (be warned, it's a partial upload) which has many of the > advantages I'm looking for (and more, like an integrated search), but > some shortcomings as well. In particular each web page is ~145KB of > which 132KB of the sidebar, which I guess could be extracted into an > iframe. > To be honest I'm not a big fan of the docbook style, but that's probably because of the way I work with documentation. Usually I try to search a manual using ctrl+f or, when it's more complicated (or split into multiple sections), I often use google with site:infinispan.org (this is just an example). That's why I always like single-page manuals - I can find everything I need pretty quickly. But I assume I'm not in the majority, am I?
> > > Comments and suggestions are obviously welcome > > Tristan > > [1] https://github.com/infinispan/infinispan/pull/4345 > [2] http://www.dataforte.net/infinispan/configuring_cache.html [3] http://weld.cdi-spec.org/documentation/ [4] https://docs.jboss.org/weld/reference/latest/en-US/html_single/ [5] https://docs.jboss.org/weld/reference/latest/en-US/html/ > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160516/b062a549/attachment-0001.html From ttarrant at redhat.com Mon May 16 09:21:17 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 16 May 2016 15:21:17 +0200 Subject: [infinispan-dev] Infinispan documentation In-Reply-To: References: <5739AF04.8010400@redhat.com> Message-ID: <5739C94D.7040802@infinispan.org> On 16/05/2016 14:31, Sebastian Laskawiec wrote: > While having a single page might be useful in some situation (offline?) > > it is cumbersome to navigate and hurts our SEO. Try searching for > "infinispan transactions" and Google still shows the old wiki page. > > > Can we have both, just like Weld [3][4][5]? Multiple page guide could > be more friendly to Google and singe page could be useful for ctrl+f > lovers. Sure, that is my intention. > > - Collapsible Table of Contents > > +1. But maybe we could rearrange/remove some of them? Reorganizing the actual content of the documentation is very much needed, but is not part of the scope of this e-mail. > - Versioning > > Currently the documentation for each version is available under > docs/major.minor/. I would like to have semantic names for our > main docs > instead (i.e. "stable" and "dev"). 
Unfortunately this means that > searching > > > I agree, but having version is also very useful. > > Just a couple of days ago I was searching through Weld 2.2.SP1 > documentation looking for some configuration parameters which were > removed in the latest version. I assume many of our users do the same > with Infinispan. It was never my intention to suggest removal of the old docs, just that docs/stable and docs/dev would always point to the latest stable and unstable docs. Unfortunately, because GitHub pages doesn't support URL rewriting, we need to decide whether we want to also have the actual "hard version" copies of the docs (i.e. stable AND 8.2.x) at the same time. > - Alternative formats > > Asciidoc makes it easy to produce alternative formats and I think we > should generate PDF, EPUB and single HTML as well available as > separate > downloads. > > > +1 for PDF and Single page HTML. I'm not sure about EPUB (at least I > haven't heard about anyone using this format for reading a manual). Sure. > To be honest I'm not a big fan of docbook style but probably that's > because the way I work with the documentation. Usually I try to search > a manual using ctrl+f or when the it's is more complicated (or split > into multiple sections), I often use google with site:infinispan.org > (this is just an example). That's why I always > like single page manuals - I can find everything I need pretty > quickly. But I assume I'm not in the majority, am I? That was just an experiment, I want to keep a look which is as close as possible as the current one. I will probably tackle this in steps, i.e. start with the "stable" + "dev" paths, add the collapsible tree, maybe add the js search, and then split the docs. 
Tristan From pedro at infinispan.org Mon May 16 09:49:40 2016 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 16 May 2016 14:49:40 +0100 Subject: [infinispan-dev] Distributed Counter Discussion In-Reply-To: <57065CA6.7000109@redhat.com> References: <56E70D92.1010206@infinispan.org> <56FABA94.5010104@infinispan.org> <5706527F.6040608@redhat.com> <57065CA6.7000109@redhat.com> Message-ID: <5739CFF4.2020600@infinispan.org> Hi all, This is an update email. I've just made a preview for the distributed counters. You can find it at [1]. In this initial version I'm targeting a more consistent approach because it covers a large set of use cases (you'll lose some performance) and it's easy to test. All the updates are performed atomically, the return value is consistent and it doesn't lose values. It supports limits, notifications and resets. Since performance is a hot topic (there is no free lunch, unfortunately) and an eventually consistent counter fits a bunch of use cases, I'm planning to add another counter manager with these properties. But I'll have to remove the reset (it is not commutative with inc/dec) and take advantage of the functional API (without locking) to get CRDT properties. Does it make sense to you? Another missing task is comparing the performance of all the available implementations and checking whether it matches the theory above. Any feedback is more than welcome since there are quite a few things to keep track of. Cheers, Pedro [1] https://github.com/infinispan/infinispan/pull/4350 On 04/07/2016 02:12 PM, Tristan Tarrant wrote: > On 07/04/2016 15:06, Sanne Grinovero wrote: >> For the "eventually consistent" case, returning a local value might be >> fine but you'd need to define also how writes are merged and what >> guarantees it aims to provide (or which we want to intentionally >> ignore) > > Fortunately a counter has commutative ops, so order is unimportant and > it makes things so much easier ;) > >> And what about the sequence use case?
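Pedro's observation that increment/decrement commute while reset does not is exactly what CRDT counters exploit. A minimal PN-counter sketch - illustrative only, not the proposed Infinispan API - could look like:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal PN-counter sketch: each node tracks its own increments and
// decrements separately; merge takes the per-node maximum, so replicas
// converge regardless of delivery order. A reset would not commute with
// inc/dec, which is why it is dropped in the eventually consistent variant.
public class PnCounter {
    final Map<String, Long> inc = new HashMap<>();
    final Map<String, Long> dec = new HashMap<>();
    final String nodeId;

    PnCounter(String nodeId) { this.nodeId = nodeId; }

    void increment() { inc.merge(nodeId, 1L, Long::sum); }
    void decrement() { dec.merge(nodeId, 1L, Long::sum); }

    // Value seen locally: total increments minus total decrements.
    long value() {
        return inc.values().stream().mapToLong(Long::longValue).sum()
             - dec.values().stream().mapToLong(Long::longValue).sum();
    }

    // Merge is commutative, associative and idempotent: per-node max.
    void merge(PnCounter other) {
        other.inc.forEach((n, v) -> inc.merge(n, v, Math::max));
        other.dec.forEach((n, v) -> dec.merge(n, v, Math::max));
    }

    public static void main(String[] args) {
        PnCounter a = new PnCounter("a"), b = new PnCounter("b");
        a.increment(); a.increment();   // node a: +2
        b.increment(); b.decrement();   // node b: +1, -1
        a.merge(b);                     // replicas exchange state
        System.out.println(a.value()); // 2
    }
}
```

Because merge is idempotent, replicas can exchange state repeatedly and in any order without double-counting, which is the property locking-free functional updates would rely on.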
I guess it can wait, just >> tracking the need for that too. > > Yes, that is a separate thing as it needs far stricter guarantees. > > Tristan > From rvansa at redhat.com Mon May 16 10:22:04 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 16 May 2016 16:22:04 +0200 Subject: [infinispan-dev] Infinispan documentation In-Reply-To: <5739C94D.7040802@infinispan.org> References: <5739AF04.8010400@redhat.com> <5739C94D.7040802@infinispan.org> Message-ID: <5739D78C.40904@redhat.com> On 05/16/2016 03:21 PM, Tristan Tarrant wrote: > On 16/05/2016 14:31, Sebastian Laskawiec wrote: >> While having a single page might be useful in some situation (offline?) >> >> it is cumbersome to navigate and hurts our SEO. Try searching for >> "infinispan transactions" and Google still shows the old wiki page. >> >> >> Can we have both, just like Weld [3][4][5]? Multiple page guide could >> be more friendly to Google and singe page could be useful for ctrl+f >> lovers. > Sure, that is my intention. >> - Collapsible Table of Contents >> >> +1. But maybe we could rearrange/remove some of them? > Reorganizing the actual content of the documentation is very much > needed, but is not part of the scope of this e-mail. > >> - Versioning >> >> Currently the documentation for each version is available under >> docs/major.minor/. I would like to have semantic names for our >> main docs >> instead (i.e. "stable" and "dev"). Unfortunately this means that >> searching >> >> >> I agree, but having version is also very useful. >> >> Just a couple of days ago I was searching through Weld 2.2.SP1 >> documentation looking for some configuration parameters which were >> removed in the latest version. I assume many of our users do the same >> with Infinispan. > It was never my intention to suggest removal of the old docs, just that > docs/stable and docs/dev would always point to the latest stable and > unstable docs. 
Unfortunately, because GitHub pages doesn't support URL > rewriting, we need to decide whether we want to also have the actual > "hard version" copies of the docs (i.e. stable AND 8.2.x) at the same time. What about docs/stable being just |<meta http-equiv="refresh" content="0;url=http://infinispan.org/docs/8.2">| (maybe with a link in the body). I am not sure how well this will work with SEO. Sanne would probably know, as Hibernate recently dealt with the same issue. My 2c Radim > >> - Alternative formats >> >> Asciidoc makes it easy to produce alternative formats and I think we >> should generate PDF, EPUB and single HTML as well available as >> separate >> downloads. >> >> >> +1 for PDF and Single page HTML. I'm not sure about EPUB (at least I >> haven't heard about anyone using this format for reading a manual). > Sure. >> To be honest I'm not a big fan of docbook style but probably that's >> because the way I work with the documentation. Usually I try to search >> a manual using ctrl+f or when the it's is more complicated (or split >> into multiple sections), I often use google with site:infinispan.org >> (this is just an example). That's why I always >> like single page manuals - I can find everything I need pretty >> quickly. But I assume I'm not in the majority, am I? > That was just an experiment, I want to keep a look which is as close as > possible as the current one. > > I will probably tackle this in steps, i.e. start with the "stable" + > "dev" paths, add the collapsible tree, maybe add the js search, and then > split the docs.
> > Tristan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Mon May 16 10:57:05 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 16 May 2016 16:57:05 +0200 Subject: [infinispan-dev] Infinispan documentation In-Reply-To: <5739D78C.40904@redhat.com> References: <5739AF04.8010400@redhat.com> <5739C94D.7040802@infinispan.org> <5739D78C.40904@redhat.com> Message-ID: <5739DFC1.6000304@infinispan.org> On 16/05/2016 16:22, Radim Vansa wrote: > What about docs/stable being just | http-equiv="refresh" > content="0;url=http://infinispan.org/docs/8.2">| > (maybe with a link in the body). I am not sure how well this will work > with SEO. Sanne would probably know, as Hibernate recently dealed with > the same issue. No, redirect wouldn't help. I've also asked a friend who told me that the "primary" docs should have a rel="canonical" meta tag Tristan From sanne at infinispan.org Mon May 16 10:59:08 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 16 May 2016 15:59:08 +0100 Subject: [infinispan-dev] Infinispan documentation In-Reply-To: <5739D78C.40904@redhat.com> References: <5739AF04.8010400@redhat.com> <5739C94D.7040802@infinispan.org> <5739D78C.40904@redhat.com> Message-ID: On 16 May 2016 at 15:22, Radim Vansa wrote: > On 05/16/2016 03:21 PM, Tristan Tarrant wrote: >> On 16/05/2016 14:31, Sebastian Laskawiec wrote: >>> While having a single page might be useful in some situation (offline?) >>> >>> it is cumbersome to navigate and hurts our SEO. Try searching for >>> "infinispan transactions" and Google still shows the old wiki page. >>> >>> >>> Can we have both, just like Weld [3][4][5]? Multiple page guide could >>> be more friendly to Google and singe page could be useful for ctrl+f >>> lovers. >> Sure, that is my intention. 
>>> - Collapsible Table of Contents >>> >>> +1. But maybe we could rearrange/remove some of them? >> Reorganizing the actual content of the documentation is very much >> needed, but is not part of the scope of this e-mail. >> >>> - Versioning >>> >>> Currently the documentation for each version is available under >>> docs/major.minor/. I would like to have semantic names for our >>> main docs >>> instead (i.e. "stable" and "dev"). Unfortunately this means that >>> searching >>> >>> >>> I agree, but having version is also very useful. >>> >>> Just a couple of days ago I was searching through Weld 2.2.SP1 >>> documentation looking for some configuration parameters which were >>> removed in the latest version. I assume many of our users do the same >>> with Infinispan. >> It was never my intention to suggest removal of the old docs, just that >> docs/stable and docs/dev would always point to the latest stable and >> unstable docs. Unfortunately, because GitHub pages doesn't support URL >> rewriting, we need to decide whether we want to also have the actual >> "hard version" copies of the docs (i.e. stable AND 8.2.x) at the same time. > > What about docs/stable being just > > |<meta http-equiv="refresh" content="0;url=http://infinispan.org/docs/8.2">| > > > (maybe with a link in the body). > I am not sure how well this will work with SEO. Sanne would probably > know, as Hibernate recently dealt with the same issue. We're aware of having such SEO issues but didn't solve them yet :-/ You don't want to redirect though. You also don't want to have copies of the documentation in multiple places, at least not without clearly marking which one is the canonical (the only authoritative) source. This can be done either by adding more meta fields in the HTML header, or by inserting them in the HTTP headers. The HTTP headers approach seems to score best, but Hibernate can't use that as documentation is hosted on a server out of our control - so is GitHub pages for Infinispan.
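For illustration, the per-page HTML variant and the server-side HTTP variant would look roughly like this (the URLs are examples only):

```html
<!-- in the <head> of each versioned copy, e.g. docs/8.2/index.html,
     pointing at the page that should be treated as authoritative -->
<link rel="canonical" href="http://infinispan.org/docs/stable/index.html"/>

<!-- the HTTP-header equivalent, emitted by the web server instead of the page:
     Link: <http://infinispan.org/docs/stable/index.html>; rel="canonical" -->
```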
I'd recommend to host your own httpd server, or deploy on cloudfront: this would also give you better performance (especially by having more control on caching options and proxy pragma settings), and better performance makes both users and SEO happy. Sanne > > My 2c > > Radim > || >> >>> - Alternative formats >>> >>> Asciidoc makes it easy to produce alternative formats and I think we >>> should generate PDF, EPUB and single HTML as well available as >>> separate >>> downloads. >>> >>> >>> +1 for PDF and Single page HTML. I'm not sure about EPUB (at least I >>> haven't heard about anyone using this format for reading a manual). >> Sure. >>> To be honest I'm not a big fan of docbook style but probably that's >>> because the way I work with the documentation. Usually I try to search >>> a manual using ctrl+f or when the it's is more complicated (or split >>> into multiple sections), I often use google with site:infinispan.org >>> (this is just an example). That's why I always >>> like single page manuals - I can find everything I need pretty >>> quickly. But I assume I'm not in the majority, am I? >> That was just an experiment, I want to keep a look which is as close as >> possible as the current one. >> >> I will probably tackle this in steps, i.e. start with the "stable" + >> "dev" paths, add the collapsible tree, maybe add the js search, and then >> split the docs. 
>> >> Tristan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From smarlow at redhat.com Mon May 16 12:09:46 2016 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 16 May 2016 12:09:46 -0400 Subject: [infinispan-dev] WildFly NoSQL client integration and Infinispan remote/JDG as a NoSQL client... In-Reply-To: References: <25fb6402-015c-dccc-c8c4-641d69e80131@redhat.com> Message-ID: Hi Sanne, Thanks for the response! :-) On 05/15/2016 06:46 PM, Sanne Grinovero wrote: > Hi Scott, > > I don't think that having a default "testdb" would be useful if it > assumes that the user started an instance of Infinispan Server on a > "testhostmachine": very likely end users would want to at least change > the hostname; one might as well add the whole section at that point. All hostname/port numbers are now defined via the WildFly socket-binding-group section, so that users can change hostname/port numbers in the management console. > > It could be more interesting if the user could lookup - eg via JNDI or > some connection URL - a reference to a client which is exposing the > same API be it a remote or a local CacheManager instance; in this case > you could have a local CacheManager instance started by default within > WildFly and have applications consume this. When the user looks up a CacheManager, does the remote CacheManager handle marshalling of application classes? Or just Java types? > > But is it really useful for people to have a default, predefined testdb? No, the below testdb is just an example of what might be found in the infinispan-nosql subsystem. 
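To make that shape concrete, here is a rough sketch of the kind of thing I have in mind - the element and attribute names below are purely illustrative, not a final schema; only the jndi-name/database values come from the example, and the host/port stay in the socket-binding-group as described above:

```xml
<!-- illustrative sketch only; not a final subsystem schema -->
<subsystem xmlns="urn:jboss:domain:infinispan-nosql:1.0">
    <profile name="testdb" jndi-name="java:jboss/infinispan/test" database="testdb">
        <!-- points at an outbound-socket-binding, so hosts/ports remain
             editable from the management console -->
        <remote-server outbound-socket-binding-ref="testdb-server"/>
    </profile>
</subsystem>

<!-- in the socket-binding-group: -->
<outbound-socket-binding name="testdb-server">
    <remote-destination host="testhostmachine" port="11222"/>
</outbound-socket-binding>
```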
I assume that the infinispan-nosql subsystem could reference many different target hosts. > > I wonder if it shouldn't rather be very easy for an application to > define what it needs, e.g. I'd allow applications to include a > "META-INF/caches.xml" to list the Caches needed by the application, > have WildFly create (and manage) these and provide a way for the > application to lookup the client, or have the client injected. Is the remote cache a mirror system, that contains the same application deployments as the client calling in? I assume that the remote cache is not a WildFly clustered host but really do not know. If the remote cache is a WildFly clustered host, that is already handled via the WildFly dispatcher. If not, then authentication needs to be supported for the remote cache clients. Regarding the "META-INF/caches.xml" idea, I'm not sure yet about that idea. Would each client deployment, have its own connection pool to the remote server? Or would caches.xml identify shared connection profiles that are defined in the WildFly standalone*.xml? > > Thanks, > Sanne > > > On 12 May 2016 at 16:23, Scott Marlow wrote: >> Hi, >> >> Could you bring answers to the discussion [1] about using Infinispan as >> a remote NoSQL store in WildFly. >> >> Perhaps the WildFly standalone.xml subsystem configuration might define >> a "testdb" profile that any application deployment can use to remotely >> access the remote Infinispan server running on "testhostmachine" via >> configuration: >> >> " >> >> > jndi-name="java:jboss/infinispan/test" database="testdb"> >> >> >> >> >> > port-offset="${jboss.socket.binding.port-offset:0}"> >> >> >> >> >> " >> >> Does this match at all with how you thought a WildFly application server >> might use a remote Infinispan server? >> >> Are there any concerns about marshalling, since the remote server >> (testhostmachine) may be a WildFly application server that doesn't have >> the same application deployments? 
>> >> Mostly, I'd like to discuss the above on [1] but here is fine also (we >> can link to this mailing list from [1], if we talk here). >> >> Scott >> >> [1] http://lists.jboss.org/pipermail/wildfly-dev/2016-May/004966.html >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From bban at redhat.com Tue May 17 01:53:37 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 17 May 2016 07:53:37 +0200 Subject: [infinispan-dev] Infispector In-Reply-To: <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> References: <7A2F9076-2AB8-4B56-A29B-CFA070C3E447@redhat.com> <166216030.1733480.1462887673535.JavaMail.zimbra@redhat.com> Message-ID: <573AB1E1.4070903@redhat.com> Interesting! Looking forward to demos/videos... On 10/05/16 15:41, Tomas Sykora wrote: > Hello Galder, and all! > It's nice to communicate again via infinispan-dev after a while :) > > TL;DR: I can see some intersections with the zipkin.io initiative's goals, but InfiSpector seems to be much "easier to handle and contribute to at this moment" -- that suits our student-related use case better. Let's continue with the discussion :) > > Firstly, a short introduction into the context. Red Hat is running a Research & Development laboratory here in Brno at the 2 biggest local universities: Masaryk University, Faculty of Informatics (FI MU) and Brno University of Technology, Faculty of Information Technologies (FIT VUT). > The aim is to reach out to students better and sooner, get them involved in interesting projects, show them open source, git, git workflows and many other things (project specific). A year ago I got excited about this idea and started to think whether I could deliver such a project. And I did.
> > The team faces one big challenge and this is a time constraint. Students are working on _several_ projects during their studies to fulfill courses' requirements to pass the semester. It's hard for them to find additional time to be coding even more. The team managed that, but slowly; that's understandable though. Designing the InfiSpector infrastructure took us some time (Kafka, Druid, NodeJS) + evaluation of these technologies + proofs of concept. > > All 5 team members are 2nd year students of bachelor studies at FIT VUT Brno. > Marek Ciz (https://github.com/mciz), also my very good friend from my home town :) His primary domain is Druid, Kafka and infrastructure. > Vratislav Hais (https://github.com/vratislavhais) Primary domain is front-end. > Jan Fitz (https://github.com/janfitz) Primary domain is front-end and graphic design (also designed our logo). > Tomas Veskrna -- starting > Patrik Cigas -- starting > > At this moment we are very close to getting real data to be monitored via the web UI. It's a matter of 1-2 months considering there is an examination period happening now at the University. > > ******************* > What is InfiSpector and what we want to achieve: > > * We missed a graphical representation of Infinispan nodes' communication so we want > -- To be able to spot possible issues AT THE FIRST LOOK (e.g. wait, this should be the coordinator, how is it possible he sends/receives only 10 % of all messages?)
> -- To demonstrate nicely what's happening inside of an ISPN cluster for newcomers (to see how Infinispan nodes talk to each other to better understand concepts) > -- To be using nice communication diagrams that describe flows like (130 messages from node1 to node5 -- click to see them in detail, filter out in more detail) > * We aimed for a NON-invasive solution > -- No changes in Infinispan internal code > -- Just add a custom JGroups protocol, gather data and send them where you want [0] > * Provide historical recording of JGroups communication > * Help to analyze communication recordings from a big data point of view > -- No need to manually go through gigabytes of text trace logs > > Simplified InfiSpector architecture: > > Infinispan Cluster (JGroups with our protocol) ---> Apache Kafka ---> Druid Database (using Kafka Firehose to inject Kafka Topic) <---> NodeJS server back-end <---> front-end (AngularJS) > > What comes out of the custom JGroups protocol is a short JSON document [1] with a timestamp, sending and receiving node, length of the message and the message itself. Other stuff can be added easily. > > We will be able to easily answer queries like: > How many messages were sent from node1 to node3 during the "last" 60 seconds? > What are these messages? > How many of them were PutKeyValueCommands? > Filter out heart beats (or even ignore them completely), etc. > > We don't have any video recording yet but we are very close to that point. From a UI perspective we will be using these 2 charts: [2], [3]. > > > Talking about Infinispan 9 plans -- [4] was reported a month ago by you, Galder, and we have been working on InfiSpector actively for, let's say, 5 months -- it looks like I should have advertised InfiSpector more, and sooner, but I was waiting for at least a first working demo to start with blogging and videos :) It's good that you've noticed and that we are having this conversation right now. > > To be honest I find the http://zipkin.io/ initiative to be quite similar.
However, InfiSpector seems to be much "easier" and is not targeting performance analysis directly. Just add one protocol to the protocol stack and you are good to go. We were thinking about putting Kafka and Druid somewhere into the cloud (later) so users don't need to start all that big infrastructure on their local machines. > > I am very open to anything that will help us as a community to achieve our common goal -- to be able to graphically monitor Infinispan communication. > Additionally I would be _personally_ looking for something that is easily achievable and is suitable for students to quickly learn new things and quickly make meaningful contributions. > > Thanks! > Tomas > > [0] Achieved by a custom JGroups protocol -- the JGROUPS_TO_KAFKA protocol has been implemented. This can be added at the end of the JGroups stack and every single message that goes through it is sent to Apache Kafka. > [1] > { > "direction": "receiving/up", > "src": "tsykora-19569", > "dest": "tsykora-27916", > "length": 182, > "timestamp": 1460302055376, > "message": "SingleRpcCommand{cacheName='___defaultcache', command=PutKeyValueCommand{key=f6d52117-8a27-475e-86a7-002a54324615, value=tsykora-19569, flags=null, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=60000, maxIdle=-1, version=null}, successful=true}}" > } > [2] http://bl.ocks.org/NPashaP/9796212 > [3] http://bl.ocks.org/mbostock/1046712 > [4] https://issues.jboss.org/browse/ISPN-6346 > > > > > ----- Original Message ----- >> From: "Galder Zamarreño" >> To: "infinispan -Dev List" , "Tomas Sykora" >> Sent: Monday, May 9, 2016 5:06:06 PM >> Subject: Infispector >> >> Hi all, >> >> I've just noticed [1], @Thomas, it appears this is your baby? Could you >> explain in more detail what you are trying to achieve with this? Do you have >> a video to show what exactly it does? >> >> Also, who's [2]?
Curious to know who's working on this stuff :) >> >> The reason I'm interested in finding out a bit more about [1] is because we >> have several efforts in the distributed monitoring/tracing area and want to >> make sure we're not duplicating same effort. >> >> 1. Radim's Message Flow Tracer [3]: This is a project to tool for tracing >> messages and control flow in JGroups/Infinispan using Byteman. >> >> 2. Zipkin effort [4]: The idea behind is to have a way to have Infinispan >> cluster-wide tracing that uses Zipkin to capture and visualize where time is >> spent within Infinispan. This includes both JGroups and other components >> that could be time consuming, e.g. persistence. This will be main task for >> Infinispan 9. This effort will use a lot of interception points Radim has >> developed in [3] to tie together messages related to a request/tx around the >> cluster. >> >> Does your effort fall within any of these spaces? >> >> Cheers, >> >> [1] https://github.com/infinispan/infispector >> [2] https://github.com/mciz >> [3] https://github.com/rvansa/message-flow-tracer >> [4] https://issues.jboss.org/browse/ISPN-6346 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Tue May 17 08:10:28 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 17 May 2016 14:10:28 +0200 Subject: [infinispan-dev] Adding tests for new cache mode Message-ID: <573B0A34.8020302@redhat.com> Hi, I've decided to start working on Scattered Cache [1][2] POC. I'd like to use most of the tests for distributed mode, but just extending DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems quite inelegant, though this is a common practice for repl/dist tests. 
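For anyone not familiar with that pattern, it boils down to a near-empty subclass per test class, roughly like this (the class and enum names are illustrative stand-ins, not the real testsuite classes):

```java
// Illustrative stand-ins for the real Infinispan test classes.
enum CacheMode { DIST_SYNC, SCATTERED_SYNC }

class DistSomeFeatureTest {
    protected CacheMode getCacheMode() {
        return CacheMode.DIST_SYNC;
    }

    // ...dozens of test methods, all driven by getCacheMode()...
    String describe() {
        return "running with " + getCacheMode();
    }
}

// The inelegant part: one boilerplate subclass per test class,
// existing only to flip the cache mode.
class ScatteredSomeFeatureTest extends DistSomeFeatureTest {
    @Override
    protected CacheMode getCacheMode() {
        return CacheMode.SCATTERED_SYNC;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new ScatteredSomeFeatureTest().describe());
    }
}
```

Parameterizing the class over the cache mode would collapse every such subclass into one extra parameter value.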
I had similar problem with Simple Cache, but I didn't need as many tests for that. @Parameters are not used as much in our testsuite - is there any reason for that? And is there any better way, if I want just test everything and exclude those tests where it does not make sense to run the test as well? Suggestions are welcome. Radim [1] https://issues.jboss.org/browse/ISPN-6645 [2] https://github.com/infinispan/infinispan/wiki/Scattered-Cache-design-doc -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Tue May 17 08:26:29 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 17 May 2016 14:26:29 +0200 Subject: [infinispan-dev] Adding tests for new cache mode In-Reply-To: <573B0A34.8020302@redhat.com> References: <573B0A34.8020302@redhat.com> Message-ID: <573B0DF5.5000105@infinispan.org> On 17/05/2016 14:10, Radim Vansa wrote: > Hi, > > I've decided to start working on Scattered Cache [1][2] POC. I'd like to > use most of the tests for distributed mode, but just extending > DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems > quite inelegant, though this is a common practice for repl/dist tests. I > had similar problem with Simple Cache, but I didn't need as many tests > for that. > > @Parameters are not used as much in our testsuite - is there any reason Not using @Parameters is a mistake, IMHO, so if you're willing to convert the relevant ones, that would be lovely. Tristan From ttarrant at redhat.com Wed May 18 03:03:31 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 18 May 2016 09:03:31 +0200 Subject: [infinispan-dev] Infinispan 9.0.0.Alpha2 and Infinispan 8.2.2.Final Message-ID: <573C13C3.1020007@redhat.com> Hi all, yesterday we released Infinispan 9.0.0.Alpha2 and Infinispan 8.2.2.Final. 
Read all about them at http://goo.gl/p9LbJE Enjoy Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rory.odonnell at oracle.com Wed May 18 04:39:39 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Wed, 18 May 2016 09:39:39 +0100 Subject: [infinispan-dev] Early Access builds of JDK 9 b118 & JDK 9 with Project Jigsaw, b118 (#4987) are available on java.net Message-ID: Hi Galder, Early Access b118 for JDK 9 is available on java.net, summary of changes are listed here . Early Access b118 (#4913) for JDK 9 with Project Jigsaw is available on java.net. JDK 9 Build 118 includes a refresh of the module system. There are several changes in this update, JDK 9 b118 has the updated policy for root modules described in JEP 261 [1]. This means that java.corba and the 6 EE modules aren't resolved by default and so it will look "as if" the types in these modules have been removed. More info on the JDK 9 dev mailing list [2]. A change that went into JDK 9 b102 is worth mentioning: JDK9: Remove stopThread RuntimePermission from the default java.policy In previous releases, untrusted code had the "stopThread" RuntimePermission granted by default. This permission allows untrusted code to call Thread.stop(), initiating an asynchronous ThreadDeath Error, on threads in the same thread group. Having a ThreadDeath Error thrown asynchronously is not something that trusted code should be expected to handle gracefully. The permission is no longer granted by default. Rgds,Rory [1] http://openjdk.java.net/jeps/261 [2] http://mail.openjdk.java.net/pipermail/jdk9-dev/2016-May/004309.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin,Ireland -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160518/3a68f461/attachment.html From rvansa at redhat.com Wed May 18 08:12:25 2016 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 18 May 2016 14:12:25 +0200 Subject: [infinispan-dev] ClusteredGetCommand vs. SingleRpcCommand Message-ID: <573C5C29.3010102@redhat.com> Just wondering, why do we have ClusteredGetCommand (and similar ones) and don't wrap GetKeyValueCommand into SingleRpcCommand as with the others? Git history starts in 2009, and I think this goes to real history :) Radim -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Wed May 18 10:33:51 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 18 May 2016 17:33:51 +0300 Subject: [infinispan-dev] ClusteredGetCommand vs. SingleRpcCommand In-Reply-To: <573C5C29.3010102@redhat.com> References: <573C5C29.3010102@redhat.com> Message-ID: I wasn't present in the JBoss Cache days either, but I'm guessing it's to allow "special" processing of the command and/or results outside of the interceptor chain. It also saves a bit on the marshalling cost, as the serialization of SingleRpcCommand isn't terribly efficient. In general, I wouldn't want to force all commands to be VisitableCommands -- especially custom commands that our interceptors probably don't know how to deal with. And TransactionBoundaryCommands go through the chain, but they look up the transaction and create an invocation context based on it, and I'm not sure that logic would fit in SingleRpcCommand. But I remember how long it took me to get used to the back-and-forth between GetKeyValueCommand and ClusteredGetCommand, so I'm all for removing ClusteredGetCommand. Cheers Dan On Wed, May 18, 2016 at 3:12 PM, Radim Vansa wrote: > Just wondering, why do we have ClusteredGetCommand (and similar ones) > and don't wrap GetKeyValueCommand into SingleRpcCommand as with the > others? 
Git history starts in 2009, and I think this goes to real history :) > > Radim > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed May 18 11:02:11 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 18 May 2016 18:02:11 +0300 Subject: [infinispan-dev] Adding tests for new cache mode In-Reply-To: <573B0DF5.5000105@infinispan.org> References: <573B0A34.8020302@redhat.com> <573B0DF5.5000105@infinispan.org> Message-ID: On Tue, May 17, 2016 at 3:26 PM, Tristan Tarrant wrote: > On 17/05/2016 14:10, Radim Vansa wrote: >> Hi, >> >> I've decided to start working on Scattered Cache [1][2] POC. I'd like to >> use most of the tests for distributed mode, but just extending >> DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems >> quite inelegant, though this is a common practice for repl/dist tests. I >> had similar problem with Simple Cache, but I didn't need as many tests >> for that. >> >> @Parameters are not used as much in our testsuite - is there any reason > Not using @Parameters is a mistake, IMHO, so if you're willing to > convert the relevant ones, that would be lovely. > +1 to switch to @Parameters as many tests as you want! Dan From rvansa at redhat.com Thu May 19 03:20:28 2016 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 19 May 2016 09:20:28 +0200 Subject: [infinispan-dev] ClusteredGetCommand vs. SingleRpcCommand In-Reply-To: References: <573C5C29.3010102@redhat.com> Message-ID: <573D693C.9010604@redhat.com> I wasn't suggesting to remove CGC nor any others - haven't had any issues with that. Though, as I add another type of cache and compare that to distributed non-tx cache (looking for reusable code), I try to evaluate if any piece of code is really a proper behaviour, workaround or just technological debt. 
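To illustrate the trade-off - with purely hypothetical miniature types, not the real Infinispan hierarchy - a dedicated command carries its own wire format, while the generic wrapper adds one level of indirection but reuses the visitable command unchanged:

```java
// Hypothetical miniature of the two designs; not the real Infinispan classes.
interface ReplicableCommand { Object perform(); }

interface VisitableCommand extends ReplicableCommand { }

// The command that runs through the local interceptor chain.
class GetKeyValueCommand implements VisitableCommand {
    final String key;
    GetKeyValueCommand(String key) { this.key = key; }
    public Object perform() { return "value-of-" + key; }
}

// Option 1: a dedicated top-level command with its own compact
// marshalling, at the cost of a parallel class to maintain.
class ClusteredGetCommand implements ReplicableCommand {
    final String key;
    ClusteredGetCommand(String key) { this.key = key; }
    public Object perform() { return new GetKeyValueCommand(key).perform(); }
}

// Option 2: a generic wrapper that ships any visitable command,
// trading some marshalling efficiency for uniformity.
class SingleRpcCommand implements ReplicableCommand {
    final VisitableCommand delegate;
    SingleRpcCommand(VisitableCommand delegate) { this.delegate = delegate; }
    public Object perform() { return delegate.perform(); }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new ClusteredGetCommand("k").perform());
        System.out.println(new SingleRpcCommand(new GetKeyValueCommand("k")).perform());
    }
}
```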
I haven't really checked the marshalling cost of CGC vs. SRC. Following the pattern, I've created ClusteredGetAllCommand, too, but when Galder gets back to Functional API, he should properly implement the ReadOnly*Commands [1] and he'll need another command for that - or just push those functional commands through SingleRpcCommand. So I take this as there's no trick in CGC. R. [1] https://issues.jboss.org/browse/ISPN-6586 On 05/18/2016 04:33 PM, Dan Berindei wrote: > I wasn't present in the JBoss Cache days either, but I'm guessing it's > to allow "special" processing of the command and/or results outside of > the interceptor chain. It also saves a bit on the marshalling cost, as > the serialization of SingleRpcCommand isn't terribly efficient. > > In general, I wouldn't want to force all commands to be > VisitableCommands -- especially custom commands that our interceptors > probably don't know how to deal with. And TransactionBoundaryCommands > go through the chain, but they look up the transaction and create an > invocation context based on it, and I'm not sure that logic would fit > in SingleRpcCommand. > > But I remember how long it took me to get used to the back-and-forth > between GetKeyValueCommand and ClusteredGetCommand, so I'm all for > removing ClusteredGetCommand. > > Cheers > Dan > > > On Wed, May 18, 2016 at 3:12 PM, Radim Vansa wrote: >> Just wondering, why do we have ClusteredGetCommand (and similar ones) >> and don't wrap GetKeyValueCommand into SingleRpcCommand as with the >> others? 
Git history starts in 2009, and I think this goes to real history :) >> >> Radim >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From galder at redhat.com Thu May 19 03:30:59 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 19 May 2016 09:30:59 +0200 Subject: [infinispan-dev] EventLoop not available WARN message on docker Message-ID: <60BED88A-3663-4EC0-9C8A-52A1DDEB8C9B@redhat.com> Hey Gustavo, I'm running the following command: $ docker run -it --name master -h master -e "SLAVES=1" gustavonalle/infinispan-server-domain:9.0.0.Alpha2 And seeing this WARN message: https://gist.github.com/galderz/895dd3bc60ddcd2065eb5c4680681d0d I don't think functionality is affected but preferable method for Netty's event loop could not be found. Does this ring a bell? 
Cheers, -- Galder Zamarreño Infinispan, Red Hat From gustavo at infinispan.org Thu May 19 03:42:23 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 19 May 2016 08:42:23 +0100 Subject: [infinispan-dev] EventLoop not available WARN message on docker In-Reply-To: <60BED88A-3663-4EC0-9C8A-52A1DDEB8C9B@redhat.com> References: <60BED88A-3663-4EC0-9C8A-52A1DDEB8C9B@redhat.com> Message-ID: On Thu, May 19, 2016 at 8:30 AM, Galder Zamarreño wrote: > Hey Gustavo, > > I'm running the following command: > > $ docker run -it --name master -h master -e "SLAVES=1" > gustavonalle/infinispan-server-domain:9.0.0.Alpha2 > > And seeing this WARN message: > https://gist.github.com/galderz/895dd3bc60ddcd2065eb5c4680681d0d > > I don't think functionality is affected but preferable method for Netty's > event loop could not be found. Does this ring a bell? > You're correct, this is harmless, although a bit verbose [1] [1] https://issues.jboss.org/browse/ISPN-6651 > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160519/3360cb3e/attachment.html From rvansa at redhat.com Thu May 19 11:05:16 2016 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 19 May 2016 17:05:16 +0200 Subject: [infinispan-dev] Adding tests for new cache mode In-Reply-To: References: <573B0A34.8020302@redhat.com> <573B0DF5.5000105@infinispan.org> Message-ID: <573DD62C.7060408@redhat.com> Uuuh... if it was just so easy.
Now I've played with @Factory, @DataProvider and others (since we want to parameterize the whole class, not just the invocation of a method), and it's broken - I can't simply rename the test according to parameters; it behaves differently in IntelliJ and on the command line, and also with different versions of TestNG. But that's just an aesthetic concern. Another problem is the resource tracking AbstractInfinispanTest uses: @BeforeTest/@AfterTest is run only once per class, not per instance; @BeforeClass/@AfterClass is run once per instance, but in a loop on the instances within the same thread, before the methods are executed. The tests are then run in the order instance1.test1(), instance2.test1(), instance1.test2() - so this needs to be reordered using IMethodInterceptor. Luckily, the BeforeClass invocation is lazy and AfterClass is eager, so these methods are invoked when they should be. So, at this point [1] I managed to get the tests running fine, and if I omit the testName in @Test, it is reported correctly in target/surefire-reports/. However, the output that's printed when I run maven test does not include the parameter, and in IntelliJ each subsequent test result just gets marked with (x) where x is some number. When testName is set, it's repeated for all results in reports, in IntelliJ as well (with (x) for each method invocation), and the command-line output is not changed. If anyone knows how to fix that, go for it - I don't know what IntelliJ or the surefire reporter picks up. Just a hint - the ITest interface won't help you, that just spoils everything. Radim [1] https://github.com/rvansa/infinispan/tree/t_test_factory On 05/18/2016 05:02 PM, Dan Berindei wrote: > On Tue, May 17, 2016 at 3:26 PM, Tristan Tarrant wrote: >> On 17/05/2016 14:10, Radim Vansa wrote: >>> Hi, >>> >>> I've decided to start working on Scattered Cache [1][2] POC.
I'd like to >>> use most of the tests for distributed mode, but just extending >>> DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems >>> quite inelegant, though this is a common practice for repl/dist tests. I >>> had similar problem with Simple Cache, but I didn't need as many tests >>> for that. >>> >>> @Parameters are not used as much in our testsuite - is there any reason >> Not using @Parameters is a mistake, IMHO, so if you're willing to >> convert the relevant ones, that would be lovely. >> > +1 to switch to @Parameters as many tests as you want! > > Dan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From sanne at infinispan.org Thu May 19 11:47:42 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 19 May 2016 16:47:42 +0100 Subject: [infinispan-dev] Adding tests for new cache mode In-Reply-To: <573DD62C.7060408@redhat.com> References: <573B0A34.8020302@redhat.com> <573B0DF5.5000105@infinispan.org> <573DD62C.7060408@redhat.com> Message-ID: I'm sorry, I can't help you, but you're pretty much describing why the Lucene and Query modules use JUnit exclusively. Rather than spreading the @Parameter virus, I'd rather see a tendency to move things to JUnit. It doesn't have some of the TestNG features but it's easy to extend - I just think that for a project like this one is better off making custom and reliable extensions than using a general-purpose cryptic framework. 2cents.. Sanne On 19 May 2016 at 16:05, Radim Vansa wrote: > Uuuh... if it was just so easy. Now I've played with @Factory, > @DataProvider and others (since we want to parameterize whole class, not > just invocation of method), and it's broken - I can't simply rename the > test according to parameters, it behaves differently in IntelliJ and on > command line and also with different versions of TestNG.
But that's just > aesthetic concern. > > Another problem is the resource tracking AbstractInfinispanTest uses: > > @BeforeTest/AfterTest is ran only once per class, not per instance > @BeforeClass/AfterClass is ran once per instance, but in a loop on the > instances within the same thread, before the methods are executed > > The tests are then run in the order instance1.test1(), > instance2.test1(), instance1.test2() - so this needs to be reordered > using IMethodInterceptor. Luckily, the BeforeClass invocation is lazy > and AfterClass is eager, so these methods are invoked when these should be. > > So, at this point [1] I managed to get the tests running fine, and if I > omit the testName in @Test, it is reported correctly in > target/surefire-reports/. However, the output that's printed when I run > maven test does not involve the parameter, and in IntelliJ the test > subsequent result gets just marked with (x) where x is some number. When > testName is set, it's repeated for all results in reports, in IntelliJ > as well (with (x) for each method invocation), and commandline output is > not changed. > > If anyone knows how to fix that, go for it - I don't know what IntelliJ > or the surefire reporter picks. Just a hint - ITest interface won't help > you, that just spoils everything. > > Radim > > [1] https://github.com/rvansa/infinispan/tree/t_test_factory > > On 05/18/2016 05:02 PM, Dan Berindei wrote: >> On Tue, May 17, 2016 at 3:26 PM, Tristan Tarrant wrote: >>> On 17/05/2016 14:10, Radim Vansa wrote: >>>> Hi, >>>> >>>> I've decided to start working on Scattered Cache [1][2] POC. I'd like to >>>> use most of the tests for distributed mode, but just extending >>>> DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems >>>> quite inelegant, though this is a common practice for repl/dist tests. I >>>> had similar problem with Simple Cache, but I didn't need as many tests >>>> for that. 
>>>> >>>> @Parameters are not used as much in our testsuite - is there any reason >>> Not using @Parameters is a mistake, IMHO, so if you're willing to >>> convert the relevant ones, that would be lovely. >>> >> +1 to switch to @Parameters as many tests as you want! >> >> Dan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
From galder at redhat.com Wed May 25 04:52:59 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 25 May 2016 10:52:59 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hi all, Sorry for the delay getting back on this. The addition of a new component does not worry me so much. It has the advantage of being implemented once, independent of the backend endpoint, whether HR or REST. What I'm struggling to understand is what protocol the clients will use to talk to the router. It seems wasteful having to build two protocols at this level, e.g. one at TCP level and one at REST level. If you're going to end up building two protocols, the benefit of the router component disappears and then you might as well embed the two routing protocols within REST and HR directly. In other words, for the router component to make sense, I think it should: 1. Clients, no matter whether HR or REST, should use a single protocol to talk to the router. The natural thing here would be HTTP/2 or a similar protocol. 2. The router then talks HR or REST to the backend. Here the router uses TCP or HTTP based on the backend's needs.
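[Editorial illustration] Galder's point 2 — the router speaking the backend's own protocol — implies the router must first recognize which protocol the client is speaking. A rough sketch of that dispatch in plain Java (assuming the Hot Rod request magic byte 0xA0; the class and method names here are made up for illustration, this is not Infinispan code):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: decide whether an incoming connection speaks
// Hot Rod or HTTP by peeking at the first bytes, the way a front-end
// router would before attaching the proper backend handlers.
public class ProtocolSniffer {

    public enum Protocol { HOT_ROD, HTTP, UNKNOWN }

    public static Protocol sniff(byte[] firstBytes) {
        if (firstBytes == null || firstBytes.length == 0) {
            return Protocol.UNKNOWN;
        }
        // Hot Rod requests begin with the magic byte 0xA0
        if ((firstBytes[0] & 0xFF) == 0xA0) {
            return Protocol.HOT_ROD;
        }
        // HTTP requests begin with an ASCII method name
        String head = new String(firstBytes, StandardCharsets.US_ASCII);
        for (String method : new String[] {"GET ", "PUT ", "POST", "HEAD", "DELE", "OPTI"}) {
            if (head.startsWith(method)) {
                return Protocol.HTTP;
            }
        }
        return Protocol.UNKNOWN;
    }

    public static void main(String[] args) {
        assert sniff(new byte[] {(byte) 0xA0, 0x01}) == Protocol.HOT_ROD;
        assert sniff("GET /rest/default/key HTTP/1.1".getBytes(StandardCharsets.US_ASCII)) == Protocol.HTTP;
    }
}
```

In a real Netty pipeline this peek-and-dispatch would live in an inbound handler that then replaces itself with the chosen backend's handlers; the sketch only shows the decision itself.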
^ The above implies that the HR client has to talk TCP when using the HR server directly, or HTTP/2 when using it via the router, but I don't think this is too bad and it gives us some experience working with HTTP/2 besides the work Anton is carrying out as part of GSoC. Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 11 May 2016, at 10:38, Sebastian Laskawiec wrote: > > Hey Tristan! > > If I understood you correctly, you're suggesting to enhance the ProtocolServer to support multiple EmbeddedCacheManagers (probably with shared transport and by that I mean started on the same Netty server). > > Yes, that also could work but I'm not convinced if we won't lose some configuration flexibility. > > Let's consider a configuration file - https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9 - how, for example, do you use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc1 (and not for cc2)? Both are tied to hotrod-connector. I think using this kind of different options makes sense in terms of multi tenancy. And please note that if we start a new Netty server for each CacheContainer - we would almost end up with the router I proposed. > > The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if(isMultiTenant()) statements in Hotrod as well as REST server. Extracting this also has the additional advantage that we limit changes in those modules (actually there will be probably 2 changes: #1 we should be able to start a ProtocolServer without starting a Netty server (the Router will do it in multi tenant configuration) and #2 collect Netty handlers from ProtocolServer). > > To sum it up - the router's implementation seems to be more complicated but in the long run I think it might be worth it.
> > I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach > > @Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well. > > Thanks > Sebastian > > > > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant wrote: > Not sure I like the introduction of another component at the front. > > My original idea for allowing the client to choose the container was: > > - with TLS: use SNI to choose the container > - without TLS: enhance the PING operation of the Hot Rod protocol to > also take the server name. This would need to be a requirement when > exposing multiple containers over the same endpoint. > > From a client API perspective, there would be no difference between the > above two approaches: just specify the server name and depending on the > transport, select the right one. > > Tristan > > On 29/04/2016 17:29, Sebastian Laskawiec wrote: > > Dear Community, > > > > Please have a look at the design of Multi tenancy support for Infinispan > > [1]. I would be more than happy to get some feedback from you. 
> > > > Highlights: > > > > * The implementation will be based on a Router (which will be built > > based on Netty) > > * Multiple Hot Rod and REST servers will be attached to the router > > which in turn will be attached to the endpoint > > * The router will operate on a binary protocol when using Hot Rod > > clients and path-based routing when using REST > > * Memcached will be out of scope > > * The router will support SSL+SNI > > > > Thanks > > Sebastian > > > > [1] > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Thu May 26 10:51:29 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 26 May 2016 16:51:29 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey Galder! Comments inlined. Thanks Sebastian On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o wrote: > Hi all, > > Sorry for the delay getting back on this. > > The addition of a new component does not worry me so much. It has the > advantage of implementing it once independent of the backend endpoint, > whether HR or Rest. > > What I'm struggling to understand is what protocol the clients will use to > talk to the router. It seems wasteful having to build two protocols at this > level, e.g. one at TCP level and one at REST level. 
If you're going to end > up building two protocols, the benefit of the router component disappears > and then you might as well embed the two routing protocols within REST > and HR directly. > I think I wasn't clear enough in the design about how the routing works... In your scenario - both servers (Hot Rod and REST) will start EmbeddedCacheManagers internally but none of them will start a Netty transport. The only transport that will be turned on is the router. The router will be responsible for recognizing the request type (if HTTP - find the proper REST server, if Hot Rod protocol - find the proper Hot Rod server) and attaching handlers at the end of the pipeline. Regarding the custom protocol (this use case applies to Hot Rod clients which do not use SSL (so SNI routing is not possible)), you and Tristan got me thinking whether we really need it. Maybe we should require SSL+SNI when using the Hot Rod protocol, with no exceptions? The thing that bothers me is that SSL makes the whole setup twice as slow: https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754 > > In other words, for the router component to make sense, I think it should: > > 1. Clients, no matter whether HR or REST, should use a single protocol to > talk to the router. The natural thing here would be HTTP/2 or a similar protocol. > Yes, that's the goal. > 2. The router then talks HR or REST to the backend. Here the router uses > TCP or HTTP based on the backend's needs. > It's even simpler - it just uses the backend's Netty handlers. Since the SNI implementation is ready, please have a look: https://github.com/infinispan/infinispan/pull/4348 > > ^ The above implies that the HR client has to talk TCP when using the HR server > directly, or HTTP/2 when using it via the router, but I don't think this is too > bad and it gives us some experience working with HTTP/2 besides the work > Anton is carrying out as part of GSoC.
> Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 11 May 2016, at 10:38, Sebastian Laskawiec > wrote: > > > > Hey Tristan! > > > > If I understood you correctly, you're suggesting to enhance the > ProtocolServer to support multiple EmbeddedCacheManagers (probably with > shared transport and by that I mean started on the same Netty server). > > > > Yes, that also could work but I'm not convinced if we won't loose some > configuration flexibility. > > > > Let's consider a configuration file - > https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, how > for example use authentication for CacheContainer cc1 (and not for cc2) and > encryption for cc1 (and not for cc1)? Both are tied to hotrod-connector. I > think using this kind of different options makes sense in terms of multi > tenancy. And please note that if we start a new Netty server for each > CacheContainer - we almost ended up with the router I proposed. > > > > The second argument for using a router is extracting the routing logic > into a separate module. Otherwise we would probably end up with several > if(isMultiTenent()) statements in Hotrod as well as REST server. Extracting > this has also additional advantage that we limit changes in those modules > (actually there will be probably 2 changes #1 we should be able to start a > ProtocolServer without starting a Netty server (the Router will do it in > multi tenant configuration) and #2 collect Netty handlers from > ProtocolServer). > > > > To sum it up - the router's implementation seems to be more complicated > but in the long run I think it might be worth it. > > > > I also wrote the summary of the above here: > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach > > > > @Galder - you wrote a huge part of the Hot Rod server - I would love to > hear your opinion as well. 
> > > > Thanks > > Sebastian > > > > > > > > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant > wrote: > > Not sure I like the introduction of another component at the front. > > > > My original idea for allowing the client to choose the container was: > > > > - with TLS: use SNI to choose the container > > - without TLS: enhance the PING operation of the Hot Rod protocol to > > also take the server name. This would need to be a requirement when > > exposing multiple containers over the same endpoint. > > > > From a client API perspective, there would be no difference between the > > above two approaches: just specify the server name and depending on the > > transport, select the right one. > > > > Tristan > > > > On 29/04/2016 17:29, Sebastian Laskawiec wrote: > > > Dear Community, > > > > > > Please have a look at the design of Multi tenancy support for > Infinispan > > > [1]. I would be more than happy to get some feedback from you. > > > > > > Highlights: > > > > > > * The implementation will be based on a Router (which will be built > > > based on Netty) > > > * Multiple Hot Rod and REST servers will be attached to the router > > > which in turn will be attached to the endpoint > > > * The router will operate on a binary protocol when using Hot Rod > > > clients and path-based routing when using REST > > > * Memcached will be out of scope > > > * The router will support SSL+SNI > > > > > > Thanks > > > Sebastian > > > > > > [1] > > > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > 
https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160526/1b9be893/attachment.html
From ttarrant at redhat.com Mon May 30 03:46:47 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 30 May 2016 09:46:47 +0200 Subject: [infinispan-dev] Infinispan URL format Message-ID: <574BEFE7.4000407@redhat.com> In the past there has been talk of representing a connection to Infinispan using a URL, in particular for Hot Rod. The Hibernate OGM team is now working on adding NoSQL datasources to WildFly, and they've asked how they should represent connections to various of these. For Hot Rod: infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't currently support this, so this is forward-looking). Obviously we will support all of the Hot Rod properties for specifying things like security, etc. For Embedded: infinispan:embedded:file://path/to/config.xml (for specifying an external config file) infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager in JNDI) infinispan:embedded: (configuration specified as properties) For the latter, we also need to be able to represent an Infinispan configuration using properties with a simple mapping to XML elements/attributes, e.g.
cache-manager.local-cache.mycache.eviction.size=1000 Comments are welcome Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rvansa at redhat.com Mon May 30 06:09:43 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 30 May 2016 12:09:43 +0200 Subject: [infinispan-dev] Adding tests for new cache mode In-Reply-To: References: <573B0A34.8020302@redhat.com> <573B0DF5.5000105@infinispan.org> <573DD62C.7060408@redhat.com> Message-ID: <574C1167.9020803@redhat.com> Thanks for the insight, Sanne. Has this thread died, or are there any further comments or ideas? (does anyone understand those integrations with Maven or Intellij?) Radim On 05/19/2016 05:47 PM, Sanne Grinovero wrote: > I'm sorry, I can't help you but you're pretty much describing while > the Lucene and Query modules use JUnit exclusively. > > Rather than spreading the @Parameter virus, I'd rather see a tendency > to move things to JUnit. It doesn't have some of the TestNG features > but it's easy to extend - I just think that for a project like this > one is better of by making custom and reliable extensions than to use > a general purpose cryptic framework. > > 2cents.. > > Sanne > > On 19 May 2016 at 16:05, Radim Vansa wrote: >> Uuuh... if it was just so easy. Now I've played with @Factory, >> @DataProvider and others (since we want to parameterize whole class, not >> just invocation of method), and it's broken - I can't simply rename the >> test according to parameters, it behaves differently in IntelliJ and on >> command line and also with different versions of TestNG. But that's just >> aesthetic concern. 
>> >> Another problem is the resource tracking AbstractInfinispanTest uses: >> >> @BeforeTest/AfterTest is ran only once per class, not per instance >> @BeforeClass/AfterClass is ran once per instance, but in a loop on the >> instances within the same thread, before the methods are executed >> >> The tests are then run in the order instance1.test1(), >> instance2.test1(), instance1.test2() - so this needs to be reordered >> using IMethodInterceptor. Luckily, the BeforeClass invocation is lazy >> and AfterClass is eager, so these methods are invoked when these should be. >> >> So, at this point [1] I managed to get the tests running fine, and if I >> omit the testName in @Test, it is reported correctly in >> target/surefire-reports/. However, the output that's printed when I run >> maven test does not involve the parameter, and in IntelliJ the test >> subsequent result gets just marked with (x) where x is some number. When >> testName is set, it's repeated for all results in reports, in IntelliJ >> as well (with (x) for each method invocation), and commandline output is >> not changed. >> >> If anyone knows how to fix that, go for it - I don't know what IntelliJ >> or the surefire reporter picks. Just a hint - ITest interface won't help >> you, that just spoils everything. >> >> Radim >> >> [1] https://github.com/rvansa/infinispan/tree/t_test_factory >> >> On 05/18/2016 05:02 PM, Dan Berindei wrote: >>> On Tue, May 17, 2016 at 3:26 PM, Tristan Tarrant wrote: >>>> On 17/05/2016 14:10, Radim Vansa wrote: >>>>> Hi, >>>>> >>>>> I've decided to start working on Scattered Cache [1][2] POC. I'd like to >>>>> use most of the tests for distributed mode, but just extending >>>>> DistXxxTest with ScatteredXxxTest and overriding getCacheMode() seems >>>>> quite inelegant, though this is a common practice for repl/dist tests. I >>>>> had similar problem with Simple Cache, but I didn't need as many tests >>>>> for that. 
>>>>> >>>>> @Parameters are not used as much in our testsuite - is there any reason >>>> Not using @Parameters is a mistake, IMHO, so if you're willing to >>>> convert the relevant ones, that would be lovely. >>>> >>> +1 to switch to @Parameters as many tests as you want! >>> >>> Dan >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From rory.odonnell at oracle.com Mon May 30 08:50:46 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 30 May 2016 13:50:46 +0100 Subject: [infinispan-dev] Early Access builds of JDK 9 b120 & JDK 9 with Project Jigsaw b120 (#5074) are available on java.net Message-ID: <96899a80-9036-87a5-7559-bfbe89567774@oracle.com> Hi Galder, Early Access b120 for JDK 9 is available on java.net, summary of changes are listed here . Early Access b120 (#5074) for JDK 9 with Project Jigsaw is available on java.net. JDK 9 Build 120 has over *400 *bug fixes, hotspot fixes making a significant contribution. In addition , this build implements JEP 289: Deprecate the Applet API [1] Notable changes since the is last announcement email - in JDK 9 b119 the big change was moving the class file version from 52.0 to 53.0, see [2] for more details. 
Rgds,Rory [1] JEP 289: Deprecate the Applet API [2] http://mail.openjdk.java.net/pipermail/jdk9-dev/2016-January/003507.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin,Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160530/08bd1435/attachment-0001.html From galder at redhat.com Tue May 31 07:33:26 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 31 May 2016 13:33:26 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574BEFE7.4000407@redhat.com> References: <574BEFE7.4000407@redhat.com> Message-ID: Comments inline: -- Galder Zamarre?o Infinispan, Red Hat > On 30 May 2016, at 09:46, Tristan Tarrant wrote: > > In the past there has been talk of representing a connection to > Infinispan using a URL, in particular for HotRod. > The Hibernate OGM team is now working on adding NoSQL datasources to > WildFly, and they've asked for they should represent connections to > various of these. ^ What's this trying to solve exactly? > For Hot Rod: > > infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] > > The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't > currently support this, so this is forward-looking). > Obviously we will support all of the HotRod properties for specifying > things like security, etc. ^ Hmmm, all properties? Do you envision potentially putting all HR client config inside a URL? > > For Embedded: > > infinispan:embedded:file://path/to/config.xml (for specifying an > external config file) > infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager > in JNDI) > infinispan:embedded: (configuration specified as properties) > > For the latter, we also need to be able to represent an infinispan > configuration using properties with a simple mapping to XML > elements/attributes, e.g. 
> > cache-manager.local-cache.mycache.eviction.size=1000 ^ Why 'local-cache' in property name? cachemanager.mycache...etc would be enough since there can't be duplicate cache names inside a given cache manager. So, is 'local-cache' merely a hint? Cheers, > > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue May 31 07:48:36 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 31 May 2016 13:48:36 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: References: <574BEFE7.4000407@redhat.com> Message-ID: <574D7A14.5090608@infinispan.org> On 31/05/2016 13:33, Galder Zamarre?o wrote: > Comments inline: > > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 30 May 2016, at 09:46, Tristan Tarrant wrote: >> >> In the past there has been talk of representing a connection to >> Infinispan using a URL, in particular for HotRod. >> The Hibernate OGM team is now working on adding NoSQL datasources to >> WildFly, and they've asked for they should represent connections to >> various of these. > ^ What's this trying to solve exactly? Similar to how a JDBC URL works, providing a convenient format for specifying a connection to an Infinispan resource. Look at [1] >> For Hot Rod: >> >> infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] >> >> The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't >> currently support this, so this is forward-looking). >> Obviously we will support all of the HotRod properties for specifying >> things like security, etc. > ^ Hmmm, all properties? Do you envision potentially putting all HR client config inside a URL? The use of the ?name=value[&name=value] format in the URL is not the only way. 
JDBC, for example, has a separate properties param: DriverManager.getConnection(jdbcUrl, properties); >> For Embedded: >> >> infinispan:embedded:file://path/to/config.xml (for specifying an >> external config file) >> infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager >> in JNDI) >> infinispan:embedded: (configuration specified as properties) >> >> For the latter, we also need to be able to represent an infinispan >> configuration using properties with a simple mapping to XML >> elements/attributes, e.g. >> >> cache-manager.local-cache.mycache.eviction.size=1000 > ^ Why 'local-cache' in property name? cachemanager.mycache...etc would be enough since there can't be duplicate cache names inside a given cache manager. So, is 'local-cache' merely a hint? This is not for connecting to an existing instance, but for actually creating a cachemanager. Tristan [1] http://lists.jboss.org/pipermail/wildfly-dev/2016-May/004953.html
From emmanuel at hibernate.org Tue May 31 14:23:42 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 31 May 2016 20:23:42 +0200 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: References: <574BEFE7.4000407@redhat.com> Message-ID: > On 31 May 2016, at 13:33, Galder Zamarreño wrote: > >> In the past there has been talk of representing a connection to >> Infinispan using a URL, in particular for HotRod. >> The Hibernate OGM team is now working on adding NoSQL datasources to >> WildFly, and they've asked how they should represent connections to >> various of these. > > ^ What's this trying to solve exactly? The reasoning, in a nutshell, is as follows. If Infinispan wants to be treated as a database, it needs to be friendly towards its clients and offer proper, simple access. A driver + a URL scheme is a common pattern across the RDBMS and NoSQL space these days.
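[Editorial illustration] The driver-plus-URL scheme being discussed could be parsed along these lines for the proposed infinispan:hotrod:// form (a sketch only - hypothetical class, not an existing Infinispan API; 11222 is the default Hot Rod port):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser for:
//   infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager]
// Illustrative only; real driver code would also handle ?name=value
// properties and validation.
public class HotRodUrl {
    static final String PREFIX = "infinispan:hotrod://";
    static final int DEFAULT_PORT = 11222; // default Hot Rod port

    final List<String> hosts = new ArrayList<>();
    final List<Integer> ports = new ArrayList<>();
    String cacheManager; // optional multi-tenant container name

    static HotRodUrl parse(String url) {
        if (!url.startsWith(PREFIX)) {
            throw new IllegalArgumentException("Not a Hot Rod URL: " + url);
        }
        String rest = url.substring(PREFIX.length());
        HotRodUrl result = new HotRodUrl();
        // trailing /cachemanager selects the tenant's container, if present
        int slash = rest.indexOf('/');
        if (slash >= 0) {
            result.cacheManager = rest.substring(slash + 1);
            rest = rest.substring(0, slash);
        }
        // comma-separated host[:port] list
        for (String hostPort : rest.split(",")) {
            int colon = hostPort.indexOf(':');
            if (colon >= 0) {
                result.hosts.add(hostPort.substring(0, colon));
                result.ports.add(Integer.parseInt(hostPort.substring(colon + 1)));
            } else {
                result.hosts.add(hostPort);
                result.ports.add(DEFAULT_PORT);
            }
        }
        return result;
    }
}
```

For example, parse("infinispan:hotrod://node1:11322,node2/tenant1") yields two hosts, ports 11322 and 11222, and cacheManager "tenant1" - mirroring how a JDBC driver splits a connection URL before consulting the separate properties object.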
If we have this, then Wildfly users can start separating the data source configuration from their application deployment like they have been able to for RDBMSes (or other JCA deployment AFAIR). The main difference is that we will not return javax.sql.DataSource objects but the natural native object of the driver. We are setting these approaches in Wildfly for various NoSQL solutions already. Infinispan is the remaining outlier. Emmanuel -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160531/4a48f824/attachment.html From paul.ferraro at redhat.com Tue May 31 15:51:17 2016 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 31 May 2016 15:51:17 -0400 Subject: [infinispan-dev] Infinispan URL format In-Reply-To: <574BEFE7.4000407@redhat.com> References: <574BEFE7.4000407@redhat.com> Message-ID: This also fits nicely with the JCache API, where a CacheProvider is expected to express a connection to a CacheManager as a URI. On Mon, May 30, 2016 at 3:46 AM, Tristan Tarrant wrote: > In the past there has been talk of representing a connection to > Infinispan using a URL, in particular for HotRod. > The Hibernate OGM team is now working on adding NoSQL datasources to > WildFly, and they've asked for they should represent connections to > various of these. > > For Hot Rod: > > infinispan:hotrod://[host1][:port1][,[host2][:port2]]...[/cachemanager] > > The [cachemanager] part is for multi-tenant servers (Hot Rod doesn't > currently support this, so this is forward-looking). > Obviously we will support all of the HotRod properties for specifying > things like security, etc. 
> > For Embedded: > > infinispan:embedded:file://path/to/config.xml (for specifying an > external config file) > infinispan:embedded:jndi://path/to/jndi (for referencing a cachemanager > in JNDI) > infinispan:embedded: (configuration specified as properties) > > For the latter, we also need to be able to represent an infinispan > configuration using properties with a simple mapping to XML > elements/attributes, e.g. > > cache-manager.local-cache.mycache.eviction.size=1000 > > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
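[Editorial illustration] The property-based configuration in Tristan's proposal (cache-manager.local-cache.mycache.eviction.size=1000) maps flat dotted keys onto a nested element/attribute tree. A minimal sketch of that mapping (hypothetical helper, not the actual Infinispan parser):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: turn flat dotted property keys into a nested
// map mirroring the XML element/attribute structure, e.g.
//   cache-manager.local-cache.mycache.eviction.size=1000
// becomes cache-manager -> local-cache -> mycache -> eviction -> size.
public class PropertyTree {

    @SuppressWarnings("unchecked")
    static void put(Map<String, Object> root, String key, String value) {
        String[] path = key.split("\\.");
        Map<String, Object> node = root;
        // walk/create intermediate nodes for all but the last segment
        for (int i = 0; i < path.length - 1; i++) {
            node = (Map<String, Object>) node.computeIfAbsent(
                    path[i], k -> new LinkedHashMap<String, Object>());
        }
        // the last segment is the attribute itself
        node.put(path[path.length - 1], value);
    }
}
```

This also hints at why the 'local-cache' segment Galder questioned matters under this scheme: since properties carry no schema, the segment is what tells the builder which cache type to instantiate rather than being recoverable from the cache name alone.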