[jboss-jira] [JBoss JIRA] (JGRP-1672) Shared memory to send messages between different processes on the same box

Bela Ban (Jira) issues at jboss.org
Thu May 9 02:58:00 EDT 2019


    [ https://issues.jboss.org/browse/JGRP-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731264#comment-13731264 ] 

Bela Ban commented on JGRP-1672:
--------------------------------

More links:
* https://docs.okd.io/latest/dev_guide/shared_memory.html
* https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

> Shared memory to send messages between different processes on the same box
> --------------------------------------------------------------------------
>
>                 Key: JGRP-1672
>                 URL: https://issues.jboss.org/browse/JGRP-1672
>             Project: JGroups
>          Issue Type: Feature Request
>            Reporter: Bela Ban
>            Assignee: Bela Ban
>            Priority: Major
>             Fix For: 5.0
>
>         Attachments: ShmTest.java
>
>
> Investigate whether it makes sense to use shared memory to pass messages between processes on the same box. Say we have A, B and C on box-1 and X, Y and Z on box-2: when A multicasts a message, it could loop it back to itself, place it into shared memory for B and C to read, and multicast it to X, Y and Z. The multicast socket could be non-loopback, so box-1 would not receive it over the network as well.
> Problems:
> * Shared memory in Java can only be done via memory-mapped (sequential or random access) files. To pass a lot of messages, something like a ring buffer would have to be created in shared memory (see the mapping sketch after this list)
> * Unless we use FileLock or polling/busy reading, there is no way to know when a producer has written a message into shared memory. We'd therefore have to use a signalling mechanism, probably a small JGroups message, to notify the consumer(s) of new messages.
> ** Alternatively, we could do busy-waiting: the producer writes into a memory location when a message is ready to be consumed. This memory location could hold the number of messages ready to be read; the consumer would busy-wait on it and decrement it for every message it reads. The counter could be protected by a file lock, so that after some amount of busy-waiting the consumer falls back to a real (blocking) wait on the file lock instead of burning CPU (see the busy-wait sketch below).
> * For multicast messages, we'd have 1 producer but many consumers. A RingBuffer would not work here, as we don't know when all consumers have read a given message, i.e. when to advance the read pointer
> ** As an alternative, we could have one shared memory buffer per member on the same host; this would also cater for unicast messages. However, we'd then use up a lot of memory (see the per-member ring buffer sketch below)
> * How would this work for TCP? We'd have to send the message only to members outside the local box. How do we identify those members?
> * Message reception: a multicast message received and targeted to all members on the same box could also be placed into shared memory, so everyone on the same box receives it
> ** How would this work for TCP? E.g. A sending a multicast message M would use shared memory to deliver M to B and C on box-1, but sending it to all of X, Y and Z would be unneeded work: A could send it only to X, which would then place it into shared memory for Y and Z to consume.
> *** We'd have to include knowledge of 'affinity' (which box a member runs on) in the address (see the affinity sketch below)
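
A minimal mapping sketch for the first bullet: both processes map the same file and see each other's writes. The file name and region size below are made up; on Linux, a file under /dev/shm keeps the region in tmpfs, i.e. actual shared memory.

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ShmMapDemo {
    static final String FILE = "/dev/shm/jgroups-shm-demo"; // hypothetical name; /tmp works too
    static final int    SIZE = 1 << 20;                     // 1 MB shared region

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(FILE, "rw");
             FileChannel ch = raf.getChannel()) {
            // A READ_WRITE mapping of the same file gives MAP_SHARED semantics:
            // writes by one process become visible to all others mapping it
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, SIZE);
            if (args.length > 0 && args[0].equals("write")) {
                byte[] payload = "hello from the writer".getBytes();
                buf.putInt(0, payload.length);   // length header at offset 0
                buf.position(4);
                buf.put(payload);                // payload right after the header
            } else {
                int len = buf.getInt(0);
                byte[] payload = new byte[len];
                buf.position(4);
                buf.get(payload);
                System.out.println("read: " + new String(payload));
            }
        }
    }
}
{code}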
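
The busy-wait sketch, simplified to a monotonically increasing "messages published" counter rather than the decrement-on-read counter described above. Offsets and file name are invented; the plain putInt/getInt calls would need volatile or fenced access in real code, and the FileLock fallback is only indicated by a comment.

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ShmBusyWait {
    static final int COUNTER = 0;   // int at offset 0: number of messages published so far
    static final int DATA    = 64;  // hypothetical start of the (single-slot) message area

    // Producer: write the message, then publish it by bumping the counter
    static void publish(MappedByteBuffer buf, byte[] msg) {
        buf.position(DATA);
        buf.putInt(msg.length);
        buf.put(msg);
        buf.putInt(COUNTER, buf.getInt(COUNTER) + 1); // plain write; real code needs a volatile/fenced store
    }

    // Consumer: spin on the counter for a while, then stop burning CPU
    static byte[] awaitNext(MappedByteBuffer buf, int lastSeen, long maxSpins) {
        long spins = 0;
        while (buf.getInt(COUNTER) == lastSeen) {
            if (++spins > maxSpins)
                Thread.yield(); // a real implementation would block on a FileLock here,
                                // or wait for a small "data ready" JGroups message
        }
        buf.position(DATA);
        int len = buf.getInt();
        byte[] msg = new byte[len];
        buf.get(msg);
        return msg;
    }

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/dev/shm/jgroups-shm-counter", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 16);
            if (args.length > 0 && args[0].equals("produce"))
                publish(buf, "ping".getBytes());
            else
                System.out.println(new String(awaitNext(buf, buf.getInt(COUNTER), 1_000_000)));
        }
    }
}
{code}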
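
The per-member ring buffer sketch: a single-producer / single-consumer ring per target member sidesteps the "when have all consumers read it?" problem, because only that one member advances the read pointer. The layout below (write index, read index, then a data area of <length><payload> records) is invented for illustration; wrap-around and memory ordering are deliberately left out.

{code:java}
import java.nio.MappedByteBuffer;

// One single-producer/single-consumer ring per member on the box.
// Layout (hypothetical): [0..3] write index, [4..7] read index,
// [8..capacity) data area holding <int length><payload bytes> records.
public class ShmRing {
    static final int WRITE_IDX = 0, READ_IDX = 4, DATA = 8;
    final MappedByteBuffer buf;
    final int capacity;   // size of the data area in bytes

    ShmRing(MappedByteBuffer buf) {
        this.buf = buf;
        this.capacity = buf.capacity() - DATA;
    }

    // Producer: returns false when the end of the data area is reached (no wrapping in this sketch)
    boolean offer(byte[] msg) {
        int w = buf.getInt(WRITE_IDX);
        int needed = 4 + msg.length;
        if (w + needed > capacity)
            return false;
        buf.position(DATA + w);
        buf.putInt(msg.length);
        buf.put(msg);
        buf.putInt(WRITE_IDX, w + needed);  // publish; real code needs a fence/volatile store here
        return true;
    }

    // Consumer: returns null if nothing new has been written
    byte[] poll() {
        int w = buf.getInt(WRITE_IDX), r = buf.getInt(READ_IDX);
        if (r == w)
            return null;
        buf.position(DATA + r);
        int len = buf.getInt();
        byte[] msg = new byte[len];
        buf.get(msg);
        buf.putInt(READ_IDX, r + 4 + len);  // only this single consumer ever moves the read pointer
        return msg;
    }
}
{code}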
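
The affinity sketch for the TCP questions and the last bullet: tag each member address with the id of the box it runs on, deliver to same-box members via shared memory, and send over TCP to one representative per remote box. BoxedAddress and the box ids below are purely illustrative, not existing JGroups classes.

{code:java}
import java.util.*;

public class AffinityDemo {
    // Hypothetical holder pairing a member with the box it runs on (not a JGroups class)
    record BoxedAddress(String member, String boxId) {}

    // Group the current view by box
    static Map<String, List<BoxedAddress>> byBox(Collection<BoxedAddress> view) {
        Map<String, List<BoxedAddress>> m = new LinkedHashMap<>();
        for (BoxedAddress a : view)
            m.computeIfAbsent(a.boxId(), k -> new ArrayList<>()).add(a);
        return m;
    }

    // TCP targets for a multicast: one representative per box other than the local one;
    // members on the local box would get the message via shared memory instead
    static List<BoxedAddress> tcpTargets(Collection<BoxedAddress> view, String localBox) {
        List<BoxedAddress> targets = new ArrayList<>();
        byBox(view).forEach((box, members) -> {
            if (!box.equals(localBox))
                targets.add(members.get(0));
        });
        return targets;
    }

    public static void main(String[] args) {
        List<BoxedAddress> view = List.of(
            new BoxedAddress("A", "box-1"), new BoxedAddress("B", "box-1"), new BoxedAddress("C", "box-1"),
            new BoxedAddress("X", "box-2"), new BoxedAddress("Y", "box-2"), new BoxedAddress("Z", "box-2"));
        System.out.println(tcpTargets(view, "box-1")); // A only needs to send to X over TCP
    }
}
{code}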



--
This message was sent by Atlassian Jira
(v7.12.1#712002)

