On 2/26/13 2:17 PM, Manik Surtani wrote:

On 26 Feb 2013, at 14:12, Paolo Romano <romano@inesc-id.pt> wrote:

If you're really into self-tuning this parameter, I expect that a very simple gradient-descent mechanism would actually work pretty well in this case.
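Just to make the idea concrete, here is a rough sketch of the kind of hill-climbing / gradient tuner I have in mind. It is purely illustrative, not the exact algorithm from our papers, and the throughput probe and parameter names are made up:

    // Minimal sketch of a gradient-style tuner for one integer parameter
    // (e.g. a batch size). Names are illustrative only; the throughput
    // probe (higher is better) is assumed to be supplied by the caller.
    import java.util.function.IntToDoubleFunction;

    public class GradientTuner {

        private final IntToDoubleFunction measureThroughput;
        private final int min, max, step;

        public GradientTuner(IntToDoubleFunction measureThroughput,
                             int min, int max, int step) {
            this.measureThroughput = measureThroughput;
            this.min = min;
            this.max = max;
            this.step = step;
        }

        /** One tuning step: probe both neighbours and move uphill. */
        public int tuneOnce(int current) {
            double here = measureThroughput.applyAsDouble(current);
            double up   = measureThroughput.applyAsDouble(Math.min(max, current + step));
            double down = measureThroughput.applyAsDouble(Math.max(min, current - step));
            if (up > here && up >= down) return Math.min(max, current + step);
            if (down > here)             return Math.max(min, current - step);
            return current; // local optimum for now; re-run periodically as the load changes
        }
    }

In practice you would call tuneOnce() periodically (say, every few seconds) and feed it a throughput estimate averaged over the last interval, so the tuner keeps tracking the workload as it shifts.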

We have done similar work in the Cloud-TM project (applied to both message batching and the number of threads active per node), and if you're interested I can send more references on this.

Yes, please do.  :)

Attached are two papers in PDF:
- "paper_submitted.pdf" deals with optimizing the level of parallelism in transactional memories (both centralized and distributed)
- "SASO12_paper84_PDFexpressOk.pdf" presents a mechanism for self-tuning batching (a.k.a. message packing) in total-order-based protocols.

Diego Didona (who has already subscribed to this mailing list) will be glad to provide more details ;)