<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#ffffff" text="#000000">
    On 2/17/11 9:02 PM, Mark Little wrote:
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div>Hi Paolo,<br>
        <br>
        Sent from my iPhone</div>
      <div><br>
        On 17 Feb 2011, at 19:51, Paolo Romano &lt;<a
          moz-do-not-send="true" href="mailto:romano@inesc-id.pt">romano@inesc-id.pt</a>&gt;
        wrote:<br>
        <br>
      </div>
      <blockquote type="cite">
        <div> Hi Mark,<br>
          <br>
          let me try to clarify the scope and goals of the work
          Sebastiano and Diego are doing, as I think there may have been
          some misunderstanding here.<br>
          <br>
          First, an important premise, which I think was not clear in
          their previous message. We are here considering the *full
          replication mode* of Infinispan, in which every node maintains
          a copy of every data item. This implies that there is
          no need to fetch data from remote nodes during transaction
          execution. Also, Infinispan was configured NOT to use eager
          locking. In other words, during a transaction's execution
          Infinispan acquires locks only locally.<br>
        </div>
      </blockquote>
      <div><br>
      </div>
      Ok that was certainly not clear.
      <div><br>
        <blockquote type="cite">
          <div> <br>
            What is being compared is a single-master replication
            protocol (the classic primary-backup scheme) vs the
            2PC-based multi-master replication protocol (which is the
            one currently implemented by Infinispan). <br>
          </div>
        </blockquote>
        <div><br>
        </div>
        Since two-phase commit is strongly associated with transaction
        systems ;) (probably more so than with replication), I think we
        need to be more careful with the implicit assumptions
        that are carried into any future discussions :) It should
        help avoid confusion.</div>
    </blockquote>
    <br>
    I see your point. But I guess that one of the main reasons why
    several details of Infinispan's 2PC-based replication algorithm
    were omitted in the mail was to avoid making it so long that it
    would discourage people from reading it ;-) <br>
    <br>
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div><br>
        <blockquote type="cite">
          <div> <br>
            The two protocols behave identically during transactions'
            execution. What differs between the two protocols is the
            handling of the commit phase. <br>
            <br>
            In the single-master mode, as no remote conflict can occur
            and local conflicts are managed by the local concurrency
            control, data updates can simply be pushed to the backups.
            More precisely (this should also answer a question of
            Mircea's), DURING the commit phase the primary, in a
            synchronous fashion, propagates the updates to the backups,
            waits for their acks, and only then returns from the commit
            request to the application.<br>
            <br>
            In the multi-master mode, 2PC is used to materialize
            conflicts among transactions executed at different nodes,
            and to ensure agreement on the transaction's outcome. Thus
            2PC serves to enforce both atomicity and isolation.<br>
          </div>
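        <br>
        To make the flow above concrete, here is a minimal sketch of the
        primary's commit path; all names (Backup, PrimaryCommitSketch, ...)
        are hypothetical stand-ins, not Infinispan classes:<br>
        <pre>
import java.util.List;
import java.util.Map;

// Illustrative sketch of the PB commit path described above; Backup is a
// hypothetical stand-in for the group-communication layer, not an Infinispan API.
final class PrimaryCommitSketch {
    interface Backup { boolean applySync(Map&lt;Object, Object> writeSet); }

    private final List&lt;Backup> backups;

    PrimaryCommitSketch(List&lt;Backup> backups) { this.backups = backups; }

    void commit(Map&lt;Object, Object> store, Map&lt;Object, Object> writeSet) {
        // Local conflicts were already handled by local concurrency control,
        // so the primary just applies the write set...
        store.putAll(writeSet);
        // ...and pushes it synchronously: one blocking call per backup,
        // returning to the application only once every ack has arrived.
        for (Backup b : backups) {
            if (!b.applySync(writeSet)) {
                throw new IllegalStateException("backup did not ack the update");
            }
        }
    }
}
</pre>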
        </blockquote>
        <div><br>
        </div>
        Not quite :)</div>
      <div><br>
      </div>
      <div>2PC is a consensus protocol, pure and simple. I think what
        you mean is that some of the participants do additional
        work during the two phases, such as happens in a TM where locks
        may be propagated, promoted or released. But the reason I
        mention this is that we as an industry (and academia) have
        confused the world by not being clear about what 2PC brings. You'd
        be surprised by the number of people who believe 2PC is
        responsible for all of the ACID semantics. I think we need to be
        very clear on such distinctions.</div>
    </blockquote>
    <br>
    ... true, it's the lock acquisition taking place during the voting
    phase of 2PC that ensures isolation, not 2PC per se. I called this
    approach a "2PC-based multi-master replication protocol" when I
    introduced it in my mail... 2PC here was just a shorthand.<br>
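    <br>
    Just to make the shorthand explicit: the prepare (voting) step is where
    each replica tries to acquire the write locks, and it is this try-lock
    that materializes remote conflicts. A toy sketch (hypothetical names,
    nothing to do with Infinispan's actual classes):<br>
    <pre>
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of a replica's "vote" in the 2PC-based multi-master scheme.
// Isolation comes from the try-lock below, not from 2PC itself.
final class PrepareSketch {
    private final Map&lt;Object, Object> lockTable = new ConcurrentHashMap&lt;>();

    /** Returns true (vote COMMIT) only if all write locks could be acquired. */
    boolean onPrepare(Object txId, Set&lt;Object> writeSet) {
        for (Object key : writeSet) {
            Object owner = lockTable.putIfAbsent(key, txId);
            if (owner != null &amp;&amp; !owner.equals(txId)) {
                release(txId, writeSet);   // conflict with another transaction
                return false;              // vote ABORT
            }
        }
        return true;                       // vote COMMIT; locks held until phase two
    }

    void release(Object txId, Set&lt;Object> writeSet) {
        for (Object key : writeSet) {
            lockTable.remove(key, txId);   // release only the locks this tx holds
        }
    }
}
</pre>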
    <br>
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div><br>
        <blockquote type="cite">
          <div> <br>
            The plots attached in the mail from Diego and Sebastiano are
            comparing the performance of two replication strategies that
            provide the same degree of consistency/failure
            resiliency...at least assuming perfect failure detection,</div>
        </blockquote>
        <div><br>
        </div>
        And we both know there is no such thing, unless you want to get
        into the world of quantum mechanics :-)</div>
    </blockquote>
    <br>
    ....still, there are many real systems that are actually designed
    assuming perfect failure detection... possibly relying on some
    dedicated hardware mechanism (which encapsulates the perfect failure
    detection assumption) to enforce accuracy by forcing falsely
    suspected nodes to shut down, e.g.
    <a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/STONITH">http://en.wikipedia.org/wiki/STONITH</a>.<br>
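    <br>
    (For what it's worth, the guard such systems put in front of a fail-over
    is roughly the following; the Fencer and Promoter interfaces are purely
    hypothetical, standing in for e.g. an IPMI or power-switch driver:)<br>
    <pre>
// Rough illustration of the STONITH idea: before promoting a backup, power off
// (fence) the suspected primary so a false suspicion cannot yield two primaries.
// Fencer and Promoter are hypothetical interfaces, not part of any real product.
final class FailoverSketch {
    interface Fencer   { boolean powerOff(String nodeId); }
    interface Promoter { void promoteToPrimary(String nodeId); }

    private final Fencer fencer;
    private final Promoter promoter;

    FailoverSketch(Fencer fencer, Promoter promoter) {
        this.fencer = fencer;
        this.promoter = promoter;
    }

    void onPrimarySuspected(String suspectedPrimary, String chosenBackup) {
        // Fence first: even if the old primary was only falsely suspected,
        // it is now certainly down, so "perfect failure detection" holds.
        if (!fencer.powerOff(suspectedPrimary)) {
            throw new IllegalStateException("could not fence " + suspectedPrimary);
        }
        promoter.promoteToPrimary(chosenBackup);
    }
}
</pre>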
    <br>
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div><br>
        <blockquote type="cite">
          <div> I'd prefer to avoid delving into an (actually
            interesting) discussion of the non-blocking guarantees of the
            two protocols in partially or eventually synchronous
            environments on this mailing list. </div>
        </blockquote>
        <div><br>
        </div>
        I'd be interested in continuing off list then.</div>
    </blockquote>
    <br>
    Well, trying to make a very long story (probably too) short, one may
    say that:<br>
    <br>
    - in an asynchronous system (even if augmented with eventually
    perfect failure detection), classic 2PC blocks upon coordinator
    crashes. Pragmatically, this forces heuristic decisions that may
    lead to atomicity violations. Using more expensive commit protocols,
    such as Paxos Commit
    (<a class="moz-txt-link-freetext" href="http://research.microsoft.com/apps/pubs/default.aspx?id=64636">http://research.microsoft.com/apps/pubs/default.aspx?id=64636</a>), one
    can enforce atomicity even in eventually synchronous systems
    (tolerating f faults out of 2f+1 replicas). <br>
    <br>
    - With primary-backup (PB), in an asynchronous system the issue is
    the split-brain syndrome (this wiki definition is not the best ever,
    but it's the only one I could rapidly find:
    <a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/Split-brain_(Computing)">http://en.wikipedia.org/wiki/Split-brain_(Computing)</a> ).
    Unsurprisingly, for PB too it is possible to design (more
    expensive) variants that work in partially synchronous systems. An
    example is Vertical Paxos
    (research.microsoft.com/pubs/80907/podc09v6.pdf). Or, assuming
    virtual synchrony (<a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/Virtual_synchrony">http://en.wikipedia.org/wiki/Virtual_synchrony</a>), 
    by propagating the primary's updates via uniform reliable broadcast
    (also called safe delivery in virtual synchrony jargon,
    <a class="moz-txt-link-freetext" href="http://www.cs.huji.ac.il/labs/transis/lab-projects/guide/chap3.html#safe">http://www.cs.huji.ac.il/labs/transis/lab-projects/guide/chap3.html#safe</a>).
    Or, mapping the solution to classic Paxos, by having the role of the
    primary coincide with that of the leader in the Multi-Paxos protocol
    (<a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/Paxos_algorithm#Multi-Paxos">http://en.wikipedia.org/wiki/Paxos_algorithm#Multi-Paxos</a>). <br>
    <br>
    Note that, in an asynchronous system, the PB implementation shown in
    the plots might violate consistency in case of false failure
    suspicions of the primary, just like Infinispan's current 2PC-based
    replication scheme might do in case of false failure suspicions of
    the coordinator. So, in terms of required synchrony assumptions, the
    two protocols considered are fairly comparable.<br>
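    <br>
    If it helps to see the difference between the two bullets above in one
    place, the essence of the two waiting rules is sketched below (toy code,
    obviously; all names are made up):<br>
    <pre>
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;

// Toy contrast between the two wait rules discussed above: classic 2PC's
// coordinator needs a reply from *every* participant (so one crash blocks it),
// whereas a Paxos-Commit-style protocol with 2f+1 acceptors proceeds with f+1.
final class QuorumSketch {
    /** 2PC flavour: blocks until all participants have voted. */
    static void awaitAllVotes(List&lt;CompletableFuture&lt;Boolean>> votes) {
        CompletableFuture.allOf(votes.toArray(new CompletableFuture[0])).join();
    }

    /** Paxos flavour: proceeds as soon as a majority (f+1 out of 2f+1) answers. */
    static void awaitMajority(List&lt;CompletableFuture&lt;Boolean>> votes)
            throws InterruptedException {
        int majority = votes.size() / 2 + 1;
        CountDownLatch latch = new CountDownLatch(majority);
        votes.forEach(v -> v.thenRun(latch::countDown));
        latch.await();
    }
}
</pre>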
    <br>
    <br>
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div><br>
        <blockquote type="cite">
          <div>Thus I disagree when you say that they are not
            comparable. <br>
          </div>
        </blockquote>
        <div><br>
        </div>
        Well hopefully you now see why I still disagree. If the term 2PC
        is being used here to mean the atomicity/consensus protocol
        only, then they are not comparable. If it's a shorthand for more
        than that then we should use a different notation.</div>
    </blockquote>
    <br>
    Considering that 2PC was a shorthand for "2PC-based multi-master
    replication scheme"... I hope we can now agree that the
    performance of the two protocols is comparable ;-)<br>
    <br>
    <br>
    <blockquote
      cite="mid:2BB692FA-DEF5-4812-9C82-C5273266F773@ncl.ac.uk"
      type="cite">
      <div><br>
        <blockquote type="cite">
          <div> <br>
            IMO, the only arguable unfairness in the comparison is that
            the multi-master scheme allows processing update
            transactions at every node, whereas the single-master scheme
            has to rely either on 1) a load-balancing scheme that
            redirects all update transactions to the master, or 2)
            a redirection scheme that forwards to the master
            any update transactions "originated" by the backups. The
            testbed considered in the current evaluation is "emulating"
            option 1, which does favour the PB scheme. On the other
            hand, we are currently studying a non-intrusive way to
            implement a transparent redirection scheme (option 2) in
            Infinispan, so suggestions are very welcome! ;-)<br>
            <br>
            I hope this post cleared up your doubts!! :-)<br>
          </div>
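        <br>
        A very rough sketch of what option 2 could look like (invented names
        throughout; this is not the actual Infinispan interceptor API, just the
        shape of the idea):<br>
        <pre>
// Invented names throughout: the point is only the shape of "option 2",
// i.e. a backup transparently forwarding its update operations to the primary.
final class RedirectionSketch {
    interface PrimaryStub { Object forwardWrite(Object key, Object value); } // e.g. an RPC proxy

    private final boolean iAmPrimary;
    private final PrimaryStub primary;
    private final java.util.Map&lt;Object, Object> localStore;

    RedirectionSketch(boolean iAmPrimary, PrimaryStub primary,
                      java.util.Map&lt;Object, Object> localStore) {
        this.iAmPrimary = iAmPrimary;
        this.primary = primary;
        this.localStore = localStore;
    }

    Object put(Object key, Object value) {
        if (iAmPrimary) {
            return localStore.put(key, value);   // normal local path on the primary
        }
        // Backup node: instead of throwing an exception, ship the write to the
        // primary (RPC-style) and return its answer to the caller transparently.
        return primary.forwardWrite(key, value);
    }
}
</pre>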
        </blockquote>
        <div><br>
        </div>
        Yes and no :)</div>
      <div><br>
        <blockquote type="cite">
          <div> <br>
            Cheers,<br>
            <br>
                Paolo<br>
            <br>
            <br>
            <br>
            On 2/17/11 3:53 PM, Mark Little wrote:
            <blockquote
              cite="mid:2F42CB44-FDCF-4D7C-BF13-39BD2259210B@redhat.com"
              type="cite">Hi
              <div><br>
                <div>
                  <div>On 11 Feb 2011, at 17:56, Sebastiano Peluso
                    wrote:</div>
                  <br class="Apple-interchange-newline">
                  <blockquote type="cite">
                    <div>Hi all,<br>
                      <br>
                      first let us introduce ourselves, given that this
                      is the first time we write to this mailing list.<br>
                      <br>
                      Our names are Sebastiano Peluso and Diego Didona,
                      and we are working at INESC-ID Lisbon on the
                      Cloud-TM project. Our work is framed in the context
                      of self-adaptive replication mechanisms (please
                      refer to the previous message by Paolo on this
                      mailing list and to the <a
                        moz-do-not-send="true"
                        href="http://www.cloudtm.eu">www.cloudtm.eu</a>
                      website for additional details on this project).<br>
                      <br>
                      To date we have been working on developing a
                      relatively simple primary-backup (PB) replication
                      mechanism, which we integrated within Infinispan
                      4.2 and 5.0. In this kind of scheme only one node
                      (called the primary) is allowed to process update
                      transactions, whereas the remaining nodes only
                      process read-only transactions. This allows coping
                      very efficiently with write transactions, as the
                      primary does not have to incur the cost of
                      remote coordination schemes or distributed
                      deadlocks, which can hamper performance at high
                      contention with two-phase commit schemes. With PB,
                      in fact, the primary can serialize transactions
                      locally and simply propagate the updates to the
                      backups, for fault tolerance as well as to allow
                      them to process read-only transactions on fresh
                      data. On the other hand, the primary is
                      clearly prone to becoming the bottleneck, especially
                      in large clusters and write-intensive workloads.<br>
                    </div>
                  </blockquote>
                  <div><br>
                  </div>
                  So when you say "serialize transactions locally" does
                  this mean you don't do distributed locking? Or is
                  "locally" not referring simply to the primary?</div>
                <div><br>
                  <blockquote type="cite">
                    <div><br>
                      Thus, this scheme does not really represent a
                      replacement for the default 2PC protocol, but
                      rather an alternative approach that is
                      particularly attractive (as we will illustrate
                      below with some Radargun-based performance
                      results) in small-scale clusters or, in "elastic"
                      cloud scenarios, in periods where the workload is
                      either read-dominated or not very intense.</div>
                  </blockquote>
                  <div><br>
                  </div>
                  <div>I'm not sure what you mean by a "replacement" for
                    2PC? 2PC is for atomicity (consensus). It's the A in
                    ACID. It's not the C, I or D.</div>
                  <br>
                  <blockquote type="cite">
                    <div> Since this scheme is particularly efficient in
                      these scenarios, its adoption would allow
                      minimizing the number of resources acquired from
                      the cloud provider during such periods, with direct
                      benefits in terms of cost reduction. In Cloud-TM,
                      in fact, we aim at designing autonomic mechanisms
                      that dynamically switch among multiple
                      replication mechanisms depending on the current
                      workload characteristics.<br>
                    </div>
                  </blockquote>
                  <div><br>
                  </div>
                  Replication and transactions aren't mutually
                  inconsistent things. They can be merged quite well :-)</div>
                <div><br>
                </div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/articles/papers/845.pdf">http://www.cs.ncl.ac.uk/publications/articles/papers/845.pdf</a></div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/inproceedings/papers/687.pdf">http://www.cs.ncl.ac.uk/publications/inproceedings/papers/687.pdf</a></div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/inproceedings/papers/586.pdf">http://www.cs.ncl.ac.uk/publications/inproceedings/papers/586.pdf</a></div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/inproceedings/papers/596.pdf">http://www.cs.ncl.ac.uk/publications/inproceedings/papers/596.pdf</a></div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/trnn/papers/117.pdf">http://www.cs.ncl.ac.uk/publications/trnn/papers/117.pdf</a></div>
                <div><a moz-do-not-send="true"
                    href="http://www.cs.ncl.ac.uk/publications/inproceedings/papers/621.pdf">http://www.cs.ncl.ac.uk/publications/inproceedings/papers/621.pdf</a></div>
                <div><br>
                </div>
                <div>The reason I mention these is because I don't quite
                  understand what your goal is/was with regards to
                  transactions and replication.</div>
                <div><br>
                  <blockquote type="cite">
                    <div><br>
                      Before discussing the results of our preliminary
                      benchmarking study, we would like to briefly
                      overview how we integrated this replication
                      mechanism within Infinispan. Any comment/feedback
                      is of course highly appreciated. First of all, we
                      have defined a new command, namely
                      PassiveReplicationCommand, which is a subclass of
                      PrepareCommand. We had to define a new command
                      because we had to design customized "visiting"
                      methods for the interceptors. Note that our
                      protocol only affects the commit phase of a
                      transaction: during the prepare phase, in the
                      prepare method of the TransactionXaAdapter class,
                      if the Primary Backup mode is enabled, a
                      PassiveReplicationCommand is built by the
                      CommandsFactory and passed to the invoke method of
                      the invoker. The PassiveReplicationCommand is then
                      visited by all the interceptors in the chain, by
                      means of the visitPassiveReplicationCommand
                      methods. We describe in more detail the operations
                      performed by the non-trivial interceptors:<br>
                      <br>
                      -TxInterceptor: as in the 2PC protocol, if the
                      context did not originate locally, then for each
                      modification stored in the
                      PassiveReplicationCommand the chain of
                      interceptors is invoked.<br>
                      -LockingInterceptor: first the next interceptor is
                      called, then cleanupLocks is performed with
                      the second parameter set to true (commit the
                      operations). This operation is always safe: on the
                      primary it is called only after the acks from
                      all the slaves have been received (see the
                      ReplicationInterceptor below); on a slave there
                      are no concurrent conflicting writes, since these
                      are already serialized by the locking
                      scheme performed at the primary.<br>
                      -ReplicationInterceptor: first it invokes the next
                      interceptor; then, if the predicate
                      shouldInvokeRemoteTxCommand(ctx) is true, the
                      method rpcManager.broadcastRpcCommand(command,
                      true, false) is invoked, which replicates the
                      modifications in synchronous mode (waiting for
                      an explicit ack from all the backups).<br>
                      <br>
                      As for the commit phase:<br>
                      -Given that in the Primary Backup scheme the
                      prepare phase works as a commit too, the commit
                      method on a TransactionXaAdapter object simply
                      returns in this case.<br>
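                      <br>
                      (For discussion purposes, the order of events on
                      the primary can be condensed into the following
                      toy, self-contained rendering; the names below are
                      simplified stand-ins, not the actual Infinispan
                      classes:)<br>
                      <pre>
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy rendering of the order of events described above on the primary:
// build the "passive replication" command at prepare time, broadcast it
// synchronously to the backups, and only then commit/release the local locks.
// Names are simplified stand-ins, not the actual Infinispan classes.
final class PassiveReplicationFlowSketch {
    record PassiveReplicationCmd(String txId, Map&lt;String, String> modifications) {}

    static void prepare(String txId,
                        Map&lt;String, String> modifications,
                        List&lt;Consumer&lt;PassiveReplicationCmd>> backups,
                        Map&lt;String, String> localStore) {
        PassiveReplicationCmd cmd = new PassiveReplicationCmd(txId, modifications);
        // "ReplicationInterceptor" step: synchronous broadcast, one blocking
        // call per backup (each Consumer models a backup applying the command).
        backups.forEach(b -> b.accept(cmd));
        // "LockingInterceptor" step: only after all acks are in, commit the
        // writes and release the local locks (cleanupLocks with commit = true).
        localStore.putAll(modifications);
    }
    // The commit phase proper is then a no-op: prepare already acted as commit.
}
</pre>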
                      <br>
                      On the resulting extended Infinispan version, a
                      subset of unit/functional tests was executed and
                      passed successfully:<br>
                      - commands.CommandIdUniquenessTest<br>
                      - replication.SyncReplicatedAPITest<br>
                      - replication.SyncReplImplicitLockingTest<br>
                      - replication.SyncReplLockingTest<br>
                      - replication.SyncReplTest<br>
                      - replication.SyncCacheListenerTest<br>
                      - replication.ReplicationExceptionTest<br>
                      <br>
                      <br>
                      We have tested this solution using a customized
                      version of Radargun. Our customizations were first
                      of all aimed at having each thread access data
                      within transactions, instead of executing single
                      put/get operations. In addition, every Stresser
                      thread now accesses all of the keys stored by
                      Infinispan with uniform probability, thus
                      generating conflicts with a probability
                      proportional to the number of concurrently active
                      threads and inversely proportional to the total
                      number of keys maintained.<br>
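                      <br>
                      (In simplified form, and using the parameters of
                      the experiments described below, the inner loop of
                      each customized Stresser thread has roughly the
                      following shape; the real code goes through
                      Radargun's and Infinispan's own APIs, and the Tx
                      interface is just a stand-in for transaction
                      demarcation:)<br>
                      <pre>
import java.util.Map;
import java.util.Random;

// Simplified shape of the customization: each stresser thread runs transactions
// of a fixed number of operations, picking keys uniformly at random over the
// whole key set. The Tx interface is a hypothetical stand-in, not a real API.
final class TxStresserSketch implements Runnable {
    interface Tx { void begin(); void commit() throws Exception; void rollback(); }

    private final Tx tx;
    private final Map&lt;Integer, String> cache;
    private final int numKeys;
    private final int opsPerTx = 10;          // 10 (get/put) operations per transaction
    private final double writeProb = 0.10;    // 10% of the operations are puts
    private final Random rnd = new Random();

    TxStresserSketch(Tx tx, Map&lt;Integer, String> cache, int numKeys) {
        this.tx = tx; this.cache = cache; this.numKeys = numKeys;
    }

    @Override public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            tx.begin();
            try {
                for (int i = 0; i &lt; opsPerTx; i++) {
                    int key = rnd.nextInt(numKeys);          // uniform key selection
                    if (rnd.nextDouble() &lt; writeProb) {
                        cache.put(key, "value-" + key);
                    } else {
                        cache.get(key);
                    }
                }
                tx.commit();
            } catch (Exception conflict) {
                tx.rollback();                               // transaction aborted
            }
        }
    }
}
</pre>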
                      <br>
                      As already hinted, our results highlight that,
                      depending on the current workload/number of nodes
                      in the system, it is possible to identify
                      scenarios where the PB scheme significantly
                      outperforms the current 2PC scheme, and vice
                      versa.</div>
                  </blockquote>
                  <div><br>
                  </div>
                  I'm obviously missing something here, because it still
                  seems like you are attempting to compare incomparable
                  things. It's not like comparing apples and oranges (at
                  least they are both fruit), but more like apples and
                  broccoli ;-)</div>
                <div><br>
                  <blockquote type="cite">
                    <div> Our experiments were performed in a cluster of
                      homogeneous 8-core (Xeon @ 2.16 GHz)
                      nodes interconnected via Gigabit Ethernet and
                      running a 64-bit Linux kernel, version
                      2.6.32-21-server. The results were obtained by
                      running 8 parallel Stresser threads per node for
                      30 seconds, and letting the number of nodes vary
                      from 2 to 10. In 2PC, each thread executes
                      transactions consisting of 10 (get/put)
                      operations, with a 10% probability of
                      generating a put operation. With PB, the same kind
                      of transactions is executed on the primary, but
                      the backups execute read-only transactions
                      composed of 10 get operations. This allows
                      comparing the maximum throughput of update
                      transactions provided by the two schemes,
                      without excessively favoring PB by keeping the
                      backups totally idle.<br>
                      <br>
                      The total write-transaction throughput exhibited
                      by the cluster (i.e. not the throughput per node)
                      is shown in the attached plots, which refer to
                      caches containing 1000, 10000 and 100000 keys. As
                      already discussed, the lower the number of keys,
                      the higher the chance of contention and the
                      probability of aborts due to conflicts. In
                      particular, with the 2PC scheme the number of
                      failed transactions steadily increases at high
                      contention, up to 6% (to the best of our
                      understanding, mainly due to distributed
                      deadlocks). With PB, instead, the number of failed
                      transactions due to contention is always 0.<br>
                    </div>
                  </blockquote>
                  <div><br>
                  </div>
                  Maybe you don't mean 2PC, but pessimistic locking? I'm
                  still not too sure. But I am pretty sure it's not 2PC
                  you mean.</div>
                <div><br>
                  <blockquote type="cite">
                    <div><br>
                      Note that, currently, we are assuming that backup
                      nodes do not generate update transactions. In
                      practice this corresponds to assuming the presence
                      of some load-balancing scheme which directs (all
                      and only) update transactions to the primary node,
                      and read-only transactions to the backups.</div>
                  </blockquote>
                  <div><br>
                  </div>
                  Can I say it again? Serialisability?</div>
                <div><br>
                  <blockquote type="cite">
                    <div> If this assumption is violated (i.e. a put
                      operation is generated on a backup node), we
                      simply throw a PassiveReplicationException at the
                      CacheDelegate level. This is probably
                      suboptimal/undesirable in real settings, as update
                      transactions could (at least in principle!) be
                      transparently rerouted to the primary node in an
                      RPC style. Any suggestions on how to implement
                      such a redirection facility in a
                      transparent/non-intrusive manner would of course
                      be highly appreciated! ;-)<br>
                      <br>
                      To conclude, we are currently working on a
                      statistical model that is able to predict the
                      best-suited replication scheme given the current
                      workload/number of machines, as well as on a
                      mechanism to dynamically switch from one
                      replication scheme to the other.<br>
                      <br>
                      <br>
                      We'll keep you posted on our progress!<br>
                      <br>
                      Regards,<br>
                        Sebastiano Peluso, Diego Didona<br>
                      <span>&lt;PCvsPR_WRITE_TX_1000keys.png&gt;</span><span>&lt;PCvsPR_WRITE_TX_10000keys.png&gt;</span><span>&lt;PCvsPR_WRITE_TX_100000keys.png&gt;</span><br>
                    </div>
                  </blockquote>
                </div>
                <br>
                <div>
                  <div style="word-wrap: break-word; font-size: 12px;">
                    <div>
                      <div>---</div>
                      <div>Mark Little</div>
                      <div><a moz-do-not-send="true"
                          href="mailto:mlittle@redhat.com">mlittle@redhat.com</a></div>
                      <div><br class="webkit-block-placeholder">
                      </div>
                      <div>JBoss, by Red Hat</div>
                      <div>Registered Address: Red Hat UK Ltd, Amberley
                        Place, 107-111 Peascod Street, Windsor,
                        Berkshire, SL4 1TE, United Kingdom.</div>
                      <div>Registered in England and Wales under Company
                        Registration No. 3798903. Directors: Michael
                        Cunningham (USA), Charlie Peters (USA), Matt
                        Parsons (USA) and Brendan Lane (Ireland).</div>
                    </div>
                    <div><br>
                    </div>
                  </div>
                </div>
                <br>
              </div>
            </blockquote>
            <br>
            <br>
          </div>
        </blockquote>
      </div>
    </blockquote>
    <br>
    <br>
    <pre class="moz-signature" cols="72">-- 

Paolo Romano, PhD
Senior Researcher
INESC-ID
Rua Alves Redol, 9
1000-059, Lisbon Portugal
Tel. + 351 21 3100300
Fax  + 351 21 3145843
Webpage <a class="moz-txt-link-freetext" href="http://www.gsd.inesc-id.pt/~romanop">http://www.gsd.inesc-id.pt/~romanop</a></pre>
  </body>
</html>