On 16 Dec 2016 12:39, "Tristan Tarrant" <ttarrant(a)redhat.com> wrote:
On 16/12/16 13:12, Emmanuel Bernard wrote:
> On 16 Dec 2016, at 09:48, Tristan Tarrant <ttarrant(a)redhat.com> wrote:
>
> On 16/12/16 09:34, Emmanuel Bernard wrote:
>>> Yes, the above design is what sprung to mind initially. Not sure about
>>> the need of keeping the log in memory, as we would probably need some
>>> form of persistent log for cache shutdown. Since this looks a lot like
>>> the append-log of the Artemis journal, maybe we could use that.
>>
>> Well, when the cache is shut down, don’t we have time to empty the
>> in-memory log?
>
> Cache shutdown should not be deferred because there is a backlog of
> events that haven't been forwarded to Debezium, so we would want to pick
> up from where we were when we restart the cache.
> But you’re willing to wait for the Artemis journal to finish writing? I don’t
> quite see the difference.
I'm thinking about the case where Debezium is temporarily not able to
collect the changes.
+1 That's the crucial concern.
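To make the quoted "pick up from where we were" idea a bit more concrete,
here is a minimal sketch of a change log with acknowledged offsets. Every
name in it (ChangeLog, readUnacknowledged, acknowledge) is made up for
illustration and is not an existing Infinispan or Artemis API; a real
implementation would persist both the events and the acknowledged offset
(e.g. in the Artemis journal) so they survive a cache shutdown.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: not an Infinispan or Artemis API. A change log that
// remembers which events the connector has acknowledged, so a restarted or
// reconnected consumer can resume from the last acknowledged offset.
public class ChangeLog {

    public record ChangeEvent(String key, String value, long offset) {}

    private final List<ChangeEvent> events = new ArrayList<>();
    private long lastAcknowledged = -1; // would be persisted next to the events

    public synchronized long append(String key, String value) {
        long offset = events.size();
        events.add(new ChangeEvent(key, value, offset));
        return offset;
    }

    // The connector reads everything it has not yet acknowledged.
    public synchronized List<ChangeEvent> readUnacknowledged() {
        int from = (int) (lastAcknowledged + 1);
        return new ArrayList<>(events.subList(from, events.size()));
    }

    // Called once the connector has durably handled events up to 'offset'.
    public synchronized void acknowledge(long offset) {
        lastAcknowledged = Math.max(lastAcknowledged, offset);
    }

    public static void main(String[] args) {
        ChangeLog log = new ChangeLog();
        log.append("user:1", "v1");
        log.append("user:2", "v1");
        // Connector goes away before acknowledging; on reconnect it resumes here:
        System.out.println(log.readUnacknowledged()); // both events are replayed
        log.acknowledge(1);
        log.append("user:1", "v2");
        System.out.println(log.readUnacknowledged()); // only the new event
    }
}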
We can have Infinispan attempt to transmit all updates to Debezium on a
best-effort basis, but we can't guarantee to send them all. We can resume
the state replication stream as Randall suggested, providing in that case a
squashed view which might work well to replicate the same state.
Analysis of the stream of updates, though, cannot rely on
seeing *all* events.
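As a sketch of that trade-off (illustration only, not Infinispan or Debezium
code): squashing keeps just the latest value per key, which is enough to
reproduce the same state downstream, but by construction it drops the
intermediate events.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only. A "squashed" view keeps just the latest value per key,
// so replaying it reproduces the same final state, but consumers never see
// the intermediate events.
public class SquashedView {

    public record Event(String key, String value) {}

    public static Map<String, String> squash(List<Event> stream) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (Event e : stream) {
            latest.put(e.key(), e.value()); // later writes overwrite earlier ones
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Event> stream = List.of(
            new Event("user:1", "v1"),
            new Event("user:1", "v2"),   // intermediate update, dropped by squashing
            new Event("user:2", "v1"),
            new Event("user:1", "v3"));

        // The final state is identical to applying the full stream...
        System.out.println(squash(stream)); // {user:1=v3, user:2=v1}
        // ...but an analysis that needed to count every update to user:1
        // could not be built on top of this view.
    }
}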
As Emmanuel seemed to agree previously, for that use case one would need a
different product like Kafka.
What I'm getting at is that if we agree that this stream of events needs
*just* to replicate state, then we can take advantage of that detail
in various areas of the implementation, which opens up significant
performance optimisation opportunities.
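For example (a sketch of the kind of optimisation I mean, not actual
Infinispan code): if the only contract is to replicate state, pending events
can be coalesced per key while the consumer is unreachable, so the backlog
stays bounded by the number of distinct keys rather than by the number of
writes.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a pending-changes buffer that coalesces updates per key.
// If the event stream only has to replicate state, an update to a key that is
// still waiting to be forwarded can simply replace the previous pending value.
public class CoalescingBuffer {

    private final Map<String, String> pending = new LinkedHashMap<>();

    public synchronized void onWrite(String key, String value) {
        pending.put(key, value); // overwrites any not-yet-forwarded value for this key
    }

    // Drain everything that is pending once the consumer is reachable again.
    public synchronized Map<String, String> drain() {
        Map<String, String> batch = new LinkedHashMap<>(pending);
        pending.clear();
        return batch;
    }

    public static void main(String[] args) {
        CoalescingBuffer buffer = new CoalescingBuffer();
        for (int i = 0; i < 1_000; i++) {
            buffer.onWrite("counter", "value-" + i); // 1000 writes, 1 pending entry
        }
        buffer.onWrite("other", "x");
        System.out.println(buffer.drain().size()); // 2, not 1001
    }
}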
Sanne
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat