I have done some reading up on this. I do not have any working prototypes I can link to. However, the very high-level summary is as follows.
* There are several methods for locking in SQL databases to ensure concurrent writes do not happen.
* Transactions by themselves do not guarantee locking.
* In the context of a sync offering, the two best options are:
  1. *Row-level locking* using the `FOR UPDATE` syntax as part of the initial read/select statement. This locks the affected row(s) for the duration of the current transaction, so any other concurrent writes to those rows will block (or fail, depending on configuration) until the transaction completes.
  2. *Optimistic concurrency control*. The basic idea is that your records have a version or timestamp column which you fetch as part of the initial read. Then, when you write the new data, you add a `WHERE version = <expected version>` clause; if another write happened in the meantime, the update affects zero rows and the write can be retried.
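As a rough sketch of option 1 (the table and column names here are hypothetical, and the syntax shown is the Postgres flavor):

```sql
BEGIN;
-- Read and lock the row. Other transactions trying to write this row
-- (or to SELECT ... FOR UPDATE it) will block until we COMMIT or ROLLBACK.
SELECT id, data FROM records WHERE id = 1 FOR UPDATE;
-- While the lock is held, it is safe to compute and write the new value.
UPDATE records SET data = 'merged value' WHERE id = 1;
COMMIT;
```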
Both of these are outlined in the blog post linked above. There are also some additional resources:
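And a minimal sketch of option 2, again with hypothetical table and column names:

```sql
-- Initial read: fetch the data together with its current version.
SELECT data, version FROM records WHERE id = 1;  -- suppose this returns version = 7
-- Conditional write: succeeds only if nobody has written in the meantime.
UPDATE records
   SET data = 'merged value', version = version + 1
 WHERE id = 1 AND version = 7;
-- If the statement reports 0 rows updated, a concurrent write won the race:
-- re-read and retry, or surface the conflict to the application.
```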
* Hacker News comments on the original blog post: https://news.ycombinator.com/item?id=13227078
* https://stackoverflow.com/questions/32427698/control-concurrency-for-read-then-update-then-write-transaction-in-postgres
* https://www.citusdata.com/blog/2018/02/15/when-postgresql-blocks/
* https://tapoueh.org/blog/2018/07/postgresql-concurrency-isolation-and-locking/
Our main difficulty is that all of these solutions involve application-level behaviour. We don't yet know how this will fit with how users would eventually write their resolver logic code.