I wanted to better understand synchronization. There are a number of rules of thumb to consider:
in business: centralize for efficiency, distribute for scale
in databases: Consistency, Availability, Partition tolerance – pick any two (the CAP theorem)
in networks: high bandwidth, low latency, low cost – pick any two
Cost is an interesting factor to consider: it is important in our daily lives, and economics is constrained by the laws of physics. Time and space relate to latency and bandwidth: databases store information over time, while networks communicate information across space. Since I am not trying to develop a physically measurable theory, I will stop discussing time and space here. [1]
I prefer the more familiar, less obscure terminology of bandwidth and latency over partition tolerance and availability, respectively. In place of consistency, I will use the term reliability. Reliability is already implicit in networking: after all, the Shannon–Hartley theorem (aka Shannon's Law) relates bandwidth to reliability. The missing link in our understanding is latency.
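For concreteness, the Shannon–Hartley theorem says a channel of bandwidth B with signal-to-noise ratio S/N can reliably carry at most C = B · log2(1 + S/N) bits per second. Here is a minimal sketch; the channel numbers below are invented for illustration:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    # Shannon-Hartley: C = B * log2(1 + S/N), in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel at 30 dB SNR (a linear ratio of 1000) tops out
# near 30 kbit/s, no matter how clever the coding scheme is.
print(channel_capacity(3_000, 1_000))  # ~29902 bits per second
```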
Notice that databases, networks, and interactive systems are open systems. Whether it serves multiple users directly, or indirectly via real-time events, the system interacts with something outside its control. For the theory of computation to work in the way that it does, computation must be a closed system. While computations can be partial, infinite loops are not allowed to produce output. [2] Interactive, infinite, and open systems are fundamentally different from computation.
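To make that contrast concrete, here is a toy sketch of my own (not a formal definition): the closed computation is a fixed function of its inputs, while the open system depends on events it does not control and is meant to keep running:

```python
def closed_computation(xs: list[int]) -> int:
    # Closed: every input is fixed up front; the result is a pure
    # function of xs, and nothing outside can influence the run.
    return sum(x * x for x in xs)

def open_system() -> None:
    # Open: input arrives from outside the program's control, and the
    # loop is intended to run indefinitely while still producing output.
    while True:
        event = input("event> ")  # supplied by something we do not control
        if event == "quit":
            break
        print(f"handled {event!r}")
```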
To interact with more than one user, we need to synchronize between them. To synchronize events we need to resolve dependencies, which we only know at run-time. [3] Determining dependency means determining the post-dominance frontier of forward flow (i.e., the data flow and control flow of running a program). [4] The computation needed to determine dependency is not linear with respect to evaluation, and the number of dependencies (i.e., the size of the dominance frontier) scales with the number of users. In other words, more bandwidth results in more latency.
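To illustrate what computing a frontier involves, here is a sketch over a small, invented control-flow graph: the textbook iterative dominator computation followed by the definitional frontier test. Post-dominance frontiers would come from running the same code on the reversed graph, from the exit; the graph and node names here are assumptions for illustration only.

```python
def dominators(cfg: dict[str, list[str]], entry: str) -> dict[str, set[str]]:
    # Iterative data-flow fixpoint: Dom(n) = {n} | intersection of Dom(p)
    # over every predecessor p of n, with Dom(entry) = {entry}.
    preds = {n: [p for p in cfg if n in cfg[p]] for n in cfg}
    dom = {n: set(cfg) for n in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            if n == entry or not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

def dominance_frontier(cfg: dict[str, list[str]],
                       dom: dict[str, set[str]]) -> dict[str, set[str]]:
    # DF(n) = { y : n dominates some predecessor of y,
    #           but n does not strictly dominate y }.
    preds = {n: [p for p in cfg if n in cfg[p]] for n in cfg}
    df: dict[str, set[str]] = {n: set() for n in cfg}
    for y in cfg:
        for p in preds[y]:
            for n in dom[p]:
                if not (n in dom[y] and n != y):
                    df[n].add(y)
    return df

# A diamond: a branches to b and c, which join again at d.
cfg = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
df = dominance_frontier(cfg, dominators(cfg, "entry"))
print(df)  # d sits in the frontier of both b and c
```

The join point d is exactly where the two branches must be reconciled; in the synchronization reading above, that is where the waiting (latency) happens.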
Now we have a theory of synchronization.
[1] We can assume that we can adjust the granularity of time and space to our circumstances.
[2] I have written about this before, including in metamath.html.
[3] Dependency can be implicit for a fixed number of users, but that is not really interactive / open.
[4] I sometimes forget the definition of dominance. In case it helps you learn or remember the dominance frontier, think of it as akin to the closure of a transitive function.
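As a toy illustration of that closure idea (my own example, not a definition of the dominance frontier itself): keep applying the relation until nothing new appears, i.e., iterate to a fixpoint.

```python
def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    # Add (a, d) whenever (a, b) and (b, d) are both present, and repeat
    # until a fixpoint is reached, meaning no new pairs appear.
    closure = set(edges)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

print(transitive_closure({("x", "y"), ("y", "z")}))
# includes the derived pair ('x', 'z') alongside the original edges
```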