In distributed computing, there's something called the CAP theorem. In layman's terms:
Given a network partition, you must sacrifice either availability or consistency in a distributed system.
Let's say there's a network failure: half of the computers connected to the system can no longer communicate with the other half. When a new write arrives, the system has two choices: (1) reject the write so that both sides of the partition stay in agreement (choosing consistency), or (2) accept the write and serve whatever state it has (choosing availability), even if that state is out of sync with the other side of the partition.
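To make the two choices concrete, here's a toy sketch in Python. All the names (`Replica`, `CPStore`, `APStore`, the `partitioned` flag) are invented for illustration; a real system would detect partitions via timeouts and quorums, not a boolean.

```python
class Replica:
    """One node's local copy of the data."""
    def __init__(self):
        self.data = {}

class CPStore:
    """Choice (1): prefer consistency. Writes fail during a partition."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.partitioned = False

    def write(self, key, value):
        if self.partitioned:
            # Refuse the write rather than let the two sides diverge.
            raise RuntimeError("unavailable during partition")
        for r in self.replicas:
            r.data[key] = value

class APStore:
    """Choice (2): prefer availability. Writes succeed, sides may diverge."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.partitioned = False

    def write(self, key, value):
        # During a partition, only replicas on our side see the write.
        reachable = self.replicas[:1] if self.partitioned else self.replicas
        for r in reachable:
            r.data[key] = value
```

Note that both stores behave identically when there's no partition; the trade-off only bites when the network fails.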
Blockchains, for example, choose availability over consistency. New transactions aren't guaranteed to be committed, and some nodes might temporarily disagree on the state of the system. But blockchains are eventually consistent: over time, nodes converge on the same history, which is why you have to wait for a number of "confirmations" before treating your transaction as "finalized."
But there are more trade-offs to distributed systems (and databases) than just CAP. There's an extension to CAP, known as PACELC, which says that even in the absence of a network partition, systems must choose between latency and consistency: do you acknowledge a write only after every replica has it, or acknowledge first and propagate later?
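A minimal sketch of that second trade-off. The function names and the 50 ms per-replica delay are assumptions for illustration: a synchronous write replicates everywhere before acknowledging (consistent but slow), while an asynchronous write acknowledges immediately and lets the other replicas catch up later (fast but briefly stale).

```python
REPLICA_DELAY_MS = 50  # assumed network round trip to one replica

def write_sync(replicas, key, value):
    """Consistency first: update every replica, then acknowledge.
    Returns the simulated latency the client would observe."""
    for r in replicas:
        r[key] = value
    return REPLICA_DELAY_MS * len(replicas)

def write_async(replicas, key, value):
    """Latency first: write locally, acknowledge right away."""
    replicas[0][key] = value
    return 0  # the client doesn't wait on the other replicas

def propagate(replicas, key):
    """Background replication: followers eventually catch up."""
    for r in replicas[1:]:
        r[key] = replicas[0][key]
```

Until `propagate` runs, a read against a follower of the async store returns stale data; that window of staleness is exactly what you buy the lower write latency with.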
Blockchains make trade-offs. In the next few posts, I'll explore some of them: where blockchains might excel and where they might be at a disadvantage. Rarely is there one tool that is good for everything.