The Centralization of dYdX

dYdX is a decentralized exchange for tokenized assets, where you can trade tokens like Ethereum and Bitcoin. Recently, the team made a large infrastructure change that makes its decentralized exchange look much more like a traditional one: this is the centralization of dYdX.

First, dYdX will move significant portions of its app off-chain: orders and cancellations will never touch the blockchain. I wrote about this possibility in Crypto Without Blockchains. Orders will still be gossiped across the network, but without any cryptographic or ordering guarantees, that gossip serves little purpose.

Second, dYdX will move to its own blockchain. This follows from the first change: validators cannot run application-specific validation on a generic chain like Ethereum. While still a blockchain, the infrastructure will be highly centralized:

  • It's too inefficient to query the blockchain directly, so dedicated indexers will serve data over HTTP or WebSockets to frontend applications like web or mobile apps (a sketch of this read path follows the list). There are no guarantees that this data matches the historical chain data.
  • The new chain will be proof-of-stake. Since the chain is dedicated to serving dYdX, validator diversity will be low. As a result, there will be more incentive for vertically integrated solutions.
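
To make that read path concrete, here is a minimal sketch of a frontend fetching an order book from an indexer over plain HTTP. The endpoint URL, response shape, and function name are hypothetical stand-ins, not dYdX's actual API. The point is what's absent: nothing in the response can be verified against the chain.

    import json
    from urllib.request import urlopen

    # Hypothetical indexer endpoint -- a stand-in, not dYdX's real API.
    INDEXER_URL = "https://indexer.example.com/v1/orderbook/ETH-USD"

    def fetch_orderbook() -> dict:
        """Fetch market data from the indexer over plain HTTP.

        Note what is missing: no node RPC, no Merkle proof, no signature
        verification. The client simply trusts that the indexer's view
        matches the chain's history.
        """
        with urlopen(INDEXER_URL) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        print(fetch_orderbook())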

Why would dYdX make these changes?

  1. Transaction speed – exchanges built on blockchains cannot compete with centralized exchanges like Coinbase or FTX.
  2. Transaction fees – in the decentralized model, fees are proportional to network security. dYdX can lower fees by becoming more centralized.
  3. Developer/user experience – building applications on top of a blockchain is difficult: data is hard to store and retrieve, and there is little room to optimize. Controlling more of the infrastructure can translate to a better developer and end-user experience.
  4. Regulation – I suspect dYdX is proactively trending towards centralization in anticipation of upcoming regulation. It would be nearly impossible to enforce financial regulations (KYC and AML) on top of an anonymized user base.

I think that dYdX is moving in the right direction. Web3 enables new experiences, but ultimately the infrastructure will have to look a lot more like what we had before.

16 Bell-Curve Opinions on Engineering
ipse se nihil scire id unum sciat
He himself thinks he knows one thing, that he knows nothing

There's a meme format that shows a bell curve – the X-axis depicting expertise or intelligence, and the Y-axis the number of people who share that opinion.

It's used to show when beginners and experts share the same opinion (often the simplest one), even though it goes against common practice.

Here are 16 bell-curve opinions on engineering. Disclaimer: I make no claim on what side of the bell curve I'm on for some of these unpopular opinions.

  1. You should always use Kubernetes and other "correct" infrastructure.
    Beginners/Experts: Don't Use Kubernetes, Yet. Use the simplest abstraction you need for now.
  2. Technical debt is bad and should be avoided at all costs.
    Beginners/Experts: Technical debt can be a good tradeoff between time and effort. It's hard to predict future requirements.
  3. We need to build an internal developer platform that abstracts cloud APIs away from our developers.
    Beginners/Experts: Just use the cloud APIs directly.
  4. First mover advantage is important.
    Beginners/Experts: You can short-circuit the learning curve and avoid costly experiments by copying others.
  5. Don't repeat yourself.
    Beginners/Experts: A little duplication is often better than a little dependency.
  6. Jupyter Notebooks should be avoided. They aren't reproducible and promote bad practices.
    Beginners/Experts: Highly imperative programming can be useful in the experimental stage. Presentation next to code can shorten iteration cycles.
  7. Windows is not a good operating system for developers.
    Beginners/Experts: Windows has a great desktop environment, and WSL is good enough for most things.
  8. You need a database with multiple read and write replicas.
    Beginners/Experts: Sometimes a single SQLite database is enough for your data (see the sketch after this list).
  9. Spreadsheets should be replaced by real software.
    Beginners/Experts: Spreadsheets are often more maintainable, more usable, and more extensible than most software projects.
  10. Single-page applications were a mistake.
    Beginners/Experts: Single-page applications solve real issues: state management is hard, and users expect a higher level of reactivity today. The answer is probably somewhere in the middle.
  11. We need a dedicated configuration language to manage our complex configuration.
    Beginners/Experts: Just write your configuration in the same language as your application.
  12. Every service should be decomposed into micro-services with sharp boundaries.
    Beginners/Experts: Interfaces are fluid: requirements change, knowledge is gained, and dependencies evolve. Start with a monolith.
  13. We should write fully declarative configuration.
    Beginners/Experts: Imperative configuration isn't always bad. Pick and choose what needs strong guarantees.
  14. Optimize everything you can.
    Beginners/Experts: Optimization is fragile. Optionality can be more valuable.
  15. GitHub stars don't mean anything.
    Beginners/Experts: GitHub stars might be a noisy signal, but they are a signal.
  16. You need pair programming, agile, and rigid process frameworks.
    Beginners/Experts: Collaborate freely. Don't over-index on the process.
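
On the SQLite opinion (No. 8), here is a minimal sketch of what "a single database is enough" can look like in practice, using Python's built-in sqlite3 module. The file name and schema are illustrative.

    import sqlite3

    # One file on disk is the whole database: no server, no replicas.
    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
    conn.commit()

    # Reads and writes go through the same single instance.
    for row in conn.execute("SELECT id, name FROM users"):
        print(row)
    conn.close()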

History of Version Control Systems: Part 3
My hatred of CVS means that I see SVN as the most pointless project ever started. — Linus Torvalds, creator of the Linux Kernel and Git

The third generation of VCS was distributed. It's best to describe it through the story of Git.

Larry McVoy had worked on a VCS called Sun WorkShop TeamWare in the 90s. TeamWare offered many of the features later found in Subversion and Perforce, but was built on SCCS. In 1998, McVoy saw the issues with the growing development of the Linux kernel, which was now seven years old and involved thousands of developers.

In 2000, McVoy started a company called BitMover to solve these issues. BitMover published BitKeeper, a proprietary version control system, and offered a community version that was free for open-source developers.

In 2002, the Linux kernel started using BitKeeper as its VCS.

However, in 2005, Andrew Tridgell, a developer at the Open Source Development Labs (OSDL, the precursor to the Linux Foundation) who had earlier created rsync, reverse-engineered the BitKeeper protocol, bypassing the license requirements for proprietary features. In response, BitMover revoked the licenses associated with the OSDL, which effectively removed the free license for the Linux kernel.

Linus Torvalds, the benevolent dictator of the Linux kernel, decided that he would write his own VCS. git was born. git had some notable properties not found in other VCSs.

It cloned the entire history locally. This meant no file locking and quick operations that weren't limited by the network. While it could generate and apply patches, its storage model kept a full snapshot of every version of every file. This made for quick branching and quick checkouts: no complicated patch algebra was needed, and checking out a revision meant just finding the set of SHA-addressed files that correspond to it. git also tracked changesets in a DAG (directed acyclic graph), which made branching and merging more correct, if more complicated.
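
To make the content-addressed storage model concrete, here is a small Python sketch of how git derives an object's ID: it prefixes the file's bytes with a "blob" header and takes the SHA-1 of the result. This mirrors what git hash-object does; the helper function name is mine.

    import hashlib

    def git_blob_sha(content: bytes) -> str:
        # git hashes "blob <size>\0" followed by the raw bytes.
        header = f"blob {len(content)}\0".encode()
        return hashlib.sha1(header + content).hexdigest()

    # "hello\n" yields ce013625030ba8dba906f756967f9e9ca394464a --
    # the same ID git hash-object prints for that content.
    print(git_blob_sha(b"hello\n"))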

Finally, git had the full marketing power of Linus and Linux behind it. It's a powerful sell to have your VCS used by the world's largest open-source project (with free feature updates and bug fixes from the world's best developers).

One of my favorite talks is Linus Torvalds's talk about git at Google in 2007; the recording is worth watching.