Product Velocity of Alternative Cloud Platforms

Jun 1, 2022

Product velocity is the number one indicator of a successful platform. One source of product velocity is a differentiated backbone that creates the opportunity to quickly bolt on existing functionality in a new way. You build a differentiated backbone by holding one primitive constant (the network, the database, metrics, etc.) and optimizing around it.

For example, look at Functions – are they built on the network (Cloudflare Workers), database (Snowflake UDFs), or metrics (Datadog Lambda Extensions)? Again, holding one thing constant allows these products to compete with the underlying layer (AWS Lambda). Yet conceptually, it's the same feature bolted onto different parts of the stack.
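To make that concrete, here's a rough sketch of the same trivial feature bolted onto two of those backbones – a Cloudflare Worker on the network, and an AWS Lambda handler wrapped with Datadog's datadog-lambda-js instrumentation on the metrics side (the Snowflake flavor shows up in the database section below). The file names and the greeting logic are just illustrative:

```typescript
// worker.ts -- the network backbone: a Cloudflare Worker running at the edge.
export default {
  async fetch(request: Request): Promise<Response> {
    const name = new URL(request.url).searchParams.get("name") ?? "world";
    return new Response(`Hello, ${name}!`);
  },
};

// handler.ts -- the metrics backbone: the same feature as an AWS Lambda,
// wrapped with Datadog's datadog-lambda-js instrumentation.
import { datadog, sendDistributionMetric } from "datadog-lambda-js";

export const handler = datadog(async (event: { name?: string }) => {
  sendDistributionMetric("greetings.count", 1); // custom metric, no extra infra
  return { statusCode: 200, body: `Hello, ${event.name ?? "world"}!` };
});
```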

What are the different backbones an alternative cloud provider can hold constant?

The network is the computer (Cloudflare)

An adage from Sun Microsystems that's now trademarked by Cloudflare. Nearly every service needs to interact with the network, so the network isn't just core to the application stack; it's also the security boundary.

Building on the network optimizes for speed and latency.

You can imagine most AWS services being rebuilt at the edge: Object storage (R2), Functions (Workers), Firewalls (DDoS protection, WAF), Load Balancing, etc.
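As a sketch of what "firewall and load balancer at the edge" might look like, here's a toy Worker that filters traffic and splits it across origins. The origin hostnames and the bot rule are placeholders, not a production setup:

```typescript
// Placeholder origins; in practice these would be your application servers.
const ORIGINS = ["https://origin-a.example.com", "https://origin-b.example.com"];

export default {
  async fetch(request: Request): Promise<Response> {
    // Firewall at the edge: reject suspicious traffic before it reaches an origin.
    const ua = request.headers.get("user-agent") ?? "";
    if (ua.includes("BadBot")) {
      return new Response("Forbidden", { status: 403 });
    }

    // Load balancing at the edge: spread traffic across origins.
    const origin = ORIGINS[Math.floor(Math.random() * ORIGINS.length)];
    const url = new URL(request.url);
    return fetch(new Request(origin + url.pathname + url.search, request));
  },
};
```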

The database is the computer (Snowflake, Databricks)

The "network is the computer" became "the database is the computer" after Oracle acquired Sun Microsystems in 2009. The Modern Data Stack thesis is built around the modern data warehouse. However, many data applications only exist because they can assume a Snowflake endpoint.

The thesis is that downstream applications will be rebased onto the cloud data warehouse. Customers will have data sovereignty, running their own analytics stack on top of the warehouse. Applications can be built quickly without worrying about a tricky persistence layer. SaaS applications can make use of data from adjacent SaaS products.

Building around the database means increased data availability and persistence.
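Here's a minimal sketch of what "assuming a Snowflake endpoint" looks like in practice, using the snowflake-sdk npm package: the application pushes its logic down into the warehouse as a JavaScript UDF instead of operating its own persistence and compute. The credentials and the UDF itself are placeholders:

```typescript
import snowflake from "snowflake-sdk";

const connection = snowflake.createConnection({
  account: "my_account",   // placeholder
  username: "my_user",     // placeholder
  password: "my_password", // placeholder
});

connection.connect((err) => {
  if (err) throw err;

  // Define the feature as a JavaScript UDF that runs inside the warehouse --
  // no separate persistence or compute layer to operate.
  connection.execute({
    sqlText: `
      CREATE OR REPLACE FUNCTION greet(name STRING)
      RETURNS STRING LANGUAGE JAVASCRIPT
      AS 'return "Hello, " + NAME + "!";'`,
    complete: (err) => {
      if (err) throw err;
      // Call it like any other SQL function.
      connection.execute({
        sqlText: `SELECT greet('world') AS greeting`,
        complete: (err, _stmt, rows) => {
          if (err) throw err;
          console.log(rows); // [ { GREETING: 'Hello, world!' } ]
        },
      });
    },
  });
});
```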

The observable is the computer (Datadog)

If a tree falls in a forest, and nobody is around to hear it, does it make a sound? Users expect high uptime and reliability, which operators can't deliver unless they know what's happening (especially in an increasingly complex ecosystem). Observability needs will change over time as applications evolve, but instrumenting an application can be difficult.

Datadog builds around the observability agent. Shipping new features can be as simple as collecting a new metric and adding it to the dashboard.
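For example, with the agent already deployed, emitting a new metric from application code is a one-liner. A sketch using the hot-shots DogStatsD client (the metric names and tags are made up):

```typescript
import StatsD from "hot-shots";

// The Datadog agent is already running alongside the app; just point at it.
const client = new StatsD({ host: "localhost", port: 8125 });

function handleCheckout(orderValue: number): void {
  // ...existing business logic...

  // The "new feature": one more datapoint, which then just needs a dashboard widget.
  client.distribution("checkout.order_value", orderValue, ["product:latte"]);
  client.increment("checkout.count");
}

handleCheckout(12.45);
```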

The job is the computer (Salesforce, Stripe, Coinbase)

Some have referred to these as industry clouds, but I think they still have hidden backbones. They might be more domain-specific – a system of record (e.g., CRM), payments (e.g., Stripe or Coinbase), or identity – but they are still foundational platforms upon which additional products can be built.