Reproducibility in Practice

What is reproducibility in software engineering? The antidote to "it works on my machine."

Reproducibility is the confidence that an application will behave similarly in development, test, and production. That confidence makes it easier to track down and fix bugs.

Reproducibility is a confidence interval, not an absolute concept. It forces the engineer to understand the tradeoffs between flexibility and correctness, and to determine under which conditions reproducibility is desired or necessary.

Here are a few examples of reproducibility in practice.

Vendoring

Vendoring is the process of saving the state of all software dependencies. Usually this takes the form of downloading all the dependencies and committing them to source control - for instance, npm's node_modules folder or Go's vendor directory.

Why is vendoring useful?

  • Resolved compile-time or runtime dependencies may differ between environments: a stale package may already exist locally, or a maintainer may have updated the package on the remote registry. By committing a known, tested set of dependencies, other developers can be more confident that the project will build and run on their machines.
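With Go modules, for example, vendoring a tested set of dependencies is a short, repeatable workflow:

$ go mod vendor                  # copy every dependency into ./vendor
$ git add vendor
$ git commit -m "vendor dependencies"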

Reproducible Vendoring

Committing dependencies to the repository is a good step, but how were those dependencies resolved?

  • When it comes time to update, how do we know what transitive dependencies will also need to be updated?
  • How do we trust that a developer did not sneak in malicious code when committing a large number of vendored dependencies?

By having a manifest of checksums and versions for each user-specified dependency, along with a program that can "solve" the transitive dependencies based on that file, we can address both of these problems. The same program can be run in CI, shifting the trust model from code review to the solver binary. The solver can also update dependencies transitively.

Examples: Go's go.mod, npm's package-lock.json.
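To make this concrete, here is a hedged sketch of the Go flavor (the module path and dependency are made up for illustration, and the base64 checksums are elided). go.mod records what the developer asked for; go.sum records a checksum for every module in the resolved transitive graph:

module example.com/myproject

go 1.21

require github.com/some/dependency v1.2.3

And the corresponding go.sum entries:

github.com/some/dependency v1.2.3 h1:…
github.com/some/dependency v1.2.3/go.mod h1:…

Running go mod verify re-checks the cached modules against these checksums, so CI can catch tampering without a human auditing every vendored file.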

Declarative Configuration

Declarative configuration, in contrast to imperative configuration, describes a desired state of software, rather than the explicit commands to create that state.

Like vendoring, the declarative model is more reproducible because the imperative model does not account for the current state of the environment. Has the application already been deployed? Does a folder exist, or does it need to be created? While an imperative script can check all of these conditions up front, the state may drift over time and produce undesirable results. Once you start watching the state continuously, you've arrived at the declarative model.

Like reproducible vendoring, the declarative model is about shifting the burden of reproducibility to an application - in this case, a reconciler or controller that manages the state.
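As a toy illustration of the pattern (not any particular controller framework), here is a sketch in Go that declares "this directory exists" as the desired state and continuously converges toward it:

package main

import (
    "log"
    "os"
    "time"
)

func main() {
    desired := "/tmp/app-data" // desired state: this directory exists

    // the reconciler loop: observe the actual state, converge toward desired
    for range time.Tick(10 * time.Second) {
        if _, err := os.Stat(desired); os.IsNotExist(err) {
            // converge: recreate the directory if something removed it
            if err := os.MkdirAll(desired, 0o755); err != nil {
                log.Printf("converge failed: %v", err)
                continue
            }
            log.Printf("reconciled: recreated %s", desired)
        }
    }
}

An imperative script would create the directory once and exit; the reconciler keeps watching, so drift (someone deleting the directory) is corrected automatically.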

Most importantly, declarative configuration allows for infrastructure as code. It lets you codify the state of the infrastructure, which means it can be reproduced more easily.

Containers

Containers can be used to provide reproducibility in terms of the root filesystem, environment variables, PIDs, and users.

Containers can provide reproducibility in two respects:

  • Runtime reproducibility
  • Build reproducibility

By fixing the rootfs, environment variables, and user in running deployments, we reduce the possibility of an ill-provisioned node, badly behaved sibling processes, and unexpected filesystem state.

In contrast to the previous strategies for reproducibility, containers are about creating a specification bundle that behaves the same on any Linux kernel. Namespaces make sure that the process's view of the world looks the same regardless of the actual state of the world.
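A minimal, Linux-only sketch of that isolation in Go, starting a shell in fresh UTS, PID, and mount namespaces (requires root):

package main

import (
    "os"
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("/bin/sh")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    // give the process its own hostname, PID, and mount namespaces, so its
    // view of the world is independent of the host's state
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

Container runtimes layer the rootfs, environment variables, and user on top of exactly this mechanism.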

You can think of build reproducibility in a similar manner, except it concerns the state of the world when the artifact is built rather than when it is run. Note well: the Dockerfile doesn't provide great reproducibility, in the sense that it still doesn't solve the issue of vendored dependencies or the availability and reproducibility of networked dependencies, but it is a step in the right direction.

Byte-for-byte reproducible builds on the same "environment"

Build a binary, take the checksum, and build it again on a similar machine. Chances are, you won't get the same checksum as you did before. Why not?

Compilers like GCC can capture the build path and embed it in the output; nondeterministic random IDs and timestamps can be injected as well.
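Go's toolchain addresses the build-path problem directly: the -trimpath flag strips local filesystem paths from the compiled binary. A quick way to test a package for byte-for-byte reproducibility:

$ go build -trimpath -o app1 . && sha256sum app1
$ go build -trimpath -o app2 . && sha256sum app2   # checksums match if the build is reproducible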

Build systems like Bazel, Pants, and Buck are all aiming to be reproducible build systems.

There is an effort to make Debian packages reproducible as well.

An Alternative to the Dockerfile
$ docker build -f mockerfile.yaml

In this blog post, I'll show you how to write your own Dockerfile syntax that works out of the box with any existing Docker installation. If you want to see it in action right away, here's a YAML file that is used in place of a Dockerfile.

curl https://raw.githubusercontent.com/r2d4/mockerfile/master/Mockerfile.yaml | DOCKER_BUILDKIT=1 docker build -f - .

The sample code for this post can be found on GitHub.

Background

Buildkit is a tool that can convert code to Docker images. It's already integrated into Docker versions 18.09 and above.

Buildkit works by mapping a human-readable frontend (e.g. Dockerfile) to a set of Ops (ExecOp, CacheOp, SecretOp, CopyOp, SourceOp, etc.), collectively called low-level builders (LLB).

That LLB is then executed by either a runc or containerd worker and produces a Docker image.
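To make that concrete, here is a hedged sketch using Buildkit's client/llb Go package (API per recent versions of github.com/moby/buildkit): a SourceOp and an ExecOp chained into a state, then marshaled into the protobuf DAG a worker solves.

package frontend

import (
    "context"

    "github.com/moby/buildkit/client/llb"
)

// exampleLLB builds a two-op DAG: a SourceOp (base image) and an ExecOp.
func exampleLLB(ctx context.Context) (*llb.Definition, error) {
    st := llb.Image("ubuntu:16.04").          // SourceOp: pull the base image
        Run(llb.Shlex("apt-get update")). // ExecOp: run a command on top
        Root()                            // back to the resulting filesystem state
    return st.Marshal(ctx) // the serialized DAG handed to the worker
}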

Design

Our demo frontend is going to be called Mockerfile. It's going to be YAML-based syntactic sugar for building Ubuntu-based images. It will contain two keys: package, which is some automation around apt-get, and external, which will fetch external dependencies concurrently.

#syntax=r2d4/mocker
apiVersion: v1alpha1
images:
- name: demo
  from: ubuntu:16.04
  package:
    repo: 
    - deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8
    - deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial edge
    gpg: 
    - https://bazel.build/bazel-release.pub.gpg
    - https://download.docker.com/linux/ubuntu/gpg
    install:
    - bazel
    - python-dev
    - ca-certificates
    - curl
    - build-essential
    - git
    - gcc
    - python-setuptools
    - lsb-release
    - software-properties-common
    - docker-ce=17.12.0~ce-0~ubuntu
  external:
  - src: https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
    dst: /usr/local/bin/kubectl

  - src: https://github.com/kubernetes-sigs/kustomize/releases/download/v1.0.8/kustomize_1.0.8_linux_amd64
    dst: /usr/local/bin/kustomize
    sha256: b5066f7250beb023a3eb7511c5699be4dbff57637ac4a78ce63bde6e66c26ac4

  - src: https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-linux-amd64.tar.gz
    dst: /tmp/helm
    install:
    - install /tmp/helm/linux-amd64/helm /usr/local/bin/helm

  - src: https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-217.0.0-linux-x86_64.tar.gz
    dst: /tmp

Code Walk-through

High-level steps:

  1. Write a conversion function from your configuration file format to LLB
  2. Write a build function that handles some extra tasks such as mounting the configuration file, secrets, or context.
  3. Use that build function in the frontend gRPC gateway
  4. Publish as a docker image
  5. Add the #syntax=yourregistry/yourimage directive to the top of your config file and set DOCKER_BUILDKIT=1 to build with any Docker installation.

Writing the Conversion Function

Here is my conversion function for Mockerfile. It takes my configuration struct and returns a DAG in the form of an llb.State.
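The full version handles apt repos, GPG keys, and external files; a hedged, abbreviated sketch of its shape (Config here is a stripped-down stand-in for the real struct, and shf mirrors the sh -c helper that appears in the checksum snippet below):

package frontend

import (
    "fmt"
    "strings"

    "github.com/moby/buildkit/client/llb"
)

// Config is an abbreviated stand-in for the real Mockerfile struct.
type Config struct {
    From    string
    Install []string
}

// shf wraps a formatted command in /bin/sh -c.
func shf(cmd string, v ...interface{}) llb.RunOption {
    return llb.Shlexf("/bin/sh -c \"%s\"", fmt.Sprintf(cmd, v...))
}

// ToLLB converts the parsed config into an LLB state.
func ToLLB(c *Config) llb.State {
    s := llb.Image(c.From) // like FROM: the root of this path of the DAG
    s = s.Run(shf("apt-get update && apt-get install -y %s",
        strings.Join(c.Install, " "))).Root()
    return s
}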

Some interesting observations:

  • You can start as many different concurrent paths as you want with llb.Image (Similar to a FROM instruction), but those paths must be merged into a final image.
  • Merging is done with a copy helper function, which takes two llb.State values, mounts src into dst, and copies the file over, producing a single llb.State. (Similar to a COPY --from instruction in a multistage build.)

The external files are downloaded in separate Alpine images, and the copy helper then moves them into the final image. A small script verifies the checksums of the downloaded binaries: s = s.Run(shf("echo \"%s %s\" | sha256sum -c -", e.Sha256, downloadDst)).Root(). If the checksum does not match, the command fails and the image build stops.
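A hedged sketch of what such a copy helper can look like, using an ExecOp with two mounts to merge two states:

// copyFile mounts src read-only and dest writable inside a helper image,
// copies the file across, and returns the new dest state - merging two
// paths of the DAG into one.
func copyFile(src llb.State, srcPath string, dest llb.State, destPath string) llb.State {
    cp := llb.Image("alpine:latest").
        Run(llb.Shlexf("cp -a /src%s /dest%s", srcPath, destPath))
    cp.AddMount("/src", src, llb.Readonly)
    return cp.AddMount("/dest", dest)
}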

Writing the Build Function

Steps of the build function

  1. Get the Mockerfile/Dockerfile config and build context
  2. Convert config to LLB
  3. Solve the LLB
  4. Package the image and metadata

The configuration file itself must be mounted into the build container, for which we use llb.Local. You can see this in action here. Mounting a build context would be done in a similar way.
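Putting the steps together, a hedged sketch of the build function (readMockerfile is a hypothetical stand-in for the config-mounting logic just described; client is github.com/moby/buildkit/frontend/gateway/client):

func Build(ctx context.Context, c client.Client) (*client.Result, error) {
    cfg, err := readMockerfile(ctx, c) // 1. read the config via an llb.Local mount
    if err != nil {
        return nil, err
    }
    st := ToLLB(cfg)            // 2. convert config to LLB
    def, err := st.Marshal(ctx) // serialize the DAG
    if err != nil {
        return nil, err
    }
    // 3. solve the LLB; 4. the Result carries the image contents and metadata
    return c.Solve(ctx, client.SolveRequest{Definition: def.ToPB()})
}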

Creating the gRPC gateway

We reuse the gRPC client here. As long as your build function fits the interface type BuildFunc func(context.Context, Client) (*Result, error), things will work as expected.
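Wiring the build function into the gateway is only a few lines; a sketch of a frontend's main, assuming the buildkit gateway packages:

package main

import (
    "log"

    "github.com/moby/buildkit/frontend/gateway/grpcclient"
    "github.com/moby/buildkit/util/appcontext"
)

func main() {
    // serve Build over the gateway's gRPC protocol; buildkitd invokes this
    // binary as the frontend when it sees the #syntax directive
    if err := grpcclient.RunFromEnvironment(appcontext.Context(), Build); err != nil {
        log.Fatalf("error: %v", err)
    }
}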

Publish the image

Our image is quite simple, using the built binary as the entrypoint. The binary runs the gRPC gateway we created in the last step. Here is an example.
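A sketch of what that image definition might look like (the Go version and paths here are arbitrary): build the static frontend binary, then set it as the entrypoint of a small final image.

FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /usr/local/bin/mocker .

FROM alpine:latest
COPY --from=build /usr/local/bin/mocker /usr/local/bin/mocker
ENTRYPOINT ["/usr/local/bin/mocker"]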

Using it

  • Add # syntax=yourregistry/yourimage to the top of your configuration file. Buildkit looks for that, and will pull and use that image as the solver.
  • Add DOCKER_BUILDKIT=1 to your docker build command to enable the buildkit backend.
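Putting both together for Mockerfile:

$ DOCKER_BUILDKIT=1 docker build -f Mockerfile.yaml .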