We deleted our Dockerfiles: a better, faster way to build container images


Earlier this year, we shared some reflections on the limitations of Dockerfiles and BuildKit (Docker’s internal build engine). Over the last six months we’ve talked to hundreds of teams about their CI/CD pipelines, and we consistently see that docker build is slowing teams down.

We considered the problems and wondered whether the strategies we used to build the fastest CI/CD platform could be leveraged to build container images as well. So we gave it a try: we converted our Dockerfiles to RWX run definitions to see if we could extract container images natively from our own product.

And it worked! Two weeks ago, we deleted the Dockerfile for our application, and we deleted the step in our CI pipelines that previously ran docker build:

commit acae90a991fb4b2ecdfcf5c754ebe7169af57c33
Author: Dan Manges <[email protected]>
Date: Fri Nov 7 18:28:36 2025 -0500
Remove the Dockerfile (#6330)
D .dockerignore
M .rwx/build-image-rwx.yml
D Dockerfile

Our container image builds got faster and the configuration became simpler.

In this post, we’re excited to share how we build container images, why it’s faster than building with Dockerfiles and BuildKit, how it has improved our developer experience, and how you can start deleting your Dockerfiles too.

How we build container images on RWX

RWX is a CI/CD platform built around the idea of executing builds as a graph of cacheable tasks. Each step in a build pipeline is represented by a task that runs atomically, rather than a series of stateful steps running as a single job tied to a single VM.

We save the filesystem changes from every task to use as input into subsequent tasks. This technique enabled us to package up those filesystem changes as layers in a container image.
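Conceptually, each task’s filesystem diff maps onto a content-addressed tar layer, the same shape OCI image layers take. Here is a minimal sketch of that idea (`files_to_layer` is a hypothetical helper for illustration, not RWX’s actual implementation):

```python
import hashlib
import io
import tarfile

def files_to_layer(changed: dict[str, bytes]) -> tuple[bytes, str]:
    """Pack a set of changed files into an OCI-style tar layer and
    return (layer_bytes, content digest)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        # Sort paths so identical diffs always produce identical bytes
        for path, content in sorted(changed.items()):
            info = tarfile.TarInfo(name=path)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    layer = buf.getvalue()
    return layer, "sha256:" + hashlib.sha256(layer).hexdigest()
```

Because the layer is addressed by the digest of its contents, two tasks that produce the same filesystem diff share the same stored layer.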

In effect, we were already producing container images from every single task in an RWX run definition. And it was exceptionally fast. The thought of building a container image for every single step in a CI pipeline may sound like it’d be far too slow, but we’ve optimized it to happen very quickly.

Now, a Dockerfile implemented something like this:

Dockerfile
FROM node:24.11.0-trixie-slim
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .
USER node
CMD ["node", "server.js"]

Can be converted to an RWX definition that looks like this:

.rwx/image.yml
base:
  image: node:24.11.0-trixie-slim
  config: none
tasks:
  - key: code
    call: git/clone 1.9.0
    with:
      repository: https://github.com/rwx-cloud/rwx-image-example.git
      ref: ${{ init.commit-sha }}
  - key: npm-install
    use: code
    run: npm ci --omit=dev
    filter:
      - package.json
      - package-lock.json
  - key: image
    use: npm-install
    run: |
      echo "node" | tee $RWX_IMAGE/user
      echo "node server.js" | tee $RWX_IMAGE/command
    env:
      NODE_ENV: production

Docker pull

To prove this out, we implemented endpoints in our Cloud backend that correspond to the distribution registry endpoints. This enabled us to pull container images directly from our Cloud backend, for any step in an entire CI pipeline.

Although you can pull directly via docker pull, we shipped an rwx image pull command in the CLI to make it even easier.
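The registry endpoints in question are the standard OCI distribution pull routes, which is why an ordinary docker pull works against them. A sketch of the two URL shapes involved (the host and repository names here are made up for illustration):

```python
def manifest_url(registry: str, repository: str, reference: str) -> str:
    # OCI distribution pull route: GET /v2/<name>/manifests/<reference>
    return f"https://{registry}/v2/{repository}/manifests/{reference}"

def blob_url(registry: str, repository: str, digest: str) -> str:
    # Config and layer blobs: GET /v2/<name>/blobs/<digest>
    return f"https://{registry}/v2/{repository}/blobs/{digest}"
```

A pull first fetches the manifest for a tag, then fetches each layer blob the manifest lists by digest.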

Why it’s faster than using BuildKit

Distributed, right-sized compute

Docker builds run on a single machine, one step after another. Even when you use multi-stage builds, each stage competes for the same CPU, memory, disk, and network. And if you need to build multiple variants (different architectures, library versions, etc.) you typically end up running the whole build repeatedly.

RWX takes a different approach. By default, tasks in an RWX run definition (which correspond to individual steps in a Dockerfile) are distributed across multiple machines.

This way, each task can have its own right-sized compute: a task that needs 16 CPUs can claim them, while the next task can run on a smaller machine.

By running on distributed compute by default, we avoid relying on a single build machine that, when under-provisioned, ends up over-utilized or queueing builds.

.rwx/example.yml
tasks:
  # runs on a 2 cpu machine
  - key: apt
    run: apt-get update && apt-get install -y build-essential && apt-get clean
  # runs in parallel with apt on a different 2 cpu machine
  - key: code
    call: git/clone 1.9.0
  # runs on a 16 cpu machine
  - key: bundle
    use: [apt, code]
    agent:
      cpus: 16
    run: bundle install -j16
    filter: [Gemfile, Gemfile.lock]
  # back down to 2 cpu
  - key: precompile
    use: bundle
    run: bundle exec rails assets:precompile

Cache is king

With a Dockerfile, once you change any layer, you force a rebuild of every layer thereafter. Straight from the Docker documentation:

And that's the Docker build cache in a nutshell. Once a layer changes, then all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.

RWX container builds use content-based caching with filtering, which enables having a cache hit even after a cache miss.

Rather than having to carefully order the COPY statements in a Dockerfile to maintain caching, we can instead copy our whole repository into the image, and then filter subsequent command executions.

Here is a common example of a Dockerfile that would have suboptimal caching:

Dockerfile
FROM ruby:3.4
RUN apt-get update && apt-get install -y build-essential nodejs && apt-get clean
# copy the Gemfiles first for caching
COPY Gemfile Gemfile.lock ./
RUN bundle install
# copy the npm files
# unfortunately, this will cache miss if bundle install is a cache miss
COPY package.json package-lock.json ./
RUN npm install
COPY frontend .
RUN npm run build
COPY . .
RUN bundle exec rails assets:precompile

And here is the same image definition converted to RWX, which will always cache as optimally as possible.

.rwx/example.yml
base:
  image: ruby:3.4
  config: none
tasks:
  # start with the entire code repository
  - key: code
    call: git/clone 1.9.0
  - key: apt
    run: apt-get update && apt-get install -y build-essential nodejs && apt-get clean
  # use the entire `code`, but filter down to just the Gemfiles
  - key: bundle
    use: code
    run: bundle install
    filter: [Gemfile, Gemfile.lock]
  # runs in parallel with `bundle`
  # can cache hit even when `bundle` misses
  - key: node-modules
    use: [apt, code]
    run: npm install
    filter: [package.json, package-lock.json]
  # this doesn't need to be rebuilt now if backend code changes!
  - key: npm-build
    use: [code, node-modules]
    run: npm run build
    filter:
      - package.json
      - package-lock.json
      - frontend
  - key: assets-precompile
    use: [code, bundle, npm-build]
    run: bundle exec rails assets:precompile

The cache key on RWX is determined by the command and the contents of the source files. Importantly, any files not specified in the filter will not be present on disk. This sandboxing approach ensures that cache hits will never be a false positive.
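As a rough sketch of that scheme (a hypothetical helper, assuming the key is a SHA-256 over the command plus the filtered file contents; RWX’s real implementation may differ):

```python
import hashlib

def cache_key(command: str, filtered_files: dict[str, bytes]) -> str:
    """Content-based key: the command plus the contents of only the
    files the task's filter allows it to see. Files outside the
    filter never enter the hash, so edits to them cannot cause a
    miss -- and since they aren't on disk either, a hit can never be
    a false positive."""
    h = hashlib.sha256()
    h.update(command.encode())
    for path in sorted(filtered_files):
        h.update(path.encode())
        h.update(filtered_files[path])
    return h.hexdigest()
```

So `cache_key("bundle install", gemfiles)` stays stable across unrelated changes to application code, and changes only when the command or a Gemfile changes.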

Automatic caching from full repository

We also no longer need to agonize over properly configuring additional cache-control levers like --cache-from and --cache-to. We frequently work directly with engineering organizations of all sizes to help them optimize their CI, and a shocking percentage of the companies we’ve worked with either have their Docker cache misconfigured or haven’t configured one at all.

Many pipelines will also do things like pull images before building, which can help a little bit, but in the case where there is a legitimate cache miss, it’s a waste of time to pull an image that ultimately will not be used.

RWX resolves cache hits seamlessly and automatically in real time from the contents of the entire container repository; no configuration required.

Network is fast, compression is slow

Docker compresses every layer before it is uploaded and decompresses every layer when it is downloaded. This was a great decision in 2013.

But in 2025, cloud networks are substantially faster than compression algorithms. It’s faster to upload 1 gigabyte of data than it is to gzip 1 gigabyte of data.
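You can get a feel for the compression half of that tradeoff on your own hardware. This micro-benchmark simply times gzip on incompressible random data (real layers vary in compressibility, so treat the number as a ballpark):

```python
import gzip
import os
import time

size = 32 * 1024 * 1024   # 32 MiB; increase for a steadier measurement
data = os.urandom(size)   # random bytes compress poorly, like binaries

start = time.perf_counter()
compressed = gzip.compress(data, compresslevel=6)  # 6 is gzip's usual default
elapsed = time.perf_counter() - start

print(f"gzip throughput: {size / elapsed / 1e6:.0f} MB/s, "
      f"ratio {len(compressed) / size:.2f}")
```

Compare the printed MB/s figure against your uplink to a cloud registry; on modern cloud networks the wire is usually the faster of the two.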

Compression is also a bad tradeoff because storage is cheap and compute is expensive.

In RWX, we transmit and store all of our layers and cache uncompressed.

Why we love our new developer experience

Context matters

With traditional Dockerfiles, the entire project has to be sent to the builder as a build context. For engineering teams with very large code repositories, this can be very slow.

This means that even when leveraging faster remote build machines, a fair amount of time can be spent uploading the repository.

Instead of pushing contents, it’s much faster to use git clone on the build machine to pull the code into the image.

While the git clone approach could be done with BuildKit, it’s not viable because of the caching mechanics. Individual files need to be added with a COPY before the entire repo is put into the image. Otherwise, the entire build will cache miss. Since filtering on RWX alleviates this concern, you can improve performance by cloning straight into the image rather than pushing build context.

First-class observability

Successful steps in a Docker build don’t output logs to the CLI by default, so interesting logs from a RUN command that would explain a downstream problem are easily missed.

In RWX, the full logs for every task are preserved and easily accessible regardless of success or failure. We can leave ourselves rich annotations in our logs to understand what’s happening.

And every step in our build comes with its own diagnostics and explorable filesystem.

Faster container builds on GitHub Actions

Although we recommend running all of your CI on RWX, you can build container images on RWX directly from GitHub Actions by using the rwx-cloud/build-push-action.

.github/workflows/rwx.yml
name: Build on RWX and Push to Docker Hub
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - uses: rwx-cloud/build-push-action@v1
        with:
          access-token: ${{ secrets.RWX_ACCESS_TOKEN }}
          file: .rwx/build.yml
          target: app
          push-to: docker.io/myusername/myapp:latest

What’s next?

Deleting our Dockerfile may have started as an experiment, but we’ve become convinced that RWX is now the best way to build container images.

We get the benefits of producing container images without slowing down our CI and CD waiting for them to build. Ultimately, we ship faster while still generating reproducible and portable build artifacts.

You can experiment with building your own container images on RWX today.

And we’d love to talk more with you!
