Earlier this year, we shared some reflections on the limitations of Dockerfiles and BuildKit (Docker’s internal build engine). Over the last six months, we’ve talked to hundreds of teams about their CI/CD pipelines, and we consistently see that docker build is slowing teams down.
We considered the problems and wondered if the strategies we used to build the fastest CI/CD platform could be leveraged to build container images as well. So we gave it a try - converting our Dockerfiles to RWX run definitions and seeing if we could extract container images natively from our own product.
And it worked! Two weeks ago, we deleted the Dockerfile for our application, and we deleted the step in our CI pipelines that previously ran docker build:
```
commit acae90a991fb4b2ecdfcf5c754ebe7169af57c33
Author: Dan Manges <[email protected]>
Date:   Fri Nov 7 18:28:36 2025 -0500

    Remove the Dockerfile (#6330)

D       .dockerignore
M       .rwx/build-image-rwx.yml
D       Dockerfile
```
Our container image builds got faster and the configuration became simpler.
In this post, we’re excited to share how we build container images, why it’s faster than building with Dockerfiles and BuildKit, how it has improved our developer experience, and how you can start deleting your Dockerfiles too.
How we build container images on RWX
RWX is a CI/CD platform built around the idea of executing builds as a graph of cacheable tasks. Each step in a build pipeline is represented by a task that runs atomically, rather than a series of stateful steps running as a single job tied to a single VM.
We save the filesystem changes from every task to use as input into subsequent tasks. This technique enabled us to package up those filesystem changes as layers in a container image.
In effect, we were already producing container images from every single task in an RWX run definition. And it was exceptionally fast. The thought of building a container image for every single step in a CI pipeline may sound like it’d be far too slow, but we’ve optimized it to happen very quickly.
Now, a Dockerfile implemented something like this:
```dockerfile
FROM node:24.11.0-trixie-slim
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .
USER node
CMD ["node", "server.js"]
```
Can be converted to an RWX definition that looks like this:
```yaml
base:
  image: node:24.11.0-trixie-slim
  config: none

tasks:
  - key: code
    call: git/clone 1.9.0
    with:
      repository: https://github.com/rwx-cloud/rwx-image-example.git
      ref: ${{ init.commit-sha }}

  - key: npm-install
    use: code
    run: npm ci --omit=dev
    filter:
      - package.json
      - package-lock.json

  - key: image
    use: npm-install
    run: |
      echo "node" | tee $RWX_IMAGE/user
      echo "node server.js" | tee $RWX_IMAGE/command
    env:
      NODE_ENV: production
```
Docker pull
To prove this out, we implemented endpoints in our Cloud backend that correspond to the distribution registry API endpoints. This enabled us to pull container images directly from our Cloud backend for any task in an entire CI pipeline.
Although you can pull directly via docker pull, we shipped an rwx image pull command in the CLI to make it even easier.
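For example, pulling a task’s image directly might look something like this (the registry host and image reference below are hypothetical placeholders, not real RWX URLs):

```sh
# pull the image produced by any task in a run, straight from the
# registry endpoints (host and reference are hypothetical placeholders)
docker pull cloud.rwx.example/my-org/my-app:image-task
```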
Why it’s faster than using BuildKit
Distributed, right-sized compute
Docker builds run on a single machine, one step after another. Even when you use multi-stage builds, each stage competes for the same CPU, memory, disk, and network. And if you need to build multiple variants (different architectures, library versions, etc.) you typically end up running the whole build repeatedly.
RWX takes a different approach. By default, tasks in an RWX run definition (which correspond to individual steps in a Dockerfile) are distributed across multiple machines.
This way, each task can have its own right-sized compute: a task that needs 16 CPUs can claim them, while the next task can run on a smaller machine.
By running on distributed compute by default, we avoid the under-provisioned build machine problem, where a single shared builder ends up over-utilized and builds queue behind one another.
```yaml
tasks:
  # runs on a 2 cpu machine
  - key: apt
    run: apt-get update && apt-get install -y build-essential && apt-get clean

  # runs in parallel with apt on a different 2 cpu machine
  - key: code
    call: git/clone 1.9.0

  # runs on a 16 cpu machine
  - key: bundle
    use: [apt, code]
    agent:
      cpus: 16
    run: bundle install -j16
    filter: [Gemfile, Gemfile.lock]

  # back down to 2 cpu
  - key: precompile
    use: bundle
    run: bundle exec rails assets:precompile
```
Cache is king
With a Dockerfile, once you change any layer, you force a rebuild of every layer thereafter. Straight from the Docker documentation:
> And that's the Docker build cache in a nutshell. Once a layer changes, then all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.
RWX container builds use content-based caching with filtering, which makes it possible to get a cache hit even after an upstream cache miss.
Rather than having to carefully order the COPY statements in a Dockerfile to maintain caching, we can instead copy our whole repository into the image, and then filter subsequent command executions.
Here is a common example of a Dockerfile that would have suboptimal caching:
```dockerfile
FROM ruby:3.4
RUN apt-get update && apt-get install -y build-essential nodejs && apt-get clean

# copy the Gemfiles first for caching
COPY Gemfile Gemfile.lock ./
RUN bundle install

# copy the npm files
# unfortunately, this will cache miss if bundle install is a cache miss
COPY package.json package-lock.json ./
RUN npm install

COPY frontend .
RUN npm run build

COPY . .
RUN bundle exec rails assets:precompile
```
And here is the same image definition converted to RWX, which will always cache as optimally as possible.
```yaml
base:
  image: ruby:3.4
  config: none

tasks:
  # start with the entire code repository
  - key: code
    call: git/clone 1.9.0

  - key: apt
    run: apt-get update && apt-get install -y build-essential nodejs && apt-get clean

  # use the entire `code`, but filter down to just the Gemfiles
  - key: bundle
    use: code
    run: bundle install
    filter: [Gemfile, Gemfile.lock]

  # runs in parallel with `bundle`
  # can cache hit even when `bundle` misses
  - key: node-modules
    use: [apt, code]
    run: npm install
    filter: [package.json, package-lock.json]

  # this doesn't need to be rebuilt now if backend code changes!
  - key: npm-build
    use: [code, node-modules]
    run: npm run build
    filter:
      - package.json
      - package-lock.json
      - frontend

  - key: assets-precompile
    use: [code, bundle, npm-build]
    run: bundle exec rails assets:precompile
```
The cache key on RWX is determined by the command and the contents of the source files. Importantly, any files not specified in the filter will not be present on disk. This sandboxing approach ensures that cache hits will never be a false positive.
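As a rough mental model (an illustration, not RWX’s actual implementation), the key behaves like a digest over the command plus the contents of the filtered files:

```sh
# illustration only: this key changes when, and only when,
# the command or the filtered input files change
{ echo "bundle install"; cat Gemfile Gemfile.lock; } | sha256sum
```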
Automatic caching from full repository
We also just don’t need to agonize over properly configuring additional cache control levers like --cache-from and --cache-to.
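For reference, getting a registry-backed BuildKit cache wired up correctly looks something like this (the image references are hypothetical):

```sh
# a typical registry-backed BuildKit cache setup; mode=max exports
# all intermediate layers rather than only the final stage's
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  -t registry.example.com/myapp:latest .
```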
We frequently work directly with engineering organizations of all sizes to help them optimize their CI, and a shocking percentage of the companies we’ve worked with either have their Docker cache misconfigured or haven’t configured one at all.
Many pipelines will also do things like pull images before building, which can help a little bit, but in the case where there is a legitimate cache miss, it’s a waste of time to pull an image that ultimately will not be used.
RWX resolves cache hits seamlessly and automatically in real time from the contents of the entire container repository; no configuration required.
Network is fast, compression is slow
Docker compresses every layer before it is uploaded and decompresses every layer when it is downloaded. This was a great decision in 2013.
But in 2025, cloud networks are substantially faster than compression algorithms. It’s faster to upload 1 gigabyte of data than it is to gzip 1 gigabyte of data.
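You can get a rough feel for this on your own hardware (numbers vary widely by machine and network, so treat this as a sketch, not a benchmark):

```sh
# generate 1 GiB of incompressible data, then time gzipping it;
# compare the elapsed time against how long your network takes
# to upload the same 1 GiB
dd if=/dev/urandom of=/tmp/layer.bin bs=1M count=1024
time gzip -c /tmp/layer.bin > /dev/null
```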
Compression is also a bad tradeoff because storage is cheap and compute is expensive.
In RWX, we transmit and store all of our layers and cache uncompressed.
Why we love our new developer experience
Context matters
With traditional Dockerfiles, the entire project has to be sent to the builder as a build context. For engineering teams with very large code repositories, this can be very slow.
This means that even when leveraging faster remote build machines, a fair amount of time can be spent uploading the repository.
Instead of pushing contents, it’s much faster to use git clone on the build machine to pull the code into the image.
While the git clone approach could be done with BuildKit, it’s not viable because of the caching mechanics. Individual files need to be added with a COPY before the entire repo is put into the image. Otherwise, the entire build will cache miss.
Since filtering on RWX alleviates this concern, you can improve performance by cloning straight into the image rather than pushing build context.
First-class observability
Successful steps in a Docker build don’t output logs to the CLI by default, so output from a RUN command that indicates the cause of a downstream problem is easily missed.
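BuildKit can be told to print every step’s output, but it’s an opt-in flag you have to remember per invocation, and the output isn’t retained anywhere afterwards:

```sh
# opt in to full, uncollapsed step output for a single docker build
docker build --progress=plain .
```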
In RWX, the full logs for every task are preserved and easily accessible regardless of success or failure. We can leave ourselves rich annotations in our logs to understand what’s happening.
And every step in our build comes with its own diagnostics and explorable filesystem.
Faster container builds on GitHub Actions
Although we recommend running all of your CI on RWX, you can build container images on RWX directly from GitHub Actions by using the rwx-cloud/build-push-action.
```yaml
name: Build on RWX and Push to Docker Hub

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - uses: rwx-cloud/build-push-action@v1
        with:
          access-token: ${{ secrets.RWX_ACCESS_TOKEN }}
          file: .rwx/build.yml
          target: app
          push-to: docker.io/myusername/myapp:latest
```
What’s next?
Deleting our Dockerfile may have started as an experiment, but we’ve become convinced that RWX is now the best way to build container images.
We get the benefits of producing container images without slowing down our CI and CD waiting for them to build. Ultimately, we ship faster while still generating reproducible and portable build artifacts.
You can experiment with building your own container images on RWX today.
And we’d love to talk more with you!
- We’ll be at AWS re:Invent Booth 1958 from December 1-5. If you’re around, please stop by!
- Say hello in the RWX Discord
- Email co-founder Dan Manges at [email protected]