Why (and How) I Built My Own Nx Remote Cache Server
I really like Nx — the monorepo tooling is solid, and Nx Cloud's free tier is actually pretty generous. But for my current project it's still not enough, and the project isn't profitable enough yet to justify paying for more cache minutes or storage.
I checked out some existing custom remote cache solutions, but almost every one requires you to bring your own S3 bucket, GCS, or some other cloud storage. That means extra infrastructure, credentials to manage, potential costs... just more moving parts I didn't want to deal with right now.
Then I noticed Nx publishes a clean OpenAPI spec for their remote cache API. It looked straightforward enough — I figured I could implement a basic server backed by the local filesystem in a day or two at most, without getting pulled too far away from my actual work.
So that's what I ended up doing: nx-caching-server (also published on Docker Hub as enxtur/nx-caching-server).
Why Go instead of Node/TS?
My main stack is Node + TypeScript, but the API spec felt perfect for Go: clean structs, straightforward HTTP handling, JSON without boilerplate. Plus I wanted to brush up on my Go skills anyway — it's been a while, and I didn't want them to get too rusty.
The result is dead simple: give it a folder to store cache artifacts, and that's it. No Redis, no buckets, no databases, no extra third-party services. Exactly the zero-pain setup I was after.
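To give a sense of how little is involved, here's a minimal sketch of the idea: a filesystem-backed handler that stores and serves artifacts keyed by the task hash. This is not the actual nx-caching-server code; the endpoint paths, status codes, the CACHE_DIR variable, and the port are illustrative assumptions, auth is omitted, and it relies on Go 1.22+ method-aware routing. Check the repo and Nx's published OpenAPI spec for the real contract.

```go
// Illustrative sketch only — not the nx-caching-server source.
// Endpoint paths, status codes, CACHE_DIR, and the port are assumptions;
// see the repo and Nx's OpenAPI spec for the actual contract.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func main() {
	dir := os.Getenv("CACHE_DIR") // hypothetical: directory where artifacts are stored
	if dir == "" {
		dir = "./cache"
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}

	mux := http.NewServeMux()

	// Upload an artifact for a given hash (Go 1.22+ pattern routing).
	mux.HandleFunc("PUT /v1/cache/{hash}", func(w http.ResponseWriter, r *http.Request) {
		// filepath.Base strips any path separators from the hash.
		path := filepath.Join(dir, filepath.Base(r.PathValue("hash")))
		if _, err := os.Stat(path); err == nil {
			// Cache entries are immutable; refuse to overwrite an existing one.
			http.Error(w, "already exists", http.StatusConflict)
			return
		}
		f, err := os.Create(path)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer f.Close()
		if _, err := io.Copy(f, r.Body); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})

	// Download an artifact by hash; a missing file simply comes back as 404 (cache miss).
	mux.HandleFunc("GET /v1/cache/{hash}", func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, filepath.Join(dir, filepath.Base(r.PathValue("hash"))))
	})

	log.Fatal(http.ListenAndServe(":3000", mux))
}
```

Because everything is keyed by the task hash, the folder itself is the entire store: no index, no metadata database, just flat files.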
I've already wired it into my project's CI/CD pipeline — cache hits are working nicely, build times are noticeably down, and it feels reliable so far.
If you're in a similar spot (Nx monorepo, not ready for the Nx Cloud paid tier, and allergic to provisioning cloud storage just for caching), give it a try. It's MIT licensed, so fork it, tweak it, and open PRs if you spot improvements or run into issues.
Repo: https://github.com/enxtur/nx-caching-server
Docker: docker pull enxtur/nx-caching-server
Feedback welcome — especially if you deploy it somewhere real or hit any edge cases.
