The Minio Pivot: Container images and CVEs
In case you missed it, there’s been some intense community discussion around Minio (the popular S3-compatible object storage server). It’s a perfect storm: a critical vulnerability, a sudden business model change, and a high-stakes standoff that puts Docker Hub in a really shitty position.
What Happened?
The whole mess kicked off when a critical privilege escalation vulnerability, CVE-2024-40626, was disclosed. A patch for this nasty bug hit Minio’s source code around October 15th, but users quickly noticed the official minio/minio Docker image wasn’t being updated. This led to a GitHub issue being filed around October 18th, politely asking what was up.
The answer was a bombshell: Minio had already made a business decision around October 10th to become a source-only distribution. They were abandoning their pre-compiled binaries (including the Docker image) and had even pulled their online documentation.
As you’d expect, the community discussion exploded, both on GitHub and in a massive Hacker News thread around October 21st. To make a bad situation worse, when Docker’s own “trusted images” team tried to contact Minio, they were met with complete radio silence.
Docker is in a Shitty Position
This leaves Docker, the custodian of the “trusted” minio/minio image, in a no-win scenario. They have an image on Docker Hub, tagged as “trusted,” that they know contains a critical, unpatched vulnerability, and the vendor is MIA. Their options are all bad.
They could do nothing and just leave the vulnerable image up. This, however, would be a massive breach of trust. Unfortunately, knowingly leaving a popular image up for grabs with a massive unpatched vulnerability is itself a choice.
Their second option is to take the image down entirely. This is the “break the internet” option. Countless CI/CD pipelines, production Kubernetes clusters, and development environments have docker pull minio/minio in their scripts. Removing the image instantly breaks all of them, and the community backlash would be deafening.
Finally, and this is arguably the worst choice, they could fix the image themselves. This sets a terrible precedent. Docker is a platform, not a software maintainer for every open-source project on the planet. Are they now responsible for patching any “trusted” image whose vendor becomes unresponsive? It’s a legal and logistical nightmare that completely blurs the line between platform and vendor.
What This Means for You
This entire fiasco is a brutal reminder of the fragility of our software supply chains. We place immense trust in package managers and “trusted” registries, but that trust can be broken in an instant by a single vendor’s business decision.
Minio is well within its rights to change its distribution model. But doing so essentially by surprise, while leaving an unpatched critical vulnerability in its most popular distribution channel, is a massive failure of responsibility to the community that helped build its popularity.
So, what should you do if you’re a Minio user?
- STOP using the public minio/minio Docker image immediately. Assume it is and will remain vulnerable.
- Audit your systems. Find every instance where you’re pulling that image.
- Choose your new path:
- Build from source: This is what Minio wants. You will have to create your own CI pipeline to build Minio from its source code and host your own internal Docker image.
- Find a community fork: Keep an eye out. It is almost certain that a community-maintained fork will spring up to provide patched binaries.
- Migrate to an alternative: This might be the final push you need to look at alternatives like SeaweedFS, Ceph, or a managed S3-compatible service from a cloud provider.
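The audit step above can be scripted. Here’s a minimal sketch that recursively greps a directory tree for references to the image; the function name and the file patterns are my own assumptions, so widen them to match wherever your pipelines actually pull images:

```shell
# audit_minio: recursively search a tree of config files (Compose files,
# Kubernetes manifests, Dockerfiles, CI scripts) for references to the
# minio/minio image, with or without a tag.
# The --include patterns are assumptions; adjust for your repos.
audit_minio() {
  root="${1:-.}"
  grep -rnE 'minio/minio(:[A-Za-z0-9._-]+)?' "$root" \
    --include='*.yml' --include='*.yaml' \
    --include='Dockerfile*' --include='*.sh'
}
```

Running `audit_minio ~/src` prints file:line hits; a nonzero exit status means no references were found.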
This is a wake-up call. “Trusted” doesn’t mean “immortal.” Always have a plan for what happens when a critical dependency disappears.
Generally speaking, if there is a critical open source component in my application or stack, I’ll grab the source code, build the images locally, and store them in an artifact repository so that my instances use those instead of the upstream. While I’ll trust the upstream to a point, too many pieces of open source that support large swaths of the internet are kept up to date and working by a single person or company.
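For Minio specifically, that locally built image can be a small multi-stage Dockerfile. This is a sketch only: the release tag shown is a hypothetical placeholder (pin an exact release you’ve actually reviewed), and the Go and runtime base images are assumptions you should pin by digest in practice.

```dockerfile
# Build Minio from source instead of pulling the upstream image.
# MINIO_RELEASE below is a placeholder, not a real audited tag.
FROM golang:1.22 AS build
ARG MINIO_RELEASE=RELEASE.2024-10-15T00-00-00Z
RUN git clone --depth 1 --branch "${MINIO_RELEASE}" \
      https://github.com/minio/minio.git /src
WORKDIR /src
RUN CGO_ENABLED=0 go build -o /minio .

# Minimal runtime image containing only the binary we just built.
FROM gcr.io/distroless/static
COPY --from=build /minio /usr/bin/minio
EXPOSE 9000
ENTRYPOINT ["/usr/bin/minio"]
```

From there, tag and push the result into your own artifact repository (the registry name here is a placeholder), e.g. `docker build -t registry.internal/minio:pinned . && docker push registry.internal/minio:pinned`, and point your deployments at that instead of Docker Hub.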