This was insanely difficult to figure out, so I'm writing up a short blurb in the hope it helps someone else on the internet use a pull-through proxy with k3os nodes.
I have a Harbor installation that has proxied Docker Hub ever since they implemented their fairly restrictive rate limits. It's worked great since Harbor v2.2.1, when they added global robot accounts. Super useful for my GitLab and anything else I'm doing with Docker.
The meat, the `registries.yaml`:
```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://$REGISTRY_HOST/v2/$PROXY_PROJECT"
  $REGISTRY_HOST:
    endpoint:
      - "https://$REGISTRY_HOST"
configs:
  "$REGISTRY_HOST":
    auth:
      username: "read-only-user"
      password: "this is a password lol"
    tls:
      ca_file: /etc/ssl/certs/ca.crt
```
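On a plain k3s install this file lives at `/etc/rancher/k3s/registries.yaml`. On k3os, one way to get it there (a sketch, assuming the standard k3os cloud-config; double-check the paths against your k3os version) is a `write_files` entry in the node's `config.yaml`:

```yaml
# k3os config.yaml fragment (sketch) -- writes the registries config at
# boot so k3s/containerd pick it up. Paths are my assumption; verify
# them for your setup.
write_files:
  - path: /etc/rancher/k3s/registries.yaml
    permissions: "0600"
    content: |
      mirrors:
        docker.io:
          endpoint:
            - "https://$REGISTRY_HOST/v2/$PROXY_PROJECT"
```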
The above config assumes you've got a Harbor registry at https://$REGISTRY_HOST; mine is at https://registry.light.kow.is. The proxy project is the name you chose when setting up the pull-through proxy cache in Harbor.
This setup means that when something asks for `rancher/pause:3.14`, it will still resolve to `docker.io/rancher/pause:3.14` by default, but containerd will fetch it from the configured mirror endpoint instead. So all the k8s charts, manifests, and everything else that define their default images on Docker Hub will instead go through my pull-through proxy, keeping me below the rate limits and keeping pulls fast!
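To make that resolution concrete, here's a tiny illustrative sketch (my own helper, not part of k3s or containerd) of the rule Docker-style tooling uses to expand short image names before the mirror config is consulted: the first path component counts as a registry only if it contains a dot or colon, or is `localhost`; bare names fall under Docker Hub's `library` namespace.

```python
def normalize_image_ref(ref: str, default_registry: str = "docker.io") -> str:
    """Expand a short image reference to a fully qualified one, roughly
    the way Docker/containerd normalize names before mirror lookup.

    Illustrative sketch only -- the real logic lives in the
    distribution/reference library."""
    first, sep, _rest = ref.partition("/")
    # The part before the first "/" is a registry host only if it looks
    # like one: contains a "." or ":" or is exactly "localhost".
    if sep and ("." in first or ":" in first or first == "localhost"):
        return ref  # already fully qualified
    if not sep:
        # Bare names on Docker Hub live under the "library" namespace.
        return f"{default_registry}/library/{ref}"
    return f"{default_registry}/{ref}"

print(normalize_image_ref("rancher/pause:3.14"))  # docker.io/rancher/pause:3.14
print(normalize_image_ref("nginx:1.21"))          # docker.io/library/nginx:1.21
print(normalize_image_ref("registry.light.kow.is/proj/img:v1"))  # unchanged
```

With the mirror config above, anything that normalizes to `docker.io/...` gets routed to the Harbor endpoint instead of Docker Hub itself.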
I spent about 8 hours fighting what I thought were k3s (or k3os) problems, when it turned out I was just getting rate-limited by Docker Hub. Kubernetes' failure modes made it really hard to figure out what was actually going on. I wish I'd had this plainly documented up front.
I hope this saves you a whole buttload of time.