In a previous article I described how I deployed my blog on kubernetes and served it over HTTP. Today I’d like to add three more pieces:
- Automate Let’s Encrypt certificate retrieval (and renewal)
- Add a TLS-capable load balancer
- Add IPv6 support (because it’s 2017)
Automating certificate management
Thanks to Let’s Encrypt, web servers can request trusted, signed certificates for free in a fully automated manner. A web traffic load balancer is basically a proxy server, acting like a web server on the frontend and like an HTTP client towards the backend. So why not let the load balancer’s frontend (the web server part) take care of fetching a certificate from Let’s Encrypt? We have seen other web servers, such as Caddy, take care of certificate management.
Unfortunately, this feature is not available on Google Cloud Platform (GCP). Furthermore, I can imagine this working fine with a single load balancer, but failing at scale in a multi-balancer setup. The reason is that Let’s Encrypt enforces API rate limits: one can request only so many certificates per week. But even if we had access to an unlimited API, it would still be a non-trivial task to make sure the right load balancer responds to the HTTP challenge request from Let’s Encrypt.
What we need to address the problem is software that retrieves and renews certificates and deploys them to our load balancer(s) whenever a relevant change occurs. A relevant change in this sense could be a modified hostname, a new subdomain, or the approaching expiration date of a currently deployed certificate. Fortunately, there is a tool for that already. There are actually multiple tools, and they run on kubernetes, making deployment really straightforward:
In this article we will use kube-lego, but I can highly recommend cert-manager, too. Of course, for non-production use cases only. 😉
Like every other workload, we like to cage kube-lego into a dedicated namespace. We define the namespace in k8s/kube-lego.ns.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-lego
And create it via the command-line tool:
$ kubectl create -f k8s/kube-lego.ns.yaml
The next step is to define and configure the kube-lego deployment in k8s/kube-lego.deployment.yaml.
For the initial deployment of kube-lego, I recommend setting LEGO_LOG_LEVEL to debug for more verbose output.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_LOG_LEVEL
          value: info  # more verbose: debug
        - name: LEGO_EMAIL
          value: firstname.lastname@example.org  # change this!
        - name: LEGO_URL
          value: https://acme-v01.api.letsencrypt.org/directory
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 50m
            memory: 50Mi
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
Once the namespace is ready, we can deploy and check whether the deployment succeeded:
$ kubectl create -f k8s/kube-lego.deployment.yaml
✂️
$ kubectl -n kube-lego get deployments
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-lego   1         1         1            1           10m
Tip: Consider using a configmap as an alternative to hard-coding configuration parameters into a deployment.
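For illustration, a minimal sketch of that approach (the ConfigMap name and the envFrom wiring are my own choices here, not part of the original setup; the fieldRef-based variables LEGO_NAMESPACE and LEGO_POD_IP would stay in the deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  LEGO_LOG_LEVEL: info
  LEGO_EMAIL: firstname.lastname@example.org  # change this!
  LEGO_URL: https://acme-v01.api.letsencrypt.org/directory

The deployment would then pull these values in via envFrom instead of individual env entries:

        envFrom:
        - configMapRef:
            name: kube-lego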
Adding a TLS-enabled load balancer
With kube-lego there are two different ways of defining a load balancer. The easier (but more expensive) one is to use a load balancer provided by GCP. The alternative is deploying an nginx ingress pod and using that as the load balancer. I got good results from both in my experiments. For the sake of brevity, we will use the quicker GCP way in this article.
First, we need to create a kubernetes ingress object to balance and proxy incoming web traffic. The important part here is that we can influence the behavior of the ingress object by providing annotations.
kubernetes.io/ingress.class: "gce"
This annotation lets kubernetes know that we want to use a GCP load balancer for ingress traffic. Obviously, this annotation does not make sense on kubernetes installations that do not run on GCP.
kubernetes.io/tls-acme: "true"
This annotation allows kube-lego to manage the domains and certificates referenced in this ingress object for us. If we leave out this annotation, kube-lego will refrain from touching the ingress object or its associated kubernetes secrets.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
  name: website
  namespace: website
spec:
  rules:
  - host: test.danrl.com
    http:
      paths:
      - backend:
          serviceName: website
          servicePort: 80
        path: /
  tls:
  - hosts:
    - test.danrl.com
    secretName: test-danrl-com-certificate
$ kubectl create -f k8s/website.ingress.yaml
It may take a while for the ingress object to become fully visible. GCP is not the fastest fellow to spin up new load balancers in my experience. ⏱
$ kubectl -n website get ingress
NAME      HOSTS            ADDRESS          PORTS     AGE
website   test.danrl.com   22.214.171.124   80, 443   3m
Very soon after the load balancer is up and running, kube-lego should jump in and notice the lack of a certificate. It will fetch one and deploy it automatically. Awesome! We can watch this process in the logs. I use Stackdriver for collecting logs from kubernetes workloads, but there are many other options as well. Wherever your logs are, look out for a line similar to this one:
level=info msg="requesting certificate for test.danrl.com" context="ingress_tls" name=website namespace=website
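If you don’t have a log pipeline like Stackdriver set up, kubectl can tail the same logs directly (assuming the deployment name from above):

$ kubectl -n kube-lego logs -f deployment/kube-lego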
Once the requested certificate has been received, kube-lego will create or update the secret for it. We can verify the existence of the secret:
$ kubectl -n website get secrets
NAME                         TYPE                DATA      AGE
✂️
test-danrl-com-certificate   kubernetes.io/tls   2         22m
From now on, kube-lego will monitor the certificate and renew and replace it as necessary. The certificate should also show up in the load balancer configuration on the GCP console at Network Services → Load balancing → Certificates (you may have to enable the advanced menu at the bottom):
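For an extra sanity check (my own addition, not required for the setup), we can extract the certificate from the secret and let openssl print its expiration date:

$ kubectl -n website get secret test-danrl-com-certificate \
    -o jsonpath='{.data.tls\.crt}' | base64 -d \
    | openssl x509 -noout -enddate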
To test the automation further, we could trigger a certificate renewal by tweaking the LEGO_MINIMUM_VALIDITY environment variable (optional).
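As a sketch (the value here is arbitrary; if I recall correctly, kube-lego parses this as a Go duration string), setting the minimum validity higher than the remaining lifetime of the current certificate should make kube-lego consider it due for renewal on the next check:

        - name: LEGO_MINIMUM_VALIDITY
          value: "2159h"  # just under the 90-day lifetime of a Let's Encrypt certificate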
For reference, here is the automatically retrieved follow-up certificate I got:
Adding IPv6 to the load balancer
In the standard configuration, GCP load balancers are started without an IPv6 address assigned. Technically, though, they can handle IPv6 traffic, and we are free to assign IPv6 addresses to them. To do this, we first have to reserve a static IPv6 address. This is done at VPC network → External IP addresses.
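If you prefer the command line over the console, reserving a global IPv6 address with gcloud should look roughly like this (the address name website-ipv6 is made up):

$ gcloud compute addresses create website-ipv6 --ip-version=IPV6 --global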
Reserving an address means that this address cannot be used by anyone else on the platform. If we reserve addresses but don’t use them, charges will apply.
Once the address is reserved, we can assign it to the load balancer. To do that, we have to add an additional frontend for every address and every protocol (HTTP, HTTPS). That is, two frontends for each additional address.
We have to do the same for HTTPS, too, of course. When setting the IPv6 HTTPS frontend, we select the current certificate from the dropdown menu.
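For the command-line inclined, these extra frontends are forwarding rules attached to the target proxies that the ingress controller created. A rough sketch (rule names are made up; look up the actual proxy names with gcloud compute target-http-proxies list and gcloud compute target-https-proxies list first):

$ gcloud compute forwarding-rules create website-ipv6-http \
    --global --address=website-ipv6 --ports=80 \
    --target-http-proxy=<proxy created by the ingress controller>
$ gcloud compute forwarding-rules create website-ipv6-https \
    --global --address=website-ipv6 --ports=443 \
    --target-https-proxy=<proxy created by the ingress controller>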
Almost automated… 😤
And now I have some bad news for you. ☹️ IPv6 load balancer frontends, certificate renewal via kube-lego, and GCP load balancers do not go very well together (as of the time of writing). When kube-lego renews a certificate, it ignores manually added frontends. This means the certificate for the IPv6 frontend will not be replaced automatically. Very frustrating!
In the screenshot we can see the new certificate k8s-ssl-1-website2-website2--a02b6ae745a706f8 alongside the old one k8s-ssl-website2-website2--a02b6ae745a706f8. The certificate was replaced only for the IPv4 frontend.