How I continuously deploy my static website

How I configured a deployment pipeline for my website.


The source of my website is managed in a local git repository. It consists of Markdown and image files for the content, HTML, CSS, and JS files for the theme and layout, and some files for visitors to download, such as PDFs. Everything is compiled with a static site generator. I switched from Jekyll to Hugo because I liked Hugo’s theme engine more. I don’t exactly remember why, but for some reason I compile the source files into static HTML locally; it would probably make more sense to do that on a server as part of the deployment pipeline. After compilation I check the resulting files into the repository as well, placing them in a folder named public. That unnecessarily bloats the repository, so I don’t really recommend it.
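The build-and-commit step can be sketched as follows. Since Hugo may not be installed everywhere, the `hugo` call is replaced here by a placeholder that writes a file into public, and a throwaway repository stands in for the real one:

```shell
# Sketch of the local build-and-commit step. The temporary repository
# and the placeholder page are stand-ins; in the actual pipeline,
# `hugo` writes the compiled HTML into ./public.
set -e
repo=$(mktemp -d)                       # stand-in for the website repo
git -C "$repo" init -q

# Real pipeline: (cd "$repo" && hugo)
mkdir -p "$repo/public"
echo '<h1>Hello</h1>' > "$repo/public/index.html"

git -C "$repo" add public
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'Rebuild site'
git -C "$repo" log --oneline            # shows the new commit
```

In the real repository, a git push follows the commit, and that push is what triggers the webhook described below.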

Once the website is compiled and the result is pushed to a private GitHub repository, magic kicks in. My website is served from a virtual machine running on Vultr infrastructure hosted in New York. The virtual machine runs Ubuntu Server as its operating system and Caddy as its webserver. Caddy is an awesome, extensible HTTP/2 webserver written in Go with built-in TLS certificate management. By default, Caddy will automatically request (and renew) TLS certificates from Let’s Encrypt for all the domains and subdomains it serves. I think bringing a TLS-secured website online has never been easier.
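A minimal Caddyfile is enough to get that automatic HTTPS behavior; example.com below is a placeholder for the actual domain:

```
example.com {
    root /srv/
}
```

With just these lines, Caddy obtains and renews a Let’s Encrypt certificate for example.com and serves the files under /srv/ over HTTPS.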

For my website, I use the git plugin for Caddy. The git plugin exposes a webhook that can be triggered by GitHub. I configured GitHub to trigger the webhook on every push to the website repository. Once triggered, Caddy pulls the latest changes, thus automatically updating the files it serves.

This leaves me with very little to do when I want to publish new articles: edit the Markdown source file, compile the website, commit, and push. Everything after that is taken care of by the pipeline I just described.

I’ll use the following paragraphs to share my configuration. Feel free to build upon it and improve the pipeline.

Configuring GitHub

Caddy expects the webhook payload to be in JSON format, and setting the right content type makes it easier for Caddy to recognize the payload. I highly recommend using a webhook secret! Without the secret, anyone knowing the webhook URL could trigger the webserver to pull the repository. As there would be no changes in that case, no visible damage would happen and no information would leak, but it wastes resources and offers a nice denial-of-service (DoS) attack surface.
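For illustration, this is roughly how such a signature is computed on the sending side: GitHub calculates an HMAC-SHA1 of the raw payload, keyed with the shared secret, and transmits it in the X-Hub-Signature header. The secret and payload here are made-up values:

```shell
# Compute a GitHub-style webhook signature (HMAC-SHA1 over the raw
# payload, keyed with the shared webhook secret). Secret and payload
# are illustrative values.
secret='WEBHOOKSECRET'
payload='{"ref":"refs/heads/master"}'
sig=$(printf '%s' "$payload" | openssl dgst -sha1 -hmac "$secret" | awk '{print $NF}')
echo "X-Hub-Signature: sha1=$sig"
```

The receiver recomputes the HMAC over the payload it received and rejects the request if the signatures do not match.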

Note: If you are using a private repository, make sure to allow the machine running Caddy to connect to your GitHub account. You can use per-repository deploy keys for that. Just add the webserver’s SSH public key there with read-only access.
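Generating such a deploy key might look like this; the key location and comment string are illustrative, and only the public half goes into the repository’s deploy-key settings:

```shell
# Generate a dedicated SSH key pair for the webserver. Only the public
# half is uploaded to GitHub as a read-only deploy key.
keydir=$(mktemp -d)                        # illustrative location
ssh-keygen -t ed25519 -N '' -C 'caddy-deploy' -f "$keydir/deploy_key" -q
cat "$keydir/deploy_key.pub"               # paste into the repo's deploy keys
```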

Configuring Caddy

Configuring Caddy is usually pretty straightforward. The important parts for the pipeline to work are the lines starting with hook and hook_type. The first parameter of hook defines the webhook’s URL path, the second the secret. Both parameters must match the GitHub webhook configuration. In the Caddyfile below, example.com stands in for the site’s actual domain:

    example.com {
        root /srv/
        git {
            path /srv/
            hook /webhook WEBHOOKSECRET
            hook_type generic
        }
    }
This is what updating the website looks like in the webserver’s log file:

caddy[2431]: Received pull notification for the tracking branch, updating...
caddy[2431]: From
caddy[2431]:  * branch            master     -> FETCH_HEAD
caddy[2431]:    d3b5f7b..3fb9302  master     -> origin/master
caddy[2431]: Updating d3b5f7b..3fb9302
caddy[2431]: Fast-forward
caddy[2431]: 2 files changed, 2 insertions(+), 2 deletions(-)
caddy[2431]: ssh:// pulled.

There is more

If you are considering Caddy as your webserver, I recommend having a look at the other plugins. There is some really useful stuff out there, for example a Hugo administration plugin.