Hugo in DevOps Mode

As I explained last week, I have been updating this website in various ways: I removed the downloadable PDFs, then added privacy-friendly analytics, and finally set up a scheduled pipeline in GitLab to automatically build and deploy this website every Friday morning.

Yes, I admit that for the past 2.5 years I’ve built and deployed this website manually; for a while I was simply more interested in migrating content than in anything else. And to be honest, Hugo makes such a task very simple.

The thing is, it’s not just about uploading the resulting HTML files; I also have a search engine that must be regenerated every time I add new content. Again, not the end of the world, but still.

So I went full DevOps with Hugo and Asciidoctor and PHP and Python and all the other things I need to publish this website, but no Kubernetes. For many people, DevOps and Kubernetes are so connected to one another that it’s hard to separate them. But the truth is that you can go “full DevOps” with any technology.

DevOps without Kubernetes

The steps I took were the following:

  1. Encapsulate in a first container image all the tools required to build the website. It’s a container image derived from docker.io/klakegg/hugo:0.101.0-ext-asciidoctor with a few more bells and whistles.
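To illustrate, the Dockerfile for that first image need not be longer than a few lines; this is a hypothetical sketch, since the actual extras are not shown in this post:

```dockerfile
# Sketch only: start from the image mentioned above…
FROM docker.io/klakegg/hugo:0.101.0-ext-asciidoctor
# …and add the "bells and whistles" here (actual additions not shown).
```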
  2. Encapsulate in another container image the search engine index generation. This one is a bit more complex, because I needed both PHP and Python in it, but nothing to write home about. Of course, both images are stored in GitLab’s very convenient container registry.
# Install PHP dependencies
FROM docker.io/library/composer:2.5.5 AS composer
WORKDIR /build
COPY composer.json /build/composer.json
RUN composer install

# Runtime image
FROM docker.io/library/ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y sqlite php8.1 php8.1-sqlite3 php8.1-mysql php8.1-mbstring python3 python3-pip make
WORKDIR /build

# Install Python dependencies
COPY requirements.txt /build/requirements.txt
RUN pip3 install -r requirements.txt

COPY --from=composer /build/vendor /build/vendor
COPY src /build/src
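The Python half of the indexer is not shown in this post, but to give an idea of the kind of work it does, here is a hypothetical sketch of a script that walks the generated site and records every file in a small SQLite database (the function name, file name, and schema are all assumptions):

```python
import os
import sqlite3


def create_files_db(input_dir: str, output_dir: str) -> str:
    """Walk input_dir and store every file path in a SQLite database.

    Hypothetical sketch; the real create_files_db.py is not shown here.
    """
    db_path = os.path.join(output_dir, "files.db")
    conn = sqlite3.connect(db_path)
    # DDL runs in autocommit mode, so the table exists immediately.
    conn.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY)")
    with conn:  # commits the inserts on success
        for root, _dirs, names in os.walk(input_dir):
            for name in names:
                rel = os.path.relpath(os.path.join(root, name), input_dir)
                conn.execute(
                    "INSERT OR REPLACE INTO files (path) VALUES (?)", (rel,)
                )
    conn.close()
    return db_path
```

The CI job would then invoke such a script with the Hugo output directory as input and the index directory as output, exactly as the YAML below does.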
  3. Create a .gitlab-ci.yml file for this website, putting all of these tools into action¹:
stages: [build, index, deploy]

hugo:
  stage: build
  image:
    name: registry.gitlab.com/akosma/hugo:1.2
    entrypoint: [""]
  cache:
    policy: pull-push
    paths:
      - build/public
  script:
    - hugo --minify --quiet --baseURL https://akos.ma --destination build/public
  artifacts:
    paths:
      - build/public
  only:
    - schedules

index:
  stage: index
  image:
    name: registry.gitlab.com/akosma/search-engine-for-akos.ma:1.5
    entrypoint: [""]
  cache:
    policy: pull-push
    paths:
      - build/public
      - build/index
  script:
    - mkdir -p build/index
    - rm -f build/index/index.db
    - php /build/src/create_index.php build/public build/index
    - python3 /build/src/create_files_db.py build/public build/index
  artifacts:
    paths:
      - build/index
  only:
    - schedules

deploy:
  stage: deploy
  image: docker.io/library/alpine:3.17.2
  cache:
    policy: pull
    paths:
      - build/public
      - build/index
  before_script:
    - apk update && apk add openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - echo "$SSH_HOST_KEY" > ~/.ssh/known_hosts
  script:
    - 'rsync --recursive --quiet --update ./build/public/* "$DEPLOYMENT_PATH_HTML"'
    - 'scp ./build/index/index.db "$DEPLOYMENT_PATH_SEARCH"'
  only:
    - schedules
  4. Create a scheduled pipeline in GitLab, to be executed every Friday morning at 08:00 (Zürich time).
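Such a schedule lives in GitLab’s UI rather than in the YAML file; assuming the Europe/Zurich timezone is selected there, the corresponding cron expression would be:

```
# minute hour day-of-month month day-of-week
0 8 * * 5
```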

Thanks to containers, I can reuse these images in the Makefile I use to generate the website locally; even better, I don’t need to install PHP, Python, or even Hugo on my machine. My only requirement is Podman or Docker.

PODMAN := podman run --rm --volume "$${PWD}":/build
IMAGE_HUGO := registry.gitlab.com/akosma/hugo:1.2
HUGO := $(PODMAN) --entrypoint="hugo" $(IMAGE_HUGO) --minify --quiet
PREVIEW := $(PODMAN) --publish 1313:1313 $(IMAGE_HUGO)

IMAGE_INDEXER := registry.gitlab.com/akosma/search-engine-for-akos.ma:1.5
SEARCH_INPUT := $(CURDIR)/build/public
SEARCH_OUTPUT := $(CURDIR)/build/index

.PHONY: all
all: html

.PHONY: preview
preview:
	$(PREVIEW) server --bind "0.0.0.0" --buildDrafts --buildFuture --baseURL http://localhost:1313

.PHONY: html
html:
	$(HUGO) --baseURL https://akos.ma --destination build/public

.PHONY: search_index
search_index: build/index $(SEARCH_OUTPUT)/index.db

build/index:
	mkdir -p build/index

$(SEARCH_OUTPUT)/index.db:
	podman run --rm --volume $(SEARCH_INPUT):/build/input --volume $(SEARCH_OUTPUT):/build/output $(IMAGE_INDEXER) php src/create_index.php input output
	podman run --rm --volume $(SEARCH_INPUT):/build/input --volume $(SEARCH_OUTPUT):/build/output $(IMAGE_INDEXER) python3 src/create_files_db.py input output

.PHONY: clean
clean:
	rm -rf build

Deployment

As you can see in the YAML above, the deployment process involves good old rsync and scp, and the pipelines only run when triggered by a schedule. The inspiration for this setup comes directly from this page.

Et voilà. Full DevOps with traditional web hosting servers. Just git push articles for future editions during the week, and let GitLab wake up and do its job automatically for you on Friday morning.


  1. The value of the SSH_HOST_KEY variable can be found with the ssh-keygen -H -F hostname command. ↩︎