Yesod App GitLab CI Setup

GitLab is a flexible and powerful alternative to classic CI servers. Nevertheless, it can be a bit tricky to set up. Hence, in this blog post we will walk through a basic setup to build, test and deploy a Yesod application. We will cover the following points:

  • Setting up a runner and registering it with a GitLab server
  • Assigning the runner a persistent (and replaceable) volume
  • Linking the volume during the build process (functioning as a file cache to speed up our builds)

You will need the following things already up and running:

  • A GitLab server
  • A server with root access
  • Docker and docker-compose installed on this server

Setting up a runner

Let's begin by creating a runner which will execute the jobs to build, test and deploy our Yesod application. We assume at this point that you already have a GitLab server running and that your repository is created and contains a valid Yesod application. Furthermore, you need a server instance with Docker and docker-compose already installed. With those things set up, connect to your server and choose a place to create a docker-compose.yaml file for our runner, which looks something like this:

version: '2'
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    container_name: gitlab-runner
    volumes:
    - /srv/docker/gitlab-runner/config:/etc/gitlab-runner

As you can see, we mounted `/srv/docker/gitlab-runner/config` as a volume. The runner's `config.toml` inside it will be persistent on the host system and survive any server restarts. Plus, we can easily configure our runner this way directly on the host system. We will keep the configuration aside for a couple of moments and instead register a new runner first. For this, it is necessary to install the gitlab-runner executable on the host (see the official GitLab Runner installation documentation).
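On a Debian/Ubuntu host, one common way to install it is via GitLab's package repository (commands taken from the official installation docs; check them for other distributions):

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

With the executable installed, we can now register a new runner with a registration token from GitLab (Administration > Runners):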

sudo gitlab-runner register -n \
   --url https://yourgitlabserver.com/ \
   --registration-token REGISTRATION_TOKEN \
   --executor docker \
   --description "Shared Docker Runner" \
   --docker-image "docker:stable" \
   --docker-volumes /var/run/docker.sock:/var/run/docker.sock

As you can see, we use the docker executor with "docker:stable" as our base image. If the registration succeeds, you should see a new runner registered in GitLab. Click on the runner to see its token; we need to copy this token to configure our runner properly.

To start the runner, we can use a very simple configuration in the `config.toml` file. Something like this:

concurrent = 1

[[runners]]
  name = "docker-gitlab-runner"
  url = "{{http(s)://CI.location}}"
  token = "{{token}}"
  limit = 0
  executor = "docker"
  builds_dir = ""
  [runners.docker]
    host = ""
    image = "docker:stable"
    privileged = false
    disable_cache = false
    cache_dir = ""

The executor of the runner is defined as docker, so it can create and spin up new containers for us. We also set the concurrency to 1, so that we can later use the cache volume without any interference. With this config file in place we can now start the runner:

sudo docker-compose up -d

(Hint: if the runner is not starting, you can run docker-compose up without -d to attach to the instance logs and see what is going wrong.)
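You can also attach to the logs of the already running container:

sudo docker-compose logs -f gitlab-runner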

If the runner is running correctly, we have finished our first step (a big one, by the way). With the runner registered and installed, we can already execute jobs.
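If you want to verify this right away, a minimal .gitlab-ci.yml with a single hypothetical smoke-test job is enough (note that the runner will only pick it up if it is allowed to run untagged jobs, or if you add a matching tag):

test:runner:
  script:
    - echo "Runner is alive"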

Assigning the runner a persistent volume

First of all, let's see why we need a persistent volume. Since our runner runs Docker containers, they are created on demand during a job and destroyed afterwards. Letting stack download and build all dependencies each time would be time consuming, and you would transfer a lot of data (and probably waste your traffic budget). It is much more efficient to download and build once and reuse the artifacts on each build.

It would be possible to define a persistent volume directly on the runner container. Instead, we will add an extra container running Nginx. This container will have a persistent volume which we will use as our build-cache directory. Nginx can later serve as a simple server to show generated test results or simple application demo pages. (If you don't need Nginx, you can use something else or just not publish the container's port.)

If you have a flexible cloud environment (DigitalOcean, Vultr, Hetzner Cloud or something similar), I recommend ordering a small volume, mounting it on your server and using it as the build-cache directory. With time, your build-cache directory will probably grow with each build. In that case you can easily resize the volume to your needs. Also, if you plan to add more runners, you can copy the volume, mount it on the new runner and reuse the already-built dependencies.

Let's return to our docker-compose.yaml and add some lines for the persistent volume:

version: '2'
services:
  gitlab-runner-volume:
    image: nginx:1.15.7
    restart: always
    container_name: gitlab-runner-volume
    volumes:
    - /srv/docker/gitlab-runner-volume:/build-cache

  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    container_name: gitlab-runner
    volumes:
    - /srv/docker/gitlab-runner/config:/etc/gitlab-runner
    volumes_from:
    - gitlab-runner-volume:rw

It is important to give the container a unique name. We will reference this name in our runner's configuration file and define the volume for the build containers like this:

    volumes_from = ["gitlab-runner-volume:rw"]
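In context, this line belongs to the [runners.docker] section of config.toml; a sketch based on the configuration from above:

[runners.docker]
  host = ""
  image = "docker:stable"
  privileged = false
  disable_cache = false
  cache_dir = ""
  volumes_from = ["gitlab-runner-volume:rw"]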

With the new configuration we can recreate our gitlab-runner container (to apply the changes) and start it together with the additional container:

sudo docker-compose down
sudo docker-compose up -d

Linking the volume during the build process

With the new runner in place, we can now turn our focus to the CI configuration file in our app repository. Let's start with a basic configuration:

image: ersocon/stack:1.9.3

stages:
  - build

build:production:
  stage: build
  environment: Production
  tags:
    - haskell
  only:
    - master
  artifacts:
    paths:
      - build-artifacts/
  script:
    - stack build yesod-bin cabal-install --system-ghc
    - stack exec -- yesod keter

What's happening here? We defined the base container in the `image` attribute. I have chosen `ersocon/stack:1.9.3`; this container comes with Haskell and stack pre-installed and can execute stack commands against our repository code. (Of course, you are free to choose any container you like.)

We also defined one stage, called `build`, and a job `build:production` which is executed in that stage. Furthermore, we declared a build-artifacts folder which will contain our keter file after stack has built it for us.
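One note on the artifacts path: where yesod keter places the bundle depends on your config/keter.yml (for example its copy-to setting). If the bundle ends up in the project root, you would move it into the folder yourself with a hypothetical extra script line:

  script:
    - stack build yesod-bin cabal-install --system-ghc
    - stack exec -- yesod keter
    - mkdir -p build-artifacts && mv *.keter build-artifacts/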

Now, this configuration has some drawbacks. It works, but it never reuses any build dependencies. As discussed earlier, we would waste a lot of traffic budget on our server, and we would need a lot of CPU power to rebuild all dependencies each time the job is executed. So, let's optimize this.

First, let's define a stack root directory which we will make persistent, so it survives each build as well as container or server restarts. The stack root directory contains all dependencies that have already been built. So once we build them, they are neither downloaded nor compiled again on the next run. Sharing this directory across all branch builds is unproblematic, but I don't recommend sharing it across multiple projects.

variables:
  STACK_ROOT: "/build-cache/{{your-repository-name}}/stack-root"

We can also put our project's .stack-work directory in the persistent build-cache. Here, however, we need to separate it by branch. Moreover, .stack-work is local to the project, so we cannot simply point stack to another directory. And since we don't want to copy files around, we just symlink it into the project, like this:

before_script:
  - mkdir -p /build-cache/{{your-repository-name}}/${CI_COMMIT_REF_NAME}/.stack-work
  - ln -s /build-cache/{{your-repository-name}}/${CI_COMMIT_REF_NAME}/.stack-work .stack-work

With this setup the first job will run for a long(er) time, but afterwards each build will only take a couple of minutes (depending on the project's complexity).
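For reference, here is the complete .gitlab-ci.yml with all the pieces from above assembled ({{your-repository-name}} is still a placeholder you have to fill in):

image: ersocon/stack:1.9.3

variables:
  STACK_ROOT: "/build-cache/{{your-repository-name}}/stack-root"

before_script:
  - mkdir -p /build-cache/{{your-repository-name}}/${CI_COMMIT_REF_NAME}/.stack-work
  - ln -s /build-cache/{{your-repository-name}}/${CI_COMMIT_REF_NAME}/.stack-work .stack-work

stages:
  - build

build:production:
  stage: build
  environment: Production
  tags:
    - haskell
  only:
    - master
  artifacts:
    paths:
      - build-artifacts/
  script:
    - stack build yesod-bin cabal-install --system-ghc
    - stack exec -- yesod keter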

Thank you for reading this far! Let’s connect. You can @ me on X (@debilofant) with comments, or feel free to follow. Please like/share this article so that it reaches others as well.

© Copyright 2024 - ersocon.net - All rights reserved