
How to: Deploy this site

by Firq • Published on 17 March 2023

After spending the earlier half of the day getting familiar with new commands and concepts like artifacts, I am pleased to announce that I managed to rewrite my deployment process.

Let’s take a look, shall we?

The previous setup

Since this site is developed with Astro and served statically, I need to build it before being able to serve it. So running npm install and npm run build is a given, and that’s what the really old GitLab Pages setup did. Back then, the .gitlab-ci.yml looked like this:

image: node:lts
pages:
  cache:
    paths:
      - node_modules/
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - public
  only:
    - main

Funnily enough, node:lts is a change I contributed to the Astro docs after version 2.0 released.

This setup just used a Node Docker image to build the site and publish the files to the public path, from where GitLab Pages then serves them.

But after migrating to the custom configuration using npx serve, the pipeline needed to be adjusted. It now moved the build from the Docker container to the Proxmox instance where the site is hosted, and looked like this:

deploy-site:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts

  script:
    - echo "Connecting to proxmox machine"
    - ssh $DEPLOY_USER@$DEPLOY_HOST -o StrictHostKeyChecking=no -o IdentitiesOnly=yes " .... "

First installing ssh and then setting everything up from the CI variables, this was a big change from the Pages configuration. But one thing really bothered me: the ssh call. If you look at it, you can see that I left out the commands that are passed to ssh. There is a good reason: they are waaaay too long. Written out as separate shell calls, they would look something like this:

screen -X -S website-firq-npx kill;   # stop the running serve session
rm -r -f public/*;                    # delete the currently served site
cd build;
git reset --hard;                     # discard any local changes
git pull;                             # update the repository
rm -r -f node_modules;
npm install;
npm run build;
rm -r public/assets/data/;            # remove data that should not be served
cp -R public ~;                       # move the build output into place
cp serve.json ~/public;
cd ~;
screen -S website-firq-npx -dm npx serve public/ -p 9000 -c serve.json

Given the following directory structure on the remote host, this is easily explained:

/
├─ public/
│  ├─ site content
│  ├─ serve.json
├─ build/
│  ├─ node_modules/
│  ├─ .git
│  ├─ repository content
Small explanation of the serve.json

The serve.json is the config file for the npx serve command and controls how the files are served. In my case, I use it for two entries:

{
  "directoryListing": ["/!assets/**"],
  "headers": [
    {
      "source": "**/*.@(jpg|jpeg|gif|png|webp)",
      "headers": [
        {
          "key": "Cache-Control",
          "value": "no-cache"
        }
      ]
    }
  ]
}

The first entry redirects any non-file access requests to my custom 404 page, as I don’t want people to be able to use the serve UI to navigate my file system. The second controls caching for the image files, which I still need to change in the future.
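A quick way to verify the header on a running instance is to simply ask for it (the URL here is a placeholder, substitute your own host and image path):

# HEAD request against an image, then filter for the caching header
curl -I "https://example.com/assets/some-image.png" | grep -i cache-control
# should print: Cache-Control: no-cache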


Upon starting the pipeline, the old site gets taken offline and deleted. Afterwards, the repository in build is reset and pulled. Then, as before, npm install and npm run build are executed, and the finished build is moved to the public folder before being served.

But after running this setup for a while, I noticed its shortcomings when it came to extending it: the command being so long meant the code was hard to read and pretty much obfuscated. So I wanted to change that up. Here is how I did it.

The new setup

The first step was splitting the pipeline into two stages instead of one. This meant a new stages entry needed to be added to the YAML:

stages:
  - build
  - deploy
  - notification

Note: The notification stage is not developed by me, but by the great folks over at the DiscordHooks Project. By using their gitlab-ci-discord-webhook, I can send the status of my pipelines to any number of Discord servers. Look at their repo if you want to integrate it into your own pipelines.
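For reference, the script portion of such a notification job looks roughly like this (a sketch based on their README; DISCORD_WEBHOOK is a CI variable you have to configure yourself):

# download their helper script and report the pipeline status to Discord
wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
chmod +x send.sh
./send.sh success $DISCORD_WEBHOOK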

Building first

The upper portion of the .gitlab-ci.yml now looks like the following:

build-site:
  image: node:lts
  stage: build
  cache:
    paths:
      - node_modules/
  only:
    - main
  script:
    - npm install
    - npm run build
    - rm -r public/assets/data/
    - cp serve.json public
  artifacts:
    paths:
      - public
    expire_in: 1 day

This stage builds the site, removes the data folder, copies the serve config into place, and stores the output in public as an artifact. It obviously works similarly to the old GitLab Pages pipeline, but instead of invoking Pages afterwards, the data just gets stored for a later stage. Once it completes, the next stage begins.

rsync heaven

Since the files are now built on the GitLab server and not on the Proxmox instance, they need to be moved over during the pipeline. This can be achieved with a great utility known as rsync. It can send data via ssh to any host that supports it, which makes it the ideal tool for my use case. I also recommend looking at this blog post from Mitsunee detailing how to use ssh and rsync to sync game libraries to the Steam Deck with ease.
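Stripped of all the CI plumbing, the core call is pretty simple. Something like this copies a local folder into the remote home directory over ssh (user and host are placeholders):

# -a preserves permissions and timestamps, -z compresses during transfer,
# --stats prints a summary of the transferred data
rsync -az --stats ./public deploy@example.com:~/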

In my case, the resulting .gitlab-ci.yml looked like this:

deploy-site:
  stage: deploy
  only:
    - main
  before_script:
    - 'which rsync || ( apk update && apk add rsync )'
    - 'which ssh-agent || ( apk update && apk add openssh-client )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' >> ~/.ssh/key_firq
    - chmod 600 ~/.ssh/key_firq
    - echo "Host $DEPLOY_HOST" >> ~/.ssh/config
    - echo $'\n\tIdentityFile ~/.ssh/key_firq' >> ~/.ssh/config
    - echo $'\n\tStrictHostKeyChecking no\n\tIdentitiesOnly yes\n' >> ~/.ssh/config
    - chmod 644 ~/.ssh/config
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts

  script:
    - ls public
    - ssh $DEPLOY_USER@$DEPLOY_HOST "screen -X -S website-firq-npx kill; rm -r -f public/*;"
    - rsync -az --stats public $DEPLOY_USER@$DEPLOY_HOST:~/.
    - ssh $DEPLOY_USER@$DEPLOY_HOST "screen -S website-firq-npx -dm npx serve public/ -p 9000 -c serve.json"

First, there is the rsync and ssh setup, which now also creates a dedicated ssh config. This really reduces the number of arguments that need to be passed to ssh and rsync, making the whole flow less error-prone. My main issue was figuring out how to structure this, as I needed to set up ssh from within the pipeline without causing any weird issues. But with a custom ssh key and config, it became really easy to get this running.
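Put together, the before_script generates a key file plus an ~/.ssh/config that ends up looking roughly like this (with $DEPLOY_HOST expanded to the actual hostname):

Host <deploy-host>
    IdentityFile ~/.ssh/key_firq
    StrictHostKeyChecking no
    IdentitiesOnly yes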

Funnily enough, this also taught me how to squash commits on main, as I had like 30 commits where I pretty much changed single lines just for debugging. I’ll use a feature branch next time, that’s for sure.

git rebase -i origin/main~30 main   # interactively squash the last ~30 commits
git push origin +main               # force-push the rewritten history

Finally, a result

After getting it all to work, I was really pleased with how well this whole chain works. In the future, I’ll probably tie the pipeline to a tag push instead of pushes on main, but that can wait.
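Switching to tag-based deployments would mostly come down to changing the trigger condition, along these lines (a sketch of what that rule could look like, not what the pipeline currently does):

deploy-site:
  stage: deploy
  rules:
    - if: $CI_COMMIT_TAG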

Also: I can only recommend that anyone working with GitLab uses the GitLab Workflow extension for VS Code. It makes debugging faster, as you can easily validate the YAML in Code, and you can observe running pipelines and merge requests from the editor itself without switching tabs.

All in all, I am really happy with this improvement, and I know that developing this site has become a lot easier now.

Thanks for reading,

~ Firq