Before I begin this, I want to give a shoutout to Neshura - without him, I would never have had the opportunity to actually create and host my website. I also want to thank him for his patience when I was stuck multiple times during the migration process, as my experience with Forgejo and its CI was really limited.
Migrating Git - GitLab to Forgejo
The migration of the git repositories themselves went a lot smoother than expected. The main reason for this was Forgejo's built-in migration tool suite, which lets you migrate external repositories with ease.
Generally, the workflow was as follows:
- Open Forgejo and start a new migration
- Select GitLab from the list
- Fill in the details of the migration: repository URL, access token, etc.
- Hit Migrate
And voilà: the repository, including extra features like the wiki, is migrated automatically.
In total, I did this for 7 repositories without issues. Afterwards, it was just a matter of updating the remote repository URL for my local clones:
```sh
git remote set-url origin https://forgejo.neshweb.net/Firq/firq-dev-website.git
```
Since the underlying git repository didn’t change, migration was painless on the client side of things.
Reworking CI
One of the major parts of the migration was porting my working GitLab CI to Forgejo Actions. As I had never worked with it before (or with GitHub Actions, which Forgejo Actions is based on), it was … interesting, to say the least.
I stumbled over multiple issues while transforming my old `.gitlab-ci.yml` file into a new `build_release.yml`, but in the end I got there. My main issue was that I imagined the CI would simply work like GitLab's - but I was mistaken.
My main points of confusion, both illustrated in the sketch below, were:
- Files are not checked out automatically - a manual run of the checkout action is necessary, otherwise the files will simply be missing
- The working directory is not preserved between different run commands (which became apparent early on)
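A minimal job sketch showing both quirks (the job name and the `src` directory are made up for illustration):

```yaml
jobs:
  demo:
    runs-on: docker
    steps:
      # Without this explicit step, the workspace is simply empty
      - name: Checkout repository
        uses: https://code.forgejo.org/actions/checkout@v3
      - name: Change into a subdirectory
        run: cd src && pwd    # only effective within this step
      - name: Next step starts fresh
        run: pwd              # back at the workspace root again
```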
But after around two hours of tinkering, I managed to get everything working (and even improved some steps, such as automatically fetching the known_hosts entry instead of hardcoding it into the secrets). Still, it felt a bit unsatisfying, as I was relying on an SSH connection with rsync to deploy the files. Unbeknownst to me, this was about to change drastically …
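The known_hosts improvement boils down to running ssh-keyscan at the start of the deploy job. A sketch of the idea, with a placeholder hostname, path, and secret name rather than my actual configuration:

```yaml
      - name: Set up SSH
        run: |
          mkdir -p ~/.ssh
          # Fetch the host key on the fly instead of storing it as a secret
          ssh-keyscan -H deploy.example.com >> ~/.ssh/known_hosts
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
      - name: Deploy via rsync
        run: rsync -avz -e "ssh -i ~/.ssh/id_ed25519" dist/ deploy@deploy.example.com:/var/www/site/
```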
Such Innovation: Moving from a dedicated website server to Docker containers
After migrating and finishing up, Neshura showed me how he deploys the main website of his server: with a Docker container. After seeing how much easier this would be in the long run, I decided to just go for it and switch from building the static files and syncing them to a webserver to building my own container.
Generally, this turned out to be a lot easier than expected: I added a Dockerfile to my repo and switched the CI to build a container based on it, which then gets published to the Forgejo registry.
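In essence, the workflow logs in to the registry, builds the image, and pushes it. A sketch of the idea, assuming the runner can reach a Docker daemon - the image path and secret names are placeholders:

```yaml
jobs:
  build-container:
    runs-on: docker
    steps:
      - name: Checkout repository
        uses: https://code.forgejo.org/actions/checkout@v3
      - name: Log in to the Forgejo registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login forgejo.neshweb.net -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Build and push the image
        run: |
          docker build -t forgejo.neshweb.net/firq/firq-dev-website:latest .
          docker push forgejo.neshweb.net/firq/firq-dev-website:latest
```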
The `Dockerfile` itself is rather simple:
```dockerfile
# Build stage: compile the static site
FROM node:lts AS build
WORKDIR /app
COPY . .
RUN npm i
RUN npm run build

# Runtime stage: serve the built files with the custom serve image
FROM forgejo.neshweb.net/ci-docker-images/website-serve:latest AS runtime
COPY --from=build /app/dist /public
COPY --from=build /app/serve.json /public/serve.json
# Strip the raw data files from the published site
RUN rm -r /public/assets/data/
ENV PORT=8081
EXPOSE 8081
CMD [ "serve", "public/", "-p", "8081" ]
```
As you can see, I am using a custom container for the runtime stage, which will be explained in the next section.
Custom serve docker - My new go-to for static site serving
When starting out with the `Dockerfile`, I first used the standard `node:lts` image for the runtime stage. This meant I also had to install the `serve` package by @warren-bank each time the container was built. Since this costs extra time and resources on every run, I decided to create a pre-configured Docker image to use instead.
The `Dockerfile` for that one is laughably simple:
```dockerfile
FROM node:lts
RUN npm install --global "@warren-bank/serve"
```
The image is also published to the Docker registry that the Forgejo instance provides, which makes it easy to reference during CI runs.
Deployments using Dockge
Since my website now runs in a Docker container instead of using the previous `rsync` + `screen` approach, a new deployment solution was needed.
In the end, Neshura proposed Dockge, a new, simple container management tool built by the developer of the beloved uptime-kuma. With that set up, getting the website online was really easy (the resulting stack is sketched after the list):
- Create a new stack
- Add a container entry
- Fill the new entry with the URL of the website container on the registry
- Configure the ports
- Hit start
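Under the hood, a Dockge stack is just a docker-compose file, so the result looks roughly like this (the image path is a placeholder):

```yaml
services:
  website:
    image: forgejo.neshweb.net/firq/firq-dev-website:latest
    restart: unless-stopped
    ports:
      - "8081:8081"   # matches the port exposed in the Dockerfile
```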
After that, it was just a matter of pointing nginx to the new IP address and port that Dockge uses. Just like that, the website was online.
But this still wasn’t the end of my migration tasks.
Unlighthouse - Implementing website testing without worries
Before I even planned to migrate to Forgejo, I had already implemented some simple site benchmarking using the Unlighthouse package. This required a separate instance of my site to be running for benchmarking, as I wanted to test the site before pushing any changes to the main domain.
The same principle still applies: I can push changes to my dev branch and build a preview container by pushing a preview tag. Once that's built, I can deploy the preview to a staging environment using Dockge.
With that set up, I can push a new tag with a keyword to make unlighthouse run. The reports are then uploaded to the old webserver, as I don't want to build extra containers just for the reports.
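The trigger for this is just a tag filter in the workflow. A sketch with hypothetical tag patterns (my actual naming scheme may differ):

```yaml
on:
  push:
    tags:
      - 'preview-*'       # build and publish the preview container
      - 'unlighthouse-*'  # additionally run the unlighthouse benchmark
```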
Implementing this proved a lot easier than expected, but sadly I was disappointed when trying to circumvent the staging environment, as the container wouldn't work as a service in Forgejo Actions.
Dedicated unlighthouse docker
One last hurdle was, ironically, GitLab. Specifically, it was their lighthouse container I had been using for testing. The main issue was that this container runs as a non-privileged user. In general, this is not a problem. Here, however, it prevented me from actually cloning my repository using Forgejo Actions.
After some testing, I decided to just make my own version of the lighthouse container, without the restricted user but with unlighthouse preinstalled (this also helps with processing times, as puppeteer takes a good amount of time to install on each run).
The `Dockerfile` can be found below (really simple again):
```dockerfile
FROM node:20.10.0-bookworm
LABEL authorname="firq"

WORKDIR /unlighthouse
ENV CHROMIUM_VERSION="120.0.6099.109-1~deb12u1"
ENV NODE_ENV='production'
# Make the locally installed CLI binaries available on the PATH
ENV PATH="/unlighthouse/node_modules/.bin:${PATH}"

# Install a pinned Chromium build for puppeteer to drive
RUN apt-get update && apt-get -y install --no-install-recommends chromium=${CHROMIUM_VERSION} procps && rm -rf /var/lib/apt/lists/*
RUN npm install @unlighthouse/cli puppeteer
```
With this container, the CI step to actually run the tests became a lot easier:
```yaml
jobs:
  unlighthouse:
    runs-on: docker
    container: forgejo.neshweb.net/ci-docker-images/unlighthouse:latest
    steps:
      - name: Checkout repository
        uses: https://code.forgejo.org/actions/checkout@v3
      - name: Run unlighthouse
        run: unlighthouse-ci --site "https://preview.firq.dev/"
      - name: Prepare artifacts
        run: cp serve.json unlighthouse-reports
      - name: Upload reports
        uses: actions/upload-artifact@v3
        with:
          name: unlighthouse-reports
          path: unlighthouse-reports/
```
Once the tests had run, the artifacts would be uploaded to the website server for the time being.
Conclusion
In the end, I must say migrating was a lot more painless than expected. Sure, Forgejo is missing some of the features that GitLab offers (mainly YAML anchors and manual CI actions, which will hopefully be implemented in the near future). But at the end of the day, it actually feels refreshing to now have a stable and independent CI to deploy this site, without having to construct weird solutions to self-inflicted problems.
I also updated my about page to now reflect the migration, as the old technologies weren’t up-to-date anymore.
If you want to check out the repository yourself, feel free to do so. It is available on Neshura's Forgejo instance.