i feel like it's obligatory, at some point, on any blog that talks about tech, to have a post that goes "oh here's how i deploy it by the way". Now it's my turn, i guess? But this isn't simply "how i deploy"; it's more of "how i perfected my deployment to go from 11 minutes of CI to about one and a half, and what i learned in the process". In the end, i guess, this is more of a post about CI/CD.
If you want the full file as it is, you can go here.
# My Environment
There's only one thing worse than making CI/CD pass: debugging it.
Continuous Integration and Delivery is a practice in devops (remember when that was the buzzword?) that consists of automating checks and deployment in a development forge. The term also refers to the tools that let you do such automation.
My environment is as follows:
- A git forge running Forgejo with "actions" (the Forgejo slang for CI/CD, taken from GitHub Actions)
- A blog written with the Zola engine
- A web server that serves flat files (your browser likely interacted with it to read this)
There is additional complexity surrounding the forge and how its actions run:
- There are 5 runners on my forge server
- All runners are docker containers, which can themselves spawn docker containers
It has happened in the past that i needed features not yet available in Zola, or encountered bugs that were not yet fixed in the latest release. If i wanted pre-made docker images with the exact Zola i need, i would have to build them myself, which requires a lot of work behind the scenes: manually connecting to the host, pulling a docker image, running it, installing the necessary version of Rust if not already present, then the version of Zola i want, then tagging the result. Even when this is all automated in a Dockerfile, this way of doing things requires backend admin access, which i cannot guarantee everyone has, and which i cannot guarantee i would have on another Forgejo instance.
So, another condition on the infrastructure:
- i am not using pre-made docker images that already contain Zola
# The Basic Version
In DevOps, a "workflow" is a set of different "jobs" that are themselves sequences of steps that run in a given environment. You can have multiple workflows in a repository, multiple jobs in a workflow (with conditions as to which runs before which), and, of course, multiple steps per job. My repository will have one workflow, with one job called `build-website`.
i start by creating a very simple YAML file in `.forgejo/workflows/` at the root of my repository. The skeleton of the workflow file is as follows:
```yaml
name: Deploy Vulpine Citrus
on: push
jobs:
  build-website:
    runs-on: ubuntu-22.04
    container:
      image: rust:latest
    steps:
      - name: Install node
        run: |
          apt update && apt install --yes nodejs
      - name: Check out sources
        uses: actions/checkout@v3
      - name: Checkout submodules
        run: git submodule update --init --recursive
```
There are various points to discuss in the skeleton already:
- All of my runners are tagged as 'ubuntu-22.04' (meaning that, by default, they would run your action in a Docker container running an Ubuntu 22.04 image)
- i give my action a name, and tell it to run every time something is pushed to the repo
- Because other projects already running on the forge use a Rust image, and because i do not depend on a given minimum version, i make the runner deploy my job in a `rust:latest` container
- Complication #1: `rust:latest` does not include `node` by default, which is necessary for the actions that pull your sources (for some reason), so it is installed before anything else
- My blog repo uses a submodule, so it is checked out explicitly as well
## Installing Zola
Now onto the other steps: i want to install Zola using cargo, so i add the
following step:
```yaml
- name: Install Zola
  run: cargo install --locked --git https://github.com/getzola/zola.git --tag v0.20.0
```
The option `--locked` makes cargo respect all dependency versions listed in the `Cargo.lock` file from the repo. This is recommended, because dependencies might have new versions that were not yet proven to build correctly (some people just break semantic versioning, sometimes)1.
## Building the Blog
Once zola is installed as an executable, we can run it:
```yaml
- name: Build site
  run: zola build
```
And now, the flat web files are available in `./public/`. We need to transfer them... somehow?
## It Gets Weird: Transferring to the Server
Transferring files to the server is done this way: the old blog directory on the
server is wiped, and the entire public/ folder from the pipeline is transferred
via SSH to re-populate it.
We need to make use of a feature in Forgejo Actions called Secrets. Secrets allow you to create protected variables that your actions can use but that cannot be read back. In my setup, i create two such variables: `CONNECT_KEY` and `REMOTE`. The former is an SSH private key, and the latter is a remote URL in the format of `user@host`. That user has write permissions on the blog directory.
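If you are setting this up from scratch, the key pair can be generated locally; here is a minimal sketch (the file name `deploy_key` and the address `deploy-user@your.server.example` are illustrative, not from my setup):

```sh
# Generate a dedicated key pair for the pipeline; no passphrase,
# since the runner must use it non-interactively:
ssh-keygen -t ed25519 -f deploy_key -N ""
# Install the public half for the deploy user on the server:
ssh-copy-id -i deploy_key.pub deploy-user@your.server.example
# The contents of ./deploy_key then go into the CONNECT_KEY secret.
```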
Then, after way too much engineering, we have this:
```yaml
- name: Upload to server
  run: |
    echo "${{ secrets.CONNECT_KEY }}" >> key
    chmod 600 key
    ssh -o StrictHostKeyChecking=no -o IdentityFile=key ${{ secrets.REMOTE }} "rm -rf /var/www/vulpinecitrus/*"
    scp -o StrictHostKeyChecking=no -o IdentityFile=key -r ./public/* ${{ secrets.REMOTE }}:/var/www/vulpinecitrus/
    ssh -i key ${{ secrets.REMOTE }} "chown -R www-data:www-data /var/www/vulpinecitrus"
    rm key
```
Multiple things to say here:
- The secret value `CONNECT_KEY` is written to a file called `key`, and its permissions are set (to `0o600`) such that SSH does not complain that someone other than the owner of the file can read its contents or write to it
- Using `ssh`, i begin by deleting the entirety of `/var/www/vulpinecitrus/`, where the old blog tree resides
- Using `scp`, i move all the files from `public/` to the folder that was just wiped
- Ownership information is set so that `www-data` is both the owner and the group owner for the new files
- All the SSH commands use `StrictHostKeyChecking=no`, which is technically terrible, because it bypasses first-time fingerprint checking. This was added because the pipeline Docker container, being a 'fresh' install, does not know the remote server. If you want to be more secure, you can add a `REMOTE_FINGERPRINT` variable, and add it to `~/.ssh/known_hosts` prior to opening remote connections (a sketch of this follows the list)
- `IdentityFile=key` (or `-i` for `ssh`) tells the programs to use `key` as the private identity key presented to authenticate with the server
- The key, of course, is destroyed by the action script
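For that last, more secure variant, here is a minimal sketch of a step you could add before the upload, assuming a `REMOTE_FINGERPRINT` secret holding the server's public key line (as printed by `ssh-keyscan`); with it in place, the `StrictHostKeyChecking=no` options can be dropped:

```yaml
- name: Trust the remote host
  run: |
    mkdir -p ~/.ssh
    # REMOTE_FINGERPRINT holds a full known_hosts line, e.g. the
    # output of: ssh-keyscan -t ed25519 <your server>
    echo "${{ secrets.REMOTE_FINGERPRINT }}" >> ~/.ssh/known_hosts
```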
And with this, we have a very basic action: it gets the sources, builds Zola, builds the site, and deploys it. But it has things i would want to improve on:
- The website gets updated every time something is pushed, including files that should not trigger rebuilds
- Only the `main` branch should be deployed and trigger the CI/CD
- Having a copy of the blog tree that can be downloaded as an archive from the action would help debug future problems
- On the server running Forgejo, building Zola takes 8 minutes
# Improvement 1: Conditional Building
Forgejo Actions borrows its description language from GitHub Actions, which allows for conditional building. With the push event type, you can specify a set of path patterns (globs) and branch names that correspond to the desired triggers. In my case, this is:
```yaml
on:
  push:
    paths:
      - config.toml
      - static/**
      - sass/**
      - templates/**
      - themes/**
      - content/**
      - .forgejo/workflows/deploy.yaml
    branches:
      - main
```
The action will now only run if something is pushed on main that is either: a change in configuration, a new static file, a change in CSS, a change to the templates, a change to the theme files, a change to content (posts), or a change to the workflow file itself.
# Improvement 2: Artifacts
Artifacts are blobs of data generated by CI/CD. In my case, i want a copy of the deployed blog root in a ZIP file so i can debug it in case something does not make sense in the hierarchy of files. Uploading artifacts on Forgejo Actions is a pretty janky process, but i figured out how to make it work.
To understand that, however, we need to go on a small tangent about all of these step scripts, like `actions/checkout@v3`. This is GitHub Actions syntax, which originally meant "take the code at ref `v3` of the repo https://github.com/actions/checkout/, and run it". The action available for saving artifacts, `upload-artifact@v4`, is capable of saving a file or set of files. In my experience with `@v3` on Forgejo, it was more reliable to ZIP the files yourself and save the archive. The fourth version of that action, however, removed compatibility with anything that isn't GitHub. While Forgejo actions are not retrieved from GitHub, a lot of them are simply mirrors of the GitHub version. As a result, when using `actions/upload-artifact@v4` or `actions/download-artifact@v4` (which can be used for two unrelated CI action scripts to exchange files; that's another use of artifacts), you have to use the version available at https://code.forgejo.org/forgejo/. Because of how actions work, you can absolutely just specify a URL instead of `actions/whatever`.
Next, the action will need a unique name for the artifact. In my case, i want to call it `vulpinecitrus-{branch}-{short commit ID}.zip`. Given that i need to zip the files (and `zip` is not installed by default), this gives us:
```yaml
- name: Install node & zip
  run: |
    apt update && apt install --yes nodejs zip

# ...

- name: Compress
  run: zip -r public.zip public/
- name: Create file name
  run: |
    forge_ref=${{ gitea.ref }}
    echo "ART_FILENAME=vulpinecitrus-${forge_ref#refs/heads/}-$(git rev-parse --short ${{ gitea.sha }})" >> "$GITHUB_ENV"
- name: Save as artifact
  uses: https://data.forgejo.org/forgejo/upload-artifact@v4
  with:
    name: ${{ env.ART_FILENAME }}.zip
    path: public.zip
```
Here's how it goes:
- i add `zip` to the packages that are installed at the start
- The `public/` directory is compressed
- A step is added to create a variable in the environment of the build script called `ART_FILENAME`. Variables are created by appending them to the file pointed to by `$GITHUB_ENV` (to this day, i do not know if that name has been changed yet for Forgejo, but the GitHub version works; see the minimal sketch after this list). The value placed in there is a combination of the various things i wanted: the short commit hash (`git rev-parse --short ${{ gitea.sha }}`), and the name of the branch (`${{ gitea.ref }}` without the leading `refs/heads/`)
- The `ART_FILENAME` variable is later accessed in the `env` object
- Finally, we save the artifact, uploading `public.zip` and calling it `${{ env.ART_FILENAME }}.zip`
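If the `$GITHUB_ENV` mechanism feels opaque, here is a minimal self-contained sketch of it (the step names and the `GREETING` variable are purely illustrative):

```yaml
# One step appends KEY=value lines to the file $GITHUB_ENV points at...
- name: Write a variable for later steps
  run: echo "GREETING=hello" >> "$GITHUB_ENV"
# ...and every subsequent step can read them through the env object.
- name: Read it back
  run: echo "${{ env.GREETING }} from a later step"
```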
As you can see, the layers of software history are unraveling. When this was originally written, Forgejo had barely forked from Gitea, meaning that all the CI/CD infrastructure still used `gitea` as a namespace for variables. There are also leftovers of GitHub here and there.
Now, i do not remember everything i did on the server side to actually get artifacts running. Memory tells me that there is some configuration that needs changing, but chances are that it is already available on your instance if your admin has enabled actions at all.
Adding artifacts to the steps adds fourteen seconds to my workflow. That is not a lot for me, but if you find that it takes too long, you can always place the part that transfers to the remote server first.
# Improvement 3: Caching Zola Build
With all of that said, building Zola still takes ages. CI/CD systems introduced caches to deal with this kind of problem, so let's use them. This will be pretty straightforward, but i will use one trick.
We will initially try to retrieve a cache called `zola-{zola version}` before building. To do so, a new environment variable is added to the workflow, so that we can consistently modify the desired version everywhere in the script2. This is placed at the root of the YAML file, right under the `name:` key:
```yaml
env:
  ZOLA_VERSION: v0.20.0
```
Then, we attempt to retrieve the cache in our steps:
```yaml
- name: Retrieve Zola cache
  uses: actions/cache/restore@v4
  with:
    path: |
      /usr/local/cargo/registry/index/
      /usr/local/cargo/registry/cache/
      /usr/local/cargo/bin/zola
      /usr/local/cargo/git/db/
    key: zola-${{ env.ZOLA_VERSION }}
```
i am saving the registry cache and index, the database of git dependencies, and the `zola` binary in the cargo binary folder. Later, after building, they will be cached once again under the same key:
```yaml
- name: Save Zola cache
  uses: actions/cache/save@v4
  with:
    path: |
      /usr/local/cargo/registry/index/
      /usr/local/cargo/registry/cache/
      /usr/local/cargo/bin/zola
      /usr/local/cargo/git/db/
    key: zola-${{ env.ZOLA_VERSION }}
  timeout-minutes: 2
  continue-on-error: true
```
By default, when retrieving a cache, the action will still succeed if the cache does not exist. On the other paw, when saving, the action will by default fail the entire workflow on error. This is not a critical component, so we can allow ourselves to continue if it errors out. In practice, the cache size for these settings is around 139 MB on my systems.
## Actually Building Zola (But Only If Needed)
Now, here is an excellent trick. i want to know if Zola is already installed, and only force its install if the version changed (or if it is absent for some reason). The commands that we run in Forgejo actions run on `sh`, the default UNIX shell, which means i can use conditions and chains of commands. Specifically, i want to build Zola only if it does not exist in the `$PATH` or if its version is different from the expected one. Formulated otherwise, "either Zola exists and has the correct version, or i build it".
That first condition, expressed as a command, looks something like this:

```sh
which zola &>/dev/null
```

If zola is not in the `$PATH`, that command will fail. We can chain this with

```sh
[ "v$(zola --version | cut -d' ' -f 2)" = "${{ env.ZOLA_VERSION }}" ]
```

Note the "v" at the start (because Zola formats its version differently in the command output from how the developers tag releases). Chaining both with `&&` means "Zola exists and is the expected version". If that succeeds, then nothing happens.
If that fails, we build:
```yaml
- name: Install latest zola
  run: ( which zola &>/dev/null && [ "v$(zola --version | cut -d' ' -f 2)" = "${{ env.ZOLA_VERSION }}" ] ) || cargo install --force --locked --git https://github.com/getzola/zola.git --tag ${{ env.ZOLA_VERSION }}
```
A new option is added, `--force`, to overwrite the existing binary. Without it, even when the version changes, cargo would refuse to install because it notices that a binary of the same name already exists in the `/usr/local/cargo/bin/` folder.
And voilà. If everything worked correctly, the time taken to actually run the workflow should be reduced drastically. In my case, Zola builds the first time any workflow runs, and the build step takes 0 seconds every time after. If you modify the key name or the list of paths, however, the cache will be invalidated, and you will need to wait for a full rebuild.
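A related trick, not in my file but common with these cache actions: if you ever need to throw a cache away without touching `ZOLA_VERSION`, append a manual revision suffix to the key (the `-r1` below is purely illustrative) and bump it in both the restore and save steps:

```yaml
- name: Retrieve Zola cache
  uses: actions/cache/restore@v4
  with:
    path: |
      /usr/local/cargo/bin/zola
    # Bumping -r1 to -r2 makes the lookup miss, forcing a clean rebuild.
    key: zola-${{ env.ZOLA_VERSION }}-r1
```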
# So What Did We Learn Exactly
There are various things i learned throughout this whole ordeal:
- Sometimes you can just throw a hack together (`5617daf`, Nov. 2023) and optimize it later (`22b14cf`, Jul. 2025), when it gets really annoying
- Every piece of software is a castle built on sand. If you stray enough from tested features, if your use case is complex enough, you will pull back at the walls of user experience like one would pull away the fabric of reality, and you will unravel the certainty of everything around you. You will forget the comfort of software that works. You will lose yourself in those settings, those crappy hot fixes. The knowledge of some of the things upon which your infrastructure hinges will become a part of you, until you, yourself, become diluted enough that, some day, you will look in the mirror, and wonder how much of you is code? How much of you is commands you memorized? Do machines also feel pain? Can a hard drive creak the same way your bones crack when you sit up?
- Hosting a git forge is probably worth it, even if you're the only one using it. Enabling CI/CD on it if it's only for you is your choice to make, though. The entire CI/CD setup of my forge was carefully set up and debugged separately over the course of a couple of weeks for another project i host there. The workflow to deploy this blog piggybacks off of it, but it would not be worth doing the whole setup of action runners just for that3
- The way CI/CD works today is almost entirely dictated by how one corporation did it in their system. GitLab is trying something different, but, in the end, the GitHub way of doing things seems to be winning out (which can create friction, like development only considering one platform, as happened with the artifact actions)
- If you have some folks whose entire work it is to do these things all day, buy them coffee. Seriously.
- Software history is fascinating. At a glance, you can probably guess that, to get to a point where actions work in Forgejo, four different groups who made software (GitHub, Gogs, Gitea, Forgejo) were involved. Interestingly, there seems to be an overlap between the people behind the last three, and especially the last two.
If you followed along and learned something, let me know. Otherwise, i hope you now have a good little workflow file to deploy your blog. Have fun!