I’ve managed to implement a CI/CD workflow for this site - it took a while, because I was overcomplicating matters. I also needed a lot of help from my brother, who is much more familiar with Unix than I am.
First Attempt
I started off with the Hugo GitHub Actions template to give me something to work from.
My idea for the workflow was the following:
- Set the site to “update mode” - replace pages that won’t work during the update with informative versions
- Delete the assets that will be regenerated by the build - Vite will recompile the CSS/JS and change the filenames, so I don’t want a build-up of unused files on the server. Deleting them is also what breaks the pages mentioned above
- Copy the site across to the server
I thought it would be best to do all of this over a single connection - whilst I may not be subject to rate limiting, opening several connections just feels wrong and inefficient.
To help with this, I found something in the GitHub Actions Marketplace - ssh-scp-ssh-pipelines - the description says that it can run ssh commands on a server before (and after) scp using a single connection. Perfect!
Except, not. Running it would throw up exceptions and fail. This is because it spins up a Docker container running Debian 10 to do the work in Python, and Debian 10 reached its end of life three years ago.
Thinking further about this, the question is: why?! Python could be installed on Ubuntu, Windows, or probably any other runner OS that GitHub provides. It just seems horrifically wasteful (and emphasises to me that I shouldn’t blindly trust anything in the marketplace).
Second Attempt
I had a further look around the marketplace and found something that used Node.js (rather than spinning up another container to run code), but I had already been stung. I decided to try to do this myself.
Thus, I wrote a bash script to do steps 1 and 2 above and added it to the repository (as I intend to add more functionality, there will be more unusable pages whilst updating, so the script will need to grow).
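For reference, the script boils down to something like this - the paths and the placeholder page name here are illustrative, not my exact layout:

```bash
#!/usr/bin/env bash
# update.sh - put the live site into "update mode" and clear out old build assets.
# SITE_ROOT and updating.html are illustrative names, not the real layout.
set -euo pipefail

SITE_ROOT="/var/www/site"

# Step 1: swap in an informative placeholder for pages that won't work mid-update
cp "$SITE_ROOT/updating.html" "$SITE_ROOT/index.html"

# Step 2: delete the assets Vite will regenerate under new hashed filenames,
# so stale CSS/JS doesn't accumulate on the server
rm -f "$SITE_ROOT"/assets/*.css "$SITE_ROOT"/assets/*.js
```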
This time I wrote the ssh and scp commands myself (sketched after this list), resulting in the following steps:
- Use scp to copy the update script to the server
- Use ssh to run chmod to make the script executable, then execute the script
- Use scp to copy the new website to the server
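In the workflow, those steps came out as something like the following - the username, server, and folder names are placeholders, and the exact flags may have differed:

```bash
# 1. Copy the update script to the server
scp ./update.sh username@servername:~/update.sh

# 2. Make it executable, then run it
ssh username@servername 'chmod +x ~/update.sh && ~/update.sh'

# 3. Copy the freshly built site across
scp -r ./public/* username@servername:destinationfolder
```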
This was 3 connections, but the best my ignorance could conjure. All working… right?
Nope. Something that was obvious in hindsight was that the “I’m updating” page would be unstyled, because the CSS is deleted in the second step. There would be, at most, a split second where things displayed correctly.
Another issue that I didn’t encounter, but did discover in my further reading, was that scp wasn’t a good choice for the transfer. Apparently, when it is given a wildcard (e.g. scp ./public/* dest), the shell expands it into one argument per file before the transfer starts, and if that argument list is too long the command falls over. I don’t know exactly where the limit sits, but why tempt fate?
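To spell out what’s going on: the asterisk is expanded by the local shell, not by scp, so every matching file becomes a separate argument, and the operating system caps how long that argument list can be. A quick way to see the cap on a given machine (the host and folder below are placeholders):

```bash
# The shell expands the glob before scp even starts, so this:
scp ./public/* username@servername:destinationfolder
# really runs as:
#   scp ./public/404.html ./public/index.html ./public/css ... username@servername:destinationfolder

# The kernel limits the total size of that argument list:
getconf ARG_MAX   # typically around 2 MB on Linux
```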
Final Attempt
There were two main factors that influenced my final approach:
- The impracticality of the “updating” page as mentioned above
- The file transfer took a couple of seconds
If the file transfer took minutes, it would make sense to press on with the update-mode approach, but since it was such a tiny length of time, there was no real point.
As such, the solution was to trim it all down to a single line of code:
```bash
rsync -va public/ username@servername:destinationfolder --delete
```
This copies the site across and deletes anything on the server that is no longer part of the build. It minimises interruptions, and it works fine. Sometimes the simplest solution really is the best.
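If you’re wiring this into a GitHub Actions step, the only extra wrinkle is authentication. Something along these lines should work, though the key path, user, host, and folder are placeholders and the private key would come from a repository secret:

```bash
# Illustrative deploy step - assumes the workflow has already written a
# deploy key (stored as a repository secret) to ~/.ssh/deploy_key.
rsync -va --delete \
  -e "ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=accept-new" \
  public/ username@servername:destinationfolder
```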