Linux Packages For Rust (2/3) - Building with GitHub Actions using Custom Actions and Docker Container Images
A look into vending a Rust project for various OSes and CPU architectures.
Ryan Gorup - Founder
Background: Ebbflow is a multi-cloud load balancer that provisions browser-trusted certificates for your endpoints in addition to providing a client-friendly SSH proxy. Servers can host endpoints from any cloud, on premises, or from your home, all at the same time. It uses nearest-server routing for low latencies and is deployed around the globe.
This post describes how Ebbflow vends its client, which is written in Rust, to its Linux users, detailing the tools used to build the various packages for popular distributions. This guide assumes we can already build .deb and .rpm packages locally, which we covered in our previous blog post; now we want to move towards distribution. Specifically, we will discuss how the client is built for all of its target platforms using GitHub Actions. The Docker images and GitHub Actions that we discuss are all public and usable by anyone. Here's what we have in this post:
In our next blog post of this series we will talk about the final step of getting the package into the hands of our users by detailing our package server release process.
Ebbflow Client 101
Ebbflow's job is to route user traffic to your web server application (or SSH daemon) on your servers. The central Ebbflow service proxies data between user connections (e.g. browser, SSH client) and the client. The client then proxies the data to your web server or local SSH daemon. Users of Ebbflow install the client on machines that will host endpoints, be SSH-ed to, or both at the same time (which is very useful). You can find the client's code on GitHub. We talked about taking the client's binaries and packaging them into .deb and .rpm packages in the previous blog post. The client is also vended to Windows users.
It's one thing to have your code written and be able to build it on your development machine; it's another thing to build the project for multiple OSes or CPU architectures in any sane way. I could harp on this for a long time, but I'll spare you the trouble and continue to tell you how the Ebbflow client is built for its numerous target platforms.
There are many methods to build your code. Typically you will hook up some build service so that when you
git push or otherwise merge in code, a build will automatically kick off and spit out the resulting artifacts. As far as build services go, Travis CI is popular, there's Jenkins, AWS CodeBuild, Azure Pipelines, and many others. GitHub, where the Ebbflow client's source code is hosted, launched GitHub Actions, another continuous integration and building platform.
GitHub Actions has two important things going for it. First, the code is hosted through GitHub, so integration is a piece of cake. Second, there is a marketplace of Action definitions built by other people that you can use in your workflow. This marketplace makes it super easy to use, and you can begin building in minutes.
The build process we are going to discuss can get confusing, especially if you've never worked with distributed build platforms, Docker, or GitHub Actions before. GitHub Actions executions start at the highest organizational unit, the Workflow:
- Workflows run Jobs
- Jobs execute on Runners which are virtual build environments
- Runners are provided by GitHub and run either Ubuntu, macOS, or Windows
- Jobs have Steps, which can use Marketplace Actions or just do simple things like execute individual shell commands
- Steps run on the Runner image, or can run in custom Docker containers (!!)
Keep in mind that the word Action is overloaded or is confusingly used in general. The entire build framework/product/service is named GitHub Actions. But, inside of your Workflow you specify other 'Actions' that can be from the marketplace. And when you use an Action in your Workflow, it's really just a Step of that Workflow, and the word 'Action' doesn't really get used in your Workflow. It's a little sticky!
GitHub Actions QuickStart if you want to try it yourself!
To get started with GitHub Actions & Rust code, click the 'Actions' tab in your GitHub repo and start with one of the simple suggested workflows, likely a language-specific one like the first-party Rust build workflow GitHub provides. For more advanced Rust build settings, check out rust-cargo for running any
cargo command and rust-clippy-check for running Clippy, both created by the actions-rs group.
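As a quick illustration, a minimal workflow using the actions-rs cargo Action might look like the following sketch (the file name, workflow name, and action versions are illustrative, not a verbatim Ebbflow file):

```yaml
# .github/workflows/ci.yml -- minimal illustrative Rust build workflow
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository onto the Runner
      - uses: actions/checkout@v2
      # Run `cargo build --release` via the actions-rs cargo Action
      - uses: actions-rs/cargo@v1
        with:
          command: build
          args: --release
```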
Now, if you remember from our last blog post, we build our Rust code into proper Linux packages, namely the .deb and .rpm formats. To do this, we use the cargo deb and cargo rpm commands provided through the cargo-deb and cargo-rpm crates respectively. We could simply install these commands onto the Runner that GitHub provides using the rust-cargo-install action. Sounds great!
One problem: cargo-deb and cargo-rpm require OS-installed tools such as rpm-build. We are also statically linking against musl, which likewise needs to be installed on the OS. GitHub pre-installs a lot of software, but its Linux runner runs Ubuntu, and installing rpm-build on Ubuntu is possible but not simple.
Instead of dealing with GitHub's runners, we could take advantage of what we described earlier: the ability to run a step on a custom Docker container. This will allow us to create images that have everything set up exactly how we need them, and we can build directly on a given distro that we are targeting. Let's look at how we created those images.
Our goal is that when we
git push new client code, GitHub Actions will start our workflow which will execute certain build steps in our custom Docker images. To make this happen, we will create Docker images that have all of the OS packages we need, then reference these images in an Action, then reference that Action in our Workflow. We will discuss these topics from the ground up.
GitHub Actions allows you to bring your own container in a feature named Container Actions, where your build executes inside a Docker image you provide, one in which we will install Rust and all of the OS packages we need. It's pretty fantastic, actually: you adhere to a small spec for your Dockerfile, and then your GitHub Action definition just runs inside the container when you want it to.
The general process for developing these images, and creating a container which can build our packages with all of the necessary dependencies, is to start with a base image for a given distro such as Debian or Fedora and install Rust from there. I would run a cargo build in this container to make sure my code works before worrying about the .deb or .rpm packages. I then install musl and build a statically linked target to make sure that works. Finally, I install cargo deb or cargo rpm, execute it, and fix any missing dependencies until the build succeeds. Tip: don't squash your image into one small layer until you are done, or your builds will take a long time during development.
The container integration with GitHub Actions is pretty interesting when you think about it. They have a Runner running Linux (or macOS or Windows) and your steps run directly on it, but some of your steps can run inside custom containers alongside your previous or subsequent commands. You could have a single Job use multiple containers, execute commands in each of them, and copy files between all of them. Your containers can see the source code that you likely copied as the first step of the Job. Note that the source code was copied onto the GitHub Runner before your image was pulled!
The secret sauce is explained in the Dockerfile guide that you follow when creating the Docker images: GitHub mounts the working directory used so far onto your container. This mount does the trick of giving your container access to files from the GitHub-hosted Runner and vice versa. Pretty simple, pretty neat!
Here are the various package and architecture targets and links to their sections. All of these images are open source and available on GitHub and Docker Hub, and are also usable through the GitHub Actions we discuss later.
| CPU arch. | Distros |
| --- | --- |
| amd64 | Ubuntu & Debian |
| armv7 | Raspbian (or Raspberry Pi OS) |
Poking around the internet when starting this process, I saw a Docker image for building statically-linked Rust projects named muslrust by GitHub user clux. This image provided everything we needed for our images, except that it doesn't have the
cargo deb command installed. So, I simply created a new Docker image from this image and added the
cargo deb installation. Full credit goes to the muslrust project which inspired all of our other images which we discuss below. Here is our
Dockerfile for the
.deb build image in its entirety:
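A hedged sketch of the shape of that Dockerfile follows; the exact contents are on GitHub, and the entrypoint details here are assumptions based on the description in this post:

```dockerfile
# Sketch of the .deb build image, based on muslrust plus cargo-deb;
# the real file lives in the ebbflow repositories.
FROM clux/muslrust:stable
# Install the cargo-deb crate so `cargo deb` is available in the container
RUN cargo install cargo-deb
# The entrypoint cd's into $GITHUB_WORKSPACE before running the given command,
# matching GitHub's Container Action spec.
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```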
Note that in the above we reference $GITHUB_WORKSPACE. This is because we adhere to the Dockerfile guide that GitHub published, which we must follow so that our images work with their magic. For local development of this image, even with that cd in place, I run the following commands:
```shell
# Build the image
docker build -t namespace/tag /path/to/dockerfile

# Once inside your Rust project's root, where you would normally `cargo deb`..
docker run -v $PWD:/volume --rm --env GITHUB_WORKSPACE=. -t namespace/tag cargo deb
```
You can see this image's Dockerfile on GitHub alongside the other files related to this container. I want to point out that in our testing, the packages built on this image work on both Ubuntu & Debian systems so we never created a Debian specific one (muslrust uses Ubuntu). This compatibility is not the case for Fedora & openSUSE, which I talk about shortly. I'm going to punt on explaining the final GitHub Actions integration and I'll continue talking about the other images.
Raspberry Pis are a target of Ebbflow's and use the Arm CPU architecture, which differs from nearly all desktop PCs and servers. To target this architecture, we can either run the build on a server that has an Arm chip using GitHub Actions' self-hosted runner support, or emulate the Arm chip using QEMU and Rust's cross-compiling support. The latter seemed easier and doesn't require hardware (although we have a few Raspberry Pis laying around), so we went with that.
The Rust community comes up big for us again, and there are plenty of resources for cross compiling, namely the rust-cross project by GitHub user japaric. I created a Debian image and, following the guides provided by that project and the general internet, was able to create a Dockerfile which pulls in all of the necessary cross-compilation dependencies. The
entrypoint.sh file and local development instructions are the same as the ones above.
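For reference, a cross build like this typically needs the Arm linker wired up in cargo's configuration. A sketch, using the common target triple and the stock Debian cross-toolchain linker name (not necessarily Ebbflow's exact setup):

```toml
# .cargo/config -- illustrative cross-compile linker override for armv7
[target.armv7-unknown-linux-musleabihf]
linker = "arm-linux-gnueabihf-gcc"
```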
I just created a Docker image based on the latest Fedora version and started plugging in the packages I needed like
musl, the Rust toolchain, and
rpm-build. You can find the resulting
Dockerfile on GitHub.
The Fedora-based build really just spits out an .rpm file, and I have no code dependent on the actual distro, so ideally I could use this single .rpm across any .rpm-friendly distro, most relevantly openSUSE. Unfortunately, when I attempted this, openSUSE was unable to run the binaries or install the .rpm built on the Fedora image. Instead of looking into exactly why this was happening, I simply created an openSUSE Docker image for my needs; you can find that Dockerfile on GitHub.
We're a few levels deep now. We have our client's code which needs to be built in GitHub Actions, which run on the GitHub Runners, which will use our Docker images. We have Docker images which can produce statically linked packages for the target OSs/CPUs on our desktop, but we haven't plugged that into GitHub Actions in any way, we just have the Dockerfiles ready to go.
To execute a workflow's step in your own container, you can point your Action's definition (we talk about this soon) to a Dockerfile, and GitHub will build the image before using it during your workflow execution. That is fine, but if your Docker build is slow you will want an image that is already built. GitHub Actions can use public images easily.
To build and host Docker images, Ebbflow uses Docker Hub, which is free for public images. You can find Ebbflow's public images on its Docker Hub profile. Getting started with Docker Hub is pretty simple: you create an account, create a 'repo' which points to your GitHub repo, and set up automatic builds. Here is what Ebbflow's build triggers look like, which kick off builds on new commits or tags to the repo.
Once you set this up, Docker Hub will start building your images and eventually tag them. It's a free service so you cannot really complain, but this might take a minute or two. Once Docker Hub is building your images, you are ready to point to them in your Actions. Finally!
To use the Docker images we've created, we must create an 'Action' for them, and we will follow this guide to do so. An Action is just a .yaml definition, and Actions do not need to be published to the Marketplace for you to use them. I want to give a thank-you to GitHub user zhxiaogg for developing the cargo-static-build Action, which acted as the template for all of our actions. Given our built and hosted images, we can reference them pretty simply; here is an example for our Arm build that you can also find on GitHub:
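A hedged reconstruction of the shape of that action.yml (the name, default command, and image reference here are illustrative placeholders, not the verbatim file):

```yaml
# action.yml -- illustrative values, not the verbatim Ebbflow file
name: 'Ebbflow Client Arm Build'
description: 'Builds the armv7 .deb package inside a prebuilt container'
inputs:
  cmd:
    description: 'The command to run inside the container'
    default: 'cargo deb --target=armv7-unknown-linux-musleabihf'
runs:
  using: 'docker'
  # Points at the prebuilt, tagged Docker Hub image (placeholder name)
  image: 'docker://ebbflow/cargo-deb-armv7-debian:latest'
  args:
    - ${{ inputs.cmd }}
```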
Besides some basic information like a description, the interesting parts are in the
cmd section and
runs section. The
runs part just points to our DockerHub image that was tagged for us, and passes the input value
cmd to the entrypoint script. For the
cmd section, we are just defining the default command to use for this action. For this to make sense, let's look at how we would reference this Action in an example workflow:
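A sketch of such a workflow job; the Action repository name and artifact paths are placeholders for illustration:

```yaml
# Illustrative job referencing the Arm Action at version 1.0
raspbianbuild:
  needs: quickcheck
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    # No 'cmd' input is given, so the Action's default command runs
    - uses: ebbflow-io/cargo-deb-armv7-build@1.0
    # Upload the built package so we can grab it after the workflow finishes
    - uses: actions/upload-artifact@v2
      with:
        name: armdeb
        path: target/
```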
The integration is easy - we just point to the Arm Action repository and specify the
1.0 version. Note how we are not providing a command because that was handled by the
default entry in our Action definition above. After the build, we use the artifact upload Action to get access to our built package later. The
action.yml files for our other actions are almost identical except for the image and the command.
The Ebbflow Client's workflow file is a few hundred lines long, so instead of pasting it here I encourage you to check it out on GitHub. Here is a shortened version of it:
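A heavily abbreviated sketch of its shape (job names follow the description in this post; versions, commands, and the Action repository name are illustrative):

```yaml
# Heavily shortened sketch; see the real workflow file on GitHub
name: release
on: [push]
jobs:
  quickcheck:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.rustversion.outputs.version }}
    steps:
      - uses: actions/checkout@v2
      # Verify the build succeeds before kicking off the packaging jobs
      - run: cargo check
      # Emit the crate version from Cargo.toml for later jobs
      - id: rustversion
        run: echo "::set-output name=version::$(grep -m1 '^version' Cargo.toml | cut -d'"' -f2)"
  raspbianbuild:
    needs: quickcheck
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ebbflow-io/cargo-deb-armv7-build@1.0   # placeholder name
      - uses: actions/upload-artifact@v2
        with:
          name: armdeb
          path: target/
  release:
    needs: [quickcheck, raspbianbuild]
    runs-on: ubuntu-latest
    steps:
      # Create a draft release; assets are attached in later steps (elided)
      - uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ needs.quickcheck.outputs.version }}
          release_name: ${{ needs.quickcheck.outputs.version }}
          draft: true
```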
In this workflow we run a 'quickcheck' job to verify that the build succeeds, and we spit out the version of our project, which is taken from our Cargo.toml file. The raspbianbuild job waits for this to finish first, then begins the build using our Arm build Action. Once that is complete, we create a draft of our Release and upload our built artifact(s) to it.
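The version-extraction trick can be sketched in plain shell; the manifest contents here are a stand-in, since a real workflow reads the checked-out Cargo.toml:

```shell
# Stand-in manifest contents for illustration
manifest='[package]
name = "ebbflow"
version = "1.2.3"
edition = "2018"'

# Take the first `version = "..."` line and keep only the quoted value
version=$(printf '%s\n' "$manifest" | grep -m1 '^version' | cut -d'"' -f2)
echo "$version"
```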
This process is pretty great: after a
git push, GitHub Actions builds everything and uploads it to a 'release' as a draft, and if I want to release a version, I just fill out the title and description, and all of the assets are there. Check out one of our releases to see all the assets - those were all placed there automatically.
Ebbflow has also taken all of the Actions that were necessary for us to build our client and vended them on the Actions Marketplace. Also our build images can be found on DockerHub. Here are the direct links for the resources we've created:
| Format | Build Distro | CPU arch. | Code | Action | DockerHub |
| --- | --- | --- | --- | --- | --- |
I hope you found this useful! All in all, the Docker image creation and GitHub integration took me around 5 days to fully flesh out and get working, in case you were curious about the effort it took. With this guide, I hope it takes you less time! The next blog post will talk about how these packages actually end up in the hands of our customers. Posts will be shared on /r/rust and Hacker News, or feel free to send an email to email@example.com with the subject "blog subscribe" (or something similar) to have blog posts emailed to you, or check back at ebbflow.io/blog.
Thanks for taking the time to read this! If you'd like to check out Ebbflow you can use the free trial without providing a credit card! Simply create an account and get started. It takes six minutes to register, install the client, and host your website!