
Hosting websites with Kubernetes has never been easier. Simply deploy our container alongside yours inside your Pod and set up your Endpoint (i.e. Load Balancer) in the Ebbflow Console; the rest is taken care of.

When users navigate to a website hosted through Ebbflow, they will connect to Ebbflow's servers first and be presented with a browser-trusted Let's Encrypt certificate. From there, Ebbflow will proxy the connection to the nearest server or container that is set up to host your website using our client. In this guide, we talk about an integration using Kubernetes, but Ebbflow can be used with other platforms at the same time with no additional configuration.

Ebbflow does not charge per endpoint/load balancer; you pay only for access to the platform and the data that you send through the system. You can set up routing to any number of clusters, for any number of domain names, in any physical location - it doesn't matter to Ebbflow.

How It Works

Running Ebbflow with Kubernetes is very simple, especially compared to other load balancing solutions that integrate with Kubernetes. Ebbflow allows you to avoid ingress controllers, services, cloud-specific load balancer integrations, and paying for the load balancers themselves, which often cost more than Ebbflow.

As mentioned before, Ebbflow proxies connections through to your servers via the Ebbflow Client. This client will run inside of each pod alongside your container that runs your web server. The client will proxy the connection to your server inside the pod over localhost.

Getting Started

Before setting up Kubernetes, you will create your endpoint in the Ebbflow console. You can follow the instructions in the quickstart guide.

Once your Endpoint is set up centrally, we need to run the client inside of your pod. To do so, we take the client's image and add it as another container inside of our pod. Then we need to configure it. The required changes are:

  1. Add the Ebbflow image as another container in your Deployment
    • Image: ebbflow/ebbflow-client-linux-amd64:1.1 (Docker Hub)
  2. Run the Ebbflow image with arguments reflecting your Endpoint's settings
    • (Required) run-blocking: the subcommand that runs the proxy
    • (Required) --dns VALUE: the DNS entry of your endpoint, e.g. example.com
    • (Required) --port VALUE: the port your service runs on, e.g. 80, 1500, 8080
    • (Recommended) --healthcheck-port VALUE: the port to perform a simple TCP connect health check against, usually the same port as your service but it can be different
  3. Provide a host key via the EBB_KEY environment variable
    • You first create a key in the console (Console > IAM > Create New Host Key). Then you add this secret to your cluster with a command like the following:

      $ kubectl create secret generic ebbkey --from-literal=ebbkey=ebb_hst_asdf1234

Overall, it takes a small amount of time to get the Ebbflow client running inside of your Pod. Each pod is atomic: there are no per-node or per-cluster containers running, just per-pod.

Examples

Here is an example deployment.yaml that shows the configuration required to run an example image which runs on port 8000. Note that containerPort is specified so that the container has that port open.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: your/image
        ports:
        - containerPort: 8000
      - name: ebbflow
        image: ebbflow/ebbflow-client-linux-amd64:1.1
        args:
          - run-blocking
          - --dns
          - example.com
          - --port
          - '8000'
          - --healthcheck-port
          - '8000'
        env:
          - name: EBB_KEY
            valueFrom:
              secretKeyRef:
                name: ebbkey
                key: ebbkey

CLI

It is easy to install the client on your local development machine to play around with the commands and CLI; just follow your OS-specific instructions here. It should take minutes to get your site hosted using a command like sudo ebbflow run-blocking --dns example.com --port 8000.

Live Example

A live example of using Kubernetes with Ebbflow can be found at toby.pictures. The code for this site is on GitHub and is hosted on Google Cloud's Kubernetes Engine.

Simple Simple Simple

This guide is short because the integration is so simple. Networking with Ebbflow is just that, simple, while providing industry-leading technology like multi-cloud reliability, nearest-server routing, health checking, and per-pod routing. It should take only minutes to use Ebbflow to route traffic to your cluster. Presently, only Linux amd64 images are provided. If you ever run into trouble with the Ebbflow client, you can reach out to Ebbflow for help by contacting support@ebbflow.io. Some useful links:

  • (link) Container Image on DockerHub
  • (link) Example k8s project using Ebbflow
  • (link) The Client's Code on GitHub
  • (link) Client Documentation
  • (link) Ebbflow Quickstart Guide

Linux Packages For Rust (2/3) - Building with GitHub Actions using Custom Actions and Docker Container Images

A look into vending a Rust project for various OSes and CPU architectures.

Ryan Gorup - Founder

Background: Ebbflow is a multi-cloud load balancer that provisions browser-trusted certificates for your endpoints in addition to providing a client-friendly SSH proxy. Servers can host endpoints from any cloud, on premises, or from your home, all at the same time. It uses nearest-server routing for low latencies and is deployed around the globe.

This post describes how Ebbflow vends its client which is written in Rust to its Linux users, describing the tools used to build the various packages for popular distributions. This guide starts off assuming we can build .deb and .rpm packages locally which we covered in our previous blog post, but now we want to move towards distribution. Specifically, we will discuss how the client is built for all of its target platforms using GitHub Actions. The Docker images and GitHub Actions that we discuss are all public and usable by anyone. Here's what we have in this post:

0.0 Motivation & Intro to GitHub Actions
0.1 The Plan: Docker Images > Actions > Workflow
1.0 Docker Images
1.1   GitHub Actions' Container Magic
1.2   Our Build Images
1.3   Building & Hosting the Build Images
2.0 Creating GitHub Actions
3.0 Final GitHub Action Workflow
4.0 Wrapup: All of our Public Resources

In our next blog post of this series we will talk about the final step of getting the package into the hands of our users by detailing our package server release process.

Ebbflow Client 101

Ebbflow's job is to route user traffic to your web server application (or SSH Daemon) on your servers. The central Ebbflow service proxies data between user connections (e.g. browser, SSH client) and the client. The client then proxies the data to your web server or local SSH daemon. Users of Ebbflow install the client on machines that will host endpoints, be SSH-ed to, or both at the same time (which is very useful). You can find the client's code on GitHub. We talked about taking the client's binaries and packaging them into .deb and .rpm packages in the previous blog post. The client is also vended to Windows users.

0.0 Motivation: Automated Builds

It's one thing to have your code written and to be able to build it on your development machine, it's another thing to build the project for multiple OSs or CPU architectures in any sane way. I could harp on this for a long time, but I'll spare you the trouble and continue to tell you how the Ebbflow client is built for the numerous target platforms.

There are many methods to build your code. Typically you will hook up some build service so that when you git push or otherwise merge in code, a build will automatically kick off and spit out the resulting artifacts. As far as build services go, Travis CI is popular, there's Jenkins, AWS CodeBuild, Azure Pipelines, and many others. GitHub, where the Ebbflow client's source code is hosted, launched GitHub Actions, another continuous integration and building platform.

GitHub Actions for Automated Builds

GitHub Actions has two important things going for it. First, the code is hosted through GitHub so integration is a piece of cake. Secondly, they have a marketplace for using Action definitions built by other people in your workflow. This marketplace makes it super easy to use and you can begin building in minutes.

The build process we are going to discuss can get confusing, especially if you've never worked with distributed build platforms, Docker, or GitHub Actions before. GitHub Actions executions start at the highest organizational unit, the Workflow:

  • Workflows run Jobs
  • Jobs execute on Runners which are virtual build environments
    • Runners are provided by GitHub and run either Ubuntu, macOS, or Windows
  • Jobs have Steps, which can use Marketplace Actions or just do simple things like execute individual shell commands
  • Steps run on the Runner image, or can run in custom Docker containers (!!)
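
For instance, a minimal Workflow with one Job and two Steps might look like this (names are illustrative):

name: example
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest          # the Runner
    steps:
      - uses: actions/checkout@v2   # a Step using a Marketplace Action
      - run: cargo build --release  # a Step running a shell command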

Keep in mind that the word Action is overloaded and confusingly used in general. The entire build framework/product/service is named GitHub Actions. But, inside of your Workflow you specify other 'Actions' that can be from the marketplace. And when you use an Action in your Workflow, it's really just a Step of that Workflow, and the word 'Action' doesn't really get used in your Workflow. It's a little sticky!

GitHub Actions QuickStart if you want to try it yourself!

To get started with GitHub Actions & Rust code, click the 'Actions' tab in your GitHub repo and start with one of the simple workflows GitHub provides 1st-party, likely a language-specific one like the Rust build environment. For more advanced Rust build settings, check out rust-cargo for using any cargo command and rust-clippy-check to run Clippy, both created by the actions-rs group.

Now, if you remember from our last blog post, we build our Rust code into proper Linux packages, namely the .deb and .rpm formats. To do this, we use the cargo deb and cargo rpm commands provided through the cargo-deb and cargo-rpm crates respectively. We could simply install these commands onto the Runner that GitHub provides using the rust-cargo-install action. Sounds great!

One problem: cargo-deb and cargo-rpm require OS-installed tools such as dpkg, ldd, or rpm-build. We are also statically linking against MUSL, which needs to be installed on the OS as well. GitHub provides a lot of software pre-installed, but the GitHub runner for Linux runs Ubuntu, and installing rpm-build on Ubuntu is possible but not simple.

Instead of dealing with GitHub's runners, we could take advantage of what we described earlier: the ability to run a step on a custom Docker container. This will allow us to create images that have everything set up exactly how we need them, and we can build directly on a given distro that we are targeting. Let's look at how we created those images.

0.1 The Plan: Docker Images > Actions > Workflow

Our goal is that when we git push new client code, GitHub Actions will start our workflow which will execute certain build steps in our custom Docker images. To make this happen, we will create Docker images that have all of the OS packages we need, then reference these images in an Action, then reference that Action in our Workflow. We will discuss these topics from the ground up.

1.0 Developing the Docker Images

GitHub Actions allows you to bring your own container in a feature named Container Actions, where your build is executed inside a Docker image that you provide - one in which we will install Rust and all of the OS packages we need. It's pretty fantastic actually: you just adhere to a small spec for your Dockerfile and then your GitHub Action definition will run inside the container when you want it to.

The general process for developing these images and creating a container which can build our packages with all of the necessary dependencies is to start with a base image for a given distro such as Debian or Fedora and from there install Rust. I would run a cargo build in this container to make sure my code works, before worrying about the .deb or .rpm packages. I would then install MUSL and build a statically linked target to make sure that works. Finally, I would install cargo deb or cargo rpm, execute it, and fix any missing dependencies until the build succeeds. Tip: Don't squash your image into one small layer until you are done or your builds will take a long time during development. Credit is owed to the muslrust project by GitHub user clux which inspired (or is the base of) all of the images we built.

This process will give us an image we could use to build our project locally, but we will need to hook these up with GitHub. Let's look at how GitHub uses our Docker images real quick.

1.1 GitHub Actions' Container Magic

The container integration with GitHub Actions is pretty interesting when you think about it. They have a Runner running Linux (or macOS or Windows) and your steps run directly on this, but some of your steps can run inside custom containers alongside your previous or subsequent commands. You could have a single Job use multiple containers and execute commands in each of them, and copy files between all of them. Your containers can see your source code that you likely copied as the first step of the Job. Note that the source code was copied onto the GitHub Runner before your image was pulled!

The secret sauce is explained in the Dockerfile guide that you read when creating the Docker images. GitHub accomplishes this via a mount of the working directory that has been used so far onto your container! This mount gives your container access to files from the GitHub-hosted Runner and vice versa. Pretty simple, pretty neat!

1.2 Our Build Images

Here are the various packages and architecture targets, with links to their sections. All of these images are open source and available on GitHub, DockerHub, and also usable through the GitHub Actions we discuss later.

  • .deb, amd64: Ubuntu & Debian
  • .deb, armv7: Raspbian (or Raspberry Pi OS)
  • .rpm, amd64: Fedora
  • .rpm, amd64: openSUSE

.deb for amd64 Ubuntu & Debian

Poking around the internet when starting this process, I saw a Docker image for building statically-linked Rust projects named muslrust by GitHub user clux. This image provided everything we needed for our images, except that it doesn't have the cargo deb command installed. So, I simply created a new Docker image from this image and added the cargo deb installation. Full credit goes to the muslrust project which inspired all of our other images which we discuss below. Here is the shape of our Dockerfile for the amd64/.deb build image (a sketch; the exact file is on GitHub):

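# Sketch of the amd64/.deb build image; based on the muslrust base image,
# exact details may differ from the real Dockerfile
FROM clux/muslrust:stable

# Add the cargo deb subcommand on top of the muslrust toolchain
RUN cargo install cargo-deb

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
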
With entrypoint.sh looking roughly like this:
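
#!/bin/bash
# GitHub mounts the working directory into the container; build from there
cd $GITHUB_WORKSPACE
# Run whatever command was passed to the container (word-splitting intended)
exec $@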

Note that in the above we CD into $GITHUB_WORKSPACE. This is because we adhere to the Dockerfile guide that GitHub published, which we need to follow so that our images work with their magic. For local development, even with the CD command, I run the following commands:

# Build the image
docker build -t namespace/tag /path/to/dockerfile

# Once inside your Rust project's root where you would normally cargo deb..
docker run -v $PWD:/volume --rm --env GITHUB_WORKSPACE=. -t namespace/tag cargo deb

You can see this image's Dockerfile on GitHub alongside the other files related to this container. I want to point out that in our testing, the packages built on this image work on both Ubuntu & Debian systems so we never created a Debian specific one (muslrust uses Ubuntu). This compatibility is not the case for Fedora & openSUSE, which I talk about shortly. I'm going to punt on explaining the final GitHub Actions integration and I'll continue talking about the other images.

.deb for armv7 Raspbian (or RaspberryPi OS)

Raspberry Pis are one of Ebbflow's target platforms and use the Arm CPU architecture, which differs from nearly all desktop PCs and servers. To target this architecture, we can either run the build on a server that has an Arm chip using GitHub Actions' self-hosted runner support, or we can emulate the Arm chip using QEMU and Rust's cross-compiling support. The latter seemed easier and doesn't require hardware (although we have a few Raspberry Pis lying around), so we went with that.

The Rust community comes up big for us again and there are plenty of resources for cross compiling, namely the rust-cross project by GitHub user japaric. Starting from a Debian image and following the guides provided by that project and the general internet, I was able to create a Dockerfile which pulls in all of the necessary dependencies such as MUSL and cargo-deb. The entrypoint.sh file and local development instructions are the same as the ones above.

.rpm for Fedora amd64

I just created a Docker image based on the latest Fedora version and started plugging in the packages I needed like MUSL, the Rust toolchain, and rpm-build. You can find the resulting Dockerfile on GitHub.

.rpm for OpenSUSE amd64

The Fedora-based build really just spits out an .rpm file and I have no code dependent on the actual distro, so ideally I could use this single .rpm across any .rpm-friendly distros, most relevantly openSUSE. Unfortunately, when I attempted this, openSUSE was unable to run the binaries or install the .rpm built on the Fedora image. I simply created an openSUSE Docker image for my needs instead of looking into exactly why this was happening, and you can find that Dockerfile on GitHub.

1.3 Building & Hosting the Build Images

We're a few levels deep now. We have our client's code which needs to be built in GitHub Actions, which run on the GitHub Runners, which will use our Docker images. We have Docker images which can produce statically linked packages for the target OSs/CPUs on our desktop, but we haven't plugged that into GitHub Actions in any way, we just have the Dockerfiles ready to go.

To execute a workflow's step in your own container you can point your Action's definition (we talk about this soon..) to a Dockerfile, and GitHub will build the image before using it during your workflow execution. That is fine, but if your Docker build is slow then you will want to have an image already built. GitHub Actions can use public images easily.

To build and host Docker images, Ebbflow uses Docker Hub which is free for public images. You can find Ebbflow's public images on its Docker Hub profile. Getting started with Docker Hub is pretty simple: you create an account, create a 'repo' which points to your GitHub repo, and set up automatic builds. Here is what Ebbflow's build triggers look like, which set up builds on new commits or tags to the repo.

Once you set this up, DockerHub will start building your images and eventually tag them. It's a free service so you cannot really complain, but this might take a minute or two. Once DockerHub is building your images, you are ready to point to them in your Actions. Finally!

2.0 Creating GitHub Actions

To use the Docker images we've created we must create an 'Action' for them, and we will follow this guide to do so. An Action is just a .yaml definition, and Actions do not need to be published to the Marketplace for you to use them. I want to give a thank you to GitHub user zhxiaogg for developing the cargo-static-build Action which acted as the template for all of our actions. So given our built and hosted images, we can reference them pretty simply. Here is a sketch of the Action for our Arm build (the real action.yml is on GitHub):

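# Sketch of the action.yml; the image name and default cmd are illustrative
name: 'Rust armv7 deb build'
description: 'Builds a statically linked .deb for armv7 in our container'
inputs:
  cmd:
    description: 'The command to run inside the container'
    default: 'cargo deb --target=armv7-unknown-linux-musleabihf'
runs:
  using: 'docker'
  image: 'docker://ebbflow/cargo-deb-armv7-debian:1.0'
  args:
    - ${{ inputs.cmd }}
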
Besides some basic information like a description, the interesting parts are in the cmd section and runs section. The runs part just points to our DockerHub image that was tagged for us, and passes the input value cmd to the entrypoint script. For the cmd section, we are just defining the default command to use for this action. For this to make sense, let's look at how we would reference this Action in an example workflow:

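# Sketch of a workflow job referencing the Action; the repo path is illustrative
jobs:
  raspbianbuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Runs our container Action with its default cmd
      - uses: ebbflow/cargo-deb-armv7-debian@1.0
      # Save the built package so we can grab it later
      - uses: actions/upload-artifact@v2
        with:
          name: armv7deb
          path: target/armv7-unknown-linux-musleabihf/debian/
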
The integration is easy - we just point to the Arm Action repository and specify the 1.0 version. Note how we are not providing a command because that was handled by the default entry in our Action definition above. After the build, we use the artifact upload Action to get access to your built package later. The action.yml files for our other actions are almost identical except for the image and the command.

3.0 Final GitHub Action Workflow

The Ebbflow Client's workflow file is a few hundred lines long, so instead of pasting it here I encourage you to check it out on GitHub. Here is a heavily shortened sketch of it:

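# A heavily condensed sketch; action versions, names, and paths are illustrative
name: Build
on: push

jobs:
  quickcheck:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.rustversion.outputs.version }}
    steps:
      - uses: actions/checkout@v2
      - run: cargo check
      # Pull the crate version out of Cargo.toml for the release steps
      - id: rustversion
        run: echo "::set-output name=version::$(cargo pkgid | cut -d'#' -f2)"

  raspbianbuild:
    needs: quickcheck
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Build the armv7 .deb inside our custom container Action
      - uses: ebbflow/cargo-deb-armv7-debian@1.0
      - uses: actions/upload-artifact@v2
        with:
          name: armv7deb
          path: target/armv7-unknown-linux-musleabihf/debian/

  release:
    needs: [quickcheck, raspbianbuild]
    runs-on: ubuntu-latest
    steps:
      # Download the built artifacts, then create a draft Release
      # (assets are attached in later steps of the real workflow)
      - uses: actions/download-artifact@v2
      - uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: v${{ needs.quickcheck.outputs.version }}
          release_name: v${{ needs.quickcheck.outputs.version }}
          draft: true
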
In this workflow we run a 'quickcheck' job to verify that the build succeeds and we spit out the version of our project, which is taken from our Cargo.toml file. The raspbianbuild job waits for this to finish first, then begins the build which uses our Arm build Action. Once that is complete, we create a Draft of our Release and upload our built artifact(s) to it.

This process is pretty great, after a git push GitHub Actions will build everything and upload it to a 'release' as a Draft, and if I want to release a version, I just fill out the title and description, and all of the assets are there. Check out one of our releases to see all the assets - those were all placed there automatically.

4.0 Wrapup: All of our Public Resources

Ebbflow has also taken all of the Actions that were necessary for us to build our client and vended them on the Actions Marketplace. Also our build images can be found on DockerHub. Here are the direct links for the resources we've created:

Format | Build Distro | CPU arch. | Code | Action | DockerHub
.deb | Ubuntu | amd64 | repo | action | image
.deb | Debian | armv7 | repo | action | image
.rpm | Fedora | amd64 | repo | action | image
.rpm | openSUSE Leap | amd64 | repo | action | image

I hope you found this useful! All in all, the Docker image creation and GitHub integration took me around 5 days to fully flesh out and get working, in case you were curious about the effort it took. With this guide, I hope it takes you less time! The next blog post will talk about how these packages actually end up in the hands of our customers. New posts will be announced on /r/rust and Hacker News; feel free to send an email to info@ebbflow.io with subject "blog subscribe" (or something similar) to have blog posts emailed to you, or check back at ebbflow.io/blog.

Thanks for taking the time to read this! If you'd like to check out Ebbflow you can use the free trial without providing a credit card! Simply create an account and get started. It takes six minutes to register, install the client, and host your website!

You can use Ebbflow to host a website with your Raspberry Pi very easily. You can use multiple Pis at the same time, or add your Raspberry Pi as an additional host to your fleet! It's also super easy to set up your Pi(s) so you can ssh to them.

Here are the steps this guide will work through, assuming you've created an account (which has a free trial!!):

  1. Installing the Client
  2. Setting up SSH (to SSH to your Pi from anywhere)
  3. Hosting your Website

Ebbflow actually uses Raspberry Pis to host its website! You're currently being served by a server named aws-linux-i34709-useast. A hard-refresh (how-to) may let you get a Raspberry Pi as the host if you don't already see one. Here is an actual photo of some of the Pis Ebbflow uses to host its own website, through Ebbflow:

If you get stuck, you can always follow the quickstart guide or consult the general documentation.

Installing the Client

Ebbflow vends a Debian package for the Raspberry Pi specifically (if you're curious about how Ebbflow does this, check out the Linux Packaging blog post). The following commands will add Ebbflow's package signing key as a trusted key, add our servers as a package repo, and then finally install the package to your system.

curl https://pkg.ebbflow.io/live/raspbian/buster.gpg | sudo apt-key add -
curl https://pkg.ebbflow.io/live/raspbian/buster.list | sudo tee /etc/apt/sources.list.d/ebbflow.list
sudo apt update && sudo apt install ebbflow

You should see that the ebbflow package was successfully installed! If not, email support@ebbflow.io.

To initialize the client, you then execute

sudo ebbflow init
Here you will be asked about setting up SSH!

Setting up SSH

When you execute sudo ebbflow init you will be asked if you'd like to have SSH set up for your Pi, meaning you will be able to SSH to this Pi from anywhere. If you would like it, type y and continue! It will then ask if you'd like to change the name that will be used as the target for this Pi. This name is used when connections to this Pi are attempted, e.g. when you do ssh -J ebbflow.io pi@HOSTNAME. If you have one Pi, you will probably leave this as the default raspberrypi hostname, but feel free to change it. Here is what it looks like for my Pi, whose hostname I had already changed:

The Pi is set up to receive connections, but you aren't set up to connect to it yet. You need to add the public portion of an SSH key from a host you'd like to SSH from. To do this, on the host you want to SSH from, find the public portion of your SSH key, typically ~/.ssh/id_rsa.pub. Here are other resources:

  • Finding an existing key: guide
  • Generating a new key: guide

After you've found your key, add it to Ebbflow in the Console > IAM > Create New SSH Key. This is what it should look like:

Note the Policy we are adding, SshToAnything. More documentation about Ebbflow's permission model can be found here, but what we do specifically now is allow this SSH key to SSH to any server in this account.

Once that is set up, you should be all good to go! Assuming you kept raspberrypi as your hostname, you can likely just do:

ssh -J ebbflow.io pi@raspberrypi
This "should just work"! If something goes wrong, go to your Pi and execute ebbflow status to look for errors.

Hosting your Website

Hosting websites through your Pi is a very exciting prospect, as it can be a major pain to get the networking correct to route a website to your Pi. As described in Ebbflow's announcement blog post, networking is tricky and Ebbflow aims to solve that. And as we mentioned above, Ebbflow's website is partially hosted by Raspberry Pis, through Ebbflow itself! (Note that we are talking about the website for Ebbflow, not the actual Ebbflow data plane, which is hosted across multiple regions, continents, and cloud providers.)

First things first, your Pi will need to be running some web server. Maybe that means Apache (guide) or something you wrote in Python or Rust ❤️. This server should already be up and running. The goal is to expose this web server to the world using a domain name you own!

To get started with things Ebbflow-wise, create an Endpoint in Ebbflow. We highly recommend Managed as the type. From there, you will see a page with some data and instructions for setting up DNS. You need to point your domain name to Ebbflow, so that when users visit your website they will reach Ebbflow servers which will then reach out to your Raspberry Pi. The following diagram shows the general traffic flow.

The instructions for setting up DNS are specific to whomever you registered your domain name with, so we cannot provide exact steps.

Once DNS is set up, click the Verify button on your endpoint's page. Ebbflow then tries to verify that your domain name points to Ebbflow and that everything is OK.

The following video shows the end-to-end process which may be helpful to you, although it is running on an Ubuntu machine, not Raspberry Pi. If you get stuck, consult this or the quickstart guide.

After DNS is working, we just need to tell our Pi to host the website! We do so by adding the domain name as a new Endpoint using the ebbflow CLI:

# Note the port your server is running on, Apache uses 80 for example

sudo ebbflow config add-endpoint --dns your-website.com --port 80
After that is done, and assuming your web server is running, you should be able to navigate to your website in any browser, or via curl. If that doesn't work, make sure your endpoint has been 'Verified' in the Ebbflow Console or check for errors on your Pi using ebbflow status. Your Raspberry Pi should now be successfully hosting your website with browser-trusted Let's Encrypt certificates!

Conclusion

Now your Raspberry Pi is being put to use! You can access your Pi from the coffee shop or when you're traveling, and you didn't need to punch a hole in your router or expose your home's IP address. Raspberry Pis are powerful little computers; a similarly spec'd cloud instance would cost at least a hundred dollars a year, or more. Using Ebbflow, we can use our Raspberry Pis to actually host production traffic, as Ebbflow is doing itself.

I hope you've enjoyed this guide on how to host a website on your Raspberry Pi! If you have feedback, you can always email info@ebbflow.io.

Linux Packages for Rust - Creating Debs & Rpms (1/3)

A look into vending a Rust project for various OSes and CPU architectures.

Ryan Gorup - Founder

Background: Ebbflow is a multi-cloud load balancer that provisions browser-trusted certificates for your endpoints in addition to providing a client-friendly SSH proxy. Servers can host endpoints from any cloud, on premises, or from your home, all at the same time. It uses nearest-server routing for low latencies and is deployed around the globe.

This post describes how Ebbflow vends its client which is written in Rust to its Linux users, describing the tools used to build the various packages for popular distributions. In a future post, we will discuss how these packages are ultimately vended to users.

Linux

Ebbflow vends the client to numerous distributions of Linux-based operating systems. Linux is free, flexible, widely used in the software community, and increasingly used in consumer markets. Ebbflow uses Linux for development machines and production servers.

To vend software to Linux users, you must consider the Distribution that the users may be running. For non-Linux users, a distribution can be thought of as a version or flavor of an OS. Each distribution provides package management systems and background-service management software which may differ, adding complication to our task of vending our client.

This guide will talk about the problems that need to be solved after the core code has been written and all that's left to do is get it into users' hands. So, given that we have our binaries, here are the main tasks that we must work on:

Task | Solution
Run the Daemon in the Background | Create a systemd service
No Dependencies | Statically Linking
Working with Distributions | .deb and .rpm Packages
Automated Builds | >> Next Blog Post
Vending the Packages | Future Blog Post

Ebbflow Client 101

Ebbflow's job is to route user traffic to your web server application (or SSH Daemon) on your servers. The central Ebbflow service proxies data between user connections (e.g. browser, SSH client) and the client. The client then proxies the data to your web server or local SSH daemon. Users of Ebbflow install the client on machines that will host endpoints, be SSH-ed to, or both at the same time (which is very useful). The following diagram shows this visually.

Quick note: The client initiates the connection to the central Ebbflow service via an outbound TLS connection, which makes the client extremely firewall friendly. You may completely block all inbound connections but still host websites or be SSH-ed to! Also, this innovation allows the servers to be located in any network, and even change networks without any configuration at all. Ebbflow just receives connections from the general internet, and after authentication and authorization, will allow the server to host the endpoint or be SSH-ed to. Neat!

The client has two parts, the CLI and the background daemon. Both of these programs are 100% Rust, 100% async, and 100% 'safe', and statically linked - the dream of any Rust developer! The CLI is an executable tool named ebbflow that is used to tell the background daemon to host new endpoints, disable or re-enable endpoints or the SSH proxy, and configure other settings.

The background daemon is the second piece of this puzzle; it is the workhorse, responsible for actually transferring bytes between the central Ebbflow servers and your local web server or SSH daemon. This daemon is just a long-running background executable named ebbflowd.

Background Daemon / systemd Service

ebbflowd needs to run in the background, run without a logged-in user, start on system boot or reboot, and be started again if it crashes. (Most) Linux distributions provide a mechanism for this called systemd, which is a system to manage and execute "services". When systemd manages a service, it will start it, watch it, and fulfill all of the needs we have.

systemd uses "unit files" as the means to know how a service should be handled. They can be very simple, and you can see Ebbflow's unit file on GitHub. The unit file points to the executable, declares a dependency that the system's networking services should be initialized first, and states that the program should be restarted quickly if it exits. There are many helpful guides for systemd online, from general info to more reference-style guides.

Below is the gist of the ebbflowd.service unit file used by Ebbflow.

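# A close sketch of ebbflowd.service, reconstructed from the description
# above; the exact file is on GitHub
[Unit]
Description=The Ebbflow client daemon (ebbflowd)
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/sbin/ebbflowd
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
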
Simple, short, and gets the job done! The daemon will be restarted after one second, and will execute /usr/sbin/ebbflowd on our users' behalf.

If you are making your own, you can get started quickly by changing ExecStart to point at your built program in whatever directory it lies in now. To run the service, we must do the following:

  1. Add the unit to systemd's known services
    • sudo cp myprogram.service /etc/systemd/system/
  2. Enable the unit so systemd will start it on boot, registering the unit file
    • sudo systemctl enable myprogram
  3. Tell systemd to start your program
    • sudo systemctl start myprogram
  4. Check that it's running!
    • sudo systemctl status myprogram
  5. Stop the program
    • sudo systemctl stop myprogram

After writing and testing your unit file you will need to either instruct users to complete the above steps or, preferably, vend your Unit in a proper package so that the users can avoid managing the unit file themselves. Ebbflow's unit file is vended through the built packages which we describe later.

Going Static

Rust will dynamically link against the OS's libc implementation when the standard library is used, which is the case for almost all substantial Rust applications. When compiling on a Linux system, Rust will dynamically link against glibc by default.

For background, linking against glibc is not necessarily a bad thing - you can reduce your binary size and benefit from any performance or security updates without changing your code. However, when vending your project to users, linking against something that is not under your control is not ideal. First off, when linking, you will link against a specific version of glibc - the version that your build system has. So let's look at a fictitious example where you build on Ubuntu 20.04 (which uses glibc version 2.31) and vend to Ubuntu 18.04 (which uses 2.27). If you build on your Ubuntu 20.04 system and rustc uses some feature of glibc 2.31 that is not present in 2.27, and you execute your code on the Ubuntu 18.04 system, your code may break!

Edit: Per reddit commenter /u/STEROVIS, the above statement is inaccurate; instead of "breaking", the "program simply will refuse to run at all whether you use a nonexistent feature or not. The dynamic linker will instantly fail complaining about a missing GLIBC_* version symbol". Still, the program will not work!

You could avoid this scenario by building and linking to a low glibc version, but how low? How far back should you go? What if you want to vend to users of very very old systems?

For maximum flexibility and to gain ground toward our goal of working on many distributions, you can avoid these problems by statically linking libc using MUSL, a libc implementation that rustc can include in your built binary so your program can run without any glibc dependencies. To get started using MUSL, you add a new target to rustup by executing the following. Also note that you may need to install a musl package; for example, on Debian execute apt-get install musl-tools.

# Add new MUSL target to rustup
$ rustup target add x86_64-unknown-linux-musl

# Build!
$ cargo build --target x86_64-unknown-linux-musl

Besides the std library, Rust code may link to other OS libraries, most often OpenSSL or libsodium for crypto. The Ebbflow client avoids this by using rustls, which uses ring under the hood. Rustls is an ergonomic TLS library for Rust. It is highly performant even compared to OpenSSL and recently underwent a 3rd party security audit which showed no flaws. When you invest in the Rust ecosystem and use Rust-written libraries, you are rewarded with the ability to statically link, which is a desirable situation.

To this point, we've taken our project and registered the background daemon with the OS using systemd and statically linked our code using MUSL. The client could be tested to run on a single machine, but there are many installation steps. From here, we will package up our daemon and unit file nicely using packages.

Packaging for Linux; .deb and .rpm

Each Linux distribution provides a Package Manager which is used to manage the installed software, called packages, on the system. Most distributions use the .deb or .rpm package formats. .deb packages are used by the Debian distribution and its derivatives such as Ubuntu. .rpm packages are used by other distributions such as OpenSUSE, Fedora, and CentOS. Each distribution may have a different CLI tool for managing packages, even if they use the same package format, but that problem will be solved in the next blog post.

To create a package, you first create a format-specific specification or configuration file. Once your specification is created you use the format-specific build tool to build the actual packages, in our case a .deb and .rpm file.

.deb Packaging

Building Debian packages for a Rust project is dead simple using the cargo-deb crate, which is used as a cargo subcommand. This command will create a .deb package from your Rust project and is highly configurable - check out the README for all of the options.

Working with cargo deb is simple, so simple that you don't actually need to touch any configuration at all if you have a simple Rust project: you can just execute cargo deb and the tool will infer all of the required settings and build your package!

The Ebbflow client is more complicated, so we provide configuration through the client's Cargo.toml file. Here is a sketch of what that looks like:

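# A sketch of the cargo-deb metadata; paths and values are illustrative,
# the client's real Cargo.toml is on GitHub
[package.metadata.deb]
maintainer = "Ebbflow <support@ebbflow.io>"
# Config files get special treatment so upgrades don't flag user edits
conf-files = ["/etc/ebbflow/config.yaml"]
# Scripts that register, start, and stop the systemd unit
maintainer-scripts = "debian/"
assets = [
    ["target/x86_64-unknown-linux-musl/release/ebbflow", "usr/bin/", "755"],
    ["target/x86_64-unknown-linux-musl/release/ebbflowd", "usr/sbin/", "755"],
    ["debian/ebbflowd.service", "lib/systemd/system/", "644"],
]
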
There are three interesting components to this. The first is the conf-files list. In general, configuration files are different from other files that may be vended in a package - they are typically changed after installation, but not changed between versions, and are user-specific. This is opposed to the binary executables, which typically change between versions and are not user-specific. Package managers strive for integrity by hashing known files for a package, and checking these hashes during uninstallation and upgrades. If an upgrade occurs and a file has an unexpected hash, the user will be prompted to resolve the issue.

To deal with the fact that configuration files are special, the Debian package specification allows you to specify configuration files which will not be checked for integrity. This avoids bugging the users during upgrades, which is desirable. Long story short, if you have configuration files, inform the package manager by listing them in your package build!

The second interesting item is the maintainer-scripts item. This points to some scripts which are used to control the installation of the package. Specifically, the scripts will register our systemd unit, start the service, and stop it when uninstalled. For more information about this, see the discussion of services in the cargo-deb repo.

Lastly, the third interesting item is the assets section. This simply tells cargo deb what files should be copied into our final package, to which location, and their respective unix permissions.

Running cargo deb will build the .deb, and you can sudo apt install ./path/to/your.deb on your local system to test it out!

.rpm Packaging

Like Debian packages, it is super easy to create .rpm packages for Rust projects using a helpful cargo subcommand, in this case, cargo-rpm. To get started in a new project, execute cargo rpm init to create a basic .spec file. Under the hood, cargo rpm will pass the .spec to rpmbuild which builds the final .rpm file. A reference for the .spec file can be found here.

cargo rpm uses the .spec file alongside any changes to your Cargo.toml when building your package. Writing the .spec file is a little more intimidating than the .deb configuration, but quite simple once you understand how it works. The Ebbflow client's .spec can be found on GitHub, but a sketch is below. Besides the spec, we will also change our Cargo.toml.

First, the changes to our Cargo.toml:

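# A sketch of the cargo-rpm metadata; values are illustrative, the real
# Cargo.toml is on GitHub
[package.metadata.rpm]
# Build in release mode against the static MUSL target
cargo = { buildflags = ["--release", "--target", "x86_64-unknown-linux-musl"] }

# The two binaries to take from the build output
[package.metadata.rpm.targets]
ebbflow = { path = "/usr/bin/ebbflow" }
ebbflowd = { path = "/usr/sbin/ebbflowd" }

# Extra files from the .rpm/ directory, referenced by the .spec
[package.metadata.rpm.files]
"ebbflowd.service" = { path = "/usr/lib/systemd/system/ebbflowd.service" }
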
Above, we state that we would like the cargo rpm tool to use the --release flag and to target x86_64-unknown-linux-musl so we statically link our binaries. The next section informs cargo rpm that we have two binaries that should be taken from the build output, namely ebbflow and ebbflowd. Lastly, the final section lists some other files that will be brought into our RPM build environment, which will be referenced in the .spec file. These files are all placed in the .rpm directory of our Rust project.

Now let's look at the .spec:

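# A sketch of the .spec; cargo rpm init auto-generates most of this, and the
# client's real file is on GitHub
Name: ebbflow
Summary: The Ebbflow client
Version: @@VERSION@@
Release: @@RELEASE@@%{?dist}
License: MIT
Source0: %{name}-%{version}.tar.gz

%description
%{summary}

%prep
%setup -q

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}
cp -a * %{buildroot}

%files
%defattr(-,root,root,-)
# Mark config files so upgrades don't clobber or flag user edits
%config(noreplace) /etc/ebbflow/config.yaml
/usr/bin/ebbflow
/usr/sbin/ebbflowd
/usr/lib/systemd/system/ebbflowd.service
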
Much of this was auto-generated from running cargo rpm init. The major changes needed were to include the ebbflowd.service file (which we referenced in our changes to Cargo.toml) and to list our configuration files. RPM configuration files act like Debian configuration files, which we discussed earlier in the Debian package section.

Running cargo rpm build will build the .rpm file based on our various configuration items and spit out the package. Note that this will need to be completed on an RPM based distribution or you will need to figure out how to grab the rpmbuild tool and execute that on your non-RPM-based distribution.

One more note: packaging guidelines dictate that you may NOT initiate/start your systemd service on installation if the service requires manual configuration, which the Ebbflow client does. For this reason, Ebbflow informs users to execute sudo systemctl enable --now ebbflowd after they have initialized/configured the client by executing sudo ebbflow init.

Going Forward: Automated Builds & Vending to Users

We have the packages; they're statically linked and ready to go. The final hurdles are to build the packages in a sane way and to deliver the packages through the users' package management systems. These topics are reserved for an upcoming blog post! New posts will be announced on /r/rust and Hacker News, or feel free to send an email to info@ebbflow.io with subject "blog subscribe" (or something similar) to have blog posts emailed to you!

Closing Thoughts

Thanks to cargo deb and cargo rpm, building Linux packages is dead simple for Rust projects. These two projects are invaluable for working with .deb and .rpm packages. More documentation would always be lovely for these tools, but thankfully the package formats themselves are old and there are numerous examples and helpful resources online to help with your configuration.

The Ebbflow client was developed over a handful of months and underwent major changes. It was first just a CLI, then re-written to have the daemon and CLI frontend. There was actually little forethought regarding the packaging process during the re-write; the re-write got finished and then we turned to packaging it up. The cargo-deb and cargo-rpm crates came to the rescue and made packaging very simple. The trickiest bit was getting the systemd files to work and automatically start up, but there are enough resources online, and now the Ebbflow client is another example of how to get that running, specifically in Rust!

Thanks for taking the time to read this! If you'd like to check out Ebbflow you can use the free trial without providing a credit card! Simply create an account and get started. It takes six minutes to register, install the client, and host your website!

Packaging & Vending Production Rust Software For Windows

A look into vending a Rust project for various OSes and CPU architectures.

Ryan Gorup - Founder

Background: Ebbflow is a multi-cloud load balancer that provisions browser-trusted certificates for your endpoints in addition to providing a client-friendly SSH proxy. Servers can host endpoints from any cloud, on premises, or from your home, all at the same time. It uses nearest-server routing for low latencies and is deployed around the globe.

This post describes how Ebbflow vends its client, which is written in Rust, to Windows users, describing the tools used to build and ultimately deliver the program to users.

Ebbflow Client 101

Ebbflow's job is to route user traffic to your web server application (or SSH Daemon) on your servers. The central Ebbflow service proxies data between user connections (e.g. browser, SSH client) and the client. The client then proxies the data to your web server or local SSH daemon. Users of Ebbflow install the client on machines that will host endpoints, be SSH-ed to, or both at the same time (which is very useful). The following diagram shows this visually.

Quick note: The client initiates the connection to the central Ebbflow service via an outbound TLS connection, which makes the client extremely firewall friendly. You may completely block all inbound connections but still host websites or be SSH-ed to! Also, this innovation allows the servers to be located in any network, and even change networks without any configuration at all. Ebbflow just receives connections from the general internet, and after authentication and authorization, will allow the server to host the endpoint or be SSH-ed to. Neat!

The client has two parts, the CLI and the background daemon. Both of these programs are 100% Rust, 100% async, and 100% 'safe', and statically linked - the dream of any Rust developer! The CLI is an executable tool named ebbflow that is used to tell the background daemon to host new endpoints, disable or re-enable endpoints or the SSH proxy, and configure other settings.

The background daemon is the second piece of this puzzle; it is the workhorse, responsible for actually transferring bytes between the central Ebbflow servers and your local web server or SSH daemon. This daemon is just a long-running background executable named ebbflowd.

Windows

Supporting Windows as a first-class server OS is very important to Ebbflow. Taking the two executables & re-compiling them on Windows was simple thanks to Rust's abstractions over the underlying OS and numerous community projects. First things first, the background daemon needs to be changed to work in a Windows world.

Background Daemon / Windows Service

The ebbflow CLI is very simple and is portable in the sense that you could execute the binary on a Windows machine without installing it and it will do its job correctly, but ebbflowd is another story. To execute a background program in Windows one uses the Windows Service framework. This framework is needed as the ebbflowd program needs to run when there are no interactive users, start when the system starts, and be watched and restarted in the event of a crash (which has not happened due to Rust's inherent safety and never unwrapping).

A sticking point of using the Windows Service framework is that your program needs to implement a specific interface, namely it must provide an entry point, handle signals, and register itself. To help, the windows-service Rust crate (provided by Linus Färnstrand) makes this very simple.

This windows-service crate provides a macro which requires that you implement just a few functions that are effectively main functions. The Ebbflow client uses #[cfg(windows)] to conditionally include all of the necessary code to implement these functions and therefore the Windows Service interface itself! The winlog crate was also helpful for taking log events and adding them to the Windows Event Log system.
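
A minimal sketch of that shape (the service name and internals are illustrative):

use windows_service::{define_windows_service, service_dispatcher};

// Generates the extern "system" entry point the Service framework expects,
// forwarding to our Rust function below
define_windows_service!(ffi_service_main, service_main);

fn service_main(_arguments: Vec<std::ffi::OsString>) {
    // Register a control handler, report 'running', then do the real work
}

fn main() -> windows_service::Result<()> {
    // Hand control to the Windows Service dispatcher
    service_dispatcher::start("ebbflowd", ffi_service_main)
}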

Going Static

Rust will dynamically link against the OS's libc implementation when the standard library is used, which is the case for almost all substantial Rust applications. On Windows, this means Rust will compile programs and link them to the system's 'MSVC' (Microsoft Visual C++) libc implementation. Interestingly, the default Windows 10 image does NOT include this library! This results in Rust code breaking when executed on a new system, something that has come up in testing, but may not come up in development as MSVC would likely get installed at one point or another when setting up a Windows machine for code development.

Statically linking Rust code is simple. To bring along MSVC with your compiled Rust program, you instruct the compiler to statically link MSVC. In my case I set just one environment variable (RUSTFLAGS='-C target-feature=+crt-static') and it simply worked, but you can also add a config file entry or pass a flag to compilation.
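
If you prefer the config file route, the equivalent entry lives in your project's .cargo/config:

# .cargo/config - statically link the MSVC C runtime
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "target-feature=+crt-static"]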

Besides the std library, Rust code may link to other OS libraries, most often OpenSSL or libsodium for crypto. The Ebbflow client avoids this by using rustls, which uses ring under the hood. Rustls is an ergonomic TLS library for Rust. It is highly performant even compared to OpenSSL and recently underwent a 3rd party security audit which showed no flaws. When you invest in the Rust ecosystem and use Rust-written libraries, you are rewarded with the ability to statically link, which is a desirable situation.

Building the .msi using Wix

Windows uses 'installers' in the form of .msi packages to handle which packages are installed on a system. These installers tell Windows what files/executables go where and contain logic to handle upgrades, uninstallations, and configuration management.

Ebbflow uses Wix, a tool that takes an .xml configuration file and generates .msi packages. Conveniently, a plug-in to Rust's cargo build tool named cargo-wix exists. This tool can generate a skeleton Wix xml file which we then modified to suit our needs. Under the hood, executing cargo wix will compile your program in --release, then package everything into your .msi using candle.exe to compile your Wix config and light.exe to link it.

Generating and tinkering with the client's Wix file took some time. There are numerous StackOverflow Q&As and other resources to help with writing the Wix file, which was helpful, but I found that vending a Windows Service in an installer is more of an edge case. Also, configuring the UI components of the installer is tricky and I never found a great reference implementation or other documentation.

Automated Builds using GitHub Actions

Up to this point, I've produced a statically linked Windows program and packaged it into an installer. For various reasons such as build reproducibility & transparency, and hosting the built .msi for free, Ebbflow uses GitHub Actions to build and package up the client. You can see the client's Windows build configuration on GitHub; here is a sketch of the interesting part:

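# A sketch of the Windows build job; action versions and paths are illustrative
jobs:
  windowsbuild:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the .msi
        env:
          RUSTFLAGS: '-C target-feature=+crt-static'
        run: |
          cargo install cargo-wix
          cargo wix
      - uses: actions/upload-artifact@v2
        with:
          name: windows-msi
          path: target/wix/*.msi
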
GitHub provides a Windows image that has various build tools included, importantly Wix. Getting Windows builds set up on GitHub Actions was pretty simple and I had a great experience with the service.

Distribution to Users

After writing the code, making an installer, and testing that out, the next step is to figure out how to get this into our customers' hands. The first thought is to just distribute the .msi file itself and notify customers of upgrades. This generally works, and the installer can handle upgrades and uninstallations. However, there are drawbacks. The first is that any future update will require the customer to download the new .msi through some custom process. The second is that Windows will flag any software that is unsigned, and all applications must be signed using certificates derived from trusted certificate authorities. This is fine and secure, but to have our .msi work without warnings you must purchase a code signing certificate/key from someone like Digicert or the like, which can easily run into the hundreds of dollars. Alternatively, you can create your own CA & signing key/certificate, self-sign the .msi, then have all customers add your CA to their trusted store. This does not seem too common from my understanding.

These two drawbacks are pretty severe, which is what leads us to our chosen solutions, Chocolatey and Winget.

Chocolatey

Chocolatey is a 3rd party package manager and you can use it to install, update, remove, and/or search for packages. It works great for personal use and will feel very familiar to any Linux users. Chocolatey also provides a multi-computer management solution to enterprise customers so sysadmins could distribute and maintain many Windows computers from afar.

Importantly, Chocolatey packages do not require code-signing and will not trigger any warnings when installed. Adding a package to the Chocolatey package store is decently simple. Chocolatey deals in .nupkg files, which are easily created with the Chocolatey CLI: execute choco new to generate a base set of files, and after modifying them, package everything into the .nupkg container using choco pack (the flow is sketched below). Ebbflow's chocolatey repo can be found on GitHub. Be sure to remove configuration entries or files that you don't need. And for referencing the actual installer for download, we just use the GitHub Release artifact that we post, all thanks to our earlier decision to build on GitHub.
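
The whole flow looks roughly like this (package name and version are illustrative):

# Generate the skeleton .nuspec and tools/ scripts
choco new ebbflow

# ...edit the generated files and point them at the released .msi...

# Package everything into the .nupkg
choco pack

# Submit the package to the Chocolatey community repository
choco push ebbflow.1.1.0.nupkg --source https://push.chocolatey.org/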

Once you submit your package to Chocolatey, it goes through two automated steps of validation; these will spit out required changes that block your package from being released, as well as recommended changes like adding specific links or documentation. If these pass, the package enters 'moderation', and an actual human will review the package before it is released. This can take days to weeks, which is a drawback. Chocolatey does have Business plans, but to my knowledge these do not come with quicker package moderation.

Further, any new versions of a package after the initial moderation will need to be human-moderated. Your package can become 'Trusted' to allow new versions to bypass the human moderation, but this takes several complaint/issue-free versions of your application to be released.

Winget

Microsoft is working on a 1st party package manager named winget. It's exciting, and adding your program to the community repo is simple; it will feel familiar if you've worked with macOS's Homebrew system. You must author a 'manifest' which points to your .msi and includes things like a description and a SHA256 hash; a sketch is below. You can view Ebbflow's Winget configuration on GitHub.
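
A sketch using the early singleton manifest schema (the id, URL, and hash are illustrative placeholders):

# Sketch of a winget manifest pointing at an .msi
Id: Ebbflow.Ebbflow
Name: Ebbflow
Publisher: Ebbflow
Version: 1.1.0
License: Proprietary
InstallerType: msi
Installers:
  - Arch: x64
    Url: https://your-release-host/path/to/ebbflow.msi
    Sha256: <sha256 of the .msi>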

Like Chocolatey, you do not need to code-sign your package and will not trigger any warnings on installation. Also like Chocolatey, there is a human moderation/approval process that is needed for each package and subsequent version. I believe this will change in the future when winget adds support for other sources. At that point, I assume you would have full control of your package and users will just need to add your Source to their source list. winget is promising and I'm looking forward to seeing it develop and march towards release in 2021.

Conclusion

The Rust language and 1st party tools facilitate a quick and smooth transition from code that runs only via cargo run to a fully built and distributed program. The static linking support is simple. The cargo tool is a game changer; it is used for dependency management, which is a major headache for most languages. Cargo subcommands are a joy to use and made integration with Wix dead simple. The cargo wix subcommand allows developers to stay within the Rust ecosystem and still build and compile their program idiomatically through cargo.

Vending a package to Windows is overall a nice experience. The tooling works well and there is a good amount of online resources available, such as prior art, so to speak, on GitHub and other sources. Also, general documentation and StackOverflow are widely available. The biggest sticking points were anything with Wix, working with the 'Service' framework, and waiting for Chocolatey and winget packages to be approved. Non-Windows developers will find it easy to get started building and vending Rust code thanks to the language's built-in support and the numerous helpful community-provided projects.

Thanks for taking the time to read this! If you'd like to check out Ebbflow you can use the free trial without providing a credit card! Simply create an account and get started. It takes six minutes to register, install the client, and host your website!