Hosting a Phoenix Web App on OpenBSD

I have some Phoenix web apps and wanted to host them on OpenBSD. It's just a few small apps on a single server, not anything complex, so I was hoping to figure out a simple approach (no need for anything enterprise-grade or web-scale). I think I was able to come up with something good, which I'll share below.

Interface configuration

Had I only one IP address and several web apps to run behind it, I would likely use relayd or another reverse proxy. In my case, I was able to use a separate address for each web app. Aside from the simplicity, this keeps open the possibility of a web app doing its own SNI and virtual host handling.

When I set up my server, I assigned only a single address from my allocation to the network interface. Before going any further, I needed to configure an alias on that interface for each additional address. I added a line like this to /etc/hostname.bnx0 for each address so that the aliases would be configured at startup:

inet alias

Then I ran ifconfig to add the aliases immediately:

ifconfig bnx0 inet alias netmask

Apparently setting the mask to all-ones is the thing to do for aliases that are in the same subnet as the primary address (or as another alias that has a regular mask set, I think). It wasn't immediately obvious to me why, and I think this might be different on other network stacks. I'd like to research it later and learn more.
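Put together, my /etc/hostname.bnx0 ended up looking something like the sketch below. The addresses here are hypothetical (taken from the 192.0.2.0/24 documentation range); the shape of the lines is what matters.

```shell
# /etc/hostname.bnx0 -- hypothetical example with documentation addresses
inet 192.0.2.10 255.255.255.0           # primary address with the regular subnet mask
inet alias 192.0.2.11 255.255.255.255   # alias in the same subnet: all-ones mask
inet alias 192.0.2.12 255.255.255.255
```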

Packet filter

I planned to run my web applications as a regular user, so they wouldn't be able to bind privileged ports. I added a couple of rules to pf.conf(5) to have traffic for ports 80 (HTTP) and 443 (HTTPS) on the web app address redirected to non-privileged ports on the localhost address to which the web app will be able to bind, even when running as a regular user.

pass in on bnx0 proto tcp to port 80 rdr-to port 4210
pass in on bnx0 proto tcp to port 443 rdr-to port 4211
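These rules elide the web app's public address. Filled in with a hypothetical address from the documentation range (and the loopback address the prose above refers to), they would read:

```shell
# Hypothetical: 192.0.2.11 stands in for the web app's public address.
pass in on bnx0 proto tcp to 192.0.2.11 port 80 rdr-to 127.0.0.1 port 4210
pass in on bnx0 proto tcp to 192.0.2.11 port 443 rdr-to 127.0.0.1 port 4211
```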

I planned on running my web apps in distributed mode (with the --sname option to elixir). In distributed mode, the Erlang VM will listen on a non-privileged port on all addresses and will register with the Erlang port mapping daemon which also (by default) listens on a non-privileged port on all addresses. I found that the port mapper can be configured to listen only on the loopback interface, but I didn't find a way to have the Erlang VM do the same.
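For the record, the port mapper restriction mentioned above is done with the ERL_EPMD_ADDRESS environment variable (or epmd's -address flag when starting it by hand):

```shell
# Restrict the Erlang port mapping daemon to the loopback interface.
# ERL_EPMD_ADDRESS is honored when the VM auto-starts epmd.
export ERL_EPMD_ADDRESS=127.0.0.1

# Equivalent when launching epmd directly:
epmd -address 127.0.0.1 -daemon
```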

In any event, I didn't plan on having regular user processes listen on public interfaces so I added another rule to pf.conf to block incoming traffic to non-privileged ports being listened on by regular users.

block return in on ! lo0 proto { tcp, udp } to port 1025:65535 user >= 1000

After adding these rules, I reloaded the packet filter configuration (as root).

pfctl -f /etc/pf.conf
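Incidentally, pfctl can also parse the file without loading it, which is a handy sanity check before (or after) a reload:

```shell
# -n parses and validates /etc/pf.conf without changing the running ruleset
pfctl -nf /etc/pf.conf
```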

Certificates with acme-client

I wanted to use HTTPS for my web apps and I wanted the certificates to be managed automatically. I was already familiar with acme-client, so I configured it to manage these certificates as well. I did find myself jumping through a couple of hoops, since acme-client expects to run as root and expects the files it manages to be likewise owned by root, while I wanted to run my web apps as a regular user. I may look into having the web apps manage their own certificates instead, but that's a project for a different day.

I first set up a place in the web app user's home directory for the certificates and keys. The directory for the keys should be accessible only by the web app user.

mkdir ~/ssl
mkdir ~/ssl/private
chmod 700 ~/ssl/private

Next, I added a stanza to /etc/acme-client.conf for the web app's certificates and key.

domain "" {
  domain key "/home/user/ssl/private/"
  domain certificate "/home/user/ssl/"
  domain chain certificate "/home/user/ssl/"
}

Finally, I added an entry to root's crontab to check on the certificate every day. The entry also resets ownership on the managed files. My understanding is that the Erlang ssl application instance that the web app will create will read and cache the key and certificate files at startup, but will periodically check them for updates. So it should pick up the new certificate when it is renewed without needing to be signaled or restarted.

28 4 * * * /usr/sbin/acme-client && chown -R user:user /home/user/ssl

Installing Elixir and Erlang

Phoenix is written in Elixir, which in turn requires Erlang. I checked the Elixir install guide and found that there's an OpenBSD package for Elixir.

pkg_add elixir

Following the suggestion given by the installer, I created /etc/man.conf and added Erlang's manual page directory in case I should want to examine those manual pages. The manual page for man.conf provided the defaults.

manpath /usr/share/man
manpath /usr/X11R6/man
manpath /usr/local/man
manpath /usr/local/lib/erlang21/man

I added a line to my ~/.profile to set LC_ALL. Setting LC_ALL is important on OpenBSD because although the base system mostly ignores the locale (other than having limited support for UTF-8), it is very important to Elixir that the Erlang VM run with UTF-8 character encoding (Elixir uses UTF-8 natively, but depends on support from the VM to do so). The locale(1) manual page has more details about OpenBSD locale support.

export LC_ALL="en_US.UTF-8"

I also pre-emptively installed Hex (the package manager for Elixir and Erlang) and rebar (an older build tool for Erlang, used by a Phoenix dependency) since all of my web apps use them.

mix local.hex --force
mix local.rebar --force

Installing postgres

As is common, my Phoenix web apps use Ecto and PostgreSQL for persistence. I installed the PostgreSQL client and server packages.
pkg_add postgresql-client postgresql-server 

After installing the packages, I initialized a database cluster as the _postgresql user. I used the flags recommended in /usr/local/share/doc/pkg-readmes/postgresql-server. They set the PostgreSQL super-user user name to “postgres”, enable scram-sha-256 authentication (current best-practice — the likely alternative being md5, which is deprecated), set the encoding to UTF-8, and cause the initdb program to prompt for a new super-user password.

su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W

There was some performance tuning guidance in the read-me file too. I glanced through it. It may come in handy if I need to scale my apps up in the future.

I enabled the postgresql daemon and started it.

rcctl enable postgresql
rcctl start postgresql

I created a new PostgreSQL user and a new database for one of my web apps; I'll need to do this for each web app. The user name, password, and database name become part of the DATABASE_URL environment variable I'll specify when running the web app. The new user is set up as the owner of the new database.

createuser -P -U postgres skilman
createdb -O skilman -U postgres skilman
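For reference, the user name, password, and database name from these commands combine into an Ecto-style URL like this (the password here is a hypothetical stand-in):

```shell
# ecto://USER:PASSWORD@HOST/DATABASE
export DATABASE_URL="ecto://skilman:s3cret@localhost/skilman"
```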

Backups with Tarsnap

I already had an account with Tarsnap from other projects and I thought it would be nice to use it here for automatic backups. I chose to build it from source, following the compilation instructions.

I'm only interested in backing up my PostgreSQL databases since that's where all of my web app data lives. I already have the application source code and the system configuration is reasonably well documented in these articles. Because of this simple use-case, I won't need to run tarsnap with root permissions. So, I installed it in my user's home directory rather than in /usr/local.

tar -xzf tarsnap-autoconf-1.0.39.tgz
cd tarsnap-autoconf-1.0.39
./configure --prefix=/home/user/tarsnap
make all
make install

Continuing with the getting started instructions, I registered my machine and made a key file. It's important to keep a copy of the key someplace safe. I use macOS locally, so I put it in my login keychain as a secure note using Keychain Access.

tarsnap/bin/tarsnap-keygen --keyfile tarsnap.key --user xxxxxx --machine xxxxxx

I put together a script to dump the PostgreSQL database for my web app into a staging directory, then back up the staging directory with Tarsnap. After that, it lists the backups in Tarsnap and deletes all but the 90 most recent. I plan to run this script daily, so that should give me 90 days of backups. The script includes a user name and password for the web app database, so it's important that the script file be readable only by my user (mode 0700). If I add more web apps, I'll need to add lines to this script for each of their databases.

PGPASSWORD=xxxxxxxx pg_dump -f backup-staging/skilman.sql \
	-U skilman skilman
tarsnap/bin/tarsnap -c --keyfile tarsnap.key --cachedir tarsnap/cache \
	-f $(date +%s) backup-staging
tarsnap/bin/tarsnap --list-archives --keyfile tarsnap.key | sort -nr | \
                sed '1,90d' | while read name ; do
        tarsnap/bin/tarsnap -d --keyfile tarsnap.key --cachedir tarsnap/cache \
                -f "$name"
done
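The retention arithmetic is easy to get wrong, so here's a quick way to sanity-check the sort/sed pipeline with fabricated epoch-stamped archive names, no tarsnap involved:

```shell
# Fabricate 95 daily epoch-stamped archive names, then run the same
# retention pipeline: sort newest-first, drop the 90 names we keep,
# and print the ones that would be deleted (the five oldest).
for i in $(seq 1 95); do echo $((1700000000 + i * 86400)); done \
        | sort -nr | sed '1,90d'
```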

I wanted to run the script every day, so I added it to my user's crontab(5).

@daily ~/

Prepare the web app

Since I would be using acme-client's HTTP challenge mechanism, the web app would be responsible for responding to the challenge by sending a file produced by acme-client. I wrote a little Plug to handle these requests and added it near the front of the chain in my endpoint. The plug would need to know where to look for the files, so I made this configurable with an environment variable.
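My actual Plug isn't shown here, but a minimal sketch of the idea might look like the following. The module name, the app/endpoint config keys, and the way the directory is read from configuration are all assumptions based on the endpoint configuration shown later in this article.

```elixir
defmodule SkilmanWeb.AcmeChallenge do
  # Hypothetical sketch: serve acme-client's HTTP-01 challenge files
  # from the directory configured via the ACME_DIRECTORY variable.
  @behaviour Plug
  import Plug.Conn

  def init(opts), do: opts

  def call(%Plug.Conn{path_info: [".well-known", "acme-challenge", token]} = conn, _opts) do
    dir = Application.fetch_env!(:skilman, SkilmanWeb.Endpoint)[:acme][:directory]
    # Path.basename guards against directory traversal in the token.
    case File.read(Path.join(dir, Path.basename(token))) do
      {:ok, body} ->
        conn
        |> put_resp_content_type("text/plain")
        |> send_resp(200, body)
        |> halt()

      {:error, _} ->
        conn |> send_resp(404, "not found") |> halt()
    end
  end

  # Any other request falls through to the rest of the endpoint.
  def call(conn, _opts), do: conn
end
```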

At least at the time I was setting this up (and perhaps still), the default endpoint settings in the generated config/prod.secret.exs would cause the web app to listen on all IPv6 addresses. I preferred that it listen only on the loopback interface, since I didn't want it reachable from the outside world except through the expected ports, which the packet filter redirects to loopback.

I also wanted to have the app handle HTTPS and load its key and certificates from the files managed by acme-client. I found that by also specifying the allowed protocol versions, ciphers, and elliptic curves I was able to improve the results of automated security scans. I'd like to do more here, but I think I'm off to a good start.

I updated the relevant portion of config/prod.secret.exs in my app.

config :skilman, SkilmanWeb.Endpoint,
  secret_key_base: secret_key_base,
  acme: [
    directory: System.get_env("ACME_DIRECTORY") ||
      raise "environment variable ACME_DIRECTORY is missing"
  ],
  http: [
    ip: {127, 0, 0, 1},
    port: String.to_integer(System.get_env("HTTP_PORT") ||
      raise "environment variable HTTP_PORT is missing")
  ],
  https: [
    ip: {127, 0, 0, 1},
    port: String.to_integer(System.get_env("HTTPS_PORT") ||
      raise "environment variable HTTPS_PORT is missing"),
    keyfile: (System.get_env("HTTPS_KEYFILE") ||
      raise "environment variable HTTPS_KEYFILE is missing"),
    certfile: (System.get_env("HTTPS_CERTFILE") ||
      raise "environment variable HTTPS_CERTFILE is missing"),
    cacertfile: (System.get_env("HTTPS_CACERTFILE") ||
      raise "environment variable HTTPS_CACERTFILE is missing"),
    versions: [:"tlsv1.2"],
    ciphers: [
      {:dhe_rsa, :aes_128_gcm, :aead, :sha256},
      {:ecdhe_rsa, :aes_128_gcm, :aead, :sha256},
      {:dhe_rsa, :aes_256_gcm, :aead, :sha384},
      {:ecdhe_rsa, :aes_256_gcm, :aead, :sha384}
    ],
    eccs: [

It's often necessary for an app to generate a full URL that points to the app itself, so it needs to know the base URL to use. I updated the endpoint configuration in my app's config/prod.exs with the correct information. In particular, I added the correct host name and scheme.

config :skilman, SkilmanWeb.Endpoint,
  url: [host: "", scheme: "https"],
  cache_static_manifest: "priv/static/cache_manifest.json"

Running the web app

I looked at the process for building and deploying Phoenix web apps using releases and it seemed pretty great. But, I think — at least at this time — that I don't need most of what it provides. I decided to just rsync the app to the server and run it with the environment set to “prod”.

Compilation (particularly of dependencies) takes a little while the first time, but Elixir is smart about not re-compiling things that haven't changed so it's not too bad after that.

On OpenBSD, we have openrsync by default, so I needed an option to (local) rsync to specify that. I also only wanted to synchronize certain files and directories to the server. I don't think the list will change much, so I specified them individually.

rsync --rsync-path=/usr/bin/openrsync -r --delete \
  config lib priv mix.exs mix.lock lucy:skilman

I wanted the web app to keep running after I logged out, so I put together a script to make a tmux session and start the app in one of its windows. The script sets several environment variables for the app: production mode, database connection details, secret key base, ports to listen on, and so on. Then it gets dependencies, compresses and digests static assets, runs mix ecto.create, performs any necessary database migrations, and starts the app in distributed mode (so an interactive Elixir session can be attached to it). The ecto.create step will produce an error, since we've already created the database and the PostgreSQL user we made doesn't have permission to create databases anyway, but it's still worth running: it also initializes the table Ecto uses to track migrations, if that table doesn't exist yet.

cd ~/$app_name
tmux new-session -d -n shell -s $app_name
tmux set-option -g -t $app_name remain-on-exit on
tmux new-window -d -n server -t $app_name:1 /bin/ksh -l -c "\
        export MIX_ENV=prod; \
        export DATABASE_URL=ecto://$app_name:$db_pass@localhost/$app_name; \
        export SECRET_KEY_BASE=$secret; \
        export ACME_DIRECTORY=/var/www/acme; \
        export HTTP_PORT=4210; \
        export HTTPS_PORT=4211; \
        export HTTPS_KEYFILE=/home/user/ssl/private/; \
        export HTTPS_CERTFILE=/home/user/ssl/; \
        export HTTPS_CACERTFILE=/home/user/ssl/; \
        mix deps.get; \
        mix phx.digest; \
        mix ecto.create; \
        mix ecto.migrate; \
        elixir --sname $app_name@localhost -S mix phx.server"
tmux new-window -d -n iex -t $app_name:2 /bin/ksh -l -c "\
        iex --sname $app_name-iex@localhost --remsh $app_name@localhost"
tmux new-window -d -n psql -t $app_name:3 \
        "PGPASSWORD=$db_pass psql $app_name $app_name"

I made sure to chmod 0700 this script too since it contains secrets.

After starting the app, the script goes on to add a few other useful windows in the tmux session: one with an interactive Elixir session attached to the running app and one with psql logged into the app's database as the app. There's also a window with a shell in the app's directory from when the session was created.

If I need to, I can log in to the server and attach tmux to this session and have everything I need to diagnose or debug the app at my fingertips.

tmux attach -t skilman

If I want to re-start the application (perhaps after rsync'ing over a new version), I can do so while attached to the tmux session and looking at the app server window by hitting control-B followed by colon to get a command prompt, then entering respawn-window -k. This works in the interactive Elixir window too, in case it becomes disconnected. The re-spawn can be initiated from the command line as well.

tmux respawn-window -t skilman:1 -k

To stop the entire app, I can kill the tmux session.

tmux kill-session -t skilman

I wanted my apps to start up with the system, too, so I added a line to my user's crontab to run the script at startup.

@reboot ~/

Deployment script

Deployment wasn't too bad with this setup, but I thought I could automate it with a script inside my application's project directory.

rsync -r --delete --exclude-from .gitignore --exclude .git -l \
	. vultr:$app_name
ssh vultr "tmux respawn-window -t $app_name:1 -k"

Now, after making changes to the app locally, I can call this script to deploy it and have the changes live in a few seconds.

Wrapping up

I hope that this article has been helpful to you. If this is the kind of thing you're into, you may enjoy my other articles. If you have any questions or comments, please feel free to drop me an e-mail.

Aaron D. Parks