This article is getting pretty old. There have been several OpenBSD releases since I wrote it.
I have some Phoenix web apps and wanted to host them on OpenBSD. It's just a few small apps on a single server, not anything complex, so I was hoping to figure out a simple approach (no need for anything enterprise-grade or web-scale). I think I was able to come up with something good, which I'll share below.
Although I could do SSL termination individually in each of my web apps, I thought I would like to perform this function in a single place for all of them. It might also be nice in some cases to be able to run more than one web app behind a single IP address. After looking at the manual page for relayd, I thought it would be a good fit for what I was trying to do.
When I had set up my server, I only assigned a single address from my allocation to the network interface and I was already listening on this address with httpd. I didn't want to pile everything on a single IP address, so I thought I should configure an alias on the interface for one of my other addresses. I added a line like this to /etc/hostname.bnx0 so that the alias would be configured at startup:
inet alias 69.63.227.52 255.255.255.255
Then I ran ifconfig to add the alias immediately:
ifconfig bnx0 inet alias 69.63.227.52 netmask 255.255.255.255
Apparently setting the mask to all-ones is the thing to do for aliases that are in the same subnet as the primary address (or as another alias that has a regular mask set, I think). It wasn't immediately obvious to me why, and I think this might be different on other network stacks. I'd like to research it later and learn more.
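For context, a complete /etc/hostname.bnx0 with both the primary address and the alias might look something like this (the primary address and its mask here are made up for illustration; they aren't from my actual configuration):

```
# primary address (hypothetical), then the alias with an all-ones mask
inet 69.63.227.50 255.255.255.0
inet alias 69.63.227.52 255.255.255.255
```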
I started out my relayd configuration with a macro for the first alias address I had just added to the network interface and a table for the first web app I wanted to set up. The web apps will listen on the loopback interface (different ports, of course).
first_alias=69.63.227.52

table <skilman> { lo0 }
I wanted to start out by relaying HTTP (non-TLS) traffic, so I'd need an http protocol section. I figured I might end up having more than one web app behind this address, so I set up pass rules to forward to my app if the host header matches one of the host names I expect. relayd uses the last rule that matches, so I put a block rule before these to catch any connections with a host header that didn't end up matching. I figured it would be nice to return an error page rather than just dropping the connection, so I started the section out with a “return error” directive.
http protocol "http" {
    return error
    block header "host"
    match forward to <skilman> header "host" value "skilman.com"
    match forward to <skilman> header "host" value "www.skilman.com"
}
With the protocol set up, I could add the relay section. This starts out with a forward directive for the web app. This is where I set which port to contact the web app on. I can add more forward directives here later as I add web apps; which one is used is determined by the rules in the protocol section above. I also added directives to specify the address and port where relayd should listen and to connect this relay section to the protocol section above.
relay "http" {
    forward to <skilman> port 4220
    listen on $first_alias port 80
    protocol "http"
}
This was enough configuration to get going, so I enabled relayd and started it.
rcctl enable relayd
rcctl start relayd
As I adjusted my configuration, I used relayd -n a lot to check the syntax and rcctl reload relayd to load the new configuration.
Phoenix is written in Elixir, which in turn requires Erlang. I checked the Elixir install guide and found that there's an OpenBSD package for Elixir.
pkg_add elixir
Following the suggestion given by the installer, I created /etc/man.conf and added Erlang's manual page directory in case I should want to examine those manual pages. The manual page for man.conf provided the defaults.
manpath /usr/share/man
manpath /usr/X11R6/man
manpath /usr/local/man
manpath /usr/local/lib/erlang21/man
I added a line to my ~/.profile to set LC_CTYPE. Setting LC_CTYPE is important on OpenBSD because although the base system mostly ignores the locale (other than having limited support for UTF-8), it is very important to Elixir that the Erlang VM run with UTF-8 character encoding (Elixir uses UTF-8 natively, but depends on support from the VM to do so). The locale(1) manual page has more details about OpenBSD locale support.
export LC_CTYPE="en_US.UTF-8"
I also pre-emptively installed Hex (the package manager for Elixir and Erlang) and rebar (an older build tool for Erlang, used by a Phoenix dependency) since all of my web apps use them.
mix local.hex --force
mix local.rebar --force
I also installed PostgreSQL from packages, since that's where my web apps keep their data.

pkg_add postgresql-client postgresql-server
After installing the packages, I initialized a database cluster as the _postgresql user. I used the flags recommended in /usr/local/share/doc/pkg-readmes/postgresql-server. They set the PostgreSQL super-user name to “postgres”, enable scram-sha-256 authentication (the current best practice; the likely alternative, md5, is deprecated), set the encoding to UTF-8, and cause the initdb program to prompt for a new super-user password.
su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W
exit
There was some performance tuning guidance in the read-me file too. I glanced through it. It may come in handy if I need to scale my apps up in the future.
I enabled the postgresql daemon and started it.
rcctl enable postgresql
rcctl start postgresql
I created a new PostgreSQL user and a new database for one of my web apps. I'll need to do this for each web app. The user name, password, and database name become part of the REPO_URL environment variable I'll specify when running the web app. The new user is set up as the owner of the new database.
createuser -P -U postgres skilman
createdb -O skilman -U postgres skilman
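To make the connection explicit, here's a small sketch (with a placeholder password) of how the user name, password, and database name compose into the REPO_URL that the app's start-up script will export:

```shell
db_user=skilman
db_pass=xxxxxxxx   # placeholder; use the password given to createuser -P
db_name=skilman

# Ecto accepts a connection URL of the form ecto://USER:PASS@HOST/DATABASE
REPO_URL="ecto://${db_user}:${db_pass}@localhost/${db_name}"
echo "$REPO_URL"   # → ecto://skilman:xxxxxxxx@localhost/skilman
```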
I already had an account with Tarsnap from other projects and I thought it would be nice to use it here for automatic backups. I chose to build it from source, following the compilation instructions.
I'm only interested in backing up my PostgreSQL databases since that's where all of my web app data lives. I already have the application source code and the system configuration is reasonably well documented in these articles. Because of this simple use-case, I won't need to run tarsnap with root permissions. So, I installed it in my user's home directory rather than in /usr/local.
tar -xzf tarsnap-autoconf-1.0.39.tgz
cd tarsnap-autoconf-1.0.39
./configure --prefix=/home/user/tarsnap
make all
make install
Continuing with the getting started instructions, I registered my machine and made a key file. It's important to keep a copy of the key someplace safe. I use macOS locally, so I put it in my login keychain as a secure note using Keychain Access.
tarsnap/bin/tarsnap-keygen --keyfile tarsnap.key --user xxxxxx --machine xxxxxx
I put together a script to dump the PostgreSQL database for my web app into a staging directory, then back up the staging directory with Tarsnap. After that, it lists the backups in Tarsnap and deletes all but the 90 most recent. I plan to run this script daily, so that should give me 90 days of backups. The script includes a user name and password for the web app database, so it's important that the script file be accessible only by my user (mode 0700). If I add more web apps, I'll need to add lines to this script for each of their databases.
#!/bin/ksh
PGPASSWORD=xxxxxxxx pg_dump -f backup-staging/skilman.sql \
    -U skilman skilman
tarsnap/bin/tarsnap -c --keyfile tarsnap.key --cachedir tarsnap/cache \
    -f $(date +%s) backup-staging
tarsnap/bin/tarsnap --list-archives --keyfile tarsnap.key | sort -nr | \
    sed '1,90d' | while read name ; do
    tarsnap/bin/tarsnap -d --keyfile tarsnap.key --cachedir tarsnap/cache \
        -f $name
done
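The retention pipeline can be sanity-checked on its own with fake archive names. Since the archives are named with Unix timestamps, sorting numerically in reverse puts the newest first; sed '1,90d' then drops the newest 90 lines, leaving exactly the older archives that should be deleted:

```shell
# Simulate 95 archives named 1 through 95 (a larger number means newer).
old=$(seq 1 95 | sort -nr | sed '1,90d')
echo $old   # → 5 4 3 2 1 (the five oldest, which the script would delete)
```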
I wanted to run the script every day, so I added it to my user's crontab(5).
@daily ~/backup.sh
Since I would be using acme-client's HTTP challenge mechanism, the web app would be responsible for responding to the challenge by sending a file produced by acme-client. I wrote a little Plug to handle these requests and added it near the front of the chain in my endpoint. The plug would need to know where to look for the files, so I made this configurable with an environment variable.
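The plug itself is Elixir, but the lookup it performs is simple enough to sketch in shell: take the token from the end of the request path and read the matching file from the directory named by the environment variable. (The token and file contents below are invented for illustration.)

```shell
# Stand-in for the directory acme-client writes challenge files into.
ACME_DIRECTORY=$(mktemp -d)
echo "abc123.deadbeef" > "$ACME_DIRECTORY/abc123"

# An HTTP-01 request arrives for /.well-known/acme-challenge/<token>;
# the response body is the contents of the file named after the token.
path="/.well-known/acme-challenge/abc123"
token="${path##*/}"
body=$(cat "$ACME_DIRECTORY/$token")
echo "$body"   # → abc123.deadbeef
```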
I also added a plug to redirect requests to the canonical host name (from www.skilman.com to skilman.com, for example) and to the HTTPS scheme. I set this plug up after the ACME plug in the pipeline so that those requests could be completed over HTTP.
I updated the endpoint configuration in config/prod.exs to have the web server listen on the loopback address at a port specified by an environment variable. It's sometimes necessary for an app to generate a full URL that points to the app itself, so it needs to know the base URL to use. I updated the endpoint configuration again with the correct information. In particular, I added the correct host name and scheme. Since I planned on having relayd do TLS termination, I made sure to set the scheme to https.
config :skilman, SkilmanWeb.Endpoint,
  http: [ip: {127, 0, 0, 1}, port: http_port],
  url: [host: "skilman.com", scheme: "https"]
I looked at the process for building and deploying Phoenix web apps using releases and it seemed pretty great. But, I think — at least at this time — that I don't need most of what it provides. I decided to just rsync the app to the server and run it with the environment set to “prod”.
Compilation (particularly of dependencies) takes a little while the first time, but Elixir is smart about not re-compiling things that haven't changed so it's not too bad after that.
On OpenBSD, we have openrsync by default, so I needed an option to (local) rsync to specify that. I also only wanted to synchronize certain files and directories to the server. I don't think the list will change much, so I specified them individually.
rsync --rsync-path=/usr/bin/openrsync -r --delete \ config lib priv mix.exs mix.lock lucy:skilman
I wanted the web app to keep running after I logged out, so I put together a script to make a tmux session and start the app in one of its windows. The script sets several environment variables for the app: production mode, database connection details, secret key base, ports to listen on, and so on. Then it gets dependencies, compresses and digests static assets, creates and initializes the database, performs any necessary database migrations, and starts the app in distributed mode (so an interactive Elixir session can be attached to it). The ecto.create step will produce an error, since we've already created the database and the PostgreSQL user we made doesn't have permission to create it anyway, but it also initializes the table Ecto uses for tracking migrations if it doesn't exist, which is important.
#!/bin/ksh
app_name=skilman
db_pass=xxxxxxxxxxx

cd ~/$app_name

cat > tmux_server.sh <<-EOF
export LC_CTYPE="en_US.UTF-8"
export MIX_ENV=prod
export REPO_URL=ecto://$app_name:$db_pass@localhost/$app_name
export SECRET_KEY_BASE=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export ACME_DIRECTORY=/var/www/acme
export HTTP_PORT=4210
mix deps.get
mix ecto.create
mix ecto.migrate
elixir --sname $app_name@localhost -S mix phx.server
EOF

cat > tmux_iex.sh <<-EOF
export LC_CTYPE="en_US.UTF-8"
iex --sname $app_name-iex@localhost --remsh $app_name@localhost
EOF

cat > tmux_psql.sh <<-EOF
export PGPASSWORD=$db_pass
psql $app_name $app_name
EOF

chmod 700 tmux_server.sh tmux_iex.sh tmux_psql.sh

tmux new-session -d -n shell -s $app_name
tmux set-option -g -t $app_name remain-on-exit on
tmux new-window -d -n server -t $app_name:1 ./tmux_server.sh
tmux new-window -d -n iex -t $app_name:2 ./tmux_iex.sh
tmux new-window -d -n psql -t $app_name:3 ./tmux_psql.sh
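One detail worth noting about the script above: because the here-document delimiters are unquoted, variables like $app_name and $db_pass are expanded when the helper scripts are written, so the generated files contain the literal values. A minimal sketch of that pattern (with a throwaway file name):

```shell
app_name=skilman

# Unquoted EOF means $app_name is expanded as the file is written.
cat > demo_helper.sh <<-EOF
export NAME=$app_name
EOF

generated=$(cat demo_helper.sh)
echo "$generated"   # → export NAME=skilman
rm demo_helper.sh
```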
I made sure to chmod 0700 this script too since it contains secrets.
After starting the app, the script goes on to add a few other useful windows in the tmux session: one with an interactive Elixir session attached to the running app and one with psql logged into the app's database as the app. There's also a window with a shell in the app's directory from when the session was created.
If I need to, I can log in to the server and attach tmux to this session and have everything I need to diagnose or debug the app at my fingertips.
tmux attach -t skilman
If I want to re-start the application (perhaps after rsync'ing over a new version), I can do so while attached to the tmux session and looking at the app server window by hitting control-B followed by colon to get a command prompt, then entering respawn-window -k. This works in the interactive Elixir window too, in case it becomes disconnected. The re-spawn can be initiated from the command line as well.
tmux respawn-window -t skilman:1 -k
To stop the entire app, I can kill the tmux session.
tmux kill-session -t skilman
I wanted my apps to start up with the system, too, so I added a line to my user's crontab to run the script at startup.
@reboot ~/skilman.sh
Deployment wasn't too bad with this setup, but I thought I could automate it with a script inside my application's project directory.
#!/bin/bash
app_name=skilman
rsync -r --delete --exclude-from .gitignore --exclude .git -l \
    . lucy:$app_name
ssh lucy "tmux respawn-window -t $app_name:1 -k"
Now, after making changes to the app locally, I can call this script to deploy it and have the changes live in a few seconds.
I wanted to use HTTPS for my web apps and I wanted to have the certificates automatically managed. I was already familiar with acme-client, so I configured it to manage these certificates as well.
I started by adding a section to /etc/acme-client.conf for the web app's certificates and key.
domain "skilman.com" {
    domain key "/etc/ssl/private/skilman.com.key"
    domain full chain certificate "/etc/ssl/skilman.com.crt"
}
Finally, I added an entry to root's crontab to check on the certificate every day as well as an entry to reload relayd a little bit after that so it will pick up any changes to the certificate.
28 4 * * * /usr/sbin/acme-client skilman.com
48 4 * * * /usr/sbin/rcctl reload relayd > /dev/null
To get my first certificate, I manually ran the ACME client.
acme-client skilman.com
With my certificate safely in hand, I could add http protocol and relay sections to my relayd configuration to listen on port 443 and do TLS termination for my web app. These sections work mostly the same as those for HTTP, with a couple of additional options for TLS. From the tls keypair directive in the protocol section, relayd knows to look in /etc/ssl, where we had acme-client put the certificate and key. I added a match directive to set the x-forwarded-proto header to https. I configured Plug.RewriteOn to use this to set the scheme, which my redirect plug needs to know. Also, there's no need to have rules for all the different host name aliases here since I don't have certificates for them.
http protocol "https" {
    return error
    block header "host"
    match header set "x-forwarded-proto" value "https"
    pass forward to <skilman> header "host" value "skilman.com"
    tls keypair "skilman.com"
}

relay "https" {
    forward to <skilman> port 4220
    listen on $first_alias port 443 tls
    protocol "https"
}
With a quick restart of relayd, everything was working.
Since my apps are started in distributed mode, I can use SSH to forward the necessary ports and connect an iex session running on my local machine. One of the ports (for the port-mapper) is always the same, but the other is selected randomly (thus the need for a port-mapper). epmd makes it easy to get the port numbers:
lucy$ epmd21 -names
epmd: up and running on port 4369 with data:
name skilman at port 15246
With the port numbers in hand, I can re-start my SSH connection with port forwarding:
ssh -L 4369:localhost:4369 -L 15246:localhost:15246 lucy
Just to check everything is working, I can run epmd -names locally in another window to see I'm getting the same output as on the server. Of course, if epmd were running locally the SSH command would have failed (since the port would already be in use), but I don't use epmd locally much.
I also needed the cookie from the server, which was in ~/.erlang.cookie. Distributed nodes can only interact with each other when their cookies are equal.
Now for the fun stuff! I can connect a local Erlang observer instance to the running server node.
erl -sname debug@localhost -setcookie xxxxx -run observer
To connect to the server node, I go to the Nodes menu in Observer (for some reason, it takes almost a minute for the menus to become clickable, but once they do, it works fine), select Connect node, and enter the server node's name (like skilman@localhost).
I hope that this article has been helpful to you. If this is the kind of thing you're into, you may enjoy my other articles. If you have any questions or comments, please feel free to drop me an e-mail.
Aaron D. Parks