This article is getting pretty old. There have been several OpenBSD releases since I wrote it.
I wanted to host some simple websites on my OpenBSD server. I reviewed the manual pages and it sounded like httpd would be a good option.
httpd serves web pages over HTTP or HTTPS. It can also do FastCGI.
I created /etc/httpd.conf and started out with a macro for the IP address I'd like to bind. I didn't want to bind all addresses since some of them will be used for other purposes.
main="69.63.227.51"
I added a server section for each site to listen on port 80 for HTTP traffic.
I wanted to handle ACME challenges (from Let's Encrypt) here, but redirect all other requests to HTTPS. Oh, and I wanted to also redirect any requests for www, so I added an alias for that. Many clients try HTTP if no protocol is specified, and it's common for folks to type www in front of a web address out of habit.
server "heartfx.net" {
    listen on $main port 80
    alias "www.heartfx.net"
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
    location "*" {
        block return 301 "https://heartfx.net$DOCUMENT_URI"
    }
}
DOCUMENT_URI gives the path without any query string. If you'd like to include the query string, you might prefer REQUEST_URI.
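For example, the redirect in the location block above could carry the query string along by swapping the macro (an illustrative variant, not what I ended up using):

block return 301 "https://heartfx.net$REQUEST_URI"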
I noticed in testing that other host names would also get the redirect shown
above. It seems like the first server section for an address and port
combination serves as the default when the host
header doesn't
get a match. To handle that case, I added a catch-all section listening on the
same address and port that just drops the connection.
server "*" {
    listen on $main port 80
    block
}
I needed a place to put the files for the site and I wanted to be able to update the site by sftp without logging in as the superuser, so I created a directory and changed the ownership to a non-privileged user. The directory needs to be under /var/www because (by default) httpd chroots to /var/www.
mkdir /var/www/htdocs/heartfx.net
chown user.user /var/www/htdocs/heartfx.net
Before I could do much with TLS, I figured I should get some certificates. That meant it was time to start httpd so it could at least respond to ACME challenges. rcctl(8) is used to control system services, both for enabling and disabling them and for starting, stopping, reloading, and such.
rcctl enable httpd
rcctl start httpd
And if you're fussing around with the configuration, you can reload it with:
rcctl reload httpd
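While fussing, it's also handy to have httpd check the file for syntax errors without touching the running server:

httpd -n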
I checked with my DNS host and made sure that I had address records for each host name set up and pointing at my server.
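If you like to double-check that from the shell, something like dig(1) does the trick (assuming it's available on your release, in base or from ports):

dig +short heartfx.net
dig +short www.heartfx.net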
I set up /etc/acme-client.conf with a section for Let's Encrypt and a section for each host name.
authority letsencrypt {
    account key "/etc/acme/letsencrypt-privkey.pem"
    api url "https://acme-v02.api.letsencrypt.org/directory"
}

domain "heartfx.net" {
    domain key "/etc/ssl/private/heartfx.net.key"
    domain full chain certificate "/etc/ssl/heartfx.net.crt"
}
That and running acme-client for each host name was enough to get me a Let's Encrypt account and certificates.
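Running it by hand looks like this, repeated once per configured domain; the -v flag just makes acme-client chattier about what it's doing:

acme-client -v heartfx.net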
Let's Encrypt certificates have a short expiration window to encourage automated renewal. Automation is as simple as adding a line to crontab(5) for each certificate:
28 4 * * * /usr/sbin/acme-client heartfx.net
I'll need to add such a line for each certificate. If I end up with a lot, maybe a small script will be a better way to go; I've sketched one possibility below. I chose to have acme-client check my certificates every day in the early morning. A little while after the certificates are checked (and possibly updated), I ask httpd to reload its configuration file so it can pick them up (I don't think it will notice otherwise, please drop me a line if I'm mistaken).
48 4 * * * /usr/sbin/rcctl reload httpd > /dev/null
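If I end up with more than a handful of certificates, those per-certificate lines could collapse into a single cron entry that runs a small script. Here's a rough sketch of the idea (the domain list is just an example; acme-client exits zero when it actually wrote a new certificate, so the reload only happens when something changed):

#!/bin/sh
# Sketch of a renewal helper: try each certificate, then reload httpd
# only if acme-client reports that at least one of them was renewed.
domains="heartfx.net phasedust.com"    # example list of host names
changed=0
for d in $domains; do
    /usr/sbin/acme-client "$d" && changed=1
done
if [ "$changed" -eq 1 ]; then
    /usr/sbin/rcctl reload httpd > /dev/null
fi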
With a certificate available, I was able to add another server section for each site to listen on port 443 for HTTPS requests. Each section tells httpd where to find the TLS certificate and key as well as the files for the site.
server "heartfx.net" {
    listen on $main tls port 443
    tls {
        certificate "/etc/ssl/heartfx.net.crt"
        key "/etc/ssl/private/heartfx.net.key"
    }
    root "/htdocs/heartfx.net"
}
I wanted to add cache control headers, but httpd doesn't (yet?) seem to support that. I would like to be able to compress some of my static files, particularly favicon.ico and any HTML or text documents. httpd doesn't have support for dynamic compression (and likely won't, since I guess it's a security mess), but I'm hoping for future support for static compression.
A quick reload put the configuration into effect:
rcctl reload httpd
I can add more static sites to this configuration by adding additional server sections and creating the directories for them.
I noticed in testing that my browser wanted to download and save PDF files rather than open them in-browser. I figured maybe they were being served with a generic content type header. Checking the manual page for the httpd configuration file confirmed that a few file extensions are automatically converted to specific content types by default, but PDF is not included. I added a section to the configuration file to duplicate the defaults and add an entry for PDF files and a few other file types that I commonly use. I can add additional file types to this as needed.
types {
    text/css                    css
    text/html                   html htm
    text/plain                  txt
    image/gif                   gif
    image/jpeg                  jpeg jpg
    image/png                   png
    application/javascript      js
    application/xml             xml
    image/vnd.microsoft.icon    ico
    application/pdf             pdf
    image/svg+xml               svg
    video/mp4                   mp4
    audio/wave                  wav
    audio/mpeg                  mp3
}
Many of the sites I wanted to host were well-served (ha!) with what I had set up so far, but I had a couple that — while they didn't need the full web-app treatment — could benefit from an old-school CGI script or two. What better language for an old-school CGI script than Perl?
OpenBSD comes with a FastCGI server that wraps old-school CGI scripts. No surprise that it integrates well with httpd. It's called slowcgi and you can enable it and start it like this:
rcctl enable slowcgi
rcctl start slowcgi
With slowcgi running, I added a stanza to my site in httpd.conf to let the web server know to use FastCGI for the cgi-bin directory. The default configuration is to connect to slowcgi's socket, so I didn't have to do much typing. I did want to pass a couple of environment variables to my scripts, and I was able to set them here. If you're going to keep super-secret configuration stuff in httpd.conf, it's probably worth checking the permissions on it and making sure they're as tight as you'd like.
location "/cgi-bin/*" {
    fastcgi param DB_USERNAME foo
    fastcgi param DB_PASSWORD bar
}
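On that note about permissions, tightening things up might look like this, assuming you're fine with the file being readable by root only (httpd parses its configuration before dropping privileges, so that works):

chown root:wheel /etc/httpd.conf
chmod 600 /etc/httpd.conf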
My next task was to get Perl up and running inside httpd's chroot jail. I started by making a usr/bin directory and copying Perl into it:
mkdir -p /var/www/usr/bin
cp /usr/bin/perl /var/www/usr/bin
The Perl binary depends on some dynamic libraries which I also copied into the jail. I used ldd to see which ones it depended on. After copying them into the jail, I had ldconfig scan them to build a hints file for ld.so. I hadn't copied anything into usr/local/lib yet, but if I do in the future, I'll want to have ldconfig scan it too (it scans usr/lib by default).
ldd /usr/bin/perl
mkdir /var/www/usr/lib
mkdir /var/www/usr/libexec
mkdir /var/www/sbin
cp /usr/lib/libperl.so.19.0 /var/www/usr/lib
cp /usr/lib/libm.so.10.1 /var/www/usr/lib
cp /usr/lib/libc.so.95.1 /var/www/usr/lib
cp /usr/libexec/ld.so /var/www/usr/libexec/ld.so
cp /sbin/ldconfig /var/www/sbin
chroot /var/www ldconfig /usr/local/lib
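A quick way to sanity-check the result is to run the copied interpreter inside the jail by hand:

chroot /var/www /usr/bin/perl -e 'print "hello from the jail\n"'

If that prints its greeting instead of complaining about missing libraries, the interpreter has what it needs.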
Back in the day, we would use CGI.pm in all of our Perl CGI programs. I guess it's been deprecated and the new hotness is to use one of the new web app frameworks. I actually thought CGI.pm was a little heavy for what I was doing, never mind a whole framework. I decided to go the other way and see how little I could get away with. I looked at cgi-lib.pl. Version 1.14 was pretty small and easy to understand. Version 2.18 was considerably more complex, but made up for it by supporting multipart/form-data encoding.
I guess I'm just hard to please, because I ended up putting together my own cgi.pl that just parses parameters into %params and provides a convenience function cgi_die for responding with a 500 error. It's about sixty lines (including documentation) and does what I need for now.
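For flavor, here's a hedged sketch of that sort of helper. It isn't my actual cgi.pl, just an illustration of the shape of the thing: fill %params from the query string (or a url-encoded POST body) and provide cgi_die for bailing out with a 500.

#!/usr/bin/perl
# Sketch of a minimal CGI helper, meant to be pulled in with
# require "./cgi.pl"; -- not the real thing, just the general idea.
use strict;
use warnings;

our %params;

# Give up with a minimal 500 response.
sub cgi_die {
    my ($msg) = @_;
    print "Status: 500 Internal Server Error\r\n";
    print "Content-Type: text/plain\r\n\r\n";
    print "$msg\n";
    exit 1;
}

# Undo URL encoding: '+' becomes a space, %XX becomes the byte it names.
sub url_decode {
    my ($s) = @_;
    $s =~ tr/+/ /;
    $s =~ s/%([0-9A-Fa-f]{2})/chr hex $1/ge;
    return $s;
}

my $query = $ENV{QUERY_STRING} // '';
if (($ENV{REQUEST_METHOD} // '') eq 'POST' && ($ENV{CONTENT_LENGTH} // 0) > 0) {
    read(STDIN, $query, $ENV{CONTENT_LENGTH})
        or cgi_die("couldn't read request body");
}

for my $pair (split /[&;]/, $query) {
    next if $pair eq '';
    my ($k, $v) = split /=/, $pair, 2;
    $params{url_decode($k)} = url_decode($v // '');
}

1;    # keep require() happy

A script would then require "./cgi.pl"; near the top and read whatever it needs out of %params.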
Perl comes with a pretty good selection of built-in modules and I wanted to use a couple of them. I found the pure Perl ones were in /usr/libdata/perl5 while the ones that use XSUBS were in /usr/libdata/perl5/amd64-openbsd on my box.
One of the built-in modules I wanted to use was HTTP::Tiny, for making calls to a web service. It also supports HTTPS, but for that it needs IO::Socket::SSL, which is not a built-in module. It is, however, available in the OpenBSD ports tree along with its dependency Net::SSLeay. I installed them:
pkg_add p5-IO-Socket-SSL
I found the newly installed modules in /usr/local/libdata/perl5 and /usr/local/libdata/perl5/amd64-openbsd.
I copied the built-in modules and the ones I had just installed into the chroot jail. I used ldd to identify the libraries that were required by the modules that used XSUBS and copied those in too.
mkdir -p /var/www/usr/libdata/
cp -R /usr/libdata/perl5 /var/www/usr/libdata
find /var/www/usr/libdata -name \*.so -exec ldd \{\} \;
cp /usr/lib/libm.so.10.1 /var/www/usr/lib
mkdir -p /var/www/usr/local/libdata/
cp -R /usr/local/libdata/perl5 /var/www/usr/local/libdata
find /var/www/usr/local/libdata -name \*.so -exec ldd \{\} \;
cp /usr/lib/libssl.so.47.6 /usr/lib/libcrypto.so.45.5 /usr/lib/libz.so.5.0 \
    /var/www/usr/lib
HTTP::Tiny is an HTTP client and it wants to be able to resolve names to Internet addresses. This required copying a couple more files into the jail:
mkdir /var/www/etc
cp /etc/resolv.conf /etc/hosts /var/www/etc
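With the modules and the resolver files in place, a call along these lines should work from a CGI script (the URL here is made up for illustration):

#!/usr/bin/perl
# Illustrative HTTPS request with HTTP::Tiny; IO::Socket::SSL and
# Net::SSLeay need to be present in the jail for the https:// scheme.
use strict;
use warnings;
use HTTP::Tiny;

my $http = HTTP::Tiny->new(timeout => 10);
my $res  = $http->get('https://api.example.com/status');

die "request failed: $res->{status} $res->{reason}\n" unless $res->{success};

print "Content-Type: text/plain\r\n\r\n";
print $res->{content};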
I found myself using Perl's localtime function in one script. It turns out that it needs some files to do what it does, so I copied them into the jail as well:
mkdir /var/www/usr/share
cp -R /usr/share/zoneinfo /var/www/usr/share
cp /etc/localtime /var/www/etc
It may sound crazy, but for the scripts I've put together so far, I'm not using a database. I'm just using plain files and locking them with flock to prevent trouble if multiple instances of the script are running; there's a sketch of that locking pattern after the backup line below. (Is it better if I call it a light-weight document store?) I wanted to back up these files, so I added a line to my backup script to archive them into the backup staging directory before tarsnap runs:
(cd /var/www/htdocs/phasedust.com && tar cf ~/backup-staging/phasedust.tar *)
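The locking itself is just the usual open-then-flock dance. Here's a sketch of that pattern (the path is made up and is relative to httpd's chroot; this isn't lifted from one of my scripts):

#!/usr/bin/perl
# Bump a counter stored in a plain file, holding an exclusive lock so
# concurrent CGI instances don't clobber each other.
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = '/htdocs/phasedust.com/data/counter.txt';

open(my $fh, '+<', $file) or die "open $file: $!";
flock($fh, LOCK_EX) or die "flock $file: $!";    # waits for the lock

chomp(my $count = <$fh> // 0);
$count++;

# Rewrite the file in place while still holding the lock.
seek($fh, 0, 0) or die "seek: $!";
truncate($fh, 0) or die "truncate: $!";
print $fh "$count\n";

close($fh);    # closing the handle releases the lock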
I've been using Transmit to deploy site content. I set up a separate “Server” for each site, even though they're on the same actual server. The credentials are the same for each one, but I'm able to set the local and remote directories for convenience. Deployment, then, is just a matter of selecting the site and hitting the “synchronize” button.
I hope that you found this helpful. If this is the kind of thing you're into, you may enjoy my other articles. If you have any questions or comments, please feel free to drop me an e-mail.
Aaron D. Parks