FreeBSD 12 Jail Host - Part 1 - Basic System Setup (2021)
Estimated reading time: Seal yourself in a dark room for a day or two.
I was looking at doing an overhaul and cleanup of the off-premises server I run in AWS, since it's now accumulated half a decade of cruft, and it's good to touch on the setup of some of the applications I depend on a few times a lifetime to make sure I never get too rusty.
At home I use Proxmox as a platform for managing my home server; it conveniently lets me set up mixed-platform VMs, containers, and complex networking pretty easily. My off-premises server is in AWS, and since they only support nested virtualization on some really pricey instance types that I otherwise had no need for, I doubled back around to what I was originally running, and had been for over two decades: FreeBSD.
What I found is that over the years one of the main things that always put FreeBSD head and shoulders above any other option for me—the FreeBSD Handbook—was no longer nearly as comprehensive or detailed. It was downright abandoned. That made the entire journey one of finding old, confusing forum posts and otherwise just a lot of shooting into the inky blackness and seeing what hit.
But if you're reading this, then everything's working. So here's what I did for the next poor sap that comes along. Well, not too poor. This is not meant to be a tutorial. Where any sort of lower-level detail is included, it's because it was probably hard-won knowledge. Frankly, most of the point of this writing is just to provide myself a reference for next time.
Apologies in advance for any typos and the general writing style. This was much more about converting some notes I took during setup into something vaguely approaching useful in the future and containing some context than writing something well.
All the parts in this series.
Everything builds on the AWS Marketplace FreeBSD 12 images. I spun up an EC2 instance (t3.medium, but you do you) based off of this AMI. I did find that a new instance, rather than starting, would often go into a boot loop. Checking the instance screenshot would show it continually trying to boot but never actually making progress. I never did catch a screenshot that gave any indication as to why this was happening, but terminating and creating a new instance usually had it come up successfully. Off to a good start.
Default login uses the key you provided when creating the instance and the `ec2-user` account.
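For reference, the first login looks something like this (the key path and hostname here are placeholders for your own):

```shell
# Key pair selected at instance creation; hostname from the EC2 console.
# ec2-user is the default account on the FreeBSD Marketplace AMI.
ssh -i ~/.ssh/my-aws-key.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com
```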
The First Steps
Let's get some quality of life stuff figured out and some housekeeping done. Namely, install some tools we're comfortable with and do some basic best practice security work. (We'll circle back around and lock things down some more once we've got the major changes out of the way.)
Install vim, sudo, bash, and whatever else you think you can't live without.
$ su -
# pkg install vim-console sudo bash
[...]
#
Add an Account
Let's stop using the default `ec2-user` account and set up one of our own.
# pw group add myserver
# pw user add apippin -d /home/apippin -m -g myserver -G wheel -s /usr/local/bin/bash
# mkdir ~apippin/.ssh && cat >> ~apippin/.ssh/authorized_keys
[paste key here]
^D
# chown -R apippin ~apippin/.ssh && chmod 700 ~apippin/.ssh
Give yourself passwordless access to root from your account. By virtue of being in the `wheel` group you'll have access via `su`, but I find `sudo` a little more flexible, so let's configure the same access through that. In your /usr/local/etc/sudoers file, add:
%wheel ALL=(ALL) NOPASSWD:ALL
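If you want a guard rail here, `visudo` syntax-checks the file before it saves, which is worth having for a file that can lock you out of root. A quick sketch (the `-f` path assumes the pkg-installed sudo keeps its config under /usr/local/etc):

```shell
# Edit with a syntax check on save
visudo -f /usr/local/etc/sudoers

# Or just validate an already-edited file without opening an editor
visudo -c -f /usr/local/etc/sudoers
```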
Configure SSH to only allow access via your newly created user. I am also going to move my SSH
daemon to a different port.
I'm using AWS Security Groups as my primary firewall for this instance. They apply to the
interface, not to the destination IP so in order to separately filter the source address
for connections to the host's SSH daemon (only me) and a jail's SSH daemon (for gitea; public),
I need them to be on separate ports. Also, despite what people may say about it being "security
through obscurity" and useless, I'd recommend moving the port anyway. It's not a security measure,
really, but it reduces the risk from things like worms (which are generally not doing a full
port scan on every host on the internet) as well as massively cutting down on the automated
login attempts and log spam.
You can choose whichever port you'd like here. If you choose not to move it, you'll need
to remove the rule blocking traffic to port 22 in the firewall script in Part 3.
# Pick your own port; choose something unassigned and <1024 so only root can bind it
Port 125
AllowUsers apippin
While I'm here, I usually like to do a skim through and set a few other options, disabling things that I have no intention of using (e.g., port forwarding, X11 forwarding, etc.).
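As a sketch, the relevant part of my /etc/ssh/sshd_config ends up looking something like this. The directives beyond Port and AllowUsers are examples of the kind of thing I disable, not a definitive hardening list; adjust to what you actually use:

```
Port 125
AllowUsers apippin
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
X11Forwarding no
AllowTcpForwarding no
```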
With that done, restart SSH but do not close your existing connection. Connect
in with your new user and key and ensure you can log in. Test gaining root access through
sudo to ensure you're not locked out of your own machine. Once
you're confident that everything's working, you can kill your old session.
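The check from the new terminal is roughly this (port, user, and host are whatever you chose above):

```shell
# From a NEW terminal; the old root session stays open as a lifeline
ssh -p 125 apippin@your.server.example

# Once in, confirm passwordless root works before killing the old session
sudo whoami   # should print: root
```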
Let's get our storage all ready to go as well.
Resize Root Disk
Resize the root disk in AWS to whatever final size you expect you'll need. Reboot the server. (I know, I know. But two seconds on Google didn't find me the answer to getting it to recognize that the disk had changed size, and the server isn't actually doing anything yet, so just do it and keep things moving.) Then do something like the following to resize the partition on the disk and grow the filesystem to fit the resized partition:
// NOTE: Confirm your device names first!
# gpart show nvd0
[should show (corrupt), and also show you the partition number for your main root volume]
// Fix the GPT tables due to the resized disk
# gpart recover nvd0
// Resize the root partition; partition number is probably 3
# gpart resize -i [partition_number] nvd0
// Resize the UFS volume on the partition to use the rest of the space
# growfs /dev/gpt/rootfs
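Afterwards, a quick sanity check that the partition and filesystem actually grew (device name assumed from above):

```shell
# The partition table should no longer show (corrupt),
# and the free space at the end of the disk should be gone
gpart show nvd0

# The root filesystem size should now reflect the new disk size
df -h /
```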
Add Swap
If you're like me and running this on a cheap server but also expect to run like 15 different bloated Java apps on it, you're gonna need some swap. Create a volume (if you're using AWS then the gp3 volume type is good here) and attach it. You can find the device name for the newly attached volume by just checking `dmesg` for the attach notification.
// We could use the raw disk, but instead we'll at least throw a partition on it
// so that we can assign and then mount it by a label.
# gpart create -s GPT nvd1
// Create the swap partition
# gpart add -t freebsd-swap -l swap nvd1
// Enable swap
# swapon /dev/gpt/swap
// Mount at every boot by adding to fstab
# cat >> /etc/fstab
/dev/gpt/swap none swap sw 0 0
^D
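A quick way to confirm the swap device actually came up (and, after a later reboot, that the fstab entry took effect):

```shell
# Lists active swap devices with human-readable sizes;
# /dev/gpt/swap should appear with the capacity you allocated
swapinfo -h
```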
Enable ZFS
ZFS isn't enabled by default, but thankfully it's relatively straightforward to enable by editing a few files:
// Enable loading zfs at boot
# cat >> /boot/loader.conf
zfs_load="YES"
^D
// Load zfs module into running system
# kldload zfs
// Enable ZFS
# sysrc zfs_enable="YES"
// Start ZFS
# /etc/rc.d/zfs start
While it's not strictly necessary, given the changes made to fstab, boot configuration, startup configuration, etc., I'd recommend this as a point to just reboot really quickly and make sure the system still comes back up. You're not that far in, and if you've made a mistake and totally hosed it, it's potentially still quicker to just terminate the server versus actually recovering it. Fun side note: since you're using an AWS Marketplace image, the root volume can only be detached and attached while the instance is stopped, and only to other stopped instances. As well, it can only be attached to other instances based on the same marketplace image.
For my purposes I've added two volumes—one for jails, and one for other data. You can manage this however you wish. For each volume you want to add, allocate and attach in AWS then:
// Create partition table
# gpart create -s GPT nvd2
// Create ZFS partition; name each one something unique (e.g., `jails`, `data`, etc) at `[your_label]`
# gpart add -t freebsd-zfs -l [your_label] nvd2
// Create zpool
# zpool create [your_label] gpt/[your_label]
// Set the mountpoint
# zfs set mountpoint=/usr/jails jails
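Once each pool is created, it's worth a quick look that it's healthy and mounted where you expect (pool name `jails` here matches the example above):

```shell
# Pool health and vdev layout; state should be ONLINE
zpool status jails

# Datasets with their mountpoints; jails should show /usr/jails
zfs list
```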
At this point you have a running FreeBSD system with ample storage available, the absolute bare minimum of security measures in place, and everything prepped to start setting up the jail management stuff in Part 2.