Service Isolation and Security

Rescaping my personal infrastructure

Posted by Jonathan DeMasi on 2018-04-03

Recently I'd considered adding some services to my personal VPS. At the time I was using a single, 4G Linode. This had always met my needs and then some - in fact, most of the time the only limiting resource for the types of things I'd been doing was storage capacity. It got me thinking, though, about how at work these types of services almost always get isolated onto their own virtual machine. For one, having a service isolated on its own VM is more secure: if it gets compromised, most likely nothing else will be impacted (especially if you're doing proper firewalling, account management, etc.). So why wasn't I doing this for my own personal infrastructure? I realized there was no good reason other than it was less to maintain, and none of the services I had previously been running were very high volume or high risk. My site, after all, is completely static. My VPN was only for personal use, by me, and each client runs its own firewall. But the service I was considering tackling this time around was email, and that seemed daunting and scary. I also came to learn that running proper email with spam and virus filtering uses more resources than I originally anticipated.

Below I'd like to detail my old layout, my new layout, and what I gained (and lost) through this transition. I do think that for personal projects, it's okay for most people to just maintain a single, large VPS. I am not strictly advocating for it, though, and I think it's important to understand the implications of doing so. If you have a single VPS and one service runs awry or is compromised, everything is hosed. With separate VPS instances for each service, you avoid this issue entirely (assuming you don't have passwordless SSH between all your nodes, etc. etc.).

Before the Migration

Before this change of heart I had a single, 4G Linode that hosted several web services, an OpenVPN server, an XMPP server (via ejabberd), git hosting, and also served as my persistent session for IRC (via tmux + irssi). There were pros and cons to this approach. The biggest pro, in my opinion, is only having a single box to update, secure, back up, etc. It was easy to keep track of whether or not I'd updated after critical kernel updates and such, as there was only one place to apply said updates. As I mentioned before, storage was often my limiting resource, too. Having one contiguous chunk of SSD space (instead of, as now, four smaller ones) was really nice. I could pretty reasonably accommodate a "temporary" 15-20G file if I had to stash it on my Linode. That's not really the case now, as most of the instances I have are only 1G with 20G of disk space total. That being said, I have a Backupsy VPS for backup purposes with a lot of storage available if I really needed to mooch some. Additionally, having 4G of RAM for just about everything was great. My static website(s) were served with ease and grace by Nginx. ejabberd is a bit heavier, but nothing crazy. OpenVPN, similarly, had barely any resource requirements - the biggest probably being CPU when I was using compression and doing a lot of tunneling. If one service NEEDED more RAM at any one point in time, there was never a shortage. This, as I would discover, was a luxury I didn't know I really had.

But the biggest thing that was bothering me while reflecting on my single-Linode layout was that I knew it was not great for security. If my IRC client got compromised, someone would have access to my VPN server, for example. That just doesn't make sense, and any professional looking at that situation would balk at the idea.

After the Migration

In general, I think it's a good idea to practice what you preach. At work, I would never question isolating services as much as possible. Maintaining my personal infra with the same attitude is good practice for "the real world," and ultimately I am spending $5/month more but getting better security policies out of it. For just about any sysadmin, security should be the first priority. Also, when these services have to interact, it is good to know how to make that happen when they're on separate machines. You can't just use Unix sockets and insecure, non-SSL communication; this forces you to go the extra mile and deepens your understanding of the technology itself. As such, I went from a single, 4G Linode to three 1G Linodes and a single 2G Linode. I had planned on using four 1G instances, but as it turned out, a 1G Linode kept choking whenever I received an email and clamav was invoked. I cannot afford to miss out on receiving emails because I'm out of RAM, so I had to upgrade that VPS. This put my "after" migration budget $5/month above my original, 4G-only layout.
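For instance, where something like a webmail frontend used to reach the IMAP daemon over a local socket, it now has to cross the wire with TLS. A quick sanity check that a remote endpoint really does speak TLS with a valid cert might look something like this (the hostname below is a placeholder, not one of my real boxes):

    # Rough sketch: verify from one Linode that another Linode's IMAPS
    # endpoint (port 993) presents a valid certificate and negotiates TLS.
    # mail.example.com is a placeholder hostname.
    openssl s_client -connect mail.example.com:993 -servername mail.example.com </dev/null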

Anyway, one of the best parts of moving my services to separate Linodes was the ability to do some nifty VPN routing for resources that I only want to be accessible to VPN clients. For example, I can now run Roundcube webmail on my web host but only allow access by way of the VPN, yet still use a public address and real, non-self-signed SSL certs for the instance. I also have several beta projects that are access-controlled in the same fashion. It also means that if I'm using my VPN to tunnel all of my traffic while I'm on the road, the pegged CPU that requires can't impact any of my other services. Currently, the "VPN Linode" is sitting in Linode's Fremont datacenter. The speed hasn't been the most impressive, but I did choose a geographically different region for each Linode so that if I ever needed to use two as a failover pair it would make more sense. I didn't really think about putting the VPN in Dallas (the closest and fastest region for me, here in Boulder) and putting something less speed-intensive in Fremont.
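On the web host, the VPN-only restriction boils down to a couple of nginx directives in the webmail vhost. Here's a minimal sketch of the idea; the subnet and file path are illustrative assumptions (10.8.0.0/24 is just OpenVPN's default subnet), not my exact config:

    # Hypothetical sketch: restrict the Roundcube vhost on the web host to VPN
    # clients. The snippet gets included ONLY inside the webmail server{}
    # block, so the public sites on the same box are unaffected.
    printf 'allow 10.8.0.0/24;\ndeny all;\n' > /etc/nginx/snippets/vpn-only.conf
    # ...then add "include snippets/vpn-only.conf;" to the webmail vhost and
    # reload nginx; clients outside the VPN get a 403 even though DNS and the
    # certificate are public.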

Another benefit that I have seen so far is that I have been able to disable outbound SSH on all of my hosts except for my "management/misc." Linode. This means that if someone manages to compromise the VPN node, for example, they cannot use it to shell into any of my other nodes via the WAN or via the VPN itself. This is an interesting security step I normally would never think to take, but after some intense discussion in Sysadministrivia's IRC channel, I decided it was worth pursuing even for personal boxen. On the other side of that, I can also explicitly deny inbound SSH on the other boxes from anything but the management node, so even if someone got root on the VPN box and changed its firewall and SSH rules, I'd still be safe. This layered approach to security is the way to go - your assets should never be protected by a single point of failure.
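In iptables terms, the rough shape of it is something like the following; treat this as a sketch, since my actual rulesets are longer and the management box's address below is a placeholder:

    # On every box EXCEPT the management/misc. Linode: refuse to open new
    # outbound SSH connections, whether toward the WAN or across the VPN.
    iptables -A OUTPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j REJECT

    # On the receiving boxes, the layered part: only the management box
    # (203.0.113.10 is a placeholder address) may open inbound SSH at all.
    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP

That way, even if the VPN box's own rules get rewritten by an attacker, the other nodes are still independently refusing its SSH connections.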

I have found that it's more work to keep up with updating and testing everything, especially running Arch, which benefits from frequent updates. The benefit is that I can restart the different boxes (hosting different services now, yay!) at different times. I don't usually use my VPN at 3AM, so using at to schedule a reboot of that machine at its own time, separate from the rest, is easy enough. This has led me to do a fair amount of automation, at least for checking for updates. For example, I've scripted checking for updates to postfix, dovecot, and other related utilities on the email server. That way I know if there are updates pertinent JUST to the services on that particular machine. This allows me to apply updates more selectively when they're actually required, vs. system packages that, while important, are not in constant use. The scripting experience itself has been great, though, and just as valuable a lesson as the security tuning.
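The per-machine check itself is nothing fancy. Here's a stripped-down sketch of the idea for the mail box; it assumes Arch with pacman-contrib installed (for checkupdates), and the package list is just an example to adjust per machine:

    #!/bin/sh
    # Hypothetical sketch: report pending updates only for the packages that
    # matter on this particular box (here, the mail Linode). Requires the
    # checkupdates tool from Arch's pacman-contrib package.
    WATCHED='^(postfix|dovecot|clamav|spamassassin) '

    updates=$(checkupdates 2>/dev/null | grep -E "$WATCHED")
    if [ -n "$updates" ]; then
        printf 'Service updates pending on %s:\n%s\n' "$(hostname)" "$updates"
    fi

Pair that with something like echo 'systemctl reboot' | at 03:00 on the boxes that can take a middle-of-the-night restart, and each service gets its own maintenance window without me babysitting all of them at once.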