Setting up a Homelab Docker Server (and More)
Sun Jun 07, 2020 · 3130 words

Table of Contents

  1. Introduction
  2. Before the OS: Setting up RAID
  3. Installing the OS (CentOS 7)
  4. The Base Packages
  5. Security
  6. Monitoring with Grafana
  7. Docker Setup

0 . Introduction

The goal of this post is to go from a blank server to a server running Docker for software containerization, as well as other software for more advanced usage like VMs.
For some background, I’d like to say that I’ve been following the homelab community for quite some time. And I spent a lot of hands-on time with enterprise server hardware while working at FPN. I was constantly checking LabGopher for server listings on eBay, in the hopes of snagging a good deal on some quality hardware. After a little while, I found a really good deal on an HP ProLiant DL380p Gen8. It was wiped clean and had everything I needed to get up and running.
Since I feel like this is definitely a niche topic for more advanced Linux users, I’m going to make several assumptions about readers’ skill level. Nothing I’m going to be doing is very complex or arcane, and I’ll try my best to explain what’s happening. But I will assume that you have a good understanding of general Linux.

Familiarity beyond the basics is useful if you have a specific use case, or want to modify what I have below to fit your needs.
Lastly, I’m assuming that the server hardware you have is similar to mine (because I only have one server to test on). Feel free to skip sections; I’ve tried to keep them modular so that no section depends on another (apart from the obvious, such as requiring an OS to install things on).


1 . Before the OS: Setting up RAID

My server came with 12 3TB SAS disks, which was a huge score. They do have over 30,000 power-on hours each, but SAS drives are almost apocalypse-proof and I have a few spares.
But the great thing about having so many drives is the RAID possibilities. Another prudent choice would be ZFS, or really any other volume manager you like. For my server I ran with RAID 6, which strikes a good tradeoff between performance and fault tolerance. Compared to other non-nested RAID levels, it has two-drive fault tolerance, which is extremely useful when a drive fails and is replaced. During the rebuild (restoring parity onto the new drive, essentially “bringing it up to speed”), every drive in the array is under heavy read load. In the unlikely (but heightened) chance that another drive fails during the rebuild, the array survives and can still be rebuilt. This was particularly valuable to me with my extremely used drives.
This choice was based on my situation; you should carefully consider the drives you have, how you want to use them, and what you expect out of them. Each RAID level answers those questions differently, and there are even entire filesystems (like ZFS) that try to solve more of these cases under one umbrella.


My HP server also came with a hardware RAID controller, which made setting up RAID straightforward: the GUI walks you through selecting drives, allocating space, and choosing the RAID level in a basically foolproof setup. I’m going to assume you can get yourself through whatever GUI your controller/server gives you, or can muscle through some other way.
When setting up your filesystem, make sure to set it up on your entire physical disk array (unless you’re doing your own thing). In section #2, I’ll explain how to allocate the disk space for the OS for maximum flexibility later on.
Once you have your filesystem all setup with a nice, clean, redundant blank slate, you’re ready to install the OS.
(Oh, and make sure to poke around the BIOS. There’s cool stuff in there. Don’t mess with stuff you haven’t adequately RTFM’d, though, for sure.)


2 . Installing the OS (CentOS 7)

This is another step that is well guided by an easy-to-follow installation GUI. There isn’t much that’s unclear in the visual installer, so I believe in you to do the right thing.
However, there is one thing I really want to emphasize, and that is volume allocation. Please allocate only what you need right now, not all of your space (unless you don’t have a whole lot to begin with; I’m talking <1TB). Here’s why:
I have 36TB of raw storage. I’ve currently allocated 3TB in CentOS. I know I won’t be going through that much data too quickly, so I didn’t allocate all my storage space right away. CentOS’s LVM (Logical Volume Manager, think flexible partitions) is very good: you can expand volumes and add new ones very easily, which is great. Of course, when messing with your data, be careful, have backups, or be prepared to irrevocably lose all of it. With that said, don’t try to shrink a volume; it probably won’t work, since XFS (the default CentOS 7 filesystem) can’t be shrunk at all. This is why I advocate allocating only as much storage as you think you need right now. It’s much, much easier to expand a volume into unallocated space than to finagle your volumes’ sizes after the fact.
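Expanding later is then a two-command job. A minimal sketch, assuming the default CentOS 7 naming (volume group centos, logical volume root) and an XFS filesystem; your names will likely differ:

```shell
# Grow the logical volume by 1TB into free space in the volume group,
# then grow the XFS filesystem mounted at / to match the new size.
sudo lvextend -L +1T /dev/centos/root
sudo xfs_growfs /
```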


3 . The Base Packages

After booting into your install and running sudo yum upgrade to make sure you have the latest packages, it’s time to get fiddling. The first things to install are the basic packages that will get you started with your server. The base packages I have are: SSH, HP tools (obviously only if you have an HP server), OpenVPN, and Docker.

3.1 SSH

Before SSH is set up, you’ll be working directly on the server’s console. Less than ideal. So, the first thing you’ll want to set up is SSH access.
For the best security, I only have public-key login enabled; the config below disables the other login methods.
The SSH daemon’s config file is located at /etc/ssh/sshd_config. Here’s what I have changed:

Port 8754

SSH uses port 22 by default, and brute-force methods will hammer that port. Switching to a less-used port will drastically lower the number of login attempts on your server. To be clear, this is security through obscurity and is equivalent to no security layer at all; coming sections will go through actual hardening to provide security. This step is just to reduce the logging the server does, and to keep Fail2Ban cleaner.

PermitRootLogin no

By denying direct SSH login to the root account, you can block the brute-force methods that commonly try for the highest privileges right away. Good practice is logging in with your user account and elevating privilege from there as necessary with sudo (switching to the root user outright is not recommended).

MaxSessions 2

This limits the number of open sessions allowed per network connection (e.g. multiplexed sessions over a single connection). If you’re going to be the only person using this server’s SSH, 2 (or even 1) should be sufficient.
-> This does not limit how many separate logged-in sessions you can have, since each new connection is counted on its own. Have as many terminal windows open as you want!

AuthorizedKeysFile .ssh/authorized_keys

This is the single file that stores authorized public keys. The path is relative to your user account’s home dir, in the hidden .ssh/ directory. The file may not exist yet; we’ll transfer the key data to it later.

PasswordAuthentication no

Disable the login method of user-password.

ChallengeResponseAuthentication no

Disable keyboard-interactive login.

MaxStartups 2:50:5

This is also to prevent a flood of login attempts at once, this time probabilistically. 2:50:5 means that once there are 2 unauthenticated connections, each new connection is randomly dropped with a 50% chance, ramping up to a 100% drop rate at 5 connections (and beyond). It’s rather neat, and really the only reason I have it here.
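To make the ramp concrete, here’s a small shell sketch of the drop probability for 2:50:5, assuming a linear ramp between the start and full values with integer math; this is a rough model of the described behavior, not sshd’s actual source:

```shell
# Drop probability (percent) for MaxStartups 2:50:5 at n unauthenticated
# connections: 0 below 2, 50 at 2, ramping linearly to 100 at 5 and beyond.
drop_chance() {
  n=$1; begin=2; rate=50; full=5
  if [ "$n" -lt "$begin" ]; then
    echo 0
  elif [ "$n" -ge "$full" ]; then
    echo 100
  else
    echo $(( rate + (100 - rate) * (n - begin) / (full - begin) ))
  fi
}

drop_chance 1   # 0
drop_chance 3   # 66
drop_chance 5   # 100
```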

And those are the settings I have changed for the ssh config file.
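Before restarting the daemon, it’s worth letting sshd validate the file itself; a typo in sshd_config can lock you out of a remote box:

```shell
# Test mode: checks sshd_config for validity. Prints nothing on success,
# and reports the offending line (with a non-zero exit code) on error.
sudo sshd -t
```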
Restart the SSH daemon with sudo systemctl restart sshd. If the sshd service isn’t enabled, run sudo systemctl enable sshd and then sudo systemctl start sshd.

Key Generation

The next step for SSH is the keygen process. My local computer runs Windows, and I use PuTTYgen to create the key pairs. Make sure to use a strong passphrase (I recommend KeePass to generate/store secrets; it’s fantastic!). The public key gets transferred to the server’s ~/.ssh/authorized_keys file. The private key gets saved to your computer and is used every time to authenticate against the public key.
It’s possible the ~/.ssh/ directory doesn’t exist yet. To create it with the correct ownership and permissions (note: no sudo, so everything stays owned by your user):

mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod -R go= ~/.ssh

This strips group and other access from the directory and everything in it, which sshd’s permission checks expect before it will trust the keys.

Port forward

This step is optional if you never plan on accessing your server remotely; skipping it is also more secure, since SSH then isn’t exposed to outside attacks at all. A more secure alternative to a plain port forward is exposing only an OpenVPN service and connecting to SSH through the VPN, but I won’t be covering that, as I had issues with it.
Nope, I went with a simple port forward.

sudo firewall-cmd --permanent --zone=public --add-port=8754/tcp
sudo firewall-cmd --reload

Then restart sshd again, and check for connectivity from your local computer! It’s also easy to check remote access by trying to log in from your phone while on cellular.
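From a client with OpenSSH, the connection check looks like this; the user, host, and key path are placeholders, and -p matches the Port directive set in sshd_config earlier:

```shell
# Connect on the custom SSH port with the private key generated above.
ssh -p 8754 -i ~/.ssh/id_rsa <user>@<server-ip>
```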

3.2 HP Tools

This step is exclusive to HP machines. I mainly use this package to monitor temperatures in the server, as HP servers have a lot (30+) of temperature sensors. The package includes a few other commands which may be useful for some.
To get the package, the HP repo will have to be added to yum:
First, make a yum .repo file in /etc/yum.repos.d/hp.repo, and put this inside:

[HP-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/spp/RHEL/7.2/x86_64/current/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-spp
[HP-mcp]
name=HP Management Component Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/mcp/centos/7.3/x86_64/current/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp

Then download the GPG keys for the repos into /etc/pki/rpm-gpg, which enables package verification:

cd /etc/pki/rpm-gpg
sudo wget http://downloads.linux.hpe.com/SDR/repo/spp/GPG-KEY-spp
sudo wget http://downloads.linux.hpe.com/SDR/repo/spp/GPG-KEY-mcp

Now yum will recognize the new repo, and you can install the tools from there with sudo yum -y install hp-health.

Usage

The package provides its own subshell, started with the command sudo hpasmcli. It’s also possible to skip the subshell by passing the desired readout directly.
So, to get a temperature readout, use
sudo hpasmcli -s "SHOW TEMP"
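The readout is easy to post-process if you want just the numbers. The sample below is an approximation of the table hpasmcli prints (not captured from a real server), so treat the awk pattern as a starting point rather than gospel:

```shell
# Approximated "SHOW TEMP" output: sensor, location, temp, threshold.
sample='#1   AMBIENT   22C/71F    42C/107F
#2   CPU#1     40C/104F   82C/179F'

# Print "location: current temp" for each sensor line (lines starting with #).
echo "$sample" | awk '/^#/ { print $2 ": " $3 }'
```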


4 . Security

This is easily the most important part to get right, and make sure everything works and is secure. Nowadays there are thousands of servers whose sole purpose is to test every password for every port of every IP address that faces the internet. It’s not uncommon to see hundreds of requests per hour from servers all across the globe, trying everything imaginable to get a handhold anywhere in a server.
For this reason it’s imperative to take extra steps to minimize the attack surface, i.e. the parts of the server that must face the internet. Yes, this adds additional hassle, but if there’s anything you don’t want to experience, it’s having no control over your server and having to wipe it to get it back.
The biggest measure that can be taken is having an SSH-only connection method, which makes it extremely difficult for unauthorized parties to get access to user accounts or root accounts and send commands. The rest of the security setup is below.

4.1 OpenVPN

With that in mind, we’ll use OpenVPN, and connect to the server through a VPN we set up in order to access its other services. This means only a single service and port will be internet-facing (WAN), and will be secured with a (hopefully) strong password and private-public key cryptography.
I’m going to preface the rest of this section with a disclaimer. I actually struggled mightily with setting up this part, and it took me a lot of effort to fix everything and get it working. However, I hope you’re able to simply follow the steps and get going right away. There is also additional configuration required on every machine you want to access the server’s VPN, which I will cover a little bit of, including some of the issues I had.

First is installing OpenVPN:

sudo yum install epel-release -y
sudo yum install -y openvpn wget

OpenVPN also requires a cryptography package. We’ll use easy-rsa; 2.3.3 was the latest at the time of writing, so check the easy-rsa-old repo for the latest version.

wget -O /tmp/easyrsa https://github.com/OpenVPN/easy-rsa-old/archive/2.3.3.tar.gz

Extract the tarball with tar xzf /tmp/easyrsa, create the target directory with sudo mkdir -p /etc/openvpn/easy-rsa, then copy the scripts there with sudo cp -rf easy-rsa-old-2.3.3/easy-rsa/2.0/* /etc/openvpn/easy-rsa.
Lastly, change the owner to yourself (non-root!) with sudo chown -R <user> /etc/openvpn/easy-rsa.
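The steps above, consolidated into one copy-pasteable block (version pinned at 2.3.3 as in the text; replace <user> with your account):

```shell
# Download, extract, and install easy-rsa 2.x into /etc/openvpn/easy-rsa.
wget -O /tmp/easyrsa https://github.com/OpenVPN/easy-rsa-old/archive/2.3.3.tar.gz
cd /tmp && tar xzf easyrsa
sudo mkdir -p /etc/openvpn/easy-rsa
sudo cp -rf /tmp/easy-rsa-old-2.3.3/easy-rsa/2.0/* /etc/openvpn/easy-rsa
sudo chown -R <user> /etc/openvpn/easy-rsa
```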

On the server, follow the steps from this repo to create an .ovpn config file. Note that the script does a lot more than just create a config file; it adjusts firewall rules as well, and more. Check that the OpenVPN service was added to the firewall with sudo firewall-cmd --zone=public --list-all. In the services line, you should see openvpn (and possibly other services; we’ll get to that).

Once the script has run, most of the server-side work is done. The keys have been generated and need to be copied to client machines in order to connect. A client config (.ovpn) has also been created and needs to be copied as well. I had a lot of issues getting client machines to connect, and what fixed it in the end was having the client.ovpn file include all the required cryptography, i.e. public keys and certs, inline. Note that the private key is included in this file, which makes the file a security risk on the client machine. Make sure to protect it; a password is still required that is not included in the file.
Anyway, here is the client.ovpn I have, with the tweaks that helped fix my issues.

client
tls-client
auth SHA512
tls-auth <*.tlsauth file on client machine, full path> 1
cipher AES-256-CBC
remote-cert-eku "TLS Web Client Authentication"
proto udp
remote <remote ip of server> <open port> udp
dev tun
topology subnet
pull
user nobody
group nobody

The rest of the file is certs and keys, of which there are 4 total.
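For reference, inline sections in an .ovpn look like the sketch below (contents elided). Which four blocks you end up with depends on your setup; in mine they were the CA cert, the client cert, the client key, and the tls-auth key. If you inline the tls-auth key, a <tls-auth> block replaces the file path used above.

```
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
...
</cert>
<key>
...
</key>
```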

4.2 Firewall

Public Zone (WAN)

By default, CentOS has many services open on the public zone to make it easy for new users to start developing things such as webservers. This is bad for people who want security first. So make sure that the public zone of the firewall, which dictates what services have open internet access, is limited to as few services as possible. In my case, I only have OpenVPN.
Remove any service with sudo firewall-cmd --zone=public --permanent --remove-service=<name>, and similarly any port with sudo firewall-cmd --zone=public --permanent --remove-port=<port>/<protocol>.
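One way to do the cleanup in bulk is a small loop. This sketch assumes openvpn is the only service you want to keep; adjust the test if you need more:

```shell
# Remove every service except openvpn from the public zone, then reload.
for svc in $(sudo firewall-cmd --zone=public --list-services); do
  if [ "$svc" != "openvpn" ]; then
    sudo firewall-cmd --zone=public --permanent --remove-service="$svc"
  fi
done
sudo firewall-cmd --reload
```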
To that end, this is what I get when I run sudo firewall-cmd --zone=public --list-all:

public (active)
 target: DROP
 icmp-block-inversion: no
 interfaces: eno1
 sources:
 services: openvpn
 ports:
 protocols:
 masquerade: no
 forward-ports:
 source-ports:
 icmp-blocks:
 rich rules:

Internal Zone

The internal zone is for communication between services on the local machine, for example a Docker container needing access to a MySQL service running on the host. Docker-to-Docker interconnects are handled by Docker and its own configuration, and should not be included here.

Trusted Zone

The trusted zone allows everything by default, and therefore should be restricted to, well, trusted applications and machines. For example, I have certain computers in the LAN in the trusted zone.


5 . Monitoring with Grafana

Grafana is always fun to set up, and it’s easy to spend hours making nice-looking dashboards for all sorts of monitoring. It truly is a sandbox for data visualization. There are many ways to configure Grafana, and many data collection sources that can be used. Here, I’ll use InfluxDB for data aggregation and Telegraf for data acquisition.

5.1 Installing Grafana

Grafana also requires an external repo to be added before the yum install. In /etc/yum.repos.d/grafana.repo add:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

And then install grafana with sudo yum install -y grafana fontconfig freetype (includes font dependencies).
Adjust grafana settings in /etc/grafana/grafana.ini to your liking.
Start the Grafana service with sudo systemctl start grafana-server, and have it run at boot with sudo systemctl enable grafana-server.

5.2 Installing InfluxDB

Add the InfluxDB repo file in /etc/yum.repos.d/influxdb.repo:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/centos/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Install with sudo yum install influxdb and start & enable the systemctl service.
With InfluxDB installed, it’s necessary to create a database first to store incoming data from Telegraf. Type influx to start a subshell and enter the following:

CREATE DATABASE "MONITORDB"
SHOW DATABASES
CREATE USER "admin" WITH PASSWORD "<password>" WITH ALL PRIVILEGES
CREATE USER "monitor" WITH PASSWORD "<password>"
GRANT ALL ON "MONITORDB" to "monitor"
SHOW GRANTS FOR "monitor"
SHOW USERS

The SHOW commands are to verify the previous steps completed successfully.
In /etc/influxdb/influxdb.conf, make sure auth-enabled = true is set under the [http] section to force password use.
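A quick way to confirm auth is actually enforced, assuming the default HTTP endpoint on port 8086: the first query should come back with an authorization error, the second with a normal result set.

```shell
# Unauthenticated query: expect an "unable to parse authentication
# credentials"-style error from InfluxDB.
curl -sG "http://localhost:8086/query" --data-urlencode "q=SHOW DATABASES"

# Authenticated as the monitor user: expect the database list.
curl -sG -u monitor:<password> "http://localhost:8086/query" --data-urlencode "q=SHOW DATABASES"
```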

5.3 Installing Telegraf

Telegraf is included in the influxdb repo, so it can be installed with sudo yum install telegraf.
Telegraf requires configuration before it starts collecting data, so dive into /etc/telegraf/telegraf.conf:
Under the output plugins, uncomment the influxdb output to enable it, and set urls = ["http://localhost:8086"] and database = "MONITORDB".
Add the username/password combination under the HTTP Basic Auth section.
Lastly, enable any collection plugins you want (e.g. inputs.docker), then start and enable the service with systemctl.
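Pulled together, the relevant telegraf.conf pieces look roughly like this; the section names follow the stock config file, and the docker endpoint shown is the usual default socket path, so check yours:

```
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "MONITORDB"
  username = "monitor"
  password = "<password>"

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
```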

Using Grafana

When accessing Grafana in the browser for the first time, the username and password are both admin. You’ll want to change that right away. Then add InfluxDB as a data source, and go wild making your dashboards!
For example, here’s one of my dashboards: [image: an example Grafana dashboard]


6 . Docker Setup

With essentially everything set up, it’s now possible to start installing services for the server to run. I use Docker to run my services. Let’s install it real quick:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce

Run this command so the sudo prefix isn’t necessary for every docker command: sudo usermod -aG docker $(whoami) (requires a relog).
Then start and enable docker.service.
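After relogging (so the group change takes effect), a quick sanity check:

```shell
# Pulls and runs Docker's test image, which prints a hello message and exits.
docker run --rm hello-world
```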


So that’s about it for my journey in setting up a server, and writing down the steps I took during the process. This server is of course a development project, so everything is changing all the time. The current big project I’d like to get running is multiple VMs on the server, with a VM manager for easier administration. There may be a future post about that, or maybe about something else.

