Chapter 1

Containerization

Verb: to embrace lightweight, isolated code execution environments by implementing software services in containers, most popularly Docker containers

Subsections of Containerization

Building a Container Image

Containers are incredibly useful, even when you are just running images built by others. However, you can also build your own container image as a custom solution for anything you like. The key element is writing your own Dockerfile. As an example, I will write a Dockerfile to run a Minecraft server below. As you'll see, this can all be accomplished in very few lines of code.

Base Image: Alpine Linux

Container images need to be built on a base image, and since we most often host Docker on a Linux distribution, we need a Linux distro as the base of the image. The most common Linux distribution to use for containers is Alpine Linux, due to its small default footprint and performance tuning.

While Alpine Linux is great as a container base, it uses its own flavor of all of the basic utilities. For example, the package manager for Alpine is not apt or yum/dnf… it’s a utility called apk. Keep an eye out for the Alpine variations of commands that are noted on this site in order to use Alpine well.
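As a quick illustration for anyone coming from a Debian/Ubuntu background (tmux here is just an example package), a few common operations map as follows:

# Debian/Ubuntu: apt update && apt install tmux / apt remove tmux / apt search tmux
# Alpine equivalents:
apk update
apk add tmux
apk del tmux
apk search tmux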


Preparation

Since Dockerfile commands are more or less the same commands you would issue to a full installation of your chosen Linux distribution, completing a test installation on a full virtual machine first can be very helpful for figuring out the commands you need for your container image. Once you have the list of commands from the full virtual machine installation, you are left with only the much easier task of adapting those commands for use in your Dockerfile.

Creating the Example Service: Minecraft Server

It should be easier to understand the layout of the Dockerfile by stepping through the creation of an example. Let's say that I was running my own Minecraft server on a traditional server or full virtual machine, but now wanted to run it in Docker instead (running it in Docker would confer the benefits of containerization: portability, faster service startup, easier management through division of the service into components, etc.). To prepare, I am building a traditional virtual machine in order to record the commands needed to get the environment up and running. Since I intend to build the Docker image on Alpine Linux (the most popular base for Docker images), I will also build my test virtual machine on Alpine.

Installing to a Full Virtual Machine

For anyone who wishes to run the Alpine install long-term instead of in a container, here are the Alpine installer configuration options I used.

Build the VM using the x86_64 build of the “Virtual” branch

Option              Value
VM Name             minecraft
ISO image           alpine-virt-3.17.3-x86_64.iso
Disk size (GB)      32
CPU Sockets         1
CPU Cores           2
CPU Type            host
Memory (MiB)        4096
Network             VLAN with DHCP and internet access
Firewall            disabled

After booting Alpine, log in with the username root (no password will be requested). Run the command setup-alpine to begin setup.

Option                   Value
Keyboard                 us > us
Hostname                 minecraft.domain.tld
Interface                eth0
IP Address               dhcp
Manual network           n
Password                 root
Retype                   root
Timezone                 America/Denver
Proxy                    none
Mirror                   1
Setup a user?            no
Which SSH server?        openssh
Allow root ssh login?    yes
Enter SSH key            none
Which disk?              sda
How to use disk?         sys
Erase disk?              y

Then issue the reboot command to boot into the installed system.


After completing the Alpine install on a fresh virtual machine I issued the following commands to install a Minecraft server:

  • Enable the Alpine community repository in order to obtain the Java Runtime Environment (JRE) in a later command
vi /etc/apk/repositories
# uncomment the community repository for Alpine v3.17
# save and quit
apk update
  • Add the necessary packages
apk add --no-cache openjdk17-jre-headless wget iptables tmux
  • Create a text file in the current directory (where we will launch Minecraft server from eventually) to bypass the EULA check
vi eula.txt
# write the following
eula=true
# save and quit
  • Download the Minecraft server jar file into the same directory
wget https://piston-data.mojang.com/v1/objects/8f3112a1049751cc472ec13e397eade5336ca7ae/server.jar
  • Alpine Linux does not come pre-installed with a firewall or network traffic manager. The preferred solution is to install iptables and set it to start automatically, then define the traffic rule, as below. Minecraft server utilizes TCP port 25565.
rc-update add iptables
iptables -A INPUT -p tcp --dport 25565 -j ACCEPT
/etc/init.d/iptables save
  • Finally we're ready to launch the Minecraft server, through a Java launcher running in a tmux session. The Xms option defines the minimum memory to allocate to the Java virtual machine, and Xmx defines the maximum it can grow to. tmux is used in place of the older Linux utility screen, which is no longer as broadly supported. tmux allows a program to run in a detachable shell session, which is especially handy for programs like the Minecraft server that keep running continuously after launch and accept commands from the command line while running in the background. In other words, if Minecraft were coded to run in the background on its own and accept commands through a utility program, tmux wouldn't be needed.
tmux new java -Xms1G -Xmx2G -jar server.jar nogui
Note

Use the keyboard combination Ctrl+b then d to detach from (break out of) the tmux session. Use the command tmux attach-session to reattach to the last running tmux session. Use Ctrl+b then ? while in a session to list key bindings.


Coding the Dockerfile

The Dockerfile should use Unix line endings (LF), which is easiest to achieve with a program like Notepad++. You can find the setting under the menu Edit > EOL Conversion > Unix (LF).
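If you prefer the command line over Notepad++, the dos2unix utility (assuming it is installed on your workstation) performs the same conversion:

dos2unix Dockerfile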

Create a file called Dockerfile with no file extension. It should be filled with the following lines:

  • Use the Alpine 3.17.3 base image (or the latest version of Alpine). Base images are already hosted on Docker Hub
FROM alpine:3.17.3
  • Set environment variables. We’ll eventually use these environment variables when instantiating a container, on the command line or in a Docker Compose file. The values we set here are default values that may be overridden by the environment variable definitions we set later
ENV MAX_HEAP=2
ENV MIN_HEAP=1
ENV EULA=false
  • The RUN command causes a command to run during the image build process (not afterward, when the container is instantiated). The command here adds the Java JRE package from the Alpine repository. Interestingly, there is no need to enable the community repository this time, probably because the Docker Alpine base image already has it enabled
RUN apk add --no-cache openjdk17-jre-headless
  • As we'll see in a subsequent step, it is best to download the Minecraft server.jar file and include it alongside your Dockerfile when building the image, instead of using wget to download it (notice that we did not install wget this time either). The next command puts a copy of the server.jar file into the built image, copied from the build source we will put together later on. I'm also collecting the server.jar file, and the others we create later on, in a new directory at the root of the container's file system - a directory called "data". If the directory we copy into doesn't exist, the COPY command creates it. Including the trailing slash in the command below copies the file into the specified directory, rather than creating a file at that path with the specified name.
COPY server.jar /data/
  • The WORKDIR command changes the selected working directory within the image. It applies to the build instructions that follow it and also sets the working directory the container uses at runtime, which is why server.jar can be referenced by name in the ENTRYPOINT below
WORKDIR /data
  • Lastly we come to the ENTRYPOINT command. This command records to the image any commands that an instantiated container should run when it starts up. Alternatively we could have used the CMD command, however there are important distinctions between these two commands:
    • CMD can be overridden by specifying a command when instantiating a container (during the Docker command call). ENTRYPOINT does not allow this.
    • There are two forms for both CMD and ENTRYPOINT: shell and exec. The preferred form is the exec form, shown below with square-bracket notation and each argument passed in a comma-separated list. The shell form passes the command just as it would usually be issued on a command line. The practical difference is that the shell form expands variables while the exec form does not, unless you use the exec form to call the shell directly, as is done below.
    • Every Dockerfile should specify at least one CMD or ENTRYPOINT command
  • The variables in the command below are enclosed in curly braces because the argument syntax requires a 'G' to immediately follow the value that is passed in. The curly braces prevent the shell from mistakenly treating the trailing 'G' as part of the variable name.
  • The EULA environment variable can only be used if it is evaluated by the container after instantiation, rather than at build time. Since the only code from the Dockerfile that is evaluated in a running container is the ENTRYPOINT, it makes sense to include the code that writes the EULA file in the ENTRYPOINT definition, along with everything else.
  • There is no need to run the java command behind a tmux command inside the container. This idea is useful in a full virtual machine so that the virtual machine can still be interacted with (for updates, administration, etc…) while the Minecraft server runs. If you need to do these same things for a container you would just stop it and update the image build, then relaunch the container. So tmux is no longer needed.
ENTRYPOINT ["sh", "-c", "echo eula=${EULA} > /data/eula.txt; java -Xms${MIN_HEAP}G -Xmx${MAX_HEAP}G -jar server.jar nogui"]
  • That’s the end of the Dockerfile, but there is another important component from the full virtual machine build that we have not used here - iptables. In the Docker Compose definition we will create later we will define port mappings through the Docker API. Docker will impose these port mappings on the running container using iptables on its own. So we don’t need to add iptables ourselves.

The full Dockerfile, all together this time:

FROM alpine:3.17.3

ENV MAX_HEAP=2
ENV MIN_HEAP=1
ENV EULA=false

RUN apk add --no-cache openjdk17-jre-headless

COPY server.jar /data/

WORKDIR /data

ENTRYPOINT ["sh", "-c", "echo eula=${EULA} > /data/eula.txt; java -Xms${MIN_HEAP}G -Xmx${MAX_HEAP}G -jar server.jar nogui"]


Packaging your Dockerfile and Source Files

While you could certainly construct your Dockerfile so that it requires no accompanying source files, my end goal is to upload the bundle as a .tar file to the image build section of my Portainer Docker management system. For a successful build I need to create a directory on my admin machine, place the Dockerfile and the Minecraft server.jar (obtained from the Minecraft Server Download page) at the top level of that directory, and then create a .tar file in which those files sit at the top level of the archive. You can create the .tar file with the 7zip archiving program: multi-select the Dockerfile and server.jar files and have 7zip create the .tar. Don't tar the directory itself, or a directory will end up at the top level of the .tar file.
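If your admin machine is Linux or macOS rather than Windows, the tar command can produce an equivalent archive (the archive name here is arbitrary; run it from the directory holding the two files):

tar -cf minecraft-build.tar Dockerfile server.jar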

(Figure: Dockerfile .tar folder structure)

Next we upload the .tar to Portainer and build the image. I named my image minecraft:custom, which will be referenced in the Docker Compose definition.
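If you would rather build directly on a Docker host instead of through Portainer, a minimal equivalent is to run the build from the directory containing the Dockerfile and server.jar:

docker build -t minecraft:custom .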


Docker Compose / Docker Stack Definition

The final step is to create a Docker Compose definition from which your image can be launched. In Portainer this is done by creating a Stack. However, before we create the stack we should create a Docker volume and network.

Docker volume and network definition

The volume definition is trivial as you only need the default options. You can set the name of the Docker volume to minecraftdata.

Your network definition will depend on your network setup. In my case I will need to define a new VLAN with specific rules in my firewall, tag this through my switch and hypervisor down to my Docker host, and then into a new Docker network definition. Alternatively you can use the port mapping that I show below and simply point your Minecraft client at the IP address of your Docker host (not recommended for production deployments as this can carry security concerns).
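For reference, a rough CLI sketch of creating the volume and a macvlan network for this setup; the subnet, gateway, and parent interface are placeholders, and the network name minecraft_network is just an example:

docker volume create minecraftdata
docker network create --driver=macvlan --subnet=<subnet> --gateway=<gateway> -o parent=<interface>.<vlan> minecraft_network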

Docker compose and stack definition

The elements of a Docker compose definition are discussed elsewhere on this site, so I will cut to the chase of the full compose definition below:

version: "3"

services:
  minecraft:
    image: minecraft:custom
    restart: unless-stopped
    volumes:
      - minecraftdata:/data
    environment:
      - MAX_HEAP=2
      - MIN_HEAP=1
      - EULA=true
    ports:
      - "25565:25565"

volumes:
  minecraftdata:
    external: true

Launch this definition and you will see the container start and run; within a few more minutes the Minecraft server will be available at the IP address of your Docker host. With the Docker volume in place, even if you take down the running container and start it again, the server will still load the world data and saves. Without the Docker volume, a restart of the container loses all game progress and starts a new world.
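If you launch outside of Portainer, the same definition saved as docker-compose.yml can be brought up from the CLI (assuming the Docker Compose plugin is installed):

docker compose up -d
docker compose logs -f minecraft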

Docker Macvlan Networks

and other types of Docker network drivers

See further information about Docker networks here: https://docs.docker.com/network/drivers/

In my Docker environment I make use of a few Docker Swarm features and run Swarm on my single Docker host in preparation, so that I can add more Docker hosts to the swarm in the future. Of the various Docker network types the basic ones are Bridge and Overlay. Bridge is scoped for use on an individual Docker host (not swarm-aware), while Overlay is swarm-aware.

Docker Network Types

Each type of network is enabled through use of a specific driver, so Docker network types are also referred to as Docker network drivers. Docker provides an excellent summary of each driver on their documentation page (linked above), so I’ll just say what they’ve said. The ones that I typically use are in bold:

  • Bridge: the "default" network type that is good for running most containers, or containers that don't require special networking capabilities. User-defined bridge networks enable containers on the same Docker host to communicate with each other (the network is not swarm-aware). Bridge networks are isolated networks in which all attached containers can communicate with each other - useful when you need a common network for all containers in the same "project". Bridge networks are isolated from the host and require specific port mappings to be "exposed / published".
  • Host: shares the host’s network with the container. When you use this driver the container’s network isn’t isolated from the host.
  • Overlay: best when you need containers running on different Docker hosts to communicate with each other. Like a Bridge network, but swarm-aware so that swarm services can automatically share the network configuration to all Docker hosts in the swarm. Overlay networks are isolated from their hosts, so they require specific port mappings to be “exposed / published”.
  • MACvlan: good for migrating from a traditional VM environment or when you need your containers to appear like physical hosts on the network. This network type, and the IPvlan type, allows your containers to gain direct access to the network and be managed by traditional network security tools (like firewalls).
  • IPvlan: similar to the MACvlan type but doesn’t assign a unique MAC address to each container. Use this type if there is a restriction on the number of MAC addresses you can assign to a network interface or port on your Docker host (the restriction would likely come from the Docker host’s underlying OS).
  • none: completely isolate a container from the host and other containers. Containers with this network type are not meant to have any network communication, so publishing ports to expose services does not work. This type of network is not available when running a container on a Docker Swarm.
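As a brief sketch, creating networks of a couple of these types from the CLI looks like the following; the network names are just examples, and the overlay type additionally requires swarm mode to be active:

docker network create --driver=bridge my_bridge_net
docker network create --driver=overlay --attachable my_overlay_net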

Preference for Docker MACvlan Network Type

In my homelab environment I am the administrator for every aspect of service delivery, including system administration and network administration. Network firewalls are still a recommended security technology for all types of networks, and I certainly run one myself and recommend it for any homelab network. Modern firewalls provide many services useful to a homelab network, not least of which is a firewall’s primary function - the list of traffic forwarding rules. Since firewall rules are still based on where traffic came from and which interface, IP address, or port that it wants to go to, managing network traffic destined for containers can become messy with each container using the Docker host’s IP address. Using MACvlan networks to make containers “first class citizens” on the network means firewall management becomes more organized. Each Docker MACvlan network is its own VLAN and thus is a separate interface in the firewall, which simply enhances the organization of firewall rules. There are security considerations as well, especially for environments where traditional VMs and containers operate side-by-side.

For a more complete discussion of network security practices for Docker MACvlan containers and subnet separation, please see my article (Network Security Through Subnet Separation) on the topic

Hugo Docker Image

Hugo is a popular open-source static site generator. A static site generator creates flat HTML files rather than relying on dynamic content like JavaScript. Static sites typically have a smaller storage requirement and are more performant than dynamic websites, though they also have a much reduced feature set. However, most people would say that running a fully dynamic site just to serve article content (like this site) would be a waste.

There are a fair few Hugo Docker images out there, but I decided to make my own to practice Dockerfile creation and to fully understand the image that I end up running. Please see my article regarding building a Docker image for the first time as a prerequisite to this build.

TAR file build-out

In addition to the Dockerfile for the .tar file I’ll be creating I also need to include several other files:

  • entrypoint.sh file contents detailed herein; script for container start
  • hugo binary downloaded from https://github.com/gohugoio/hugo/releases/latest
  • nginx.conf file contents detailed herein; main configuration options for Nginx webserver
  • net.redbarrel.knowhow.conf specific Nginx config for the site

An SSH server is also installed so that the site contents can be managed by the administrator. Tips on defining content for a blog site in Hugo are covered in another article.
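As a sketch of obtaining the hugo binary listed above, you can download a release tarball and extract just the binary; the version and asset name below are assumptions, so check the releases page for current values:

wget https://github.com/gohugoio/hugo/releases/download/v0.111.3/hugo_0.111.3_linux-amd64.tar.gz
tar -xzf hugo_0.111.3_linux-amd64.tar.gz hugo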

Dockerfile

FROM alpine:3.17.3

ENV SSHUSER=
ENV SSHPASSWORD=

RUN apk add --update --no-cache \
	git \
	gcompat \
	libc6-compat \
	libstdc++ \
	nginx \
	openssh

RUN echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config

RUN ln -s /lib/libc.so.6 /usr/lib/libresolv.so.2

RUN mkdir /etc/nginx/sites-available /etc/nginx/sites-enabled

COPY nginx.conf /etc/nginx/

COPY net.redbarrel.knowhow.conf /etc/nginx/sites-available/

RUN ln -s /etc/nginx/sites-available/net.redbarrel.knowhow.conf /etc/nginx/sites-enabled/net.redbarrel.knowhow.conf

COPY hugo /usr/local/bin/

RUN chmod +x /usr/local/bin/hugo

COPY hugo-cron /etc/cron.d/hugo-cron
RUN chmod +x /etc/cron.d/hugo-cron
RUN crontab /etc/cron.d/hugo-cron
RUN touch /var/log/cron.log

COPY entrypoint.sh /

WORKDIR /srv

ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh

#!/bin/sh
# Create the SSH user (BusyBox adduser; -D skips the interactive password prompt) and set its password
adduser -D "$SSHUSER"
echo "$SSHUSER:$SSHPASSWORD" | chpasswd
chown "$SSHUSER:$SSHUSER" /srv
# Generate host keys and start sshd in the background
ssh-keygen -A
/usr/sbin/sshd -D -e "$@" > /dev/null 2>&1 &
# Start crond for the scheduled Hugo rebuilds
/usr/sbin/crond -l 2 -L /var/log/cron.log
# Poll until the site source exists, then run the Hugo server in the background
while true; do { if [ -d /srv/knowhow ] && [ -f /srv/knowhow/config.toml ]; then /usr/local/bin/hugo server -D -s /srv/knowhow --bind=0.0.0.0; break; else sleep 30; fi } done > /dev/null 2>&1 &
# Run nginx in the foreground as the container's main process
nginx -g "daemon off;"

nginx.conf

user nginx;
worker_processes auto;
pcre_jit on;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
	worker_connections 1024;
}

http {
	log_format main '$remote_addr - $remote_user [$time_local] "$request" '
					'$status $body_bytes_sent "$http_referer" '
					'"$http_user_agent" "$http_x_forwarded_for"';

	access_log /var/log/nginx/access.log main;

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	server_tokens off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*.conf;

	server {
		listen 80 default_server;
		listen [::]:80 default_server;
		
		server_name _;
		
		root /usr/share/nginx/html;
		
		include /etc/nginx/default.d/*.conf;
		
		location / {
		}
		
		error_page 404 /404.html;
		location = /404.html {
		}
	}
}

net.redbarrel.knowhow.conf

server {
	listen 80;
	listen [::]:80;
	
	server_name knowhow.redbarrel.net;
	
	root /srv/knowhow/public;
	
	index index.html;
	
	access_log /var/log/nginx/www_access.log;
	error_log /var/log/nginx/www_error.log;
	
	location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
		access_log off; log_not_found off; expires max;
	}
	
	location / {
		try_files $uri $uri/ =404;
	}
}

hugo-cron

* 4 * * * cd /srv/knowhow;/usr/local/bin/hugo --minify
Tip

The last line of the cron file must be an empty line in order to satisfy the syntax of the cron file

Portainer Build

  • Upload the Dockerfile .tar to Portainer and build the image as hugo:custom
  • Create a docker volume called hugodata
  • Create a macvlan config network called br_knowhow_config
    • subnet: <subnet>
    • gateway: <gateway>
    • IP range: <ip range>
    • parent network card: <interface>.<vlan> (i.e. ens10.5)
  • Create a macvlan network based on br_knowhow_config called br_knowhow (a CLI sketch of this setup follows the list)
    • ☑ enable manual container attachment
  • Create VLAN in your firewall and switches, tagging through to your container host
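As an approximate CLI equivalent of the Portainer steps above, the same two-step macvlan setup (a config-only network consumed by a swarm-scoped network) can be sketched as follows, using the same placeholder values:

docker network create --config-only --subnet=<subnet> --gateway=<gateway> --ip-range=<ip range> -o parent=ens10.5 br_knowhow_config
docker network create -d macvlan --scope swarm --config-from br_knowhow_config --attachable br_knowhow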

Portainer Stack / Docker Compose Definition

version: "3"

services:
  hugo:
    image: hugo:custom
    volumes:
      - hugodata:/srv
    networks:
      - br_knowhow
    environment:
      - SSHUSER=<username>
      - SSHPASSWORD=<password>

volumes:
  hugodata:
    external: true

networks:
  br_knowhow:
    external:
      name: br_knowhow

Installing Docker Engine

(on RHEL derivatives)

Also See Official Docker Documentation Install Docker Engine

While other methods are provided in the official documentation (see above), I prefer adding the official Docker repository to my package manager as the source for Docker packages.

sudo dnf -y install dnf-utils
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Note

Even though the repo URL used above references ‘centos’ these packages are suitable for Red Hat-family OS’s. Fedora and RHEL have their own URLs, so you can use those for Fedora and Red Hat respectively as you wish (see the documentation).

Add the packages

sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Note

In the past Docker Compose would need to be installed from a separate Github project, but these days the enhancement is added with the docker-compose-plugin package. Nice!

Enable the Docker system service and start it

sudo systemctl enable --now docker
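You can verify the engine is working by running the hello-world test image:

sudo docker run hello-world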

Portainer Management of Docker

Portainer offers an excellent management webgui for your Docker host. Simply run the Portainer Docker Compose project to get it going. See my article Installing Portainer to get the steps.


Considerations for Hosting Docker

When considering designs for hosting services in containerized or virtualized infrastructure, I realized that I could reduce all services to containers and run Docker Engine directly on my server farm (an environment with 100% containers, 0% VMs). In the end I chose to install a VM hypervisor directly on the servers instead (Proxmox VE), with Docker installed inside a virtual machine. This gives me the flexibility to run full virtual machines from time to time, while most of my services run as containers on Docker. Proxmox can run LXC containers directly, but not Docker containers; I enabled Docker containers by running my Docker engine inside a full virtual machine under Proxmox.

Initially I had concerns about how performance would turn out, but for the workloads I run in my homelab this hasn’t been a real concern. For the type and quantity of services run in a home lab I don’t have any problem recommending this setup.

Installing Portainer

for Graphical Docker container management


See official Portainer installation documentation for Community Edition on Linux with Docker Swarm

A vanilla installation of Docker can be entirely managed through Docker’s command line tools, however graphical tools like Portainer offer a GUI representation of commands and are preferred by some administrators. Portainer’s GUI is a webgui, providing the additional benefit of managing your Docker installation through a web browser instead of a locally installed app.

Portainer is offered in a free community-supported edition (Portainer CE) and an edition for business with paid tiers and direct support (Portainer BE). The business edition includes features that aren't available in the community edition, though these features are typically of interest to business computing environments: integration with centralized access management systems, additional security and reporting, and auditing features. All editions of Portainer also support Docker Swarm, Kubernetes, and Azure ACI.

Installation

Portainer can be run with a one-line Docker command; however, since I like to launch Portainer without needing to remember all the options, using a Docker Compose file is much better. This also allows me to add comments (for example, noting the previous Portainer image version I had running before an update) and provides a visually organized layout for the options I use.

Prerequisites

  • Docker Volume: I created a persistent volume to hold the data that Portainer uses to run, including the database it creates. If you’re starting from a fresh Docker or Portainer installation then you’ll need to create the Docker volume first; for all other runs of Portainer you’ll be referencing your previously created persistent volume.

docker volume create: replace portainer_data with whatever name you want for the volume, but be sure to continue replacing it in upcoming commands as well

docker volume create portainer_data
  • Docker Network: I prefer to keep network traffic for each container separated all the way through the network to the external firewall. In order to do this, separate Docker networks are created and VLAN tags specified. The Portainer container is also isolated into its own VLAN, so if you follow this same network design and you're starting fresh you'll need the following command. If you prefer standard Docker networking, where each container is connected to the network by specifying a port on the Docker host to expose, then you can skip this step (however my commands do not include the options for exposing Docker host ports - see official Docker documentation here and official Portainer documentation here).

docker network create: be sure to set your own values for subnet, gateway, and parent (which should be the name of your network adapter that connects the docker host with your VLAN). portainer_network should be whatever name you want docker to know the network as.

docker network create --driver=macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.1 -o parent=eth38.23 portainer_network
Note

See my short discussion on my preference for macvlan Docker networks when separating containers into externally routed VLANs here. TL;DR it enables secure isolation of containers when managing through an external firewall

  • Certificate: I also wanted the Portainer webgui to use proper HTTPS; however, I'm not serving the webgui to the internet and can't (and don't want to) pull a LetsEncrypt certificate. A self-signed certificate would still throw an error in my browser (unless I also installed the certificate on my workstation), but I have a better solution since I run my own local certificate authority - i.e. generate my own server certificate and install the root certificate from my local CA (an example sketch of generating these files appears after this list). This is why you see options to include Portainer's SSL certificate in the compose yaml below. Don't forget to create and upload your certificate and key files to the Docker host! Put them in a folder named ssl/ in the directory where you have your Portainer docker-compose.yml file.

  • HTTPS: Lastly, to enable use of the certificates previously mentioned and turn on HTTPS the ’entrypoint’ section is added to the Portainer compose file. This line disables serving Portainer on HTTP while specifying the HTTPS port as 443 (Portainer’s default is port 9443).
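As a rough sketch of producing the certificate and key referenced in the compose file below with openssl; the CA file names, subject name, and validity period are assumptions you would replace with values from your own local CA:

openssl req -new -newkey rsa:4096 -nodes -keyout ssl/cmgmt.key -out ssl/cmgmt.csr -subj "/CN=portainer.domain.tld"
openssl x509 -req -in ssl/cmgmt.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ssl/cmgmt.crt -days 825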

docker-compose.yml

version: '3'

services:
  portainer:
    #image: portainer/portainer-ce:2.20.1 <-- previous version noted for easy rollback
    image: portainer/portainer-ce:2.20.2
    container_name: portainer
    restart: always
    networks:
      portainer_network:
    entrypoint:
      /portainer --http-disabled --bind-https :443
    command:
      --sslcert /data/ssl/cmgmt.crt
      --sslkey /data/ssl/cmgmt.key
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - portainer_data:/data
      - ./ssl:/data/ssl

networks:
  portainer_network:
    external: true

volumes:
  portainer_data:
    external: true

Starting and Stopping the Docker compose

This section assumes that you have Docker Compose available on your Docker host somehow. Contemporary Docker installations can include a native plugin to enable Docker Compose - in the past this would have required obtaining the source and running Docker Compose separately. See my article Installing Docker to see how I add Docker Compose.

Once you have Docker Compose available, make sure your current directory is the one containing your Portainer Docker Compose yaml file (docker-compose.yml), with your ssl/ directory (containing your certificate and key) inside it.

Starting Portainer

docker-compose up -d

Stopping Portainer

docker-compose down

Kubernetes Introduction: Docker vs Containerd

Kubernetes (also written as k8s) is an advanced container management and orchestration platform. It sits above the container management engine, which provides the command interfaces for running and managing containers. Docker is one example of a container management engine; Containerd is another. In fact, Docker is more of a hybrid, providing enhancements and a programming interface on top of the Containerd engine it actually uses under the hood. Because k8s sits above Containerd and Docker, you are required to install a container management engine when installing k8s.

In the past Docker directly developed a shim that interfaced between itself and k8s, but it has since ceased development on it. This prompted blog posts announcing that k8s could no longer support Docker. However, development of the shim was picked up by Mirantis. It can be found hosted on GitHub: cri-dockerd

Re-adding Utilities to Minimal Containers

Docker images are purposefully minimal and notoriously omit standard utilities an administrator would need during troubleshooting. See below for reminders of the package names to use for reinstalling these utilities.
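A few illustrative examples of re-adding common troubleshooting tools, assuming an Alpine-based image (Debian/Ubuntu-based images use different package names, shown second):

# Alpine: curl, dig/nslookup (bind-tools), ps (procps), ip/ss (iproute2)
apk add --no-cache curl bind-tools procps iproute2
# Debian/Ubuntu-based images:
apt-get update && apt-get install -y curl dnsutils procps iproute2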