Installing Docker

Written July 19, 2021, Updated Sept 5, 2022

While we could use Docker Desktop, we can also run the Docker daemon (aka Docker Engine) directly within our WSL2 environment. It's faster and uses fewer system resources this way.

Bonus: This method also works for Windows on ARM users; at the time of this writing, Docker Desktop is x86-64 only.

Prepare the subnets before you begin

Set a predictable IP address for the docker bridge network interface and for additional docker subnets. This can save you a lot of grief down the road, when docker inevitably conflicts with another subnet on your network.

I am going to use 192.168.1.1 as the bridge IP, and allocate the remaining 254 IP addresses (192.168.1.*) for additional IPs in this subnet.

https://www.ipaddressguide.com/cidr is a great utility for converting IP ranges into CIDR notation. For example: 192.168.1.0 - 192.168.1.255 = 192.168.1.0/24

Why 24? Okay, well since you asked, let's get into it...

There are 256 x 256 x 256 x 256 possible IP addresses or:

2^8 x 2^8 x 2^8 x 2^8 (4 octets)

or:

2^32

Now, what if we essentially exclude the last octet? The first three octets give us 2^8 x 2^8 x 2^8 combinations, right?

aka

2^24. Those three octets are 3 x 8 = 24 bits, and that is exactly what a /24 means: the first 24 bits are fixed as the network prefix. That's where the 24 came from.
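If you want to sanity-check those powers of two, the shell can do the arithmetic for you:

```shell
# host addresses in a /24: the remaining 32 - 24 = 8 bits are free
echo $(( 1 << (32 - 24) ))   # 256
# total number of possible /24 prefixes: 2^24
echo $(( 1 << 24 ))          # 16777216
```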

Now, in addition to the bridge subnet, we need additional subnets for when docker network create blah is invoked.

Let's allocate blocks of 1,024 (2^10) IPs for subsequent networks.

We need to be careful not to overlap with the bridge subnet, and each block must start on a 1,024-address boundary.

1,024 / 256 = 4, so each block spans 4 values of the third octet. The first block (192.168.0.0 - 192.168.3.255) would contain our bridge subnet, so we start at the next one:

192.168.4.0 - 192.168.7.255

in CIDR notation, this would be...

a block of 2^10 addresses leaves 10 host bits free, so the prefix length is 32 - 10 = 22, giving 192.168.4.0/22
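If you'd rather not trust the mental math, Python's ipaddress module (assuming python3 is installed; it ships with recent Ubuntu) can confirm the boundaries of the block:

```shell
# first address, last address, and size of the /22 block
python3 -c 'import ipaddress; n = ipaddress.ip_network("192.168.4.0/22"); print(n[0], n[-1], n.num_addresses)'
# 192.168.4.0 192.168.7.255 1024
```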

Ok enough of the math already, just make the config file:

sudo mkdir -p /etc/docker
echo '{
  "bip": "192.168.1.1/24",
  "default-address-pools": [
    {
      "base": "192.168.4.0/22",
      "size": 24
    }
  ]
}' | sudo tee /etc/docker/daemon.json
# "size": 24 means each network docker creates from this pool gets a /24 (256 addresses)
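With "base": "192.168.4.0/22" and "size": 24, every docker network create carves its own /24 out of the pool, so there's room for 2^(24-22) of them:

```shell
# number of /24 subnets that fit in the /22 pool
echo $(( 1 << (24 - 22) ))   # 4
```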

https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file has an example of all this.

Install the packages

# download and install docker's GPG signing key
wget -q -O - https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
archType="amd64"
if test "$(uname -m)" = "aarch64"
then
    archType="arm64"
fi

# add docker's package repo
echo "deb [arch=${archType} signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt update
# if all goes well, you should see docker's repo in the output, something like:
# ...
# Get:4 https://download.docker.com/linux/ubuntu focal InRelease
# ...
# now the packages are available to install like any other:
sudo apt-get install docker-ce docker-ce-cli containerd.io

Add yourself to the docker group

So you don't need to run sudo all the time, put yourself in the docker group:

sudo usermod -a -G docker $USER

This is required because the socket file /var/run/docker.sock (once docker is started) is owned by the docker group.

For this change to take effect, shut down WSL and relaunch.

# from a Windows terminal (PowerShell or cmd):
wsl --shutdown
# then reopen your WSL terminal and confirm:
id
# the groups list should now include something like 999(docker)
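Once you're back in, a quick sketch like this (the messages are just illustrative) shows whether the new group is active in the current shell:

```shell
# does the current shell session carry the docker group?
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "docker group not active yet - restart WSL"
fi
```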

Mount the cgroup

Because WSL2 does not boot with systemd as init, the systemd cgroup hierarchy that docker expects is missing, so you need to do something extra to avoid some docker problems in your near future. Sadly, you need to do this every time you start the docker daemon :(

# create the mount point
sudo mkdir -p /sys/fs/cgroup/systemd
# check if mounted already, if not, mount it
mountpoint -q /sys/fs/cgroup/systemd || sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Finally, start docker

sudo service docker start

Did it work?

# check what's running
docker ps -a
# run something
docker run --name dockertest --rm library/alpine:3.14.0 cat /etc/os-release
# remember the whole cidr setup?  Let's see if that worked:
ip -4 addr show docker0
# hopefully you see this:
# inet 192.168.1.1/24

Start docker without a sudo password

# probably not a bad idea to open another Terminal tab right now, and become root
# this is kinda dangerous, so make sure you back up this file
sudo cp -p /etc/sudoers /etc/sudoers.bak
echo '# Allow members of the docker group to start/stop docker' | sudo tee -a /etc/sudoers >/dev/null
echo '%docker ALL=(root) NOPASSWD: /usr/bin/mkdir -p /sys/fs/cgroup/systemd, /usr/bin/mount -t cgroup -o none\,name=systemd cgroup /sys/fs/cgroup/systemd, /usr/sbin/service docker *' | sudo tee -a /etc/sudoers >/dev/null
# check the syntax - a broken sudoers file can lock you out of sudo entirely
sudo visudo -c
sudo -l
# if all good, delete the backup
sudo rm /etc/sudoers.bak

If you mess up your sudoers file, become root with su -, because you remembered to set the root password, right?

Start Docker Wrapper Script

You might want to save this little wrapper script to start docker in the future:

mkdir -p ~/.local/bin
cat <<'EOF' >~/.local/bin/startDocker.sh
#!/bin/sh
sudo mkdir -p /sys/fs/cgroup/systemd
mountpoint -q /sys/fs/cgroup/systemd || sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
sudo service docker start
EOF
chmod u+x ~/.local/bin/startDocker.sh

If you don't have ~/.local/bin in your PATH already, you should add it:

echo 'export PATH="${PATH}:${HOME}/.local/bin"' >> ~/.bashrc

Advanced

host.docker.internal

How does a container communicate with the host's localhost? Docker Desktop adds a DNS record host.docker.internal automatically, but if you're running the CLI, you don't get this luxury. To add custom DNS records (like adding an entry to your /etc/hosts, but inside the container), simply use the --add-host command-line argument like so:

docker run --add-host=host.docker.internal:host-gateway ...

host-gateway has special meaning, and will resolve to the docker0 network interface IP.

More memory and permissions

If you ever need to run Chromium inside a docker container, these are the magic parameters you'll want:

docker run -e DISPLAY=$DISPLAY --cap-add=SYS_ADMIN --shm-size 256m ...

Mounting files

When mounting files, keep in mind the UID of the process running inside the container. The sledgehammer approach is chmod 777, but that just feels wrong. Pretend the container's UID is 1001, but your UID is 1000. Use ext4's ACL support to grant fine-grained access without setting up a whole bunch of groups.

setfacl -m 'u:1001:r' myfile.txt
# then mount it as usual, with the absolute path of course:
docker run -v "$(realpath myfile.txt)":/tmp/myfile.txt ...

Source

This is a version of https://docs.docker.com/engine/install/ubuntu/ tweaked for WSL2.