
3 Docker tips every beginner should know before running containers


Are you getting started with Docker and finding it a little overwhelming? The commands can be unwieldy, and best practices are not always clearly spelled out. When I started, I ran into three main pitfalls that I wish someone had warned me about; knowing them early might help you too.

Large software projects like Docker often bury crucial best practices and warnings deep within technical documentation. Beginners face a deluge of technical detail but few clear, concise guidelines to map out a learning path. When I started, I didn't understand how to manage multiple dependent services, how to tame unwieldy commands, or just how dangerous running containerized processes as root can be. Had these details been clearly outlined, I could have saved a lot of time and avoided some potentially costly mistakes.

### 1. Not Using Docker Compose for Multi-Service Configurations

Docker is well-known for running single containerized services—you provide a command and the necessary options, and it runs your service. But what’s perhaps lesser-known to beginners is that Docker can coordinate multiple services together using **Docker Compose**.

Docker Compose is a subcommand of the Docker CLI (`docker compose`), and it reads a YAML configuration file (typically named `docker-compose.yaml` or `compose.yaml`) that specifies your services. This simplifies pulling and configuring one or more services, especially when they depend on each other.

Here’s a snippet from the official Gitea (a Git server) documentation:

```yaml
# docker-compose.yaml
volumes:
  postgres-data:
  gitea-data:

services:
  server:
    image: docker.gitea.com/gitea:nightly
    volumes:
      - gitea-data:/data
    depends_on:
      - db

  db:
    image: docker.io/library/postgres:14
    volumes:
      - postgres-data:/var/lib/postgresql/data
```

This example instructs Docker Compose to create two services, one of which depends on the other. As written, though, it won't actually work: it's missing the credentials PostgreSQL requires and the environment variables that tell Gitea how to reach the database.

A more complete, functional example would look like this:

```yaml
# docker-compose.yaml

networks:
  gitea:
    external: false

volumes:
  postgres-data:
  gitea-data:

services:
  server:
    image: docker.gitea.com/gitea:nightly
    container_name: gitea
    environment:
      USER_UID: 1000
      USER_GID: 1000
      GITEA__database__DB_TYPE: postgres
      GITEA__database__HOST: db:5432
      GITEA__database__NAME: gitea
      GITEA__database__USER: gitea
      GITEA__database__PASSWD: gitea
      TZ: America/New_York
    restart: always
    networks:
      - gitea
    volumes:
      - gitea-data:/data
    ports:
      - "3000:3000"
      - "2222:22"
    depends_on:
      - db

  db:
    image: docker.io/library/postgres:14
    restart: always
    environment:
      POSTGRES_USER: gitea
      POSTGRES_PASSWORD: gitea
      POSTGRES_DB: gitea
    networks:
      - gitea
    volumes:
      - postgres-data:/var/lib/postgresql/data
```

**Key points from this Compose file:**

- Environment variables configure each service, such as the database credentials passed to Gitea.
- The Gitea service stores its repository data at `/data` inside the container, mounted to a persistent volume.
- The PostgreSQL service stores its data at `/var/lib/postgresql/data` inside the container, also mounted to a persistent volume.
- Named volumes outlive individual containers, so your data survives restarts and upgrades; on a standard installation they live under `/var/lib/docker/volumes` on the host, which you can verify as shown below.
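
If you're curious where a named volume actually lives, you can ask Docker directly. Note that Compose usually prefixes volume names with the project name (by default, the directory name), so list them first:

```bash
# List volumes; Compose-created ones are usually named <project>_<volume>,
# e.g. myproject_gitea-data.
docker volume ls

# Print the host path backing a volume (substitute the real volume name).
docker volume inspect <volume-name> --format '{{ .Mountpoint }}'
```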

To run this setup, navigate to the directory containing the `docker-compose.yaml` file and execute:

```bash
docker compose up
```

Docker will pull the necessary images and start both services, bringing up the database container before Gitea thanks to `depends_on`.
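
A few variations of that command are worth knowing from day one; these are all standard Compose subcommands:

```bash
docker compose up -d    # start the stack in the background (detached)
docker compose logs -f  # follow the combined logs of all services
docker compose down     # stop and remove the containers; named volumes survive
```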

**Using Docker Compose** simplifies specifying a complex stack of applications, making it far easier and more maintainable than writing a Bash script or individual Docker commands.

### 2. Running Containerized Processes as Root: A Security Risk

When I first started with Docker, I assumed containers were fully isolated, so running a root process inside a container couldn’t do any harm. I was wrong.

By default, Docker containers share the host's user ID (UID) namespace. A root process inside a container therefore runs as UID 0 on the host, albeit with a restricted set of capabilities. That still presents a serious security risk: an attacker who exploits a vulnerability to break out of the container lands on the host as root.

Even processes isolated to localhost are not entirely safe. If your containerized applications handle external data—such as monitoring network traffic, reading files, or receiving requests—your system could be vulnerable to malicious data and exploits.

**At a minimum**, treat containers running as root with the same caution you would for root-level processes on your host. Avoid running risky operations as root inside containers.

#### How to mitigate this risk?

- Run Docker in **rootless mode**, following Docker's official documentation. This mode maps root-level container processes to unprivileged users on the host, reducing the risk of breakout.
- Alternatively, specify a non-root user inside the container with the `--user` flag when running containers, or the `USER` directive in your Dockerfile (see the sketch below).
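
As a minimal sketch of the second option (the UID:GID pair here is arbitrary; whatever you choose must have permission to access the files the process needs):

```bash
# Run a throwaway container as a non-root user and confirm its identity.
# 1000:1000 is an example UID:GID, not a requirement.
docker run --rm --user 1000:1000 docker.io/library/alpine id
```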

Sometimes, running as a non-root user isn’t straightforward because certain containerized processes need to modify protected files or directories. In such cases, I recommend using **Podman** instead of Docker.

**Podman:**

- Runs containers as unprivileged users by default.
- Is largely command-line compatible with Docker, so it works as a drop-in replacement for most workflows.
- Has a desktop GUI called Podman Desktop.

I highly recommend you try Podman if you want better security with minimal hassle.
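
If you want to try it, Podman's CLI mirrors Docker's closely enough that many people simply alias one to the other. A minimal sketch, assuming Podman is already installed from your distribution's packages:

```bash
# Route docker commands to podman; most day-to-day commands work unchanged.
alias docker=podman

# Runs rootless by default: no daemon, no root privileges required.
docker run --rm docker.io/library/hello-world
```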

### 3. Not Using a UI, Shell Completions, or Aliases

Docker commands can be quite long and repetitive, which disrupts workflow—especially if you run containers often.

For example, here is the official Docker command to run Pi-hole:

```bash
docker run --name pihole \
  -p 53:53/tcp \
  -p 53:53/udp \
  -p 80:80/tcp \
  -p 443:443/tcp \
  -e TZ=Europe/London \
  -e FTLCONF_webserver_api_password="correct horse battery staple" \
  -e FTLCONF_dns_listeningMode=all \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  --cap-add=NET_ADMIN \
  --restart=unless-stopped \
  pihole/pihole:latest
```
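
For comparison, here's roughly the same configuration as a Compose file. This is a hand translation of the flags above, not Pi-hole's official Compose example, so double-check it against the Pi-hole documentation before relying on it:

```yaml
# compose.yaml -- hypothetical translation of the docker run command above
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      TZ: Europe/London
      FTLCONF_webserver_api_password: "correct horse battery staple"
      FTLCONF_dns_listeningMode: all
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
```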

In reality, managing a command like this through Docker Compose (as above) or a `systemd` service is more practical. But when you do need to run Docker commands directly, there are three tools that help manage the complexity:

#### 1. User Interface (UI)

A good UI can simplify container management, providing a more intuitive way to interact with Docker.

– **Docker Desktop**: The official Docker GUI, great for most users.
– **Lazydocker**: A terminal-based UI perfect for keyboard lovers and those who want a lightweight interface.

I personally prefer lazydocker for its keyboard shortcuts and simplicity. Both tools will greatly improve your efficiency.

#### 2. Shell Completions

Enabling shell completions lets your shell auto-suggest Docker commands and container names. For example, type `docker cont` and press `Tab` to see all container-related commands.

This not only speeds up typing but helps reduce typos.
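
Recent versions of the Docker CLI can generate the completion script themselves via the `docker completion` subcommand; if yours is older, your distribution's Docker package may ship completions instead. A sketch for bash:

```bash
# Generate a bash completion script and install it where bash-completion
# picks it up for your user (path conventions vary by distribution).
mkdir -p ~/.local/share/bash-completion/completions
docker completion bash > ~/.local/share/bash-completion/completions/docker
```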

#### 3. Aliases

Using shell aliases allows you to abbreviate frequently used Docker commands. For example, you could alias `docker compose` to `dc`, or `docker ps -a` to `dpa`, saving keystrokes.

Define them once in your shell's startup file and they're available in every session, letting you run frequent commands with just a few keystrokes.
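
Assuming a bash or zsh setup, a minimal sketch (the names are arbitrary; pick ones that don't shadow existing commands, such as the classic `dc` calculator):

```bash
# ~/.bashrc or ~/.zshrc
alias dc='docker compose'   # dc up -d, dc logs -f, ...
alias dpa='docker ps -a'    # list all containers, running or not
alias dim='docker images'   # list local images
```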

### Final Thoughts

Docker is a powerful tool for packaging software and services, but it can be less straightforward than it seems. It comes with pitfalls that can trip up beginners, such as managing multi-service configurations, securing containerized processes, and dealing with complex command syntax.

When I began using Docker, these tips were not immediately obvious, so I’m sharing them to help you avoid the same pitfalls. Knowing these three things early on will save you time, enhance security, and make your Docker experience more enjoyable.

Happy containerizing!
Source: https://www.howtogeek.com/mistakes-i-made-when-i-first-started-using-docker/
