I used to keep an eye on my Hetzner servers the old-fashioned way: SSH in, run a few commands, hop to the next box, and hope nothing was falling apart in the background.

Every monitoring tool I tried felt heavier than the services I was protecting – bloated dashboards, high CPU usage, and no way to see everything at a glance.

Beszel finally fixed that for me.

Why Beszel? (Context & Goals)

Beszel is an open-source, self-hosted server monitoring tool written in Go with a lightweight PocketBase dashboard.

It’s designed for simplicity, efficiency, and clarity – no unnecessary bloat, no agents eating your RAM.

Here’s what makes it stand out for me:

  1. Lightweight design – The hub is a PocketBase app with a Go backend, and each server runs a tiny agent. Even with the hub and five agents, CPU usage stays under 2% on the smallest Hetzner instance.
  2. Real metrics with real usability – The web dashboard is fast, mobile-friendly, and customizable. I can hide metrics I don’t care about (like GPU and temp), pin the ones I do, hit ⌘K to jump to any server, and even dig into Docker container stats and logs without SSH.
  3. Alerts that replace my uptime tools – Beszel watches for downtime, CPU spikes, disks filling up, Docker containers misbehaving, and it emails me when something crosses a threshold. I don’t need separate uptime and resource monitors anymore.

Under the hood, Beszel is a hub-and-agent design. Each agent keeps a WebSocket connection to the hub, but if that ever drops (say, the proxy restarts), it automatically exposes a backup SSH tunnel on port 45876 so the hub can keep pulling metrics. That dual-path connection, plus mutual authentication and fingerprinting, makes me comfortable running it on every production server.

The rest of this guide walks through how I run Beszel in production: preparing the server and DNS, shipping the hub with Docker, securing it with Caddy, onboarding more systems, enabling health checks and alerts, and planning for disaster recovery with snapshots and backups.

📬 NEWSLETTER

If you want more write-ups like this one, subscribe to my newsletter – I send an email when I publish new guides and walkthroughs.


Prep the Server and DNS

Before deploying Beszel, I prepare the base server – just like I do for every new self-hosted project:

  1. Baseline hardening – I follow the "Preparing Your Ubuntu Server for First Use" guide I already published: create a non-root user, lock down SSH, enable UFW, and keep the packages up to date.
  2. Install Docker Engine + Docker Compose – Beszel’s deployment runs on Docker, so I install Docker Engine and the Docker Compose plugin from Docker’s official repository using my guide: Installing Docker and Docker Compose on Ubuntu.
  3. Hetzner IP protection – In the Hetzner Cloud console, I enable protection on the server and on its primary IPv4/IPv6 addresses. That way if the server gets deleted (accidentally or during a disaster), the IPs stay with me and I can reassign them later without touching DNS.
👉
I run everything on Hetzner because it’s cost-effective and reliable—if you’d like to try them, here’s my referral link.
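If you prefer the terminal over the console, the same protection toggles exist in the hcloud CLI. Treat this as a dry-run sketch rather than copy-paste: the shim on the first line only echoes each call, the server name beszel-hub and the primary-IP names are placeholders, and running it for real assumes the official hcloud CLI with HCLOUD_TOKEN set.

```shell
# Dry-run sketch of the Hetzner console steps via the hcloud CLI.
# The shim below echoes each call instead of executing it; delete it to run for real.
hcloud() { echo "would run: hcloud $*"; }

# Lock the server against deletion and rebuilds ...
hcloud server enable-protection beszel-hub delete rebuild

# ... and keep the primary IPs even if the server goes away.
hcloud primary-ip enable-protection beszel-hub-v4
hcloud primary-ip enable-protection beszel-hub-v6
```

Remove the shim line once you've confirmed the names match your own project, and the same three commands apply the protections for real.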

Next comes DNS:

  • I create A and AAAA records for the hostname I’ll use (e.g., beszel.yourdomain.com). When I’m using Cloudflare, I keep the proxy toggled off while I’m setting up the reverse proxy – plain DNS means no cached certificates or blocked ACME requests while Caddy fetches SSL certs.
I recommend giving each self-hosted project its own hostname and letting that server do just one job. Beszel can technically share a box with other services, but I keep it isolated so troubleshooting stays simple.

With the base server hardened, DNS pointing to the host, Docker/Compose installed, and IP protection turned on, I’m ready to deploy the Beszel stack.

Install Beszel Hub + Local Agent with Docker

Beszel ships as two components: the hub (PocketBase + Beszel's web dashboard) and the agent that runs on every server you want to monitor.

You can install Beszel either with Docker or as a single binary.

It’s written in pure Go, and Beszel provides an install script for the binary route – but I still prefer Docker because it keeps everything in one compose file and makes backups, upgrades, and restores painless.

Beszel’s documentation shows two install flows: deploy the hub first and add agents later, or launch both hub and local agent in one go.

I prefer the second option so everything is up and monitored at the same time. That way, if I ever need to scale the hub’s resources as more agents come online, I already have baseline metrics for the hub itself.

Create the Compose File

I start by creating a working directory and moving into it:

mkdir ~/beszel
cd ~/beszel

Then I create a docker-compose.yml file and drop in the official stack:

services:
  beszel:
    image: henrygd/beszel:latest
    container_name: beszel
    restart: unless-stopped
    ports:
      - 8090:8090
    volumes:
      - ./beszel_data:/beszel_data
      - ./beszel_socket:/beszel_socket

  beszel-agent:
    image: henrygd/beszel-agent:latest
    container_name: beszel-agent
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./beszel_agent_data:/var/lib/beszel-agent
      - ./beszel_socket:/beszel_socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      LISTEN: /beszel_socket/beszel.sock
      HUB_URL: http://localhost:8090
      TOKEN: <token>
      KEY: "<key>"

This is straight from Beszel’s docs – no edits beyond plugging in my token/key later.

The local agent writes metrics to /beszel_socket/beszel.sock, so the hub mounts the same path under volumes to read the data locally. Because the containers sit in different network namespaces (the agent uses network_mode: host, the hub Docker’s default bridge), the Unix socket is what lets them talk to each other without exposing anything to the outside.

The TOKEN and KEY placeholders stay put for now – we’ll grab the real values from the dashboard right after the stack starts.

With the file saved, I launch the stack:

sudo docker compose up -d

That brings both services online so I can finish wiring the local agent from the dashboard.

Wire the Local Agent in the Dashboard

Once the containers are running, I open http://<server-ip>:8090.

Beszel immediately asks me to create the first account – I type my email, pick a password, and seconds later I’m at the dashboard.

From there I add a system named Hub, copy the token/key Beszel shows, drop those values into the compose file (TOKEN/KEY), and rerun sudo docker compose up -d. Because both containers share /beszel_socket/beszel.sock, I set that socket path as the Host/IP when creating the agent so the connection lights up immediately.

You might notice the dashboard is reachable even if UFW doesn’t allow port 8090. That’s Docker at work – publishing ports through its own iptables rules (DOCKER chain) before UFW ever sees the traffic. We’ll tuck the hub behind Caddy and bind it to 127.0.0.1 shortly so only HTTPS requests hit it.

Hub-Only Variant

If you’d rather bootstrap just the hub and attach agents later, you can start with a trimmed compose file:

services:
  beszel:
    image: henrygd/beszel
    container_name: beszel
    restart: unless-stopped
    ports:
      - 8090:8090
    volumes:
      - ./beszel_data:/beszel_data

Notice there’s no ./beszel_socket:/beszel_socket line – that socket is only needed when a local agent is present.

Once you’re ready to monitor the hub itself, add the agent block back in and redeploy.

Secure Access with Caddy

A reverse proxy sits in front of your service, terminates HTTPS, and forwards clean traffic to the backend.

I don’t like exposing port 8090 over plain HTTP, so before I do anything serious in the dashboard I put Beszel behind Caddy.

I’ve used NGINX for years, but for single-app setups like this I switched to Caddy because it’s lightweight, handles Let’s Encrypt automatically, and takes minutes to configure.

I install Caddy from Ubuntu’s repo:

sudo apt install caddy

Then I back up the default config and overwrite /etc/caddy/Caddyfile with the snippet from Beszel’s docs:

beszel.example.com {
    request_body {
        max_size 10MB
    }
    reverse_proxy 127.0.0.1:8090 {
        transport http {
            read_timeout 360s
        }
    }
}

Replace beszel.example.com with your hostname, save, and run:

sudo systemctl restart caddy

Caddy immediately requests a Let’s Encrypt certificate for that hostname and starts proxying traffic to 127.0.0.1:8090.

If ports 80/443 aren’t allowed in UFW, Caddy can’t complete the ACME challenge.

I forgot to open them the first time and hit Let’s Encrypt’s rate limit, so now I always double-check before restarting. Whenever something looks off, sudo journalctl -u caddy --no-pager -n 100 helps me identify the issue right away.
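To avoid repeating that mistake, I keep a tiny preflight check around. This is my own sketch, not something from Beszel’s or Caddy’s docs: pipe sudo ufw status into it, and it refuses if port 80 or 443 isn’t allowed.

```shell
# Preflight: confirm UFW allows 80 and 443 before restarting Caddy,
# so the ACME HTTP/TLS challenges can actually reach it.
# Usage: sudo ufw status | check_acme_ports
check_acme_ports() {
  rules="$(cat)"
  for port in 80 443; do
    # Match rule lines like "80/tcp    ALLOW    Anywhere" or "443    ALLOW    Anywhere"
    if ! printf '%s\n' "$rules" | grep -Eq "^${port}(/tcp)?[[:space:]]+ALLOW"; then
      echo "port ${port} is not allowed in UFW - fix that before restarting caddy" >&2
      return 1
    fi
  done
  echo "80/443 are open - safe to restart caddy"
}
```

Running sudo ufw status | check_acme_ports && sudo systemctl restart caddy means Caddy only restarts when the ACME ports are actually reachable.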

Once Caddy is proxying to 127.0.0.1:8090, I tighten the Docker port mapping so Beszel only listens on localhost:

ports:
  - 127.0.0.1:8090:8090

After updating the port mapping, I redeploy with sudo docker compose up -d.

From that point on, the dashboard is only reachable through HTTPS on my hostname.

Set APP_URL

Beszel recommends setting APP_URL when you’re behind a reverse proxy so notification links and agent configs use the correct URL.

Set it to the same hostname you specified in Caddy’s config. In the compose file, under the hub service, add:

environment:
  APP_URL: https://beszel.example.com

Then redeploy with sudo docker compose up -d.

Add More Servers (Systems)

Beszel calls each monitored server a “System.”

To add one, I open the dashboard, click Add System, choose the Docker option, and name it after the server’s hostname so everything stays consistent.

Beszel asks for the server’s IP and the port; I point it to the remote server I’m onboarding and leave the port at the default.

Beszel then shows a Docker compose snippet tailored for that system. I copy it to the target server, keep the same directory structure, and run sudo docker compose up -d. Because all of my servers already run Docker, I stick with the Docker agent (there’s a binary option if you prefer).
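For reference, the generated snippet looks roughly like this – treat it as a sketch, since the key, token, and hub URL come from your own dashboard and the exact output may differ by version:

```yaml
services:
  beszel-agent:
    image: henrygd/beszel-agent:latest
    container_name: beszel-agent
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./beszel_agent_data:/var/lib/beszel-agent
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      LISTEN: "45876"
      HUB_URL: https://beszel.example.com
      TOKEN: <token>
      KEY: "<key>"
```

Unlike the local agent, there’s no socket mount here – the remote agent listens on port 45876 and talks to the hub over the network instead.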

Within seconds the new agent connects and the hub starts collecting metrics – it really is that simple: click, copy, paste, deploy, done.

How Agents Stay Connected

Here’s what actually happens after you add a new system: the agent maintains two network paths, falls back automatically if the main one fails, and locks the connection down with mutual authentication.

The next few sections break that down so you know what’s happening behind the scenes.

Communication Paths (WebSocket and SSH)

Every agent maintains two ways to talk to the hub: a WebSocket that dials out to whatever you set in HUB_URL (typically your dashboard hostname) and an SSH-like tunnel on port 45876.

The WebSocket is the default; if the hub URL isn’t reachable – say, Caddy is reloading – the agent automatically spins up the SSH listener and the hub dials in to keep metrics flowing.

Port 45876 and Firewall Rules

Because the agent runs in network_mode: host so it can read the real NIC counters, Docker can’t publish port 45876 the way it does for 8090.

On each server running an agent, I open the port explicitly in UFW but only from the hub’s IPs:

sudo ufw allow in proto tcp from <hub-ipv4> to any port 45876
sudo ufw allow in proto tcp from <hub-ipv6> to any port 45876

That way, when the agent falls back to SSH, the hub can still reach it without exposing the port to the world.

Logs Showing the Fallback in Action

To see the switchover, I tail the agent logs:

sudo docker logs --tail=200 beszel-agent 2>&1 | grep -Ei "connect|websocket|ssh"

Example output:

2025/11/06 18:23:00 INFO WebSocket connected host=beszel.example.com
2025/11/07 09:26:17 WARN Connection closed err=EOF
2025/11/07 09:26:17 WARN Disconnected from hub
2025/11/07 09:26:17 WARN WebSocket connection failed err="unexpected status code: 502"
2025/11/07 09:26:17 INFO Starting SSH server addr=:45876 network=tcp
2025/11/07 09:26:50 INFO SSH connected addr=<hub-ipv4>:56882
2025/11/07 09:26:50 INFO SSH connection established

When you see WebSocket connected, the agent is in dial-out mode.

If the WebSocket drops, the agent starts the SSH server on :45876, the hub reconnects through that path, and metrics keep flowing even if the dashboard itself is unreachable.

When the WebSocket comes back, the agent switches automatically.

Security Notes (From Beszel’s Docs)

The two communication paths are locked down even though they’re automatic:

  • When the hub starts, it generates an ED25519 keypair. The SSH fallback only accepts that key, offers no shell, and doesn’t execute commands. Even if the key leaked, attackers couldn’t run arbitrary commands on the agent.
  • Every WebSocket session starts with mutual auth: the agent sends its registration token, the hub signs a challenge, and the agent verifies it against the hub’s public key. Then the agent sends a fingerprint tied to the monitored server, and the hub makes sure it matches the original system record before letting metrics flow.

Together those safeguards keep the link private – only the original hub and the original agent can talk to each other.

Health Checks

Beszel ships with health commands for both the hub and agent.

Docker can run those commands on a schedule (healthchecks) to ask “are you OK?” and mark the container healthy or unhealthy.

  • The hub command is /beszel health --url http://localhost:8090, which does an HTTP GET to /api/health from inside the container and only returns success if it sees a 200 OK.
  • The agent command is /agent health, which verifies the agent process is running (but doesn’t guarantee the WebSocket or SSH path is live).

Healthchecks aren’t free – Docker spawns the check command once per interval, which adds a bit of CPU overhead – so I balance Beszel’s “60s or more” guidance with my own experience and stick to the 60–120s range.

Docker doesn’t restart containers automatically on failure; it just updates their health status.

I use that status for two things: depends_on waits for the hub to start returning /api/health before the agent launches, and docker ps instantly shows me which container is misbehaving without digging through logs.

Before adding any healthchecks, the Hub system page in the web dashboard (or whatever you named your local agent) shows “Health: None” for both containers running on the hub server (beszel and beszel-agent). Scroll to the bottom and you’ll see the empty indicators.

On the hub server, I edit docker-compose.yml and add these blocks:

  beszel:
    # ...existing config...
    healthcheck:
      test: ["CMD", "/beszel", "health", "--url", "http://localhost:8090"]
      interval: 120s
      timeout: 5s
      retries: 3
      start_period: 5s

  beszel-agent:
    # ...existing config...
    healthcheck:
      test: ["CMD", "/agent", "health"]
      interval: 120s
      timeout: 5s
      retries: 3
      start_period: 15s
    depends_on:
      beszel:
        condition: service_healthy

The http://localhost:8090 URL works because the check runs inside the hub container – it never passes through Caddy. The depends_on block makes sure the local agent doesn’t start dialing until the hub is actually serving /api/health.

Beszel’s docs provide baseline healthchecks, but I tweak them slightly: start_period is 5 seconds for the hub and 15 seconds for the agent, and I add explicit timeout/retries so the status is meaningful. That depends_on block only belongs on the hub server – remote agents shouldn’t wait on anything.

After saving the file, I redeploy with sudo docker compose up -d.

From now on docker ps shows beszel ... (healthy) and beszel-agent ... (healthy), and if the hub ever stops answering 200 OK, I see unhealthy immediately.

Back in the dashboard, those “Health: None” labels are replaced with green “Healthy” badges for both containers.

If you want to add a healthcheck to a remote agent (on the server you’re monitoring), just add the agent block (without depends_on) to that system’s compose file:

healthcheck:
  test: ["CMD", "/agent", "health"]
  interval: 120s
  timeout: 5s
  retries: 3
  start_period: 15s

That keeps Docker honest on each monitored server: the agent container reports healthy only if the process is running.

SMTP Configuration

If you want email alerts (server down/up, login notifications), Beszel needs an SMTP server.

In the Settings → Notifications tab you’ll see “Please configure an SMTP server.” Click it and you’re taken to the PocketBase dashboard where you can enable “Use SMTP mail server,” enter your SMTP details, and send a test email.

I use Proton Mail’s SMTP server (part of the Unlimited plan). It’s privacy-focused and reliable. If you want a free alternative, SMTP2GO is easy to set up and works well.

I’ve used Proton products for years – if you try Proton Mail for free and later upgrade to Unlimited, it’s worth it.

After the test email succeeds, go back to the main Beszel dashboard. Each system added has a bell icon at the right – click it to open Alert settings.

I enable the Status alert (uptime monitoring) and set it to 1 minute. Hetzner servers reboot in seconds, so I don’t get spammed during kernel reboots, but I know right away if a server goes offline.

Once SMTP is configured, Beszel also emails you when someone logs in from a new location.

PocketBase Passwords

After the initial setup (when Beszel prompts you for your first password), I immediately change the PocketBase superuser password from the hub server. This command is also your go-to if you ever lose dashboard access:

sudo docker exec beszel /beszel superuser update name@example.com newpassword

That command logs everyone out of the PocketBase admin UI.

It’s good practice to use a different password for PocketBase and Beszel, so after running the CLI command I also log into the PocketBase Users view and update the Beszel dashboard user there. Follow the same steps later whenever you want to rotate the Beszel password.

For now those are two separate updates; it would be nicer if one command rotated both, especially when you’re cleaning up after an incident.

MFA OTP

Beszel also supports email-based MFA. In the hub’s compose file add:

environment:
  - MFA_OTP=true

Redeploy with sudo docker compose up -d.

The next time you log out and back in, Beszel prompts for the one-time code it emails you.

This only works if SMTP is rock solid, so use a reliable provider.

Container Metrics

When I first tried Beszel, it couldn’t track Docker containers. Now it does – and even captures logs.

Open any system that runs Docker and scroll down: you’ll see each container listed with CPU/memory usage, network I/O, image, status, uptime, and a health indicator.

Above the list, Beszel plots the top CPU and memory consumers plus a Docker network I/O chart, which makes it obvious which container is spiking.

Clicking a container shows its logs (with a refresh button) and configuration details.

I also skim the beszel-agent container logs directly from the UI so I can tell if the agent fell back to SSH overnight without SSHing into the box.

CPU Metrics

The CPU charts are my favorite quick-check.

If I see a spike, I click the three dots in the CPU widget and open the CPU Time Breakdown chart.

The first thing I check is I/O wait – it’s often the culprit when disks are slow or a query is misbehaving.

Beszel also plots average utilization across cores and a per-core chart, which is great for single-threaded apps like WordPress because you can see which core is pegged.

Disaster Recovery

The final piece of the stack is disaster recovery.

Think about what happens if you misconfigure something, install a bad update, or your provider has a fire – how do you bring Beszel back online quickly?

Hetzner’s primary IP protection keeps your IPv4/IPv6 addresses even if you delete the server, but you still need a fresh server image.

Snapshots are the easiest way: take one in the Snapshots tab, and if disaster strikes you can spin up a new server from it, reassign the primary IPs, and you’re back without touching DNS or Caddy.

Snapshots are manual. If you want automated restore points, Hetzner’s backup add-on (20% of the server cost) keeps daily backups for a week.

You can convert any backup into a snapshot before deleting a server. Also enable deletion protection so you don’t nuke anything by accident.

My Recovery Checklist

Here’s the exact sequence I follow when something goes sideways:

  1. Enable Hetzner backups and turn on deletion protection for both the server and its primary IPs.
  2. When something goes wrong, convert the latest backup (and one or two earlier ones) into snapshots so I have multiple restore points.
  3. Once the conversions finish, verify the snapshots exist, then disable deletion protection on the server only (leave IP protection enabled).
  4. Delete the broken server, restore from the chosen snapshot, and assign the same primary IPv4/IPv6 addresses.
  5. Bring Beszel back up, confirm Caddy works over HTTPS, agents reconnect, and metrics start flowing.
  6. After everything is stable, remove extra snapshots to keep storage costs down.
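The checklist above maps fairly cleanly onto the hcloud CLI. Here’s a dry-run sketch of steps 2–4: the shim only echoes each call, the server/type/image/primary-IP names are placeholders, and the flags assume a current hcloud release – verify them against your installed version before running anything for real.

```shell
# Dry-run sketch of the recovery flow; delete the shim line to run for real.
hcloud() { echo "would run: hcloud $*"; }

# 2. Turn the latest backup into a snapshot (find the backup image ID with `hcloud image list`).
hcloud image update 123456 --type snapshot --description "beszel-hub restore point"

# 3. Drop delete protection on the broken server only - primary-IP protection stays on.
hcloud server disable-protection beszel-hub delete

# 4. Delete it, recreate from the snapshot, and reattach the same primary IPs.
hcloud server delete beszel-hub
hcloud server create --name beszel-hub --type cx22 --image 123456 \
  --primary-ipv4 beszel-hub-v4 --primary-ipv6 beszel-hub-v6
```

Because the primary IPs come back with the new server, DNS, Caddy, and every remote agent’s firewall rules keep working untouched.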

I’ve tested the process: I created a snapshot, deleted the hub server, restored to a new box with the same primary IPs, and Beszel came back without any issues.

Snapshots work flawlessly for this setup.

Conclusion and Final Thoughts

You’ve seen how I run Beszel end to end: prep the server, deploy the hub + agent, lock it down behind Caddy, onboard new servers (systems), keep the hub healthy with healthchecks, wire up email/MFA, and plan for disaster recovery.

The payoff is huge – I can glance at the dashboard, see which containers or cores are spiking, know when a system goes down, and have confidence I can rebuild it quickly if something goes wrong.

If you follow the same steps, you’ll have a monitoring stack that’s fast, reliable, and easy to manage.

Thanks for following along; let me know how your own setup goes.


If you run into any issues or need further help, feel free to revisit this guide or reach out for assistance.

If you want updates when I publish new walkthroughs, subscribe to my newsletter. The signup form lives right beneath the “Read next” section, so drop your email there while it’s fresh.

And if you found this useful – or have thoughts and feedback – drop them in the discussion section; I always enjoy seeing how others build on this.