Self-Host Private and Lightweight Analytics with Umami
Learn how to self-host Umami, a lightweight and privacy-focused analytics tool that gives you full control over your website data.
I've always looked for a web analytics solution that respects user privacy and doesn’t slow down my website. The default option for most people is Google Analytics – it’s popular, but in my opinion, it’s bloated and far from privacy-friendly.
Many analytics platforms aren’t GDPR-compliant and often require cookie consent banners, which can hurt the user experience. On top of that, relying on third-party services can negatively affect your site’s performance – just like Google Analytics does.
So why do we keep using third-party tools to collect visitor data when we can host our own analytics and truly own our data?
Sure, some platforms offer deep insights, but unless your business relies heavily on those insights, all that detail isn’t always necessary.
In this guide, I’ll show you how to self-host Umami – an open-source analytics tool that’s fast, privacy-friendly, and easy to host.
Why Choose Umami?
After using Google Analytics for a couple of months to track my website’s traffic, I just stopped. I said to myself, nah, there has to be a better solution.
It made my website feel heavy and sluggish. I had to add a cookie banner just to stay compliant, and honestly, that hurt the user experience. People would land on my site, see the banner, and bounce – probably never to return.
I’ve always wanted to build a site that feels fast, clean, and respectful. So why should I ruin the experience just to collect some traffic stats?
Eventually, I came across Pirsch.io – a solid, privacy-friendly tool that doesn’t use cookies and doesn’t require consent banners. I used it for a while and liked it.
But if you know me… I’m the self-host guy. I wanted full control – my server, my data, my rules.
That’s when I found Umami. It anonymizes visitor data to protect privacy, doesn’t use cookies (so no annoying cookie banners), and since it’s self-hosted on your own infrastructure, your data stays entirely in your hands.
The best part? Its tracking script is only 2KB in size – practically nothing! Google Analytics used to slow down my site’s page load time by around 500ms.
On top of that, it has a fantastic UI – easy to navigate, simple to understand, and quick to find exactly what you need. It also comes packed with features like reporting, comparison tools, filtering, custom events, team collaboration, and much more.

Requirements Before You Start
To follow this guide, you’ll need a server running Ubuntu 24.04 LTS, prepared for installing Umami. I personally recommend using Hetzner – it's reliable and affordable.
Make sure you’ve set up DNS records for the domain or subdomain you plan to use for accessing Umami’s web interface:
- Add an A record pointing to your server’s IPv4 address.
- If you’re using IPv6, also add an AAAA record.
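Once the records are in place, you can verify they’ve propagated before continuing. A quick check with dig (a standard DNS tool, using the example domain from this guide):
dig +short umami.yourdomain.com A
dig +short umami.yourdomain.com AAAA
Each command should print the IP address you configured.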
I always recommend running self-hosted projects on the main hostname (like umami.yourdomain.com) if that project is the only thing hosted on the server – which is the case here. I don’t recommend running Umami alongside other services on the same server. Instead, always opt for one server per project when possible. It keeps things clean, reduces conflicts, and makes troubleshooting easier.
The only major requirement for installing Umami is having Docker and Docker Compose installed on your server.
You can install both with a single command:
sudo apt install docker.io docker-compose
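You can confirm both tools are installed and that the Docker daemon is running – these are standard checks, nothing Umami-specific:
docker --version
docker-compose --version
sudo systemctl enable --now docker
The last command makes sure Docker starts automatically on boot (on Ubuntu it usually already does).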
Umami doesn’t come with an automatic installation script – instead, we’ll create a docker-compose.yml file and define the necessary configuration ourselves.
If you clone Umami’s repository from GitHub, you’ll find an example docker-compose.yml file that installs Umami with a PostgreSQL database using Docker. However, I prefer using MySQL, which is what we’ll be using in this guide.
Installing Umami
Installing Umami is a straightforward process.
First, navigate to your home directory and create a new directory for Umami:
mkdir ~/umami
cd ~/umami
Next, create a docker-compose.yml file inside that directory and add the following configuration:
version: '3'
services:
  umami:
    image: docker.umami.is/umami-software/umami:mysql-latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: mysql://${MYSQL_USER}:${MYSQL_PASSWORD}@umami-db:3306/${MYSQL_DATABASE}
      DATABASE_TYPE: mysql
      APP_SECRET: ${APP_SECRET}
    depends_on:
      umami-db:
        condition: service_healthy
    init: true
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 5
  umami-db:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    volumes:
      - ./umami-db-data:/var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h localhost -u${MYSQL_USER} -p${MYSQL_PASSWORD} --silent"]
      interval: 5s
      timeout: 5s
      retries: 5
This docker-compose.yml file defines two services:
- umami: the actual Umami app.
- umami-db: the MySQL database that Umami will use to store analytics data.
The services are connected internally by Docker, and depends_on ensures the database is healthy before Umami starts. All sensitive values like database credentials and secret keys are pulled from environment variables using the ${VARIABLE_NAME} format.
You need to create a .env file in the same directory as your docker-compose.yml with the following variables:
MYSQL_DATABASE=umami
MYSQL_USER=umami
MYSQL_PASSWORD=userpassword
MYSQL_ROOT_PASSWORD=rootpassword
APP_SECRET=yourgeneratedsecretkey
To generate a secure APP_SECRET value, you can run:
openssl rand -base64 32
Copy the output and paste it as the value for APP_SECRET in your .env file.
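Alternatively, you can generate and write the secret in one step, and tighten the file’s permissions so your credentials aren’t readable by other users – a small convenience, not an official Umami step:
echo "APP_SECRET=$(openssl rand -base64 32)" >> .env
chmod 600 .env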
Tip: avoid special characters like !, @, #, or & in your MySQL passwords, as they can sometimes cause issues when parsed inside Docker or MySQL.
Once your docker-compose.yml and .env files are ready, you can start Umami by running the following command inside the same directory:
sudo docker-compose up -d
This will pull the necessary images, start the containers in the background, and get Umami up and running on port 3000.
You can now check if everything is running properly with:
sudo docker ps
This will list all running containers. You should see both the umami and umami-db services listed and marked as Up.
Note that the container names may look different from the service names you used in the docker-compose.yml. That’s because Docker Compose generates container names using this pattern:
<project-name>_<service-name>_<index>
Here’s a quick breakdown:

| Part | Value |
|---|---|
| project-name | umami (folder name) |
| service-name | umami, umami-db |
| index | 1 (first container) |
So if your project folder is named umami, your container names will likely be:
umami_umami_1
umami_umami-db_1
(If you’re running the newer Docker Compose v2, the separator is a hyphen instead: umami-umami-1 and umami-umami-db-1.)
If something isn’t working or you want to check what’s going on behind the scenes, use:
sudo docker logs <container_name_or_id>
You can use either the container's name or its ID to check logs for errors or issues.
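If you prefer working with service names instead of generated container names, Docker Compose has equivalent commands you can run from inside the project directory:
sudo docker-compose ps
sudo docker-compose logs -f umami
The -f flag follows the log output live, which is handy while debugging startup issues.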
You can now access Umami’s web interface by visiting:
http://your_server_ip:3000
Log in using the default credentials:
- Username: admin
- Password: umami
Make sure to change this default password right after your first login.
Reverse Proxy Setup
We definitely don’t want to access our web interface using a non-secure connection and our server’s IP address. Instead, we want to use our server’s hostname over a secure HTTPS connection.
To achieve that, we’ll set up a reverse proxy.
You can use any reverse proxy you're comfortable with (like NGINX), but in this guide, we’ll use Caddy. It's a lightweight, modern web server that's easy to configure – and best of all, it handles SSL certificates automatically using Let's Encrypt.
Install Caddy with:
sudo apt install caddy
If the caddy package isn’t available in your distribution’s repositories, follow the official installation instructions on Caddy’s website instead.
Caddy’s config is refreshingly simple. Start by opening the config file:
sudo vim /etc/caddy/Caddyfile
Clear out the default contents and paste in the following config (replace umami.yourdomain.com with your actual domain):
umami.yourdomain.com {
    reverse_proxy localhost:3000
}
That’s literally it. Save the file and restart Caddy:
sudo systemctl restart caddy
Caddy will automatically fetch an SSL certificate for your domain and start proxying traffic to your Umami container running on port 3000.
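Before relying on the new config, you can sanity-check it with Caddy’s built-in validator:
caddy validate --config /etc/caddy/Caddyfile
Optionally, you can also enable response compression with the encode directive – a small optional tweak, not required for Umami to work:
umami.yourdomain.com {
    encode gzip zstd
    reverse_proxy localhost:3000
}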
Restrict Direct Access to Port 3000
By default, Docker exposes Umami on port 3000, which means it can be accessed directly via your server’s IP (http://your_server_ip:3000).
To enhance security and ensure all traffic goes through your reverse proxy (Caddy), you should restrict Umami to listen only on localhost.
In your docker-compose.yml, change the port binding from:
ports:
  - "3000:3000"
To:
ports:
  - "127.0.0.1:3000:3000"
This ensures Umami is only accessible from inside the server (by Caddy), and blocks external access on port 3000.
Then, from inside your Umami project directory, restart your Docker containers:
sudo docker-compose down
sudo docker-compose up -d
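You can confirm the new binding took effect – the socket should now show 127.0.0.1:3000 instead of 0.0.0.0:3000 (ss is a standard Linux networking tool, nothing Umami-specific):
sudo ss -tlnp | grep 3000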
Umami is now securely hidden behind your reverse proxy.
Installing Updates
To upgrade to the latest version, simply pull the updated image and restart your containers.
Navigate to your Umami project directory and pull the latest image for your database type (MySQL in our case):
sudo docker pull docker.umami.is/umami-software/umami:mysql-latest
Recreate your containers using the updated image:
sudo docker-compose down
sudo docker-compose up -d
Umami will apply any necessary database migrations automatically on startup.
If you're using PostgreSQL instead, use:
sudo docker pull docker.umami.is/umami-software/umami:postgresql-latest
Your data will remain safe, and the update process typically takes just a few seconds.
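If you update often, you can wrap these steps in a small helper script – a convenience sketch that assumes the MySQL image and the ~/umami project directory used in this guide:
#!/usr/bin/env bash
# update-umami.sh – pull the latest Umami image and recreate the containers
set -euo pipefail
cd ~/umami
sudo docker pull docker.umami.is/umami-software/umami:mysql-latest
sudo docker-compose down
sudo docker-compose up -d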
Disaster Recovery
The final step in setting up your self-hosted analytics platform is planning for disaster recovery.
Think about what you’d do if something goes wrong – like a server breach, a misconfiguration, a bad update, or even something out of your control, like a fire in your provider’s data center. You need a way to quickly bring Umami back online without losing your analytics data.
We already have our primary IPs secured (assuming you have enabled protection for them) – they stay with us no matter what, which is great. Now, we need a reliable way to restore the server and reassign those same IPs. The best way to do this is by using snapshots.
If you're using Hetzner, you can go to your server’s Snapshots tab and create one. A snapshot is a full backup of your server’s disk at that point in time.
If disaster strikes, you can spin up a new server from that snapshot, assign the primary IPs, and it should work right away – no need to change any DNS settings or reconfigure Caddy.
Make sure to test this recovery process at least once so you’re confident it works.
Snapshots are created manually. If you want automatic backups, you can enable Hetzner's backup feature. It costs 20% extra, but it keeps daily backups for a week. Once the week is over, old backups are replaced by new ones. You can convert any of these backups into a snapshot, which you can then use to restore your server.
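Snapshots capture the whole disk, but it also doesn’t hurt to keep a plain SQL dump of the analytics database – it’s small, portable, and easy to restore anywhere. A minimal sketch, assuming the container name umami_umami-db_1 and the environment variables defined earlier in this guide:
# dump the Umami database to a timestamped file on the host
sudo docker exec umami_umami-db_1 \
  sh -c 'mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
  > umami-backup-$(date +%F).sql
Store these dumps somewhere off the server (your local machine, object storage) so they survive a total server loss.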
Conclusion and Final Thoughts
With your Umami instance now fully deployed and secured behind a reverse proxy, you have complete control over your analytics.
From here, you can now start adding websites, creating new teams, setting up custom events, and exploring your analytics dashboard – all from your own infrastructure, fully in your control.
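For reference, when you add a website in the dashboard, Umami hands you a small tracking snippet to place in your site’s <head>. It looks roughly like this – the exact attributes can vary between versions, and the website ID below is a placeholder:
<script defer src="https://umami.yourdomain.com/script.js" data-website-id="your-website-id"></script>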
You’ve built a powerful, privacy-friendly analytics platform – no third-party trackers, no compromises.
And just as important – test your disaster recovery plan regularly to make sure everything still works as expected. It’s better to catch issues during a test than during a real outage.
If you run into any issues or need further help, feel free to revisit this guide or reach out for assistance.
If you found value in this guide or have any questions or feedback, please don't hesitate to share your thoughts in the discussion section.