Comprehensive Linux Server Hardening Guide

Hardening a Linux server right after you set it up is crucial.

You’ll find the best practices and practical tips to ensure your server is protected from potential threats.

In this comprehensive Linux server hardening guide, I leverage my extensive experience with Linux servers to cover every essential aspect of server security.

Preparation

To make the most of this guide, ensure you have a server running Ubuntu.

If you don’t have one, consider getting a free VPS server to follow along.

Following along on your own server will enhance your understanding and practical experience.

Please note that while I personally use Ubuntu for all my guides, the principles discussed here are applicable to all Linux servers. Keep in mind that specific command syntax may vary across different Linux distributions.

Mindset Shift

Now, for the first tip, it’s not about a specific system adjustment or change, but about how you think about server security.

Whether you’re a server administrator or a security professional, understanding the right mindset for server security is crucial.

Let’s be clear about what’s possible and what’s not in server security.

Technically, an unhackable server is possible if it’s never turned on and stays disconnected.

However, this isn’t practical for servers that need to be publicly accessible, like those hosting WordPress websites or online stores.

And guess what? There are various vulnerabilities, including new and unknown ones, emerging daily.

This guide can enhance your server security, but it requires a mindset shift.

Don’t fall into the trap of believing that your servers are invincible or that hiring a top-notch security expert eliminates all risks.

True security means being prepared for anything at any time.

Choosing a Secure Server Provider

Choosing the right server provider is crucial, and security should be a top consideration.

Think about it – if you pick a server provider that doesn’t protect its infrastructure from DDoS attacks and an attack happens, you’re left with few options.

Even though you can manage your server, you can’t control the foundational hardware.

One option is Hetzner, where I host all my servers. I’ve never encountered any security issues with this provider.

They put a lot of effort into securing their infrastructure, and, being based in Germany, their reputation for quality engineering is well earned.

Essential Security Practices

In the following, I will cover essential security practices that should be implemented first on every newly deployed server.

These practices include:

  • using strong passwords,
  • replacing the root user with a non-root user that has root privileges,
  • using SSH key authentication instead of password authentication,
  • hardening SSH,
  • installing and configuring Fail2ban to block unauthorized access attempts,
  • and finally, using the config file for easy access to servers.

Strong Passwords

This is a well-known practice, and there isn’t much to add.

Avoid weak passwords and use a password manager for generating and securely storing complex passwords. I personally use Bitwarden.

It’s crucial to refrain from using the same password repeatedly. Always opt for unique, strong passwords that incorporate symbols, not just letters and numbers.
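If you prefer generating a password on the command line instead of in a password manager, one quick option is the openssl tool, which is installed on most distributions (this is just one way to do it, not the only one):

```shell
# Generate a random 24-byte password, base64-encoded (32 characters)
openssl rand -base64 24
```

Store the result in your password manager afterwards rather than in a plain text file.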

Now, it’s time to learn how to change a user’s password on a Linux server using a simple command.

After accessing your server for the first time, you might want to change the password for the root user. You can do this with the following command:

passwd root

The passwd command is used to change the password for any user on your server. Simply replace root with the user for whom you want to change the password.

Info: Knowing how to manage users effectively also plays a role in server security.

Adding a Non-Root User

When accessing your server for the first time, you are probably using the root user, which is not recommended.

The root user has total control over the entire server.

It is very easy to make mistakes when running commands using root, as you can accidentally break your server.

It’s safer to use a non-root user, requiring the sudo prefix for administrative commands and a password prompt.

Another advantage of using a non-root user is protection against brute force attacks. Since the root user exists by default on every Linux server, attackers often target it when trying to guess your password.

Overall, using a non-root user with a strong password is definitely more secure than relying on the root user, even with a strong password.

Note: Hackers can still try to brute force your server because adding a new user doesn’t eliminate the root user. Preventing access with the root user is necessary to stop brute force attacks, which I’ll cover in the SSH Hardening section.

We will start by adding a new user using the following command:

adduser username

The server will ask for a password and some optional details. If you’re in a hurry, just hit ENTER to skip.

Note: Try to use a randomly generated string for the username instead of well-known usernames like admin, administrator, etc.
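As a quick sketch of that idea — the `u` prefix and 8-character format here are just an illustration, not a convention — you can generate a random username like this:

```shell
# Build a random 8-character username: a letter followed by
# 7 random lowercase letters/digits (format is an example)
username="u$(tr -dc 'a-z0-9' </dev/urandom | head -c 7)"
echo "$username"
```

You could then pass it to adduser, e.g. adduser "$username", and record it in your password manager along with the password.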

Now, we need to grant this user root privileges. This can be achieved in two ways.

The first method is to add the user to the sudo group.

The second method is to manually add the user to the /etc/sudoers file.

The first method is simple, as it only requires a single command:

usermod -aG sudo username

That’s it.

Now you can run commands that require root privileges without the need for the root user.

Use the exit command or the logout command to exit the current session and reconnect to your server with the new user.

Try running the apt update command. It should not work directly; you need to run it as sudo apt update and then enter your password.
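Once reconnected as the new user, you can also confirm the group membership directly; a small sketch:

```shell
# List the groups of the currently logged-in user;
# "sudo" should appear in the output after the usermod command
id -nG "$(id -un)"
```

Running sudo -l additionally lists the exact sudo rules that apply to you.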

Now that you know how to add a new user to your server with root privileges, I still want to show you the second method.

The sudoers file is used to grant privileges to users. This allows you to control who can perform specific actions.

When you attempt to run a command that requires root privileges using the sudo prefix, the server checks your username against the sudoers file.

If you have the privilege to run the command, you are good to go, otherwise, you will receive an error.

There are two ways to open the sudoers file, either by opening it normally like any other file with your preferred editor or by using the visudo command.

Note: If you are logged in as the non-root user, remember to add sudo before your commands. For now, I assume you are still using the root user.

Using the visudo command is recommended because it scans the file for any syntax errors after saving.

If there are any errors, it won’t save the file. Instead, it will prompt you to edit the file again, exit without saving, or force a save, though forcing a save is not advised.

The command will open the sudoers file with the default editor used by the server, which is often nano.

If you want to use another editor, you can change the default editor by using the following command:

update-alternatives --config editor

The command will prompt you to choose between editors. Select your preferred one and press ENTER.

Now, run the visudo command and take a look at the sudoers file:

/etc/sudoers
...
# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL
...

As you can see, the sudo group has been granted full privileges, the same as those given to the root user.

That’s why, when adding any user to this group, the user is able to run any command that requires root privileges.

Let me explain what all these ALLs mean:

  • The first ALL indicates that this rule applies to all hosts.
  • The second ALL indicates that the user can run commands as all users.
  • The third ALL indicates that the user can run commands as all groups.
  • The last ALL means that these privileges apply to any command.

We can manually add any user to this file with a specific rule and set of privileges.

Add your previously created user under the rule for the root user like this:

/etc/sudoers
...
username    ALL=(ALL:ALL) ALL
...

In simpler terms, the added line now gives the user the ability to run any command as any user and any group on any host.

Save the file, and if you don’t encounter any syntax warnings, you’re good to go.

There are many options to explore when editing this file, such as restricting the number of commands a user can run or enabling the execution of commands without entering a password each time.
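As a sketch of those options — the usernames, commands, and paths below are hypothetical examples, not recommendations for your setup:

```text
# Allow user "deploy" to restart nginx without entering a password
deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx

# Restrict user "backupuser" to a single command, run as root only
backupuser ALL=(root) /usr/local/bin/backup.sh
```

Edit such rules with visudo, as before, so that any syntax errors are caught before saving.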

Read more: For an in-depth exploration of managing users and working with sudo, check out my dedicated blog post.

Using SSH Keys

When it comes to accessing your server, you probably use your password with Secure Shell (SSH).

However, is this the most secure way to access your server? Not necessarily.

A stronger alternative is SSH key authentication, which uses a key pair – a public key and a private key – for enhanced security.

This method removes the need for password authentication.

The public key is stored on the server, and the private key is kept by you, the server administrator.

Authentication works as a challenge-response: the server issues a challenge that only the holder of the private key can answer, the client proves possession of the private key, and the server verifies that proof using the public key before granting access.

This ensures that only those with the private key can access the server, significantly improving security over password-only authentication.

Therefore, it’s crucial to securely store the private key, while the public key can be freely shared.

When relying solely on password authentication, anyone who knows the password can access the server.

Using SSH keys, on the other hand, restricts access to only those with the corresponding private key.

Note: It is crucial to secure your laptop as well. The private key is only as secure as the machine it is stored on.

Now, let’s generate our key pair and transfer the public key to our server.

First, we want to ensure that we don’t already have a key pair, because running this command can overwrite it.

Open your terminal and run the following command (without accessing the server) to list the contents of the ~/.ssh directory:

ls -l ~/.ssh

If you get a total 0 output, it means that the directory is empty, indicating that you don’t have any SSH keys, which means you are good to go.

If you see the known_hosts and known_hosts.old files listed in the output, you are also good to go as these files contain a list of known hosts (servers) you have previously accessed.

If you get an error indicating that the directory doesn’t exist, don’t worry, as creating the key pair will automatically create the .ssh directory.

However, if you already have any existing key pairs in that directory, ensure to back them up before creating a new key pair.

Now, run the following command:

ssh-keygen -t rsa -b 4096

We are using the ssh-keygen command to create a new key pair.

The -b option allows us to specify the bit size of our key, which is 4096 in our case, providing a stronger key than the default size.

Output
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/ivansalloum/.ssh/id_rsa):

After running the command, you will be asked where to save the key.

It defaults to the .ssh directory inside your home directory. If you don’t specify a path and file name, the command will use the default path and file name.

I mentioned earlier that running this command can overwrite an already generated key pair. This is because the command uses a default location and file name for the key.

If you stick with the default option every time, the new key pair will overwrite the existing one, since it has the same name and is stored in the same location.

When you want to create multiple key pairs for different use cases, it’s always recommended to change their names.

You can change the default name of the key by entering the same path and a name after the colon like this:

Output
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/ivansalloum/.ssh/id_rsa): /Users/ivansalloum/.ssh/id_rsa_test

This way, I’m creating a new key pair called id_rsa_test that won’t be overwritten when generating a new key pair.

Note: If you have Linux installed on your laptop and are creating a key pair, the path for the key would probably look like this: /home/user/.ssh/id_rsa

I will stick with the defaults for now, assuming you haven’t generated any key pairs before.

Press ENTER to stick with the defaults, as I did, and then you will be prompted to enter a passphrase.

Output
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/ivansalloum/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):

A passphrase adds an extra layer of security and functions similarly to a password. You have to enter it each time you want to connect to your server, meaning you need both your private key and the passphrase.

This way, if someone gains access to your private key, they would also need the passphrase to access the server.

You can skip adding a passphrase by pressing ENTER twice. After skipping or adding a passphrase, your key pair will be generated.

Note: Make sure to store the passphrase in a secure location, such as a password manager.

If we look at the contents of the ~/.ssh directory, we will find the private and public keys we just generated:

Output
total 96
...
id_rsa id_rsa.pub

The id_rsa file is the private key, which shouldn’t be shown to anyone, and the id_rsa.pub file is the public key, which we should copy to our server.

Before we proceed to copy our public key to the server, let me clarify something.

If you place the public key in the .ssh directory within the home directory of the root user, you can only use key authentication when accessing the server with the root user.

On the other hand, if you copy the public key to the .ssh directory in the home directory of another user, that specific user is granted access to the server using key authentication, while the root user is not.

It’s essential to copy the public key only to the .ssh directory of the user you intend to use for accessing the server with key authentication.

Users without a public key can still utilize password authentication.

However, if you’ve disabled password authentication in the sshd_config file, users are required to use key authentication.

Users lacking a public key won’t be able to access the server in this scenario.

Note: Further details on this will be covered in the SSH Hardening section.

As we’ve previously created a non-root user for future use, there’s no need to copy the public key for the root user, as we don’t intend to access our server with the root user.

When copying the public key to our server, you should place it inside a file called authorized_keys within the .ssh directory in the home directory of the user.

You can do this manually, or you can use the ssh-copy-id command, which I prefer.

If you want to do this manually, copy the content of the id_rsa.pub file, access your server with the user for whom you want to enable key authentication, and paste the content of the public key inside the authorized_keys file.

If the .ssh directory doesn’t exist, create it. If the authorized_keys file doesn’t exist either, create it inside the .ssh directory, add the public key to it, then save and close.
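For the manual route, here is a sketch of the commands, run on the server as the target user. Note the permissions: OpenSSH checks them and may refuse to use keys stored with looser ones:

```shell
# Create the .ssh directory and authorized_keys file with strict
# permissions; sshd can reject keys in world-readable locations
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Now paste the contents of your local id_rsa.pub into
# ~/.ssh/authorized_keys and save the file.
```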

Now, try to connect to your server again with the same user, and you should be able to access the server without entering your password.

However, if you decided to use a passphrase, you should enter your passphrase to access your server.

Info: If, for some reason, you can’t access your server using key authentication and you are still prompted to enter your password, try this command:
ssh -i ~/.ssh/id_rsa user@ipaddress

If you don’t want to do it manually, use the following command to copy the public key:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@server_ip

Replace user with the username of the user to whom you want to copy the public key, and replace server_ip with the IP address of your server.

You will be prompted to enter the password for that user, enter it and press ENTER.

That’s it. You can now use key authentication instead of password authentication.

SSH Hardening

For now, we have a non-root user with root privileges that we can use instead of the root user.

Additionally, we have generated an SSH key pair, copied the public key for that user, and can successfully connect to our server using SSH key authentication.

What we should do now is prevent the root user from accessing the server over SSH, since we have a non-root user that we will use from now on. We should also disable password authentication, because we want to authenticate using SSH keys only.

Disabling password authentication will lock down SSH to our non-root user, since that account is the only one with an authorized public key on the server, making it the only one that can log in.

Preventing root from accessing the server ensures that any attempt to access the server with the root user will fail.

With all these, we can ensure that only our non-root user can access the server using key authentication, and all other users are unable to access the server.

This will help us stop brute force attacks.

To accomplish this, we should edit the /etc/ssh/sshd_config file and modify two variables.

Note: I assume from now on that you are connected to your server with the non-root user.

Open the file with your preferred editor and search for the #PasswordAuthentication and #PermitRootLogin variables.

Uncomment these two variables and change both of their values to no like this:

/etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no

Now, save and close the file.
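Before restarting SSH, it’s worth validating the file, as a typo in sshd_config can lock you out of the server. A quick check:

```shell
# Validate the SSH daemon configuration; no output means it is valid
sudo sshd -t
```

Also keep your current SSH session open and test a fresh connection from a second terminal before closing it.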

Info: In the UFW Firewall section, I will show you another crucial step to protect SSH, which involves restricting access to the SSH port to only specific IP addresses.

We should restart the SSH service for these changes to take effect. Run the following command to do so:

sudo systemctl restart ssh

If you try to access the server with the root user now, you will receive an error like this:

Output
root@serverip: Permission denied (publickey).

If you try to access the server with a user that doesn’t have a public key, you will get the same error.

Read more: For an in-depth exploration of SSH security and advanced practices, check out my dedicated blog post.

Installing and Configuring Fail2ban

Fail2Ban is a security tool that helps protect your server from unauthorized access attempts and brute force attacks by monitoring logs for suspicious activities and blocking the IP addresses of attackers.

If it detects repeated failed access attempts, Fail2Ban takes action to prevent further access from those specific IP addresses.

This will allow your server to harden itself against these access attempts without intervention from you.

Note: Disabling password authentication might reduce the need for Fail2ban. However, I still advocate for its use as an added security layer.

We will start by installing Fail2ban:

sudo apt install fail2ban

Fail2ban will disable its service upon installation because some of its default settings may cause unexpected behavior.

You can verify this by using the following command:

sudo systemctl status fail2ban.service
Output
○ fail2ban.service - Fail2Ban Service
     Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor pr>
     Active: inactive (dead)
       Docs: man:fail2ban(1)

Its configuration files are located in the /etc/fail2ban/ directory.

If you list the contents of this directory, you will find two important configuration files: fail2ban.conf and jail.conf.

The fail2ban.conf file contains Fail2Ban’s global settings, which I don’t recommend modifying. The jail.conf file contains jails – filters with actions.

We shouldn’t directly modify these files, as an update may override our changes.

That’s why Fail2Ban recommends creating two local copies of these configuration files for us to modify.

To create local copies of these two files, run the following commands from inside the /etc/fail2ban directory:

cd /etc/fail2ban
sudo cp jail.conf jail.local
sudo cp fail2ban.conf fail2ban.local

Now you can safely modify Fail2Ban’s configuration.

Open the jail.local file with your preferred editor and examine its settings.

Under the [DEFAULT] section, you will find settings that apply to all services protected by Fail2Ban.

Elsewhere in the file, there are sections like the [sshd] section, which contains service-specific settings that will override the defaults.

Under the [DEFAULT] section, there are some variables that you may want to change.

/etc/fail2ban/jail.local [DEFAULT]
...
bantime = 10m
...

The bantime variable sets the duration for which an IP will be blocked from accessing the server after failing to authenticate correctly.

By default, this is set to 10 minutes. You can change its value as you prefer, for instance, to 60m for one hour.

/etc/fail2ban/jail.local [DEFAULT]
...
findtime = 10m
maxretry = 5
...

These two variables function together to determine the conditions under which an IP should be blocked from accessing the server.

The maxretry variable defines the number of authentication attempts an IP is allowed to make within a time period defined by findtime before being blocked.

With the default settings, Fail2Ban will block an IP that unsuccessfully attempts to access the server more than 5 times within a 10-minute interval.
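For example, to ban offenders for an hour while keeping the default detection window, the [DEFAULT] section could look like this (the bantime value is illustrative, the rest are the defaults):

```text
[DEFAULT]
bantime  = 60m
findtime = 10m
maxretry = 5
```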

Now, it is time to examine the service-specific sections, also known as jails, such as the [sshd] section, which protects our server from failed access attempts.

Each of these sections needs to be individually enabled by adding an enabled = true line under the header, along with their other settings.

By default, only the [sshd] jail is enabled, and all others are disabled.

You can verify this by opening the /etc/fail2ban/jail.d/defaults-debian.conf file, which contains the line enabled = true for the [sshd] jail.

Scroll down the jail.local file until you find the [sshd] section, which should look similar to this:

/etc/fail2ban/jail.local [sshd]
...
port    = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
...

You can include variables defined in the [DEFAULT] section, such as the bantime, maxretry, and findtime variables, which will only apply to this jail.
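As a sketch, a stricter [sshd] jail with per-jail overrides might look like this — the tightened maxretry and bantime values are examples, not the defaults:

```text
[sshd]
enabled  = true
port     = ssh
logpath  = %(sshd_log)s
backend  = %(sshd_backend)s
maxretry = 3
bantime  = 1d
```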

For now, we will stick with the defaults and start the Fail2Ban service:

sudo systemctl start fail2ban
sudo systemctl enable fail2ban

Now, if someone attempts to access our server and fails to authenticate five times within a 10-minute interval, they will be automatically blocked from accessing the server by Fail2Ban.

You can use the fail2ban-client command to check the active jails:

sudo fail2ban-client status
Output
Status
|- Number of jail:	1
`- Jail list:	sshd

To view the status and information regarding a specific jail like the sshd jail, you can use the following command:

sudo fail2ban-client status sshd
Output
Status for the jail: sshd
|- Filter
|  |- Currently failed:	5
|  |- Total failed:	21
|  `- File list:	/var/log/auth.log
`- Actions
   |- Currently banned:	1
   |- Total banned:	2
   `- Banned IP list:	218.92.0.29

There are many more things you can do with Fail2Ban, but for now, protecting SSH is the most important.
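One command worth knowing right away: if you ever ban a legitimate address (perhaps your own), fail2ban-client can lift the ban. The IP below is just the example from the output above:

```shell
# Remove a banned IP from the sshd jail
sudo fail2ban-client set sshd unbanip 218.92.0.29
```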

The Config File

The config file is basically a list that holds information about the servers we manage and access using SSH.

It simplifies the process of connecting to our servers and organizes server information into a single file for easy access.

This is not a security-related tip or trick, but it could come in handy when troubleshooting issues and for quick access to a specific server when managing a lot of servers.

Open your terminal (without accessing the server) and create the file using the following command:

touch ~/.ssh/config

Open the file with your preferred editor and add the following:

~/.ssh/config
Host server1
        HostName 81.41.156.93
        User ivan
        Port 22
        IdentityFile ~/.ssh/id_rsa

This is an example of a server I named server1 with the IP address 81.41.156.93.

I want to use the user ivan to access my server using port 22 and the private key id_rsa.

Now, if I want to access my server, instead of using this command:

ssh -p 22 -i ~/.ssh/id_rsa ivan@81.41.156.93

I simply use this:

ssh server1

Do you see how easy it is to access a server now?

I added the -p 22 option just to show you how the command will look without the config file.

SSH will use port 22 by default, so there is no need to add this option when accessing the server normally.

Now, replace the information above with the details of your server and attempt to access the server as I did.

Leave the SSH port as it is if you didn’t change it, as SSH uses port 22 by default.

When you have multiple servers, add a new Host block for each server.

Patching

It is crucial to regularly update your servers. Many hacks happen because servers aren’t patched with the latest security updates.

Not taking patching seriously is like leaving your front door wide open and being shocked when someone walks in uninvited.

In the following, I will show you how to update manually, how to set up automatic security updates, and how to enable the Canonical Livepatch service to patch the Linux kernel without rebooting.

Manual Updates

First, let’s learn how to update manually. It’s just two simple commands.

Begin by updating the package list on your server with the following command:

sudo apt update

This command refreshes the package list, allowing the server to identify installed packages with available updates, including security patches.

Once this is done, run the following command to update your server’s packages that need updating:

sudo apt upgrade

The server may ask for confirmation with a y/n prompt. Type y and press ENTER to proceed.

The updating process may take a while, depending on the number of updates needed.

Automatic Security Updates

This step is crucial for keeping your servers secure. While updating manually is an option, it’s easy to forget or run out of time.

Automatic security updates ensure your servers stay patched without the hassle.

Now that you know how to update manually, it is time to set up automatic security updates.

We need to install the unattended-upgrades package. Use the following command to install it:

sudo apt install unattended-upgrades

Now, we need to run just one more command:

sudo dpkg-reconfigure unattended-upgrades

A pop-up window will appear, asking you if you want to automatically download and install stable updates.

Choose <Yes> and press ENTER.

When you do, the configuration tool updates the following file, ensuring both values are set to 1:

/etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

The number indicates how often Unattended Upgrades will run in days.

A value of 1 will run Unattended Upgrades every day, while a value of 0 will disable Unattended Upgrades.

Now that we have automatic security updates in place, there are some important considerations I’d like to share with you.

Firstly, Unattended Upgrades primarily deals with security updates. You’ll need to manually check for other updates regularly, perhaps weekly.

Also, be aware that Unattended Upgrades might automatically reboot your server for certain updates, which could be disruptive on a production server.

I recommend rebooting manually during low-traffic periods or after notifying users of downtime.

You can customize Unattended Upgrades to either disable automatic reboots or reschedule them for less disruptive times.

Lastly, while Unattended Upgrades can be set to update all packages, not just security ones, I don’t recommend this option, as some updates may break your server.

Tip: I recommend updating other packages on a staging server first to check for any errors or problems before updating them on your production server.

Unattended Upgrades has its settings in a file called 50unattended-upgrades under the /etc/apt/apt.conf.d/ directory.

Open this file in your preferred editor and take a look at its settings:

/etc/apt/apt.conf.d/50unattended-upgrades
...
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        // Extended Security Maintenance; doesn't necessarily exist for
        // every release and this system may not have it installed, but if
        // available, the policy for updates is such that unattended-upgrades
        // should also install from here by default.
        "${distro_id}ESMApps:${distro_codename}-apps-security";
        "${distro_id}ESM:${distro_codename}-infra-security";
//      "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};
...

As you can see from this block of code, Unattended Upgrades handles only security updates.

If you want it to handle non-security updates and update other installed packages, you can uncomment the line ${distro_id}:${distro_codename}-updates by removing the leading // like this:

/etc/apt/apt.conf.d/50unattended-upgrades
...
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        // Extended Security Maintenance; doesn't necessarily exist for
        // every release and this system may not have it installed, but if
        // available, the policy for updates is such that unattended-upgrades
        // should also install from here by default.
        "${distro_id}ESMApps:${distro_codename}-apps-security";
        "${distro_id}ESM:${distro_codename}-infra-security";
        "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};
...

If you only want Unattended Upgrades to handle security updates, as I recommend, ensure that only security origins are allowed and that all others are commented out, like its default behavior.

You may also want to configure whether Unattended Upgrades should reboot the server if a security update requires a reboot to be applied.

You can specify a time to reboot or disable this feature completely.

To control if Unattended Upgrades should reboot your system automatically, look for the line Unattended-Upgrade::Automatic-Reboot in the configuration file.

Set this to "false" to prevent automatic reboots after updates. If you prefer automatic reboots, change it to "true".

Additionally, you can schedule a specific time for these reboots.

For this, find the line Unattended-Upgrade::Automatic-Reboot-Time and set it to your desired time, like "04:00" for a reboot at 4 AM.
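Put together, the two reboot settings might look like this — whether you want "true" here depends on how disruptive a reboot is for your workload:

```text
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```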

There is one more thing to be aware of.

Sometimes, Unattended Upgrades fails to install a security update automatically, requiring a manual update.

You can specify an email address to which Unattended Upgrades should send an email in case this happens.

Note: Your server should be able to send emails. I’ve written a detailed guide on how to configure Postfix to use an external SMTP relay for sending emails.

Scroll down the file until you find the line Unattended-Upgrade::Mail ""; and add the email address you want to send a notification to inside the two double quotation marks.

Then, scroll down a little further until you find the line Unattended-Upgrade::MailReport "on-change"; and change it from "on-change" to "only-on-error" to receive a notification only if a security update fails to be installed.

Don’t forget to uncomment these two lines by removing the leading //. They should look like this:

/etc/apt/apt.conf.d/50unattended-upgrades
...
Unattended-Upgrade::Mail "hello@ivansalloum.com";
Unattended-Upgrade::MailReport "only-on-error";
...

Ensure that the mailutils package is installed on your server, as it provides the mail command used by Unattended Upgrades to send emails.

You can install it with the following command:

sudo apt install mailutils

To verify that your configuration is working correctly, you can manually trigger a run in debug mode by executing the following command (add the --dry-run option if you want to simulate the run without actually installing anything):

sudo unattended-upgrade -d

With this, we have configured our server to install security updates automatically.

Canonical Livepatch Service

The Canonical Livepatch Service, also known as Livepatch, is a service provided by Canonical (the company behind Ubuntu) that allows patches for the currently running Linux kernel to be applied live, meaning the patch is active immediately without the need to reboot the server.

This is particularly beneficial for production environments that need to run without any downtime.

Before we activate Livepatch on our server, there is some important information I’d like to share with you.

Livepatch is designed to fix serious security issues in the Linux kernel.

However, due to certain limitations, some parts of the kernel cannot be patched while the server is running.

In such cases, a traditional kernel upgrade and reboot might still be required.

There are a number of software components that can require you to reboot your server as well.

And the crucial point to note is that enabling Livepatch does not enable automatic security updates.

This is clearly mentioned in the Livepatch documentation, and Canonical itself recommends using Unattended Upgrades for automatic security updates, as we did earlier.

Let me simplify this for you:

  • Unattended Upgrades runs once a day and, by default, installs security updates, including kernel updates that can’t be livepatched.
  • Livepatch runs multiple times a day but only patches high and critical kernel vulnerabilities.

Now, let’s activate Livepatch on our server.

To use Livepatch, an account with Ubuntu Pro is necessary. Additionally, each account is limited to using Livepatch on five machines.

It doesn’t matter whether it is a desktop computer, a server, or a virtual installation of Ubuntu (a virtual machine).

A paid license is required for more machines.

Note: The Canonical Livepatch Service only works with the Ubuntu distribution.

Now, go to the Ubuntu Pro website and click on Get Ubuntu Pro Now.

You will be asked to specify who will use this subscription.

Choose Myself and click on Register to create an account.

Once you are finished, go to the Ubuntu Pro dashboard, where you will find your token.

Now, copy your token and run this command on your server, replacing token with your own:

sudo pro attach token
Output
...
This machine is now attached to 'Ubuntu Pro - free personal subscription'

SERVICE          ENTITLED  STATUS       DESCRIPTION
anbox-cloud      yes       disabled     ...
esm-apps         yes       enabled      ...
esm-infra        yes       enabled      ...
fips-preview     yes       disabled     ...
fips-updates     yes       disabled     ...
livepatch        yes       enabled      Canonical Livepatch service
realtime-kernel* yes       disabled     ...
usg              yes       disabled     ...

 * Service has variants
...

Livepatch is now enabled on our server. You can check the Livepatch status at any time using this command:

sudo canonical-livepatch status --verbose

For more details, I suggest referring to the Livepatch documentation.

The UFW Firewall

A firewall plays a key role in server security. Think of it as your server’s bouncer – it filters incoming and outgoing traffic, allowing only authorized access to specific ports.

To make this process easy, we’ll employ the Uncomplicated Firewall (UFW).

UFW simplifies the setup and management of firewall rules, making it user-friendly.

Installing UFW

On Debian-based distributions like Ubuntu, UFW usually comes pre-installed. If it’s missing, you can install it using:

sudo apt install ufw

Many server providers configure the UFW firewall upon deploying the server to allow only SSH connections, enabling you to connect to the server.

If you have a server from Vultr, UFW is likely to be enabled by default. In my case with Hetzner, UFW is not enabled.

You can check the status of UFW and your current ruleset using this command:

sudo ufw status

The command’s output will either indicate that UFW is inactive, or that it is active with your current rule set.

If UFW is currently inactive, that’s fine, as we’ll proceed to configure it properly and activate it.

However, if UFW is already active, disable and reset it using the following command (ufw reset will ask for confirmation before wiping your rules):

sudo ufw disable; sudo ufw reset

You can re-enable it once you have added all the rules and finished configuring it.

UFW’s Default Policy

By default, UFW takes a secure approach by blocking all incoming traffic while allowing outgoing traffic from our server.

This means our server can communicate externally, but it remains inaccessible to others.

Since there is no issue with our server reaching the outside world, there is no need to make any changes to that aspect.

However, to enable incoming traffic, it’s essential to selectively open only the required ports and authorize traffic through them.

If you wish to review the default settings for UFW, you can examine the /etc/default/ufw file. Open the file using your preferred editor.

/etc/default/ufw
...
# Set the default input policy to ACCEPT, DROP, or REJECT. Please note that if
# you change this you will most likely want to adjust your rules.
DEFAULT_INPUT_POLICY="DROP"

# Set the default output policy to ACCEPT, DROP, or REJECT. Please note that if
# you change this you will most likely want to adjust your rules.
DEFAULT_OUTPUT_POLICY="ACCEPT"
...

As you can see, the default policy for incoming traffic is set to DROP, while the default policy for outgoing traffic is set to ACCEPT.

You can review the default policy using the following command too:

sudo ufw status verbose

You can modify this default behavior of UFW either by directly editing the file or by using these two commands:

sudo ufw default policy incoming
sudo ufw default policy outgoing

Replace policy with deny, allow, or reject. deny corresponds to DROP, allow corresponds to ACCEPT, and reject corresponds to REJECT. For example, sudo ufw default deny incoming restores the default for incoming traffic.

Both DROP and REJECT policies prevent traffic from passing through the firewall, but they differ in their response messages.

With DROP, the traffic is silently discarded without any acknowledgment sent to the source. It neither forwards the packet nor responds to it.

On the other hand, REJECT sends an error message back to the source, signaling a connection failure.

Checking Open Ports

Before adding any firewall rules, you need to identify which ports are open on your server. This information can be obtained using the Nmap utility.

Install the nmap package using the following command:

sudo apt install nmap

Once Nmap is installed, use the following command to scan for open ports, replacing server_ip with your server’s IP address:

sudo nmap server_ip
Output
...
PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
110/tcp open  pop3
143/tcp open  imap
587/tcp open  submission
993/tcp open  imaps
995/tcp open  pop3s

This is the output I got from one of my servers. If your server is new, you might only see the SSH port open, as SSH is the only service installed by default.

Nmap, by default, scans only TCP ports, and only the 1,000 most common ones; add the -p- option to scan all 65,535. Use the following command to scan UDP ports:

sudo nmap -sU server_ip

The open state, as indicated, isn’t solely related to the firewall. In other words, it doesn’t necessarily imply that the port is accessible to the public.

Even if I have a rule to deny traffic to port 80, Nmap may still show the port as open.

Essentially, this signifies that a service, such as the HTTP service, is installed on the server and is using port 80.

We utilize Nmap to identify the services installed on our server along with their associated ports, helping us plan the necessary rules to add.

Configuring Rules

Now that we’ve identified the open ports on our server using Nmap and understand that UFW defaults to blocking incoming traffic, the next step is to add rules that allow traffic to these ports.

In the following, I’ll guide you through the various options that UFW offers for rule configuration.

I’ll share my approach to firewall configuration through helpful tips, providing just the essential information for proper setup.

This won’t delve too deeply into UFW.

If you find yourself needing more in-depth information or a comprehensive tutorial on UFW, feel free to let me know in the comments.

Allowing SSH Traffic

The first step before enabling a firewall is to allow traffic on port 22 (SSH) to ensure access to the server.

If you enable the firewall before adding this rule, you risk losing access to your server.

Use the following command to allow SSH traffic:

sudo ufw allow 22/tcp

This is a simple and swift solution to allow SSH traffic.

Essentially, we’ve permitted any IP address to access port 22, meaning our SSH port is open to everyone.

This is something I never do on a production server.

Although we’ve generated an SSH key pair, implemented key authentication, and added a non-root user, hackers could still attempt unauthorized access.

What if we could proactively prevent these attempts? This would enhance overall security.

We can restrict the SSH port to a specific IP, allowing only this designated IP to access it.

This ensures that SSH access is limited to a single, trusted IP address, such as your IP at your home network.

However, this approach comes with a drawback. Most home networks use DHCP, causing your IP to change periodically.

When your IP changes, you lose access to the server, making it less practical.

Note: We can address this issue by utilizing a cloud firewall, a topic I will cover in The Cloud Firewall section.

I typically opt for restricting the SSH port to a specific IP only when I have a dedicated static IP.

A method to acquire a dedicated static IP is by utilizing a VPN service.

Numerous VPN providers provide the option of a dedicated static IP for an additional fee.

If you have a dedicated static IP, use the following command:

sudo ufw allow from IP proto tcp to any port 22

Replace IP with your IP address.

Note: When you restrict SSH access to only one IP, Fail2ban becomes irrelevant as there are no IPs to block. I mention this to avoid confusion. However, I still recommend keeping Fail2ban installed and enabled even when restricting SSH access.

If I can’t restrict SSH access and don’t want to make my SSH port accessible to all, there’s still something I can do, which is using the limit rule.

It allows only 6 connections from the same IP address within a 30-second window, protecting the server from potential brute-force attacks.

Use the limit rule instead of the allow rule for SSH on a production server:

sudo ufw limit 22/tcp

If you wish to review the rules you added and your firewall is disabled, the sudo ufw status command won’t display your ruleset.

Instead, you can use the sudo ufw show added command to view the rules you added, even when the firewall is disabled.

Allowing HTTP and HTTPS Traffic

If you are hosting a website, you need to allow traffic on ports 80 (HTTP) and 443 (HTTPS) for your visitors to reach your website.

In this case, there should be no restrictions, as we want everyone to be able to access our website.

You can allow traffic by using the allow rule, similar to what we did for SSH, as follows:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

This should be fine, but there are some considerations I would like to share with you.

First, if you are using an SSL certificate, which you should, and redirecting traffic from HTTP to HTTPS, as recommended, there is no need to allow traffic on port 80.

Although allowing traffic on both ports is not a problem and I’ve never faced any issues with it, I wanted to inform you.

Second, if you are using a proxy service, such as Cloudflare or Sucuri, traffic will only come from their IPs.

You can enhance security by restricting HTTP and HTTPS traffic to only their IPs.

For example, for Cloudflare, you would use these commands:

sudo ufw allow from IP to any port 80 proto tcp
sudo ufw allow from IP to any port 443 proto tcp

You should repeat these two commands for all Cloudflare IPs, or you can use a bash script to automate this.
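Here’s one way such a script might look. It is a sketch, not a drop-in tool: the three ranges in the list are examples taken from Cloudflare’s published IPv4 list, and the ufw commands are only collected and printed so you can review them before running anything for real.

```shell
#!/usr/bin/env bash
# Sketch: build UFW rules for a list of Cloudflare IPv4 ranges.
# The ranges below are examples; always fetch the current list from
# https://www.cloudflare.com/ips-v4/ before using this on a real server.
cf_ranges="173.245.48.0/20
103.21.244.0/22
103.22.200.0/22"

rules=""
for range in $cf_ranges; do
  for port in 80 443; do
    # Collect the commands instead of running them, so they can be reviewed.
    rules="${rules}sudo ufw allow from $range to any port $port proto tcp
"
  done
done

# Print the generated commands for review.
printf '%s' "$rules"
```

Once you’ve reviewed the output, you can pipe it to sh, or swap the collection step for the real ufw calls.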

Allowing Other Traffic

Now that you know how to allow traffic, restrict access, and utilize the limit rule, it’s time to add rules for other ports.

Plan your ruleset based on the services installed and those you intend to use.

If you have a control panel installed on your server or a service that you exclusively use, consider restricting access to your IP or using the limit rule.

Denying Traffic

As I mentioned earlier, UFW is configured to deny all incoming traffic by default.

However, there may be situations where you want to block traffic based on the source IP address or subnet, especially if you’re aware of attacks coming from that source.

Consider this simple scenario: I have a WordPress website hosted on my server, and I’ve set rules to allow traffic on ports 80 and 443. However, I noticed that a specific IP is continuously attacking the login page, consuming my server resources. To address this, I want to block traffic from that IP, preventing access to the website.

I can achieve this using the following commands:

sudo ufw deny from IP to any port 80 proto tcp
sudo ufw deny from IP to any port 443 proto tcp

Replace IP with the IP of the attacker. This way, we have blocked HTTP and HTTPS traffic to our website from that IP address.

If you want to block traffic from an IP address on all ports, you can use the following command:

sudo ufw deny from IP

If an entire subnet of IPs is causing issues, we can extend this approach to block traffic from the entire subnet using this command:

sudo ufw deny from IP/24

These are some basic examples of how to deny traffic from a specific source. These can be quite handy in certain situations when you need to quickly stop an ongoing attack.
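For reference, the /24 suffix in the subnet rule tells the firewall how many leading bits of the address must match. A /24 leaves 32 - 24 = 8 host bits, so a single rule covers 2^8 addresses. A quick sanity check in the shell:

```shell
# Host bits left by a /24 prefix, and the number of addresses it covers:
prefix=24
echo $((2 ** (32 - prefix)))   # prints 256
```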

Application Profiles

Applications (services installed) can register their profiles with UFW upon installation, enabling UFW to manage them by name. To view the available profiles, you can use the following command:

sudo ufw app list
Output
Available applications:
  Apache
  Apache Full
  Apache Secure
  Dovecot IMAP
  Dovecot POP3
  Dovecot Secure IMAP
  Dovecot Secure POP3
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH
  Postfix
  Postfix SMTPS
  Postfix Submission

This is the output I got from one of my servers. If your server is new, you are more likely to see only the OpenSSH profile, which is the service behind SSH.

When using Application Profiles, there’s no need to memorize specific ports. Instead, you utilize the profile name to allow, deny, or reject traffic.

For instance, to allow traffic on port 443 (HTTPS), you would use the following command:

sudo ufw allow "Nginx HTTPS"

Or:

sudo ufw allow "Apache Secure"

If you’re curious about the origins of these profiles, check the /etc/ufw/applications.d/ directory.

Deleting Rules

If, for some reason, you want to delete a rule you have added, you can use the sudo ufw delete command followed by the rule itself like this:

sudo ufw delete deny from 111.111.111.111 to any port 80 proto tcp

Or:

sudo ufw delete allow 80

There is another easier way to delete rules, but it requires the firewall to be enabled. This method involves using the rule number.

Once the firewall is enabled, you can use the sudo ufw status numbered command to obtain a list of your rules and their corresponding numbers, like this:

Output
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere                  
[ 2] 22/tcp (v6)                ALLOW IN    Anywhere (v6)  

Now, to delete a rule, I can simply use the rule number:

sudo ufw delete 1 

This is a much simpler method.

Pre-Activation Check

Before activating our firewall, it’s crucial to review the rules we’ve added so far to prevent any unexpected behavior.

Since the firewall is currently disabled, we can’t use the sudo ufw status command to get a list of our rules. Instead, we could use this command:

sudo ufw show added

This command will list all the rules you have added.

Always add the rules, review them, and then proceed to activate the firewall.

Enabling the Firewall

For our final step, let’s activate the firewall. Simply enable the UFW firewall using this command:

sudo ufw enable

With this, our firewall is enabled. If you experience any issues, review your rules again.

Read more: Explore my blog post on effectively blocking INVALID packets using UFW.

The Cloud Firewall

Most server providers offer the option to set up a cloud firewall for your servers.

Using a cloud firewall allows you to create a more advanced firewall setup.

You will have the UFW firewall, an operating system-level firewall that we’ve configured and enabled, and a cloud-level firewall.

This means you have two firewalls working together.

Benefits

An advantage of having a configured and enabled cloud firewall is that if you accidentally add a rule that might lock you out from your server, you won’t be stranded.

The cloud firewall is accessible through the provider’s dashboard, allowing you to make changes as needed.

As I mentioned earlier, we’ll address the issue of our home network IP potentially changing due to DHCP.

That’s why I prefer using the cloud firewall for rules that might change in the future.

Here’s my approach: I configure the UFW firewall with no specific restrictions. For example, for SSH, I allow SSH traffic from all sources on the UFW firewall. However, on the cloud firewall, I specifically restrict SSH to my IP. This way, whenever my IP changes, I can access the provider’s dashboard, update the IP, and regain access to the server.

Another significant advantage is the ability to link the cloud firewall with multiple servers.

This means that configuring and enabling the cloud firewall simultaneously applies to multiple servers, providing more convenience than changing the same rules individually.

Returning to my approach, if my IP changes, and I want to update it in the firewall to regain access, I do it once, and I regain access to all servers connected to the same firewall.

Consider this scenario for better understanding: I previously showed you how to restrict HTTP and HTTPS to Cloudflare’s IPs if you are using their proxy. Now, imagine if Cloudflare changes an IP or adds a new one. If you used the UFW firewall, you would need to adjust the rules on each server individually, which could be inconvenient and time-consuming. However, with the cloud firewall, you would make the change once and push it to all linked servers.

The Default Policy

Most cloud firewalls have a default policy of blocking all incoming traffic and allowing all outgoing traffic, similar to UFW.

Hetzner follows the same policy as UFW, and it is the server provider I use for all my projects.

If you are also using Hetzner, you are good to go.

If not, ensure that the policy aligns with UFW’s, and then configure it similarly by adding rules to allow traffic to specific ports, just as we did with UFW.

If the default policy is different and you are unable to change it, please contact support or refer to the documentation of your provider for further information.

However, it’s worth noting that almost all server providers follow the same policy.

Configuring and Activating

I will guide you through configuring and activating the cloud firewall with Hetzner. If you are using another provider, the process should be similar.

Inside your project, go to the Firewalls tab, and click on the Create Firewall button. A new page will open where you can configure your firewall rules.

Add a description for each rule you add to ensure easy understanding of your configurations.

Any IPv4 and Any IPv6 allow traffic from all sources.

For the first rule, allowing traffic to port 22 (SSH), remove Any IPv4 and Any IPv6, and add only your current IP.

If your IP changes, you can always access the firewall from your dashboard and update it to the new one without any issues.

Now, for the second rule, which is allowing traffic to ICMP, it’s worth noting that many network administrators consider ICMP a security risk and opt to block it at the firewall.

While ICMP does have some security issues, it also serves important functions. Some features are useful for troubleshooting, while others are essential for certain software to function correctly.

It’s important to note that UFW has the ICMP protocol open by default, and I prefer leaving it this way while controlling the ICMP protocol from the cloud firewall.

I have never experienced any issues denying traffic to ICMP, but your experience may vary.

Another important point about the ICMP protocol is that when it is not blocked, anyone can ping your server’s IP to check if it’s online.

When ICMP is blocked, your server won’t respond to pings, which makes casual discovery harder, though it doesn’t hide the server from a full port scan.

I recommend trying to deny traffic first. If a problem occurs, allow traffic again to ICMP. It is essentially up to you.

If you want to deny traffic to ICMP, simply remove the second rule. If you want to allow traffic again, add the rule back.

Now, add the remaining rules according to your preferences, ensuring that they align with UFW’s ruleset to avoid conflicts.

I always leave the Outbound rules empty as there is nothing to add.

Once you’ve completed adding the rules, scroll down to the Apply to section, and select the servers to which you want this firewall to apply.

Scroll to the end, add a name to the firewall, and click on the Create Firewall button.

Now, we have two firewalls enabled and working together.

Virus and Malware Scanning

Regular scanning for viruses and malware is crucial, especially when hosting a website, to ensure your files remain free from infections.

Clam AntiVirus (ClamAV) is a free, efficient, open-source antivirus engine that excels at detecting, quarantining, and removing various types of malware, such as trojans, worms, and rootkits.

In the following, I’ll guide you on installing ClamAV and utilizing it for effective virus and malware scanning.

Resource Considerations

Before proceeding with the installation of ClamAV, it’s crucial to be aware that ClamAV can be resource-intensive, especially during scanning processes.

Ensure that your server has sufficient resources, including CPU and memory, to support ClamAV effectively.

Inadequate resources may lead to performance issues and affect the overall functionality of your server.

I recommend a server with at least two dedicated CPU cores and 8 GB of memory.

I also recommend installing and testing ClamAV on a testing server first to get an idea of how many resources it consumes and to plan your capacity accordingly.

I’ve never experienced any performance issues when running ClamAV on a dedicated vCPU server from Hetzner. Their CPUs are powerful, and their memory is high-speed.

If you’re interested in trying ClamAV on Hetzner’s dedicated vCPU servers, you can use my link to get free credits to start.

Installing ClamAV

Now it’s time to install ClamAV. Use the following command for installation:

sudo apt install clamav

ClamAV’s virus signature database, which is regularly updated with newly discovered threats, is downloaded automatically during installation.

This installation adds one new service: the clamav-freshclam service, which is enabled and running by default.

This service periodically checks for virus database definition updates, downloads, and installs them.

The latest virus database definitions are located in the /var/lib/clamav/ directory.

Running a Scan

To run a scan, use the clamscan command on the directory you want to scan.

Note: Avoid scanning the entire root filesystem / for now. I’ll explain how to do that later.

I will scan my WordPress files for any malware or viruses:

sudo clamscan -r /var/www/webapps/wordpress/public_html
Output
...
----------- SCAN SUMMARY -----------
Known viruses: 8680618
Engine version: 0.103.9
Scanned directories: 1151
Scanned files: 6323
Infected files: 0
Data scanned: 361.83 MB
Data read: 145.92 MB (ratio 2.48:1)
Time: 69.349 sec (1 m 9 s)
Start Date: 2023:12:15 13:33:46
End Date:   2023:12:15 13:34:55

ClamAV scanned my WordPress files and found no malware or viruses. Great!

The -r option stands for recursive scan. When you include this option in your command, ClamAV will not only scan the specified directory but also all of its subdirectories.

This is particularly useful when you want to thoroughly check an entire directory tree for malware or viruses, ensuring that no part of the directory structure is overlooked.

Good To Know

When you run a scan with ClamAV, it uses about one GB of memory just to get started. This happens because it loads the virus database into memory.

After running the command, you’ll notice a short wait before the scan begins, and the memory usage on your server will increase by around one GB or more.

ClamAV needs this one GB of free memory for the virus database, plus additional resources for the scanning process itself.

That’s why I recommend scanning only when you have sufficient resources available.
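A quick way to check how much memory is available before kicking off a scan:

```shell
# Show total, used, and available memory in human-readable units:
free -h
```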

Some people suggest stopping the clamav-freshclam service and preventing it from starting on reboot to save resources.

They recommend manually updating the virus database only when needed for a scan.

However, I don’t completely agree with this approach. In my experience, the clamav-freshclam service, which checks for virus database updates about twice a day, doesn’t use a lot of resources and helps keep the virus database current.

While manually updating is an option and just involves running a command before scanning, it’s not necessary.

If you still choose to stop and disable the service, you can use these two commands to do so:

sudo systemctl stop clamav-freshclam.service
sudo systemctl disable clamav-freshclam.service

Now, you’ll need to manually update the virus database each time before you scan. Use this command for manual updates:

sudo freshclam

Given ClamAV’s resource requirements, my approach is to scan only when my server has enough resources, and I always keep the clamav-freshclam service running.

Scanning the Root Filesystem

You may consider scanning the entire root filesystem / for malware or viruses, but there are critical aspects to consider.

Any files identified as malware or viruses, including potential false positives, will be moved.

This action might inadvertently break your server. If you’re using ClamAV on a production server, run a scan only if absolutely necessary.

Scanning the entire root filesystem can be resource-intensive, so it’s important to monitor resource usage.

If your server has limited resources, it could significantly impact overall performance.

Certain directories like /proc, /sys, /run, /dev, and /snap are special and scanning them may lead to errors.

Therefore, it’s essential to exclude these directories from the scan. Creating a custom directory for moving infected files is advisable, and this directory should also be excluded from the scan.

To begin, create a directory for quarantining infected files:

sudo mkdir /root/quarantine

I created this directory within the root home directory.

To scan the entire root filesystem while excluding special directories, use the following command:

sudo clamscan -r --log=/var/log/clamav/clamscan.log --move=/root/quarantine --exclude='^/(proc|sys|run|dev|snap)/' / 

This command performs a recursive scan of the root filesystem, excluding the specified special directories.

It logs the scan results in the clamscan.log file located in the /var/log/clamav/ directory and moves any detected malware or viruses to the quarantine directory.

The final recommendation is to avoid setting up automated scans with ClamAV. It’s better to run scans manually.

This approach is due to the unpredictable nature of scan results. There’s a risk that ClamAV might mistakenly identify a normal file as malware and move it, which could break your server.

Therefore, it’s crucial to always run scans manually and carefully examine the results to maintain the integrity of your server.

Read more: Explore my blog post on maintaining process privacy on a Linux server.

Backups

The final tip, although arguably not directly related to security, is crucial enough to be included in this guide.

From the start, I emphasized the importance of maintaining a prepared mindset for any situation.

Without backups, how can you be truly prepared? What if your server gets hacked? Would you prefer struggling to recover or swiftly setting up a new server and restoring your business from a backup? The latter is my choice.

I ensure my backups are stored in at least three remote locations to safeguard against hacks, server failures, or any unforeseen events, providing a reliable plan B to keep my business operational.

As for specific backup solutions, it depends on what you’re hosting. For a WordPress site, use a plugin and an automated bash script for both application-level and server-level backups.

Use WP-CLI for an additional backup, and send these three backups to three different locations.
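For the server-level part, a minimal bash sketch might look like this. The paths in the usage example are assumptions based on my own layout, and the database dump line is commented out because it needs credentials configured separately.

```shell
#!/usr/bin/env bash
# Sketch of a server-level WordPress backup: archive the web root into a
# date-stamped tarball. Extend it with a database dump for a full backup.
set -eu

backup_wordpress() {
  local wp_root="$1" backup_dir="$2" stamp
  stamp="$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$backup_dir"
  # Archive the site files relative to their parent directory.
  tar -czf "$backup_dir/files-$stamp.tar.gz" \
      -C "$(dirname "$wp_root")" "$(basename "$wp_root")"
  # For a complete backup, also dump the database, e.g.:
  # mysqldump wordpress | gzip > "$backup_dir/db-$stamp.sql.gz"
}

# Example (adjust the paths for your own setup):
# backup_wordpress /var/www/webapps/wordpress/public_html /var/backups/wordpress
```

From there, a cron job can run it on a schedule, and a second step can copy the resulting tarballs to your remote locations.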

While this may seem costly and time-consuming, the investment is worthwhile. If three locations are impractical, opt for at least one location.

Crucially, it’s not just about backing up your data – it’s about testing those backups. Ensure you have tested and functional backups, not just stored files.

Imagine your server is compromised or inaccessible, and you need to revive your business.

Test your backup, measure the time it takes to restore your business online, and have a plan in place for data restoration in case of emergencies.

Conclusion and Final Thoughts

Great job reaching the end! I hope this guide has been super helpful for you in securing your Linux server.

If you found value in this guide or have any questions or feedback, please don’t hesitate to share your thoughts in the comments section below.

Your input is greatly appreciated, and you can also contact me directly if you prefer.

Newsletter

I'm excited to share my knowledge and experience! Subscribe to my newsletter for the latest updates. 👇
