Disk performance is a critical factor that impacts everything from boot times to database queries.

I learned this the hard way when a slow disk caused high I/O wait on my server, bringing everything to a crawl. Web pages loaded slowly, database queries took forever, and the entire server felt unresponsive.

In high-performance environments such as web servers and databases, where thousands of I/O operations occur every second, disk speed becomes a non-negotiable priority.

That's why in this guide, we'll explore how to benchmark disk performance on a Linux server using various tools and metrics, helping you assess throughput, latency, and IOPS effectively.

I assume you're working on a properly set-up Ubuntu server. If not, check out my guide on preparing Ubuntu servers to get started.

Author's Note

Before we jump into benchmarking, here's an important disclaimer: never run these tests on a production server.

You risk disrupting live services, and the background load on a busy machine will skew your results anyway.

For the most accurate results, follow these best practices:

  • Run tests on an inactive server, with no competing I/O activity or memory pressure (see the quick check after this list).
  • Repeat each test 2–3 times to account for variability.
  • Calculate the average value of your results for better accuracy.
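
For instance, here's a quick way to confirm the server is idle before you start. This is a minimal sketch; iostat comes from the sysstat package, so you may need to install it first:

# Install sysstat if iostat isn't available yet
sudo apt install sysstat

# Watch extended disk stats for five one-second intervals;
# %util should sit near 0 on an idle disk
iostat -x 1 5

# Check that plenty of memory is free before testing
free -h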

Understanding Disk Basics

Let's take hosting a WordPress website as an example to understand how disk performance affects website speed.

Disks are responsible for storing your website files, databases, and logs, so their speed directly influences the user experience.

Key factors that affect disk speed include the type of disk and various disk speed metrics.

Type of disk:

  • HDD (Hard Disk Drive): Slower performance due to the use of spinning mechanical disks.
  • SSD (Solid State Drive): Much faster than HDDs, as they have no moving parts.
  • NVMe SSD: The fastest type of SSD, offering superior speed compared to standard SSDs.

Disk speed metrics:

  • Latency: The time it takes to access data on the disk. Lower latency translates to faster data access and a quicker website.
  • Throughput: The amount of data that can be transferred per second, measured in MB/s. Higher throughput means faster data transfer.
  • IOPS (Input/Output Operations Per Second): This metric measures how many small read/write (I/O) operations can occur per second. Higher IOPS is crucial for databases, as it allows for faster processing of multiple requests.

HDDs are good at sequential reads/writes but slow at handling small random requests. SSDs and NVMe disks excel at random IOPS and low latency.
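
If you're not sure which type of disk your server has, lsblk can tell you. In the output, a ROTA value of 1 means a rotational (spinning) HDD, while 0 means an SSD; NVMe drives also show up with names like nvme0n1:

# List physical disks with their rotational flag, size, and model
lsblk -d -o NAME,ROTA,SIZE,MODEL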

Types of Disk Operations

When benchmarking, you'll typically test two different types of read/write operations:

  • Sequential I/O: Large continuous files, like downloading a big backup or saving a large video.
  • Random I/O: Small, scattered files, like database queries or WordPress files.

For example, WordPress websites mainly depend on random read/write performance, especially for database and PHP file reads.

Basic Read/Write Tests

In the following, I’ll show you how to test your disk’s sequential read/write performance.

Sequential I/O tests measure how quickly data can be written to or read from the disk in a continuous, linear order.

These tests are commonly used for tasks like large file transfers or backups and simulate real-world scenarios where data is written continuously, without the disk needing to jump to different parts.

Sequential Read Speed Performance

When deploying a new Linux server, I always begin with a quick sequential read speed test using the hdparm command.

hdparm measures both cached and buffered read speeds, giving you an idea of how quickly your server can access data from memory and directly from the disk.

It ensures accurate measurements by flushing the buffer cache before the test, since we want to know how fast data is read from the disk itself, not from any cache.

Now, to run the read speed test, use the following:

sudo hdparm -Tt /dev/sda

Replace /dev/sda with the actual device name of your target disk.

Once the test completes, hdparm will display output like this:

/dev/sda:
 Timing cached reads:   28238 MB in  1.99 seconds = 14180.88 MB/sec
 Timing buffered disk reads: 6434 MB in  3.00 seconds = 2144.09 MB/sec

  • Timing cached reads: Measures memory read speed, indicating how fast data is accessed from memory.
  • Timing buffered disk reads: Reflects the actual disk read speed, showing how fast your server can read data directly from the disk.

It's a simple yet powerful way to get an initial sense of disk performance. I love this command!
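
Per the best practices above, I'd repeat the test a few times and average the buffered disk read figures. A simple loop makes that painless:

# Run the test three times and average the
# "buffered disk reads" figures across runs
for i in 1 2 3; do
  sudo hdparm -Tt /dev/sda
  sleep 5   # let the disk settle between runs
done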

Sequential Write Speed Performance

To measure sequential write speed, you can use the following dd command to write data to a test file:

dd if=/dev/zero of=test_file bs=64k count=16k conv=fdatasync

Here’s what each part of the command does:

  • if=/dev/zero: Uses a special file that generates zeroes, so no actual data needs to be read, focusing on write performance.
  • of=test_file: Specifies the file where data is written.
  • bs=64k: Sets the block size to 64KB, a common block size for disk transfers in real-world scenarios.
  • count=16k: Writes 16,384 blocks of 64 KB, i.e., 1 GiB of data in total (64 KB × 16,384).
  • conv=fdatasync: Ensures all data is physically flushed to the disk before dd reports its timing. Without this flag, some data might remain in memory (cached) instead of being written to the disk, which would inflate the results.

Once the test completes, dd will display output like this:

16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.36814 s, 785 MB/s

785 MB/s is the sequential write speed of the disk, indicating how fast the disk can write data in this test scenario.

Feel free to change the bs and count values based on your specific test scenario. For example, using bs=1M and count=1024 would write a 1GB file but with a larger block size.
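
As a concrete sketch of that variant (and since dd leaves the test file behind, it's worth removing it when you're done):

# Same 1 GiB write test, but with 1 MiB blocks
dd if=/dev/zero of=test_file bs=1M count=1024 conv=fdatasync

# Clean up the test file afterwards
rm test_file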

Advanced Disk Performance Benchmarking

fio is one of the most powerful and flexible tools for benchmarking disk performance, particularly because it can simulate real-world workloads.

To evaluate how well your disk can handle random I/O loads, we can run a benchmark that simulates both random read and random write operations.

This is especially important for websites that serve dynamic content and rely on database access, such as WordPress.

fio is not usually installed by default. To install it, run the following command:

sudo apt install fio

Here’s how you can benchmark your disk’s random I/O performance:

fio --name=random_rw --ioengine=libaio --rw=randrw --direct=1 --bs=4k --numjobs=4 --size=1G --runtime=30 --time_based --group_reporting --unlink=1

Breakdown of the command:

  • --name=random_rw: The name of the test job.
  • --ioengine=libaio: libaio is the I/O engine that enables asynchronous operations. It's a good fit for testing random I/O because it lets multiple I/O operations be in flight at once, much like on a real-world server.
  • --rw=randrw: This tells fio to perform random read/write operations.
  • --direct=1: This forces direct I/O, bypassing the server's cache, ensuring that the test measures actual disk performance without being influenced by cached data.
  • --bs=4k: Block size of 4KB is common for database workloads.
  • --numjobs=4: This creates 4 parallel jobs, which simulates multiple users or processes accessing the disk concurrently. More jobs will help simulate a higher load and better represent the demands on a real server.
  • --size=1G: Each of the 4 jobs will use 1 GB of data.
  • --runtime=30: This runs the benchmark for 30 seconds, allowing enough time to get a solid measurement of disk performance without excessive overhead.
  • --time_based: Makes fio run for the full --runtime duration (30 seconds here), looping over the data if it gets through it early, rather than stopping as soon as everything has been read or written. This is particularly useful for simulating sustained, time-based load.
  • --group_reporting: Combines the results from all jobs into a single report for easier analysis.
  • --unlink=1: After the benchmark, unlink deletes the test files to clean up.

Once the test completes, fio will generate a large amount of output. Pay close attention to these specific lines in the output:

read: IOPS=27.8k, BW=109MiB/s (114MB/s)(3263MiB/30001msec)
     lat (usec): min=55, max=5530, avg=142.94, stdev=50.93
write: IOPS=22.2k, BW=86.5MiB/s (90.8MB/s)(2597MiB/30001msec)
     lat (usec): min=4, max=3448, avg= 6.81, stdev= 8.16

Read performance:

  • IOPS: 27.8k operations per second (higher is better).
  • Bandwidth: 114MB/s throughput (how fast data is read).
  • Latency: Average time per read (142.94 µs), lower is better.

Write performance:

  • IOPS: 22.2k write operations per second (higher is better).
  • Bandwidth: 90.8MB/s throughput (how fast data is written).
  • Latency: Average time per write (6.81 µs), lower is better.

The disk performs well with high IOPS (27.8k for reads, 22.2k for writes), meaning it can handle many operations per second. The throughput (109 MiB/s for reads, 86.5 MiB/s for writes) indicates good data transfer speeds, which is important for handling website traffic. The latency is low, especially for writes, meaning the disk responds quickly to requests.
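
One more note before wrapping up: by default, --rw=randrw splits operations roughly 50/50 between reads and writes. Real web workloads tend to be read-heavy, so as a rough sketch you could bias the mix with fio's --rwmixread option, here 70% reads:

# Same random I/O test, weighted 70% reads / 30% writes
fio --name=random_rw70 --ioengine=libaio --rw=randrw --rwmixread=70 --direct=1 --bs=4k --numjobs=4 --size=1G --runtime=30 --time_based --group_reporting --unlink=1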

Conclusion and Final Thoughts

In this guide, we’ve explored the essential tools for benchmarking disk performance on a Linux server.

By measuring IOPS, throughput, and latency, you can gain a clearer understanding of how your server’s disk is performing.

Remember, disk speed plays a crucial role in server responsiveness, and investing time in disk performance analysis can lead to significant improvements in your server's overall performance and stability.

If you found value in this guide or have any questions or feedback, please don't hesitate to share your thoughts in the discussion section.

Your input is greatly appreciated, and you can also contact me directly if you prefer.