Getting the best performance from solid state drives on Linux
Minimizing Write Cycles
Avoiding unnecessary write cycles helps prolong the life of the SSD. One way to avoid writing to the disk is to keep parts of the filesystem in memory using tmpfs, a RAM-based filesystem. Good candidates for relocating to memory are directories such as /tmp, /var/spool, /var/tmp, and sometimes /var/log. In the case of /var/log, you need to decide whether you need persistent logs or whether you are OK with discarding the log information when you shut down the computer. To locate the directories in memory, add the entries from Listing 6 to the end of the /etc/fstab file. These entries mount the directories as tmpfs.
Listing 6
Logs in Memory
none /tmp       tmpfs defaults,noatime,mode=1777 0 0
none /var/tmp   tmpfs defaults,noatime 0 0
none /var/log   tmpfs defaults,noatime 0 0
none /var/spool tmpfs defaults,noatime 0 0
Over time, the browser cache also causes many write cycles. The /run/user directory, which resides in RAM by default, is a useful candidate for a browser cache location. In Firefox or Iceweasel, enter about:config in the address bar and confirm the prompt. Now right-click anywhere on the list and create a new entry via the context menu (New | String). Enter browser.cache.disk.parent_directory as the name. The content you need for the new entry is /run/user/1000/firefox-cache. Change 1000 to your user ID, which the id -u command outputs on the system. After you restart the browser, it will write its temporary data to RAM.
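To determine the correct value for your account, you can build the path on the shell. This short sketch uses id -u, as described above; the firefox-cache directory name matches the value entered in about:config:

```shell
# The 1000 in the article is only an example; id -u prints the
# actual numeric user ID of the current account
uid=$(id -u)
cache_dir="/run/user/${uid}/firefox-cache"
echo "$cache_dir"
```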
You need to do the same thing, but with slightly different steps, in Chrome or Chromium. Edit the browser's start menu entry, which is stored as chrome.desktop, google-chrome.desktop, or chromium.desktop in the /usr/share/applications/ directory. Using an editor started with root privileges, add the cache directory to the Exec line with the --disk-cache-dir=/run/user/$UID/chrome-cache option (Listing 7). Because the Exec line is not processed by a shell, replace $UID with your numeric user ID. Then, the next time you start Chrome, the browser will store its data in RAM.
Listing 7
Browser Cache in Memory
[...]
# Exec=/usr/bin/google-chrome-stable %U
Exec=/usr/bin/google-chrome-stable --disk-cache-dir=/run/user/$UID/chrome-cache %U
[...]
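If you prefer not to modify the system-wide file, which a package update would overwrite, a copy in the per-user applications directory takes precedence. The file name google-chrome.desktop here is an assumption and may differ on your distribution:

```shell
# Copy the desktop entry to the per-user directory, which overrides
# the system-wide copy; then edit the Exec line in the copy
mkdir -p ~/.local/share/applications
cp /usr/share/applications/google-chrome.desktop \
   ~/.local/share/applications/ 2>/dev/null \
   || echo "desktop file not found - check the file name"
```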
Browsers automatically add the required subdirectory in /run/user/. This same principle also applies to other browsers. However, if you use the system for long periods of time without rebooting, pay attention to memory levels.
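The standard tools suffice for keeping an eye on how much RAM the tmpfs mounts are consuming; a quick check might look like this:

```shell
# Show usage of all tmpfs mounts - they are RAM-backed, so this
# space counts against main memory
df -h -t tmpfs 2>/dev/null || echo "no tmpfs mounts found"
# Total and available memory straight from the kernel
grep -E '^(MemTotal|MemAvailable)' /proc/meminfo
```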
Setting the Pace
Reading and writing data on a disk is painfully slow compared with accessing main memory. Aligning the read/write heads on magnetic disks alone takes several milliseconds. The system therefore needs intelligent methods to reduce access times and optimize access to the disk.
The I/O scheduler is responsible for disk access in the Linux kernel. The scheduler combines and sorts file access so the disk heads move as little as possible. The Completely Fair Queuing (CFQ) scheduler, which is typically the kernel default, does not offer optimal SSD support. You can easily coax an additional 5 to 10MBps of transfer rate out of the SSD by replacing CFQ with the Deadline or NOOP scheduler.
You can see the current scheduler for the first disk, /dev/sda, in the file /sys/block/sda/queue/scheduler (see line 1 of Listing 8). You can change the scheduler without rebooting (line 3); however, the new setting will not survive a reboot.
Listing 8
Checking the Current Scheduler
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ sudo tee /sys/block/sda/queue/scheduler <<< deadline
$ cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
The easiest way to set up the desired scheduler is using Udev. Start an editor with root privileges and save the contents from Listing 9 in the 60-schedulers.rules
file below /etc/udev/rules.d/
. Udev reads the queue/rotational
attribute to discover whether the disk is a traditional hard disk or an SSD, and then it automatically sets the desired scheduler.
Listing 9
Setting the Scheduler
# Activate the Deadline scheduler for solid state drives
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
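You can check in advance which of your disks the rule will match by reading the same attribute by hand; 0 identifies a non-rotating device (an SSD), 1 a conventional hard disk:

```shell
# Print the rotational flag for every SCSI/SATA disk the kernel knows about
for disk in /sys/block/sd*; do
    [ -e "$disk/queue/rotational" ] || continue
    printf '%s: rotational=%s\n' "${disk##*/}" "$(cat "$disk/queue/rotational")"
done
```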
Kernel 3.17 comes with a new mechanism for SSDs connected via SATA. The kernel completely bypasses the conventional block I/O layer, and thus the scheduler and its weaknesses. This new mechanism is Multi-Queue Block I/O Queuing (blk-mq), which was implemented for device connections via PCIe in kernel 3.13 [8]. This mechanism did not result in any adverse effects in a two-month endurance test.
You can enable blk-mq without having to recompile the kernel. To do so, first create the /etc/modprobe.d/scsi-mod.conf file as root and add options scsi_mod use_blk_mq=1 as the content (the commands in Listing 10 will do this for you). If you now query the active scheduler after a restart, you should see none as the result.
Listing 10
Enabling Block I/O Queuing
$ sudo touch /etc/modprobe.d/scsi-mod.conf
$ echo options scsi_mod use_blk_mq=1 | sudo tee /etc/modprobe.d/scsi-mod.conf
[... Restart the computer ...]
$ cat /sys/block/sda/queue/scheduler
none
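After the reboot, you can also confirm the module parameter directly. The parameter file only exists on kernels that support blk-mq for SCSI devices (an assumption for 3.17 and later):

```shell
# Y (or 1) means blk-mq is active for SCSI devices
param=/sys/module/scsi_mod/parameters/use_blk_mq
if [ -r "$param" ]; then
    status=$(cat "$param")
else
    status="use_blk_mq parameter not available on this kernel"
fi
echo "$status"
```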
Benchmarks
You can easily see the improvement in read/write performance, particularly when many small files are being deleted. In our lab, we extracted the source code for the Linux kernel, with a total size of 497MB spread across 36,706 small files. After unpacking, we flushed all temporarily stored data with sync, deleted the unpacked source code, and again wrote the cache to disk using sync.
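The measurement itself needs nothing more than time, sync, and rm. A simplified sketch with synthetic data instead of the kernel sources (the 1,000 files here are an arbitrary stand-in for the 36,706 files in the test) looks like this:

```shell
# Create many small files, flush them to disk, then delete and flush
# again; on a filesystem mounted with the discard option, the second
# sync is where the online discard cost shows up
mkdir -p testdir
for i in $(seq 1 1000); do
    echo "some data" > "testdir/file$i"
done
time sync
time rm -rf testdir
time sync
```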
The figures in Table 1 speak for themselves: With online discard, deleting many files takes a lot of time – 40 seconds longer than a batch discard, which you run manually or periodically via a script.
Table 1
SSD Performance
Action | Duration (secs)
---|---
**Without Online Discard** |
Extract | 1.21
Sync | 1.66 (= 172MBps)
Delete | 0.47
Sync | 0.17
**With Online Discard** |
Extract | 1.18
Sync | 1.62 (= 176MBps)
Delete | 0.48
Sync | 40.41