Getting the best performance from solid state drives on Linux

Minimizing Write Cycles

Avoiding unnecessary write cycles helps prolong the life of an SSD. One way to reduce writes is to move parts of the filesystem into memory using a RAM-based filesystem such as tmpfs.

Good candidates for relocating to memory are directories such as /tmp, /var/spool, /var/tmp, and sometimes /var/log. In the case of /var/log, you need to decide whether you need persistent logs or whether you are okay with discarding the log information when you shut down the computer. To relocate these directories to memory, add the entries from Listing 6 to the end of the /etc/fstab file. These entries mount the directories as tmpfs.

Listing 6

Logs in Memory
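The listing itself is missing from this copy of the article. The following fstab entries are a plausible reconstruction of what it showed; the mode options are assumptions chosen to match the usual permissions of these directories:

```
# mount volatile directories as tmpfs (contents are lost at shutdown)
tmpfs  /tmp        tmpfs  defaults,noatime,mode=1777  0  0
tmpfs  /var/tmp    tmpfs  defaults,noatime,mode=1777  0  0
tmpfs  /var/spool  tmpfs  defaults,noatime,mode=0755  0  0
tmpfs  /var/log    tmpfs  defaults,noatime,mode=0755  0  0
```

Only add the /var/log line if you can live without persistent logs, as the text above explains.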

 

Over time, the browser cache also causes many write cycles. The /run/user directory, which resides in RAM by default, is a useful candidate for a browser cache location.

In Firefox or Iceweasel, enter about:config in the address bar and confirm the warning prompt. Now right-click anywhere on the list and create a new entry via the context menu (New | String). Enter browser.cache.disk.parent_directory as the name. The content you need for the new entry is /run/user/1000/firefox-cache. Change 1000 to your user ID, which the id -u command outputs on your system. After you restart the browser, it will write its temporary data to RAM.
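Because the value depends on your numeric user ID, a one-line shell command can generate the exact string to paste into about:config (the firefox-cache directory name is just the example used above):

```shell
# print the value for browser.cache.disk.parent_directory,
# substituting the current user's ID from `id -u`
printf '/run/user/%s/firefox-cache\n' "$(id -u)"
```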

You need to do the same thing in Chrome or Chromium, but with slightly different steps. Here, you edit the browser's start menu entry, that is, the chrome.desktop, google-chrome.desktop, or chromium.desktop file in the /usr/share/applications/ directory.

Using an editor started with root privileges, add the cache directory to the Exec line via the --disk-cache-dir=/run/user/$UID/chrome-cache option (Listing 7). The next time you start Chrome, the browser will store its cache data in RAM.

Listing 7

Browser Cache in Memory
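The listing is missing from this copy; a plausible reconstruction of the modified Exec line in the .desktop file follows. The binary path and the %U argument are assumptions based on a typical google-chrome.desktop file:

```
[Desktop Entry]
Name=Google Chrome
# cache directory added per the instructions above
Exec=/usr/bin/google-chrome-stable --disk-cache-dir=/run/user/$UID/chrome-cache %U
Type=Application
```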

 

Browsers automatically create the required subdirectory in /run/user/. The same principle also applies to other browsers. However, if you use the system for long periods without rebooting, keep an eye on memory usage.

Setting the Pace

Reading and writing data on a disk is painfully slow compared with accessing main memory. Aligning the read/write heads on magnetic disks alone takes several milliseconds. The system therefore needs intelligent methods to reduce access times and optimize disk access.

The I/O scheduler is responsible for disk access in the Linux kernel. The scheduler combines and sorts file accesses so that the disk heads move as little as possible. The Completely Fair Queuing (CFQ) scheduler, which is typically the kernel default, does not offer optimal SSD support. You can easily tickle an additional 5 to 10MBps of transfer rate out of the SSD by replacing CFQ with the Deadline or NOOP scheduler.

You can see the current scheduler for the first disk, /dev/sda, in the file /sys/block/sda/queue/scheduler (see line 1 of Listing 8). You can change the scheduler without rebooting (line 3); however, the new setting will not survive a reboot.

Listing 8

Checking the Current Scheduler
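The listing is missing from this copy. The following session is a reconstruction with line numbers matching the references in the text (line 1 queries the scheduler, line 3 changes it); the output in line 2 is illustrative, with the active scheduler shown in brackets:

```
01 $ cat /sys/block/sda/queue/scheduler
02 noop deadline [cfq]
03 # echo deadline > /sys/block/sda/queue/scheduler
```

Line 3 must be run as root, because /sys/block/sda/queue/scheduler is only writable with root privileges.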

 

The easiest way to set the desired scheduler permanently is with Udev. Start an editor with root privileges and save the contents of Listing 9 as /etc/udev/rules.d/60-schedulers.rules. Udev reads the queue/rotational attribute to discover whether the disk is a traditional hard disk or an SSD, and then it automatically sets the desired scheduler.

Listing 9

Setting the Scheduler
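The listing is missing from this copy. The following rules are a plausible reconstruction of the 60-schedulers.rules file described above; the choice of deadline for SSDs and cfq for rotating disks follows the text, and the KERNEL match pattern is an assumption covering typical /dev/sd* devices:

```
# /etc/udev/rules.d/60-schedulers.rules
# SSDs (non-rotating media) get the deadline scheduler
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
# traditional hard disks keep cfq
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
```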

 

Kernel 3.17 comes with a new mechanism for SSDs connected via SATA. The kernel completely bypasses the conventional block I/O layer, and thus the scheduler and its weaknesses. This new mechanism is Multi-Queue Block I/O Queuing (blk-mq), which was first implemented for devices connected via PCIe in kernel 3.13 [8]. The mechanism did not cause any adverse effects in a two-month endurance test.

You can enable blk-mq without having to recompile the kernel. To do so, first create the /etc/modprobe.d/scsi-mod.conf file as root and add options scsi_mod use_blk_mq=1 as its content (the commands in Listing 10 will do this for you). If you query the active scheduler after a restart, you should see none as the result.

Listing 10

Enabling Block I/O Queuing
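The listing is missing from this copy. The commands below are a plausible reconstruction; the update-initramfs call is an assumption that applies to Debian-based systems, where the initial ramdisk must be rebuilt for the module option to take effect at boot:

```
# echo "options scsi_mod use_blk_mq=1" > /etc/modprobe.d/scsi-mod.conf
# update-initramfs -u
```

After a reboot, cat /sys/block/sda/queue/scheduler should report none, confirming that blk-mq has replaced the conventional scheduler.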

 

Benchmarks

You can easily see the improvement in read/write performance, particularly when many small files are being deleted.

In our lab, we extracted the Linux kernel source code, a total of 497MB in 36,706 small files. After unpacking, we flushed all temporarily stored data to disk with sync, deleted the unpacked source code, and again wrote the cache to disk using sync.
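The procedure above can be sketched as a short shell session; the kernel version and tarball name are assumptions, and time reports the duration of each step:

```
$ time tar xJf linux-3.17.tar.xz
$ time sync
$ time rm -rf linux-3.17
$ time sync
```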

The figures in Table 1 speak for themselves: With online discard, deleting many files takes a lot of time, around 40 seconds longer than with a batch discard, which you run manually or periodically via a script.

Table 1

SSD Performance

Action     Duration (secs)
-------------------------------
Without Online Discard
Extract    1.21
Sync       1.66 (= 172MBps)
Delete     0.47
Sync       0.17
With Online Discard
Extract    1.18
Sync       1.62 (= 176MBps)
Delete     0.48
Sync       40.41
