Zack's Kernel News

Speeding Up Background Buffered Writebacks

Jens Axboe reported that, "Since the dawn of time, our background buffered writeback has sucked. When we do background buffered writeback, it should have little impact on foreground activity. That's the definition of background activity… But for as long as I can remember, heavy buffered writers has not behaved like that."

He gave an example of trying to start a foreground process like Chrome while doing a background buffered writeback, noting that, "it basically won't start before the buffered writeback is done." He added, "… or for server-oriented workloads, where installation of a big RPM (or similar) adversely impacts database reads or sync writes. When that happens, I get people yelling at me."

Jens posted more examples demonstrating the problem and submitted a set of patches to address it, saying:

"We still want to issue big writes from the vm side of things, so we get nice and big extents on the filesystem end. But we don't need to flood the device with THOUSANDS of requests for background writeback. For most devices, we don't need a whole lot to get decent throughput.

This adds some simple blk-wb code that keeps limits on how much buffered writeback we keep in flight on the device end. The default is pretty low. If we end up switching to WB_SYNC_ALL, we up the limits. If the dirtying task ends up being throttled in balance_dirty_pages(), we up the limit. If we need to reclaim memory, we up the limit. The cases that need to clean memory at or near device speeds, they get to do that. We still don't need thousands of requests to accomplish that. And for the cases where we don't need to be near device limits, we can clean at a more reasonable pace. See the last patch in the series for a more detailed description of the change, and the tunable.

I welcome testing. If you are sick of Linux bogging down when buffered writes are happening, then this is for you, laptop or server. The patchset is fully stable; I have not observed problems. It passes full xfstest runs, and a variety of benchmarks as well. It works equally well on blk-mq/scsi-mq, and "classic" setups."
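The escalation logic Jens describes can be sketched as a toy model: background writeback gets a small in-flight request budget, and the budget is raised only when cleaning becomes urgent. (This is purely illustrative; the function and constant names below are invented for the sketch and are not the kernel's actual blk-wb API.)

```python
# Toy model of the blk-wb idea described above. All names and values
# here are illustrative assumptions, not the real kernel interface.

BACKGROUND_DEPTH = 8   # gentle default for pure background flushes
URGENT_DEPTH = 64      # raised limit when we must clean at device speed

def wb_queue_depth(sync_all=False, dirtier_throttled=False, reclaiming=False):
    """Pick how many writeback requests to keep in flight on the device."""
    if sync_all or dirtier_throttled or reclaiming:
        # WB_SYNC_ALL writeback, a task stalled in balance_dirty_pages(),
        # or memory reclaim: up the limit so cleaning keeps pace.
        return URGENT_DEPTH
    # Plain background writeback: a small budget is enough for decent
    # throughput and leaves the device responsive for foreground I/O.
    return BACKGROUND_DEPTH

print(wb_queue_depth())               # background flush stays gentle
print(wb_queue_depth(sync_all=True))  # sync writes get the full budget
```

The point of the design is that foreground readers never compete with thousands of queued background writes; the device only sees a deep writeback queue when some task genuinely needs memory cleaned at device speed.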

Dave Chinner threw a massive test-case at Jens's code and found that in some cases:

"The performance has dropped significantly. The typical range I expect to see once memory has filled (a bit over 8m inodes) is 180k-220k. Runtime on a vanilla kernel was 4m40s and there were no performance drops, so this workload runs almost a minute slower with the block layer throttling code.

What I see in these performance dips is the XFS transaction subsystem stalling *completely* – instead of running at a steady state of around 350,000 transactions/s, there are *zero* transactions running for periods of up to ten seconds. This coincides with the CPU usage falling to almost zero as well. AFAICT, the only thing that is running when the filesystem stalls like this is memory reclaim.

Without the block throttling patches, the workload quickly finds a steady state of around 7.5-8.5 million cached inodes, and it doesn't vary much outside those bounds. With the block throttling patches, on every transaction subsystem stall that occurs, the inode cache gets 3-4 million inodes trimmed out of it (i.e. half the cache), and in a couple of cases I saw it trim 6+ million inodes from the cache before the transactions started up and the cache started growing again."

Jens was unable to reproduce Dave's slowdown and asked Dave to try a couple of new patches on top of the original ones, to see whether he had guessed the cause correctly. The two of them went back and forth for a while, trying to reproduce the problem and figure out why Dave saw it while Jens didn't.

Meanwhile, Holger Hoffstätte also ran some tests, saying, "I've backported this series (incl. updates) to stable-4.4.x – not too difficult, minus the NVM part which I don't need anyway – and have been running it for the past few days without any problem whatsoever, with GREAT success." As he described it, "copying several GBs at once to a SATA-3 SSD (or even an external USB-2 disk with measly 40 MB/s) doodles along in the background like it always should have, and desktop work is not noticeably affected."

So, aside from Dave's performance issue, which appears to be real, there are at least two people seeing a solid improvement with Jens's code. In all likelihood, Dave and Jens will discover that Dave's issue won't totally kill the patch, so we can all look forward to faster background writes at some point soon.

The Author

The Linux kernel mailing list comprises the core of Linux development activities. Traffic volumes are immense, often reaching 10,000 messages in a week, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls to take on this task is Zack Brown.
