Zack's Kernel News

Article from Issue 269/2023
Author(s): Zack Brown

Chronicler Zack Brown reports on the little links that bring us closer within the Linux kernel community.

Minimum Version Numbers for Build Tools

Long ago, when this ancient planet was not yet quite so ancient, the Linux kernel refused to support newer versions of the GNU C compiler (GCC). Compiler development proceeded, and Linux would compile only on a specific older version because Linus Torvalds disagreed with some of the new behaviors going into GCC. In other words, it was war.

Today the situation is much different, and the Linux kernel now identifies the minimum version of the tools it requires, rather than the maximum. And today, as Masahiro Yamada pointed out recently, Linux supports GCC version 5.1 and later.

However, Masahiro noted that version 2.23 of GNU Binutils – the assembler, the linker, and other tools essential for building anything with GCC – was now 10 years old and showing its age. Masahiro wanted to raise Linux's minimum supported version to Binutils 2.25, which was released just around the time of GCC 5.1 (seven years ago). He posted a patch to make that change.

It's not a meaningless change! Earlier tools lack support for features Linux actually uses. As a result, the kernel build system has to check for older tools during compilation and, if it finds them, fall back on less efficient workarounds in place of those more modern features.

Linus said Masahiro's patch seemed sane and accepted it; Nick Desaulniers also liked the patch. There were no dissenting voices.

Masahiro also took the patch into other kernel source trees and suggested that various subsystems and architectures could start to clean up their code, the idea being that they could rip out all the code supporting the older, less full-featured versions of Binutils and simplify the build system somewhat.

This is an ongoing process, and it's not completely free of controversy. If you look at the situation from another angle, the developers are making it harder to compile Linux on absolutely all existing systems. Systems stuck with version 2.23 of GNU Binutils without the ability to upgrade might have a problem compiling newer kernels.

So in fact, it was significant that Masahiro pointed out that Binutils 2.25 was released at around the same time as GCC 5.1: It's very unlikely that anyone with access to GCC 5.1 would be stuck with a significantly older version of Binutils. And the decision to require at least GCC 5.1 was itself made fairly carefully. At the time Linus decided to update the requirement, the previous minimum version (GCC 4.9) was already producing compiler errors when building the kernel, so it seemed very unlikely that anyone was actually using it. Another reason for moving to GCC 5.1 was Linus's decision to treat compiler warnings as errors (itself an extremely controversial change at the time), which posed a huge challenge because of all the warnings GCC 4.9 produced in addition to the errors. Rather than fix all those errors and warnings for such an old compiler that probably no one was using, Linus chose to raise the minimum version number.

It's always a judgment call. Other operating system projects might insist that older tools must be supported under all circumstances. They would tolerate any amount of bloat in their own kernel source tree rather than force users to upgrade even very old tools. The approach Linus takes is a bit of a compromise with that idea and allows the kernel source tree to continually become cleaner and better organized over time – something very valuable when new developers become interested in contributing.

Intel and ARM Security Issues

Eric Biggers all but accused Intel and ARM of trying to downplay security vulnerabilities in their CPUs. He said that they "recently published documentation that says that no instructions are guaranteed to be constant-time with respect to their data operands, unless a 'data independent timing' flag [...] is set."

Here's the thing about that. An attacker can sometimes gain useful information about something like a password simply by observing how much time it takes your code to reject a given login attempt. If the rejection is quick, it could mean the first characters of the guess were wrong. If the rejection is microscopically slower, it could mean the first characters were correct. The attacker can then keep those earlier characters, make a new guess for the later characters, and try again, repeating the process until the whole password has been recovered.

The idea of constant-time execution with respect to data is that the CPU will take the same amount of time to perform a given operation regardless of the values of its data operands, which goes directly toward preventing exactly that sort of attack.
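To see why that matters, consider this minimal userspace sketch (invented for illustration, not taken from the kernel or from the mailing list thread) that contrasts a password check that bails out at the first wrong byte with one that always looks at every byte:

    #include <stddef.h>

    /* Leaky: returns at the first mismatching byte, so the running time
     * reveals how many leading bytes of the guess were correct. */
    int leaky_compare(const char *secret, const char *guess, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (secret[i] != guess[i])
                return 0;
        }
        return 1;
    }

    /* Constant-time at the C level: always touches every byte and
     * accumulates the differences, so the loop's duration does not
     * depend on where the first mismatch occurs. */
    int constant_time_compare(const char *secret, const char *guess, size_t len)
    {
        unsigned char diff = 0;

        for (size_t i = 0; i < len; i++)
            diff |= (unsigned char)(secret[i] ^ guess[i]);

        return diff == 0;
    }

Even the second version, though, is only as constant-time as the instructions it compiles down to. The point of the documentation Eric cited is that Intel and ARM no longer promise that those instructions take the same time for all operand values unless the new data independent timing flag is set.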

Regarding Intel and ARM's recent statements, Eric continued, "this is a major problem for crypto code, which needs to be constant-time, especially with respect to keys. And since this is a CPU issue, it affects all code running on the CPU. While neither company is treating this as a security disclosure, to me this looks exactly like a CPU vulnerability."

He gave links to the documentation posted by ARM and Intel. That documentation actually said that because the CPUs didn't guarantee constant-time execution by default, they implemented flags that could be set by the operating system to force the CPU to perform constant-time execution.

Eric credited Adam Langley with bringing the whole issue to his attention and asked, "I'm wondering if people are aware of this issue, and whether anyone has any thoughts on whether/where the kernel should be setting these new CPU flags."

Peter Zijlstra heaved a deep sigh, pointing out that these "CPU flags" are actually registers that are specific to a given model of CPU – in other words, the kernel would have to deal with them differently and specifically for each separate CPU model that implemented the constant-time execution flag.

Peter did suggest that it would be better to set and unset the flags as needed by specific processes, rather than globally setting a CPU to always use constant-time execution. That way, code that didn't need the added security protection wouldn't have to tolerate the slowdown.
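To make the idea concrete, here is a rough sketch of what such per-task switching might look like on x86. Everything in it except the kernel helpers rdmsrl(), wrmsrl(), and test_tsk_thread_flag() is an invented placeholder: The MSR address, the bit position, the TIF_NEED_CONST_TIME flag, and the function itself don't exist in the kernel, precisely because, as Peter noted, the real register and bit differ from one CPU model to the next.

    #include <linux/sched.h>
    #include <linux/bits.h>
    #include <asm/msr.h>

    /* Invented placeholders: the actual register, bit, and per-task
     * flag would have to be defined per CPU model. */
    #define MSR_DATA_INDEP_TIMING_CTL   0x0
    #define DATA_INDEP_TIMING_BIT       BIT_ULL(0)
    #define TIF_NEED_CONST_TIME         31

    /* Called on context switch: turn constant-time execution on only
     * for tasks that asked for it, so everything else keeps full speed. */
    static void update_data_indep_timing(struct task_struct *next)
    {
        u64 val;

        rdmsrl(MSR_DATA_INDEP_TIMING_CTL, val);

        if (test_tsk_thread_flag(next, TIF_NEED_CONST_TIME))
            val |= DATA_INDEP_TIMING_BIT;
        else
            val &= ~DATA_INDEP_TIMING_BIT;

        wrmsrl(MSR_DATA_INDEP_TIMING_CTL, val);
    }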

Arnd Bergmann noticed that some parts of the kernel were already using the flags, and he listed some of the ways it was handled currently. For example, he pointed out some caveats such as "the bit is context switched on kernel entry, so setting the bit in user space does not change the behavior inside of a syscall." He also remarked that currently crypto code inside the kernel did not use the flag, even when available on the CPU.

Jeffrey Walton also remarked, "please make setting/clearing the bit available in userland. Libraries like Botan, Crypto++ and OpenSSL could benefit from it."

Meanwhile Jason A. Donenfeld offered the practical and security-conscious suggestion, "Maybe it should be set unconditionally now, until we figure out how to make it more granular." But he also mused, "I wonder, though, what's the cost of enabling/disabling it? Would we in fact need a kind of lazy-deferred disabling, like we have with kernel_fpu_end()?"

Eric replied, "I'd much prefer it being set unconditionally by default as well, as making everyone (both kernel and userspace) turn it on and off constantly would be a nightmare. Note that Intel's documentation says that CPUs before Ice Lake behave as if DOITM [Data Operand Independent Timing Mode] is always set."

He concluded, "I think the logical approach is to unconditionally set DOITM by default, to fix this CPU bug in Ice Lake and later and just bring things back to the way they were in CPUs before Ice Lake. With that as a baseline, we can then discuss whether it's useful to provide ways to re-enable this CPU bug / 'feature', for people who want to get the performance boost (if one actually exists) of data dependent timing after carefully assessing the risks."

Jason agreed with this, remarking, "It's actually kind of surprising that Intel didn't already do this by default. Sure, maybe the Intel manual never explicitly guaranteed constant time, but a heck of a lot of code relies on that being the case."

Dave also agreed, saying, "I'm in this camp as well. Let's be safe and set it by default."

At this point, a number of these people started discussing implementation details for both Intel and ARM, and the conversation got technical. The email thread ended fairly abruptly – presumably because people started trying out patches.

Removing Support for ICC

Masahiro Yamada posted a patch to remove support for Intel's C compiler from the Linux kernel. He pointed out that nobody had complained when patches were accepted into the kernel build system that explicitly left Intel's C compiler out. For example, "init/Kconfig defines CC_IS_GCC and CC_IS_CLANG but not CC_IS_ICC, and nobody has reported any issue." He concluded, "I guess the Intel Compiler support is broken, and nobody is caring about it."

Linus Torvalds accepted the patch and replied, "I don't think anybody ever really used icc. I can't recall having heard a single peep about icc problems, and I don't think it's because it was *so* good at emulating gcc that nobody ever hit any issues."

Harald Arnesen also pointed out that Intel's compiler binary itself reported that it was a deprecated tool. He ran icc -v and got the message, "remark #10441: The Intel(R) C++ Compiler Classic (ICC) is deprecated and will be removed from product release in the second half of 2023."

Arnd Bergmann affirmed that he also had recently considered removing support for Intel's C compiler. He remarked, "Intel have completely dropped their old codebase and moved to using LLVM, so my guess is that with the current releases it will behave the same as clang."

Dave Hansen, from Intel, also remarked, "I honestly can't remember seeing anyone actually use icc during my entire tenure at Intel. I'll ask around to see if there's any plausible reason to fix this up and keep it. But, I'm not holding my breath."

There was a bit more discussion, and Masahiro updated his patch to avoid removing certain things that related more to GCC and Clang than to ICC. But in the end, it looks as though support for Intel's compiler will indeed go away.

It was actually a fascinating project and originally represented a significant alternative to GCC, at the time the only such alternative that the kernel could actually use. And given that the kernel developers and the GCC developers have not always seen eye to eye, it seemed like a good idea at the time to have such a fallback available. The days of such conflicts between developer teams, however, seem to have passed. These days the kernel and GCC developers seem pretty much in lockstep with each other.
