Linux 6.6's new scheduler

A Fair Slice

© Photo by Diana Krotova on Unsplash


Article from Issue 301/2025

Linux 6.6 introduces the EEVDF scheduler, a next-generation CPU scheduling algorithm focused on fairness and responsiveness.

Linux kernel 6.6 introduced a major update to process scheduling by replacing the long-serving Completely Fair Scheduler (CFS) with the Earliest Eligible Virtual Deadline First (EEVDF) scheduler [1]. The new scheduler fulfills the same role as CFS (dividing CPU time among processes) but does so more efficiently, with lower latency and more predictable behavior. EEVDF is based on an algorithm from a mid-'90s research paper and was merged under the guidance of veteran kernel developer Peter Zijlstra. For those managing Linux systems (from on-premises servers to cloud environments), EEVDF brings potential performance boosts and smoother multitasking without requiring fundamental changes to userspace software. I'll explore how EEVDF works, how to configure and optimize it, and why it matters for modern Linux distributions and workloads.

Unlike its predecessor, EEVDF takes a more algorithmic approach to fairness and responsiveness, removing many of the ad hoc heuristics and tuning knobs that CFS relied on. The design ensures that tasks that haven't gotten their fair share of CPU are automatically favored, while tasks that have overused the CPU are gently throttled back in subsequent scheduling decisions. This clean, proportional-share mechanism improves latency for tasks that CFS might inadvertently starve or delay, all without the need for complex tuning parameters. In practical terms, system administrators and developers should see more consistent task scheduling and fewer edge-case issues. The kernel developers caution that a few specific workloads could see initial performance regressions under EEVDF, but these are expected to be rare and are being addressed in follow-up patches.

Why the Scheduler Changed

CFS had been the default scheduler since Linux 2.6.23 (2007), so replacing it was a big deal. The main motivation was to eliminate the fragile heuristics and guesswork CFS used to handle various workloads, especially latency-sensitive tasks. CFS implemented fairness by tracking each task's virtual runtime and trying to give equal CPU time to all tasks of equal priority. However, to accommodate interactive vs. batch behavior, CFS accumulated many tunable parameters and heuristics. These required careful balancing and could mispredict some scenarios, leading to suboptimal performance or responsiveness. Administrators might recall tweaking CFS parameters or using tools such as nice/renice to influence scheduling, which sometimes felt like a black art in multitenant or high-load systems.

EEVDF simplifies this by using a formal proportional-share model. It introduces two key concepts to the Linux scheduler: lag and virtual deadlines. Every task still has a virtual runtime (adjusted by weight/priorities similar to CFS), but EEVDF computes each task's lag as the difference between the CPU time it should have received (ideal fair share) and what it actually got. Tasks that have a positive lag have not gotten their fair share and are marked eligible to run, whereas tasks with negative lag (meaning they ran more than their fair share recently) become temporarily ineligible. Ineligible tasks will remain off the CPU until their negative lag decays back to zero (their entitlement "catches up"), at which point they become eligible again. By doing this, EEVDF ensures strict fairness: No task gets more than its entitled slice over time, and any task that fell behind is prioritized next. This behavior is built-in, not a heuristic, and it's grounded in the math of virtual time accounting.
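The lag bookkeeping described above can be sketched in a few lines. This is a deliberately simplified toy model in Python, not the kernel's implementation (which uses weighted virtual time and fixed-point arithmetic); the task names, weights, and the lags helper are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weight: int        # derived from the nice value; higher weight = larger share
    service: float     # CPU time (ms) actually received so far

def lags(tasks, elapsed):
    """Compute each task's lag: ideal fair share minus actual service.

    A task's ideal share of `elapsed` CPU time is proportional to its
    weight relative to the total weight of all runnable tasks.
    """
    total_weight = sum(t.weight for t in tasks)
    return {t.name: elapsed * t.weight / total_weight - t.service
            for t in tasks}

# Three equal-weight tasks after 30 ms of contended CPU time:
tasks = [Task("A", 1024, 15.0),   # ran more than its 10 ms share
         Task("B", 1024, 10.0),   # exactly on target
         Task("C", 1024, 5.0)]    # fell behind
lag = lags(tasks, 30.0)
eligible = [t.name for t in tasks if lag[t.name] >= 0]
# A's lag is negative (ineligible until it decays back to zero);
# C's positive lag marks it as owed CPU time.
```

In this miniature model, task A sits out until the others catch up, which is exactly the built-in throttling behavior the article describes: no tunable needed.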

On top of fairness, EEVDF addresses latency requirements via the concept of a virtual deadline for each task. The virtual deadline is essentially the time by which a task should run next to maintain fairness, calculated from the task's eligible time plus a slice of CPU time. Importantly, EEVDF incorporates a second-priority metric often called latency sensitivity. A latency-sensitive task is assigned shorter time slices, which naturally gives it earlier virtual deadlines. EEVDF then uses an "earliest virtual deadline first" policy among eligible tasks, meaning the task whose virtual deadline is soonest is run next. In effect, tasks that don't need a lot of CPU but do need quick responses get to run sooner (more frequently, in small bursts), without being allowed to consume more total CPU time than their fair share. This is a clean solution to a problem that CFS handled with partial measures. Under CFS, one had to either run latency-sensitive processes at higher priority (lower nice value), which also gave them more total CPU, or rely on the "latency tolerance" patches. With EEVDF, a task can be responsive without being unfair: It gets CPU quickly when needed, but it won't eclipse other tasks in total usage over a long interval. The Linux kernel's adoption of EEVDF was aimed specifically at avoiding the need for the old "latency nice" patchwork by integrating responsiveness into the core scheduler algorithm.
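The pick logic itself reduces to "among eligible tasks, take the earliest virtual deadline." Here is a toy Python sketch of that selection step; the task names, slice values, and the pick_next helper are illustrative assumptions, not kernel defaults.

```python
def pick_next(tasks, lag):
    """Earliest-eligible-virtual-deadline-first, in miniature.

    Each eligible task's virtual deadline is its eligible time plus its
    slice. A latency-sensitive task requests a shorter slice and so
    gets an earlier deadline (it runs sooner), not more total CPU.
    """
    eligible = [t for t in tasks if lag[t["name"]] >= 0]
    return min(eligible, key=lambda t: t["eligible_time"] + t["slice"])

tasks = [
    {"name": "batch",  "eligible_time": 0.0, "slice": 3.0},
    {"name": "audio",  "eligible_time": 0.0, "slice": 0.5},  # latency-sensitive
    {"name": "greedy", "eligible_time": 0.0, "slice": 3.0},
]
lag = {"batch": 0.0, "audio": 0.0, "greedy": -2.0}  # greedy overran its share

print(pick_next(tasks, lag)["name"])  # audio: shortest slice, earliest deadline
```

Note that "greedy" is skipped outright because its lag is negative; "audio" wins among the eligible tasks purely because its shorter slice yields the earliest deadline, which is the mechanism that replaces the old latency-nice patches.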

Performance Impact

One of the big questions with any new scheduler is "how does it perform in real-world workloads?" Early testing and benchmarks of Linux 6.6 with EEVDF have shown promising improvements. Phoronix test results on a 128-core EPYC 9754 "Bergamo" server revealed dramatic improvements in various database and analytics workloads after upgrading from Linux 6.5 (CFS) to Linux 6.6 (EEVDF). These are substantial improvements for a single kernel version bump.

It's worth noting that the biggest wins with EEVDF tend to appear in situations with many runnable tasks contending for CPU. Large servers, containers in cloud deployments, or CI/CD runners with numerous parallel jobs stand to benefit from the scheduler's more consistent time distribution. On typical desktop PCs or low-core-count VMs, the differences might be less pronounced. In fact, the Phoronix testing noted that on lower core count systems (such as typical developer laptops or small cloud VMs), they didn't observe dramatic performance changes in everyday tasks. This suggests EEVDF's fairness algorithm shines when the scheduling problem is complex (lots of tasks/cores), whereas for light usage you might not notice a change, which is a good thing (it means there's no regression in the common case).

What about latency and responsiveness? By design, EEVDF provides latency-sensitive tasks with more immediate access to the CPU, and initial results back this up. The EEVDF algorithm's consistency also helps with tail latency, the worst-case delays, which is crucial for SLAs in enterprise and cloud services. All these improvements come while maintaining fairness: Unlike simply nice-ing important processes, EEVDF doesn't let them run away with the CPU forever, so background tasks still get their due share.

Using EEVDF on Linux Systems

The good news for system administrators is that there's nothing special you need to do to "enable" EEVDF. If you're running a Linux 6.6 or newer kernel, EEVDF is the default process scheduler out of the box. In other words, once you upgrade your kernel, all normal processes are automatically handled by EEVDF instead of CFS. There is no runtime toggle or GRUB kernel option for switching CPU schedulers on the fly. Linux doesn't support swapping out the scheduler at runtime; it's a core part of the kernel. So, to start using EEVDF, the primary step is to run a kernel version 6.6 or later. Many popular Linux distributions are incorporating EEVDF as they rebase on newer kernels:

  • Fedora was quick to adopt it (Fedora 39+ was released with the 6.6+ kernel, thereby including EEVDF by default).
  • Arch Linux and other rolling releases pulled in kernel 6.6 soon after its release, so users got EEVDF via routine updates.
  • Ubuntu 23.10 (released October 2023) was just a bit too early for 6.6, but Ubuntu 24.04 LTS includes a 6.8 kernel, which means EEVDF is present in the default kernel.

From a configuration standpoint, migrating to EEVDF is straightforward because most existing settings carry over. Any custom configurations you had for CFS (such as CPU affinity, cgroup CPU quotas, nice levels) continue to work under EEVDF. The scheduler respects Linux's normal interfaces: Nice values still determine weight (CPU share) as they did before, and realtime policies are unchanged and take precedence over EEVDF for those privileged tasks. If you had CPU bandwidth control set up via cgroups, that mechanism is still in place. Internally, the kernel's CFS bandwidth controller now works with EEVDF. In fact, one of the final patches of the EEVDF rollout was explicitly to fix up the interaction with the cgroup bandwidth limiter, ensuring that per-cgroup CPU quotas and limits continue to function correctly under the new scheduler. This means that in multitenant environments (such as cloud VMs or containers) you can upgrade to EEVDF without losing the ability to enforce CPU limits for security or quality-of-service reasons.
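To see what carries over in concrete terms: cgroup v2 bandwidth limits live in a cpu.max file containing a quota and a period in microseconds, and that format is unchanged under EEVDF. The small helper below (a hypothetical name, written for illustration) converts such a value into the CPU fraction a group may consume.

```python
def cpu_max_fraction(cpu_max: str):
    """Parse a cgroup v2 cpu.max value: "<quota> <period>" in microseconds.

    Returns the fraction of one CPU the group may use per period, or
    None when the quota is "max" (unlimited).
    """
    quota, period = cpu_max.split()
    if quota == "max":
        return None
    return int(quota) / int(period)

# e.g. after: echo "50000 100000" > /sys/fs/cgroup/mygroup/cpu.max
print(cpu_max_fraction("50000 100000"))  # 0.5: half of one CPU
print(cpu_max_fraction("max 100000"))    # None: no limit
```

On a live system you would read the string from /sys/fs/cgroup/<group>/cpu.max; the arithmetic is the same either way, before and after the EEVDF switch.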

How can you verify that EEVDF is active on your system? Because the scheduler is built into the kernel, it doesn't announce itself explicitly via a command-line utility. You can infer it from the kernel version and build info. Running uname -r will show your kernel version; if it's 6.6 or higher and you're using a standard distro kernel, you can be confident EEVDF is in use (because CFS was removed and replaced by EEVDF code in those versions). Another clue is to check for the presence (or absence) of certain scheduler tunables in /proc/sys/kernel. Many of CFS's tunable parameters remain visible for compatibility, but some might not have the same effect. For example, on a 6.6 kernel, if you run

sysctl -A | grep sched

you will still see entries such as kernel.sched_autogroup_enabled and kernel.sched_cfs_bandwidth_slice_us; the names haven't changed (the kernel interfaces often retain cfs in their names for continuity). The value of sched_autogroup_enabled (which manages automatic task grouping for interactive shells) will likely be 1 on a desktop distro, which is fine. Autogrouping works in conjunction with EEVDF just as it did with CFS, grouping tasks by session to improve desktop interactivity.
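The version check can also be scripted. The sketch below parses an uname -r style release string and applies the "6.6 or newer mainline kernel means EEVDF" rule of thumb from the text; the has_eevdf helper name is made up for this example, and the heuristic says nothing about vendor kernels that backport or omit the scheduler.

```python
import platform

def has_eevdf(release: str) -> bool:
    """Heuristic: mainline kernels 6.6+ ship EEVDF as the default
    fair-class scheduler (the CFS pick logic was replaced in 6.6)."""
    major, minor = release.split(".")[:2]
    # Strip any non-numeric suffix from the minor part, e.g. "6-arch1".
    minor = "".join(ch for ch in minor if ch.isdigit())
    return (int(major), int(minor)) >= (6, 6)

print(has_eevdf("6.6.0"))             # True
print(has_eevdf("6.8.0-45-generic"))  # True (Ubuntu 24.04 style string)
print(has_eevdf("6.5.13"))            # False
print(has_eevdf(platform.release()))  # check the kernel you are running
```

Comparing (major, minor) as an integer tuple avoids the classic string-comparison trap where "6.10" would sort before "6.6".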

