Zack's Kernel News
The Linux kernel mailing list comprises the core of Linux development activities. Traffic volumes are immense, often reaching 10,000 messages in a week, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls to take on this task is Zack Brown.
Saving Power With Idle CPUs
Arun R. Bharadwaj is working on CPUidle, a project to make sure that idle CPUs on a given PowerPC system are put into the most appropriate power usage state in a clean and efficient way. For example, it's not necessarily obvious whether a CPU is going to be idle long enough to warrant putting it into a very low power state, from which it would take longer to make it available to the system again if it's needed. CPUidle relies on various heuristics to guide it in making that selection. CPUidle currently chooses between only two idle states – a snooze or a full-fledged nap. No kidding, that's what they're called. CPUidle is the kind of project that starts off relatively simply and then gradually introduces additional idle states to accommodate newer and newer processors, with more and more complicated heuristics used to choose from among them. Typically one might expect such a situation to become very crufty and esoteric over time, until some freak like Alan Cox comes along and completely redoes it to make it much simpler and better. Alternatively, multiple freaks could implement competing improvements, leading to no agreement and tons of fighting until Linus Torvalds does it his own way and the fighting subsides again. CPUidle seems like that type of project to me.
VMware Deprecates Paravirtualization Features
VMware has decided to stop supporting VMI, its paravirtualization technique, because virtualization features in hardware are becoming the default on all new systems. The announcement from Alok Kataria of VMware included benchmark results showing that hardware virtualization outperformed the VMI implementation. The reason for his post was to ask for advice about the best way to retire the VMI code, which had already been accepted into the kernel.
Chris Wright explained that typically there would be a deprecation phase, listing the feature in Documentation/feature-removal-schedule.txt and giving run-time alerts that inform the user they're using a deprecated feature. But Jeremy Fitzhardinge pointed out that VMI wasn't so much a feature as an optimization, and beyond the slight speed difference, users would probably not even notice its absence. This made sense to Chris, and because the code was very specific to VMware, he didn't think it would be essential to follow normal deprecation procedures. Also, because Jeremy pointed out that removing VMI would be a simple matter of taking out a couple of files, as opposed to making massive changes in many different places, Greg Kroah-Hartman felt it would be fine to just go ahead and do that.
However, just as Alok posted the patch that would do this, Ingo Molnár objected that they really should observe a proper sunset period. He pointed out that most VMware users weren't yet benefiting from the hardware improvements that made VMI obsolete. Until they were, he wanted to keep the VMI optimization available, even though it wasn't as good as the hardware versions that VMware predicted would be ubiquitous in 2011.
Alok was fine with this and posted patches to go through the regular deprecation process. But just as he did so, H. Peter Anvin said it seemed much too early to consider this. Alok's patch was for 2.6.32; Peter felt it would be better to wait until the end of 2010 or (he estimated) kernel 2.6.37.
Eventually a deprecation plan emerged, involving a very gradual approach that would leave VMI in the kernel until it could be assumed that users had migrated their hardware over to systems with hardware virtualization features.
VMware acted as a good citizen throughout this deprecation and kernel development process. Open source development often involves this kind of zeroing in on a solution, based on a number of participants' sense of what should be done.
New TTY Maintainer
Greg Kroah-Hartman has taken over maintainership of the TTY layer, which had previously been orphaned. He won't be using Git for development, however; he's chosen to rely on quilt, Andrew Morton's patch management tool. The TTY layer handles all serial connections and is involved, basically, any time you have a shell prompt. Important elements of the kernel like this don't tend to change much over time, but they should be maintained anyway.
Shared KVM Maintainership
Avi Kivity recently announced that he'd taken on Marcelo Tosatti as co-maintainer of the KVM code. In a departure from the way these things usually happen, they've decided to split up the maintainership duties week by week, with Avi handling incoming patches one week, then Marcelo doing it for a week, and so on. This would leave each of them free to focus on their own coding projects (or as Avi enthusiastically put it, go to the beach) on their off weeks. KVM lets users create any number of virtual machines on a single system, each of which can run Linux or Windows or provide a variety of other environments.
Git Survey Results
Jakub Narebski recently announced the results of the 2009 survey of Git users. The graphical results are available at http://tinyurl.com/GitSurvey2009Analyze. Among almost 4,000 participants, most found Git easy to learn and use. The biggest block of users used Git for managing code, and some used it for backup and configuration files. The vast majority relied on a binary package of Git rather than compiling the source themselves. As one might expect, the majority of users used Git under Linux, but a lot also used it under Mac OS X, and a surprising 22% of users also used it under Microsoft Windows. Virtually everyone relied on the command-line interface, as opposed to front ends like gitk, and most users either hosted their Git repositories themselves or used GitHub.
The most commonly used Git commands are git add and git push. Not much surprise there. A full 94% of Git users are happy with the tool, although that might not be saying much: if they weren't happy, they probably wouldn't be using it enough to answer a survey about it. As for the most desired new features, users were fairly evenly split among a better user interface, better documentation, and better auxiliary tools.
The full analysis is fun to look at and has more information than the quick digest I'm posting here. I can't help but wonder whether statistician Nate Silver would have any special comments to make. To me, these results seem to say that Git has definitely entered mainstream use, as opposed to being just a developer's tool.
The 22% of users that use Git under Windows is surprising, partly because my understanding is that Git's tremendous speed advantage doesn't translate well under Windows. If anyone's actually done a comparison and wants to share a graph or two, please send them to me and I'll publish them here under your name next time.
The 94% approval rating might go down over time as more users are forced to adopt Git in a work environment, when they'd rather stick with more familiar tools. Git, after all, still lacks some sophisticated features that other revision control systems have. One of Git's main claims to fame is speed, and some users might find that to be less important than some of the fancier bells and whistles that haven't yet been implemented.
The BFS Process Scheduler
Con Kolivas has been working on his own BFS (Brain F* Scheduler) as an alternative to the existing kernel process scheduler, and recently there was a fight about it. The situation is interesting because the existing scheduler, while trying to ensure good performance on smaller machines, also supports systems with thousands of CPUs. To support such large systems, it apparently makes sacrifices with regard to latency on smaller, desktop-oriented computers. For example, video playback might freeze for a fraction of a second, or a game of Quake might act a little jittery.
Clearly, Ingo Molnár, the maintainer of the in-kernel scheduler, wants to have good performance on smaller systems too, but it's a tough needle to thread, especially because Linus Torvalds doesn't want to have multiple schedulers in the kernel that target systems of different sizes.
Con's work on BFS, therefore, was not an effort to write a scheduler that might be incorporated into the official source tree. Because he didn't like the latency issues he saw on smaller systems and didn't like the complexity of the in-kernel scheduler, he decided to write BFS as a manifestation of everything he had come to believe about schedulers over the years. So the discussion on the linux-kernel mailing list was a little unusual in that it concerned a piece of code that was not being submitted and whose author had no plans to try to get included.
Still, various folks, starting with Ingo, ran comparisons between BFS and the in-kernel scheduler. He ran his tests on hardware that Con had said would be at the upper bound of where BFS would still work well – a dual quad-core system with hyperthreading. The tests confirmed that BFS didn't work better than the in-kernel scheduler on a system that size.
A bunch of other folks ran benchmarks and did other sorts of testing. Nikos Chantziaras found that when using BFS on his Intel Core 2 Duo E6600, MPlayer no longer dropped frames when he dragged the window around on the desktop and the LMMS sound synthesizer no longer crackled and popped during playback. Additionally, Doom 3 and similar games no longer froze for fractions of a second during play. These things were difficult to test numerically, but he did clearly notice the improvement under BFS.
Con himself had no interest in the debate. He replied only once during the discussion to say that he didn't like benchmarks that relied too heavily on abstract ideas and not enough on real-world usage; essentially, he wanted the folks on the mailing list to drop the whole thing and let him code in peace.
Undoubtedly, people will continue to pay attention to the progress of BFS, but Con is right that, for a significant number of reasons, the code might never be an official part of Linux. Con's disdain for supporting systems with thousands of processors and Linus's reluctance to include multiple schedulers in the kernel relegate BFS to the very geekiest members of the Linux user community, at least for now.
Visualizing Kernel Sources
Taro Okumichi has written a tool that converts kernel source code into HTML and lets users easily expand and view macros and include statements and examine structs. He used a lot of AJAX-y features to make the whole thing easy to use. Florian Mickler was super-impressed with this and encouraged Taro to share his code. Américo Wang was also really impressed, and Bill Davidsen was interested in seeing more features.
Taro posted a link to his tool, at http://cfw.sourceforge.net/htmltag/init/main.c.pinfo.html, but at the time of this writing, it led to a dead page. Taro hasn't posted any other information yet, but this seems like a pretty cool tool. Hopefully he'll keep working on it and share it with folks again.