Linux-Kongress 2009 Tuning Gathering

Oct 30, 2009

The Linux-Kongress is traditionally where kernel developers exchange ideas and advice about new features and enhancements. This year a number of speakers presented performance measurements and discussed how even more speed can be drawn out of Linux.

The annual gathering organized by the German Unix Users Group (GUUG) took place this year in Dresden, the German city on the river Elbe, October 27-30. Around 100 participants took part in the well-established event. After two days of tutorials, another two days of technical sessions offered 25 talks in two parallel tracks. The Linux Foundation's Ted Ts'o, in his keynote, preached a bit to the choir in listing the good reasons behind the free Linux development model. Felix "Fefe" von Leitner, on the other hand, presented some fascinating research results after examining a number of compilers for the machine code they produce from typical code fragments. He was surprised to find that the oft-derided GNU Compiler Collection (GCC) in many cases performs some clever optimizations.

Specialized compilers such as Intel's Compiler Suite (ICC) may be better at exploiting architecture-specific features such as vectorization, but they can't match GCC's tricks in the details. As von Leitner suggested wryly to Linux Magazine Online, "GCC as an open source project has practically unlimited resources in doctoral candidates at universities." All in all, he recommends that developers write understandable code rather than optimize it by hand: "The compiler gets the best results from clean code."
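To illustrate the point, a loop like the following is the kind of straightforward code a compiler rewards (a minimal sketch of our own, not one of von Leitner's actual test fragments): built with gcc -O3 -std=c99 on an SSE-capable x86 machine, GCC can auto-vectorize it to process several array elements per instruction.

    #include <stddef.h>

    /* Plain, readable scalar code: y[i] = a*x[i] + y[i].
     * The "restrict" qualifiers promise the compiler that the arrays
     * do not overlap, which is what lets it vectorize the loop. */
    void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }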

Kernel hacker Andi Kleen added that von Leitner's statement applies only to a limited extent to programs and the Linux kernel when it comes to development for steadily growing multi-core systems. Kleen, who was long responsible for the Linux x86-64 port, is now on Intel's payroll. He noted that today's CPU vendors get performance increases out of their processors far more from additional cores than from higher clock rates. So far, supercomputers running workloads such as weather simulations, which repeat the same computation in parallel, have profited most from this approach. But since systems ranging from desktops down to netbooks can also benefit from multiple cores, developers should, according to Kleen, take a stronger parallelization tack. Despite the many libraries Kleen mentioned to Linux Magazine Online, a lot of this work can still only be done by hand.
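One compiler-assisted approach available at the time is OpenMP, which GCC supports via -fopenmp. The following sketch (our illustration, not an example from Kleen's talk) spreads an independent loop across all available cores with a single pragma:

    #include <stdio.h>

    #define N 1000000

    static double data[N];

    int main(void)
    {
        double sum = 0.0;

        /* Each iteration is independent, so the OpenMP runtime can
         * split the loop across cores; the reduction clause merges
         * the per-thread partial sums safely at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            data[i] = i * 0.5;
            sum += data[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }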

Locks play a central role in parallel processing: they ensure that multiple cores cannot access a shared resource at the same time and end up reading or writing inconsistent data. Kleen emphasized the importance of implementing these locks as granularly as possible, for example by locking individual subtrees or branches instead of the entire tree, or instead of serializing every function that operates on the structure. Another problem is the communication overhead among the cores. In the worst case, processors spend their time unnecessarily waiting on cache lines that bounce back and forth between cores, and the overhead keeps growing.
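A common way to picture this granularity (a minimal sketch of our own using POSIX threads, not code from the talk) is a hash table with one mutex per bucket, so that threads touching different buckets never contend with each other:

    #include <pthread.h>

    #define NBUCKETS 64

    struct bucket {
        pthread_mutex_t lock;   /* fine-grained: one lock per bucket */
        int value;
    };

    static struct bucket table[NBUCKETS];

    void table_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++) {
            pthread_mutex_init(&table[i].lock, NULL);
            table[i].value = 0;
        }
    }

    void table_add(unsigned key, int delta)
    {
        struct bucket *b = &table[key % NBUCKETS];
        pthread_mutex_lock(&b->lock);   /* only this bucket is serialized */
        b->value += delta;
        pthread_mutex_unlock(&b->lock);
    }

With a single global lock, every table_add() call would serialize all threads; with per-bucket locks, contention occurs only when two threads hit the same bucket.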

Kleen meanwhile looks upon the kernel itself with some satisfaction: it already uses modern lock variants that put waiting tasks to sleep instead of letting them spin. Userland developers, however, still have a way to go in adopting them. Kleen's tips on implementing such locks should appear on the Linux-Kongress web pages as soon as GUUG organizer Wolfgang Stief posts the slides in the coming week.
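In userland, glibc's pthread mutexes already build on the kernel's futex mechanism, so a contended waiter sleeps in the kernel rather than burning CPU. The sketch below (our own, assuming glibc and its non-portable PTHREAD_MUTEX_ADAPTIVE_NP extension, not an example from Kleen's slides) additionally spins briefly before sleeping, in the spirit of the kernel locks Kleen described:

    #define _GNU_SOURCE
    #include <pthread.h>

    static pthread_mutex_t lock;

    /* Initialize a glibc "adaptive" mutex: on contention it spins for
     * a short while (cheap if the holder releases quickly), then falls
     * back to sleeping in the kernel via futex instead of busy-waiting. */
    void lock_init(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }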
