goto FAIL - Free and Open Source versus Closed Source Proprietary Code


Paw Prints: Writings of the maddog

Apr 19, 2014 GMT
Jon maddog Hall

The events of the past couple of weeks with the bug in OpenSSL exposing many websites and other communications are indeed unfortunate. Not only has it caused a lot of extra work and expense for those who run Internet servers and had to update code and replace certificates, but it has forced many end users to spend time doing something they usually avoid: updating their passwords. Forcing end users to update their passwords is both bad news and good news, which I will discuss later.

However, the OpenSSL bug has also generated an abundance of articles and TV reports about the validity of Open Source code and (once again) questions of whether Open Source is more secure or less secure, higher quality or lower quality, than closed-source, proprietary code. News people (most of whom have never written a piece of code more complex than “Hello World”) have been contacting “elder statesmen of Open Source”, asking them about the “many eyes” that are supposed to be looking at Free and Open Source Software (FOSS) and talking about the ability of “volunteers” to generate production-level code compared with “paid professionals”. We have all heard the arguments about how FOSS developers take more time and care to develop their code because they do not want to be embarrassed by it, and how closed-source companies use powerful tools and methodologies that keep their code pristine over many years.

Here is my statement: There is no magic. The “many eyes” do not help if no one is looking. However, there is the ability to look and the ability to patch the FOSS code when you want or need to do so.

I have worked for many years in various development environments, both with Free and Open Source Software and with closed-source, proprietary software.

I have seen FOSS developers write phenomenal programs using some of the latest techniques for developing code, since much of the study of how to do this comes from academics working on exactly these issues. On the other hand, I have seen atrocious code written by someone and “tossed over the wall”, even though its source was visible to everyone.

I saw engineers at Digital Equipment Corporation (DEC) write closed-source, proprietary code and submit it to the field-test code pool when it was obvious that the engineer had never tested it even once; there was absolutely no working path through the code. A "professional" (and I use the term loosely) had not done their job.

Yes, some closed-source companies do a great job of peer review of code, and in some FOSS projects the project leader or committing engineer reviews the code for both correctness and maintainable style. In both cases bugs have been found and eliminated, but there is no guarantee that the code is “bug free”, no matter which environment you come from.

Some of the writers pointed out that the OpenSSL team of coders is small, with one “full time” person and several “volunteers”. I will point out that even in large companies, the number of people working on a particular part of a project may be very small, as few as one person. Quality of code is not necessarily a function of the size of the development team, nor of the size of the company. A large company's resources are often stretched across many projects.

Quality of code is also compromised by commitments to schedules, whether for customer benefit or for new revenue. At DEC there were often trade-offs in what functionality was included in order to ship the code “on time” to meet a customer need or to support a new processor. FOSS projects are not immune to these issues of “timeliness”, but my experience has shown that they are less likely to be negatively affected by them. “Ship no code before its time” is heard more often in the FOSS space than in closed source. However, even in FOSS there is sometimes a tendency to go in and make a "quick fix" without testing the entire module, particularly if you think the bug or the functionality is "small".

In FOSS, being a volunteer on a project like OpenSSL does not mean that you have no training in writing good code, or that your knowledge of security issues does not rival that of “professional” security companies. Being a volunteer simply means that you are not directly paid to develop that code. You may be a “paid professional programmer” for a company that depends on that code, or work for a company in some other field of software, but you wanted to work on that particular piece of code as a “volunteer”.

However, there are three areas where FOSS code DOES shine above closed-source proprietary code: “openness of design”, “mean time for a complete fix”, and the ability to audit.

Most FOSS projects keep their forums, mailing lists, and other design work very open and visible. The arguments back and forth about issues in the design and code are visible to anyone who wishes to review or participate in them. With closed-source projects these discussions and arguments are hidden behind closed doors, and discussions of scalability, efficiency, and security are never seen by the end users. Bug lists, including the priorities assigned to the bugs, are typically kept out of sight of the end-user customer.

The second case is demonstrated most recently by Apple's problem (ironically also in SSL code) with what became known as the “goto fail” bug. The bug was first identified in the iOS code and first patched there. However, the code was also in OS X, and it took Apple several WEEKS to fix that bug and deliver the patch, leaving their OS X customers exposed. Compare that to the OpenSSL issue, where the bug was identified, patched, and the fix distributed in both binary and SOURCE form to all parties that needed it. I stress the SOURCE part of the patch, since OpenSSL is used not just on one or two operating systems, as Apple's code was, and not just on three or four versions of those operating systems (the ones that Apple still supported), but on many different hardware architectures, many different operating systems (GNU/Linux, *BSD, Unix), many different distributions of each, and many different versions of those distributions, some of which are no longer actively supported by the companies that distributed them.
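For readers who have not seen it, the shape of Apple's bug is easy to illustrate. What follows is a minimal, hypothetical sketch in C with stand-in function names, not Apple's actual SSLVerifySignedServerKeyExchange() source: a duplicated “goto fail;” line jumps to the cleanup label unconditionally while the error code still says “success”, so the decisive check below it never runs.

    #include <stdio.h>

    /* Stand-in checks: the first two "succeed", the last one would "fail". */
    static int check_one(void)   { return 0; }
    static int check_two(void)   { return 0; }
    static int final_check(void) { return 1; }   /* e.g. the real signature check */

    static int verify(void)
    {
        int err;

        if ((err = check_one()) != 0)
            goto fail;
            goto fail;                  /* duplicated line: always jumps, err is still 0 */
        if ((err = check_two()) != 0)   /* never reached */
            goto fail;
        if ((err = final_check()) != 0) /* the decisive check is skipped entirely */
            goto fail;

    fail:
        return err;                     /* returns 0 ("success") despite the skipped check */
    }

    int main(void)
    {
        printf("verify() returned %d\n", verify());   /* prints 0: bad data accepted */
        return 0;
    }

Compiled as-is, this sketch reports success even though final_check() would have failed; a compiler warning for unreachable code, where enabled, would have flagged the two skipped checks.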

Many years ago there was a very bad worm penetrating Unix computers. DEC created a binary patch for their current Unix systems, but older systems that were out of software contract were not going to get a patch, even though many customers were still using them. After the FOSS community understood the problem, it took them five HOURS to design and make available the same patch in source code form.

More recently, the retirement of Microsoft's XP operating system forced many companies into extremely expensive contracts to keep the retired operating system supported. It was lucky that Microsoft had used a different code base for their SSL functionality, but it could just as easily have been some other bug causing an issue for XP users in the future, with millions of XP users having no way to get a fix.

The third place where FOSS shines is in what I will delicately call “international security and privacy”, or the "ability to audit". Unfortunately, in today's world we have learned that you cannot really trust anyone. Even closed-source code from your own country might be open to “spyware” of one type or another. FOSS code is often developed by people from many different countries working together. They know that people from many different countries and many different parts of the political spectrum are going to be viewing and working on this code. Putting in trap doors and Trojan horses on purpose will quickly ruin your reputation as a programmer if discovered. The same cannot be said of closed-source, proprietary code.

In closing, I would like to re-state what I have said for many years and continue to say: Open Source is neither inherently more secure nor less secure than closed source, but it typically has a faster “mean time to fix” than closed source. Secondly, if you have to depend on obscurity (not being able to see the source code) for your security, then you are not secure.

Finally, the “bad news” about the OpenSSL bug is that it happened in the first place. The “good news” is that it forced people to think about their password strategy for websites (for some people it may have been a long time) and to design a better password policy and mechanism for this more modern (and dangerous) “age of the web”.
