Perfect Storm Brewing: The Linux Desktop - Part Two
Paw Prints: Writings of the maddog
In “Perfect Storm Brewing: The Linux Desktop – Part One” I described some of the attributes that have kept desktop penetration low for Linux. Notice that not one of the issues was “ease of use” or “ease of porting applications”; all had to do with installed base and the volumes of systems currently being sold.
I had a couple of comments on that article that I will address briefly here, after boiling them down to the following list. I will not necessarily address them in order, but will jump around:
- Penetration of Linux in Asia is higher than in USA (and shelf space is actually paid for by some manufacturers)
- White boxes with Linux are actually a staggering number, but are frequently specific purpose
- Simple tasks under Linux require deep knowledge of command lines
- Configurations differ greatly from one distribution of Linux to another (YAST vs Synaptic, for example)
- Little and hard to find direct support
- Hard to find and missing device drivers
- Refusal of hardware manufacturers to differentiate (FOSS could be a way to do that)
- Lack of games
- Nothing to do with shelf space (by an Anonymous Coward), User Experience Sucks
The full comments are at the end of the previous article, if you wish to read them.
First of all, yesterday I installed yet another Linux distribution on a laptop, reduced the NTFS file system on the disk to make room for Linux (since the customer wanted to maintain a dual-boot system), added the distribution, set up the networking, installed all the software they wanted, and never touched a command line. Not once.
And I use Linux every day, on my desktop and my laptop, and for the most part I never pull up the terminal emulator to do anything. I will admit to not writing much code these days, but I do send and receive email, browse the web, create and edit videos and audio material, chat with people across the Internet, edit images, draw things, scan things in, and print things. I sometimes do other things, but these are mostly what I do.
I will admit that from time to time I bring up a terminal emulator and write a shell script to do something, but that is usually something gigantic, far out of the scope of most desktop users. No, I can not tell you what they typically are....unless of course you find me at a conference and buy me several beers. I can give you a hint.....I have a laptop that has a four-core (with hyper-threading) Intel processor in it, 16 GB of memory and a TB of disk. I call it “smaug” after the fire-breathing dragon in “The Hobbit”. From time to time I need smaug's power, and that is when I use the command line.....but Mom&Pop(TM) and most office users would never need smaug.
Now before I have zillions of people add comments onto this article about how they had to do this or that on the command line, I will remind you that the premise of my previous article was that volume and profit drives everything in commerce. If the volumes are there, then companies would write device drivers and put them into the Linux mainstream. And if volumes were even higher and customer demand said they had to be “open source”, the drivers would be open source. No magic.
Once those device drivers are in the mainstream, Value Added Resellers and OEMs would create systems that were pre-installed with a distribution and ALL of the hardware on that box would work. Devices purchased separately would not only have an MS Windows logo and Apple Logo on the box, but would have a little Tux Penguin there too.
But as long as most people buy closed source stuff “off the shelf” in large quantities (and do not buy FOSS products off the shelf or demand FOSS products in large quantities), then the FOSS products never appear on the shelf in the first place. Catch-22.
For problems #5 and #8, the same issues apply. People tend to write applications for systems that are shipping, are going to ship, or are anticipated to ship in large volume, and store owners stock programs only if they think people will pay for them in large numbers.
Applications and games bring about another issue in Free and Open Source Software, that of (and here I will make various people rage) Software Piracy.
It is this simple: in a world where people expect to make money selling software (or songs, or books) as a product, copyright infringement means that they typically do not achieve their goals. In a world where people write and use software to provide a service or a solution, not a product, they can typically meet their goals even if people “pirate” the software “product”.
And typically (not always) the “piracy rate” (or “copyright infringement rate”, if you will) is highest in countries that have the lowest per capita gross domestic product. On the one hand we tell the people in these countries that they need to use computers and be on the Internet in order to do business and make a living; on the other hand we charge them (or try to charge them) huge amounts of money for a “shiny plastic disk” (or now, a “download of bits”). So they put a patch over their eye and go “Arrrr”.
When I talk to them to tell them about Free Software, they smile and say (and I swear this is true), “maddog, all of our software is free”, meaning that they do not pay for it. This is typically true of desktop “PC” software used by small businesses and individuals. Larger businesses and governments typically have to pay, especially for larger, more expensive software packages such as those whose names rhyme with Boracle.
Games (in particular) have problems because of the small percentage of people who actually pay for their games, but as FOSS platforms grow in number and game companies change their business models to include advertising, action figures (love my “Angry Bird” plush toys and T-shirts) and data purchases, more and more games will start appearing on Linux systems.
As for problem #4, configuration differences have little effect on anyone who simply stays with one distribution. The system I installed yesterday told me there were something like 50K packages available to me. That distribution happened to be Debian based, but I will assume that one which is Red Hat or openSUSE based has a similar number of packages.
In fact, the hardest part of the above mentioned installation (for me) was trying to find the de-frag program for NTFS in Windows 7. It was in a different place than the one in Windows XP, and probably a different place than Windows VISTA, but I was never asked to save a Windows VISTA system, so I did not have to “de-frag” VISTA's file system. And this “broken user experience” was from different versions of the same OS from the same distribution manufacturer, a company called “Microsoft”.
Finally, for my “user experience sucks” friend, if “user experience” were what sold operating systems, then Apple would probably own 90% of the market and Microsoft 7%. After all, they are both closed source, and most people think they have a superior “user experience” with Apple.
I worked for a large computer company for a number of years, and brought out seven hardware/software combinations of Unix systems. Since the computer maker knows the sizes of the screens, the amount of memory, and the sizes of the disks it will support, it is relatively easy to create a “balanced” system. When you control both the hardware and the software completely, it is relatively easy to make things “play nice together”. When you control the distribution channels with a sledgehammer, you can make sure that all the support people are trained and that support is given. Channels that did not train their support people, or gave little or no support, did not get to sell our products.
Even Mr. Jobs said he did not want to have a 7” tablet because it might be too hard for application developers to make a good looking product that works well on a 10” AND a 5” AND a 7” screen. We will see if he was right about Apple developers.
And when you control the ecosystem, you can even keep applications off which do not follow your style guide.
Microsoft followed a different path. Their systems were not limited to Microsoft hardware, but ran on almost anyone's hardware, with device drivers written by literally anyone. One mistake inside the kernel of the OS, and it was “blue screen of death”.
Applications written by different people could end up on the system. Without an enforced style guide each application looked as if it was written by a different person.
But the systems were a lot less expensive than Apple, and most importantly, Microsoft let other people make money off Microsoft's software. Other companies would advertise Microsoft (“Microsoft Inside”, “Microsoft compatible”).....but in a lot of ways the user experience “sucked” compared to Apple.
Despite that “suckiness”, Microsoft had larger market share....by a lot.
It does not stop with Windows and Apple; let's look at Android and iOS.
From everything I have heard, the iPhone is a great phone (if you hold it right and do not expect the alarm to work), and Android (until recently) has been described as “clunky”.
iOS had a long head start in the marketplace over Android, had a lot of developers doing Apps, and was the media darling.
But Google followed the same model as Microsoft. Less control over Apps, more welcoming to different hardware vendors, allowed many service providers to carry it, and fostered a market of unlocked, unsigned phones for people who needed them.
Now Android is everywhere. It has to be a nightmare for Apple, because all they can do is litigate against companies to keep them out of the market as volume once again raises its glorious and hideous head.
So while “user experience” is certainly a factor, it does not explain the lack of desktop penetration. A lot of what Anonymous Coward describes by just saying “it sucks” would definitely be improved if the OS were installed properly on a hardware system that had all the royalty-bearing fonts, codecs and device drivers needed to make the system work, so that people walking into a store would be able to walk out with a working GNU/Linux system.
That brings us to “The Perfect Storm”
First I should describe what constitutes a “Perfect Storm”.
Here in the Northeast of the United States we have lots of different weather patterns come together. Every once in a great while all of the weather patterns come together to create a storm so fierce that massive damage is done, and ships will sail out of their way to escape the fury of the storm. Those that can not escape will normally die, but those that foresee it and manage it will return stronger.
In the past there have been “perfect storms” in computers.
The age of mechanization and steam inspired Babbage to think about the automation of calculating and printing tables of numbers for books. While Babbage did not succeed, he inspired many others to continue on.
World War II brought together many efforts to use automation to solve issues of ordnance calculations, breaking cryptographic codes, and other calculations in the time frames that the calculations were needed.
The advent of the transistor, and its progeny the integrated circuit, freed the computers of the day from huge power usage, the fragility of vacuum tubes and relays, and heat dissipation, paving the way for digital watches, notebook computers and smart phones.
My belief is a “Perfect Storm” situation of
- low cost Internet to the home
- powerful computers of (relatively) low cost
- many instructional materials on how operating systems were built
- the vast amount of upper-level software existing due to the GNU project (and others such as *BSD)
allowed Linus Torvalds to get the fledgling Linux kernel project off the ground.
Now there is another “Perfect Storm” coming, and this one may cause the wave that carries the good ship GNU/Linux Desktop past the sand bars where it is now languishing.
“Web based computing” first started to get people using multiple interfaces to do their work. No longer did people expect the interfaces to be the same familiar “Microsoft” or “Apple” interfaces that they were accustomed to having. Browsers might provide the friendly scroll bar or set of buttons, but HTML and style sheets described the way it looked and this was under control of each webmaster. The crack had formed in the dike.
“Wireless Computing” drove it even further. The interfaces of the phone, sometimes with a browser on it, and sometimes with “native apps” made the crack in the dike turn into a trickle.
“Cloud Computing”, smart phones and “tablets” started to really drive the trickle into a flood. In order to make the smaller, mostly keyboard-free interface more “friendly” hardware and software manufacturers started experimenting with their traditional “menu and scroll-bar” interfaces, and people started getting used to using multiple interfaces in their lives. They also got used to not having applications where they thought they were going to be (on their system), and they started getting used to “applications for free” with the revenue model switched to something else (advertising or other service).
In the GNU/Linux space the bad economy helped by making systems administrators and computer rooms very expensive, until consolidation and virtualization started driving down costs. In schools and universities, long the bastion of “royalty free software”, LTSP systems started allowing schools with over 4000 students and 2200 desktop clients spread across many buildings to run off servers numbering in the dozens, with the systems administrators counted on less than one hand.
This type of cost savings was not lost on business and governments.
Until recently desktop systems still cost hundreds of dollars and used huge amounts of electric power. LTSP systems typically had desktops made from computers that were “recycled” from large and small companies upgrading their systems. However, the labor and shipping costs of gathering up these systems, reconditioning them, reconfiguring them, and then shipping them to recipient groups were high. If the recipient was outside the country, then typically duty had to be paid on the system coming into the country, and the perception of receiving countries “being used as a landfill for old computers” did not go over well at either end of the supply chain. Finally, the electricity for both running the reconditioned units and cooling them with air-conditioners (in hot climates) resulted in huge electric bills.
Handheld systems cost large amounts of money unless subsidized by a carrier, and then you simply paid for the unit over time. And while software innovation was flourishing in many places through open source, hardware innovation was typically done by large companies with deep pockets.
Enter the Arduino
The next part of the perfect storm came from a little project called “Arduino”. Created to help non-computer science and non-electrical engineering people create “stuff” with an open model, it allowed people to build innovative things at a tremendous rate.
The second part of the storm came from the creation of Hackerspaces, where people come together to innovate, sharing equipment, expertise and connections.
The final wave of the storm is symbolized by things like the Beagleboard (and BeagleBone), Raspberry Pi, and the “AK802 Mini Android 4.0 Network Media Player”.
Like the Arduino, the Beagleboard and BeagleBone are inexpensive systems, but are powerful enough to run a version of Linux. They are made to be used by “makers”, and people line up to purchase them to learn electronics and system software. Closed source operating systems and software have little place in this space.
The Raspberry Pi has all the hardware on it to be a “Thin client”. With the right case and dongle power brick it is a low power, low-cost (35 dollars plus case and brick) system that can attach to the back of a “green” LCD monitor (20W while running, 1/3W standby). Hook the Raspberry Pi up to the Internet, obtain your account at Google or some other “cloud” provider and many people would need nothing else. Music, Video, telephone, storage, computing.
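The savings are easy to see with back-of-the-envelope arithmetic. The sketch below, in Python, compares the annual electricity cost of a thin-client setup against a recycled desktop left running; the 20 W figure is the “green” monitor number quoted above, while the 200 W desktop draw and the $0.15/kWh rate are assumptions made purely for illustration.

```python
# Rough annual electricity comparison for always-on systems.
# 20 W is the "green" LCD monitor figure from the article;
# the 200 W desktop draw and $0.15/kWh rate are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15  # assumed electricity price

def annual_cost_usd(watts):
    kwh = watts * HOURS_PER_YEAR / 1000  # watt-hours -> kilowatt-hours
    return kwh * RATE_USD_PER_KWH

print(round(annual_cost_usd(200), 2))  # recycled desktop, running 24/7
print(round(annual_cost_usd(20), 2))   # thin client: one tenth the cost
```

Even with a different local electricity price, the ratio holds: the thin client draws a tenth of the power, so it costs a tenth as much to run.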
The AK802 not only has all the power you need for a thin client, but at less than 85 dollars you can attach an inexpensive keyboard and mouse to it, plug it into your TV or LCD monitor and “go, go, go”.
Finally there are the plans for “Mozilla OS”, a phone that will run only a browser, with all of the applications written in HTML5.
People forget (or never knew) that companies like Sun Microsystems had no fabrication plant for their SPARC chips. Neither does ARM. Access to components through distributors is easier (and cheaper) than ever.
Innovation at any level is now starting to happen where it happened in the 1960s and 1970s: in people's garages, basements and old mill buildings.
What if you dislike or distrust the cloud providers? A small server at home with all the other systems in your house as thin clients. Ubuntu, Red Hat and other distributions have all the software you need.
When your computer costs a lot less than 100 dollars, will people still be willing to pay 133 dollars for Windows 7 OEM version, or 400 dollars for Microsoft office? When you now share that software with everyone in the family or office you may pay a lot more than that on a per-user basis.
When the software on your desktop computer system is simple to the point of working or not and the real complexity is on the server or in the cloud, we might find the waters rising in the perfect storm.
Now we come to the last part of the storm, something mentioned as comment #1 above and something I have been talking about for five years.
There are only 1.5 billion desktop systems on the face of the planet and 7.3 billion people. I stress the “only” part because we tend to think of computers as “everyplace”, but with respect to the people on the face of the planet they are still “rare”.
While there are 1.5 billion people using desktop computers, even more are going to be using smart phones, and these will probably have a different interface and different applications. A lot of these people will have experienced computers and the Internet before and be “used to them”. They will not be like my father, who had to write down step-by-step what to do on the computer.
Most of the 5.8 billion people not using a desktop are not familiar with the “traditional ways” that computers are “supposed to work”, and they live outside of the United States. Their governments and industry are concentrating on building their own economy, and FOSS gives them that opportunity to grow.
A lot of these people do not have much money and their electricity may be expensive, so as the Internet spreads, access through these non-traditional computer systems will become “normal”.
Some companies have been building browsers and multimedia systems directly into their TV sets. People are (once again) getting used to the fact that their interface is not one that is either Microsoft or iOS based. Many of these systems will be GNU/Linux or Android based, both due to the sophistication of the software (needed for security and stability) and the cost of the OS and layered functionality.
When Linus Torvalds first started giving talks on Linux, he used a chart that showed the growth of the use of Linux going from zero to “world domination” as a straight line. Then he would laugh and tell people that it was charted on “log paper”, meaning that the growth represented by the straight line would start off slow but gain speed very rapidly, in the end being “exponential”.
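The “log paper” trick is simple arithmetic: taking the logarithm of an exponential curve turns it into a straight line. A minimal sketch in Python (the doubling-per-period growth is an arbitrary assumption for illustration, not a measured figure):

```python
import math

# Exponential growth: assume the user base doubles every time period.
users = [2**t for t in range(10)]

# "Log paper" plots the logarithm of each value instead of the value itself.
log_users = [math.log10(u) for u in users]

# Consecutive points are a constant distance apart -- a straight line.
gaps = [log_users[t + 1] - log_users[t] for t in range(len(users) - 1)]
print(all(abs(g - math.log10(2)) < 1e-9 for g in gaps))  # True
```

That constant gap is why the chart starts slowly and then appears to explode: each equal step up the straight line is a doubling of the real number underneath it.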
At first I thought that the line would naturally be limited by the number of people on the face of the planet, but over time I realized that we will be having multiple systems, perhaps many hundreds or thousands, per person. Given this, I am glad that IPv6 has 128 bits of address space.
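How comfortable is 128 bits for a many-devices-per-person world? The arithmetic below uses the 7.3 billion population figure from earlier in the article:

```python
# IPv6 has 128-bit addresses: 2**128 in total.
total_addresses = 2**128

# World population figure used earlier in the article.
people = 7_300_000_000

per_person = total_addresses // people
print(per_person > 10**28)  # True: tens of octillions of addresses each
```

Even at thousands of devices per person, the address space is nowhere near a limiting factor.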
This storm will not happen overnight. I am not saying that 2013 or even 2014 will be the “year of the desktop”, but when you stop and look over a long period of time and see what is happening, drawing the lines on Linus' chart to intersect at “world domination” becomes easier and easier.