Be careful what you wish for
Artificial Intelligence
"maddog" ponders the rise of intelligent machines.
Recently, I read that Stephen Hawking, the world-famous physicist, had warned people that artificial intelligence (AI) could be very useful or could be the worst thing that ever happened to mankind. The article, which included Hawking's comments, went on to discuss the many things AI could do for us – from analyzing and extracting information from the Internet, to making quick judgments in deploying and firing military weapons, to taking over the world.
The comments section of the article was filled with people who either agreed with Stephen Hawking or (much more often) brushed off his warning, saying "who would create such a thing?" or "only a supreme being could create something that is truly intelligent."
Those of you who have been reading my work for a while know that I am a great fan of Dr. Alan Turing, and those who know of Dr. Turing's work also know that his interest in computers stemmed largely from the desire to know how the human mind worked and the desire to create a machine that could think like a human. Dr. Turing's "test" for what constitutes artificial intelligence is still used today, 60 years after his death.
Dr. Turing also believed that if a complex problem could be solved by any digital computer, then the simplest digital computer meeting certain criteria could also solve it, given enough time and memory. This concept was embodied in his universal Turing machine.
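To make that idea concrete, here is a minimal sketch of a Turing machine simulated in Python. This is my own illustrative example, not anything from Turing's papers or from this column: the transition table defines a toy machine that simply flips every bit on its tape, but the same read-write-move-change-state loop is, in principle, all that universal computation requires.

# A minimal Turing machine simulator (illustrative sketch only).
# The transition table maps (state, symbol) to
# (new_state, symbol_to_write, move), where move is -1 (left) or +1 (right).
def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break                      # no rule for this situation: halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(sym for _, sym in sorted(cells.items())).strip(blank)

# Toy machine: walk right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine("10110", flip_bits))   # prints 01001

Feeding it the tape "10110" produces "01001"; with a more ambitious transition table (and enough tape and time), the same simple mechanism could compute anything the fastest supercomputer can.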
The human brain is made up of 33-86 billion neurons and trillions of synaptic connections, with each synaptic gap being only 20-40 nanometers wide. Each neuron may have hundreds to thousands of synapses allowing communication (both chemical and electrical) between neurons.
Exactly how the human mind stores and fetches data, makes decisions, and controls the rest of the body still eludes us, but each day brings new discoveries that bring us closer to understanding how humans think.
For those who believe that a machine capable of AI will never be created, I would remind them that people once believed that man would never fly, that the world was flat, and that we would never be able to talk to machines in anything other than the ones and zeros of machine language.
I am also a great fan of science fiction, which blossomed during my early youth when authors like Isaac Asimov and Philip K. Dick (among many others) wrote about robots and "giant brains" that could (at first) only mimic human thought. The machines were sometimes purposefully limited by their makers, who created "laws" for them to follow. Isaac Asimov's book I, Robot detailed three laws intended to control the robots, and its stories showed what happened when those laws were relaxed.
Often, these AI devices were connected to some type of unlimited power supply that could "never" be turned off (to keep enemies from cutting the power to the device), or the AI device itself built a power supply that could not be turned off. The moment that power supply was completed was typically the moment the AI device "went crazy" and tried to take over the world. This is why I often told my students that the person who created a computer that could not be unplugged was truly stupid.
Nevertheless, society marches forward with ideas like artificial intelligence and the "Internet of Things" without having safeguards in place in case these things "go wrong." We need policies in place as the technology moves forward, but policy often lags dramatically behind it.
Even if AI creations do not want to take over the world, what does it mean when we turn off the power of an artificially created intelligence? Is it murder? Do the same rules of "computer ownership" apply when hundreds of thousands of computers are literally dropped in our backyard as part of the sensor network of the "Internet of Things"? Should we be able to demand to see the source code for those things, to make sure they are not transmitting data other than what we have been told?
I am often told by users of GNU/Linux that they are not "technical" and therefore cannot contribute to the Free and Open Source Software cause. But, here is an area where philosophy and the humanities can contribute – to help technical people through the knothole of whether an organism that is not human flesh is "human," whether a being that is made of silicon flip-flops and wires can have the same rights as one made up of neurons and synapses, and where those rights end if they start to infringe on the rights of "real" humans.
Carpe Diem.