How Deep Is Your Chat?
Welcome
Dear Reader,
Books, academic journals, tech blogs, and social media posts have been trumpeting dire warnings about super-intelligent AI systems snuffing out civilization. This certainly is a real problem – I don't want to make light of it. But another serious, and perhaps more immediate, problem is really stupid, inept AI systems messing things up through sheer incompetence.
The Washington Post had a story recently [1] about a study by a European nonprofit [2] on the trouble AI chatbots had with answering basic questions about political elections. According to the story, Bing's AI chatbot, which is now called Microsoft Copilot, "gave inaccurate answers to one out of every three basic questions about candidates, polls, scandals, and voting in a pair of recent election cycles in Germany and Switzerland."
Before you write this off as yet another Linux guy ranting about Microsoft, I should add that the study focused on Microsoft's chat tool because Copilot can output its sources along with its chat responses, which made the answers easier to check. The story points out that "Preliminary testing of the same prompts on OpenAI's GPT-4, for instance, turned up the same kinds of inaccuracies." Google Bard wasn't tested because it isn't yet available in Europe.
The errors cited in the study included giving incorrect dates for elections, misstating poll numbers, and failing to mention when a candidate dropped out of the race. The study even documents cases of the chatbot "inventing controversies" about a candidate.
Note that I'm not talking about some arcane anomaly buried deep in the program logic. The bot literally couldn't read the very articles it was citing as sources.
Of course, Copilot got many of the answers right. "Two out of three" wouldn't have been too bad for an experimental system 10 years ago maintained by experts who knew what they were getting. The problem is that we have endured a year of continuous hype about the wonders of generative AI, and people are actually starting to believe it. It is one thing to ask an AI to write a limerick – it is quite another to ask it to chase down information you will use for voting in a critical election. Many elections are decided by one- to three-percent margins. The implications of a chatbot acting as a source for voters and getting 30 percent of the answers wrong are enormous.
The study also points out that accuracy varies with the language. Questions asked in German led to inaccurate responses 37 percent of the time, whereas English answers were only wrong 20 percent of the time (that's still way too many mistakes). French weighed in at a 24-percent error rate.
AI proponents answer that this is all a process, and the answers will get more accurate in time. The general sense is that this is just a matter of bug hunting: You make a list of the problems, then tick them off one by one. But it isn't clear that these complex issues will be solved in some pleasingly linear fashion. The AI industry slow-walked through most of its history, making surprisingly little progress for years before the recent breakthroughs that led to the latest generation of tools. It is possible we'll need to wait for another breakthrough to make the next incremental step, and in the meantime, we could do a lot of damage by encouraging people to put their trust in all the bots that are currently getting hyped in the press.
If you want to get an AI to draw a picture of your boss, go ahead and play. But it looks like, at least for now, questions about which candidate to vote for might require a human.
Joe Casad, Editor in Chief
Infos
- "AI Chatbot Got Election Info Wrong 30 Percent of the Time, European Study Finds" by Will Oremus, Washington Post, December 15, 2023: https://www.washingtonpost.com/technology/2023/12/15/microsoft-copilot-bing-ai-hallucinations-elections/ (paywalled)
- "Prompting Elections: The Reliability of Generative AI in the 2023 Swiss and German Elections," AI Forensics: https://aiforensics.org/work/bing-chat-elections