How Deep Is Your Chat?
Welcome
Dear Reader,
Books, academic journals, tech blogs, and social media posts have been trumpeting dire warnings about super-intelligent AI systems snuffing out civilization. This certainly is a real problem – I don't want to make light of it. But another serious, and perhaps more immediate, problem is really stupid, inept AI systems messing things up through sheer incompetence.
The Washington Post had a story recently [1] about a study by a European nonprofit [2] on the trouble AI chatbots had with answering basic questions about political elections. According to the story, Bing's AI chatbot, which is now called Microsoft Copilot, "gave inaccurate answers to one out of every three basic questions about candidates, polls, scandals, and voting in a pair of recent election cycles in Germany and Switzerland."
Before you write this off as yet another Linux guy ranting about Microsoft, I should add that the study focused on Microsoft's chat tool because Copilot can output its sources along with its chat responses, which made the answers easier to check. The story points out that "Preliminary testing of the same prompts on OpenAI's GPT-4, for instance, turned up the same kinds of inaccuracies." Google Bard wasn't tested because it isn't yet available in Europe.
The errors cited in the study included giving incorrect dates for elections, misstating poll numbers, and failing to mention when a candidate dropped out of the race. The study even documents cases of the chatbot "inventing controversies" about a candidate.
Note that I'm not talking about some arcane anomaly buried deep in the program logic. The bot literally couldn't read the very articles it was citing as sources.
Of course, Copilot got many of the answers right. "Two out of three" wouldn't have been too bad 10 years ago for an experimental system maintained by experts who knew what they were getting. The problem is that we have endured a year of continuous hype about the wonders of generative AI, and people are actually starting to believe it. It is one thing to ask an AI to write a limerick – it is quite another to ask it to chase down information you will use for voting in a critical election. Many elections are decided by one- to three-percent margins. The implications of a chatbot acting as a source for voters and getting 30 percent of the answers wrong are enormous.
The study also points out that accuracy varies with the language. Questions asked in German led to inaccurate responses 37 percent of the time, whereas English answers were only wrong 20 percent of the time (that's still way too many mistakes). French weighed in at a 24-percent error rate.
AI proponents answer that this is all a process, and the answers will get more accurate in time. The general sense is that this is just a matter of bug hunting: You make a list of the problems, then tick them off one by one. But it isn't clear that these complex issues will be solved in some pleasingly linear fashion. The AI industry slow-walked through most of its history, making surprisingly little progress for years before the recent breakthroughs that led to the latest generation. It is possible we'll need to wait for another breakthrough to take the next step, and in the meantime, we could do a lot of damage by encouraging people to put their trust in all the bots that are currently getting hyped in the press.
If you want to get an AI to draw a picture of your boss, go ahead and play. But it looks like, at least for now, questions about which candidate to vote for might require a human.
Joe Casad, Editor in Chief
Infos
- "AI Chatbot Got Election Info Wrong 30 Percent of the Time, European Study Finds" by Will Oremus, Washington Post, December 15, 2023: https://www.washingtonpost.com/technology/2023/12/15/microsoft-copilot-bing-ai-hallucinations-elections/ (paywalled)
- "Prompting Elections: The Reliability of Generative AI in the 2023 Swiss and German Elections," AI Forensics: https://aiforensics.org/work/bing-chat-elections