Detecting and disrupting AI

Deepfake Sleuth


The rise of AI-generated deepfakes requires tools for detecting and even disrupting the technology.

News about artificial intelligence (AI) usually falls into predictable categories. Most stories report announcements from developers. Others discuss the implications of AI for art and employment. Occasionally, too, hallucinations (AI results that lack context and are sometimes purely imaginary, like Google AI’s suggestion that people eat rocks) are mentioned. However, one topic that receives little attention is how to detect AI and even guard against it. Behind the publicity, such goals have become a major subject of research and a subset of security and privacy concerns, especially in the case of deepfakes, such as the image of the Pope in a puffer jacket. On GitHub, for example, a search for “detecting AI” returns 196 results, while a search for “deepfakes” returns 1,800. Such efforts fall into three main categories: critical analysis, detection tools, and technologies to identify or disrupt AI.

Researching the Context

The most basic tool for detecting AI is critical analysis, or web literacy, as some call it. Regardless of the name, it amounts to a skeptical investigation of text or images. In a series of blog posts and YouTube videos, Mike Caulfield, a researcher at the University of Washington’s Center for an Informed Public, defines four moves for evaluating suspect content using the acronym SIFT (stop; investigate the source; find better coverage; and trace claims, quotes, and media to the original source). Caulfield elaborates on these moves: “First, when you first hit a page or post and start to read it — STOP. Ask yourself whether you know the website or source of the information, and what the reputation of both the claim and the website is. If you don’t have that information, use the other moves to get a sense of what you’re looking at. Don’t read it or share media until you know what it is.” He also warns against getting distracted by side issues in your investigation. One way to begin is with an investigation into the reputation of the author or website. If the claim is your main interest, find “if it represents a consensus viewpoint, or if it is the subject of much disagreement.” Another approach is to discover and evaluate the original source or context, which is often lost on the Internet. What all these tactics have in common, Caulfield summarizes, is that they are all means to discover the context of the text or image.

Such critical analysis is useful in general and, to someone with an advanced degree, may be second nature. Its disadvantage, of course, is that it takes time, which makes it inconvenient on the fast-paced Internet. Moreover, as countless security cases have shown, when users must choose between security and convenience, security usually loses. For this reason, software detection or disruption can often be a sensible alternative.

Detecting AI

Generative AI works by averaging over its training data, which often results in bland images or generalized text. However, bland, generalized writing is common enough in marketing and academia that AI output can often pass unnoticed there. On closer inspection, though, AI results often contain flaws caused by the merging of multiple sources. In text, long, not quite grammatical sentences are common, and the effort to summarize can create inaccuracies and misleading perspectives, even when the AI does not actually hallucinate. In images, there may be small inaccuracies, such as a necklace looped around only one side of the neck. Facial features can also show irregularities and unnatural movements, especially in videos, where the mouth is manipulated in an effort to match invented speech. Yet another telltale is multiple sources of light and inconsistent shadows.

This is the material that online services like Microsoft Video Authenticator, Deepware Scanner, and Amber Video Verification Service analyze for quick results. These services usually claim 90-96 percent accuracy, but in practice, they do not always reach the same conclusions. From my experience, I suspect that the accuracy of many of them is actually much lower. Probably the most accurate and easiest to use is the open source DeepFake-O-Meter from the University at Buffalo’s Media Forensic Lab (Figure 1), although as I write, recent publicity has overloaded its servers, which suggests that demand for such services is high.

Figure 1: DeepFake-O-Meter is one of the growing number of tools for analyzing AI images.

Content Credentials and Beyond

In the long term, the best defense against AI may be so-called content credentials, the inclusion of information about the origins of material. At the simplest level, text can include passages in white that are invisible at a casual glance but easy to locate if searched for. Similarly, images can carry watermarks and detailed metadata such as the creator’s name. On a more sophisticated level, text and images can also include digital signatures and encryption. Such steps can prove ownership, and, of course, the mere fact of their existence can be proof of whether an image is authentic or artificially generated.
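The signing step can be sketched in a few lines of Python. This is an illustration only: real content-credential schemes use public-key certificates issued to the creator, whereas this sketch substitutes an HMAC with a hypothetical shared key, and all function names are my own invention. The principle is the same, though: any edit to the image bytes after signing breaks verification.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. A real content-credential system would use
# a public-key signature backed by a certificate, not a shared secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_image(image_bytes: bytes, creator: str) -> dict:
    """Attach provenance metadata plus a tamper-evident signature."""
    credentials = {
        "creator": creator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(credentials, sort_keys=True).encode()
    credentials["signature"] = hmac.new(
        SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return credentials

def verify_image(image_bytes: bytes, credentials: dict) -> bool:
    """Recompute the signature and hash; any edit breaks the match."""
    claimed = dict(credentials)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

Verification fails both when the image is altered and when the metadata itself is edited, which is what makes embedded credentials more trustworthy than a plain caption.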

Such methods can be laborious to use manually, but they are already being automated. In 2023, the high-end Leica M11-P became the first camera to add content credentials to images. These credentials include not only the time and place where a picture was taken, but also who took it and whether it has been edited. Sony, Canon, and Nikon are all developing similar features. However, such features are appearing first on cameras designed for professionals. It will probably be several years – if ever – before they are available for low-end cameras such as the ones on smartphones. Occasional or home users wishing to use such features will likely have to rely on online services.

Content credentials are a defensive measure. However, efforts are also being made to disrupt AI creation by making the results useless. For example, Nataniel Ruiz, Sarah Adel Bargal, and Stan Sclaroff at Boston University have created an open source project that generates an array of “adversarial attacks.” “Our goal,” they explain, “is to apply an imperceptible mask on an image to interfere with modifications of an image.” The result can make the AI image inaccurate and useless for any practical purpose (Figure 2) by substituting one modification for another, interfering with the ability to sharpen blurred images, or even blocking image manipulation altogether. Although I could not find any equivalent project for AI-generated text, I suspect that one or more such efforts are only a matter of time.

Figure 2: Disrupting AI creation with a mask.


