Backdoors in Machine Learning Models

Clean-Label Attacks

One drawback of the approach described in this article is that the manipulations are easy to detect. First, the trigger can be spotted in the training examples. Second, the training examples containing the trigger carry an incorrect label, namely the one the attacker wants the model to output whenever the trigger is present. More advanced approaches try to hide these manipulations. In clean-label attacks, only the image data is manipulated; the labels remain unchanged, so each label still matches its image. The image data can even be altered in a way that is imperceptible to the people reviewing the data set.
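
A minimal sketch of this idea, assuming the training images are stored as a NumPy array of shape (n, 28, 28) with pixel values in [0, 1] and a hypothetical load_mnist() helper, might look like the following. Real clean-label attacks typically combine such a faint trigger with additional, carefully optimized perturbations to make the backdoor reliable.

import numpy as np

def add_faint_trigger(images, amplitude=0.08, patch_size=3):
    # Blend a faint, low-amplitude patch into the bottom-right corner of
    # each image. The change is easy to overlook by eye, but consistent
    # enough for a model to associate it with the target class.
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] += amplitude
    return np.clip(poisoned, 0.0, 1.0)

# Hypothetical usage: poison only images that already carry the target label,
# so every label in the training set remains correct (hence "clean label").
# x_train, y_train = load_mnist()                # assumed helper, not shown
# target = 7
# idx = np.where(y_train == target)[0][:500]     # poison a small subset
# x_train[idx] = add_faint_trigger(x_train[idx])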

To inject a backdoor into a model, you do not necessarily need to manipulate an existing data set or create a new, manipulated, labeled data set yourself. Instead, it can be enough to post manipulated images in places on the Internet where someone is likely to collect them at some point as training data for a model. In this case, the images would be labeled by other people (for example, via crowdsourcing) who would not notice the manipulations.

Conclusions

Machine learning and smart systems are currently making giant inroads into every area of daily life. The potential is enormous, and impressive results are achieved again and again. But progress always goes hand in hand with new risks. Although the security properties of machine learning models are far more thoroughly investigated today than they were a few years ago, much about them is still unknown. The AI community will need to develop more effective protections against data poisoning attacks before we can truly trust our smart systems.
