Backdoors in Machine Learning Models
Preparation
The example in this article uses PyTorch, which, along with TensorFlow, is one of the most popular deep-learning frameworks. PyTorch provides an easy-to-understand API and lets you write clean, uncluttered code that simply feels like Python. To get started, install the PyTorch Python packages with the following command:
pip install torch torchvision
Then download the MNIST data set and create an instance of the MNIST class from the Torchvision package. Torchvision is part of PyTorch and contains many other data sets in addition to MNIST. Listing 1 shows which arguments are passed in to the class. The first argument, root, defines the directory where the data set will be stored. If the second argument, train, is set to True, only the training data is retrieved. The third argument, download, specifies whether the data set should be downloaded. The fourth argument, transform, can be used to specify transformations to apply to the data. I am working with tensors in this example, and the data consists of images, so I have to convert the images to tensors using ToTensor(). I will use the same approach to load the data set for validating the model; the only difference is that I need to set train to False instead of True (see the sketch after Listing 1).
Listing 1
Loading the MNIST Data Set
01 mnist_training = torchvision.datasets.MNIST(
02   root='.data',
03   train=True,
04   download=True,
05   transform=torchvision.transforms.ToTensor()
06 )
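Loading the validation data mentioned above looks almost identical; the following sketch differs from Listing 1 only in the train argument (the variable name mnist_validation is my choice):

import torchvision

# Same approach as Listing 1, but train=False retrieves the
# validation portion of MNIST instead of the training portion.
mnist_validation = torchvision.datasets.MNIST(
    root='.data',
    train=False,
    download=True,
    transform=torchvision.transforms.ToTensor()
)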
Computing the Model
The next step is to create a function that computes a model for a data set; this function can be seen in Listing 2. Lines 2 to 13 encode the architecture of the CNN, which is very simple: The first layer is a convolutional layer, followed by a pooling layer, with the widely used ReLU acting as the activation function. This block repeats once before the network ends in two linear layers that represent a classical neural network: an input layer and an output layer.
Listing 2
Computing the Model
01 def create_model(dataset):
02   model = torch.nn.Sequential(
03     nn.Conv2d(1, 16, 5, 1),
04     nn.ReLU(),
05     nn.MaxPool2d(2, 2),
06     nn.Conv2d(16, 32, 5, 1),
07     nn.ReLU(),
08     nn.MaxPool2d(2, 2),
09     nn.Flatten(),
10     nn.Linear(32*4*4, 512),
11     nn.ReLU(),
12     nn.Linear(512, 10)
13   )
14
15   opt = torch.optim.Adam(model.parameters(), 0.001)
16   loss_fn = torch.nn.CrossEntropyLoss()
17   loader = torch.utils.data.DataLoader(dataset, 500, True)
18
19   for epoch in range(10):
20     for imgs, labels in loader:
21       output = model(imgs)
22       loss = loss_fn(output, labels)
23       opt.zero_grad()
24       loss.backward()
25       opt.step()
26     print(f"Epoch {epoch}, Loss {loss.item()}")
27
28   return model
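Listing 2 assumes that torch has been imported and that nn is an alias for torch.nn. If you wonder where the input size of the first linear layer in line 10 (32*4*4) comes from, you can trace a dummy MNIST-sized image through the convolution and pooling stages; here is a quick sketch (the layer parameters match Listing 2):

import torch
from torch import nn

# One 28x28 grayscale image with a batch dimension of 1
x = torch.zeros(1, 1, 28, 28)
# First conv/pool block: 28x28 -> 24x24 -> 12x12
x = nn.MaxPool2d(2, 2)(nn.Conv2d(1, 16, 5, 1)(x))
print(x.shape)   # torch.Size([1, 16, 12, 12])
# Second conv/pool block: 12x12 -> 8x8 -> 4x4
x = nn.MaxPool2d(2, 2)(nn.Conv2d(16, 32, 5, 1)(x))
print(x.shape)   # torch.Size([1, 32, 4, 4]), flattened: 32*4*4 values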
Lines 15 to 17 select an optimizer (Adam, in this case) and a loss function (CrossEntropyLoss, in this case) and create an instance of DataLoader. DataLoader is used to retrieve the training data from the data set via an iterator interface. The data set is specified as the first argument. In each iteration, DataLoader delivers a batch of training data; the second argument defines the size of the batch. In this case, each iteration provides 500 examples. If you set the third argument to True, the data is randomly shuffled beforehand.
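The following sketch shows what the DataLoader in line 17 of Listing 2 delivers, using the training set from Listing 1: Each iteration yields a batch of 500 images plus the matching labels, shuffled because of the third argument.

import torch
import torchvision

mnist_training = torchvision.datasets.MNIST(
    root='.data', train=True, download=True,
    transform=torchvision.transforms.ToTensor())
loader = torch.utils.data.DataLoader(mnist_training, 500, True)
# Pull a single batch from the iterator interface
imgs, labels = next(iter(loader))
print(imgs.shape)    # torch.Size([500, 1, 28, 28])
print(labels.shape)  # torch.Size([500])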
Lines 19 to 26 train the model step by step. They iterate 10 times (line 19) over the complete data set (line 20). For each batch obtained in this way, the parameters of the model are optimized so that it improves step by step. To do this, you first calculate the output that the model returns for the current batch (line 21). The loss function is then used to calculate the error that the model makes with the current parameters (line 22); in simple terms, this is the difference between the output that the model provides and the correct values (labels). Next, the error is back-propagated through the network (line 24), and the optimizer updates the parameters of the network so that the error is reduced (line 25). For this to work, the gradients must first be reset to zero (line 23). Additional technical details are not important for this example.
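The reset in line 23 is necessary because PyTorch accumulates gradients across backward() calls instead of overwriting them. A minimal sketch with a made-up one-parameter example illustrates the effect:

import torch

w = torch.tensor([1.0], requires_grad=True)
loss = (w * 2).sum()
loss.backward()
print(w.grad)    # tensor([2.])
loss = (w * 2).sum()
loss.backward()
print(w.grad)    # tensor([4.]) -- accumulated, not replaced
w.grad.zero_()   # what opt.zero_grad() does for all parameters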
Accuracy of the Model
Calling the create_model() function with the training data returns a model that recognizes handwritten digits with about 99 percent accuracy in less than two minutes on a current CPU. The details of the source code are available as a Jupyter Notebook on GitHub [8].
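The validation code itself is not shown here, but measuring the accuracy could look roughly like the following sketch, which assumes the mnist_validation data set loaded earlier and the model returned by create_model():

import torch

# Count correct predictions over the validation set; argmax picks
# the digit with the highest score among the 10 outputs.
loader = torch.utils.data.DataLoader(mnist_validation, 500)
correct = total = 0
with torch.no_grad():
    for imgs, labels in loader:
        predictions = model(imgs).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)
print(f"Accuracy: {correct / total:.4f}")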