Adversarial Machine Learning#

In this example, we show how to use Gurobi Machine Learning to construct an adversarial example for a trained neural network.

We use the MNIST handwritten digit database (http://yann.lecun.com/exdb/mnist/) for this example.

For this problem, we are given a trained neural network and one well classified example \(\bar x\). Our goal is to construct another example \(x\) close to \(\bar x\) that is classified with a different label.

For the handwritten digit recognition problem, the input is a grayscale image of \(28 \times 28\) (\(=784\)) pixels, and the output is a vector of length 10 (one entry per digit). We denote the output vector by \(y\). The image is classified according to the largest entry of \(y\).
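As a toy illustration of this classification rule (the score values below are made up, not taken from the trained network):

```python
import numpy as np

# Hypothetical output vector for one image: one score per digit 0-9
# (values are made up for illustration)
y = np.array([0.1, 0.0, 0.2, 0.1, 3.5, 0.0, 0.3, 0.1, 1.2, 0.4])

# The image is classified as the digit with the largest score
label = int(np.argmax(y))
print(label)  # 4
```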

For the training example, assume that coordinate \(l\) of the output vector has the largest value, giving the correct label. We pick a coordinate corresponding to another label, denoted \(w\), and we want the difference \(y_w - y_l\) to be as large as possible.

If we can find a solution where this difference is positive, then \(x\) is a counter-example receiving a different label. If instead we can show that the difference is never positive, no such example exists.

Here, we use the \(l_1\)-norm \(||x - \bar x||_1\) to define the neighborhood, with its size given by a fixed parameter \(\delta\):

\[|| x - \bar x ||_1 \le \delta.\]
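A quick numpy sketch of this neighborhood test (the perturbation below is made up for illustration):

```python
import numpy as np

# Toy check of the l1 neighborhood: perturb 10 of the 784 pixels
# of a flat image by 0.4 each (illustrative values)
x_bar = np.zeros(784)
x = x_bar.copy()
x[:10] += 0.4

delta = 5.0
l1_dist = np.abs(x - x_bar).sum()  # 10 * 0.4 = 4.0
inside = l1_dist <= delta          # True: x stays in the neighborhood
```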

Denoting by \(g\) the prediction function of the neural network, the full optimization model reads:

\[\begin{split}\begin{aligned} &\max y_w - y_l \\ &\text{subject to:}\\ &|| x - \bar x ||_1 \le \delta,\\ & y = g(x). \end{aligned}\end{split}\]

Note that our model is inspired by Fischetti and Jo (2018).

Imports and loading data#

First, we import the required packages for this example.

In addition to the usual packages, we will need matplotlib to plot the digits, and joblib to load a pre-trained network and part of the training data.

Note that from the gurobi_ml package we need to use the add_mlp_regressor_constr function directly, for reasons that will be clarified later.

import gurobipy as gp
import numpy as np
from joblib import load
from matplotlib import pyplot as plt

from gurobi_ml.sklearn import add_mlp_regressor_constr

We load a neural network that was pre-trained with Scikit-learn’s MLPRegressor. The network is small (2 hidden layers of 50 neurons each), so finding a counter-example shouldn’t be too difficult.

We also load the first 100 training examples of the MNIST dataset that we saved to avoid having to reload the full data set.

# Load the trained network and the examples
mnist_data = load("../../tests/predictors/mnist__mlpclassifier.joblib")
nn = mnist_data["predictor"]
X = mnist_data["data"]

Choose an example and set labels#

Now we choose an example; here we arbitrarily pick example 26. We plot it and verify that it is correctly predicted by calling the predict function.

# Choose an example
exampleno = 26
example = X[exampleno : exampleno + 1, :]

plt.imshow(example.reshape((28, 28)), cmap="gray")

print(f"Predicted label {nn.predict(example)}")
Predicted label ['4']

To set up the objective function of the optimization model, we also need to find a wrong label.

We use predict_proba to get the weight given by the neural network to each label. We then use numpy’s argsort function to get the labels sorted by their weight. The right label is then the last element in the list, and we pick the next to last element as the wrong label.

ex_prob = nn.predict_proba(example)
sorted_labels = np.argsort(ex_prob)[0]
right_label = sorted_labels[-1]
wrong_label = sorted_labels[-2]
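With a toy probability vector (the values below are made up for illustration), the same argsort pattern yields:

```python
import numpy as np

# Toy probability vector over the 10 digits (made-up values;
# the example above gets the real one from nn.predict_proba)
ex_prob = np.array([[0.01, 0.02, 0.03, 0.05, 0.60, 0.04, 0.03, 0.02, 0.08, 0.12]])

sorted_labels = np.argsort(ex_prob)[0]  # labels ordered by increasing weight
right_label = sorted_labels[-1]         # most likely label: 4
wrong_label = sorted_labels[-2]         # runner-up: 9
```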

Building the optimization model#

Now that all the data is gathered, we proceed to build the optimization model.

We create a matrix variable x corresponding to the new input of the neural network we want to compute and a y matrix variable for the output of the neural network. These variables should have, respectively, the shape of the example we picked and the shape of the return value of predict_proba.

We need additional variables to model the \(l_1\)-norm constraint. Namely, for each pixel in the image, we need to measure the absolute difference between \(x\) and \(\bar x\). The corresponding matrix variable has the same shape as x.

We set the objective which is to maximize the difference between the wrong label and the right label.

m = gp.Model()
delta = 5

x = m.addMVar(example.shape, lb=0.0, ub=1.0, name="x")
y = m.addMVar(ex_prob.shape, lb=-gp.GRB.INFINITY, name="y")

abs_diff = m.addMVar(example.shape, lb=0, ub=1, name="abs_diff")

m.setObjective(y[0, wrong_label] - y[0, right_label], gp.GRB.MAXIMIZE)

The \(l_1\)-norm constraint is formulated with:

\[\begin{split} \eta \ge x - \bar x \\ \eta \ge \bar x - x \\ \sum \eta \le \delta\end{split}\]

Here \(\eta\) denotes the abs_diff variables.
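To see why these two inequalities capture the absolute value, note that the tightest \(\eta\) satisfying both is the elementwise maximum of \(x - \bar x\) and \(\bar x - x\), which equals \(|x - \bar x|\). A quick numpy check with toy values:

```python
import numpy as np

# Toy values (illustrative): the tightest eta satisfying
# eta >= x - x_bar and eta >= x_bar - x is the elementwise
# maximum, which equals |x - x_bar|
x_bar = np.array([0.2, 0.8, 0.5])
x = np.array([0.5, 0.6, 0.5])

eta = np.maximum(x - x_bar, x_bar - x)
same = np.allclose(eta, np.abs(x - x_bar))  # True
```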

Those constraints are naturally expressed with Gurobi’s Matrix API.

# Bound on the distance to example in norm-1
m.addConstr(abs_diff >= x - example)
m.addConstr(abs_diff >= -x + example)
m.addConstr(abs_diff.sum() <= delta)

# Update the model
m.update()

Finally, we insert the neural network in the gurobipy model to link x and y.

Note that this case is not as straightforward as others: the neural network was trained for classification with a "softmax" activation in the last layer, but in this model we use the network without any activation in the last layer.

For this reason, we manually change the last-layer activation before adding the network to the Gurobi model.

Also, we use the add_mlp_regressor_constr function directly. Since the network is actually a classifier (i.e. of type MLPClassifier), the add_predictor_constr function would not handle it automatically.

In the output, there is a warning about added constraints with very small coefficients being ignored. Neural networks often contain very small coefficients in their expressions. Any coefficient with an absolute value smaller than \(10^{-13}\) is ignored by Gurobi. This may result in slightly different predicted values but should be negligible.
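As a toy illustration of that threshold (the weight matrix below is made up; for the trained network these would be the entries of nn.coefs_):

```python
import numpy as np

# Count how many coefficients would fall below Gurobi's 1e-13
# threshold and hence be dropped from the constraints
W = np.array([[1e-14, 0.5],
              [-2e-13, 1e-15]])

n_ignored = int((np.abs(W) < 1e-13).sum())  # 2 coefficients dropped
```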

# Change last layer activation to identity
nn.out_activation_ = "identity"
# Code to add the neural network to the constraints
pred_constr = add_mlp_regressor_constr(m, nn, x, y)

# Restore activation
nn.out_activation_ = "softmax"
Warning for adding constraints: zero or small (< 1e-13) coefficients, ignored

The model should be complete. We print the statistics of what was added to insert the neural network into the optimization model.

Model for mlpclassifier:
200 variables
110 constraints
100 general constraints
Input has shape (1, 784)
Output has shape (1, 10)

--------------------------------------------------------------------------------
Layer           Output Shape    Variables              Constraints
                                                Linear    Quadratic      General
================================================================================
dense                (1, 50)          100           50            0           50 (relu)
dense0               (1, 50)          100           50            0           50 (relu)
dense1               (1, 10)            0           10            0            0

--------------------------------------------------------------------------------

Solving the model#

We now turn to solving the optimization model. Solving the adversarial problem, as we formulated it above, doesn’t actually require computing a provably optimal solution. Instead, we need to either:

  • find a feasible solution with a positive objective cost (i.e. a counter-example), or

  • prove that there is no solution of positive cost (i.e. no counter-example in the neighborhood exists).

We can use Gurobi parameters to limit the optimization to answering those questions: setting BestObjStop to 0.0 stops the optimizer as soon as a counter-example is found, and setting BestBdStop to 0.0 stops it once it has proven that no counter-example exists.
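After optimizing, the outcome can be read from the model's ObjVal and ObjBound attributes; a minimal helper sketching this logic (the interpret function is hypothetical, not part of gurobi_ml):

```python
def interpret(obj_val, obj_bound):
    """Classify the outcome of the adversarial model from the
    incumbent objective value and the best bound (hypothetical helper)."""
    if obj_val > 0:
        return "counter-example found"
    if obj_bound <= 0:
        return "no counter-example in the neighborhood"
    return "inconclusive"

# Positive incumbent objective means a counter-example was found
outcome = interpret(31.8, 59.8)
```

In a run, calling interpret(m.ObjVal, m.ObjBound) after m.optimize() gives the answer.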

We set the two parameters and optimize.

m.Params.BestBdStop = 0.0
m.Params.BestObjStop = 0.0
m.optimize()
Set parameter BestBdStop to value 0
Set parameter BestObjStop to value 0
Gurobi Optimizer version 11.0.1 build v11.0.1rc0 (linux64 - "Ubuntu 20.04.6 LTS")

CPU model: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads

Optimize a model with 1679 rows, 1778 columns and 41786 nonzeros
Model fingerprint: 0xc1de95dd
Model has 100 general constraints
Variable types: 1778 continuous, 0 integer (0 binary)
Coefficient statistics:
  Matrix range     [1e-13, 2e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+00, 1e+00]
  RHS range        [1e-03, 5e+00]
Presolve removed 1247 rows and 729 columns
Presolve time: 0.13s
Presolved: 432 rows, 1049 columns, 38111 nonzeros
Variable types: 969 continuous, 80 integer (75 binary)

Root relaxation: objective 2.292649e+03, 456 iterations, 0.01 seconds (0.02 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0 2292.64887    0   56          - 2292.64887      -     -    0s
     0     0 2001.81721    0   56          - 2001.81721      -     -    0s
     0     0 1814.02368    0   58          - 1814.02368      -     -    0s
     0     0 1814.02368    0   57          - 1814.02368      -     -    0s
     0     0 1739.80466    0   62          - 1739.80466      -     -    0s
     0     0 1739.80466    0   63          - 1739.80466      -     -    0s
     0     0 1739.80466    0   64          - 1739.80466      -     -    0s
     0     0 1739.80466    0   63          - 1739.80466      -     -    0s
     0     0 1403.54990    0   60          - 1403.54990      -     -    0s
     0     0 1283.22971    0   59          - 1283.22971      -     -    0s
     0     0 1268.72820    0   58          - 1268.72820      -     -    0s
     0     0 1268.49058    0   60          - 1268.49058      -     -    0s
     0     0 1070.25883    0   66          - 1070.25883      -     -    0s
     0     0 1018.02065    0   65          - 1018.02065      -     -    0s
     0     0 1018.02065    0   68          - 1018.02065      -     -    0s
     0     0 1018.02065    0   65          - 1018.02065      -     -    0s
     0     0  807.90467    0   66          -  807.90467      -     -    0s
     0     0  796.53929    0   67          -  796.53929      -     -    0s
     0     0  773.29985    0   68          -  773.29985      -     -    0s
     0     0  771.65948    0   66          -  771.65948      -     -    0s
     0     0  771.23020    0   67          -  771.23020      -     -    0s
     0     0  633.43083    0   65          -  633.43083      -     -    0s
     0     0  613.81225    0   65          -  613.81225      -     -    1s
     0     0  613.81225    0   65          -  613.81225      -     -    1s
     0     0  550.74308    0   68          -  550.74308      -     -    1s
     0     0  545.16027    0   68          -  545.16027      -     -    1s
     0     0  544.38954    0   68          -  544.38954      -     -    1s
     0     0  481.27875    0   68          -  481.27875      -     -    1s
     0     0  476.40221    0   68          -  476.40221      -     -    1s
     0     0  475.91650    0   68          -  475.91650      -     -    1s
     0     0  466.81766    0   67          -  466.81766      -     -    1s
     0     0  439.72592    0   67          -  439.72592      -     -    1s
H    0     0                      -9.8602514  439.72592  4560%     -    1s
     0     2  439.72592    0   67   -9.86025  439.72592  4560%     -    1s
   578   448    1.26945   31   21   -9.86025  350.74909  3657%  40.2    5s
H  619   449                      -9.8602514  188.36708  2010%  38.8    7s
   632   458  155.60658    8   67   -9.86025  164.78034  1771%  38.0   10s
   661   479   -8.74180   31   54   -9.86025   65.24685   762%  45.9   16s
H  717   474                      31.8159423   59.79704  87.9%  48.1   17s

Cutting planes:
  MIR: 23
  Flow cover: 23

Explored 718 nodes (37696 simplex iterations) in 17.65 seconds (22.45 work units)
Thread count was 2 (of 2 available processors)

Solution count 2: 31.8159 -9.86025

Optimization achieved user objective limit
Best objective 3.181594232076e+01, best bound 5.979703629226e+01, gap 87.9468%

Results#

Normally, for the example and \(\delta\) we chose, a counter-example that gets the wrong label is found. We finish this notebook by plotting the counter-example and printing how it is classified by the neural network.

plt.imshow(x.X.reshape((28, 28)), cmap="gray")

print(f"Solution is classified as {nn.predict(x.X)}")
Solution is classified as ['9']

Copyright © 2023 Gurobi Optimization, LLC

Total running time of the script: (0 minutes 17.971 seconds)
