Usage Example#

On this page, we provide a simple example of using the Gurobi Machine Learning package.

The example is entirely abstract. Its aim is only to illustrate the basic functionalities of the package in the simplest possible way. For more realistic applications, please refer to the notebooks in the examples section.

Before proceeding to the example itself, we need to import a number of packages. Here, we use Scikit-learn to train the regression model: we generate random data for the regression using the make_regression function and, for the regression model, we use a multi-layer perceptron regressor neural network. We import the corresponding objects.

import gurobipy as gp
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

from gurobi_ml import add_predictor_constr

Naturally, we need gurobipy to build an optimization model, and from the gurobi_ml package we need the add_predictor_constr function. We also need numpy.

We start by building artificial data to train our regression. To do so, we use make_regression to obtain data with 10 features.

X, y = make_regression(n_features=10, noise=1.0)
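
To get a sense of the data, we can inspect its shape; make_regression generates 100 samples by default:

# With the defaults, X has shape (100, 10) and y has shape (100,).
print(X.shape, y.shape)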

Now, create the MLPRegressor object and fit it.

nn = MLPRegressor([20] * 2, max_iter=10000, random_state=1)

nn.fit(X, y)
MLPRegressor(hidden_layer_sizes=[20, 20], max_iter=10000, random_state=1)
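
Since we imported mean_squared_error, we can quickly gauge how well the fitted network reproduces the training data:

# Mean squared error of the fitted network on its own training data.
print(mean_squared_error(y, nn.predict(X)))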


We now turn to the optimization model. In the spirit of adversarial machine learning, we use some of the training examples: we pick \(n\) of them at random, and for each one we look for an input in a small neighborhood of the example whose predicted output is as close to \(0\) as possible.

Denoting by \(X^E\) our set of examples and by \(g\) the prediction function of our regression model, our optimization problem reads:

\[
\begin{aligned}
& \min \sum_{i=1}^n y_i^2 \\
& \text{s.t.:} \\
& y_i = g(X_i) && i = 1, \ldots, n, \\
& X^E - \delta \leq X \leq X^E + \delta,
\end{aligned}
\]

where \(X\) is a matrix of variables of dimension \(n \times 10\) (the number of examples we consider and the number of features in the regression, respectively), \(y\) is a vector of free (unbounded) variables, and \(\delta\) is a small positive constant.

First, let's randomly pick \(n = 2\) training examples using numpy and create our gurobipy model.

n = 2
index = np.random.choice(X.shape[0], n, replace=False)
X_examples = X[index, :]
y_examples = y[index]

m = gp.Model()

Our only decision variables in this case are the inputs and outputs of the regression for the 2 chosen examples. We use gurobipy.MVar matrix variables, which are the most convenient in this case.

The input variables have the same shape as X_examples. Their lower bound is X_examples - delta and their upper bound X_examples + delta; in the code below, we take \(\delta = 0.2\).

The output variables have the shape of y_examples and are unbounded. By default, Gurobi variables are non-negative; we therefore need to set an infinite lower bound.

input_vars = m.addMVar(X_examples.shape, lb=X_examples - 0.2, ub=X_examples + 0.2)
output_vars = m.addMVar(y_examples.shape, lb=-gp.GRB.INFINITY)
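
At this point the model contains only these variables: \(2 \times 10\) inputs plus 2 outputs, i.e. 22 in total. A quick check:

# Process pending modifications so that the variable count is up to date.
m.update()
print(m.NumVars)  # 22; add_predictor_constr below adds 160 more (182 in the log).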

The constraints linking input_vars and output_vars can now be added with the function add_predictor_constr.

Note that, because of the shape of the variables, this will add 2 different constraints, one for each of the examples.

The function returns an instance of a modeling object that we can use later on.

pred_constr = add_predictor_constr(m, nn, input_vars, output_vars)

The method print_stats of the modeling object outputs the details of the regression model that was added to the Gurobi model.

pred_constr.print_stats()

Model for mlpregressor:
160 variables
82 constraints
80 general constraints
Input has shape (2, 10)
Output has shape (2, 1)

--------------------------------------------------------------------------------
Layer           Output Shape    Variables              Constraints
                                                Linear    Quadratic      General
================================================================================
dense                (2, 20)           80           40            0    40 (relu)

dense0               (2, 20)           80           40            0    40 (relu)

dense1                (2, 1)            0            2            0            0

--------------------------------------------------------------------------------

To finish the model, we set the objective, and then we can optimize it.

m.setObjective(output_vars @ output_vars, gp.GRB.MINIMIZE)

m.optimize()
Gurobi Optimizer version 11.0.1 build v11.0.1rc0 (linux64 - "Ubuntu 20.04.6 LTS")

CPU model: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads

Optimize a model with 82 rows, 182 columns and 1322 nonzeros
Model fingerprint: 0x7e3330d9
Model has 2 quadratic objective terms
Model has 80 general constraints
Variable types: 182 continuous, 0 integer (0 binary)
Coefficient statistics:
  Matrix range     [2e-03, 2e+00]
  Objective range  [0e+00, 0e+00]
  QObjective range [2e+00, 2e+00]
  Bounds range     [8e-02, 2e+00]
  RHS range        [8e-02, 1e+00]
Presolve removed 30 rows and 120 columns
Presolve time: 0.01s
Presolved: 52 rows, 62 columns, 369 nonzeros
Presolved model has 1 quadratic objective terms
Variable types: 48 continuous, 14 integer (14 binary)

Root relaxation: objective 3.023112e+04, 146 iterations, 0.00 seconds (0.00 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0 30231.1212    0   14          - 30231.1212      -     -    0s
     0     0 32033.7673    0    7          - 32033.7673      -     -    0s
H    0     0                    32366.920702 32157.7843  0.65%     -    0s
     0     0     cutoff    0      32366.9207 32366.9207  0.00%     -    0s

Cutting planes:
  Gomory: 1
  MIR: 4
  Flow cover: 6
  Relax-and-lift: 2

Explored 1 nodes (201 simplex iterations) in 0.02 seconds (0.01 work units)
Thread count was 2 (of 2 available processors)

Solution count 1: 32366.9

Optimal solution found (tolerance 1.00e-04)
Best objective 3.236692070216e+04, best bound 3.236692070216e+04, gap 0.0000%
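
The optimal objective value reported in the log can also be queried directly from the gurobipy model:

# Best objective found by the solver (about 32366.92 in the log above).
print(m.ObjVal)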

The method get_error is useful to check that the solution computed by Gurobi is correct with respect to the regression model we use.

Let \((\bar X, \bar y)\) be the values of the input and output variables in the computed solution. The method returns \(g(\bar X) - \bar y\) using the original regression object.

Normally, all values should be small and below Gurobi's tolerances in this example.

pred_constr.get_error()

array([[1.42108547e-14],
       [2.84217094e-14]])
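
As a cross-check, this error can be recomputed by hand with the fitted network. The sketch below assumes the modeling object also exposes an input_values attribute for the solution inputs, analogous to the output_values attribute used further down:

# Predict with the original network at the solution inputs and compare with
# the solution outputs; the result should match get_error up to tolerances.
print(nn.predict(pred_constr.input_values) - pred_constr.output_values.flatten())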

We can look at the computed values for the output variables and compare them with the original target values.

print("Computed values")
print(pred_constr.output_values.flatten())
Computed values
[  83.04558265 -159.59433544]
print("Original values")
print(y_examples)
Original values
[ 194.32251822 -277.99201378]
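
We can also check that the computed inputs stayed within the prescribed neighborhood of the training examples (again assuming the input_values attribute):

# Maximum absolute deviation of the solution inputs from the chosen
# examples; it should not exceed delta = 0.2.
print(np.abs(pred_constr.input_values - X_examples).max())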

Finally, we can remove pred_constr with the method remove().
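
pred_constr.remove()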
