Note: This post was updated to work with Python 3 and the 2nd edition of “Integer Programming” by Laurence Wolsey.
We’ve been studying Lagrangian Relaxation (LR) in the Advanced Topics in Combinatorial Optimization course I’m taking this term, and I had some difficulty finding a simple example of its application. In case anyone else finds it useful, I’m posting a Python example that solves the Generalized Assignment Problem (GAP). This post won’t discuss the theory of LR at all; it just gives example code using Gurobi.
Generalized assignment
The GAP, as defined by Wolsey, is a maximization problem subject to a set of set packing constraints (each item i is assigned to at most one knapsack) and a set of knapsack constraints (the items assigned to knapsack j must fit within its capacity b_j).
$$ \begin{align*} & \text{max} && \sum_i \sum_j c_{ij} x_{ij} \\ & \text{s.t.} && \sum_j x_{ij} \leq 1 && \forall i \\ & && \sum_i a_{ij} x_{ij} \leq b_j && \forall j \\ & && x_{ij} \in \{0, 1\} \end{align*} $$
Naive model
A naive version of this model using Gurobi might look like the following.
#!/usr/bin/env python
# This is the GAP per Wolsey, pg 208.
from gurobipy import Model, GRB, quicksum as qsum

m = Model("GAP per Wolsey")
m.modelSense = GRB.MAXIMIZE
m.setParam("OutputFlag", False)  # turns off solver chatter

b = [15, 15, 15]
c = [
    [6, 10, 1],
    [12, 12, 5],
    [15, 4, 3],
    [10, 3, 9],
    [8, 9, 5],
]
a = [
    [5, 7, 2],
    [14, 8, 7],
    [10, 6, 12],
    [8, 4, 15],
    [6, 12, 5],
]

# x[i][j] = 1 if i is assigned to j
x = [[m.addVar(vtype=GRB.BINARY) for _ in row] for row in c]

# sum j: x_ij <= 1 for all i
for x_i in x:
    m.addConstr(sum(x_i) <= 1)

# sum i: a_ij * x_ij <= b[j] for all j
for j, b_j in enumerate(b):
    m.addConstr(qsum(a[i][j] * x_i[j] for i, x_i in enumerate(x)) <= b_j)

# max sum i,j: c_ij * x_ij
m.setObjective(
    qsum(qsum(c_ij * x_ij for c_ij, x_ij in zip(c_i, x_i)) for c_i, x_i in zip(c, x))
)
m.optimize()

# Pull solution out of m.
print(f"z = {m.objVal}")
print("x = [")
for x_i in x:
    print(f"  {[1 if x_ij.x >= 0.5 else 0 for x_ij in x_i]}")
print("]")
The solver quickly finds the following optimal solution to this toy problem.
z = 46.0
x = [
  [0, 1, 0]
  [0, 1, 0]
  [1, 0, 0]
  [0, 0, 1]
  [0, 0, 0]
]
Lagrangian model
There are two sets of constraints we can dualize. Lagrangian Relaxation tends to work well when the subproblem that remains is composed of knapsack constraints, so we dualize the set packing constraints. (With those gone, the remaining problem decomposes into an independent 0-1 knapsack for each j.)
# sum j: x_ij <= 1 for all i
for x_i in x:
    m.addConstr(sum(x_i) <= 1)
We replace these with a new set of variables, penalties, which take on the values of the slacks of the set packing constraints. We then modify the objective function, adding the Lagrangian multipliers times these penalties.
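Concretely, given multipliers $u_i \geq 0$, the relaxed problem solved at each iteration is

$$ \begin{align*} & \text{max} && \sum_i \sum_j c_{ij} x_{ij} + \sum_i u_i \Big( 1 - \sum_j x_{ij} \Big) \\ & \text{s.t.} && \sum_i a_{ij} x_{ij} \leq b_j && \forall j \\ & && x_{ij} \in \{0, 1\} \end{align*} $$

where each term $1 - \sum_j x_{ij}$ is the slack of a dualized constraint. These slacks are the penalties in the code below.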
Instead of optimizing once, we do so iteratively. An important consideration is that this process may give us nothing more than a dual bound. An integer solution to the relaxation is not guaranteed to be optimal for the original problem unless it is primal feasible and satisfies the complementary slackness conditions: for each dualized constraint, either its multiplier or its penalty must be zero.
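Written out, the stopping test in the code below checks, up to a small tolerance, that

$$ u_i \, p_i = 0 \quad \forall i $$

holds for the current multipliers and penalty values.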
We then set the initial multiplier values to 2 and use sub-gradient optimization, with a step size of 1 / k at iteration k, to adjust them.
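In other words, after solving the relaxation at iteration $k$, each multiplier is stepped against its penalty value and projected back onto the nonnegative orthant:

$$ u_i^{k+1} = \max\left( u_i^{k} - \frac{1}{k} \, p_i^{k},\ 0 \right) \quad \forall i $$

where $p_i^{k}$ is the value of penalty $i$ at iteration $k$.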
#!/usr/bin/env python
# This is the GAP per Wolsey, pg 208, using Lagrangian Relaxation.
from gurobipy import Model, GRB, quicksum as qsum

m = Model("GAP per Wolsey with Lagrangian Relaxation")
m.modelSense = GRB.MAXIMIZE
m.setParam("OutputFlag", False)  # turns off solver chatter

b = [15, 15, 15]
c = [
    [6, 10, 1],
    [12, 12, 5],
    [15, 4, 3],
    [10, 3, 9],
    [8, 9, 5],
]
a = [
    [5, 7, 2],
    [14, 8, 7],
    [10, 6, 12],
    [8, 4, 15],
    [6, 12, 5],
]

# x[i][j] = 1 if i is assigned to j
x = [[m.addVar(vtype=GRB.BINARY) for _ in row] for row in c]

# As stated, the GAP has the set packing constraints above. We dualize these
# into penalties instead, using variables so we can easily extract their
# values. Note: addVar defaults to a lower bound of 0, so p >= 0 means
# sum j: x_ij <= 1 is still enforced; the objective rewards leaving slack.
penalties = [m.addVar() for _ in x]

# Dualized constraints: sum j: x_ij <= 1 for all i
for p, x_i in zip(penalties, x):
    m.addConstr(p == 1 - sum(x_i))

# sum i: a_ij * x_ij <= b[j] for all j
for j, b_j in enumerate(b):
    m.addConstr(qsum(a[i][j] * x_i[j] for i, x_i in enumerate(x)) <= b_j)

# u[i] = Lagrangian multiplier for set packing constraint i
u = [2.0] * len(x)

# Re-optimize until either we have run a certain number of iterations
# or complementary slackness conditions apply.
for k in range(1, 101):
    # max sum i,j: c_ij * x_ij + sum i: u_i * p_i
    m.setObjective(
        qsum(
            # Original objective function
            sum(c_ij * x_ij for c_ij, x_ij in zip(c_i, x_i))
            for c_i, x_i in zip(c, x)
        )
        + qsum(
            # Penalties for dualized constraints
            u_i * p_i
            for u_i, p_i in zip(u, penalties)
        )
    )
    m.optimize()

    print(
        f"iteration {k}: z = {m.objVal}, u = {u}, penalties = {[p.x for p in penalties]}"
    )

    # Test for complementary slackness: for each dualized constraint,
    # either its multiplier or its slack must be (nearly) zero.
    stop = True
    eps = 1e-6
    for u_i, p_i in zip(u, penalties):
        if abs(u_i) > eps and abs(p_i.x) > eps:
            stop = False
            break

    if stop:
        print("primal feasible & optimal")
        break
    else:
        # Sub-gradient step: move each multiplier against its slack and
        # project back onto u_i >= 0.
        s = 1.0 / k
        for i in range(len(x)):
            u[i] = max(u[i] - s * penalties[i].x, 0.0)

# Pull solution out of m.
print(f"z = {m.objVal}")
print("x = [")
for x_i in x:
    print(f"  {[1 if x_ij.x >= 0.5 else 0 for x_ij in x_i]}")
print("]")
Again, the example converges quickly, reaching an optimal solution in five iterations.
iteration 1: z = 48.0, u = [2.0, 2.0, 2.0, 2.0, 2.0], penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 2: z = 47.0, u = [2.0, 2.0, 2.0, 2.0, 1.0], penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 3: z = 46.5, u = [2.0, 2.0, 2.0, 2.0, 0.5], penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 4: z = 46.2, u = [2.0, 2.0, 2.0, 2.0, 0.167], penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 5: z = 46.0, u = [2.0, 2.0, 2.0, 2.0, 0.0], penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
primal feasible & optimal
z = 46.0
x = [
  [0, 1, 0]
  [0, 1, 0]
  [1, 0, 0]
  [0, 0, 1]
  [0, 0, 0]
]
Exercise for the reader: change the script to dualize the knapsack constraints instead of the set packing constraints. How does this change affect convergence?