In the last post, we used Gurobi’s hierarchical optimization features to compute the Pareto front for primary and secondary objectives in an assignment problem. This relied on Gurobi’s setObjectiveN method and its internal code for managing hierarchical problems.

Some practitioners may need to do this without access to a commercial license. This post adapts the previous example to use HiGHS and its native Python interface, highspy. Working through the procedure manually also makes it easier to understand. This isn’t exactly what I’d call hard, but it is easy to mess up.1

Code

The mathematical models are available in the last post, so I won’t restate them here. We start in roughly the same manner as before2: create a binary variable for each worker-patient pair, add assignment problem constraints, and state the primary objective.

from itertools import product
import highspy

n = len(data["cost"])
workers = range(n)
patients = range(n)
workers_patients = list(product(workers, patients))

h = highspy.Highs()

# x[w,p] = 1 if worker w is assigned to patient p.
x = {(w, p): h.addBinary(obj=data["cost"][w][p]) for w, p in workers_patients}

# Each worker is assigned to one patient.
h.addConstrs(sum(x[w, p] for p in patients) == 1 for w in workers)

# Each patient is assigned one worker.
h.addConstrs(sum(x[w, p] for w in workers) == 1 for p in patients)

# Primary objective: minimize cost.
h.setMinimize()
h.solve()
cost = h.getObjectiveValue()

Note that if the costs and affinities were lists instead of matrices, we could have used h.addBinaries instead of h.addBinary.
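
For example, here is a rough sketch of that alternative. The names are illustrative and kept separate from the x used above, and it assumes addBinaries accepts a variable count and a list of per-variable objective coefficients via obj; check that against your highspy version.

# Sketch only: flatten the cost matrix into a list, create all binaries in
# one call, then key them by worker-patient pair. Assumes obj accepts a list
# of per-variable coefficients.
flat_cost = [data["cost"][w][p] for w, p in workers_patients]
x_alt = h.addBinaries(len(workers_patients), obj=flat_cost)
x_alt_by_pair = {wp: x_alt[i] for i, wp in enumerate(workers_patients)}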

From here we’ll be solving the model twice for every value of alpha. These expressions for total cost and affinity will make the code a little cleaner.

cost_expr = sum(data["cost"][w][p] * x[w, p] for w, p in workers_patients)
affinity_expr = sum(data["affinity"][w][p] * x[w, p] for w, p in workers_patients)

Now comes the hierarchical optimization logic. For every value of alpha, we find the best affinity possible while keeping cost within alpha of its best possible value.

  • Update the objective function to maximize affinity (see the calls to h.changeColCost and h.setMaximize).
  • Constrain the cost to be within alpha of the original optimal cost (see cost_cons).
  • Re-optimize and save the maximal affinity.

Now we constrain the affinity and re-optimize cost.3

  • Update the objective function to minimize cost again.
  • Constrain the affinity.

Once that’s done, we remove the additional constraints and repeat for a new value of alpha.

for alpha in alphas:
    # Secondary objective: maximize affinity.
    for (w, p), x_wp in x.items():
        h.changeColCost(x_wp.index, data["affinity"][w][p])

    # Constrain cost to be within alpha of its optimal value.
    cost_cons = h.addConstr(cost_expr <= (1 + alpha) * cost)

    h.setMaximize()
    h.solve()
    affinity = h.getObjectiveValue()

    # Re-optimize with original cost objective, constraining affinity.
    for (w, p), x_wp in x.items():
        h.changeColCost(x_wp.index, data["cost"][w][p])
    affinity_cons = h.addConstr(affinity_expr >= affinity)

    h.setMinimize()
    h.solve()

    yield alpha, h.getObjectiveValue(), affinity

    # Remove the cost and affinity constraints before the next value of alpha.
    h.removeConstr(cost_cons)
    h.removeConstr(affinity_cons)
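
Since the loop yields its results, it presumably lives inside a generator function in the linked model.py, driven by a small outer loop. Here is a minimal sketch of such a driver; solve_pareto is a hypothetical name for that generator, and the alpha grid matches the table below.

# Hedged sketch of a driver. solve_pareto is a hypothetical name for a
# generator wrapping the loop above; the actual structure is in the linked
# model.py.
alphas = [round(0.05 * i, 2) for i in range(21)]  # 0.0 through 1.0 in steps of 0.05

for alpha, cost_value, affinity_value in solve_pareto(alphas):
    print(alpha, cost_value, affinity_value)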

Encouragingly, running this using the model.py linked below gives the same values as the Gurobi model, albeit not as quickly. Floating point values are rounded for readability.

| alpha | cost     | affinity |
| ----- | -------- | -------- |
| 0.0   | 11212.0  | 53816.0  |
| 0.05  | 11761.0  | 74001.0  |
| 0.1   | 12332.0  | 79981.0  |
| 0.15  | 12886.0  | 83103.0  |
| 0.2   | 13454.0  | 85394.0  |
| 0.25  | 13996.0  | 87136.0  |
| 0.3   | 14557.0  | 88546.0  |
| 0.35  | 15125.0  | 89751.0  |
| 0.4   | 15670.0  | 90664.0  |
| 0.45  | 16255.0  | 91345.0  |
| 0.5   | 16816.0  | 91997.0  |
| 0.55  | 17370.0  | 92537.0  |
| 0.6   | 17924.0  | 93012.0  |
| 0.65  | 18495.0  | 93491.0  |
| 0.7   | 19055.0  | 93829.0  |
| 0.75  | 19591.0  | 94228.0  |
| 0.8   | 20167.0  | 94530.0  |
| 0.85  | 20737.0  | 94833.0  |
| 0.9   | 21295.0  | 95114.0  |
| 0.95  | 21812.0  | 95361.0  |
| 1.0   | 22402.0  | 95613.0  |

Resources

  • model.py hierarchical objectives HiGHS model

  1. It gets even easier to mess up with more than two objectives. ↩︎

  2. Isn’t it nice that MIP modeling is similar across different APIs? ↩︎

  3. Exercise for the reader: why do we need to re-optimize cost? ↩︎