I just returned from the 2025 INFORMS Computing Society conference, where I had the privilege of organizing a cluster on optimization solvers. The cluster had two sessions, Solvers I and Solvers II, and focused on new developments in the implementation of optimization solvers.
In the coming days, I’ll provide a few posts about some of the solvers covered in those sessions, what makes them interesting, and how to test drive them. For now, I wanted to give a few hot takes from the sessions while they are still fresh in my mind.
Hybrid optimization is everywhere
Hybrid optimization combines multiple techniques to solve a given problem. Most of the hybrid optimization literature focuses on leveraging strengths of different techniques to solve a particular well-defined problem, such as a routing problem with time windows, but it can also provide clear benefits to general-purpose solvers.
The team behind OR-Tools, which is likely the most commonly used open source solver, gave a talk on the design of their CP-SAT[-LP] algorithm. To users interacting with OR-Tools through its APIs, CP-SAT looks like an ordinary constraint programming (CP) solver. Internally, however, they boost its CP solver with techniques from satisfiability (SAT) and linear programming (LP). This gives a whole that is much more powerful than the sum of its parts, as shown below.
☝ Please pardon the poor quality of this photo I took during their ICS talk.
Meanwhile, the commercial optimizer Hexaly incorporates a basketful of technologies under the hood. These include techniques from exact methods, heuristics like large neighborhood search, and even some ideas from Decision Diagrams (DD).
Interestingly, both solvers admit to some component algorithms being well behind leading implementations. OR-Tools’s SAT and LP solvers are somewhat rudimentary, and Hexaly’s simplex and interior point algorithms would not be competitive on their own. It is the combination of multiple algorithms and approaches that makes the solvers powerful.
State-based modeling has a big opportunity
Mixed Integer Programming (MIP) (and other math programming classes), CP, and Dynamic Programming (DP) have all been standard techniques in the optimization toolkit for decades. While MIP and CP both benefit from standard formats and solver interoperability through systems like MiniZinc, AMPL, and other projects, that never really happened for DP. Even now, DP models are usually bespoke and lack both modeling standards and standard solvers.
That is rapidly changing with the development of both Domain-Independent Dynamic Programming (DIDP), and new DD solvers like CODD. These efforts are still nascent, but there is growing momentum toward building both domain-independent solvers and modeling languages for state-based models. If this succeeds, DP and state-based models have the potential to become similar to MIP and CP in power, portability, and expressiveness.
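To make the idea of a state-based model concrete, here is a minimal Python sketch of a 0/1 knapsack written as a state-transition model, the kind of formulation that DIDP-style solvers aim to accept declaratively. This is purely illustrative; it is not the actual DIDP or CODD API, and the names are my own.

```python
from functools import lru_cache

# 0/1 knapsack as a state-based model: a state is (next item index,
# remaining capacity), and each transition either skips or packs an item.
weights = [3, 4, 2, 5]
values = [40, 50, 20, 60]
CAPACITY = 8

@lru_cache(maxsize=None)
def best(i, cap):
    if i == len(weights):       # base state: no items left to decide
        return 0
    skip = best(i + 1, cap)     # transition: leave item i behind
    take = 0
    if weights[i] <= cap:       # transition: pack item i if it fits
        take = values[i] + best(i + 1, cap - weights[i])
    return max(skip, take)

print(best(0, CAPACITY))  # -> 100 (items with weights 3 and 5)
```

The point of domain-independent DP is that a modeler would write only the state definition and transitions, much like writing constraints in a MIP or CP model, and leave the search and memoization strategy to the solver.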
Established technologies are rapidly innovating, too
Other talks in the cluster covered MaxiCP, a CP solver with roots in MiniCP that is suitable for real-life use; recent developments in proving global optimality for Mixed Integer Non-Linear Programs (MINLP) in Xpress; and an interesting new heuristic solver based on a technique called Random-Key Optimization (RKO), which represents solutions as vectors of values between 0 and 1 and turns the modeling exercise into writing a solution decoder.
During an interview several years ago, an optimization team leader at a major logistics company told me that “optimization is a solved problem” and that new solver development was therefore not interesting. That isn’t what I see, though. Instead, I see the practical application of optimization continuing to grow beyond the boundaries of what today’s solvers can handle, and a ton of activity in development of those solvers to make them ever more powerful and flexible.