
Optimization Providers

Solvers

Provider / Model / Solvers

D-Wave
  • Quadratic Unconstrained Binary Optimization (QUBO): dwave.Advantage_system4.1, dwave.Advantage_system5.4, dwave.Advantage_system6.4, dwave.Advantage2_prototype2.5, dwave.hybrid_binary_quadratic_model_version2p
  • Constrained Quadratic Model (CQM): dwave.hybrid_constrained_quadratic_model_version1p
  • Discrete Quadratic Model (DQM): dwave.hybrid_discrete_quadratic_model_version1p
Simulators
  • Quadratic Unconstrained Binary Optimization (QUBO): sim.SimulatedAnnealingSampler, sim.RandomSampler
Fujitsu
  • Quadratic Unconstrained Binary Optimization (QUBO): fujitsu.DA3
Gurobi
  • Quadratic Unconstrained Binary Optimization (QUBO): gurobi.qubo
  • Mathematical Programming System (MPS): gurobi.mps
  • Linear Programming (LP): gurobi.lp
Hitachi
  • Quadratic Unconstrained Binary Optimization (QUBO): hitachi.cmos_annealer
Jij
  • Quadratic Unconstrained Binary Optimization (QUBO): jij.SA, jij.SQA
  • Constrained Quadratic Model (CQM): jij.LeapHybridCQM
NEC
  • Quadratic Unconstrained Binary Optimization (QUBO): nec.vector_annealer, nec_3.vector_annealer
Quantagonia
  • Quadratic Unconstrained Binary Optimization (QUBO): quantagonia.qubo
  • Mixed Integer Linear Programming (MILP): quantagonia.mps
  • Linear Programming (LP): quantagonia.lp
Toshiba
  • Quadratic Unconstrained Binary Optimization (QUBO): toshiba.qubo
  • Polynomial Unconstrained Binary Optimization (PUBO): toshiba.pubo
  • Quadratic Programming Problems: toshiba.qplib
  • Matrix Market: toshiba.qplib
  • QPLIB: toshiba.qplib

Parameters

Individual solvers may have specific parameters that can be set. For example, the D-Wave sampler has a num_reads parameter that specifies the number of samples to generate, and the Gurobi solver has a time_limit parameter that sets the maximum time the solver may run.

To set the parameters for a particular solver, we can pass options to the StrangeworksOptimizer during initialization.

from strangeworks_optimization import StrangeworksOptimizer
from strangeworks_optimization_models.parameter_models import *

...

so = StrangeworksOptimizer(model=model,
                           solver=solver,
                           options=options)

sw_job = so.run()

Each sampler has its own Parameter Model available for import from strangeworks_optimization_models.parameter_models.

See each Provider page for further details on individual solvers and their parameters.
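
For example, to request a specific number of reads from a D-Wave sampler, you might construct its parameter model like this (an illustrative sketch; num_reads is assumed here to be a field of DwaveSamplerParameterModel, and the full list of supported fields is on the D-Wave provider page):

from strangeworks_optimization_models.parameter_models import DwaveSamplerParameterModel

# num_reads is assumed to be a supported field of DwaveSamplerParameterModel;
# see the D-Wave provider page for the full list of parameters.
options = DwaveSamplerParameterModel(num_reads=100)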

Parameter Model / Solvers
DwaveSamplerParameterModel
  • dwave.Advantage_system4.1
  • dwave.Advantage_system5.4
  • dwave.Advantage_system6.4
  • dwave.Advantage2_prototype2.5
  • sim.SimulatedAnnealingSampler
  • sim.RandomSampler
DwaveLeapParameterModel
  • dwave.hybrid_binary_quadratic_model_version2p
  • dwave.hybrid_constrained_quadratic_model_version1p
  • dwave.hybrid_discrete_quadratic_model_version1p
EmbeddingParameterModel
  • DwaveSamplerParameterModel(embedding_parameters)
  • HitachiParameterModel(embedding_parameters)
FujitsuParameterModel
  • fujitsu.DA3
GurobiParameterModel
  • gurobi.qubo
  • gurobi.mps
HitachiParameterModel
  • hitachi.cmos_annealer
JijSAParameterModel
  • jij.SA
JijSQAParameterModel
  • jij.SQA
JijLeapHybridCQMParameterModel
  • jij.LeapHybridCQM
NEC2ParameterModel
  • nec.vector_annealer
NEC3ParameterModel
  • nec_3.vector_annealer
QuantagoniaParameterModel
  • quantagonia.qubo
  • quantagonia.mps
ToshibaParameterModel
  • toshiba.qubo
  • toshiba.pubo
  • toshiba.qplib
AquilaParameterModel
  • braket.aquila

Supported Problems

Model / Providers
  • Quadratic Unconstrained Binary Optimization (QUBO): D-Wave, Simulators, Fujitsu, Gurobi, Hitachi, Jij, NEC, Quantagonia, Toshiba
  • Constrained Quadratic Model (CQM): D-Wave, Jij
  • Discrete Quadratic Model (DQM): D-Wave
  • Mathematical Programming System (MPS): Gurobi
  • Linear Programming (LP): Gurobi, Quantagonia
  • Mixed Integer Linear Programming (MILP): Quantagonia
  • Polynomial Unconstrained Binary Optimization (PUBO): Toshiba
  • Quadratic Programming Problems: Toshiba
  • Matrix Market: Toshiba
  • QPLIB: Toshiba

Quadratic Unconstrained Binary Optimization

Quadratic Unconstrained Binary Optimization (QUBO) is a mathematical formulation used to model a wide range of combinatorial optimization problems. It seeks the minimum value of a quadratic function of binary variables. The QUBO problem can be represented as:

minimize: f(x) = x^T * Q * x
subject to: x_i ∈ {0, 1}

Where x is a vector of binary variables, Q is a symmetric matrix, and x^T is the transpose of x. QUBO can express many well-known combinatorial problems, such as the Traveling Salesman Problem, the Maximum Cut Problem, and the Graph Coloring Problem.

To create a simple QUBO model, you can use the following code:

from dimod import BinaryQuadraticModel

# Linear biases on each binary variable
linear = {1: -2, 2: -2, 3: -3, 4: -3, 5: -2}
# Quadratic biases on pairs of variables
quadratic = {(1, 2): 2, (1, 3): 2, (2, 4): 2, (3, 4): 2, (3, 5): 2, (4, 5): 2}
model = BinaryQuadraticModel(linear, quadratic, "BINARY")
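
To submit this model, pass it to the StrangeworksOptimizer together with one of the QUBO solvers from the table above. A short sketch that omits solver options (any other QUBO solver listed earlier could be substituted):

from strangeworks_optimization import StrangeworksOptimizer

# Any QUBO solver from the solvers table accepts a BinaryQuadraticModel;
# options from the matching parameter model can be added as shown in the Parameters section.
solver = "sim.SimulatedAnnealingSampler"

so = StrangeworksOptimizer(model=model, solver=solver)
sw_job = so.run()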

Mathematical Programming System

Mathematical Programming System (MPS) is a plain-text file format used to represent linear and mixed-integer programming problems. It is widely used in mathematical optimization and supported by a wide range of solvers. An MPS file encodes the objective function, the constraints, and the bounds of the decision variables of a mathematical programming problem.

MPS problems can be represented as:

minimize: c^T * x
subject to: Ax = b
l <= x <= u

Where c is a vector of coefficients, x is a vector of decision variables, A is a matrix of coefficients, b is a vector of constants, l is a vector of lower bounds, and u is a vector of upper bounds.

To create an MPS problem, you can load an MPS file using the following code:

from strangeworks_optimization_models.problem_models import MPSFile

model = MPSFile.read_file("path/to/file.mps")
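
Once loaded, the MPSFile model is submitted like any other problem. An illustrative sketch, assuming GurobiParameterModel accepts a time_limit field (see the Gurobi provider page for the authoritative parameter names):

from strangeworks_optimization import StrangeworksOptimizer
from strangeworks_optimization_models.parameter_models import GurobiParameterModel

# time_limit is assumed here to be a supported field of GurobiParameterModel,
# passed through as the solver's maximum run time.
options = GurobiParameterModel(time_limit=60)

so = StrangeworksOptimizer(model=model, solver="gurobi.mps", options=options)
sw_job = so.run()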

Presolve

Presolve is a crucial step in the optimization process that simplifies mathematical programming models before solving them. The aim of presolve is to reduce the problem size, improve numerical stability, and enhance the overall efficiency of the solver.

The gurobi.presolve service can be used to preprocess the problem before submitting to Gurobi or Quantagonia.

By applying the presolve techniques described below, Gurobi reduces the size and complexity of the problem, which can lead to significant improvements in solution time and accuracy.

The presolve process is iterative and can involve multiple passes until no further simplifications are possible. This systematic reduction ensures that the solver works on a more manageable and numerically stable problem.
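
A hedged sketch of how this might look, assuming gurobi.presolve is selected like any other solver slug and that the presolved problem is returned as the job result (consult the Gurobi provider page for the actual interface):

from strangeworks_optimization import StrangeworksOptimizer

# Hypothetical usage: submit an MPSFile model (as in the previous section) to the presolve service.
so = StrangeworksOptimizer(model=model, solver="gurobi.presolve")
sw_job = so.run()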

Model Simplification

During presolve, Gurobi attempts to simplify the original problem by applying various techniques, including:

  • Eliminating Redundant Constraints: Constraints that do not affect the feasible region can be removed. For example, constraints that are always satisfied due to other constraints.

  • Fixed Variable Detection: If the bounds of a variable are equal (i.e., x_i ∈ [b, b]), the variable is fixed to that value and can be substituted out of the problem:

    x_i = b, so substitute x_i with b.

  • Bound Tightening: Improve variable bounds by analyzing constraints. For instance, if a constraint a_1*x_1 + a_2*x_2 <= b implies tighter bounds on x_1 and x_2 than those explicitly specified, the bounds are updated (a small worked instance follows this list):

    x_1 <= (b - a_2*x_2) / a_1,    x_2 <= (b - a_1*x_1) / a_2
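
A small worked instance of bound tightening (illustrative numbers, not taken from any particular model): given the constraint 2*x_1 + 3*x_2 <= 12 with x_2 >= 0, the implied bound on x_1 is tightest at x_2 = 0, so presolve can tighten the upper bound of x_1 to 6 even if a looser explicit bound was declared:

    x_1 <= (12 - 3*x_2) / 2 <= 12 / 2 = 6
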
Coefficient Reduction

Gurobi looks to reduce large coefficients, which can cause numerical instability:

  • Scaling: Adjust the scale of the problem to reduce the range of coefficients. This can help mitigate numerical issues and improve solver performance.
Constraint Aggregation

Constraints can sometimes be combined to form simpler or stronger constraints; for example, two constraints may be aggregated into a single constraint. A constraint that is implied by the other constraints is redundant and is removed:

If a^T * x <= b is always true given the other constraints, remove it.
Detecting Special Structures

Gurobi identifies and exploits special structures in the model:

  • Network Flow: Special handling for constraints that form a network flow, which can be solved more efficiently with specialized algorithms.
  • Knapsack Constraints: Constraints that resemble knapsack problems can be treated differently for enhanced performance.
Variable Substitution

If one variable can be expressed as a function of others (e.g., x_3 = x_1 + x_2), Gurobi substitutes this variable to reduce the problem size.

For example, given x_3 = x_1 + x_2:

Replace x_3 with x_1 + x_2 in all constraints.
Dual Presolve

Gurobi also applies presolve techniques to the dual problem, which can uncover additional simplifications in the primal problem.

Linear Programming

Linear Programming (LP) is a mathematical optimization problem that involves finding the maximum or minimum value of a linear objective function subject to linear constraints. The LP problem can be represented as:

minimize: c^T * x
subject to: Ax = b
l <= x <= u

Here is an example of a problem formulation in LP format where x4 is a general integer:

Maximize
obj: x1 + 2 x2 + 3 x3 + x4
Subject To
c1: - x1 + x2 + x3 + 10 x4 <= 20
c2: x1 - 3 x2 + x3 <= 30
c3: x2 - 3.5 x4 = 0
Bounds
0 <= x1 <= 40
2 <= x4 <= 3
General
x4
End

Mixed Integer Linear Programming

Mixed Integer Linear Programming (MILP) is a type of mathematical optimization problem that involves finding the minimum or maximum value of a linear objective function subject to linear constraints, with the added requirement that some or all of the variables take integer values.

Linear Programming (LP) and Mixed Integer Linear Programming (MILP) are both optimization techniques used to find the best outcome in a mathematical model whose requirements are represented by linear relationships. The key difference between the two lies in the nature of the variables involved:

  • Linear Programming (LP): In LP, all the decision variables are continuous, meaning they can take any value within a given range. The objective is to find the maximum or minimum value of a linear objective function subject to linear equality and inequality constraints.

  • Mixed Integer Linear Programming (MILP): In MILP, some or all of the decision variables are restricted to be integers. This adds complexity to the problem, as the solution space becomes discrete rather than continuous. MILP problems involve finding the optimal value of a linear objective function subject to linear constraints, with the additional requirement that some variables must be integers.

A MILP is expressed using the Mathematical Programming System (MPS) file format. The MPS format is versatile and can represent a wide range of optimization problems, including both LP and MILP; it is a standard way of encoding linear programming problems and is widely supported by optimization solvers.

Mathematically, a MILP problem can be represented as:

minimize: c^T * x
subject to: Ax = b
l <= x <= u
x_i ∈ Z for some i
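
A MILP stored in MPS format can be loaded with MPSFile.read_file, as shown above, and submitted to a solver that accepts MPS input, for example quantagonia.mps. A brief sketch (Quantagonia-specific options, if needed, would go in QuantagoniaParameterModel):

from strangeworks_optimization import StrangeworksOptimizer
from strangeworks_optimization_models.problem_models import MPSFile

# Load a MILP encoded as an MPS file and submit it to the Quantagonia MPS solver.
model = MPSFile.read_file("path/to/file.mps")

so = StrangeworksOptimizer(model=model, solver="quantagonia.mps")
sw_job = so.run()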

Constrained Quadratic Model

Constrained Quadratic Model (CQM) is used to find the minimum value of a quadratic function of binary, integer, or continuous variables subject to equality and inequality constraints. The CQM problem can be represented as:

minimize: f(x) = x^T * Q * x + c^T * x
subject to: Ax = b
Gx <= h

Where x is a vector of decision variables, Q is a symmetric matrix, c is a vector of coefficients, A is a matrix of coefficients, b is a vector of constants, G is a matrix of coefficients, and h is a vector of constants.

Here's an example of how to create a CQM for the D-Wave Leap solver using the dimod library:

from dimod import Binary, ConstrainedQuadraticModel

# Bin-packing style example: y_j indicates whether bin j is used and
# x_i_j indicates whether item i is placed in bin j.
weights = [0.9, 0.7, 0.2, 0.1]
capacity = 1
y = [Binary(f"y_{j}") for j in range(len(weights))]
x = [[Binary(f"x_{i}_{j}") for j in range(len(weights))] for i in range(len(weights))]
model = ConstrainedQuadraticModel()
model.set_objective(sum(y))  # minimize the number of bins used

# Each item must be placed in exactly one bin
for i in range(len(weights)):
    model.add_constraint(sum(x[i]) == 1, label=f"item_placing_{i}")

# The items placed in a bin must not exceed its capacity
for j in range(len(weights)):
    model.add_constraint(
        sum(weights[i] * x[i][j] for i in range(len(weights))) - y[j] * capacity <= 0,
        label=f"capacity_bin_{j}",
    )
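
This CQM can then be submitted to the D-Wave Leap hybrid CQM solver. A short sketch (solver-specific options, if any, belong in DwaveLeapParameterModel):

from strangeworks_optimization import StrangeworksOptimizer

# The hybrid CQM solver from the solvers table accepts ConstrainedQuadraticModel problems.
so = StrangeworksOptimizer(model=model, solver="dwave.hybrid_constrained_quadratic_model_version1p")
sw_job = so.run()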

Jij also supports CQM problems, but they must be created using the jijmodeling library and the options must contain a feed_dict:

from strangeworks_optimization_models.parameter_models import JijLeapHybridCQMParameterModel
import jijmodeling as jm

d = jm.Placeholder("d", ndim=1) # Define variable d
d.len_at(0, latex="N") # Set latex expression of the length of d
x = jm.BinaryVar("x", shape=(d.shape[0],)) # Define binary variable
i = jm.Element("i", belong_to=(0, d.shape[0])) # Define dummy index in summation
model = jm.Problem("simple problem") # Create problem instance
model += jm.sum(i, d[i] * x[i]) # Add objective function
model += jm.Constraint("one hot", jm.sum(i, x[i]) == 1) # Add constraint condition
model # Display the problem

solver = "jij.LeapHybridCQM"
options = JijLeapHybridCQMParameterModel(feed_dict={"d": [1.0, 0.1, -2.0, 1.0]})
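
The jijmodeling problem and its options are then submitted in the same way as the other models, following the pattern from the Parameters section:

from strangeworks_optimization import StrangeworksOptimizer

# feed_dict supplies the data for the Placeholder d defined in the model above.
so = StrangeworksOptimizer(model=model, solver=solver, options=options)
sw_job = so.run()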

Discrete Quadratic Model

Discrete Quadratic Model (DQM) is used to find the minimum value of a quadratic function of discrete variables, where each variable takes one of a finite set of cases. The DQM problem can be represented as:

minimize: f(x) = ∑_i a_i(x_i) + ∑_{i,j} b_ij(x_i, x_j)
subject to: x_i ∈ {0, 1, ..., n_i - 1}

Where each x_i is a discrete variable with n_i cases, a_i(x_i) is the linear bias for the chosen case of x_i, and b_ij(x_i, x_j) is the quadratic bias for the chosen pair of cases of x_i and x_j.

Here's an example DQM:

from dimod import DiscreteQuadraticModel
import random

model = DiscreteQuadraticModel()
# Ten discrete variables, each with four cases
for i in range(10):
    model.add_variable(4)
# Random quadratic biases between randomly chosen cases of each pair of variables
for i in range(9):
    for j in range(i + 1, 10):
        model.set_quadratic_case(i, random.randrange(0, 4), j, random.randrange(0, 4), random.random())
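
Like the CQM example, the DQM is submitted through the StrangeworksOptimizer, in this case to the Leap hybrid DQM solver:

from strangeworks_optimization import StrangeworksOptimizer

# The hybrid DQM solver from the solvers table accepts DiscreteQuadraticModel problems.
so = StrangeworksOptimizer(model=model, solver="dwave.hybrid_discrete_quadratic_model_version1p")
sw_job = so.run()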

Quadratic Programming Problems

QPLIB is a collection of test problems for quadratic programming (QP) and quadratic unconstrained binary optimization (QUBO).

You can download QPLIB files from the QPLIB website.

QPLIB problems can be represented as:

minimize: 0.5 * x^T * Q * x + c^T * x
subject to: Ax = b
l <= x <= u

Where Q is a symmetric matrix, c is a vector of coefficients, x is a vector of decision variables, A is a matrix of coefficients, b is a vector of constants, l is a vector of lower bounds, and u is a vector of upper bounds.

Note: The QPLIB file format is used to store both Quadratic Programming and PUBO problems. Please make sure you are using the correct solver for your problem.

To load a QPLIB file, you can use the following code:

from strangeworks_optimization_models.problem_models import QplibFile

model = QplibFile.read_file("path/to/file.qplib")
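
The loaded QplibFile model can then be submitted to the Toshiba QPLIB solver listed above. An illustrative sketch (Toshiba-specific options, if needed, belong in ToshibaParameterModel):

from strangeworks_optimization import StrangeworksOptimizer

# toshiba.qplib accepts QPLIB problems according to the solvers table.
so = StrangeworksOptimizer(model=model, solver="toshiba.qplib")
sw_job = so.run()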

Polynomial Unconstrained Binary Optimization

Polynomial Unconstrained Binary Optimization (PUBO) is used to find the minimum value of a polynomial function of binary variables. The PUBO problem can be represented as:

minimize: f(x) = ∑_i ∑_j ∑_k a_{ijk} x_i x_j x_k + ∑_i ∑_j a_{ij} x_i x_j + ∑_i a_i x_i + c
subject to: x_i ∈ {0, 1}

Where x is a vector of binary variables, a is a tensor of coefficients, and c is a constant.

To load a PUBO, you can use the following code to read in a QPLIB file that stores a PUBO:

from strangeworks_optimization_models.problem_models import QplibFile

model = QplibFile.read_file("path/to/file.qplib")

⚠️: The QPLIB file format is used to store both Quadratic Programming and PUBO problems. Please make sure you are using the correct solver for your problem.

Matrix Market

Matrix Market is a NIST-sponsored repository of test matrices for use in numerical algorithms.

You can download Matrix Market files from the Matrix Market website.

They are written in the following format:

%%MatrixMarket matrix coordinate real general
% comment
M N L
I1 J1 A1
I2 J2 A2
...

Where M is the number of rows, N is the number of columns, L is the number of non-zero elements, I is the row index, J is the column index, and A is the value of the non-zero element.

To load a Matrix Market file, you can use the following code:

from strangeworks_optimization_models.problem_models import MatrixMarket

model = MatrixMarket.read_file("path/to/matrix/market.txt")