Flux.jl

Flux.jl is a library for machine learning in Julia.

The upstream documentation is available at https://fluxml.ai/Flux.jl/stable/.

Supported layers

MathOptAI supports embedding a Flux model into JuMP if it is a Flux.Chain composed of supported layers, such as Flux.Dense, and supported activation functions, such as Flux.relu, Flux.sigmoid, Flux.softplus, and Flux.tanh. The examples below use Flux.Dense and Flux.relu.

Basic example

Use MathOptAI.add_predictor to embed a Flux.Chain into a JuMP model:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ 0.9287512898445129 x[1] - moai_Affine[1] = 0
  └ -0.6737352609634399 x[1] - moai_Affine[2] = 0
MathOptAI.ReLU()
├ variables [2]
│ ├ moai_ReLU[1]
│ └ moai_ReLU[2]
└ constraints [4]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_ReLU[1] - max(0, moai_Affine[1]) = 0
  ├ moai_ReLU[2] ≥ 0
  └ moai_ReLU[2] - max(0, moai_Affine[2]) = 0
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
  └ -1.0755665302276611 moai_ReLU[1] + 0.6958913207054138 moai_ReLU[2] - moai_Affine[1] = 0
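A useful sanity check is to fix the input, solve the resulting feasibility problem, and confirm that the optimized value of y agrees with Flux's own forward pass. The sketch below is hedged: it assumes Ipopt.jl is installed, but any nonlinear solver able to handle the max operator in the ReLU constraints should work.

```julia
# A minimal sketch, assuming Ipopt.jl is installed: fix the input, solve
# the feasibility problem, and compare y against Flux's forward pass.
using JuMP, Flux, MathOptAI, Ipopt

predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1))
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x[1:1])
fix.(x, 0.5)  # pin the input so the solve just evaluates the network
y, _ = MathOptAI.add_predictor(model, predictor, x)
optimize!(model)
value(y[1])                    # should agree (to solver tolerance) with
only(predictor(Float32[0.5]))  # Flux's own prediction
```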

Reduced-space

Use the reduced_space = true keyword argument to formulate a reduced-space model:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x; reduced_space = true);

julia> y
1-element Vector{JuMP.NonlinearExpr}:
 ((+(0) + (-0.18692143261432648 * max(0, 0.7278303503990173 x[1]))) + (-0.17668312788009644 * max(0, 0.4086625874042511 x[1]))) + 0

julia> formulation
ReducedSpace(Affine(A, b) [input: 1, output: 2])
├ variables [0]
└ constraints [0]
ReducedSpace(MathOptAI.ReLU())
├ variables [0]
└ constraints [0]
ReducedSpace(Affine(A, b) [input: 2, output: 1])
├ variables [0]
└ constraints [0]
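The trade-off between the two formulations can be seen by counting variables: the full-space formulation adds one decision variable per intermediate neuron, while the reduced-space formulation adds none and instead returns nonlinear expressions. A solver-free sketch:

```julia
# Compare formulation sizes: full-space adds intermediate variables,
# reduced-space adds none.
using JuMP, Flux, MathOptAI

chain = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1))

model_full = Model()
@variable(model_full, x_full[1:1])
y_full, _ = MathOptAI.add_predictor(model_full, chain, x_full)

model_reduced = Model()
@variable(model_reduced, x_reduced[1:1])
y_reduced, _ =
    MathOptAI.add_predictor(model_reduced, chain, x_reduced; reduced_space = true)

num_variables(model_full)     # 6: the input plus the 2 + 2 + 1 intermediate variables
num_variables(model_reduced)  # 1: just the input
```

Fewer variables can help some nonlinear solvers, at the cost of more deeply nested expressions.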

Gray-box

Use the gray_box = true keyword argument to embed the network as a vector nonlinear operator:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x; gray_box = true);

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_GrayBox[1]

julia> formulation
MathOptAI.GrayBox{Flux.Chain{Tuple{Flux.Dense{typeof(NNlib.relu), Matrix{Float32}, Vector{Float32}}, Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}(Chain(Dense(1 => 2, relu), Dense(2 => 1)), "cpu", true)
├ variables [1]
│ └ moai_GrayBox[1]
└ constraints [1]
  └ [x[1], moai_GrayBox[1]] ∈ VectorNonlinearOracle{Float64}(; dimension = 2, l = [0.0], u = [0.0], ..., )

Change how layers are formulated

Pass a dictionary to the config keyword argument that maps Flux activation functions to MathOptAI predictors:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(
           model,
           predictor,
           x;
           config = Dict(Flux.relu => MathOptAI.ReLUSOS1()),
       );

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ -0.14965906739234924 x[1] - moai_Affine[1] = 0
  └ -0.6416634917259216 x[1] - moai_Affine[2] = 0
MathOptAI.ReLUSOS1()
├ variables [4]
│ ├ moai_ReLU[1]
│ ├ moai_ReLU[2]
│ ├ moai_z[1]
│ └ moai_z[2]
└ constraints [8]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_z[1] ≥ 0
  ├ moai_Affine[1] - moai_ReLU[1] + moai_z[1] = 0
  ├ [moai_ReLU[1], moai_z[1]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
  ├ moai_ReLU[2] ≥ 0
  ├ moai_z[2] ≥ 0
  ├ moai_Affine[2] - moai_ReLU[2] + moai_z[2] = 0
  └ [moai_ReLU[2], moai_z[2]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
  └ -1.2482085227966309 moai_ReLU[1] + 1.4032899141311646 moai_ReLU[2] - moai_Affine[1] = 0
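The same mechanism works for other ReLU reformulations. As a sketch, MathOptAI.ReLUBigM formulates each ReLU with a binary variable and big-M constraints, which suits mixed-integer linear solvers; the big-M value 100.0 below is an illustrative assumption and must be a valid bound on the affine pre-activations.

```julia
# A sketch using MathOptAI.ReLUBigM: one binary variable per neuron plus
# big-M constraints. The value 100.0 is an assumed bound, not a recipe.
using JuMP, Flux, MathOptAI

chain = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1))
model = Model()
@variable(model, -1 <= x[1:1] <= 1)  # bounded inputs keep big-M valid
y, formulation = MathOptAI.add_predictor(
    model,
    chain,
    x;
    config = Dict(Flux.relu => MathOptAI.ReLUBigM(100.0)),
)
```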

Custom layers

If your Flux model contains a custom layer, define new methods for MathOptAI.build_predictor and MathOptAI.add_predictor:

julia> using JuMP, Flux, MathOptAI

julia> struct CustomLayer{T<:Flux.Chain}
           chain::T
       end

julia> (model::CustomLayer)(x) = model.chain(x) + x

julia> struct CustomPredictor <: MathOptAI.AbstractPredictor
           p::MathOptAI.Pipeline
       end

julia> function MathOptAI.build_predictor(model::CustomLayer)
           predictor = MathOptAI.build_predictor(model.chain)
           return CustomPredictor(predictor)
       end

julia> function MathOptAI.add_predictor(
           model::JuMP.AbstractModel,
           predictor::CustomPredictor,
           x::Vector;
           kwargs...,
       )
           y, formulation = MathOptAI.add_predictor(model, predictor.p, x; kwargs...)
           @assert length(x) == length(y)
           return y .+ x, formulation
       end

julia> model = Model();

julia> @variable(model, x[i in 1:3]);

julia> predictor = Flux.Chain(CustomLayer(Flux.Chain(Flux.Dense(3 => 3, Flux.relu))))
Chain(
  CustomLayer(
    Chain(
      Dense(3 => 3, relu),              # 12 parameters
    ),
  ),
)

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);

julia> y
3-element Vector{JuMP.AffExpr}:
 moai_ReLU[1] + x[1]
 moai_ReLU[2] + x[2]
 moai_ReLU[3] + x[3]

julia> formulation
Affine(A, b) [input: 3, output: 3]
├ variables [3]
│ ├ moai_Affine[1]
│ ├ moai_Affine[2]
│ └ moai_Affine[3]
└ constraints [3]
  ├ 0.5679179430007935 x[1] - 0.8486042022705078 x[2] - 0.10187911987304688 x[3] - moai_Affine[1] = 0
  ├ -0.7868834733963013 x[1] - 0.9351221323013306 x[2] - 0.46769726276397705 x[3] - moai_Affine[2] = 0
  └ -0.9725415706634521 x[1] + 0.8977093696594238 x[2] - 0.453938364982605 x[3] - moai_Affine[3] = 0
MathOptAI.ReLU()
├ variables [3]
│ ├ moai_ReLU[1]
│ ├ moai_ReLU[2]
│ └ moai_ReLU[3]
└ constraints [6]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_ReLU[1] - max(0, moai_Affine[1]) = 0
  ├ moai_ReLU[2] ≥ 0
  ├ moai_ReLU[2] - max(0, moai_Affine[2]) = 0
  ├ moai_ReLU[3] ≥ 0
  └ moai_ReLU[3] - max(0, moai_Affine[3]) = 0