Flux.jl

Flux.jl is a library for machine learning in Julia.

The upstream documentation is available at https://fluxml.ai/Flux.jl/stable/.
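
For orientation, here is a minimal sketch (assuming only that Flux is installed) that builds the two-layer network used throughout this page and evaluates it on a sample input. The weights are randomly initialized, so the numeric output will differ between runs:

julia> using Flux

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> predictor(Float32[1.0]);  # forward pass; returns a 1-element Vector{Float32}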

Supported layers

MathOptAI supports embedding a Flux model into JuMP if it is a Flux.Chain composed of supported layers and activation functions, such as the Flux.Dense layers and Flux.relu activation used in the examples below; consult the MathOptAI documentation for the complete list.

Basic example

Use MathOptAI.add_predictor to embed a Flux.Chain into a JuMP model:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ -0.7973560094833374 x[1] - moai_Affine[1] = 0
  └ 0.5516365170478821 x[1] - moai_Affine[2] = 0
MathOptAI.ReLU()
├ variables [2]
│ ├ moai_ReLU[1]
│ └ moai_ReLU[2]
└ constraints [4]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_ReLU[1] - max(0, moai_Affine[1]) = 0
  ├ moai_ReLU[2] ≥ 0
  └ moai_ReLU[2] - max(0, moai_Affine[2]) = 0
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
  └ 0.9363561272621155 moai_ReLU[1] - 0.20831146836280823 moai_ReLU[2] - moai_Affine[1] = 0
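
The embedded output y behaves like any other JuMP decision variables. As a minimal sketch (assuming a nonlinear solver such as Ipopt is installed; it is not required by MathOptAI itself), you can fix the input and minimize the output to recover the network's prediction:

julia> using Ipopt

julia> set_optimizer(model, Ipopt.Optimizer)

julia> set_silent(model)

julia> @constraint(model, x[1] == 1.0);

julia> @objective(model, Min, y[1]);

julia> optimize!(model)

julia> value.(y)  # approximately predictor(Float32[1.0]), up to solver tolerances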

Reduced-space

Use the reduced_space = true keyword to formulate a reduced-space model:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x; reduced_space = true);

julia> y
1-element Vector{JuMP.NonlinearExpr}:
 ((+(0) + (-0.6146305203437805 * max(0, 1.3081188201904297 x[1]))) + (-1.4120771884918213 * max(0, 0.888702392578125 x[1]))) + 0

julia> formulation
ReducedSpace(Affine(A, b) [input: 1, output: 2])
├ variables [0]
└ constraints [0]
ReducedSpace(MathOptAI.ReLU())
├ variables [0]
└ constraints [0]
ReducedSpace(Affine(A, b) [input: 2, output: 1])
├ variables [0]
└ constraints [0]

Gray-box

Use the gray_box = true keyword to embed the network as a vector nonlinear operator:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x; gray_box = true);

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Flux[1]

julia> formulation
MathOptAI.GrayBox{Flux.Chain{Tuple{Flux.Dense{typeof(NNlib.relu), Matrix{Float32}, Vector{Float32}}, Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}(Chain(Dense(1 => 2, relu), Dense(2 => 1)), "cpu", true)
├ variables [1]
│ └ moai_Flux[1]
└ constraints [1]
  └ [x[1], moai_Flux[1]] ∈ VectorNonlinearOracle{Float64}(; dimension = 2, l = [0.0], u = [0.0], ..., )

Change how layers are formulated

Pass the config keyword a dictionary that maps Flux activation functions to MathOptAI predictors:

julia> using JuMP, Flux, MathOptAI

julia> predictor = Flux.Chain(Flux.Dense(1 => 2, Flux.relu), Flux.Dense(2 => 1));

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(
           model,
           predictor,
           x;
           config = Dict(Flux.relu => MathOptAI.ReLUSOS1()),
       );

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ 1.1013106107711792 x[1] - moai_Affine[1] = 0
  └ 0.4768870174884796 x[1] - moai_Affine[2] = 0
MathOptAI.ReLUSOS1()
├ variables [4]
│ ├ moai_ReLU[1]
│ ├ moai_ReLU[2]
│ ├ moai_z[1]
│ └ moai_z[2]
└ constraints [8]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_z[1] ≥ 0
  ├ moai_Affine[1] - moai_ReLU[1] + moai_z[1] = 0
  ├ [moai_ReLU[1], moai_z[1]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
  ├ moai_ReLU[2] ≥ 0
  ├ moai_z[2] ≥ 0
  ├ moai_Affine[2] - moai_ReLU[2] + moai_z[2] = 0
  └ [moai_ReLU[2], moai_z[2]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [2]
  ├ moai_Affine[1] ≤ 0
  └ -1.1265441179275513 moai_ReLU[1] - 0.4711526930332184 moai_ReLU[2] - moai_Affine[1] = 0
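
Other formulations can be substituted in the same way. As a hedged sketch (assuming your version of MathOptAI provides MathOptAI.ReLUBigM; the bound of 100.0 is an arbitrary illustrative value, and the model and variable names here are hypothetical), a big-M reformulation of the ReLU activations looks like:

julia> model_bigm = Model();

julia> @variable(model_bigm, x_bigm[1:1]);

julia> y_bigm, formulation_bigm = MathOptAI.add_predictor(
           model_bigm,
           predictor,
           x_bigm;
           config = Dict(Flux.relu => MathOptAI.ReLUBigM(100.0)),
       );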

Custom layers

If your Flux model contains a custom layer, define new methods for MathOptAI.build_predictor and MathOptAI.add_predictor:

julia> using JuMP, Flux, MathOptAI

julia> struct CustomLayer{T<:Flux.Chain}
           chain::T
       end

julia> (model::CustomLayer)(x) = model.chain(x) + x

julia> struct CustomPredictor <: MathOptAI.AbstractPredictor
           p::MathOptAI.Pipeline
       end

julia> function MathOptAI.build_predictor(model::CustomLayer)
           predictor = MathOptAI.build_predictor(model.chain)
           return CustomPredictor(predictor)
       end

julia> function MathOptAI.add_predictor(
           model::JuMP.AbstractModel,
           predictor::CustomPredictor,
           x::Vector;
           kwargs...,
       )
           y, formulation = MathOptAI.add_predictor(model, predictor.p, x; kwargs...)
           @assert length(x) == length(y)
           return y .+ x, formulation
       end

julia> model = Model();

julia> @variable(model, x[i in 1:3]);

julia> predictor = Flux.Chain(CustomLayer(Flux.Chain(Flux.Dense(3 => 3, Flux.relu))))
Chain(
  CustomLayer(
    Chain(
      Dense(3 => 3, relu),              # 12 parameters
    ),
  ),
)

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);

julia> y
3-element Vector{JuMP.AffExpr}:
 moai_ReLU[1] + x[1]
 moai_ReLU[2] + x[2]
 moai_ReLU[3] + x[3]

julia> formulation
Affine(A, b) [input: 3, output: 3]
├ variables [3]
│ ├ moai_Affine[1]
│ ├ moai_Affine[2]
│ └ moai_Affine[3]
└ constraints [3]
  ├ 0.8237401247024536 x[1] - 0.6359617710113525 x[2] - 0.3679739236831665 x[3] - moai_Affine[1] = 0
  ├ 0.02057063579559326 x[1] + 0.059383511543273926 x[2] + 0.47744596004486084 x[3] - moai_Affine[2] = 0
  └ -0.9249190092086792 x[1] - 0.005418300628662109 x[2] - 0.1547858715057373 x[3] - moai_Affine[3] = 0
MathOptAI.ReLU()
├ variables [3]
│ ├ moai_ReLU[1]
│ ├ moai_ReLU[2]
│ └ moai_ReLU[3]
└ constraints [6]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_ReLU[1] - max(0, moai_Affine[1]) = 0
  ├ moai_ReLU[2] ≥ 0
  ├ moai_ReLU[2] - max(0, moai_Affine[2]) = 0
  ├ moai_ReLU[3] ≥ 0
  └ moai_ReLU[3] - max(0, moai_Affine[3]) = 0
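
Before embedding a custom layer, it can help to evaluate the Flux predictor directly and confirm that it computes what you expect. A minimal sketch with an arbitrary test input:

julia> x_test = Float32[0.1, 0.2, 0.3];  # arbitrary sample input

julia> predictor(x_test);  # computes chain(x_test) + x_test, a 3-element Vector{Float32}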