# Lux.jl
[Lux.jl](https://lux.csail.mit.edu/stable/) is a library for machine learning in Julia. The upstream documentation is available at https://lux.csail.mit.edu/stable/.
## Supported layers

MathOptAI supports embedding a Lux model into JuMP if it is a `Lux.Chain` composed of supported layers and activation functions (the examples on this page use `Lux.Dense` and `Lux.relu`).
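Before embedding a chain into JuMP, it can be useful to construct and evaluate it in plain Lux first. The following is a minimal sketch that mirrors the chain used in the examples below; the layer sizes are arbitrary, and `Lux.apply` is the standard Lux forward-evaluation call:

```julia
using Lux, Random

# Build a Chain from Lux.Dense layers and the Lux.relu activation,
# mirroring the chain used in the examples on this page.
rng = Random.MersenneTwister()
chain = Lux.Chain(Lux.Dense(1 => 2, Lux.relu), Lux.Dense(2 => 1))

# Initialize the parameters and state. The (chain, parameters, state)
# tuple is the predictor that MathOptAI.add_predictor expects.
parameters, state = Lux.setup(rng, chain)

# Evaluate the chain on a sample input to check that it runs.
y, _ = Lux.apply(chain, [0.5], parameters, state)
```

If this evaluation works, the same `(chain, parameters, state)` tuple can be passed to `MathOptAI.add_predictor` as shown in the examples that follow.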
## Basic example

Use `MathOptAI.add_predictor` to embed a tuple (containing the `Lux.Chain`, the parameters, and the state) into a JuMP model:
```julia
julia> using JuMP, Lux, MathOptAI, Random

julia> rng = Random.MersenneTwister();

julia> chain = Lux.Chain(Lux.Dense(1 => 2, Lux.relu), Lux.Dense(2 => 1))
Chain(
    layer_1 = Dense(1 => 2, relu),  # 4 parameters
    layer_2 = Dense(2 => 1),        # 3 parameters
)         # Total: 7 parameters,
          #        plus 0 states.

julia> parameters, state = Lux.setup(rng, chain);

julia> predictor = (chain, parameters, state);

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ -1.1502337455749512 x[1] - moai_Affine[1] = 0.9705801010131836
  └ 0.14513154327869415 x[1] - moai_Affine[2] = 0.4138000011444092
MathOptAI.ReLU()
├ variables [2]
│ ├ moai_ReLU[1]
│ └ moai_ReLU[2]
└ constraints [4]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_ReLU[1] - max(0.0, moai_Affine[1]) = 0
  ├ moai_ReLU[2] ≥ 0
  └ moai_ReLU[2] - max(0.0, moai_Affine[2]) = 0
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
  └ -0.9236608147621155 moai_ReLU[1] + 0.7170746922492981 moai_ReLU[2] - moai_Affine[1] = -0.07249476760625839
```
## Reduced-space

Use the `reduced_space = true` keyword argument to formulate a reduced-space model:
```julia
julia> using JuMP, Lux, MathOptAI, Random

julia> rng = Random.MersenneTwister();

julia> chain = Lux.Chain(Lux.Dense(1 => 2, Lux.relu), Lux.Dense(2 => 1))
Chain(
    layer_1 = Dense(1 => 2, relu),  # 4 parameters
    layer_2 = Dense(2 => 1),        # 3 parameters
)         # Total: 7 parameters,
          #        plus 0 states.

julia> parameters, state = Lux.setup(rng, chain);

julia> predictor = (chain, parameters, state);

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(model, predictor, x; reduced_space = true);

julia> y
1-element Vector{JuMP.NonlinearExpr}:
 ((+(0.0) + (0.36622533202171326 * max(0.0, -3.4201807975769043 x[1] + 0.3509547710418701))) + (0.025702303275465965 * max(0.0, 3.3598268032073975 x[1] - 0.6599621772766113))) + 0.12084849923849106

julia> formulation
ReducedSpace(Affine(A, b) [input: 1, output: 2])
├ variables [0]
└ constraints [0]
ReducedSpace(MathOptAI.ReLU())
├ variables [0]
└ constraints [0]
ReducedSpace(Affine(A, b) [input: 2, output: 1])
├ variables [0]
└ constraints [0]
```
## Gray-box

The Lux extension does not yet support the `gray_box` keyword argument.
## Change how layers are formulated

Pass a dictionary to the `config` keyword argument that maps Lux activation functions to a MathOptAI predictor:
```julia
julia> using JuMP, Lux, MathOptAI, Random

julia> rng = Random.MersenneTwister();

julia> chain = Lux.Chain(Lux.Dense(1 => 2, Lux.relu), Lux.Dense(2 => 1))
Chain(
    layer_1 = Dense(1 => 2, relu),  # 4 parameters
    layer_2 = Dense(2 => 1),        # 3 parameters
)         # Total: 7 parameters,
          #        plus 0 states.

julia> parameters, state = Lux.setup(rng, chain);

julia> predictor = (chain, parameters, state);

julia> model = Model();

julia> @variable(model, x[1:1]);

julia> y, formulation = MathOptAI.add_predictor(
           model,
           predictor,
           x;
           config = Dict(Lux.relu => MathOptAI.ReLUSOS1()),
       );

julia> y
1-element Vector{JuMP.VariableRef}:
 moai_Affine[1]

julia> formulation
Affine(A, b) [input: 1, output: 2]
├ variables [2]
│ ├ moai_Affine[1]
│ └ moai_Affine[2]
└ constraints [2]
  ├ 1.1781089305877686 x[1] - moai_Affine[1] = -0.9110872745513916
  └ -2.8652279376983643 x[1] - moai_Affine[2] = 0.09911322593688965
MathOptAI.ReLUSOS1()
├ variables [4]
│ ├ moai_ReLU[1]
│ ├ moai_ReLU[2]
│ ├ moai_z[1]
│ └ moai_z[2]
└ constraints [8]
  ├ moai_ReLU[1] ≥ 0
  ├ moai_z[1] ≥ 0
  ├ moai_Affine[1] - moai_ReLU[1] + moai_z[1] = 0
  ├ [moai_ReLU[1], moai_z[1]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
  ├ moai_ReLU[2] ≥ 0
  ├ moai_z[2] ≥ 0
  ├ moai_Affine[2] - moai_ReLU[2] + moai_z[2] = 0
  └ [moai_ReLU[2], moai_z[2]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
Affine(A, b) [input: 2, output: 1]
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
  └ 0.5711122751235962 moai_ReLU[1] - 1.1090744733810425 moai_ReLU[2] - moai_Affine[1] = -0.373943030834198
```