Model Parameterisations
Lipschitz-Bounded Deep Networks
RobustNeuralNetworks.AbstractLBDNParams — Type

abstract type AbstractLBDNParams{T, L} end

Direct parameterisation for Lipschitz-bounded deep networks.
RobustNeuralNetworks.DenseLBDNParams — Type

DenseLBDNParams{T}(nu, nh, ny, γ; <keyword arguments>) where T

Construct direct parameterisation of a dense (fully-connected) LBDN.
This is the equivalent of a multi-layer perceptron (e.g., Flux.Dense) with a guaranteed Lipschitz bound of γ. Note that the Lipschitz bound can be made a learnable parameter.
Arguments
- nu::Int: Number of inputs.
- nh::Union{Vector{Int}, NTuple{N, Int}}: Number of hidden units for each layer. E.g., nh = [5,10] for 2 hidden layers with 5 and 10 nodes, respectively.
- ny::Int: Number of outputs.
- γ::Real=T(1): Lipschitz upper bound, must be positive.
Keyword arguments
- nl::Function=relu: Sector-bounded static nonlinearity.
- learn_γ::Bool=false: Whether to make the Lipschitz bound γ a learnable parameter.
See DirectLBDNParams for documentation of keyword arguments initW, initb, rng.
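For example, a minimal usage sketch (assuming Flux is loaded, and using the package's LBDN wrapper to build an explicit model for evaluation, as in the package examples):

```julia
using Flux, RobustNeuralNetworks

# Direct parameterisation: 2 inputs, hidden layers of 10 and 5 nodes,
# 1 output, and a Lipschitz bound of γ = 2
model_ps = DenseLBDNParams{Float32}(2, [10, 5], 1, 2.0f0; nl=relu)

# Convert to an explicit model and evaluate on a batch of inputs
model = LBDN(model_ps)
u = randn(Float32, 2, 16)   # 16 samples, one per column
y = model(u)
```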
RobustNeuralNetworks.DirectLBDNParams — Type

DirectLBDNParams{T}(nu, nh, ny, γ; <keyword arguments>) where T

Construct direct parameterisation for a Lipschitz-bounded deep network.
This is typically used by a higher-level constructor to define an LBDN model, which takes the direct parameterisation in DirectLBDNParams and defines rules for converting it to an explicit parameterisation. See for example DenseLBDNParams.
Arguments
- nu::Int: Number of inputs.
- nh::Union{Vector{Int}, NTuple{N, Int}}: Number of hidden units for each layer. E.g., nh = [5,10] for 2 hidden layers with 5 and 10 nodes, respectively.
- ny::Int: Number of outputs.
- γ::Real=T(1): Lipschitz upper bound, must be positive.
Keyword arguments
- initW::Function=Flux.glorot_normal: Initialisation function for implicit params X, Y, d.
- initb::Function=Flux.glorot_normal: Initialisation function for bias vectors.
- learn_γ::Bool=false: Whether to make the Lipschitz bound γ a learnable parameter.
- rng::AbstractRNG=Random.GLOBAL_RNG: RNG for model initialisation.
See Wang et al. (2023) for parameterisation details.
See also DenseLBDNParams.
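These keyword arguments are typically passed through a higher-level constructor such as DenseLBDNParams. A minimal sketch (the fixed seed is just for reproducibility):

```julia
using Flux, RobustNeuralNetworks
using Random

rng = MersenneTwister(42)
model_ps = DenseLBDNParams{Float64}(
    4, (16, 16), 2, 5.0;
    initW = Flux.glorot_normal,
    initb = Flux.glorot_normal,
    learn_γ = true,
    rng = rng,
)
```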
RobustNeuralNetworks.ExplicitLBDNParams — Type

mutable struct ExplicitLBDNParams{T, N, M}

Explicit LBDN parameter struct.
These parameters define the explicit form of a Lipschitz-bounded deep network used for model evaluation. Parameters are stored in NTuples, where each element of an NTuple is the parameter for a single layer of the network. Tuples are faster to work with than vectors of arrays.
See Wang et al. (2023) for more details on explicit parameterisations of LBDN.
Recurrent Equilibrium Networks
RobustNeuralNetworks.AbstractRENParams — Type

abstract type AbstractRENParams{T} end

Direct parameterisation for recurrent equilibrium networks.
RobustNeuralNetworks.ContractingRENParams — Type

ContractingRENParams{T}(nu, nx, nv, ny; <keyword arguments>) where T

Construct direct parameterisation of a contracting REN.
The parameters can be used to construct an explicit REN model that has guaranteed, built-in contraction properties.
Arguments
- nu::Int: Number of inputs.
- nx::Int: Number of states.
- nv::Int: Number of neurons.
- ny::Int: Number of outputs.
Keyword arguments
- nl::Function=relu: Sector-bounded static nonlinearity.
- αbar::T=1: Upper bound on the contraction rate, with ᾱ ∈ (0,1].
See DirectRENParams for documentation of keyword arguments init, ϵ, bx_scale, bv_scale, polar_param, D22_zero, output_map, rng.
See also GeneralRENParams, LipschitzRENParams, PassiveRENParams.
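For instance, a sketch of building and stepping a contracting REN (assuming the package's REN wrapper and init_states helper, as used in the package examples):

```julia
using RobustNeuralNetworks

# Contracting REN: 4 inputs, 8 states, 16 neurons, 2 outputs,
# contraction rate bounded by ᾱ = 0.9
ren_ps = ContractingRENParams{Float64}(4, 8, 16, 2; αbar=0.9)

# Build an explicit model and simulate one step
ren = REN(ren_ps)
x0 = init_states(ren, 1)    # one batch of initial states
u0 = randn(4, 1)
x1, y0 = ren(x0, u0)
```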
ContractingRENParams(nv, A, B, C, D; ...)

Alternative constructor for ContractingRENParams that initialises the REN from a stable discrete-time linear system with state-space model
\[\begin{align*} x_{t+1} &= Ax_t + Bu_t \\ y_t &= Cx_t + Du_t. \end{align*}\]
[TODO] This method has not been used or tested in a while. If you find it useful, please reach out to us and we will add full support and testing! :)

[TODO] Make compatible with αbar ≠ 1.0.
RobustNeuralNetworks.DirectRENParams — Type

DirectRENParams{T}(nu, nx, nv; <keyword arguments>) where T

Construct direct parameterisation for an (acyclic) recurrent equilibrium network.
This is typically used by higher-level constructors when defining a REN, which take the direct parameterisation and define rules for converting it to an explicit parameterisation. See for example GeneralRENParams.
Arguments
- nu::Int: Number of inputs.
- nx::Int: Number of states.
- nv::Int: Number of neurons.
Keyword arguments
- init=:randomQR: Initialisation method. Options are:
  - :random: Random sampling with Glorot normal distribution. Typically samples "faster"/short-memory dynamic models.
  - :randomQR: Compute X with glorot_normal and take the QR decomposition X = qr(X).Q. Good for initialising X close to the identity when long memory is needed. This is the default for legacy reasons.
  - :cholesky: Compute X with a Cholesky factorisation of H, setting E, F, P = I. Good for slow/long-memory dynamic models.
- polar_param::Bool=true: Use polar parameterisation to construct the H matrix from X in the REN parameterisation (recommended).
- D22_free::Bool=false: Specify whether to train D22 as a free parameter (true), or construct it separately from X3, Y3, Z3 (false). Typically use D22_free = true only for a contracting REN.
- D22_zero::Bool=false: Fix D22 = 0 to remove any feedthrough.
- bx_scale::T=0: Set scale of initial state bias vector bx.
- bv_scale::T=1: Set scale of initial neuron input bias vector bv.
- output_map::Bool=true: Include the output layer $y_t = C_2 x_t + D_{21} w_t + D_{22} u_t + b_y$. Otherwise, the output is just $y_t = x_t$.
- ϵ::T=1e-12: Regularising parameter for positive-definite matrices.
- rng::AbstractRNG=Random.GLOBAL_RNG: RNG for model initialisation.
See Revay et al. (2021) for parameterisation details.
See also GeneralRENParams, ContractingRENParams, LipschitzRENParams, PassiveRENParams.
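As an example, these keyword arguments are passed through the higher-level constructors. A sketch (the :cholesky choice targets long-memory dynamics, as described above):

```julia
using RobustNeuralNetworks
using Random

ps = ContractingRENParams{Float32}(
    2, 10, 20, 1;
    init = :cholesky,     # slow/long-memory initialisation
    polar_param = true,   # recommended polar parameterisation
    D22_zero = true,      # remove input-output feedthrough
    rng = Xoshiro(0),
)
```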
RobustNeuralNetworks.ExplicitRENParams — Type

mutable struct ExplicitRENParams{T}

Explicit REN parameter struct.
These parameters define a recurrent equilibrium network with model inputs and outputs $u_t, y_t$, neuron inputs and outputs $v_t, w_t$, and states $x_t$.
\[\begin{equation*} \begin{bmatrix} x_{t+1} \\ v_t \\ y_t \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \\ \end{bmatrix} \begin{bmatrix} x_t \\ w_t \\ u_t \end{bmatrix} + \begin{bmatrix} b_x \\ b_v \\ b_y \end{bmatrix} \end{equation*}\]
See Revay et al. (2021) for more details on explicit parameterisations of REN.
RobustNeuralNetworks.GeneralRENParams — Type

GeneralRENParams{T}(nu, nx, nv, ny, Q, S, R; <keyword arguments>) where T

Construct direct parameterisation of a REN satisfying general behavioural constraints.
Behavioural constraints are encoded by the matrices Q,S,R in an incremental Integral Quadratic Constraint (IQC). See Equation 4 of Revay et al. (2021).
Arguments
- nu::Int: Number of inputs.
- nx::Int: Number of states.
- nv::Int: Number of neurons.
- ny::Int: Number of outputs.
- Q::AbstractMatrix: IQC weight matrix on model outputs.
- S::AbstractMatrix: IQC coupling matrix on model outputs/inputs.
- R::AbstractMatrix: IQC weight matrix on model inputs.
Keyword arguments
- nl::Function=relu: Sector-bounded static nonlinearity.
- αbar::T=1: Upper bound on the contraction rate, with ᾱ ∈ (0,1].
See DirectRENParams for documentation of keyword arguments init, ϵ, bx_scale, bv_scale, polar_param, rng.
See also ContractingRENParams, LipschitzRENParams, PassiveRENParams.
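As one concrete example, a Lipschitz bound of γ can be encoded with Q = -I/γ, S = 0, R = γI (one common scaling; for this particular constraint, LipschitzRENParams is the more direct route). A minimal sketch:

```julia
using LinearAlgebra, RobustNeuralNetworks

nu, nx, nv, ny = 2, 5, 10, 2
γ = 4.0

# IQC matrices encoding a γ-Lipschitz constraint
Q = Matrix(-(1/γ) * I, ny, ny)
S = zeros(nu, ny)
R = Matrix(γ * I, nu, nu)

ps = GeneralRENParams{Float64}(nu, nx, nv, ny, Q, S, R)
ren = REN(ps)
```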
RobustNeuralNetworks.LipschitzRENParams — Type

LipschitzRENParams{T}(nu, nx, nv, ny, γ; <keyword arguments>) where T

Construct direct parameterisation of a REN with a Lipschitz bound of γ.
Arguments
- nu::Int: Number of inputs.
- nx::Int: Number of states.
- nv::Int: Number of neurons.
- ny::Int: Number of outputs.
- γ::Number: Lipschitz upper bound.
Keyword arguments
- nl::Function=relu: Sector-bounded static nonlinearity.
- αbar::T=1: Upper bound on the contraction rate, with ᾱ ∈ (0,1].
- learn_γ::Bool=false: Whether to make the Lipschitz bound γ a learnable parameter.
See DirectRENParams for documentation of keyword arguments init, ϵ, bx_scale, bv_scale, polar_param, D22_zero, rng.
See also GeneralRENParams, ContractingRENParams, PassiveRENParams.
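A minimal sketch with a learnable Lipschitz bound (DiffREN is the differentiable wrapper used in the package examples; treat its use here as an assumption):

```julia
using RobustNeuralNetworks

# REN with 1 input, 4 states, 8 neurons, 1 output, γ = 10,
# where γ itself is a trainable parameter
ps = LipschitzRENParams{Float32}(1, 4, 8, 1, 10.0f0; learn_γ=true)
model = DiffREN(ps)
```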
RobustNeuralNetworks.PassiveRENParams — Type

PassiveRENParams{T}(nu, nx, nv, ny, ν, ρ; <keyword arguments>) where T

Construct direct parameterisation of a passive REN.
Arguments
- nu::Int: Number of inputs.
- nx::Int: Number of states.
- nv::Int: Number of neurons.
- ny::Int: Number of outputs.
- ν::Number=0: Passivity index. Use ν > 0 for an incrementally strictly input passive model. Set both ν = 0 and ρ = 0 for an incrementally passive model.
- ρ::Number=0: Passivity index. Use ρ > 0 for an incrementally strictly output passive model.
Note that the product of the passivity indices, ρν, must be less than 1/4 for a passive REN.
Keyword arguments
- nl::Function=relu: Sector-bounded static nonlinearity.
- αbar::T=1: Upper bound on the contraction rate, with ᾱ ∈ (0,1].
See DirectRENParams for documentation of keyword arguments init, ϵ, bx_scale, bv_scale, polar_param, rng.
See also GeneralRENParams, ContractingRENParams, LipschitzRENParams.
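For example, a sketch of an incrementally strictly input passive REN (we take nu = ny here, since passivity relates inputs and outputs of the same dimension):

```julia
using RobustNeuralNetworks

# Incrementally strictly input passive REN: ν > 0, ρ = 0
# (ρν = 0 < 1/4, so the constraint on the passivity indices holds)
ν, ρ = 0.1, 0.0
ps = PassiveRENParams{Float64}(3, 6, 12, 3, ν, ρ)
ren = REN(ps)
```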