ForwardAlgorithms
This section of the manual describes the module for forward algorithms.
NeuralNetworkReachability.ForwardAlgorithms.ForwardAlgorithm — Type
ForwardAlgorithm

Abstract supertype of forward algorithms.
NeuralNetworkReachability.ForwardAlgorithms.DefaultForward — Type
DefaultForward <: ForwardAlgorithm

Default forward algorithm, which works for vector-like inputs.
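As a minimal sketch of how such an algorithm is applied, assuming the package's `forward` entry point and a network built with the companion ControllerFormats.jl package (the weights here are purely illustrative):

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, DefaultForward
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU, Id

# a tiny ReLU network: 2 inputs -> 2 hidden neurons -> 1 output
W1 = [1.0 0.0; 0.0 1.0]; b1 = [0.0, 0.0]
W2 = [1.0 1.0];          b2 = [0.0]
net = FeedforwardNetwork([DenseLayerOp(W1, b1, ReLU()),
                          DenseLayerOp(W2, b2, Id())])

# DefaultForward propagates a concrete vector through the network;
# here the ReLU clips the -1 entry to 0, so the result is [1.0]
y = forward([1.0, -1.0], net, DefaultForward())
```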
NeuralNetworkReachability.ForwardAlgorithms.ConcreteForward — Type
ConcreteForward <: ForwardAlgorithm

Forward algorithm that uses concrete set operations.
NeuralNetworkReachability.ForwardAlgorithms.LazyForward — Type
LazyForward <: ForwardAlgorithm

Forward algorithm that uses lazy set operations.
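The two algorithms above differ only in whether set operations are evaluated eagerly or kept symbolic. A sketch, assuming a LazySets.jl input set and the assumed `forward` entry point:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, ConcreteForward, LazyForward
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets: BallInf

net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], ReLU())])
X = BallInf([0.0, 0.0], 1.0)  # input uncertainty: unit box around the origin

Y_concrete = forward(X, net, ConcreteForward())  # each layer's image computed concretely
Y_lazy     = forward(X, net, LazyForward())      # operations stay lazy; evaluate on demand
```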
NeuralNetworkReachability.ForwardAlgorithms.BoxForward — Type
BoxForward{AMA<:ForwardAlgorithm} <: ForwardAlgorithm

Forward algorithm that uses a box approximation for non-identity activations and applies the affine map according to the specified algorithm.
Fields
affine_map_algorithm – algorithm to apply for affine maps
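A usage sketch, assuming the wrapped affine-map algorithm is passed as the constructor's positional argument (an assumption based on the field above):

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, BoxForward, DefaultForward
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets: BallInf

net = FeedforwardNetwork([DenseLayerOp([1.0 1.0; 1.0 -1.0], [0.0, 0.0], ReLU())])
X = BallInf([0.0, 0.0], 1.0)

# box-approximate after the ReLU activation; the affine map itself is
# handled by the wrapped algorithm (here: DefaultForward)
Y = forward(X, net, BoxForward(DefaultForward()))
```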
NeuralNetworkReachability.ForwardAlgorithms.SplitForward — Type
SplitForward{S<:ForwardAlgorithm,FS,FM} <: ForwardAlgorithm

Forward algorithm that splits a set, then computes the image under the neural network, and finally merges the resulting sets again, all according to a policy.
Fields
algo – algorithm to be applied between splitting and merging
split_function – function for splitting
merge_function – function for merging
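A hedged sketch of one possible policy; the positional constructor form and the specific split/merge functions shown here are assumptions for illustration, built from LazySets.jl primitives:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, SplitForward, BoxForward
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets

net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], ReLU())])
X = BallInf([0.0, 0.0], 1.0)

# hypothetical policy: split the input box into 2x2 pieces, propagate each piece,
# then merge by box-approximating the convex hull of the partial results
split_fun = X -> split(X, [2, 2])
merge_fun = Xs -> box_approximation(ConvexHullArray([Xs...]))

algo = SplitForward(BoxForward(), split_fun, merge_fun)  # assumed positional constructor
Y = forward(X, net, algo)
```

Splitting typically tightens the overapproximation because each smaller piece incurs less abstraction error, at the cost of propagating more sets.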
NeuralNetworkReachability.ForwardAlgorithms.DeepZ — Type
DeepZ <: ForwardAlgorithm

Forward algorithm based on zonotopes for ReLU, sigmoid, and tanh activation functions from Singh et al. [SGM+18].
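A sketch with a zonotope input set, under the same assumed `forward` entry point and ControllerFormats.jl network constructors:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, DeepZ
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, Sigmoid
using LazySets: Zonotope

net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], Sigmoid())])
Z = Zonotope([0.0, 0.0], [1.0 0.0; 0.0 1.0])  # unit box as a zonotope

Y = forward(Z, net, DeepZ())  # overapproximating output zonotope
```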
NeuralNetworkReachability.ForwardAlgorithms.AI2Box — Type
AI2Box <: AI2

AI2 forward algorithm for ReLU activation functions based on abstract interpretation with the interval domain from Gehr et al. [GMD+18].
Notes
This algorithm is less precise than BoxForward because it abstracts after every step, including the affine map.
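The precision gap in the note above can be observed directly; a sketch, assuming both algorithms have no-argument constructors:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, AI2Box, BoxForward
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets: BallInf

net = FeedforwardNetwork([DenseLayerOp([1.0 1.0; 1.0 -1.0], [0.0, 0.0], ReLU())])
X = BallInf([0.0, 0.0], 1.0)

Y_ai2 = forward(X, net, AI2Box())      # abstracts to a box after every step
Y_box = forward(X, net, BoxForward())  # box-approximates only after activations
# in general Y_box ⊆ Y_ai2, i.e., BoxForward is at least as precise
```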
NeuralNetworkReachability.ForwardAlgorithms.AI2Zonotope — Type
AI2Zonotope <: AI2

AI2 forward algorithm for ReLU activation functions based on abstract interpretation with the zonotope domain from Gehr et al. [GMD+18].
Fields
join_algorithm – (optional; default: "join") algorithm to compute the join of two zonotopes
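A usage sketch; the keyword form for selecting the join algorithm is an assumption based on the field above:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, AI2Zonotope
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets: Zonotope

net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], ReLU())])
Z = Zonotope([0.0, 0.0], [1.0 0.0; 0.0 1.0])

Y = forward(Z, net, AI2Zonotope())  # uses the default join algorithm ("join")
```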
NeuralNetworkReachability.ForwardAlgorithms.AI2Polytope — Type
AI2Polytope <: AI2

AI2 forward algorithm for ReLU activation functions based on abstract interpretation with the polytope domain from Gehr et al. [GMD+18].
NeuralNetworkReachability.ForwardAlgorithms.PolyZonoForward — Type
PolyZonoForward{A<:PolynomialApproximation,N,R} <: ForwardAlgorithm

Forward algorithm based on polynomial zonotopes via polynomial approximation from Kochdumper et al. [KSAB23].
Fields
polynomial_approximation – method for polynomial approximation
reduced_order – order to which the result will be reduced after each layer
compact – predicate for compacting the result after each layer
Notes
The default constructor takes keyword arguments with the following defaults:
polynomial_approximation: RegressionQuadratic(10), i.e., quadratic regression with 10 samples
compact: () -> true, i.e., compact after each layer
See the subtypes of PolynomialApproximation for available polynomial approximation methods.
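A construction sketch following the keyword defaults listed in the notes; the `reduced_order` value here is chosen purely for illustration, and the availability of `RegressionQuadratic` as an export is an assumption:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, PolyZonoForward,
                                                   RegressionQuadratic
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, ReLU
using LazySets: BallInf

net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], ReLU())])
X = BallInf([0.0, 0.0], 1.0)

algo = PolyZonoForward(polynomial_approximation=RegressionQuadratic(10),
                       reduced_order=10,     # illustrative choice
                       compact=() -> true)   # compact after each layer (the default)
Y = forward(X, net, algo)
```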
NeuralNetworkReachability.ForwardAlgorithms.Verisig — Type
Verisig{R} <: ForwardAlgorithm

Forward algorithm for sigmoid and tanh activation functions from Ivanov et al. [IWA+19].
Fields
algo – reachability algorithm of type TMJets
Notes
The implementation is known to be unsound in some cases.
The implementation currently only supports neural networks with a single hidden layer.
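A sketch respecting the single-hidden-layer restriction above; the `Verisig(TMJets())` constructor form and the import of `TMJets` from ReachabilityAnalysis.jl are assumptions:

```julia
using NeuralNetworkReachability.ForwardAlgorithms: forward, Verisig
using ControllerFormats  # assumed source of FeedforwardNetwork, DenseLayerOp, Sigmoid
using LazySets: BallInf
using ReachabilityAnalysis: TMJets  # assumed source of the TMJets algorithm

# a single hidden layer with sigmoid activation (the supported case)
net = FeedforwardNetwork([DenseLayerOp([1.0 0.0; 0.0 1.0], [0.0, 0.0], Sigmoid())])
X = BallInf([0.0, 0.0], 0.1)

Y = forward(X, net, Verisig(TMJets()))  # assumed constructor wrapping the algorithm
```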