As neural networks (NNs) become more prevalent in safety-critical applications such as control of vehicles, there is a growing need to certify that systems with NN components are safe. This paper presents a set of backward reachability approaches for safety certification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While backward reachability strategies have been developed for systems without NN components, the nonlinearities in NN activation functions and general noninvertibility of NN weight matrices make backward reachability for NFLs a challenging problem. To avoid the difficulties associated with propagating sets backward through NNs, we introduce a framework that leverages standard forward NN an...
Stability certification and identifying a safe and stabilizing initial set are two important concern...
Among the major challenges in neural control system technology is the validation and certification o...
In this work, we consider the problem of learning a feed-forward neural network controller to safely...
The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods...
Safety certification of data-driven control techniques remains a major open problem. This work inves...
Artificial neural networks have recently been utilized in many feedback control systems and introduc...
A forward reachability analysis method for the safety verification of nonlinea...
Applying neural networks as controllers in dynamical systems has shown great promise. However, it i...
Hybrid zonotopes generalize constrained zonotopes by introducing additional binary variables and pos...
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicle...
We propose new methods to synthesize control barrier function (CBF)-based safe controllers that avoi...
In this paper, we present a data-driven framework for real-time estimation of reachable sets for con...
We consider the problem of synthesis of safe controllers for nonlinear systems with unknown dynamics...
Machine learning (ML) has demonstrated great success in numerous complicated tasks. Fueled by these ...
We provide a new approach to synthesize controllers for nonlinear continuous dynamical systems withc...