A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, global robustness of a general class of networks is analyzed. Then, using the fact that Edalat's domain-theoretic L-derivative coincides with Clarke's generalized gradient, the framework is extended to attack-agnostic local robustness analysis. The proposed framework is well suited to designing algorithms that are correct by construction. This claim is exemplified by developing a validated algorithm for estimating the Lipschitz constant of feedforward regressors. Completeness of the algorithm is proved over differentiable networks and over general position networks. Computability results are obtained within the framework of effectivel...
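The validated algorithm above is not reproduced here, but a standard (non-validated) point of comparison is easy to state: for a feedforward network whose activations are 1-Lipschitz (e.g. ReLU), the product of the spectral norms of the weight matrices is an upper bound on the global Lipschitz constant. A minimal sketch, with a hypothetical two-layer network (the function name and weights are illustrative, not from the paper):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Coarse global Lipschitz upper bound (2-norm) for a feedforward
    network with 1-Lipschitz activations: the product of the spectral
    norms of the weight matrices."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # ord=2 gives the largest singular value
    return bound

# Hypothetical two-layer regressor with random weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
bound = lipschitz_upper_bound([W1, W2])
```

This bound is typically loose, which is exactly why validated, tighter estimation methods such as the one described above are of interest.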
Finding minimum distortion of adversarial example...
Existing methods for function smoothness in neural networks have limitations. These methods can make...
The current challenge of Deep Learning is no longer the computational power nor its scope of applica...
We present a domain-theoretic framework for validated robustness analysis of neural networks. We fir...
This paper presents a quantitative approach to demonstrate the robustness of n...
The stability of neural networks with respect to adversarial perturbations has...
Deep neural networks have become the reference models in many ...
Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain e...
The paper addresses the analysis of robustness over training time. Robustness is evaluated in ...
We present a new approach to assessing the robustness of neural networks based on estimating the pro...
Despite having high accuracy, neural nets have been shown to be susceptible to adversarial examples,...
The robustness of neural networks can be quantitatively indicated by a lower bound within which any ...
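The specific lower bound is that work's own contribution, but the general principle behind such certificates can be illustrated: if each logit of a classifier is L-Lipschitz, a perturbation of size d moves each logit by at most L·d, so the top-two margin shrinks by at most 2·L·d, and any perturbation smaller than margin / (2L) provably cannot change the predicted class. A minimal sketch (names are hypothetical, not the cited method):

```python
def certified_radius(logits, lipschitz_const):
    """Radius within which no input perturbation can change the predicted
    class, assuming each logit is lipschitz_const-Lipschitz: a perturbation
    of size d shifts each logit by at most L*d, so the top-two margin can
    shrink by at most 2*L*d."""
    top, runner_up = sorted(logits, reverse=True)[:2]
    return (top - runner_up) / (2.0 * lipschitz_const)

# Margin of 2.0 with L = 2.0 certifies a radius of 0.5.
r = certified_radius([3.0, 1.0, 0.5], 2.0)
```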
Deployment of deep neural networks (DNNs) in safety-critical systems requires provable guarantees fo...
With their strong performance on large amounts of data, neural networks have signific...