This paper is an attempt to bridge the gap between deep learning and grammatical inference. It provides an algorithm to extract a (stochastic) formal language from any recurrent neural network trained for language modelling. The algorithm uses the already trained network as an oracle, and thus does not require access to the inner representation of the black box, and applies a spectral approach to infer a weighted automaton. Since weighted automata compute linear functions, they are computationally more efficient than neural networks, so the approach is one of knowledge distillation. We detail experiments on 62 data sets (both synthetic and from real-world applications) that allo...
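The spectral step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `oracle` below is a hypothetical rank-1 weighted language standing in for the trained RNN, and the prefix/suffix basis and number of states are fixed by hand rather than selected as in the paper.

```python
import numpy as np

# Hypothetical black-box oracle standing in for a trained RNN language model.
# It assigns each word w the weight 0.5 * 0.3^(#a in w) * 0.2^(#b in w),
# a function of rank 1, so a 1-state weighted automaton suffices.
def oracle(w):
    return 0.5 * 0.3 ** w.count("a") * 0.2 ** w.count("b")

alphabet = ["a", "b"]
prefixes = ["", "a", "b"]   # hand-picked basis; "" is the empty word
suffixes = ["", "a", "b"]

# Hankel blocks filled purely by querying the oracle, never its internals.
H = np.array([[oracle(p + s) for s in suffixes] for p in prefixes])
H_sigma = {c: np.array([[oracle(p + c + s) for s in suffixes] for p in prefixes])
           for c in alphabet}

# Truncated SVD of H at the target number of states r.
r = 1
U, d, Vt = np.linalg.svd(H)
U, d, Vt = U[:, :r], d[:r], Vt[:r, :]

# Spectral weighted-automaton parameters from the factorization H = U diag(d) Vt:
# transition matrices, initial weights (empty-prefix row), final weights
# (empty-suffix column).
pinv = (U / d).T                           # equals diag(d)^-1 @ U.T
A = {c: pinv @ H_sigma[c] @ Vt.T for c in alphabet}
alpha = H[prefixes.index("")] @ Vt.T
beta = pinv @ H[:, suffixes.index("")]

def wfa(w):
    """Weight of w under the extracted automaton: alpha @ A_w1 ... A_wn @ beta."""
    v = alpha.copy()
    for c in w:
        v = v @ A[c]
    return float(v @ beta)

print(abs(wfa("ab") - oracle("ab")) < 1e-9)  # the WFA reproduces the oracle
```

Because the automaton computes a product of small linear maps, evaluating `wfa(w)` costs a handful of matrix-vector products, which is the efficiency gain the abstract alludes to.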
In recent years we have seen the development of efficient provably correct algorithms for learning W...
[Student Paper Award, NIPS 2012] Many tasks in text and speech processing and computational biology req...
Recurrent neural network language models (RNNLMs) have recently been shown to outperform the venerable n-...
Recurrent Neural Networks (RNNs) have demonstrated their effectiveness in learning and processing se...
We present a method to extract a weighted finite automaton (WFA) from a recurrent neural network (RN...
Understanding how a learned black box works is of crucial interest for the fut...
Recent work has shown that the extraction of symbolic rules improves the generalization performan...