We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed, the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.
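To make the structure concrete, the following is a minimal sketch of a two-layer nonlinear generative model in the spirit described above: sparse binary causes at the top are propagated top-down through a nonlinearity to generate data, and a simple delta-rule update uses only locally available pre- and post-synaptic activities. The layer sizes, the logistic nonlinearity, the sparsity level, and the specific local update are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch (assumed details, not the paper's exact model):
# a two-layer nonlinear generative model with a local learning rule.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generative weights: top layer (4 units) -> hidden (8) -> visible (16).
W_top = rng.normal(0, 0.1, size=(4, 8))
W_hid = rng.normal(0, 0.1, size=(8, 16))

def generate(n):
    """Sample data top-down through the generative model."""
    top = (rng.random((n, 4)) < 0.2).astype(float)            # sparse binary causes
    hid = (rng.random((n, 8)) < sigmoid(top @ W_top)).astype(float)
    vis = (rng.random((n, 16)) < sigmoid(hid @ W_hid)).astype(float)
    return top, hid, vis

def local_update(pre, post, W, lr=0.05):
    """Delta-rule weight update using only locally available
    pre-synaptic activity, post-synaptic activity, and the
    model's own top-down prediction."""
    pred = sigmoid(pre @ W)
    return W + lr * pre.T @ (post - pred) / len(pre)

top, hid, vis = generate(100)
W_hid = local_update(hid, vis, W_hid)
```

The key point the sketch captures is locality: each weight update depends only on the activities at its two ends and the prediction computed from them, with no global error signal.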