One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpretation is hard to reconcile with the observation that the firing rates of such neurons constantly change following presentation of stimuli. We have recently suggested an alternative interpretation according to which computations are realized by systematic changes in the states of such networks over time. This way of performing computations is fast, accurate, readily learnable, and robust to various forms of noise. Here we analyz...
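The attractor-manifold view described above can be illustrated with a minimal ring (line) attractor simulation. This is an illustrative sketch only, not the model analyzed in the abstract: neurons with preferred angles on a ring, translation-invariant cosine-tuned recurrent weights, and rectified rate dynamics that relax to a bump of activity whose peak position is the population code for the stimulus value. All parameter choices (N, gains, time constants) are assumptions made for the sketch.

```python
import numpy as np

# Minimal ring-attractor sketch (illustrative assumptions throughout).
# N neurons with preferred angles on a ring; the recurrent weights are
# cosine-tuned and translation-invariant, so the network supports a
# continuum of bump states whose peak encodes the stimulus value.

N = 128
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
# Local excitation plus broad inhibition; the cosine mode has unit gain,
# the uniform mode is suppressed.
W = (np.cos(theta[:, None] - theta[None, :]) - 0.5) * (2.0 / N)

def simulate(stim_angle, steps=500, dt=0.1, tau=1.0):
    """Relax rectified rate dynamics toward a bump at the stimulus angle."""
    I_ext = 2.0 * np.exp(np.cos(theta - stim_angle) - 1.0)  # tuned input
    r = np.zeros(N)
    for _ in range(steps):
        drive = W @ r + I_ext
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))
    return r

def decode(r):
    """Population-vector readout of the bump position."""
    return np.angle(np.sum(r * np.exp(1j * theta)))

r = simulate(stim_angle=1.0)
est = decode(r)  # estimated stimulus, close to 1.0 rad
```

The point of the sketch is the mapping the abstract describes: the input is mapped to a point on the network's attractive manifold (the bump position), and the decoded angle recovers the stimulus value from the population code.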
The recently proposed self-consistent signal-to-noise analysis is applied to a current--rate dynamic...
Tracking moving objects, including one’s own body, is a fundamental ability of higher organi...
In the context of learning in attractor neural networks (ANN) we discuss the issue of the constraint...
A conventional view of information processing by line (manifold) attractor networks holds that they ...
We propose a theoretical framework for efficient representation of time-varying sensory information ...
Line attractor networks have become standard workhorses of computational accounts of neural populati...
We have recently investigated a new way of conceptualizing the inferential capacities of non-linear ...
Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume...
In the context of sensory or higher-level cognitive processing, we present a r...
Two issues concerning the application of continuous attractors in neural systems are investigated: t...
Representations in the cortex are often distributed with graded firing rates in the neuronal populat...
Thesis (Ph.D.)--University of Washington, 2012. The basic unit of computation in the nervous system is...
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural respon...
There is a wealth of approaches to understanding the ways that populations of neurons encode static,...