abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how information loss in a layered network depends on the connectivity between the layers. We introduce an algorithm, reminiscent of the water-filling algorithm for Shannon information, that minimizes the loss. The optimal connection profile has a center-surround structure with a spatial extent closely matching the neurons' tuning curves. In addition, we show how the optimal connectivity depends on the correlation structure of the trial-to-trial variability in the neuronal responses. Our results explain how optimal communication of population codes requires the center-surround architectures found in the nervous system and provide explicit predictions on the connectivity parameters.
journal_name: Neural Comput
journal_title:Neural computation
authors: Renart A, van Rossum MC
doi:10.1162/NECO_a_00227
subject: Has Abstract
pub_date: 2012-02-01 00:00:00
pages: 391-407
issue: 2
issn: 0899-7667
eissn: 1530-888X
journal_volume: 24
pub_type: Journal Article
abstract::A mathematical model of general character for the dynamic description of coupled neural oscillators is presented. The population approach that is employed applies equally to coupled cells and to populations of such coupled cells. The formulation includes stochasticity and preserves details of precisely firing neurons....
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.03-07-482
Update date: 2008-05-01 00:00:00
abstract::We derive solutions for the problem of missing and noisy data in nonlinear time-series prediction from a probabilistic point of view. We discuss different approximations to the solutions; in particular, approximations that require either stochastic simulation or the substitution of a single estimate for t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017728
Update date: 1998-03-23 00:00:00
abstract::The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760805368
Update date: 2002-12-01 00:00:00
abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00214
Update date: 2011-12-01 00:00:00
abstract::We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016737
Update date: 1999-02-15 00:00:00
abstract::In this letter, we investigate the fundamental limits on how the interspike time of a neuron oscillator can be perturbed by the application of a bounded external control input (a current stimulus) with zero net electric charge accumulation. We use phase models to study the dynamics of neurons and derive charge-balance...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00643
Update date: 2014-10-01 00:00:00
abstract::Previous studies have combined analytical models of stochastic neural responses with signal detection theory (SDT) to predict psychophysical performance limits; however, these studies have typically been limited to simple models and simple psychophysical tasks. A companion article in this issue ("Evaluating Auditory P...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601750541813
Update date: 2001-10-01 00:00:00
abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015439
Update date: 2000-06-01 00:00:00
abstract::Slightly modified versions of an early Hebbian/anti-Hebbian neural network are shown to be capable of extracting the sparse, independent linear components of a prefiltered natural image set. An explanation for this capability in terms of a coupling between two hypothetical networks is presented. The simple networks pr...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606775093891
Update date: 2006-02-01 00:00:00
abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.8.1611
Update date: 1996-11-15 00:00:00
abstract::Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766041336396
Update date: 2004-09-01 00:00:00
abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.6.1135
Update date: 1996-08-15 00:00:00
abstract::Based on the dopamine hypotheses of cocaine addiction and the assumption of decrement of brain reward system sensitivity after long-term drug exposure, we propose a computational model for cocaine addiction. Utilizing average reward temporal difference reinforcement learning, we incorporate the elevation of basal rewa...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.10-08-882
Update date: 2009-10-01 00:00:00
abstract::Perceiving and identifying an object is improved by prior exposure to the object. This perceptual priming phenomenon is accompanied by reduced neural activity. But whether suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. To address this problem, we developed a ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.04-09-999
Update date: 2010-05-01 00:00:00
abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.12-07-678
Update date: 2010-01-01 00:00:00
abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00743
Update date: 2015-07-01 00:00:00
abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00934
Update date: 2017-03-01 00:00:00
abstract::Recent experiments indicate that the calcium store (e.g., endoplasmic reticulum) is involved in electrical bursting and [Ca2+]i oscillation in bursting neuronal cells. In this paper, we formulate a mathematical model for bursting neurons, which includes Ca2+ in the intracellular Ca2+ stores and a voltage-independent c...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.5.951
Update date: 1996-07-01 00:00:00
abstract::A hierarchical dynamical map is proposed as the basic framework for sensory cortical mapping. To show how the hierarchical dynamical map works in cognitive processes, we applied it to a typical cognitive task known as priming, in which cognitive performance is facilitated as a consequence of prior experience. Prior to...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660152469341
Update date: 2001-08-01 00:00:00
abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that of the neuron dynamics, and it produces short-time synaptic depression. This is inspired by recent neurobiological find...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606775623342
Update date: 2006-03-01 00:00:00
abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_01067
Update date: 2018-07-01 00:00:00
abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.9.2146
Update date: 2006-09-01 00:00:00
abstract::Recent work suggests that synchronization of neuronal activity could serve to define functionally relevant relationships between spatially distributed cortical neurons. At present, it is not known to what extent this hypothesis is compatible with the widely supported notion of coarse coding, which assumes that feature...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1995.7.3.469
Update date: 1995-05-01 00:00:00
abstract::We present a reduction of a Hodgkin-Huxley (HH)--style bursting model to a hybridized integrate-and-fire (IF) formalism based on a thorough bifurcation analysis of the neuron's dynamics. The model incorporates HH--style equations to evolve the subthreshold currents and includes IF mechanisms to characterize spike even...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603322518768
Update date: 2003-12-01 00:00:00
abstract::Many neurons that initially respond to a stimulus stop responding if the stimulus is presented repeatedly but recover their response if a different stimulus is presented. This phenomenon is referred to as stimulus-specific adaptation (SSA). SSA has been investigated extensively using oddball experiments, which measure...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00077
Update date: 2011-02-01 00:00:00
abstract::In a pioneering classic, Warren McCulloch and Walter Pitts proposed a model of the central nervous system. Motivated by EEG recordings of normal brain activity, Chvátal and Goldsmith asked whether these dynamical systems can be engineered to produce trajectories that are irregular, disorderly, and apparently unpredict...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00841
Update date: 2016-06-01 00:00:00
abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.02-09-957
Update date: 2010-05-01 00:00:00
abstract::The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes and a nonlinear function that r...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.02-09-956
Update date: 2010-03-01 00:00:00
abstract::We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the environmental dynamics. The sys...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602753712972
Update date: 2002-06-01 00:00:00
abstract::We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events....
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00066
Update date: 2011-01-01 00:00:00