Higher-order statistics of input ensembles and the response of simple model neurons.

Abstract:

Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the effects of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.
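The two input ensembles described in the abstract can be illustrated with a short simulation. The sketch below (Python; not the authors' code, and all parameter names, the bin size, and the threshold readout are illustrative assumptions) builds two populations with matched single-neuron rates and matched pairwise correlations: one in which correlation-carrying events are synchronized over the whole population, and one in which spikes are copied from a mother process so that the mean cluster size is proportional to the pairwise correlation. A crude threshold unit stands in for the postsynaptic neuron.

```python
# Minimal sketch (not the authors' code): two ensembles with matched rates and
# pairwise correlations but different higher-order structure, read out by a
# simple threshold unit. Parameter names and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def whole_population_clusters(N, rate, c, T, dt):
    """Ensemble 1: independent background spikes plus events that are
    synchronized over the whole population (cluster size = N)."""
    n_bins = int(T / dt)
    p_total = rate * dt                      # per-neuron spike probability per bin
    p_sync = c * p_total                     # rate of population-wide events
    p_indep = p_total - p_sync               # independent background
    sync = rng.random(n_bins) < p_sync       # each event hits every neuron
    indep = rng.random((N, n_bins)) < p_indep
    return indep | sync[None, :]

def distributed_clusters(N, rate, c, T, dt):
    """Ensemble 2: spikes copied from a mother process with probability c,
    so the mean cluster size is N*c (small, distributed clusters)."""
    n_bins = int(T / dt)
    p_mother = (rate * dt) / c               # mother rate chosen so that each
    mother = rng.random(n_bins) < p_mother   # neuron keeps the target rate
    copies = rng.random((N, n_bins)) < c
    return mother[None, :] & copies

def output_rate(spikes, threshold, T):
    """Toy readout: the postsynaptic unit 'fires' in every bin in which the
    pooled input exceeds a fixed threshold."""
    pooled = spikes.sum(axis=0)
    return (pooled >= threshold).sum() / T

N, rate, c, T, dt, threshold = 100, 5.0, 0.1, 10.0, 0.001, 15
for name, gen in [("whole-population clusters", whole_population_clusters),
                  ("distributed clusters", distributed_clusters)]:
    s = gen(N, rate, c, T, dt)
    print(f"{name}: input ~{s.mean() / dt:.1f} Hz per neuron, "
          f"output ~{output_rate(s, threshold, T):.1f} Hz")
```

With rate and pairwise correlation matched, only the cluster-size statistics differ between the two generators, which is the comparison the abstract describes; the numbers printed depend entirely on the illustrative parameters chosen here.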

journal_name

Neural Comput

journal_title

Neural computation

authors

Kuhn A,Aertsen A,Rotter S

doi

10.1162/089976603321043702

pub_date

2003-01-01 00:00:00

pages

67-101

issue

1

eissn

1530-888X

issn

0899-7667

journal_volume

15

pub_type

Journal Article

related_articles

  • Direct estimation of inhomogeneous Markov interval models of spike trains.

    abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.07-08-828

    authors: Wójcik DK,Mochol G,Jakuczun W,Wypych M,Waleszczyk WJ

    update_date:2009-08-01 00:00:00

  • A Distributed Framework for the Construction of Transport Maps.

    abstract::The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution P to another distribution

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01172

    authors: Mesa DA,Tantiongloc J,Mendoza M,Kim S,P Coleman T

    update_date:2019-04-01 00:00:00

  • Computation in a single neuron: Hodgkin and Huxley revisited.

    abstract::A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660360675017

    authors: Agüera y Arcas B,Fairhall AL,Bialek W

    update_date:2003-08-01 00:00:00

  • An internal model for acquisition and retention of motor learning during arm reaching.

    abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.03-08-721

    authors: Lonini L,Dipietro L,Zollo L,Guglielmelli E,Krebs HI

    update_date:2009-07-01 00:00:00

  • Learning spike-based population codes by reward and population feedback.

    abstract::We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is giv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2010.05-09-1010

    authors: Friedrich J,Urbanczik R,Senn W

    update_date:2010-07-01 00:00:00

  • Incremental active learning for optimal generalization.

    abstract::The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300014773

    authors: Sugiyama M,Ogawa H

    update_date:2000-12-01 00:00:00

  • A Gaussian attractor network for memory and recognition with experience-dependent learning.

    abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2010.02-09-957

    authors: Hu X,Zhang B

    update_date:2010-05-01 00:00:00

  • The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models.

    abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco_a_01275

    authors: Burkhart MC,Brandman DM,Franco B,Hochberg LR,Harrison MT

    update_date:2020-05-01 00:00:00

  • Topographic mapping of large dissimilarity data sets.

    abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00012

    authors: Hammer B,Hasenfuss A

    update_date:2010-09-01 00:00:00

  • Transmission of population-coded information.

    abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00227

    authors: Renart A,van Rossum MC

    update_date:2012-02-01 00:00:00

  • Nonlinear and noisy extension of independent component analysis: theory and its application to a pitch sensation model.

    abstract::In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p. d. f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model involves some algorithms of noisy linear ICA (e. g., Bermond & ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766052530866

    authors: Maeda S,Song WJ,Ishii S

    update_date:2005-01-01 00:00:00

  • Spiking neural P systems with astrocytes.

    abstract::In a biological nervous system, astrocytes play an important role in the functioning and interaction of neurons, and astrocytes have excitatory and inhibitory influence on synapses. In this work, with this biological inspiration, a class of computation devices that consist of neurons and astrocytes is introduced, call...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00238

    authors: Pan L,Wang J,Hoogeboom HJ

    update_date:2012-03-01 00:00:00

  • Computing sparse representations of multidimensional signals using Kronecker bases.

    abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00385

    authors: Caiafa CF,Cichocki A

    update_date:2013-01-01 00:00:00

  • The Deterministic Information Bottleneck.

    abstract::Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek ( 1999 ) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00961

    authors: Strouse DJ,Schwab DJ

    update_date:2017-06-01 00:00:00

  • Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation.

    abstract::Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding perfo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00890

    authors: Pagan M,Simoncelli EP,Rust NC

    update_date:2016-11-01 00:00:00

  • Employing the zeta-transform to optimize the calculation of the synaptic conductance of NMDA and other synaptic channels in network simulations.

    abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017061

    authors: Köhn J,Wörgötter F

    update_date:1998-10-01 00:00:00

  • Similarity, connectionism, and the problem of representation in vision.

    abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/neco.1997.9.4.701

    authors: Edelman S,Duvdevani-Bar S

    update_date:1997-05-15 00:00:00

  • Long-term reward prediction in TD models of the dopamine system.

    abstract::This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602760407973

    authors: Daw ND,Touretzky DS

    update_date:2002-11-01 00:00:00

  • Are loss functions all the same?

    abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773135104

    authors: Rosasco L,De Vito E,Caponnetto A,Piana M,Verri A

    update_date:2004-05-01 00:00:00

  • Local and global gating of synaptic plasticity.

    abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015682

    authors: Sánchez-Montañés MA,Verschure PF,König P

    update_date:2000-03-01 00:00:00

  • Machine Learning: Deepest Learning as Statistical Data Assimilation Problems.

    abstract::We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has b...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01094

    authors: Abarbanel HDI,Rozdeba PJ,Shirman S

    update_date:2018-08-01 00:00:00

  • Modeling short-term synaptic depression in silicon.

    abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603762552942

    authors: Boegerhausen M,Suter P,Liu SC

    update_date:2003-02-01 00:00:00

  • Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives.

    abstract::We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01307

    authors: Inatsu Y,Sugita D,Toyoura K,Takeuchi I

    update_date:2020-10-01 00:00:00

  • Investigating the fault tolerance of neural networks.

    abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053723096

    authors: Tchernev EB,Mulvaney RG,Phatak DS

    update_date:2005-07-01 00:00:00

  • Density-weighted Nyström method for computing large kernel eigensystems.

    abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.11-07-651

    authors: Zhang K,Kwok JT

    update_date:2009-01-01 00:00:00

  • Optimal sequential detection of stimuli from multiunit recordings taken in densely populated brain regions.

    abstract::We address the problem of detecting the presence of a recurring stimulus by monitoring the voltage on a multiunit electrode located in a brain region densely populated by stimulus reactive neurons. Published experimental results suggest that under these conditions, when a stimulus is present, the measurements are gaus...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00257

    authors: Nossenson N,Messer H

    update_date:2012-04-01 00:00:00

  • Parameter Identifiability in Statistical Machine Learning: A Review.

    abstract::This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00947

    authors: Ran ZY,Hu BG

    update_date:2017-05-01 00:00:00

  • Time-varying perturbations can distinguish among integrate-to-threshold models for perceptual decision making in reaction time tasks.

    abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.07-08-817

    authors: Zhou X,Wong-Lin K,Philip H

    update_date:2009-08-01 00:00:00

  • Linking Neuromodulated Spike-Timing Dependent Plasticity with the Free-Energy Principle.

    abstract::The free-energy principle is a candidate unified theory for learning and memory in the brain that predicts that neurons, synapses, and neuromodulators work in a manner that minimizes free energy. However, electrophysiological data elucidating the neural and synaptic bases for this theory are lacking. Here, we propose ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00862

    authors: Isomura T,Sakai K,Kotani K,Jimbo Y

    update_date:2016-09-01 00:00:00

  • MISEP method for postnonlinear blind source separation.

    abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.9.2557

    authors: Zheng CH,Huang DS,Li K,Irwin G,Sun ZL

    update_date:2007-09-01 00:00:00