abstract::Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on the higher-order statistics of the input ensemble. We constructed two statistical models of populations of spiking neurons that fired with the same rates and had identical pairwise correlations, but differed with regard to the higher-order interactions within the population. The first ensemble was characterized by clusters of spikes synchronized over the whole population. In the second ensemble, the size of spike clusters was, on average, proportional to the pairwise correlation. For both input models, we assessed the role of the size of the population, the firing rate, and the pairwise correlation on the output rate of two simple model neurons: a continuous firing-rate model and a conductance-based leaky integrate-and-fire neuron. An approximation to the mean output rate of the firing-rate neuron could be derived analytically with the help of shot noise theory. Interestingly, the essential features of the mean response of the two neuron models were similar. For both neuron models, the three input parameters played radically different roles with respect to the postsynaptic firing rate, depending on the interaction structure of the input. For instance, in the case of an ensemble with small and distributed spike clusters, the output firing rate was efficiently controlled by the size of the input population. In addition to the interaction structure, the ratio of inhibition to excitation was found to strongly modulate the effect of correlation on the postsynaptic firing rate.
journal_name:Neural Comput
journal_title:Neural computation
authors:Kuhn A, Aertsen A, Rotter S
doi:10.1162/089976603321043702
subject:Has Abstract
pub_date:2003-01-01 00:00:00
pages:67-101
issue:1
issn:0899-7667
eissn:1530-888X
journal_volume:15
pub_type: Journal Article
abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-828
update_date:2009-08-01 00:00:00
abstract::The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01172
update_date:2019-04-01 00:00:00
abstract::A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660360675017
update_date:2003-08-01 00:00:00
abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.03-08-721
update_date:2009-07-01 00:00:00
abstract::We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is giv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.05-09-1010
update_date:2010-07-01 00:00:00
abstract::The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014773
update_date:2000-12-01 00:00:00
abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.02-09-957
update_date:2010-05-01 00:00:00
abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01275
update_date:2020-05-01 00:00:00
abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00012
update_date:2010-09-01 00:00:00
abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00227
update_date:2012-02-01 00:00:00
abstract::In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p. d. f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model involves some algorithms of noisy linear ICA (e. g., Bermond & ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766052530866
update_date:2005-01-01 00:00:00
abstract::In a biological nervous system, astrocytes play an important role in the functioning and interaction of neurons, and astrocytes have excitatory and inhibitory influence on synapses. In this work, with this biological inspiration, a class of computation devices that consist of neurons and astrocytes is introduced, call...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00238
update_date:2012-03-01 00:00:00
abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00385
update_date:2013-01-01 00:00:00
abstract::Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek ( 1999 ) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwin...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00961
update_date:2017-06-01 00:00:00
abstract::Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding perfo...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00890
update_date:2016-11-01 00:00:00
abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017061
update_date:1998-10-01 00:00:00
abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1997.9.4.701
update_date:1997-05-15 00:00:00
abstract::This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760407973
update_date:2002-11-01 00:00:00
abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773135104
update_date:2004-05-01 00:00:00
abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015682
update_date:2000-03-01 00:00:00
abstract::We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has b...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01094
update_date:2018-08-01 00:00:00
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603762552942
update_date:2003-02-01 00:00:00
abstract::We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01307
update_date:2020-10-01 00:00:00
abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766053723096
update_date:2005-07-01 00:00:00
abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.11-07-651
update_date:2009-01-01 00:00:00
abstract::We address the problem of detecting the presence of a recurring stimulus by monitoring the voltage on a multiunit electrode located in a brain region densely populated by stimulus reactive neurons. Published experimental results suggest that under these conditions, when a stimulus is present, the measurements are gaus...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00257
update_date:2012-04-01 00:00:00
abstract::This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00947
update_date:2017-05-01 00:00:00
abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-817
update_date:2009-08-01 00:00:00
abstract::The free-energy principle is a candidate unified theory for learning and memory in the brain that predicts that neurons, synapses, and neuromodulators work in a manner that minimizes free energy. However, electrophysiological data elucidating the neural and synaptic bases for this theory are lacking. Here, we propose ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00862
update_date:2016-09-01 00:00:00
abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.9.2557
update_date:2007-09-01 00:00:00