Does high firing irregularity enhance learning?

Abstract:

In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism enhances learning. This reset mechanism has been shown to be the most likely candidate for reproducing the highly irregular firing of cortical neurons at high rates (Bugmann, Christodoulou, & Taylor, 1997; Christodoulou & Bugmann, 2001). More specifically, the irregularity enhances reward-modulated spike-timing-dependent plasticity with an eligibility trace when used in spiking neural networks, as shown by results on the simple XOR benchmark problem as well as on a complex multiagent task.
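The learning rule named in the abstract, reward-modulated STDP with an eligibility trace, can be sketched as follows. This is a minimal illustration only: the spike statistics, the exponential STDP window, and all parameter values (`a_plus`, `a_minus`, `tau_stdp`, `tau_elig`) are invented for the sketch and are not the paper's network or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(w, reward, a_plus=0.01, a_minus=0.012,
              tau_stdp=20.0, tau_elig=200.0, dt=1.0, steps=500):
    """One trial of reward-modulated STDP (sketch): pre/post spike
    pairings accumulate into an eligibility trace, and a global scalar
    reward gates the weight change at the end of the trial."""
    pre_trace = post_trace = elig = 0.0
    for _ in range(steps):
        pre = rng.random() < 0.02    # Poisson-like presynaptic spike
        post = rng.random() < 0.02   # Poisson-like postsynaptic spike
        # Exponentially decaying spike traces implement the STDP window
        pre_trace += -dt / tau_stdp * pre_trace + pre
        post_trace += -dt / tau_stdp * post_trace + post
        # Potentiation when post follows pre, depression when pre follows post
        stdp = a_plus * pre_trace * post - a_minus * post_trace * pre
        # Pairings feed a slower eligibility trace instead of the weight
        elig += -dt / tau_elig * elig + stdp
    # Reward converts stored eligibility into an actual weight change
    return w + reward * elig

w_after = run_trial(0.5, reward=1.0)
```

With `reward=0.0` the weight is unchanged, which is the point of the trace: plasticity is stored tentatively and only committed when a reward signal arrives.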

journal_name

Neural Comput

journal_title

Neural computation

authors

Christodoulou C,Cleanthous A

doi

10.1162/NECO_a_00090

pub_date

2011-03-01 00:00:00

pages

656-63

issue

3

issn

0899-7667

eissn

1530-888X

journal_volume

23

pub_type

Journal Article
  • Generalization and multirate models of motor adaptation.

    abstract::When subjects adapt their reaching movements in the setting of a systematic force or visual perturbation, generalization of adaptation can be assessed psychophysically in two ways: by testing untrained locations in the work space at the end of adaptation (slow postadaptation generalization) or by determining the influ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00262

    authors: Tanaka H,Krakauer JW,Sejnowski TJ

    update_date:2012-04-01 00:00:00

  • Kernels for longitudinal data with variable sequence length and sampling intervals.

    abstract::We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative models, for the time series. After demonstrating their utility in likelihood ratio classifiers (a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00164

    authors: Lu Z,Leen TK,Kaye J

    update_date:2011-09-01 00:00:00

  • Neural Circuits Trained with Standard Reinforcement Learning Can Accumulate Probabilistic Information during Decision Making.

    abstract::Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00917

    authors: Kurzawa N,Summerfield C,Bogacz R

    update_date:2017-02-01 00:00:00

  • Topographic mapping of large dissimilarity data sets.

    abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00012

    authors: Hammer B,Hasenfuss A

    update_date:2010-09-01 00:00:00

  • Visual Categorization with Random Projection.

    abstract::Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00769

    authors: Arriaga RI,Rutter D,Cakmak M,Vempala SS

    update_date:2015-10-01 00:00:00

  • Supervised learning in a recurrent network of rate-model neurons exhibiting frequency adaptation.

    abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054323017

    authors: Fortier PA,Guigon E,Burnod Y

    update_date:2005-09-01 00:00:00

  • On the role of biophysical properties of cortical neurons in binding and segmentation of visual scenes.

    abstract::Neuroscience is progressing vigorously, and knowledge at different levels of description is rapidly accumulating. To establish relationships between results found at these different levels is one of the central challenges. In this simulation study, we demonstrate how microscopic cellular properties, taking the example...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016377

    authors: Verschure PF,König P

    update_date:1999-07-01 00:00:00

  • Nonlinear and noisy extension of independent component analysis: theory and its application to a pitch sensation model.

    abstract::In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p. d. f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model involves some algorithms of noisy linear ICA (e. g., Bermond & ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766052530866

    authors: Maeda S,Song WJ,Ishii S

    update_date:2005-01-01 00:00:00

  • The effects of input rate and synchrony on a coincidence detector: analytical solution.

    abstract::We derive analytically the solution for the output rate of the ideal coincidence detector. The solution is for an arbitrary number of input spike trains with identical binomial count distributions (which includes Poisson statistics as a special case) and identical arbitrary pairwise cross-correlations, from zero corre...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321192068

    authors: Mikula S,Niebur E

    update_date:2003-03-01 00:00:00

  • MISEP method for postnonlinear blind source separation.

    abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.9.2557

    authors: Zheng CH,Huang DS,Li K,Irwin G,Sun ZL

    update_date:2007-09-01 00:00:00

  • Mismatched training and test distributions can outperform matched ones.

    abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00697

    authors: González CR,Abu-Mostafa YS

    update_date:2015-02-01 00:00:00

  • The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models.

    abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco_a_01275

    authors: Burkhart MC,Brandman DM,Franco B,Hochberg LR,Harrison MT

    update_date:2020-05-01 00:00:00

  • A sensorimotor map: modulating lateral interactions for anticipation and planning.

    abstract::Experimental studies of reasoning and planned behavior have provided evidence that nervous systems use internal models to perform predictive motor control, imagery, inference, and planning. Classical (model-free) reinforcement learning approaches omit such a model; standard sensorimotor models account for forward and ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606776240995

    authors: Toussaint M

    update_date:2006-05-01 00:00:00

  • An internal model for acquisition and retention of motor learning during arm reaching.

    abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.03-08-721

    authors: Lonini L,Dipietro L,Zollo L,Guglielmelli E,Krebs HI

    update_date:2009-07-01 00:00:00

  • Computing sparse representations of multidimensional signals using Kronecker bases.

    abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00385

    authors: Caiafa CF,Cichocki A

    update_date:2013-01-01 00:00:00

  • An amplitude equation approach to contextual effects in visual cortex.

    abstract::A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602317250870

    authors: Bressloff PC,Cowan JD

    update_date:2002-03-01 00:00:00

  • The successor representation and temporal context.

    abstract::The successor representation was introduced into reinforcement learning by Dayan ( 1993 ) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00282

    authors: Gershman SJ,Moore CD,Todd MT,Norman KA,Sederberg PB

    update_date:2012-06-01 00:00:00

  • Density-weighted Nyström method for computing large kernel eigensystems.

    abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.11-07-651

    authors: Zhang K,Kwok JT

    update_date:2009-01-01 00:00:00

  • Online Reinforcement Learning Using a Probability Density Estimation.

    abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00906

    authors: Agostini A,Celaya E

    update_date:2017-01-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015439

    authors: Akaho S,Kappen HJ

    update_date:2000-06-01 00:00:00

  • Training nu-support vector classifiers: theory and algorithms.

    abstract::The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu on controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601750399335

    authors: Chang CC,Lin CJ

    update_date:2001-09-01 00:00:00

  • A general probability estimation approach for neural comp.

    abstract::We describe an analytical framework for the adaptations of neural systems that adapt its internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015862

    authors: Khaikine M,Holthausen K

    update_date:2000-02-01 00:00:00

  • Boosted mixture of experts: an ensemble learning scheme.

    abstract::We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hintnon, 1991), applied to ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016737

    authors: Avnimelech R,Intrator N

    update_date:1999-02-15 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies.

    abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.9.2146

    authors: Rudolph M,Destexhe A

    update_date:2006-09-01 00:00:00

  • On the slow convergence of EM and VBEM in low-noise linear models.

    abstract::We analyze convergence of the expectation maximization (EM) and variational Bayes EM (VBEM) schemes for parameter estimation in noisy linear models. The analysis shows that both schemes are inefficient in the low-noise limit. The linear model with additive noise includes as special cases independent component analysis...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054322991

    authors: Petersen KB,Winther O,Hansen LK

    update_date:2005-09-01 00:00:00

  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-08-922

    authors: Morimura T,Uchibe E,Yoshimoto J,Peters J,Doya K

    update_date:2010-02-01 00:00:00

  • Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00683

    authors: Tangkaratt V,Xie N,Sugiyama M

    update_date:2015-01-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.6.1135

    authors: Casey M

    update_date:1996-08-15 00:00:00

  • Learning spike-based population codes by reward and population feedback.

    abstract::We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is giv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2010.05-09-1010

    authors: Friedrich J,Urbanczik R,Senn W

    update_date:2010-07-01 00:00:00

  • Insect-inspired estimation of egomotion.

    abstract::Tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during egomotion. In this study, we examine whether a simplified linear model based on the organization principles in tangential neurons can be used to estimate egomotion from the optic flow. We present a theory for the cons...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041941899

    authors: Franz MO,Chahl JS,Krapp HG

    update_date:2004-11-01 00:00:00