Multispike interactions in a stochastic model of spike-timing-dependent plasticity.

Abstract:

Recently we presented a stochastic, ensemble-based model of spike-timing-dependent plasticity. In this model, single synapses do not exhibit plasticity that depends on the exact timing of pre- and postsynaptic spikes; spike-timing-dependent plasticity emerges only at the temporal or synaptic ensemble level. We showed that such a model reproduces a variety of experimental results in a natural way, without introducing the ad hoc nonlinearities characteristic of some alternative models. Our previous study was restricted to an analytical examination of two-spike interactions, while higher-order, multispike interactions were examined only briefly and numerically. Here we derive exact, analytical results for the general n-spike interaction functions in our model. These results form the basis for a detailed examination, performed elsewhere, of the significant differences between these functions and the implications these differences have for the presence, or otherwise, of stable, competitive dynamics in our model.
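For orientation only: the ensemble-level model summarized above is developed in the paper itself and is not reproduced here. Below is a minimal sketch of the conventional, deterministic, additive pair-based STDP window that single-synapse models typically use (the kind of rule the abstract contrasts with); the amplitudes and time constant are illustrative, not taken from the paper.

```python
import math

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
    """Standard additive pair-based STDP window (illustrative parameters).

    dt = t_post - t_pre in ms.  A presynaptic spike shortly before a
    postsynaptic spike (dt > 0) potentiates; the reverse order (dt < 0)
    depresses.  Magnitudes decay exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

Under such a rule every pre/post spike pairing changes the synapse deterministically, which is exactly the single-synapse timing dependence the stochastic ensemble model dispenses with.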

journal_name: Neural Comput

journal_title: Neural computation

authors: Appleby PA,Elliott T

doi: 10.1162/neco.2007.19.5.1362

subject: Has Abstract

pub_date: 2007-05-01 00:00:00

pages: 1362-1399

issue: 5

issn: 0899-7667

eissn: 1530-888X

journal_volume: 19

pub_type: Journal Article
  • Learning Slowness in a Sparse Model of Invariant Feature Detection.

    abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00743

    authors: Chandrapala TN,Shi BE

    update_date: 2015-07-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015439

    authors: Akaho S,Kappen HJ

    update_date: 2000-06-01 00:00:00

  • Employing the zeta-transform to optimize the calculation of the synaptic conductance of NMDA and other synaptic channels in network simulations.

    abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017061

    authors: Köhn J,Wörgötter F

    update_date: 1998-10-01 00:00:00

  • Multiple model-based reinforcement learning.

    abstract::We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the environmental dynamics. The sys...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602753712972

    authors: Doya K,Samejima K,Katagiri K,Kawato M

    update_date: 2002-06-01 00:00:00

  • Patterns of synchrony in neural networks with spike adaptation.

    abstract::We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of inte...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660151134280

    authors: van Vreeswijk C,Hansel D

    update_date: 2001-05-01 00:00:00

  • Information loss in an optimal maximum likelihood decoding.

    abstract::The mutual information between a set of stimuli and the elicited neural responses is compared to the corresponding decoded information. The decoding procedure is presented as an artificial distortion of the joint probabilities between stimuli and responses. The information loss is quantified. Whenever the probabilitie...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602317318947

    authors: Samengo I

    update_date: 2002-04-01 00:00:00

  • Nonlinear Time-Series Prediction with Missing and Noisy Data.

    abstract::We derive solutions for the problem of missing and noisy data in nonlinear time-series prediction from a probabilistic point of view. We discuss different approximations to the solutions, in particular, approximations that require either stochastic simulation or the substitution of a single estimate for t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017728

    authors: Tresp V,Hofmann R

    update_date: 1998-03-23 00:00:00

  • Spikernels: predicting arm movements by embedding population spike rate patterns in inner-product spaces.

    abstract::Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this letter is the construction of biologically motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053019944

    authors: Shpigelman L,Singer Y,Paz R,Vaadia E

    update_date: 2005-03-01 00:00:00

  • Optimal approximation of signal priors.

    abstract::In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. A lot of recent work has implicitly assumed that maximum likelihood estimation ...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.10-06-384

    authors: Hyvärinen A

    update_date: 2008-12-01 00:00:00

  • ParceLiNGAM: a causal ordering method robust against latent confounders.

    abstract::We consider learning a causal ordering of variables in a linear nongaussian acyclic model called LiNGAM. Several methods have been shown to consistently estimate a causal ordering assuming that all the model assumptions are correct. But the estimation results could be distorted if some assumptions are violated. In thi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00533

    authors: Tashiro T,Shimizu S,Hyvärinen A,Washio T

    update_date: 2014-01-01 00:00:00

  • Rapid processing and unsupervised learning in a model of the cortical macrocolumn.

    abstract::We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604772744893

    authors: Lücke J,von der Malsburg C

    update_date: 2004-03-01 00:00:00

  • Bayesian Filtering with Multiple Internal Models: Toward a Theory of Social Intelligence.

    abstract::To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an i...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01239

    authors: Isomura T,Parr T,Friston K

    update_date: 2019-12-01 00:00:00

  • Connecting cortical and behavioral dynamics: bimanual coordination.

    abstract::For the paradigmatic case of bimanual coordination, we review levels of organization of behavioral dynamics and present a description in terms of modes of behavior. We briefly review a recently developed model of spatiotemporal brain activity that is based on short- and long-range connectivity of neural ensembles. Thi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300016954

    authors: Jirsa VK,Fuchs A,Kelso JA

    update_date: 1998-11-15 00:00:00

  • Similarity, connectionism, and the problem of representation in vision.

    abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/neco.1997.9.4.701

    authors: Edelman S,Duvdevani-Bar S

    update_date: 1997-05-15 00:00:00

  • Irregular firing of isolated cortical interneurons in vitro driven by intrinsic stochastic mechanisms.

    abstract::Pharmacologically isolated GABAergic irregular spiking and stuttering interneurons in the mouse visual cortex display highly irregular spike times, with high coefficients of variation approximately 0.9-3, in response to a depolarizing, constant current input. This is in marked contrast to cortical pyramidal cells, whi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.44

    authors: Englitz B,Stiefel KM,Sejnowski TJ

    update_date: 2008-01-01 00:00:00

  • A finite-sample, distribution-free, probabilistic lower bound on mutual information.

    abstract::For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky-Kiefer-Wolfowitz inequality and is distribution free. A quadr...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00144

    authors: VanderKraats ND,Banerjee A

    update_date: 2011-07-01 00:00:00
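The entry above builds its mutual-information bound on the Dvoretzky-Kiefer-Wolfowitz inequality. As a minimal sketch of the ingredient itself (not the paper's bound), the standard two-sided DKW form P(sup_x |F_n(x) - F(x)| > eps) <= 2 exp(-2 n eps^2) can be inverted to give a distribution-free confidence half-width on an empirical CDF:

```python
import math

def dkw_epsilon(n, delta):
    """Half-width eps of a (1 - delta) uniform confidence band on the
    empirical CDF of n i.i.d. samples, from the two-sided DKW inequality:
    set 2 * exp(-2 * n * eps**2) = delta and solve for eps."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))
```

For example, 1000 samples give a roughly +/-0.043 band at 95% confidence, and the band shrinks as 1/sqrt(n).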

  • Range-based ICA using a nonsmooth quasi-newton optimizer for electroencephalographic source localization in focal epilepsy.

    abstract::Independent component analysis (ICA) aims at separating a multivariate signal into independent nongaussian signals by optimizing a contrast function with no knowledge on the mixing mechanism. Despite the availability of a constellation of contrast functions, a Hartley-entropy-based ICA contrast endowed with the discri...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00700

    authors: Selvan SE,George ST,Balakrishnan R

    update_date: 2015-03-01 00:00:00

  • Sequential triangle strip generator based on Hopfield networks.

    abstract::The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. I...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.10-07-623

    authors: Síma J,Lnĕnicka R

    update_date: 2009-02-01 00:00:00

  • Inhibition and Excitation Shape Activity Selection: Effect of Oscillations in a Decision-Making Circuit.

    abstract::Decision making is a complex task, and its underlying mechanisms that regulate behavior, such as the implementation of the coupling between physiological states and neural networks, are hard to decipher. To gain more insight into neural computations underlying ongoing binary decision-making tasks, we consider a neural...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01185

    authors: Bose T,Reina A,Marshall JAR

    update_date: 2019-05-01 00:00:00

  • Neuronal assembly dynamics in supervised and unsupervised learning scenarios.

    abstract::The dynamic formation of groups of neurons--neuronal assemblies--is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00502

    authors: Moioli RC,Husbands P

    update_date: 2013-11-01 00:00:00

  • Temporal coding: assembly formation through constructive interference.

    abstract::Temporal coding is studied for an oscillatory neural network model with synchronization and acceleration. The latter mechanism refers to increasing (decreasing) the phase velocity of each unit for stronger (weaker) or more coherent (decoherent) input from the other units. It has been demonstrated that acceleration gen...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.09-06-342

    authors: Burwick T

    update_date: 2008-07-01 00:00:00

  • On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks.

    abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.2.226

    authors: Luo ZQ

    update_date: 1991-07-01 00:00:00
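The training setup described in the entry above, repeatedly presenting a finite example set to a linear unit and descending the sum-of-squares error, can be sketched generically as follows; this is an illustration of the LMS procedure, not Luo's convergence analysis, and the learning rate and epoch count are arbitrary choices.

```python
def lms_train(xs, ys, w=0.0, b=0.0, lr=0.05, epochs=500):
    """Gradient-descent LMS for a single linear unit y_hat = w*x + b.
    Each presentation of an example updates the weights against that
    example's squared error (stochastic/cyclic LMS)."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def sse(xs, ys, w, b):
    """Sum of squared errors over the example set."""
    return sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys))
```

On a realizable data set (e.g. ys exactly linear in xs) the cyclic updates contract toward the exact least-squares solution, provided the learning rate is small enough for stability.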

  • Whence the Expected Free Energy?

    abstract::The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01354

    authors: Millidge B,Tschantz A,Buckley CL

    update_date: 2021-01-05 00:00:00

  • Kernels for longitudinal data with variable sequence length and sampling intervals.

    abstract::We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative models, for the time series. After demonstrating their utility in likelihood ratio classifiers (a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00164

    authors: Lu Z,Leen TK,Kaye J

    update_date: 2011-09-01 00:00:00

  • Asynchronous Event-Based Motion Processing: From Visual Events to Probabilistic Sensory Representation.

    abstract::In this work, we propose a two-layered descriptive model for motion processing from retina to the cortex, with an event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps in a simple...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01191

    authors: Khoei MA,Ieng SH,Benosman R

    update_date: 2019-06-01 00:00:00

  • On the classification capability of sign-constrained perceptrons.

    abstract::The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.288

    authors: Legenstein R,Maass W

    update_date: 2008-01-01 00:00:00

  • The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex.

    abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016511

    authors: Shinomoto S,Sakai Y,Funahashi S

    update_date: 1999-05-15 00:00:00
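An Ornstein-Uhlenbeck membrane-potential model of the kind examined in the entry above can be sketched with Euler-Maruyama integration plus threshold-and-reset spiking. All parameters below are illustrative; in this suprathreshold-drift regime the interspike-interval coefficient of variation stays well below 1, i.e. the spiking is far more regular than the irregular cortical sequences the abstract discusses.

```python
import math
import random

def simulate_ou_spiking(theta=1.0, mu=1.5, sigma=0.3, v_th=1.0, v_reset=0.0,
                        dt=0.001, t_max=200.0, seed=1):
    """Euler-Maruyama simulation of an OU membrane potential
    dV = theta*(mu - V)*dt + sigma*dW, with threshold-and-reset spiking.
    Returns (spike count, CV of interspike intervals)."""
    rng = random.Random(seed)
    v = v_reset
    last_spike = 0.0
    isis = []
    sq = math.sqrt(dt)
    for i in range(1, int(t_max / dt) + 1):
        v += theta * (mu - v) * dt + sigma * sq * rng.gauss(0.0, 1.0)
        if v >= v_th:
            t = i * dt
            isis.append(t - last_spike)
            last_spike = t
            v = v_reset
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return len(isis), math.sqrt(var) / mean
```

The low CV produced here illustrates the mismatch: reproducing cortical CVs near 1 with an OU model requires parameter regimes the paper argues are implausible.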

  • Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation.

    abstract::Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01319

    authors: Chen R,Li L

    update_date: 2020-12-01 00:00:00

  • Spiking neural P systems with astrocytes.

    abstract::In a biological nervous system, astrocytes play an important role in the functioning and interaction of neurons, and astrocytes have excitatory and inhibitory influence on synapses. In this work, with this biological inspiration, a class of computation devices that consist of neurons and astrocytes is introduced, call...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00238

    authors: Pan L,Wang J,Hoogeboom HJ

    update_date: 2012-03-01 00:00:00

  • Improving generalization performance of natural gradient learning using optimized regularization by NIC.

    abstract::Natural gradient learning is known to be efficient in escaping plateau, which is a main cause of the slow learning speed of neural networks. The adaptive natural gradient learning method for practical implementation also has been developed, and its advantage in real-world problems has been confirmed. In this letter, w...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604322742065

    authors: Park H,Murata N,Amari S

    update_date: 2004-02-01 00:00:00