Determining Burst Firing Time Distributions from Multiple Spike Trains.

Abstract:

Recent experimental findings have shown the presence of robust and cell-type-specific intraburst firing patterns in bursting neurons. We address the problem of characterizing these patterns under the assumption that the bursts exhibit well-defined firing time distributions. We propose a method for estimating these distributions based on a burst alignment algorithm that minimizes the overlap among the firing time distributions of the different spikes within the burst. This method provides a good approximation to the burst's intrinsic temporal structure as a set of firing time distributions. In addition, the method allows labeling the spikes in any particular burst, establishing a correspondence between each spike and the distribution that best explains it, and identifying missing spikes. Our results on both simulated and experimental data from the lobster stomatogastric ganglion show that the proposed method provides a reliable characterization of the intraburst firing patterns and avoids the errors derived from missing spikes. This method can also be applied to nonbursting neurons as a general tool for the study and the interpretation of firing time distributions as part of a temporal neural code.
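
The abstract describes the approach only at a high level. As a purely illustrative sketch (not the authors' published algorithm), the Python fragment below aligns a set of bursts by greedily adjusting one time shift per burst so as to reduce the overlap between gaussian firing time distributions fitted to consecutive spike positions; all function names and the toy spike times are invented for this example.

    import numpy as np

    def position_stats(bursts, shifts):
        """Mean and std of firing times at each spike position, after shifting each burst."""
        n_pos = max(len(b) for b in bursts)
        columns = [[] for _ in range(n_pos)]
        for burst, s in zip(bursts, shifts):
            for k, t in enumerate(burst):
                columns[k].append(t - s)
        return [(np.mean(c), np.std(c) + 1e-6) for c in columns if c]

    def gaussian_overlap(m1, s1, m2, s2):
        """Crude overlap proxy for two gaussians (larger means more overlap)."""
        return np.exp(-0.5 * (m1 - m2) ** 2 / (s1 ** 2 + s2 ** 2))

    def total_overlap(bursts, shifts):
        """Sum of overlaps between the distributions of consecutive spike positions."""
        stats = position_stats(bursts, shifts)
        return sum(gaussian_overlap(*stats[i], *stats[i + 1]) for i in range(len(stats) - 1))

    def align_bursts(bursts, n_iter=50, step=0.5):
        """Greedy coordinate descent on per-burst time shifts (illustrative only)."""
        shifts = np.array([b[0] for b in bursts], dtype=float)  # start by aligning first spikes
        for _ in range(n_iter):
            for i in range(len(bursts)):
                best, best_cost = shifts[i], total_overlap(bursts, shifts)
                for delta in (-step, step):
                    shifts[i] = best + delta
                    cost = total_overlap(bursts, shifts)
                    if cost < best_cost:
                        best, best_cost = shifts[i], cost
                shifts[i] = best
        return shifts, position_stats(bursts, shifts)

    # Toy example: three simulated bursts of three spikes each (times in ms)
    bursts = [[10.0, 14.8, 20.1], [32.2, 37.0, 42.3], [55.1, 59.7, 65.0]]
    shifts, stats = align_bursts(bursts)
    print([(round(m, 2), round(s, 2)) for m, s in stats])

The method described in the paper additionally labels individual spikes and identifies missing ones; this sketch does not attempt either step.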

journal_name

Neural Comput

journal_title

Neural computation

authors

Lago-Fernández LF,Szücs A,Varona P

doi

10.1162/neco.2008.07-07-571

subject

Has Abstract

pub_date

2009-04-01 00:00:00

pages

973-90

issue

4

eissn

1530-888X

issn

0899-7667

pii

10.1162/neco.2008.07-07-571

journal_volume

21

pub_type

Journal Article
  • A graphical model framework for decoding in the visual ERP-based BCI speller.

    abstract::We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events....

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00066

    authors: Martens SM,Mooij JM,Hill NJ,Farquhar J,Schölkopf B

    update_date:2011-01-01 00:00:00

  • Nonlinear Time-Series Prediction with Missing and Noisy Data.

    abstract::We derive solutions for the problem of missing and noisy data in nonlinear time-series prediction from a probabilistic point of view. We discuss different approximations to the solutions - in particular, approximations that require either stochastic simulation or the substitution of a single estimate for t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017728

    authors: Tresp V,Hofmann R

    update_date:1998-03-23 00:00:00

  • Methods for combining experts' probability assessments.

    abstract::This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distrib...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/neco.1995.7.5.867

    authors: Jacobs RA

    update_date:1995-09-01 00:00:00

  • State-Space Representations of Deep Neural Networks.

    abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k-many skip connections into network architectures, such as residual networks and additive dense n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01165

    authors: Hauser M,Gunn S,Saab S Jr,Ray A

    update_date:2019-03-01 00:00:00

  • Generalization and selection of examples in feedforward neural networks.

    abstract::In this work, we study how the selection of examples affects the learning procedure in a boolean neural network and its relationship with the complexity of the function under study and its architecture. We analyze the generalization capacity for different target functions with particular architectures through an analy...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300014999

    authors: Franco L,Cannas SA

    update_date:2000-10-01 00:00:00

  • On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    abstract::In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00112

    authors: Kaabi MG,Tonnelier A,Martinez D

    update_date:2011-05-01 00:00:00

  • Robustness of connectionist swimming controllers against random variation in neural connections.

    abstract::The ability to achieve high swimming speed and efficiency is very important to both the real lamprey and its robotic implementation. In previous studies, we used evolutionary algorithms to evolve biologically plausible connectionist swimming controllers for a simulated lamprey. This letter investigates the robustness ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.6.1568

    authors: Or J

    update_date:2007-06-01 00:00:00

  • Spiking neural P systems with astrocytes.

    abstract::In a biological nervous system, astrocytes play an important role in the functioning and interaction of neurons, and astrocytes have excitatory and inhibitory influence on synapses. In this work, with this biological inspiration, a class of computation devices that consist of neurons and astrocytes is introduced, call...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00238

    authors: Pan L,Wang J,Hoogeboom HJ

    update_date:2012-03-01 00:00:00

  • Why Does Large Batch Training Result in Poor Generalization? A Comprehensive Explanation and a Better Strategy from the Viewpoint of Stochastic Optimization.

    abstract::We present a comprehensive framework of search methods, such as simulated annealing and batch training, for solving nonconvex optimization problems. These methods search a wider range by gradually decreasing the randomness added to the standard gradient descent method. The formulation that we define on the basis of th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01089

    authors: Takase T,Oyama S,Kurihara M

    update_date:2018-07-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015439

    authors: Akaho S,Kappen HJ

    update_date:2000-06-01 00:00:00

  • Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    abstract::Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automat...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01039

    authors: Dong Q,Wang H,Hu Z

    update_date:2018-02-01 00:00:00

  • Insect-inspired estimation of egomotion.

    abstract::Tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during egomotion. In this study, we examine whether a simplified linear model based on the organization principles in tangential neurons can be used to estimate egomotion from the optic flow. We present a theory for the cons...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041941899

    authors: Franz MO,Chahl JS,Krapp HG

    update_date:2004-11-01 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies.

    abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.9.2146

    authors: Rudolph M,Destexhe A

    update_date:2006-09-01 00:00:00

  • Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses.

    abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00956

    authors: Elliott T

    update_date:2017-06-01 00:00:00

  • On the role of biophysical properties of cortical neurons in binding and segmentation of visual scenes.

    abstract::Neuroscience is progressing vigorously, and knowledge at different levels of description is rapidly accumulating. To establish relationships between results found at these different levels is one of the central challenges. In this simulation study, we demonstrate how microscopic cellular properties, taking the example...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016377

    authors: Verschure PF,König P

    update_date:1999-07-01 00:00:00

  • Regularized neural networks: some convergence rate results.

    abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1995.7.6.1225

    authors: Corradi V,White H

    update_date:1995-11-01 00:00:00

  • McCulloch-Pitts Brains and Pseudorandom Functions.

    abstract::In a pioneering classic, Warren McCulloch and Walter Pitts proposed a model of the central nervous system. Motivated by EEG recordings of normal brain activity, Chvátal and Goldsmith asked whether these dynamical systems can be engineered to produce trajectories that are irregular, disorderly, and apparently unpredict...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00841

    authors: Chvátal V,Goldsmith M,Yang N

    update_date:2016-06-01 00:00:00

  • Extraction of Synaptic Input Properties in Vivo.

    abstract::Knowledge of synaptic input is crucial for understanding synaptic integration and ultimately neural function. However, in vivo, the rates at which synaptic inputs arrive are high, so that it is typically impossible to detect single events. We show here that it is nevertheless possible to extract the properties of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00975

    authors: Puggioni P,Jelitai M,Duguid I,van Rossum MCW

    update_date:2017-07-01 00:00:00

  • Kernels for longitudinal data with variable sequence length and sampling intervals.

    abstract::We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative models, for the time series. After demonstrating their utility in likelihood ratio classifiers (a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00164

    authors: Lu Z,Leen TK,Kaye J

    update_date:2011-09-01 00:00:00

  • A Distributed Framework for the Construction of Transport Maps.

    abstract::The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution P to another distribution...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01172

    authors: Mesa DA,Tantiongloc J,Mendoza M,Kim S,Coleman TP

    update_date:2019-04-01 00:00:00

  • Dissociable forms of repetition priming: a computational model.

    abstract::Nondeclarative memory and novelty processing in the brain is an actively studied field of neuroscience, and reducing neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend-specifically, rising activity for unfamiliar stimuli-questi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00569

    authors: Makukhin K,Bolland S

    update_date:2014-04-01 00:00:00

  • Learning Slowness in a Sparse Model of Invariant Feature Detection.

    abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00743

    authors: Chandrapala TN,Shi BE

    update_date:2015-07-01 00:00:00

  • Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies.

    abstract::Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron sp...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300014872

    authors: Martignon L,Deco G,Laskey K,Diamond M,Freiwald W,Vaadia E

    update_date:2000-11-01 00:00:00

  • A Unifying Framework of Synaptic and Intrinsic Plasticity in Neural Populations.

    abstract::A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input sig...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01057

    authors: Leugering J,Pipa G

    update_date:2018-04-01 00:00:00

  • A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory.

    abstract::We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01286

    authors: Helson P

    update_date:2020-07-01 00:00:00

  • Online Reinforcement Learning Using a Probability Density Estimation.

    abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00906

    authors: Agostini A,Celaya E

    update_date:2017-01-01 00:00:00

  • Multiple model-based reinforcement learning.

    abstract::We propose a modular reinforcement learning architecture for nonlinear, nonstationary control tasks, which we call multiple model-based reinforcement learning (MMRL). The basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the environmental dynamics. The sys...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602753712972

    authors: Doya K,Samejima K,Katagiri K,Kawato M

    update_date:2002-06-01 00:00:00

  • Spike train decoding without spike sorting.

    abstract::We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used wit...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.02-07-478

    authors: Ventura V

    update_date:2008-04-01 00:00:00

  • Binocular receptive field models, disparity tuning, and characteristic disparity.

    abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.8.1611

    authors: Zhu YD,Qian N

    update_date:1996-11-15 00:00:00

  • Computation in a single neuron: Hodgkin and Huxley revisited.

    abstract::A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660360675017

    authors: Agüera y Arcas B,Fairhall AL,Bialek W

    update_date:2003-08-01 00:00:00