Computation in a single neuron: Hodgkin and Huxley revisited.

Abstract:

A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering "feature space" as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as "integrate and fire," the HH model is not an integrator nor is it well described by a single threshold.
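The reverse-correlation procedure the abstract describes can be illustrated with a minimal sketch. The toy model below is an assumption for illustration only: a fixed linear filter followed by a hard threshold stands in for the full Hodgkin-Huxley dynamics, and all names (`window`, `kernel`, the threshold `theta`) are hypothetical. It shows the two estimators the analysis builds on: the spike-triggered average, and the spike-triggered covariance whose significant eigenvectors span the low-dimensional feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the HH neuron: a fixed linear filter plus a hard
# threshold. This is an illustrative assumption, not the paper's model.
T = 200_000                        # stimulus length (time bins)
window = 40                        # stimulus-history window per spike
stimulus = rng.standard_normal(T)  # Gaussian white-noise input

t = np.arange(window)
kernel = np.exp(-t / 8.0) * np.sin(t / 4.0)  # weight on lag t
kernel /= np.linalg.norm(kernel)

# Filtered drive; a spike fires on each upward threshold crossing.
drive = np.convolve(stimulus, kernel, mode="full")[:T]
theta = 1.5
spikes = np.flatnonzero((drive[1:] > theta) & (drive[:-1] <= theta)) + 1
spikes = spikes[spikes >= window]

# Spike-triggered ensemble: stimulus history (past -> present) per spike.
ensemble = np.stack([stimulus[s - window + 1:s + 1] for s in spikes])

# Spike-triggered average: the first candidate feature.
sta = ensemble.mean(axis=0)
sta /= np.linalg.norm(sta)

# Spike-triggered covariance relative to the white-noise prior (identity);
# eigenvectors with large-magnitude eigenvalues span the feature space.
delta_cov = np.cov(ensemble, rowvar=False) - np.eye(window)
eigvals, eigvecs = np.linalg.eigh(delta_cov)

# For this toy cell, the STA recovers the (time-reversed) filter.
alignment = abs(np.dot(sta, kernel[::-1]))
print(f"{len(spikes)} spikes, |STA . filter| = {alignment:.2f}")
```

For the real HH model the relevant subspace is two dimensional and curved, as the abstract notes, so the linear estimators above are only the starting point of the paper's analysis.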

journal_name

Neural Comput

journal_title

Neural computation

authors

Agüera y Arcas B,Fairhall AL,Bialek W

doi

10.1162/08997660360675017

subject

Has Abstract

pub_date

2003-08-01 00:00:00

pages

1715-49

issue

8

issn

0899-7667

eissn

1530-888X

journal_volume

15

pub_type

Journal Article
  • Slow feature analysis: a theoretical analysis of optimal free responses.

    abstract::Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invar...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603322297331

    authors: Wiskott L

    Update date: 2003-09-01 00:00:00

  • The time-organized map algorithm: extending the self-organizing map to spatiotemporal signals.

    abstract::The new time-organized map (TOM) is presented for a better understanding of the self-organization and geometric structure of cortical signal representations. The algorithm extends the common self-organizing map (SOM) from the processing of purely spatial signals to the processing of spatiotemporal signals. The main ad...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603765202695

    authors: Wiemer JC

    Update date: 2003-05-01 00:00:00

  • The number of synaptic inputs and the synchrony of large, sparse neuronal networks.

    abstract::The prevalence of coherent oscillations in various frequency ranges in the central nervous system raises the question of the mechanisms that synchronize large populations of neurons. We study synchronization in models of large networks of spiking neurons with random sparse connectivity. Synchrony occurs only when the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015529

    authors: Golomb D,Hansel D

    Update date: 2000-05-01 00:00:00

  • Nonlinear complex-valued extensions of Hebbian learning: an essay.

    abstract::The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties such as locality and the capability of being applicable to the basic weight-and-sum structure of ne...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/0899766053429381

    authors: Fiori S

    Update date: 2005-04-01 00:00:00

  • Neural Circuits Trained with Standard Reinforcement Learning Can Accumulate Probabilistic Information during Decision Making.

    abstract::Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00917

    authors: Kurzawa N,Summerfield C,Bogacz R

    Update date: 2017-02-01 00:00:00

  • Nonlinear and noisy extension of independent component analysis: theory and its application to a pitch sensation model.

    abstract::In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p. d. f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model involves some algorithms of noisy linear ICA (e. g., Bermond & ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766052530866

    authors: Maeda S,Song WJ,Ishii S

    Update date: 2005-01-01 00:00:00

  • Regularized neural networks: some convergence rate results.

    abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1995.7.6.1225

    authors: Corradi V,White H

    Update date: 1995-11-01 00:00:00

  • Variations on the Theme of Synaptic Filtering: A Comparison of Integrate-and-Express Models of Synaptic Plasticity for Memory Lifetimes.

    abstract::Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binar...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00889

    authors: Elliott T

    Update date: 2016-11-01 00:00:00

  • On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks.

    abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.2.226

    authors: Luo ZQ

    Update date: 1991-07-01 00:00:00

  • Modeling short-term synaptic depression in silicon.

    abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603762552942

    authors: Boegerhausen M,Suter P,Liu SC

    Update date: 2003-02-01 00:00:00

  • Rapid processing and unsupervised learning in a model of the cortical macrocolumn.

    abstract::We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604772744893

    authors: Lücke J,von der Malsburg C

    Update date: 2004-03-01 00:00:00

  • Employing the zeta-transform to optimize the calculation of the synaptic conductance of NMDA and other synaptic channels in network simulations.

    abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017061

    authors: Köhn J,Wörgötter F

    Update date: 1998-10-01 00:00:00

  • State-Space Representations of Deep Neural Networks.

    abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k -many skip connections into network architectures, such as residual networks and additive dense n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01165

    authors: Hauser M,Gunn S,Saab S Jr,Ray A

    Update date: 2019-03-01 00:00:00

  • MISEP method for postnonlinear blind source separation.

    abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.9.2557

    authors: Zheng CH,Huang DS,Li K,Irwin G,Sun ZL

    Update date: 2007-09-01 00:00:00

  • Dependence of neuronal correlations on filter characteristics and marginal spike train statistics.

    abstract::Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.05-07-525

    authors: Tetzlaff T,Rotter S,Stark E,Abeles M,Aertsen A,Diesmann M

    Update date: 2008-09-01 00:00:00

  • Determining Burst Firing Time Distributions from Multiple Spike Trains.

    abstract::Recent experimental findings have shown the presence of robust and cell-type-specific intraburst firing patterns in bursting neurons. We address the problem of characterizing these patterns under the assumption that the bursts exhibit well-defined firing time distributions. We propose a method for estimating these dis...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.07-07-571

    authors: Lago-Fernández LF,Szücs A,Varona P

    Update date: 2009-04-01 00:00:00

  • Constraint on the number of synaptic inputs to a visual cortical neuron controls receptive field formation.

    abstract::To date, Hebbian learning combined with some form of constraint on synaptic inputs has been demonstrated to describe well the development of neural networks. The previous models revealed mathematically the importance of synaptic constraints to reproduce orientation selectivity in the visual cortical neurons, but biolo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.04-08-752

    authors: Tanaka S,Miyashita M

    Update date: 2009-09-01 00:00:00

  • A computational model for rhythmic and discrete movements in uni- and bimanual coordination.

    abstract::Current research on discrete and rhythmic movements differs in both experimental procedures and theory, despite the ubiquitous overlap between discrete and rhythmic components in everyday behaviors. Models of rhythmic movements usually use oscillatory systems mimicking central pattern generators (CPGs). In contrast, m...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.03-08-720

    authors: Ronsse R,Sternad D,Lefèvre P

    Update date: 2009-05-01 00:00:00

  • A Reservoir Computing Model of Reward-Modulated Motor Learning and Automaticity.

    abstract::Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target re...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01198

    authors: Pyle R,Rosenbaum R

    Update date: 2019-07-01 00:00:00

  • Positive Neural Networks in Discrete Time Implement Monotone-Regular Behaviors.

    abstract::We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-call...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00789

    authors: Ameloot TJ,Van den Bussche J

    Update date: 2015-12-01 00:00:00

  • An amplitude equation approach to contextual effects in visual cortex.

    abstract::A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602317250870

    authors: Bressloff PC,Cowan JD

    Update date: 2002-03-01 00:00:00

  • Computing confidence intervals for point process models.

    abstract::Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specifica...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00198

    authors: Sarma SV,Nguyen DP,Czanner G,Wirth S,Wilson MA,Suzuki W,Brown EN

    Update date: 2011-11-01 00:00:00

  • Investigating the fault tolerance of neural networks.

    abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053723096

    authors: Tchernev EB,Mulvaney RG,Phatak DS

    Update date: 2005-07-01 00:00:00

  • On the problem in model selection of neural network regression in overrealizable scenario.

    abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602760128090

    authors: Hagiwara K

    Update date: 2002-08-01 00:00:00

  • Capturing the Forest but Missing the Trees: Microstates Inadequate for Characterizing Shorter-Scale EEG Dynamics.

    abstract::The brain is known to be active even when not performing any overt cognitive tasks, and often it engages in involuntary mind wandering. This resting state has been extensively characterized in terms of fMRI-derived brain networks. However, an alternate method has recently gained popularity: EEG microstate analysis. Pr...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco_a_01229

    authors: Shaw SB,Dhindsa K,Reilly JP,Becker S

    Update date: 2019-11-01 00:00:00

  • Extraction of Synaptic Input Properties in Vivo.

    abstract::Knowledge of synaptic input is crucial for understanding synaptic integration and ultimately neural function. However, in vivo, the rates at which synaptic inputs arrive are high, so that it is typically impossible to detect single events. We show here that it is nevertheless possible to extract the properties of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00975

    authors: Puggioni P,Jelitai M,Duguid I,van Rossum MCW

    Update date: 2017-07-01 00:00:00

  • On the classification capability of sign-constrained perceptrons.

    abstract::The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.288

    authors: Legenstein R,Maass W

    Update date: 2008-01-01 00:00:00

  • Downstream Effect of Ramping Neuronal Activity through Synapses with Short-Term Plasticity.

    abstract::Ramping neuronal activity refers to spiking activity with a rate that increases quasi-linearly over time. It has been observed in multiple cortical areas and is correlated with evidence accumulation processes or timing. In this work, we investigated the downstream effect of ramping neuronal activity through synapses t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00818

    authors: Wei W,Wang XJ

    Update date: 2016-04-01 00:00:00

  • A neurocomputational approach to prepositional phrase attachment ambiguity resolution.

    abstract::A neurocomputational model based on emergent massively overlapping neural cell assemblies (CAs) for resolving prepositional phrase (PP) attachment ambiguity is described. PP attachment ambiguity is a well-studied task in natural language processing and is a case where semantics is used to determine the syntactic struc...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00290

    authors: Nadh K,Huyck C

    Update date: 2012-07-01 00:00:00

  • ISO learning approximates a solution to the inverse-controller problem in an unsupervised behavioral paradigm.

    abstract::In "Isotropic Sequence Order Learning" (pp. 831-864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm into a formal nonevaluating (teacher free) environment, which establishes a sensor-motor feedback. The system is initially guided by a fixed r...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660360581930

    authors: Porr B,von Ferber C,Wörgötter F

    Update date: 2003-04-01 00:00:00