Employing the Z-transform to optimize the calculation of the synaptic conductance of NMDA and other synaptic channels in network simulations.

Abstract:

Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way, even for multiply converging synapses with different delays, by means of the Z-transform. Using the example of an NMDA synapse, we show that every update of the total conductance is achieved by an iterative process requiring at most three recent multiplications, which together need only the history values from the two most recent iterations. A major advantage is that this small computational load is independent of the number of synapses simulated. A benchmark comparison to other techniques demonstrates the superior performance of the Z-transform. Non-voltage-dependent synaptic channels can be treated similarly (Olshausen, 1990; Brettle & Niebur, 1994), and the technique can also be generalized to other synaptic channels.
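
To make the recursion described above concrete, the sketch below shows a Z-transform-derived update for an alpha-function conductance driven by an arbitrary number of converging synapses. It is a minimal Python illustration: the alpha-function kernel g(t) = (t/tau)*exp(-t/tau), the time step and time constant, and the Jahr-Stevens-style Mg2+ block are assumptions chosen for illustration, not the paper's own equations or code.

    import math

    class AlphaConductance:
        """Recursive (Z-transform-based) update of a summed alpha-function conductance.

        Sampling the assumed kernel g(t) = (t/tau) * exp(-t/tau) at step dt gives a
        second-order rational Z-transform, so the total conductance obeys
            g[n] = 2a*g[n-1] - a^2*g[n-2] + c*x[n-1],
        where x[n] is the summed weight of all spikes arriving at step n. Cost per
        step: three multiplications, independent of the number of synapses.
        """

        def __init__(self, dt=0.1, tau=2.0):  # dt and tau in ms, illustrative values
            a = math.exp(-dt / tau)            # per-step decay factor
            self.k1 = 2.0 * a                  # coefficient of g[n-1]
            self.k2 = a * a                    # coefficient of g[n-2]
            self.c = (dt / tau) * a            # input scaling
            self.g1 = 0.0                      # g[n-1]
            self.g2 = 0.0                      # g[n-2]

        def step(self, spike_weight_sum):
            """Advance one step; spike_weight_sum is the summed weight of all spikes
            (from any synapse) due to arrive now. Axonal delays only change the step
            into which a spike is queued."""
            g = self.k1 * self.g1 - self.k2 * self.g2 + self.c * spike_weight_sum
            self.g2, self.g1 = self.g1, g
            return g

    def nmda_current(g, v, mg=1.0, e_rev=0.0):
        """Illustrative voltage dependence: scale the recursively computed conductance
        by a Jahr-Stevens-type Mg2+ block before forming the NMDA current."""
        block = 1.0 / (1.0 + mg * math.exp(-0.062 * v) / 3.57)
        return g * block * (v - e_rev)

Because the recursion is linear, spikes from all converging synapses simply add into spike_weight_sum, which is why the per-step cost stays constant as the number of simulated synapses grows.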

journal_name: Neural Comput

journal_title: Neural computation

authors: Köhn J,Wörgötter F

doi: 10.1162/089976698300017061

subject: Has Abstract

pub_date: 1998-10-01 00:00:00

pages: 1639-51

issue: 7

eissn: 1530-888X

issn: 0899-7667

journal_volume: 10

pub_type: Journal Article
  • Synchrony in heterogeneous networks of spiking neurons.

    abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015286

    authors: Neltner L,Hansel D,Mato G,Meunier C

    Update date: 2000-07-01 00:00:00

  • A neurocomputational model for cocaine addiction.

    abstract::Based on the dopamine hypotheses of cocaine addiction and the assumption of decrement of brain reward system sensitivity after long-term drug exposure, we propose a computational model for cocaine addiction. Utilizing average reward temporal difference reinforcement learning, we incorporate the elevation of basal rewa...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.10-08-882

    authors: Dezfouli A,Piray P,Keramati MM,Ekhtiari H,Lucas C,Mokri A

    Update date: 2009-10-01 00:00:00

  • A finite-sample, distribution-free, probabilistic lower bound on mutual information.

    abstract::For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky-Kiefer-Wolfowitz inequality and is distribution free. A quadr...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00144

    authors: VanderKraats ND,Banerjee A

    Update date: 2011-07-01 00:00:00

  • Inhibition and Excitation Shape Activity Selection: Effect of Oscillations in a Decision-Making Circuit.

    abstract::Decision making is a complex task, and its underlying mechanisms that regulate behavior, such as the implementation of the coupling between physiological states and neural networks, are hard to decipher. To gain more insight into neural computations underlying ongoing binary decision-making tasks, we consider a neural...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01185

    authors: Bose T,Reina A,Marshall JAR

    Update date: 2019-05-01 00:00:00

  • A simple Hebbian/anti-Hebbian network learns the sparse, independent components of natural images.

    abstract::Slightly modified versions of an early Hebbian/anti-Hebbian neural network are shown to be capable of extracting the sparse, independent linear components of a prefiltered natural image set. An explanation for this capability in terms of a coupling between two hypothetical networks is presented. The simple networks pr...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606775093891

    authors: Falconbridge MS,Stamps RL,Badcock DR

    Update date: 2006-02-01 00:00:00

  • Distributed control of uncertain systems using superpositions of linear operators.

    abstract::Control in the natural environment is difficult in part because of uncertainty in the effect of actions. Uncertainty can be due to added motor or sensory noise, unmodeled dynamics, or quantization of sensory feedback. Biological systems are faced with further difficulties, since control must be performed by networks o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00151

    authors: Sanger TD

    Update date: 2011-08-01 00:00:00

  • Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell.

    abstract::To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1997.9.5.971

    authors: Troyer TW,Miller KD

    Update date: 1997-07-01 00:00:00

  • Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks.

    abstract::Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that, for anal...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.4.546

    authors: Jabri M,Flower B

    Update date: 1991-01-01 00:00:00

  • On the classification capability of sign-constrained perceptrons.

    abstract::The perceptron (also referred to as McCulloch-Pitts neuron, or linear threshold gate) is commonly used as a simplified model for the discrimination and learning capability of a biological neuron. Criteria that tell us when a perceptron can implement (or learn to implement) all possible dichotomies over a given set of ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.288

    authors: Legenstein R,Maass W

    Update date: 2008-01-01 00:00:00

  • A causal perspective on the analysis of signal and noise correlations and their role in population coding.

    abstract::The role of correlations between neuronal responses is crucial to understanding the neural code. A framework used to study this role comprises a breakdown of the mutual information between stimuli and responses into terms that aim to account for different coding modalities and the distinction between different notions...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00588

    authors: Chicharro D

    Update date: 2014-06-01 00:00:00

  • Density-weighted Nyström method for computing large kernel eigensystems.

    abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.11-07-651

    authors: Zhang K,Kwok JT

    Update date: 2009-01-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.6.1135

    authors: Casey M

    Update date: 1996-08-15 00:00:00

  • Hidden Quantum Processes, Quantum Ion Channels, and 1/fθ-Type Noise.

    abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_01067

    authors: Paris A,Vosoughi A,Berman SA,Atia G

    Update date: 2018-07-01 00:00:00

  • Computing confidence intervals for point process models.

    abstract::Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specifica...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00198

    authors: Sarma SV,Nguyen DP,Czanner G,Wirth S,Wilson MA,Suzuki W,Brown EN

    Update date: 2011-11-01 00:00:00

  • Feature selection in simple neurons: how coding depends on spiking dynamics.

    abstract::The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes and a nonlinear function that r...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.02-09-956

    authors: Famulare M,Fairhall A

    Update date: 2010-03-01 00:00:00

  • State-Space Representations of Deep Neural Networks.

    abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k-many skip connections into network architectures, such as residual networks and additive dense n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01165

    authors: Hauser M,Gunn S,Saab S Jr,Ray A

    Update date: 2019-03-01 00:00:00

  • Optimal sequential detection of stimuli from multiunit recordings taken in densely populated brain regions.

    abstract::We address the problem of detecting the presence of a recurring stimulus by monitoring the voltage on a multiunit electrode located in a brain region densely populated by stimulus reactive neurons. Published experimental results suggest that under these conditions, when a stimulus is present, the measurements are gaus...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00257

    authors: Nossenson N,Messer H

    Update date: 2012-04-01 00:00:00

  • Binocular receptive field models, disparity tuning, and characteristic disparity.

    abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.8.1611

    authors: Zhu YD,Qian N

    Update date: 1996-11-15 00:00:00

  • Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    abstract::Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041732413

    authors: Jackson BS

    Update date: 2004-10-01 00:00:00

  • Modeling slowly bursting neurons via calcium store and voltage-independent calcium current.

    abstract::Recent experiments indicate that the calcium store (e.g., endoplasmic reticulum) is involved in electrical bursting and [Ca2+]i oscillation in bursting neuronal cells. In this paper, we formulate a mathematical model for bursting neurons, which includes Ca2+ in the intracellular Ca2+ stores and a voltage-independent c...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.5.951

    authors: Chay TR

    Update date: 1996-07-01 00:00:00

  • Online Reinforcement Learning Using a Probability Density Estimation.

    abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00906

    authors: Agostini A,Celaya E

    Update date: 2017-01-01 00:00:00

  • Evaluating auditory performance limits: II. One-parameter discrimination with random-level variation.

    abstract::Previous studies have combined analytical models of stochastic neural responses with signal detection theory (SDT) to predict psychophysical performance limits; however, these studies have typically been limited to simple models and simple psychophysical tasks. A companion article in this issue ("Evaluating Auditory P...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601750541813

    authors: Heinz MG,Colburn HS,Carney LH

    Update date: 2001-10-01 00:00:00

  • Estimating functions of independent component analysis for temporally correlated signals.

    abstract::This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporarily correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where spatial and temporal probab...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015079

    authors: Amari S

    Update date: 2000-09-01 00:00:00

  • Spiking neural P systems with weights.

    abstract::A variant of spiking neural P systems with positive or negative weights on synapses is introduced, where the rules of a neuron fire when the potential of that neuron equals a given value. The involved values-weights, firing thresholds, potential consumed by each rule-can be real (computable) numbers, rational numbers,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00022

    authors: Wang J,Hoogeboom HJ,Pan L,Păun G,Pérez-Jiménez MJ

    Update date: 2010-10-01 00:00:00

  • Investigating the fault tolerance of neural networks.

    abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053723096

    authors: Tchernev EB,Mulvaney RG,Phatak DS

    Update date: 2005-07-01 00:00:00

  • Alignment of coexisting cortical maps in a motor control model.

    abstract::How do multiple feature maps that coexist in the same region of cerebral cortex align with each other? We hypothesize that such alignment is governed by temporal correlations: features in one map that are temporally correlated with those in another come to occupy the same spatial locations in cortex over time. To exam...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.4.731

    authors: Chen Y,Reggia JA

    Update date: 1996-05-15 00:00:00

  • STDP-Compatible Approximation of Backpropagation in an Energy-Based Model.

    abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00934

    authors: Bengio Y,Mesnard T,Fischer A,Zhang S,Wu Y

    Update date: 2017-03-01 00:00:00

  • A Resource-Allocating Network for Function Interpolation.

    abstract::We have created a network that allocates a new computational unit whenever an unusual pattern is presented to the network. This network forms compact representations, yet learns easily and rapidly. The network can be used at any time in the learning process and the learning patterns do not have to be repeated. The uni...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.2.213

    authors: Platt J

    Update date: 1991-07-01 00:00:00

  • Improving generalization performance of natural gradient learning using optimized regularization by NIC.

    abstract::Natural gradient learning is known to be efficient in escaping plateau, which is a main cause of the slow learning speed of neural networks. The adaptive natural gradient learning method for practical implementation also has been developed, and its advantage in real-world problems has been confirmed. In this letter, w...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604322742065

    authors: Park H,Murata N,Amari S

    Update date: 2004-02-01 00:00:00

  • Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses.

    abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00956

    authors: Elliott T

    Update date: 2017-06-01 00:00:00