Synaptic runaway in associative networks and the pathogenesis of schizophrenia.

Abstract:

Synaptic runaway denotes the formation of erroneous synapses and the premature functional decline that accompany activity-dependent learning in neural networks. This work studies synaptic runaway, both analytically and numerically, in binary-firing associative memory networks. Synaptic runaway turns out to be of fairly moderate magnitude in these networks under normal, baseline conditions, but it may become extensive if the threshold for Hebbian learning is reduced. These findings are combined with recent evidence for arrested N-methyl-D-aspartate (NMDA) receptor maturation in schizophrenics to formulate a new hypothesis, stated in neural terms, concerning the pathogenesis of schizophrenic psychotic symptoms.
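
To make the mechanism concrete, the following is a minimal, illustrative sketch (not the authors' model) of a binary Hopfield-type associative memory in which retrieval is accompanied by ongoing Hebbian learning gated by an activity threshold. The network size N, pattern count P, cue-noise level, and the gating parameter theta_learn are assumptions chosen for illustration; the reported "erroneous-update fraction" is a simple proxy for synaptic runaway, counting Hebbian updates applied to neuron pairs that are co-active in the retrieved state but not in the cued stored pattern.

# Minimal sketch of "synaptic runaway" in a binary associative memory.
# Assumptions (not from the paper): sizes, noise level, and the way the
# learning threshold theta_learn gates which units perform Hebbian updates.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Standard Hebbian (outer-product) storage
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def retrieve(W, cue, steps=10):
    """Synchronous binary-threshold retrieval dynamics."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def runaway_fraction(theta_learn, noise=0.2):
    """Fraction of ongoing Hebbian updates made on neuron pairs that are
    co-active in the retrieved state but not in the cued stored pattern."""
    W_run = W.copy()
    erroneous = total = 0
    for mu in range(P):
        cue = patterns[mu] * np.where(rng.random(N) < noise, -1, 1)  # noisy cue
        s = retrieve(W_run, cue)
        drive = W_run @ s
        active = drive * s > theta_learn         # units whose aligned input clears the learning threshold
        for i in np.where(active)[0]:
            coactive = s == s[i]                 # partners sharing unit i's sign in the retrieved state
            total += coactive.sum() - 1          # exclude the self-pair
            erroneous += (coactive & (patterns[mu] != patterns[mu][i])).sum()
            W_run[i, coactive] += s[i] * s[coactive] / N   # ongoing Hebbian update
            W_run[i, i] = 0.0
    return erroneous / max(total, 1)

for theta in (1.0, 0.5, 0.0):
    print(f"learning threshold {theta:.1f}: "
          f"erroneous-update fraction {runaway_fraction(theta):.3f}")

In this sketch, lowering theta_learn lets weakly driven (and therefore more often erroneously retrieved) units trigger updates as well, which is the qualitative effect the abstract attributes to a reduced threshold for Hebbian learning.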

journal_name

Neural Comput

journal_title

Neural computation

authors

Greenstein-Messica A,Ruppin E

doi

10.1162/089976698300017836

subject

Has Abstract

pub_date

1998-02-15 00:00:00

pages

451-65

issue

2

eissn

1530-888X

issn

0899-7667

journal_volume

10

pub_type

Journal Article, Review
  • Online Reinforcement Learning Using a Probability Density Estimation.

    abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00906

    authors: Agostini A,Celaya E

    update_date:2017-01-01 00:00:00

  • Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    abstract::Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041732413

    authors: Jackson BS

    update_date:2004-10-01 00:00:00

  • The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex.

    abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016511

    authors: Shinomoto S,Sakai Y,Funahashi S

    update_date:1999-05-15 00:00:00

  • Design of charge-balanced time-optimal stimuli for spiking neuron oscillators.

    abstract::In this letter, we investigate the fundamental limits on how the interspike time of a neuron oscillator can be perturbed by the application of a bounded external control input (a current stimulus) with zero net electric charge accumulation. We use phase models to study the dynamics of neurons and derive charge-balance...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00643

    authors: Dasanayake IS,Li JS

    update_date:2014-10-01 00:00:00

  • Online adaptive decision trees.

    abstract::Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041336396

    authors: Basak J

    update_date:2004-09-01 00:00:00

  • A modified algorithm for generalized discriminant analysis.

    abstract::Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from linear domain to a nonlinear domain via the kernel trick. However, in the previous algorithm of GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773717612

    authors: Zheng W,Zhao L,Zou C

    update_date:2004-06-01 00:00:00

  • Time-varying perturbations can distinguish among integrate-to-threshold models for perceptual decision making in reaction time tasks.

    abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.07-08-817

    authors: Zhou X,Wong-Lin K,Holmes P

    update_date:2009-08-01 00:00:00

  • Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation.

    abstract::Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding perfo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00890

    authors: Pagan M,Simoncelli EP,Rust NC

    update_date:2016-11-01 00:00:00

  • Neutral stability, rate propagation, and critical branching in feedforward networks.

    abstract::Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarify...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00461

    authors: Cayco-Gajic NA,Shea-Brown E

    update_date:2013-07-01 00:00:00

  • Are loss functions all the same?

    abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773135104

    authors: Rosasco L,De Vito E,Caponnetto A,Piana M,Verri A

    update_date:2004-05-01 00:00:00

  • Minimal model for intracellular calcium oscillations and electrical bursting in melanotrope cells of Xenopus laevis.

    abstract::A minimal model is presented to explain changes in frequency, shape, and amplitude of Ca2+ oscillations in the neuroendocrine melanotrope cell of Xenopus Laevis. It describes the cell as a plasma membrane oscillator with influx of extracellular Ca2+ via voltage-gated Ca2+ channels in the plasma membrane. The Ca2+ osci...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601300014655

    authors: Cornelisse LN,Scheenen WJ,Koopman WJ,Roubos EW,Gielen SC

    update_date:2001-01-01 00:00:00

  • Maintaining Consistency of Spatial Information in the Hippocampal Network: A Combinatorial Geometry Model.

    abstract::Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally ...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00840

    authors: Dabaghian Y

    update_date:2016-06-01 00:00:00

  • Improving generalization performance of natural gradient learning using optimized regularization by NIC.

    abstract::Natural gradient learning is known to be efficient in escaping plateau, which is a main cause of the slow learning speed of neural networks. The adaptive natural gradient learning method for practical implementation also has been developed, and its advantage in real-world problems has been confirmed. In this letter, w...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604322742065

    authors: Park H,Murata N,Amari S

    update_date:2004-02-01 00:00:00

  • A Reservoir Computing Model of Reward-Modulated Motor Learning and Automaticity.

    abstract::Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target re...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01198

    authors: Pyle R,Rosenbaum R

    update_date:2019-07-01 00:00:00

  • Conductance-based integrate-and-fire models.

    abstract::A conductance-based model of Na+ and K+ currents underlying action potential generation is introduced by simplifying the quantitative model of Hodgkin and Huxley (HH). If the time course of rate constants can be approximated by a pulse, HH equations can be solved analytically. Pulse-based (PB) models generate action p...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1997.9.3.503

    authors: Destexhe A

    update_date:1997-04-01 00:00:00

  • Information loss in an optimal maximum likelihood decoding.

    abstract::The mutual information between a set of stimuli and the elicited neural responses is compared to the corresponding decoded information. The decoding procedure is presented as an artificial distortion of the joint probabilities between stimuli and responses. The information loss is quantified. Whenever the probabilitie...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602317318947

    authors: Samengo I

    update_date:2002-04-01 00:00:00

  • Range-based ICA using a nonsmooth quasi-newton optimizer for electroencephalographic source localization in focal epilepsy.

    abstract::Independent component analysis (ICA) aims at separating a multivariate signal into independent nongaussian signals by optimizing a contrast function with no knowledge on the mixing mechanism. Despite the availability of a constellation of contrast functions, a Hartley-entropy-based ICA contrast endowed with the discri...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00700

    authors: Selvan SE,George ST,Balakrishnan R

    update_date:2015-03-01 00:00:00

  • Rapid processing and unsupervised learning in a model of the cortical macrocolumn.

    abstract::We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604772744893

    authors: Lücke J,von der Malsburg C

    update_date:2004-03-01 00:00:00

  • Characterization of minimum error linear coding with sensory and neural noise.

    abstract::Robust coding has been proposed as a solution to the problem of minimizing decoding error in the presence of neural noise. Many real-world problems, however, have degradation in the input signal, not just in neural representations. This generalized problem is more relevant to biological sensory coding where internal n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00181

    authors: Doi E,Lewicki MS

    update_date:2011-10-01 00:00:00

  • Supervised learning in a recurrent network of rate-model neurons exhibiting frequency adaptation.

    abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054323017

    authors: Fortier PA,Guigon E,Burnod Y

    update_date:2005-09-01 00:00:00

  • Learning Hough transform: a neural network model.

    abstract::A single-layered Hough transform network is proposed that accepts image coordinates of each object pixel as input and produces a set of outputs that indicate the belongingness of the pixel to a particular structure (e.g., a straight line). The network is able to learn adaptively the parametric forms of the linear segm...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601300014501

    authors: Basak J

    update_date:2001-03-01 00:00:00

  • Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies.

    abstract::Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron sp...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300014872

    authors: Martignon L,Deco G,Laskey K,Diamond M,Freiwald W,Vaadia E

    update_date:2000-11-01 00:00:00

  • Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation.

    abstract::Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01319

    authors: Chen R,Li L

    update_date:2020-12-01 00:00:00

  • Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression.

    abstract::Intracortical brain computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has been previously shown to degrade with signal nonstationarities-specifically, the changes in the statistics of the data between training and testin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01129

    authors: Brandman DM,Burkhart MC,Kelemen J,Franco B,Harrison MT,Hochberg LR

    update_date:2018-11-01 00:00:00

  • Hybrid integrate-and-fire model of a bursting neuron.

    abstract::We present a reduction of a Hodgkin-Huxley (HH)--style bursting model to a hybridized integrate-and-fire (IF) formalism based on a thorough bifurcation analysis of the neuron's dynamics. The model incorporates HH--style equations to evolve the subthreshold currents and includes IF mechanisms to characterize spike even...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603322518768

    authors: Breen BJ,Gerken WC,Butera RJ Jr

    update_date:2003-12-01 00:00:00

  • Bayesian model assessment and comparison using cross-validation predictive densities.

    abstract::In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660260293292

    authors: Vehtari A,Lampinen J

    update_date:2002-10-01 00:00:00

  • Neural associative memory with optimal Bayesian learning.

    abstract::Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00127

    authors: Knoblauch A

    update_date:2011-06-01 00:00:00

  • Similarity, connectionism, and the problem of representation in vision.

    abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/neco.1997.9.4.701

    authors: Edelman S,Duvdevani-Bar S

    update_date:1997-05-15 00:00:00

  • Toward a biophysically plausible bidirectional Hebbian rule.

    abstract::Although the commonly used quadratic Hebbian-anti-Hebbian rules lead to successful models of plasticity and learning, they are inconsistent with neurophysiology. Other rules, more physiologically plausible, fail to specify the biological mechanism of bidirectionality and the biological mechanism that prevents synapses...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017629

    authors: Grzywacz NM,Burgi PY

    update_date:1998-04-01 00:00:00

  • Temporal coding: assembly formation through constructive interference.

    abstract::Temporal coding is studied for an oscillatory neural network model with synchronization and acceleration. The latter mechanism refers to increasing (decreasing) the phase velocity of each unit for stronger (weaker) or more coherent (decoherent) input from the other units. It has been demonstrated that acceleration gen...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.09-06-342

    authors: Burwick T

    update_date:2008-07-01 00:00:00