abstract::Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
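The learning rule summarized in this abstract can be sketched in a few lines. The following is a minimal toy implementation, not the paper's exact formulation: global inhibition is modeled as a fixed subtractive term, the threshold is set to zero, and all parameter values are illustrative assumptions.

```python
import numpy as np

def train_bounded_perceptron(patterns, labels, w_max=1.0, lr=0.05,
                             inhibition=None, epochs=2000, seed=0):
    """Hebbian learning on bounded excitatory weights with a stop-learning
    condition: a synapse is modified only when the postsynaptic response
    does not match the desired output (toy sketch, assumed parameters)."""
    rng = np.random.default_rng(seed)
    n = patterns.shape[1]
    w = rng.uniform(0.0, w_max, size=n)       # excitatory weights in [0, w_max]
    if inhibition is None:
        # global inhibition roughly balancing the mean excitatory drive
        inhibition = 0.5 * w_max * patterns.mean() * n
    theta = 0.0                               # threshold small vs. total input
    for _ in range(epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            response = 1 if w @ x - inhibition > theta else 0
            if response != y:                 # stop-learning condition
                w += lr * (y - response) * x  # Hebbian up/down modification
                np.clip(w, 0.0, w_max, out=w) # synapses stay bounded
                errors += 1
        if errors == 0:                       # all patterns classified
            break
    return w

# usage: a small linearly separable problem
X = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]], float)
y = np.array([1, 0, 1, 0])
w = train_bounded_perceptron(X, y)
```

After convergence the excitatory drive for each pattern sits just above or just below the fixed inhibition, i.e., classification rests on small differences around the excitation-inhibition balance, as the abstract describes.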
journal_name:Neural Comput
journal_title:Neural computation
authors:Senn W, Fusi S
doi:10.1162/0899766054615644
subject:Has Abstract
pub_date:2005-10-01 00:00:00
pages:2106-38
issue:10
eissn:0899-7667
issn:1530-888X
journal_volume:17
pub_type: Journal Article
abstract::To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an i...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01239
Update date:2019-12-01 00:00:00
abstract::We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moder...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602320263971
Update date:2002-09-01 00:00:00
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603762552942
Update date:2003-02-01 00:00:00
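The depression dynamics this record alludes to can be illustrated with a standard phenomenological model (in the style of Tsodyks-Markram, with linear recovery); the circuit-derived variant the article analyzes differs precisely in having a nonlinear recovery, so the sketch below is only the baseline it is compared against. All parameter values are assumptions.

```python
def depressing_synapse(spike_times, U=0.4, tau_rec=0.3, dt=1e-4, T=1.0):
    """Phenomenological short-term depression: each presynaptic spike
    consumes a fraction U of the available resource x, which recovers
    toward 1 with time constant tau_rec (linear recovery; the article's
    circuit model uses a nonlinear recovery instead)."""
    steps = int(T / dt)
    spikes = {round(t / dt) for t in spike_times}
    x = 1.0                                   # available synaptic resource
    efficacies = []
    for i in range(steps):
        x += dt * (1.0 - x) / tau_rec         # recovery toward 1
        if i in spikes:
            efficacies.append(U * x)          # released fraction = efficacy
            x -= U * x                        # depletion on each spike
    return efficacies

# a regular 20 Hz train shows the characteristic rundown of efficacy
eff = depressing_synapse([0.05 * k for k in range(1, 11)])
```

For a regular train the efficacy declines monotonically from U toward a steady-state value set by the interplay of depletion and recovery.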
abstract::Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00769
Update date:2015-10-01 00:00:00
abstract::This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporally correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where spatial and temporal probab...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015079
Update date:2000-09-01 00:00:00
abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00012
Update date:2010-09-01 00:00:00
abstract::Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron sp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014872
Update date:2000-11-01 00:00:00
abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-817
Update date:2009-08-01 00:00:00
abstract::Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponential...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00852
Update date:2016-08-01 00:00:00
abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01165
Update date:2019-03-01 00:00:00
abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016511
Update date:1999-05-15 00:00:00
abstract::The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their beh...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00422
Update date:2013-04-01 00:00:00
abstract::We describe an analytical framework for the adaptations of neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015862
Update date:2000-02-01 00:00:00
abstract::We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cog...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00031
Update date:2010-11-01 00:00:00
abstract::In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660260293292
Update date:2002-10-01 00:00:00
abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00385
Update date:2013-01-01 00:00:00
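The one-dimensional sparse-coding setting this abstract starts from can be illustrated with plain matching pursuit, one of the many algorithms it alludes to (chosen here purely for brevity; the dictionary, sizes, and sparsity level below are assumptions for a toy demonstration).

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy sparse coding: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution.
    Atoms are assumed to have unit norm."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual        # correlations with residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# toy dictionary of unit-norm random atoms; signal is a 2-sparse mixture
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
s = 2.0 * D[:, 5] - 1.5 * D[:, 40]
coeffs, res = matching_pursuit(s, D, n_atoms=10)
```

With a well-conditioned dictionary the two true atoms dominate the recovered coefficients and the residual shrinks rapidly.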
abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.10-08-878
Update date:2009-12-01 00:00:00
abstract::We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Th...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01286
Update date:2020-07-01 00:00:00
abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2008.04-07-506
Update date:2008-07-01 00:00:00
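The two component types named in this abstract, first-order low-pass filters and static nonlinearities, compose into stages as sketched below. The stage order, the choice of tanh as the nonlinearity, and the time constants are assumptions for illustration, not the letter's scheme.

```python
import math

def lowpass(signal, tau, dt):
    """First-order low-pass filter y' = (x - y) / tau, forward Euler."""
    y, out = 0.0, []
    for x in signal:
        y += dt * (x - y) / tau
        out.append(y)
    return out

def cascade(signal, tau, dt, nonlinearity=math.tanh):
    """One low-pass / static-nonlinearity stage: the assumed building
    block; more complex systems chain several such stages."""
    return [nonlinearity(v) for v in lowpass(signal, tau, dt)]

# step response: the filtered output rises smoothly toward tanh(1)
dt, tau = 0.001, 0.05
step = [1.0] * 1000
out = cascade(step, tau, dt)
```

Because each stage is either linear-dynamic or static, the full system state is carried entirely by the filter outputs, which is what makes such schemes fast to simulate.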
abstract::This article presents a new theoretical framework to consider the dynamics of a stochastic spiking neuron model with general membrane response to input spike. We assume that the input spikes obey an inhomogeneous Poisson process. The stochastic process of the membrane potential then becomes a gaussian process. When a ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601317098529
Update date:2001-12-01 00:00:00
abstract::We present formal specification and verification of a robot moving in a complex network, using temporal sequence learning to avoid obstacles. Our aim is to demonstrate the benefit of using a formal approach to analyze such a system as a complementary approach to simulation. We first describe a classical closed-loop si...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00493
Update date:2013-11-01 00:00:00
abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00227
Update date:2012-02-01 00:00:00
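The Fisher-information measure this abstract relies on has a simple closed form for a population of independent Poisson neurons: J(theta) = sum_i f_i'(theta)^2 / f_i(theta). The gaussian tuning curves and all parameter values below are illustrative assumptions; the letter's analysis is more general.

```python
import numpy as np

def fisher_information(theta, centers, amp=10.0, width=0.5):
    """Fisher information of independent Poisson neurons with gaussian
    tuning curves: J(theta) = sum_i f_i'(theta)^2 / f_i(theta)."""
    f = amp * np.exp(-0.5 * ((theta - centers) / width) ** 2)  # firing rates
    fprime = f * (centers - theta) / width ** 2                # df/dtheta
    return float(np.sum(fprime ** 2 / f))

centers = np.linspace(-5, 5, 41)          # evenly tiled preferred stimuli
J_mid = fisher_information(0.0, centers)  # stimulus in the bulk of the code
J_edge = fisher_information(4.9, centers) # stimulus near the edge
```

With dense, even tiling J is nearly constant across the represented range and drops near its edges, which is why edge effects matter when tracking information loss across processing stages.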
abstract::We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01307
Update date:2020-10-01 00:00:00
abstract::In this work, we propose a two-layered descriptive model for motion processing from retina to the cortex, with an event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps in a simple...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01191
Update date:2019-06-01 00:00:00
abstract::Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek ( 1999 ) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwin...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00961
Update date:2017-06-01 00:00:00
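The trade-off the information bottleneck formalizes can be written directly as a Lagrangian, L = I(X;T) - beta * I(T;Y), evaluated for any stochastic encoder q(t|x). The sketch below just computes this objective for discrete distributions (the toy joint distribution is an assumption; it does not solve the IB optimization).

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in nats for a joint distribution given as a 2D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def ib_objective(pxy, q_t_given_x, beta):
    """Information bottleneck Lagrangian L = I(X;T) - beta * I(T;Y)
    for a stochastic encoder q(t|x); beta sets the trade-off between
    compressing X and preserving information about Y."""
    px = pxy.sum(axis=1)
    pxt = q_t_given_x * px[:, None]   # joint p(x,t)
    pty = q_t_given_x.T @ pxy         # p(t,y) = sum_x q(t|x) p(x,y)
    return mutual_info(pxt) - beta * mutual_info(pty)

# toy check: a lossless encoder (T = X) pays H(X) in compression
# but keeps all of I(X;Y); a trivial one-cluster encoder scores zero
pxy = np.array([[0.4, 0.1], [0.1, 0.4]])
L = ib_objective(pxy, np.eye(2), beta=1.0)
```

Sweeping beta traces out the optimal trade-off curve the abstract refers to; here only the two extreme encoders are evaluated.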
abstract::We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing stra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01060
Update date:2018-04-01 00:00:00
abstract::We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of inte...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660151134280
Update date:2001-05-01 00:00:00
abstract::Synaptically generated subthreshold membrane potential (Vm) fluctuations can be characterized within the framework of stochastic calculus. It is possible to obtain analytic expressions for the steady-state Vm distribution, even in the case of conductance-based synaptic currents. However, as we show here, the analytic ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054796932
Update date:2005-11-01 00:00:00
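The simplest stochastic-calculus description of subthreshold Vm fluctuations is an Ornstein-Uhlenbeck process, which has a gaussian steady-state distribution with known mean and variance. The sketch below simulates that current-based effective model with an exact one-step update; the letter treats the harder conductance-based case, and all parameter values here are assumptions.

```python
import numpy as np

def simulate_ou(mu=-60.0, tau=0.02, sigma=2.0, dt=1e-4, T=20.0, seed=0):
    """Ornstein-Uhlenbeck sketch of subthreshold Vm fluctuations:
    dV = -(V - mu)/tau dt + noise, with stationary mean mu and
    stationary standard deviation sigma.  The update below is the
    exact discretization over one time step, not an Euler scheme."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    a = np.exp(-dt / tau)                     # per-step decay factor
    noise_scale = sigma * np.sqrt(1.0 - a * a)
    v = np.empty(n)
    v[0] = mu
    for i in range(1, n):
        v[i] = mu + a * (v[i - 1] - mu) + noise_scale * rng.standard_normal()
    return v

v = simulate_ou()   # 20 s of simulated membrane potential (mV)
```

Comparing the empirical mean and standard deviation of a long trace against the analytic values (mu and sigma) is the kind of consistency check the analytic steady-state expressions make possible.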
abstract::This letter proposes a multichannel source separation technique, the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class label...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01217
Update date:2019-09-01 00:00:00
abstract::A general method is presented to classify temporal patterns generated by rhythmic biological networks when synaptic connections and cellular properties are known. The method is discrete in nature and relies on algebraic properties of state transitions and graph theory. Elements of the set of rhythms generated by a net...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017160
Update date:1998-10-01 00:00:00