abstract::The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity of the successor representation has yet to be explored. An interesting possibility is that the successor representation can be used not only for reinforcement learning but for episodic learning as well. Our main contribution is to show that a variant of the temporal context model (TCM; Howard & Kahana, 2002), an influential model of episodic memory, can be understood as directly estimating the successor representation using the temporal difference learning algorithm (Sutton & Barto, 1998). This insight leads to a generalization of TCM and new experimental predictions. In addition to casting a new normative light on TCM, this equivalence suggests a previously unexplored point of contact between different learning systems.
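The abstract's central claim is that temporal difference (TD) learning can directly estimate the successor representation. A minimal sketch of that idea (our illustration, not the paper's code; the function name, toy environment, and hyperparameters are our own):

```python
import numpy as np

def learn_sr(transitions, n_states, gamma=0.9, alpha=0.1, n_passes=2000):
    """TD(0) estimate of the successor representation M, where M[s, s']
    approximates the expected discounted future occupancy of state s'
    starting from state s (Dayan, 1993).

    `transitions` is a list of observed (s, s_next) state pairs."""
    M = np.zeros((n_states, n_states))
    I = np.eye(n_states)
    for _ in range(n_passes):
        for s, s_next in transitions:
            # TD error: current occupancy (one-hot vector for s) plus the
            # discounted successor estimate of the next state, minus the
            # old row estimate for s.
            M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
    return M
```

For a fixed policy with transition matrix T, the fixed point of this update is the analytic successor representation M = (I − γT)⁻¹; e.g., on a deterministic three-state ring 0→1→2→0, the learned M converges to that inverse.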
journal_name:Neural Comput
journal_title:Neural computation
authors:Gershman SJ, Moore CD, Todd MT, Norman KA, Sederberg PB
doi:10.1162/NECO_a_00282
subject:Has Abstract
pub_date:2012-06-01 00:00:00
pages:1553-68
issue:6
issn:0899-7667
eissn:1530-888X
journal_volume:24
pub_type: Journal Article
abstract::We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behav...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.10.2739
update_date:2007-10-01 00:00:00
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603762552942
update_date:2003-02-01 00:00:00
abstract::We present a system for the automatic interpretation of cluttered scenes containing multiple partly occluded objects in front of unknown, complex backgrounds. The system is based on an extended elastic graph matching algorithm that allows the explicit modeling of partial occlusions. Our approach extends an earlier sys...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.6.1441
update_date:2006-06-01 00:00:00
abstract::Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the ...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.19.2.404
update_date:2007-02-01 00:00:00
abstract::The visual systems of many mammals, including humans, are able to integrate the geometric information of visual stimuli and perform cognitive tasks at the first stages of the cortical processing. This is thought to be the result of a combination of mechanisms, which include feature extraction at the single cell level ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00738
update_date:2015-06-01 00:00:00
abstract::Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by spiking neurons, where the spiking rules are usually used in a sequential way (an applicable rule is applied one time at a step) or an exhaustive way (an applicable rule is applied as many times as possible at a s...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00665
update_date:2014-12-01 00:00:00
abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.9.2557
update_date:2007-09-01 00:00:00
abstract::It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01090
update_date:2018-08-01 00:00:00
abstract::The bias/variance decomposition of mean-squared error is well understood and relatively straightforward. In this note, a similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017232
update_date:1998-07-28 00:00:00
abstract::The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01172
update_date:2019-04-01 00:00:00
abstract::We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016737
update_date:1999-02-15 00:00:00
abstract::Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.05-07-525
update_date:2008-09-01 00:00:00
abstract::We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-call...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00789
update_date:2015-12-01 00:00:00
abstract::We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cog...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00031
update_date:2010-11-01 00:00:00
abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014583
update_date:2001-02-01 00:00:00
abstract::In the past decade the importance of synchronized dynamics in the brain has emerged from both empirical and theoretical perspectives. Fast dynamic synchronous interactions of an oscillatory or nonoscillatory nature may constitute a form of temporal coding that underlies feature binding and perceptual synthesis. The re...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016287
update_date:1999-08-15 00:00:00
abstract::We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow and fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium p...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00322
update_date:2012-09-01 00:00:00
abstract::A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a s...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01021
update_date:2017-12-01 00:00:00
abstract::We consider the effect of the effective timing of a delayed feedback on the excitatory neuron in a recurrent inhibitory loop, when biological realities of firing and absolute refractory period are incorporated into a phenomenological spiking linear or quadratic integrate-and-fire neuron model. We show that such models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.8.2124
update_date:2007-08-01 00:00:00
abstract::Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally ...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00840
update_date:2016-06-01 00:00:00
abstract::Bursting plays an important role in neural communication. At the population level, macroscopic bursting has been identified in populations of neurons that do not express intrinsic bursting mechanisms. For the analysis of phase transitions between bursting and non-bursting states, mean-field descriptions of macroscopic...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01300
update_date:2020-09-01 00:00:00
abstract::The brain is known to be active even when not performing any overt cognitive tasks, and often it engages in involuntary mind wandering. This resting state has been extensively characterized in terms of fMRI-derived brain networks. However, an alternate method has recently gained popularity: EEG microstate analysis. Pr...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01229
update_date:2019-11-01 00:00:00
abstract::We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603321891846
update_date:2003-07-01 00:00:00
abstract::We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Th...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01286
update_date:2020-07-01 00:00:00
abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1995.7.6.1225
update_date:1995-11-01 00:00:00
abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.8.1611
update_date:1996-11-15 00:00:00
abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.2.226
update_date:1991-07-01 00:00:00
abstract::This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760407973
update_date:2002-11-01 00:00:00
abstract::Independent component analysis (ICA) finds a linear transformation to variables that are maximally statistically independent. We examine ICA and algorithms for finding the best transformation from the point of view of maximizing the likelihood of the data. In particular, we discuss the way in which scaling of the unmi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016043
update_date:1999-11-15 00:00:00
abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.10-08-878
update_date:2009-12-01 00:00:00