Abstract:
The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its importance, the mathematical origins of this quantity and its relation to the variational free energy (VFE) remain unclear. In this letter, we investigate the origins of the EFE in detail and show that it is not simply "the free energy in the future." We present a functional that we argue is the natural extension of the VFE but actively discourages exploratory behavior, thus demonstrating that exploration does not directly follow from free energy minimization into the future. We then develop a novel objective, the free energy of the expected future (FEEF), which possesses both the epistemic component of the EFE and an intuitive mathematical grounding as the divergence between predicted and desired futures.
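For orientation, the following is a minimal sketch, in standard active inference notation that is assumed here rather than taken from this record, of the quantities the abstract refers to: Q(· | π) denotes the agent's predictive distribution over future observations o_τ and hidden states s_τ under a policy π, and P̃ denotes the biased generative model encoding the agent's preferences.

% Expected free energy of a policy and its usual decomposition into an
% extrinsic (preference-seeking) term and an intrinsic (epistemic) term;
% the approximation uses Q(s_tau | o_tau) in place of the model posterior.
G(\pi) = \sum_\tau \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\!\left[ \ln Q(s_\tau \mid \pi) - \ln \tilde{P}(o_\tau, s_\tau) \right]
\approx -\sum_\tau \Big( \underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[ \ln \tilde{P}(o_\tau) \right]}_{\text{extrinsic value}}
+ \underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)} D_{\mathrm{KL}}\!\left[ Q(s_\tau \mid o_\tau) \,\|\, Q(s_\tau \mid \pi) \right]}_{\text{intrinsic (epistemic) value}} \Big)

% One natural reading of the free energy of the expected future: the divergence
% between the futures the agent predicts under a policy and the futures it desires.
\mathrm{FEEF}(\pi) = \sum_\tau \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\!\left[ \ln Q(o_\tau, s_\tau \mid \pi) - \ln \tilde{P}(o_\tau, s_\tau) \right]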
journal_name: Neural Comput
journal_title: Neural computation
authors: Millidge B, Tschantz A, Buckley CL
doi: 10.1162/neco_a_01354
subject: Has Abstract
pub_date: 2021-01-05
pages: 1-36
issn: 0899-7667
eissn: 1530-888X
pub_type: Journal article
abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00956
Update date: 2017-06-01
abstract::A mathematical model, of general character for the dynamic description of coupled neural oscillators is presented. The population approach that is employed applies equally to coupled cells as to populations of such coupled cells. The formulation includes stochasticity and preserves details of precisely firing neurons....
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2007.03-07-482
Update date: 2008-05-01
abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.1996.8.6.1135
Update date: 1996-08-15
abstract::A variant of spiking neural P systems with positive or negative weights on synapses is introduced, where the rules of a neuron fire when the potential of that neuron equals a given value. The involved values-weights, firing thresholds, potential consumed by each rule-can be real (computable) numbers, rational numbers,...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00022
Update date: 2010-10-01
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976603762552942
Update date: 2003-02-01
abstract::Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be av...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/0899766054615644
Update date: 2005-10-01
abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...
journal_title: Neural computation
pub_type: Journal article, Review
doi: 10.1162/neco.1997.9.4.701
Update date: 1997-05-15
abstract::The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their beh...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00422
Update date: 2013-04-01
abstract::Synaptically generated subthreshold membrane potential (Vm) fluctuations can be characterized within the framework of stochastic calculus. It is possible to obtain analytic expressions for the steady-state Vm distribution, even in the case of conductance-based synaptic currents. However, as we show here, the analytic ...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/0899766054796932
Update date: 2005-11-01
abstract::Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can g...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco_a_01117
Update date: 2018-10-01
abstract::This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporarily correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where spatial and temporal probab...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976600300015079
Update date: 2000-09-01
abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976606775623342
Update date: 2006-03-01
abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2009.07-08-817
Update date: 2009-08-01
abstract::We generalize recent theoretical work on the minimal number of layers of narrow deep belief networks that can approximate any probability distribution on the states of their visible units arbitrarily well. We relax the setting of binary units (Sutskever & Hinton, 2008 ; Le Roux & Bengio, 2008 , 2010 ; Montúfar & Ay, 2...
journal_title: Neural computation
pub_type: Letter
doi: 10.1162/NECO_a_00601
Update date: 2014-07-01
abstract::Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponential...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00852
Update date: 2016-08-01
abstract::The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. I...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2008.10-07-623
Update date: 2009-02-01
abstract::The successor representation was introduced into reinforcement learning by Dayan ( 1993 ) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity o...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00282
Update date: 2012-06-01
abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976600300015682
Update date: 2000-03-01
abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2009.03-08-721
Update date: 2009-07-01
abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.1996.8.8.1611
Update date: 1996-11-15
abstract::Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this letter is the construction of biologically motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/0899766053019944
Update date: 2005-03-01
abstract::Ramping neuronal activity refers to spiking activity with a rate that increases quasi-linearly over time. It has been observed in multiple cortical areas and is correlated with evidence accumulation processes or timing. In this work, we investigated the downstream effect of ramping neuronal activity through synapses t...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00818
Update date: 2016-04-01
abstract::Inspired by recent studies regarding dendritic computation, we constructed a recurrent neural network model incorporating dendritic lateral inhibition. Our model consists of an input layer and a neuron layer that includes excitatory cells and an inhibitory cell; this inhibitory cell is activated by the pooled activiti...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2007.19.7.1798
Update date: 2007-07-01
abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976600300015439
Update date: 2000-06-01
abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.1995.7.6.1225
Update date: 1995-11-01
abstract::This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/089976602760407973
Update date: 2002-11-01
abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2007.19.9.2557
Update date: 2007-09-01
abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco_a_01165
Update date: 2019-03-01
abstract::Recent experimental findings have shown the presence of robust and cell-type-specific intraburst firing patterns in bursting neurons. We address the problem of characterizing these patterns under the assumption that the bursts exhibit well-defined firing time distributions. We propose a method for estimating these dis...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/neco.2008.07-07-571
Update date: 2009-04-01
abstract::We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow and fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium p...
journal_title: Neural computation
pub_type: Journal article
doi: 10.1162/NECO_a_00322
Update date: 2012-09-01