abstract::The free-energy principle is a candidate unified theory of learning and memory in the brain, predicting that neurons, synapses, and neuromodulators operate so as to minimize free energy. However, electrophysiological data elucidating the neural and synaptic bases of this theory are lacking. Here, we propose a theory bridging this information-theoretical principle with the biological phenomenon of spike-timing-dependent plasticity (STDP) regulated by neuromodulators, which we term mSTDP. We show that integrating an mSTDP equation yields a form of Friston's free energy (an information-theoretical function). We then show, analytically and numerically, that dopamine (DA) and noradrenaline (NA) influence the accuracy of a principal component analysis (PCA) performed by the mSTDP algorithm. From the perspective of free-energy minimization, these neuromodulatory changes alter the relative weighting, or precision, of the accuracy and prior terms, inducing a switch from pattern completion to pattern separation. These results are consistent with electrophysiological findings and support the free-energy principle and mSTDP. Moreover, our scheme can potentially be applied in computational psychiatry to model the faulty neural networks underlying the positive symptoms of schizophrenia, which involve abnormal DA levels, as well as the contribution of NA to memory triage and posttraumatic stress disorder.
journal_name: Neural Comput
journal_title:Neural computation
authors: Isomura T, Sakai K, Kotani K, Jimbo Y
doi:10.1162/NECO_a_00862
subject: Has Abstract
pub_date: 2016-09-01 00:00:00
pages: 1859-88
issue: 9
issn: 0899-7667
eissn: 1530-888X
journal_volume: 28
pub_type: Journal Article
abstract::In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism, which has been shown to be the most likely candidate to reflect the mechanism used in the brain for reproducing the highly irregular cortical neuron firing at high ra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00090
Updated: 2011-03-01 00:00:00
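The partial-reset mechanism mentioned in this abstract is easy to simulate: instead of returning the membrane potential to rest after a spike, the neuron keeps a fraction of the threshold value, which keeps it near threshold and preserves irregularity at high rates. A minimal sketch, with all parameter values illustrative assumptions rather than values from the paper:

```python
import numpy as np

def lif_partial_reset(i_input, dt=0.1, tau=10.0, v_th=1.0, beta=0.9, seed=0):
    """Leaky integrate-and-fire neuron with partial somatic reset.

    After a spike the membrane potential is reset to beta * v_th
    (partial reset) rather than to rest; beta close to 1 leaves the
    neuron near threshold, which is what produces highly irregular
    firing at high rates. Parameters here are illustrative only.
    """
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for t, i_t in enumerate(i_input):
        # leaky integration with a small additive noise term
        v += dt * (-v / tau + i_t) + 0.05 * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes.append(t * dt)
            v = beta * v_th   # partial reset: retain a fraction of threshold
    return spikes

# constant suprathreshold drive
spike_times = lif_partial_reset(np.full(10000, 0.3))
```

Setting `beta=0.0` recovers the usual total reset for comparison.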
abstract::A single-layered Hough transform network is proposed that accepts image coordinates of each object pixel as input and produces a set of outputs that indicate the belongingness of the pixel to a particular structure (e.g., a straight line). The network is able to learn adaptively the parametric forms of the linear segm...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014501
Updated: 2001-03-01 00:00:00
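For reference alongside the adaptive network described here, the classical (non-neural) Hough transform maps each object pixel to a sinusoid in (theta, rho) parameter space and accumulates votes; pixels on a common straight line intersect in one accumulator cell. A minimal sketch, with bin counts and the demo line arbitrary choices:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=None):
    """Classical Hough transform for straight lines.

    Each pixel (x, y) votes for every pair (theta, rho) satisfying
    rho = x*cos(theta) + y*sin(theta); peaks in the accumulator
    correspond to lines supported by many pixels.
    """
    pts = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.abs(pts).sum(axis=1).max() + 1e-9
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)       # one rho per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas

# ten collinear points on the line y = x produce a single strong peak
pts = [(i, i) for i in range(10)]
acc, thetas = hough_lines(pts)
peak_theta = thetas[np.unravel_index(acc.argmax(), acc.shape)[0]]
```

The line y = x has normal angle 3π/4, so the accumulator peak lands there.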
abstract::We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016737
Updated: 1999-02-15 00:00:00
abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014583
Updated: 2001-02-01 00:00:00
abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00385
Updated: 2013-01-01 00:00:00
abstract::We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors: Malsburg's classical binding problem. We begin by deriving an analytical model that make...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015574
Updated: 2000-04-01 00:00:00
abstract::In this work, we propose a two-layered descriptive model for motion processing from retina to the cortex, with an event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps in a simple...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01191
Updated: 2019-06-01 00:00:00
abstract::A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602317250870
Updated: 2002-03-01 00:00:00
abstract::Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponential...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00852
Updated: 2016-08-01 00:00:00
abstract::In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding o...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016881
Updated: 1999-01-01 00:00:00
abstract::Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can g...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01117
Updated: 2018-10-01 00:00:00
abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00012
Updated: 2010-09-01 00:00:00
abstract::Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disru...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01343
Updated: 2021-01-01 00:00:00
abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00934
Updated: 2017-03-01 00:00:00
abstract::A stochastic model of spike-timing-dependent plasticity (STDP) postulates that single synapses presented with a single spike pair exhibit all-or-none quantal jumps in synaptic strength. The amplitudes of the jumps are independent of spiking timing, but their probabilities do depend on spiking timing. By making the amp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-814
Updated: 2010-01-01 00:00:00
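The all-or-none scheme this abstract describes can be sketched directly: each pre/post spike pair either triggers a fixed-amplitude quantal jump or leaves the synapse unchanged, and only the jump probability depends on the timing difference. The exponential window shape and all constants below are illustrative assumptions:

```python
import numpy as np

def stochastic_stdp_update(w, dt_spike, jump=0.1, tau=20.0,
                           p_max=0.8, rng=None):
    """All-or-none stochastic STDP update for one pre/post spike pair.

    The synapse either jumps by a fixed quantal amount or stays put;
    only the *probability* of jumping depends on the spike-timing
    difference dt_spike = t_post - t_pre (exponential window).
    Amplitudes and time constants are illustrative, not the paper's.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    p = p_max * np.exp(-abs(dt_spike) / tau)   # timing sets probability only
    if rng.random() < p:
        w += jump if dt_spike > 0 else -jump   # fixed-size quantal jump
    return w

# averaged over many pairings, the mean drift traces the usual STDP curve
rng = np.random.default_rng(1)
drift = np.mean([stochastic_stdp_update(0.0, 10.0, rng=rng) for _ in range(5000)])
```

Here the expected drift is jump * p_max * exp(-10/20), roughly 0.049, even though no single update ever takes an intermediate value.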
abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766053723096
Updated: 2005-07-01 00:00:00
abstract::When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00757
Updated: 2015-08-01 00:00:00
abstract::To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1997.9.5.971
Updated: 1997-07-01 00:00:00
abstract::We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing stra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01060
Updated: 2018-04-01 00:00:00
abstract::In this letter, we investigate the fundamental limits on how the interspike time of a neuron oscillator can be perturbed by the application of a bounded external control input (a current stimulus) with zero net electric charge accumulation. We use phase models to study the dynamics of neurons and derive charge-balance...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00643
Updated: 2014-10-01 00:00:00
abstract::Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invar...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603322297331
Updated: 2003-09-01 00:00:00
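The slowness principle can be demonstrated in a few lines: center and whiten the signal, then keep the whitened direction whose discrete temporal derivative has minimal variance. This is only the linear special case of SFA (the nonlinear expansion step of the full algorithm is omitted), and the test signal is an arbitrary choice:

```python
import numpy as np

def linear_sfa(x, n_out=1):
    """Linear slow feature analysis.

    Finds unit-variance projections of the centered, whitened input
    whose temporal derivative has minimal variance, i.e. the slowest
    varying directions of the signal.
    """
    x = x - x.mean(axis=0)
    # whiten via eigendecomposition of the covariance
    cov = x.T @ x / len(x)
    d, e = np.linalg.eigh(cov)
    whitener = e / np.sqrt(d)          # column j scaled by 1/sqrt(d_j)
    z = x @ whitener
    dz = np.diff(z, axis=0)            # discrete temporal derivative
    d2, e2 = np.linalg.eigh(dz.T @ dz / len(dz))
    # smallest eigenvalues of the derivative covariance = slowest features
    return z @ e2[:, :n_out]

# a slow sine mixed into two fast channels is recovered as the slowest feature
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
x = np.stack([slow + 0.5 * fast, slow - 0.5 * fast], axis=1)
y = linear_sfa(x, n_out=1)[:, 0]
corr = abs(np.corrcoef(y, slow)[0, 1])
```

Because `eigh` returns eigenvalues in ascending order, the first columns of `e2` are the slowest directions.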
abstract::In this letter, we develop a gaussian process model for clustering. The variances of predictive values in gaussian processes learned from a training data are shown to comprise an estimate of the support of a probability density function. The constructed variance function is then applied to construct a set of contours ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.11.3088
Updated: 2007-11-01 00:00:00
abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.2.226
Updated: 1991-07-01 00:00:00
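The procedure analyzed in this abstract is the per-example delta rule on a linear network: with a sufficiently small learning rate and repeated presentation of the finite example set, the weights approach the least-squares solution. A minimal sketch with illustrative hyperparameters:

```python
import numpy as np

def lms_train(x, y, lr=0.01, epochs=200):
    """Train a linear map y ~ W x with the LMS (delta) rule.

    Repeatedly presents the finite example set, applying the
    per-example gradient step W += lr * (y - W x) x^T, which
    descends the summed squared error.
    """
    w = np.zeros((y.shape[1], x.shape[1]))
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            err = yi - w @ xi
            w += lr * np.outer(err, xi)    # LMS / delta rule
    return w

# recover a known 2x2 matrix from noiseless examples
rng = np.random.default_rng(0)
w_true = np.array([[1.0, -2.0], [0.5, 3.0]])
x = rng.standard_normal((100, 2))
y = x @ w_true.T
w_hat = lms_train(x, y)
```

On this noiseless, consistent system the iteration contracts geometrically, so `w_hat` matches `w_true` to high precision.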
abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017061
Updated: 1998-10-01 00:00:00
abstract::Independent component analysis (ICA) finds a linear transformation to variables that are maximally statistically independent. We examine ICA and algorithms for finding the best transformation from the point of view of maximizing the likelihood of the data. In particular, we discuss the way in which scaling of the unmi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016043
Updated: 1999-11-15 00:00:00
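For super-gaussian sources, the maximum-likelihood view of ICA leads to the well-known natural-gradient update W ← W + η(I − tanh(y)yᵀ/n)W. A minimal sketch; the tanh score function, learning rate, epoch count, and mixing matrix are illustrative assumptions:

```python
import numpy as np

def ica_natural_gradient(x, lr=0.1, epochs=300, seed=0):
    """Maximum-likelihood ICA via the natural-gradient update.

    Assumes super-gaussian sources, for which tanh is a suitable
    score function; the unmixing matrix W follows
        W <- W + lr * (I - tanh(y) y^T / n) W,   y = W x.
    """
    rng = np.random.default_rng(seed)
    x = x - x.mean(axis=1, keepdims=True)
    n = x.shape[1]
    w = np.eye(x.shape[0]) + 0.1 * rng.standard_normal((x.shape[0], x.shape[0]))
    for _ in range(epochs):
        y = w @ x
        w += lr * (np.eye(len(w)) - np.tanh(y) @ y.T / n) @ w
    return w

# unmix two super-gaussian (Laplacian) sources
rng = np.random.default_rng(1)
s = rng.laplace(size=(2, 5000))
a = np.array([[1.0, 0.6], [0.4, 1.0]])
w = ica_natural_gradient(a @ s)
y = w @ (a @ s)
# each recovered component should correlate strongly with one source
c = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
```

Recovery is only up to permutation and scaling, which is why the check uses absolute correlations rather than comparing W to the inverse of the mixing matrix.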
abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01275
Updated: 2020-05-01 00:00:00
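The baseline algorithm this letter builds on is short enough to state in full. A scalar sketch of the predict/update recursion (the model parameters in the demo are arbitrary):

```python
import numpy as np

def kalman_filter(ys, a, c, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the linear-gaussian state-space model

        x_t = a * x_{t-1} + process noise  (variance q)
        y_t = c * x_t     + meas. noise    (variance r)

    Returns filtered posterior means and variances of p(x_t | y_1..t).
    """
    means, vars_ = [], []
    m, p = m0, p0
    for y in ys:
        # predict one step ahead
        m_pred = a * m
        p_pred = a * a * p + q
        # update with the new measurement
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
        vars_.append(p)
    return np.array(means), np.array(vars_)

# noisy observations of a random-walk state
rng = np.random.default_rng(0)
x = np.cumsum(0.1 * rng.standard_normal(500))    # state: a=1, q=0.01
ys = x + 0.5 * rng.standard_normal(500)          # measurements: c=1, r=0.25
m, v = kalman_filter(ys, a=1.0, c=1.0, q=0.01, r=0.25)
```

The extended and unscented variants the abstract mentions replace the linear predict/update maps with linearizations or sigma-point approximations, but keep this same two-step recursion.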
abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.6.1135
Updated: 1996-08-15 00:00:00
abstract::We present a model of visual computation based on tightly interconnected cliques of pyramidal cells. It leads to a formal theory of cell assemblies, a specific relationship between correlated firing patterns and abstract functionality, and a direct calculation relating estimates of cortical cell counts to orientation...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/089976699300016782
Updated: 1999-01-01 00:00:00
abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.02-09-957
Updated: 2010-05-01 00:00:00
abstract::Synaptic runaway denotes the formation of erroneous synapses and premature functional decline accompanying activity-dependent learning in neural networks. This work studies synaptic runaway both analytically and numerically in binary-firing associative memory networks. It turns out that synaptic runaway is of fairly m...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/089976698300017836
Updated: 1998-02-15 00:00:00