A sensorimotor map: modulating lateral interactions for anticipation and planning.

Abstract:

Experimental studies of reasoning and planned behavior provide evidence that nervous systems use internal models for predictive motor control, imagery, inference, and planning. Classical (model-free) reinforcement learning approaches omit such a model, and standard sensorimotor models account for the forward and backward functions of sensorimotor dependencies but do not provide a neural representation on which planning can be realized. We propose a sensorimotor map to represent such an internal model. The map learns a state representation similarly to self-organizing maps but is inherently coupled to sensor and motor signals. Motor activations modulate the lateral connection strengths and thereby induce anticipatory shifts of the activity peak on the sensorimotor map. This mechanism encodes a model of how stimuli change depending on the current motor activities. The activation dynamics on the map are derived from neural field models. An additional dynamic process on the sensorimotor map, derived from dynamic programming, realizes planning and emits the corresponding goal-directed motor sequences, for instance, to navigate through a maze.
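The planning process described in the abstract is derived from dynamic programming. As a rough, hypothetical illustration of that underlying principle only (not the neural-field implementation proposed in the paper), value iteration on a discrete grid maze computes a value function whose greedy ascent yields a goal-directed action sequence; all names and parameters below are invented for this sketch:

```python
# Illustrative value iteration on a small grid maze. This is a generic
# dynamic-programming sketch, not the paper's sensorimotor-map dynamics.
GAMMA = 0.9  # discount factor (assumed value for illustration)

def value_iteration(walls, goal, w, h, iters=100):
    """Compute a value function over the free cells of a w-by-h grid.
    The goal cell carries reward 1; other cells take the discounted
    maximum over their free neighbors (in-place sweeps)."""
    V = {(x, y): 0.0 for x in range(w) for y in range(h) if (x, y) not in walls}
    for _ in range(iters):
        for s in V:
            if s == goal:
                V[s] = 1.0  # reward at the goal state
                continue
            x, y = s
            neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            vals = [V[n] for n in neighbors if n in V]
            V[s] = GAMMA * max(vals, default=0.0)
    return V

def greedy_path(V, start, goal):
    """Follow the value gradient from start to goal; walls and off-grid
    cells are treated as value -1 so they are never selected."""
    path, s = [start], start
    while s != goal:
        x, y = s
        s = max(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)),
                key=lambda n: V.get(n, -1.0))
        path.append(s)
    return path
```

On a connected maze the converged values decrease strictly with distance from the goal, so the greedy step always moves closer and the emitted sequence of cells corresponds to a goal-directed motor sequence.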

journal_name: Neural Comput
journal_title: Neural computation
authors: Toussaint M
doi: 10.1162/089976606776240995
pub_date: 2006-05-01 00:00:00
pages: 1132-55
issue: 5
journal_volume: 18
issn: 0899-7667
eissn: 1530-888X
pub_type: Journal Article

Related articles:
  • Inhibition and Excitation Shape Activity Selection: Effect of Oscillations in a Decision-Making Circuit.

    abstract::Decision making is a complex task, and its underlying mechanisms that regulate behavior, such as the implementation of the coupling between physiological states and neural networks, are hard to decipher. To gain more insight into neural computations underlying ongoing binary decision-making tasks, we consider a neural...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01185

    authors: Bose T,Reina A,Marshall JAR

    update_date: 2019-05-01 00:00:00

  • Pattern generation by two coupled time-discrete neural networks with synaptic depression.

    abstract::Numerous animal behaviors, such as locomotion in vertebrates, are produced by rhythmic contractions that alternate between two muscle groups. The neuronal networks generating such alternate rhythmic activity are generally thought to rely on pacemaker cells or well-designed circuits consisting of inhibitory and excitat...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017449

    authors: Senn W,Wannier T,Kleinle J,Lüscher HR,Müller L,Streit J,Wyler K

    update_date: 1998-07-01 00:00:00

  • Synchrony of neuronal oscillations controlled by GABAergic reversal potentials.

    abstract::GABAergic synapse reversal potential is controlled by the concentration of chloride. This concentration can change significantly during development and as a function of neuronal activity. Thus, GABA inhibition can be hyperpolarizing, shunting, or partially depolarizing. Previous results pinpointed the conditions under...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.3.706

    authors: Jeong HY,Gutkin B

    update_date: 2007-03-01 00:00:00

  • Representation sharpening can explain perceptual priming.

    abstract::Perceiving and identifying an object is improved by prior exposure to the object. This perceptual priming phenomenon is accompanied by reduced neural activity. But whether suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. To address this problem, we developed a ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.04-09-999

    authors: Moldakarimov S,Bazhenov M,Sejnowski TJ

    update_date: 2010-05-01 00:00:00

  • Reinforcement learning in continuous time and space.

    abstract::This article presents a reinforcement learning framework for continuous-time dynamical systems without a priori discretization of time, state, and action. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, we derive algorithms for estimating value functions and improv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015961

    authors: Doya K

    update_date: 2000-01-01 00:00:00

  • A Unifying Framework of Synaptic and Intrinsic Plasticity in Neural Populations.

    abstract::A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input sig...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01057

    authors: Leugering J,Pipa G

    update_date: 2018-04-01 00:00:00

  • Computing with self-excitatory cliques: A model and an application to hyperacuity-scale computation in visual cortex.

    abstract::We present a model of visual computation based on tightly inter-connected cliques of pyramidal cells. It leads to a formal theory of cell assemblies, a specific relationship between correlated firing patterns and abstract functionality, and a direct calculation relating estimates of cortical cell counts to orientation...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/089976699300016782

    authors: Miller DA,Zucker SW

    update_date: 1999-01-01 00:00:00

  • Statistical procedures for spatiotemporal neuronal data with applications to optical recording of the auditory cortex.

    abstract::This article presents new procedures for multisite spatiotemporal neuronal data analysis. A new statistical model - the diffusion model - is considered, whose parameters can be estimated from experimental data thanks to mean-field approximations. This work has been applied to optical recording of the guinea pig's audi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015150

    authors: François O,Abdallahi LM,Horikawa J,Taniguchi I,Hervé T

    update_date: 2000-08-01 00:00:00

  • Universal approximation depth and errors of narrow belief networks with discrete units.

    abstract::We generalize recent theoretical work on the minimal number of layers of narrow deep belief networks that can approximate any probability distribution on the states of their visible units arbitrarily well. We relax the setting of binary units (Sutskever & Hinton, 2008 ; Le Roux & Bengio, 2008 , 2010 ; Montúfar & Ay, 2...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00601

    authors: Montúfar GF

    update_date: 2014-07-01 00:00:00

  • Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    abstract::Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041732413

    authors: Jackson BS

    update_date: 2004-10-01 00:00:00

  • Attractive periodic sets in discrete-time recurrent networks (with emphasis on fixed-point stability and bifurcations in two-neuron networks).

    abstract::We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state-space into distinct regions corresponding to stability types of the fixed points. Unlike in ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660152002898

    authors: Tino P,Horne BG,Giles CL

    update_date: 2001-06-01 00:00:00

  • Making the error-controlling algorithm of observable operator models constructive.

    abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.10-08-878

    authors: Zhao MJ,Jaeger H,Thon M

    update_date: 2009-12-01 00:00:00

  • A Novel Reconstruction Framework for Time-Encoded Signals with Integrate-and-Fire Neurons.

    abstract::Integrate-and-fire neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00764

    authors: Florescu D,Coca D

    update_date: 2015-09-01 00:00:00

  • Whence the Expected Free Energy?

    abstract::The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01354

    authors: Millidge B,Tschantz A,Buckley CL

    update_date: 2021-01-05 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies.

    abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.9.2146

    authors: Rudolph M,Destexhe A

    update_date: 2006-09-01 00:00:00

  • NMDA Receptor Alterations After Mild Traumatic Brain Injury Induce Deficits in Memory Acquisition and Recall.

    abstract::Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disru...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01343

    authors: Gabrieli D,Schumm SN,Vigilante NF,Meaney DF

    update_date: 2021-01-01 00:00:00

  • When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.

    abstract::A significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbati...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01209

    authors: Miller D,Wang Y,Kesidis G

    update_date: 2019-08-01 00:00:00

  • Clustering based on gaussian processes.

    abstract::In this letter, we develop a gaussian process model for clustering. The variances of predictive values in gaussian processes learned from a training data are shown to comprise an estimate of the support of a probability density function. The constructed variance function is then applied to construct a set of contours ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.11.3088

    authors: Kim HC,Lee J

    update_date: 2007-11-01 00:00:00

  • Effects of fast presynaptic noise in attractor neural networks.

    abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606775623342

    authors: Cortes JM,Torres JJ,Marro J,Garrido PL,Kappen HJ

    update_date: 2006-03-01 00:00:00

  • Neural integrator: a sandpile model.

    abstract::We investigated a model for the neural integrator based on hysteretic units connected by positive feedback. Hysteresis is assumed to emerge from the intrinsic properties of the cells. We consider the recurrent networks containing either bistable or multistable neurons. We apply our analysis to the oculomotor velocity-...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.12-06-416

    authors: Nikitchenko M,Koulakov A

    update_date: 2008-10-01 00:00:00

  • Random embedding machines for pattern recognition.

    abstract::Real classification problems involve structured data that can be essentially grouped into a relatively small number of clusters. It is shown that, under a local clustering condition, a set of points of a given class, embedded in binary space by a set of randomly parameterized surfaces, is linearly separable from other...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601753196012

    authors: Baram Y

    update_date: 2001-11-01 00:00:00

  • Information recall using relative spike timing in a spiking neural network.

    abstract::We present a neural network that is capable of completing and correcting a spiking pattern given only a partial, noisy version. It operates in continuous time and represents information using the relative timing of individual spikes. The network is capable of correcting and recalling multiple patterns simultaneously. ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00306

    authors: Sterne P

    update_date: 2012-08-01 00:00:00

  • Methods for combining experts' probability assessments.

    abstract::This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distrib...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/neco.1995.7.5.867

    authors: Jacobs RA

    update_date: 1995-09-01 00:00:00

  • Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.

    abstract::We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01329

    authors: Kent SJ,Frady EP,Sommer FT,Olshausen BA

    update_date: 2020-12-01 00:00:00

  • Hidden Quantum Processes, Quantum Ion Channels, and 1/ fθ-Type Noise.

    abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_01067

    authors: Paris A,Vosoughi A,Berman SA,Atia G

    update_date: 2018-07-01 00:00:00

  • Are loss functions all the same?

    abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773135104

    authors: Rosasco L,De Vito E,Caponnetto A,Piana M,Verri A

    update_date: 2004-05-01 00:00:00

  • Fast population coding.

    abstract::Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the ...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2007.19.2.404

    authors: Huys QJ,Zemel RS,Natarajan R,Dayan P

    update_date: 2007-02-01 00:00:00

  • A theory of slow feature analysis for transformation-based input signals with an application to complex cells.

    abstract::We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00072

    authors: Sprekeler H,Wiskott L

    update_date: 2011-02-01 00:00:00

  • The number of synaptic inputs and the synchrony of large, sparse neuronal networks.

    abstract::The prevalence of coherent oscillations in various frequency ranges in the central nervous system raises the question of the mechanisms that synchronize large populations of neurons. We study synchronization in models of large networks of spiking neurons with random sparse connectivity. Synchrony occurs only when the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015529

    authors: Golomb D,Hansel D

    update_date: 2000-05-01 00:00:00

  • Spike train decoding without spike sorting.

    abstract::We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used wit...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.02-07-478

    authors: Ventura V

    update_date: 2008-04-01 00:00:00