Maintaining Consistency of Spatial Information in the Hippocampal Network: A Combinatorial Geometry Model.

Abstract:

Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally during both sleep and awake states, reactivating the place cells that fired during its exploration in the same sequence in which they were originally activated. Although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. We propose a model that can account for this phenomenon and suggest that a spatially consistent replay requires a number of constraints on the hippocampal network that affect its synaptic architecture and the statistics of synaptic connection strengths.
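
The abstract describes place cells that fire in discrete regions of the environment and jointly encode a map through their co-firing. As a purely illustrative sketch (not the model proposed in the paper), the Python snippet below simulates place cells with circular place fields, moves a simulated animal along a random trajectory, and records which groups of cells fire together; in combinatorial and topological approaches to hippocampal coding, such coactivity groups serve as the building blocks of a combinatorial representation of the environment. All parameters (field centers, field radius, trajectory statistics) are invented for the example.

```python
# Illustrative toy only: place cells with circular place fields on a unit-square
# arena, a random-walk trajectory, and the resulting co-firing groups.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n_cells = 40
centers = rng.uniform(0.0, 1.0, size=(n_cells, 2))  # place-field centers (made up)
radius = 0.15                                        # place-field radius (made up)

# Random-walk trajectory of the simulated animal, clipped to the arena.
steps = rng.normal(scale=0.02, size=(2000, 2))
trajectory = np.clip(np.cumsum(steps, axis=0) + 0.5, 0.0, 1.0)

# At each time step, the cells whose place fields contain the current position
# are treated as coactive; each distinct coactive group is recorded once.
coactive_groups = set()
for pos in trajectory:
    active = np.flatnonzero(np.linalg.norm(centers - pos, axis=1) < radius)
    if active.size >= 2:
        coactive_groups.add(tuple(int(i) for i in active))

# Pairwise co-firing defines a coactivity graph; larger groups would correspond
# to higher-dimensional simplices of a coactivity complex.
edges = {pair for group in coactive_groups for pair in combinations(group, 2)}
print(f"{len(coactive_groups)} distinct coactive groups, {len(edges)} co-firing pairs")
```

The point of the sketch is only that the pattern of co-firing, not the firing of any single cell, carries the spatial layout; the paper's question is how that pattern can stay self-consistent when the sequences are replayed without sensory input.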

journal_name: Neural Comput

journal_title: Neural computation

authors: Dabaghian Y

doi: 10.1162/NECO_a_00840

subject: Has Abstract

pub_date: 2016-06-01 00:00:00

pages: 1051-71

issue: 6

issn: 0899-7667

eissn: 1530-888X

journal_volume: 28

pub_type: Letter
  • Solving stereo transparency with an extended coarse-to-fine disparity energy model.

    abstract::Modeling stereo transparency with physiologically plausible mechanisms is challenging because in such frameworks, large receptive fields mix up overlapping disparities, whereas small receptive fields can reliably compute only small disparities. It seems necessary to combine information across scales. A coarse-to-fine ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00722

    authors: Li Z, Qian N

    update_date: 2015-05-01 00:00:00

  • Spiking neural P systems with a generalized use of rules.

    abstract::Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by spiking neurons, where the spiking rules are usually used in a sequential way (an applicable rule is applied one time at a step) or an exhaustive way (an applicable rule is applied as many times as possible at a s...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00665

    authors: Zhang X, Wang B, Pan L

    update_date: 2014-12-01 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies.

    abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.2006.18.9.2146

    authors: Rudolph M, Destexhe A

    update_date: 2006-09-01 00:00:00

  • Local and global gating of synaptic plasticity.

    abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976600300015682

    authors: Sánchez-Montañés MA, Verschure PF, König P

    update_date: 2000-03-01 00:00:00

  • Synchrony in heterogeneous networks of spiking neurons.

    abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976600300015286

    authors: Neltner L, Hansel D, Mato G, Meunier C

    update_date: 2000-07-01 00:00:00

  • The Deterministic Information Bottleneck.

    abstract::Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek ( 1999 ) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwin...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00961

    authors: Strouse DJ, Schwab DJ

    update_date: 2017-06-01 00:00:00

  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.2009.12-08-922

    authors: Morimura T, Uchibe E, Yoshimoto J, Peters J, Doya K

    update_date: 2010-02-01 00:00:00

  • Parameter learning for alpha integration.

    abstract::In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g.,...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00445

    authors: Choi H, Choi S, Choe Y

    update_date: 2013-06-01 00:00:00

  • Bayesian framework for least-squares support vector machine classifiers, gaussian processes, and kernel Fisher discriminant analysis.

    abstract::The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for class...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976602753633411

    authors: Van Gestel T, Suykens JA, Lanckriet G, Lambrechts A, De Moor B, Vandewalle J

    update_date: 2002-05-01 00:00:00

  • On the relation of slow feature analysis and Laplacian eigenmaps.

    abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00214

    authors: Sprekeler H

    update_date: 2011-12-01 00:00:00

  • Learning object representations using a priori constraints within ORASSYLL.

    abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976601300014583

    authors: Krüger N

    update_date: 2001-02-01 00:00:00

  • Are loss functions all the same?

    abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976604773135104

    authors: Rosasco L, De Vito E, Caponnetto A, Piana M, Verri A

    update_date: 2004-05-01 00:00:00

  • Similarity, connectionism, and the problem of representation in vision.

    abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...

    journal_title: Neural computation

    pub_type: Journal Article, Review

    doi: 10.1162/neco.1997.9.4.701

    authors: Edelman S, Duvdevani-Bar S

    update_date: 1997-05-15 00:00:00

  • Energy-Efficient Neuromorphic Classifiers.

    abstract::Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are ext...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00882

    authors: Martí D, Rigotti M, Seok M, Fusi S

    update_date: 2016-10-01 00:00:00

  • Neural associative memory with optimal Bayesian learning.

    abstract::Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00127

    authors: Knoblauch A

    update_date: 2011-06-01 00:00:00

  • Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell.

    abstract::To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.1997.9.5.971

    authors: Troyer TW, Miller KD

    update_date: 1997-07-01 00:00:00

  • Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks.

    abstract::Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that, for anal...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.1991.3.4.546

    authors: Jabri M, Flower B

    update_date: 1991-01-01 00:00:00

  • When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.

    abstract::A significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbati...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco_a_01209

    authors: Miller D, Wang Y, Kesidis G

    update_date: 2019-08-01 00:00:00

  • A theory of slow feature analysis for transformation-based input signals with an application to complex cells.

    abstract::We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00072

    authors: Sprekeler H, Wiskott L

    update_date: 2011-02-01 00:00:00

  • Information recall using relative spike timing in a spiking neural network.

    abstract::We present a neural network that is capable of completing and correcting a spiking pattern given only a partial, noisy version. It operates in continuous time and represents information using the relative timing of individual spikes. The network is capable of correcting and recalling multiple patterns simultaneously. ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00306

    authors: Sterne P

    update_date: 2012-08-01 00:00:00

  • Dependence of neuronal correlations on filter characteristics and marginal spike train statistics.

    abstract::Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.2008.05-07-525

    authors: Tetzlaff T, Rotter S, Stark E, Abeles M, Aertsen A, Diesmann M

    update_date: 2008-09-01 00:00:00

  • Learning only when necessary: better memories of correlated patterns in networks with bounded synapses.

    abstract::Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be av...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/0899766054615644

    authors: Senn W, Fusi S

    update_date: 2005-10-01 00:00:00

  • Generalization and selection of examples in feedforward neural networks.

    abstract::In this work, we study how the selection of examples affects the learning procedure in a boolean neural network and its relationship with the complexity of the function under study and its architecture. We analyze the generalization capacity for different target functions with particular architectures through an analy...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976600300014999

    authors: Franco L, Cannas SA

    update_date: 2000-10-01 00:00:00

  • Direct estimation of inhomogeneous Markov interval models of spike trains.

    abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.2009.07-08-828

    authors: Wójcik DK, Mochol G, Jakuczun W, Wypych M, Waleszczyk WJ

    update_date: 2009-08-01 00:00:00

  • Neutral stability, rate propagation, and critical branching in feedforward networks.

    abstract::Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarify...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00461

    authors: Cayco-Gajic NA, Shea-Brown E

    update_date: 2013-07-01 00:00:00

  • Supervised learning in a recurrent network of rate-model neurons exhibiting frequency adaptation.

    abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/0899766054323017

    authors: Fortier PA, Guigon E, Burnod Y

    update_date: 2005-09-01 00:00:00

  • Effects of fast presynaptic noise in attractor neural networks.

    abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976606775623342

    authors: Cortes JM, Torres JJ, Marro J, Garrido PL, Kappen HJ

    update_date: 2006-03-01 00:00:00

  • Irregular firing of isolated cortical interneurons in vitro driven by intrinsic stochastic mechanisms.

    abstract::Pharmacologically isolated GABAergic irregular spiking and stuttering interneurons in the mouse visual cortex display highly irregular spike times, with high coefficients of variation approximately 0.9-3, in response to a depolarizing, constant current input. This is in marked contrast to cortical pyramidal cells, whi...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/neco.2008.20.1.44

    authors: Englitz B, Stiefel KM, Sejnowski TJ

    update_date: 2008-01-01 00:00:00

  • Bayesian active learning of neural firing rate maps with transformed gaussian process priors.

    abstract::A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology ex...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/NECO_a_00615

    authors: Park M, Weller JP, Horwitz GD, Pillow JW

    update_date: 2014-08-01 00:00:00

  • The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex.

    abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...

    journal_title: Neural computation

    pub_type: Journal Article

    doi: 10.1162/089976699300016511

    authors: Shinomoto S, Sakai Y, Funahashi S

    update_date: 1999-05-15 00:00:00