abstract::Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally during both sleep and awake states, reactivating the place cells that fired during its exploration in the same sequence in which they were originally activated. Although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. We propose a model that can account for this phenomenon and suggest that a spatially consistent replay requires a number of constraints on the hippocampal network that affect its synaptic architecture and the statistics of synaptic connection strengths.
journal_name:Neural Comput
journal_title:Neural computation
authors:Dabaghian Y
doi:10.1162/NECO_a_00840
subject:Has Abstract
pub_date:2016-06-01 00:00:00
pages:1051-71
issue:6
issn:0899-7667
eissn:1530-888X
journal_volume:28
pub_type: Letter
abstract::Modeling stereo transparency with physiologically plausible mechanisms is challenging because in such frameworks, large receptive fields mix up overlapping disparities, whereas small receptive fields can reliably compute only small disparities. It seems necessary to combine information across scales. A coarse-to-fine ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00722
update_date:2015-05-01 00:00:00
abstract::Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by spiking neurons, where the spiking rules are usually used in a sequential way (an applicable rule is applied one time at a step) or an exhaustive way (an applicable rule is applied as many times as possible at a s...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00665
update_date:2014-12-01 00:00:00
abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.9.2146
update_date:2006-09-01 00:00:00
abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015682
update_date:2000-03-01 00:00:00
abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015286
update_date:2000-07-01 00:00:00
abstract::Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek ( 1999 ) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwin...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00961
update_date:2017-06-01 00:00:00
abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.12-08-922
update_date:2010-02-01 00:00:00
abstract::In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g.,...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00445
update_date:2013-06-01 00:00:00
abstract::The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for class...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602753633411
update_date:2002-05-01 00:00:00
abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00214
update_date:2011-12-01 00:00:00
abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014583
update_date:2001-02-01 00:00:00
abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773135104
update_date:2004-05-01 00:00:00
abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1997.9.4.701
update_date:1997-05-15 00:00:00
abstract::Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are ext...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00882
update_date:2016-10-01 00:00:00
abstract::Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00127
update_date:2011-06-01 00:00:00
abstract::To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1997.9.5.971
update_date:1997-07-01 00:00:00
abstract::Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that, for anal...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.4.546
update_date:1991-01-01 00:00:00
abstract::A significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbati...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01209
update_date:2019-08-01 00:00:00
abstract::We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00072
update_date:2011-02-01 00:00:00
abstract::We present a neural network that is capable of completing and correcting a spiking pattern given only a partial, noisy version. It operates in continuous time and represents information using the relative timing of individual spikes. The network is capable of correcting and recalling multiple patterns simultaneously. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00306
update_date:2012-08-01 00:00:00
abstract::Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.05-07-525
update_date:2008-09-01 00:00:00
abstract::Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be av...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054615644
update_date:2005-10-01 00:00:00
abstract::In this work, we study how the selection of examples affects the learning procedure in a boolean neural network and its relationship with the complexity of the function under study and its architecture. We analyze the generalization capacity for different target functions with particular architectures through an analy...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014999
update_date:2000-10-01 00:00:00
abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-828
update_date:2009-08-01 00:00:00
abstract::Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarify...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00461
update_date:2013-07-01 00:00:00
abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054323017
update_date:2005-09-01 00:00:00
abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606775623342
update_date:2006-03-01 00:00:00
abstract::Pharmacologically isolated GABAergic irregular spiking and stuttering interneurons in the mouse visual cortex display highly irregular spike times, with high coefficients of variation approximately 0.9-3, in response to a depolarizing, constant current input. This is in marked contrast to cortical pyramidal cells, whi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.20.1.44
update_date:2008-01-01 00:00:00
abstract::A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology ex...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00615
update_date:2014-08-01 00:00:00
abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016511
update_date:1999-05-15 00:00:00