Rapid processing and unsupervised learning in a model of the cortical macrocolumn.

Abstract:

We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated by neuroanatomical and neurophysiological findings, the utilized dynamics is based on a simple model of a spiking neuron with refractory period, fixed random excitatory interconnection within minicolumns, and instantaneous inhibition within one macrocolumn. A stability analysis of the system's dynamical equations shows that minicolumns can act as monolithic functional units for purposes of critical, fast decisions and learning. Oscillating inhibition (in the gamma frequency range) leads to a phase-coupled population rate code and high sensitivity to small imbalances in minicolumn inputs. Minicolumns are shown to be able to organize their collective inputs, without supervision, by Hebbian plasticity into selective receptive field shapes, thereby becoming classifiers for input patterns. Using the bars test, we critically compare our system's performance with that of others and demonstrate its ability for distributed neural coding.
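
To make the model description above concrete, here is a minimal, illustrative sketch (not the authors' implementation) of two elements the abstract mentions: bars-test input patterns and winner-take-all-style competition between minicolumns under shared inhibition, with Hebbian learning of the winner's afferent weights. All names and parameter values are invented for illustration; the paper's spiking, oscillating-inhibition dynamics is far richer than this rate-based toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def bars_pattern(size=4, p=0.2):
    """Standard bars test: each of the 2*size bars appears independently with probability p."""
    img = np.zeros((size, size))
    for i in range(size):
        if rng.random() < p:
            img[i, :] = 1.0          # horizontal bar
        if rng.random() < p:
            img[:, i] = 1.0          # vertical bar
    return img.ravel()

# Toy "macrocolumn": k minicolumns, each summarized here by a single weight vector.
size, k, eta = 4, 8, 0.05
W = rng.random((k, size * size)) + 0.5       # random initial afferent weights

for _ in range(5000):
    x = bars_pattern(size)
    if not x.any():
        continue
    h = W @ x                                 # summed excitation per minicolumn
    winner = np.argmax(h)                     # shared inhibition: only the most active
                                              # minicolumn keeps firing in this toy
    W[winner] += eta * h[winner] * x          # Hebbian update of the winner's afferents
    W[winner] /= np.linalg.norm(W[winner])    # keep weights bounded
```

With at least one minicolumn per bar, this kind of competitive Hebbian learning typically leaves each row of W selective for a single bar, which is the sort of receptive-field organization and bars-test behaviour the abstract refers to.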

journal_name: Neural Comput

journal_title: Neural computation

authors: Lücke J,von der Malsburg C

doi: 10.1162/089976604772744893

pub_date: 2004-03-01 00:00:00

pages: 501-533

issue: 3

eissn: 1530-888X

issn: 0899-7667

journal_volume: 16

pub_type: Journal Article
  • Modeling short-term synaptic depression in silicon.

    abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603762552942

    authors: Boegerhausen M,Suter P,Liu SC

    update date: 2003-02-01 00:00:00

  • Regularized neural networks: some convergence rate results.

    abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1995.7.6.1225

    authors: Corradi V,White H

    update date: 1995-11-01 00:00:00

  • Fast recursive filters for simulating nonlinear dynamic systems. (An illustrative sketch follows this entry.)

    abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.04-07-506

    authors: van Hateren JH

    update date: 2008-07-01 00:00:00
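
As a rough illustration of the two building blocks named in the entry above (first-order low-pass filters and static nonlinearities), the sketch below chains a recursive discrete-time low-pass update with a tanh nonlinearity. The recursion and all parameter values are generic assumptions, not the paper's exact scheme.

```python
import numpy as np

def lowpass(x, tau, dt):
    """First-order low-pass filter as a recursive update:
    y[n] = a * y[n-1] + (1 - a) * x[n], with a = exp(-dt / tau)
    (exact for input held constant over each time step)."""
    a = np.exp(-dt / tau)
    y = np.empty(len(x))
    acc = 0.0
    for n, xn in enumerate(x):
        acc = a * acc + (1.0 - a) * xn
        y[n] = acc
    return y

# A small nonlinear dynamic system: low-pass -> static nonlinearity -> low-pass
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t)                      # test input
stage1 = lowpass(u, tau=0.02, dt=dt)
stage2 = lowpass(np.tanh(3.0 * stage1), tau=0.05, dt=dt)
```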

  • Hidden Quantum Processes, Quantum Ion Channels, and 1/fθ-Type Noise.

    abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_01067

    authors: Paris A,Vosoughi A,Berman SA,Atia G

    update date: 2018-07-01 00:00:00

  • A simple Hebbian/anti-Hebbian network learns the sparse, independent components of natural images. (An illustrative sketch follows this entry.)

    abstract::Slightly modified versions of an early Hebbian/anti-Hebbian neural network are shown to be capable of extracting the sparse, independent linear components of a prefiltered natural image set. An explanation for this capability in terms of a coupling between two hypothetical networks is presented. The simple networks pr...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606775093891

    authors: Falconbridge MS,Stamps RL,Badcock DR

    update date: 2006-02-01 00:00:00
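
The sketch below illustrates the generic Hebbian/anti-Hebbian idea referred to in the entry above: Hebbian (Oja-style) updates on feedforward weights combined with anti-Hebbian updates on lateral weights that push output correlations toward zero. It is a stand-in under stated assumptions, not the specific network of the paper, and the random input is only a placeholder for whitened natural-image patches.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, eta = 64, 16, 1e-3

W = 0.1 * rng.standard_normal((n_out, n_in))   # feedforward weights (Hebbian)
M = np.zeros((n_out, n_out))                   # lateral weights (anti-Hebbian), no self-connections

for _ in range(10_000):
    x = rng.standard_normal(n_in)              # placeholder for a whitened image patch
    y = W @ x + M @ (W @ x)                    # one-step approximation of the lateral dynamics
    # Hebbian update with Oja-style decay keeps the feedforward weights bounded
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
    # Anti-Hebbian update drives pairwise output correlations toward zero
    M -= eta * np.outer(y, y)
    np.fill_diagonal(M, 0.0)
```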

  • On the relation of slow feature analysis and Laplacian eigenmaps.

    abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00214

    authors: Sprekeler H

    update date: 2011-12-01 00:00:00

  • STDP-Compatible Approximation of Backpropagation in an Energy-Based Model.

    abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00934

    authors: Bengio Y,Mesnard T,Fischer A,Zhang S,Wu Y

    update date: 2017-03-01 00:00:00

  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-08-922

    authors: Morimura T,Uchibe E,Yoshimoto J,Peters J,Doya K

    update date: 2010-02-01 00:00:00

  • Spiking neural P systems with weights.

    abstract::A variant of spiking neural P systems with positive or negative weights on synapses is introduced, where the rules of a neuron fire when the potential of that neuron equals a given value. The involved values-weights, firing thresholds, potential consumed by each rule-can be real (computable) numbers, rational numbers,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00022

    authors: Wang J,Hoogeboom HJ,Pan L,Păun G,Pérez-Jiménez MJ

    update date: 2010-10-01 00:00:00

  • A hierarchical dynamical map as a basic frame for cortical mapping and its application to priming.

    abstract::A hierarchical dynamical map is proposed as the basic framework for sensory cortical mapping. To show how the hierarchical dynamical map works in cognitive processes, we applied it to a typical cognitive task known as priming, in which cognitive performance is facilitated as a consequence of prior experience. Prior to...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660152469341

    authors: Hoshino O,Inoue S,Kashimori Y,Kambara T

    update date: 2001-08-01 00:00:00

  • Density-weighted Nyström method for computing large kernel eigensystems. (An illustrative sketch follows this entry.)

    abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.11-07-651

    authors: Zhang K,Kwok JT

    update date: 2009-01-01 00:00:00
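
For reference, here is a hedged sketch of the plain (uniformly weighted) Nyström approximation that the entry above modifies; the paper's density-weighted variant additionally weights the sampled landmarks and is not reproduced here. The RBF kernel and all sizes are arbitrary choices for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 10))               # data set
m = 100                                           # number of landmark samples
idx = rng.choice(len(X), size=m, replace=False)
L = X[idx]

K_mm = rbf_kernel(L, L)                           # small m x m kernel block
K_nm = rbf_kernel(X, L)                           # n x m cross-kernel block

# Eigendecompose the small block, then lift to approximate the full kernel's eigensystem.
lam, U = np.linalg.eigh(K_mm)
lam, U = lam[::-1], U[:, ::-1]                    # descending order
keep = lam > 1e-10
approx_eigvals = lam[keep] * (len(X) / m)
approx_eigvecs = (K_nm @ U[:, keep]) / lam[keep] * np.sqrt(m / len(X))
```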

  • Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00683

    authors: Tangkaratt V,Xie N,Sugiyama M

    update date: 2015-01-01 00:00:00

  • Simultaneous rate-synchrony codes in populations of spiking neurons.

    abstract::Firing rates and synchronous firing are often simultaneously relevant signals, and they independently or cooperatively represent external sensory inputs, cognitive events, and environmental situations such as body position. However, how rates and synchrony comodulate and which aspects of inputs are effectively encoded...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606774841521

    authors: Masuda N

    update date: 2006-01-01 00:00:00

  • Are loss functions all the same? (An illustrative sketch follows this entry.)

    abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773135104

    authors: Rosasco L,De Vito E,Caponnetto A,Piana M,Verri A

    update date: 2004-05-01 00:00:00
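
As a concrete reminder of the kind of convex, margin-based losses such comparisons consider, here is a small sketch using the usual textbook forms; the notation and the exact loss family analysed in the paper may differ.

```python
import numpy as np

# Losses written as functions of the margin m = y * f(x), with labels y in {-1, +1}.
def hinge(m):       return np.maximum(0.0, 1.0 - m)
def square(m):      return (1.0 - m) ** 2
def logistic(m):    return np.log1p(np.exp(-m))
def exponential(m): return np.exp(-m)

margins = np.linspace(-2.0, 2.0, 5)
for name, loss in [("hinge", hinge), ("square", square),
                   ("logistic", logistic), ("exponential", exponential)]:
    print(f"{name:12s}", np.round(loss(margins), 3))
```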

  • The neuronal replicator hypothesis.

    abstract::We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby, evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cog...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00031

    authors: Fernando C,Goldstein R,Szathmáry E

    update date: 2010-11-01 00:00:00

  • An amplitude equation approach to contextual effects in visual cortex.

    abstract::A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602317250870

    authors: Bressloff PC,Cowan JD

    update date: 2002-03-01 00:00:00

  • Alignment of coexisting cortical maps in a motor control model.

    abstract::How do multiple feature maps that coexist in the same region of cerebral cortex align with each other? We hypothesize that such alignment is governed by temporal correlations: features in one map that are temporally correlated with those in another come to occupy the same spatial locations in cortex over time. To exam...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.4.731

    authors: Chen Y,Reggia JA

    update date: 1996-05-15 00:00:00

  • The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models. (An illustrative sketch follows this entry.)

    abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco_a_01275

    authors: Burkhart MC,Brandman DM,Franco B,Hochberg LR,Harrison MT

    update date: 2020-05-01 00:00:00
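
For context, the sketch below implements one predict/update step of the classical linear-gaussian Kalman filter that the letter generalizes; the paper's discriminative measurement model is not shown, and the toy matrices are arbitrary.

```python
import numpy as np

def kalman_step(mu, P, z, A, Q, H, R):
    """One predict/update step of the standard (linear-gaussian) Kalman filter."""
    # Predict through the linear dynamics
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update with the measurement z
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ H) @ P_pred
    return mu_new, P_new

# Toy constant-velocity example (all values illustrative)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]]);  Q = 1e-3 * np.eye(2)
H = np.array([[1.0, 0.0]]);             R = np.array([[0.05]])
mu, P = np.zeros(2), np.eye(2)
mu, P = kalman_step(mu, P, np.array([0.7]), A, Q, H, R)
```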

  • Binocular receptive field models, disparity tuning, and characteristic disparity. (An illustrative sketch follows this entry.)

    abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.8.1611

    authors: Zhu YD,Qian N

    update date: 1996-11-15 00:00:00
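
A minimal sketch of the receptive-field description quoted above: two 1D Gabor profiles that share a gaussian envelope but differ in phase, together with the phase-shift model's usual reading of the characteristic disparity as phase difference over carrier frequency. Parameter values are arbitrary and not taken from the paper.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=0.5, phase=0.0):
    """1D Gabor profile: gaussian envelope times a cosine carrier."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * x + phase)

x = np.linspace(-4.0, 4.0, 201)
freq = 0.5
phase_left, phase_right = 0.0, np.pi / 2           # same envelope, different phase parameters
rf_left = gabor(x, freq=freq, phase=phase_left)
rf_right = gabor(x, freq=freq, phase=phase_right)

# Phase-shift reading: characteristic disparity ~ interocular phase difference
# divided by (2 * pi * carrier frequency).
characteristic_disparity = (phase_right - phase_left) / (2 * np.pi * freq)
```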

  • Traveling waves of excitation in neural field models: equivalence of rate descriptions and integrate-and-fire dynamics.

    abstract::Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660260028656

    authors: Cremers D,Herz AV

    update date: 2002-07-01 00:00:00

  • Visual Categorization with Random Projection. (An illustrative sketch follows this entry.)

    abstract::Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00769

    authors: Arriaga RI,Rutter D,Cakmak M,Vempala SS

    update date: 2015-10-01 00:00:00
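
The sketch below illustrates the random-projection step discussed above on synthetic data: project high-dimensional stimuli through a random gaussian matrix and categorize with a very simple nearest-centroid rule in the low-dimensional space. The stimuli, dimensions, and classifier are invented for illustration and do not reproduce the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, n = 10_000, 100, 200                     # ambient dim, projected dim, samples per class

# Two synthetic categories: gaussian clouds around different prototypes
proto_a, proto_b = rng.standard_normal(d), rng.standard_normal(d)
X_a = proto_a + 0.8 * rng.standard_normal((n, d))
X_b = proto_b + 0.8 * rng.standard_normal((n, d))

# Random projection: i.i.d. gaussian entries scaled by 1/sqrt(k) roughly preserve distances
R = rng.standard_normal((d, k)) / np.sqrt(k)
Z_a, Z_b = X_a @ R, X_b @ R

# Nearest-centroid categorization in the projected space
centroid_a, centroid_b = Z_a.mean(axis=0), Z_b.mean(axis=0)
test = (proto_a + 0.8 * rng.standard_normal((50, d))) @ R
pred_a = (np.linalg.norm(test - centroid_a, axis=1)
          < np.linalg.norm(test - centroid_b, axis=1))
print("fraction of class-A test stimuli labeled A:", pred_a.mean())
```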

  • Populations of tightly coupled neurons: the RGC/LGN system.

    abstract::A mathematical model, of general character, for the dynamic description of coupled neural oscillators is presented. The population approach that is employed applies equally to coupled cells as to populations of such coupled cells. The formulation includes stochasticity and preserves details of precisely firing neurons....

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.03-07-482

    authors: Sirovich L

    update date: 2008-05-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015439

    authors: Akaho S,Kappen HJ

    update date: 2000-06-01 00:00:00

  • An internal model for acquisition and retention of motor learning during arm reaching.

    abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.03-08-721

    authors: Lonini L,Dipietro L,Zollo L,Guglielmelli E,Krebs HI

    update date: 2009-07-01 00:00:00

  • A modified algorithm for generalized discriminant analysis.

    abstract::Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from linear domain to a nonlinear domain via the kernel trick. However, in the previous algorithm of GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773717612

    authors: Zheng W,Zhao L,Zou C

    update date: 2004-06-01 00:00:00

  • Making the error-controlling algorithm of observable operator models constructive.

    abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.10-08-878

    authors: Zhao MJ,Jaeger H,Thon M

    update date: 2009-12-01 00:00:00

  • Energy-Efficient Neuromorphic Classifiers.

    abstract::Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are ext...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00882

    authors: Martí D,Rigotti M,Seok M,Fusi S

    update date: 2016-10-01 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. (An illustrative sketch follows this entry.)

    abstract::Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.9.2146

    authors: Rudolph M,Destexhe A

    update date: 2006-09-01 00:00:00
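
To illustrate the event-driven idea contrasted in the entry above, the sketch below advances a simple current-based leaky integrate-and-fire membrane analytically from one input event to the next, instead of taking many small clock-driven steps. The conductance-based analytical expressions derived in the paper are more involved and are not reproduced; all constants here are generic textbook-style values.

```python
import numpy as np

tau_m, v_rest, v_thresh, v_reset = 0.020, -0.070, -0.050, -0.065   # s, V (illustrative)

def advance(v, dt):
    """Exact membrane update over an input-free interval dt:
    v(t + dt) = v_rest + (v(t) - v_rest) * exp(-dt / tau_m)."""
    return v_rest + (v - v_rest) * np.exp(-dt / tau_m)

# Event-driven loop: jump analytically to each input spike, add its effect, test threshold.
input_spikes = [0.005, 0.012, 0.013, 0.030]     # s, arbitrary example inputs
weight = 0.008                                  # V added per input spike (illustrative)

v, t_last, output_spikes = v_rest, 0.0, []
for t in input_spikes:
    v = advance(v, t - t_last)                  # one analytical step to the event time
    v += weight
    if v >= v_thresh:
        output_spikes.append(t)
        v = v_reset
    t_last = t
print(output_spikes)                            # -> [0.013] with these illustrative numbers
```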

  • Invariant global motion recognition in the dorsal visual system: a unifying theory.

    abstract::The motion of an object (such as a wheel rotating) is seen as consistent independent of its position and size on the retina. Neurons in higher cortical visual areas respond to these global motion stimuli invariantly, but neurons in early cortical areas with small receptive fields cannot represent this motion, not only...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.1.139

    authors: Rolls ET,Stringer SM

    update date: 2007-01-01 00:00:00

  • Change-based inference in attractor nets: linear analysis.

    abstract::One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpret...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00051

    authors: Moazzezi R,Dayan P

    update date: 2010-12-01 00:00:00