Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

Abstract:

We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed as input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
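The combination of a reconstruction objective with penalties on output activity described above can be illustrated with a generic sparse-coding-style weight update. This is a schematic sketch only, not the paper's actual derived rule; the hyperparameters `eta` and `lam` and the random stand-in patches are illustrative assumptions.

```python
import numpy as np

# Schematic sketch of a feedforward layer trained with a reconstruction term
# plus a sparseness penalty on the output rates. NOT the exact rule derived
# in the paper; `eta` and `lam` are illustrative hyperparameters.

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))        # stand-ins for whitened 8x8 image patches
W = 0.1 * rng.standard_normal((64, 16))   # feedforward weights onto 16 output units

eta, lam = 0.01, 0.1                      # learning rate, sparseness strength

for _ in range(10):                       # a few passes over the patch set
    for x in X:
        y = np.tanh(x @ W)                # bounded output firing rates
        recon_err = x - W @ y             # how well the outputs explain the patch
        # Penalizing |y| encourages each unit to fire rarely (temporal
        # sparseness) and few units to fire per patch (population sparseness);
        # for tanh units, d|y_j|/dW[:, j] = sign(y_j) * (1 - y_j**2) * x.
        sparse_grad = np.outer(x, np.sign(y) * (1.0 - y**2))
        W += eta * (np.outer(recon_err, y) - lam * sparse_grad)

print(W.shape)  # (64, 16)
```

With natural-scene patches instead of random data, columns of `W` trained under such objectives typically develop localized, oriented structure reminiscent of simple-cell receptive fields.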

journal_name

Neural Comput

journal_title

Neural computation

authors

Tanaka T,Aoyagi T,Kaneko T

doi

10.1162/NECO_a_00341


pub_date

2012-10-01 00:00:00

pages

2700-2725

issue

10

eissn

1530-888X

issn

0899-7667

journal_volume

24

pub_type

Journal Article
  • Approximation by fully complex multilayer perceptrons.

    abstract::We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321891846

    authors: Kim T,Adali T

    Updated: 2003-07-01 00:00:00

  • A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation.

    abstract::Bursting plays an important role in neural communication. At the population level, macroscopic bursting has been identified in populations of neurons that do not express intrinsic bursting mechanisms. For the analysis of phase transitions between bursting and non-bursting states, mean-field descriptions of macroscopic...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01300

    authors: Gast R,Schmidt H,Knösche TR

    Updated: 2020-09-01 00:00:00

  • Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms.

    abstract::In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/0899766053011555

    authors: Wörgötter F,Porr B

    Updated: 2005-02-01 00:00:00

  • Computing with self-excitatory cliques: A model and an application to hyperacuity-scale computation in visual cortex.

    abstract::We present a model of visual computation based on tightly inter-connected cliques of pyramidal cells. It leads to a formal theory of cell assemblies, a specific relationship between correlated firing patterns and abstract functionality, and a direct calculation relating estimates of cortical cell counts to orientation...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/089976699300016782

    authors: Miller DA,Zucker SW

    Updated: 1999-01-01 00:00:00

  • Simultaneous rate-synchrony codes in populations of spiking neurons.

    abstract::Firing rates and synchronous firing are often simultaneously relevant signals, and they independently or cooperatively represent external sensory inputs, cognitive events, and environmental situations such as body position. However, how rates and synchrony comodulate and which aspects of inputs are effectively encoded...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606774841521

    authors: Masuda N

    Updated: 2006-01-01 00:00:00

  • Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00683

    authors: Tangkaratt V,Xie N,Sugiyama M

    Updated: 2015-01-01 00:00:00

  • Patterns of synchrony in neural networks with spike adaptation.

    abstract::We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of inte...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660151134280

    authors: van Vreeswijk C,Hansel D

    Updated: 2001-05-01 00:00:00

  • The successor representation and temporal context.

    abstract::The successor representation was introduced into reinforcement learning by Dayan ( 1993 ) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00282

    authors: Gershman SJ,Moore CD,Todd MT,Norman KA,Sederberg PB

    Updated: 2012-06-01 00:00:00

  • Mirror symmetric topographic maps can arise from activity-dependent synaptic changes.

    abstract::Multiple adjacent, roughly mirror-image topographic maps are commonly observed in the sensory neocortex of many species. The cortical regions occupied by these maps are generally believed to be determined initially by genetically controlled chemical markers during development, with thalamocortical afferent activity su...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053491904

    authors: Schulz R,Reggia JA

    Updated: 2005-05-01 00:00:00

  • Training nu-support vector classifiers: theory and algorithms.

    abstract::The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu on controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601750399335

    authors: Chang CC,Lin CJ

    Updated: 2001-09-01 00:00:00

  • Synchrony and desynchrony in integrate-and-fire oscillators.

    abstract::Due to many experimental reports of synchronous neural activity in the brain, there is much interest in understanding synchronization in networks of neural oscillators and its potential for computing perceptual organization. Contrary to Hopfield and Herz (1995), we find that networks of locally coupled integrate-and-f...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016160

    authors: Campbell SR,Wang DL,Jayaprakash C

    Updated: 1999-10-01 00:00:00

  • On the use of analytical expressions for the voltage distribution to analyze intracellular recordings.

    abstract::Different analytical expressions for the membrane potential distribution of membranes subject to synaptic noise have been proposed and can be very helpful in analyzing experimental data. However, all of these expressions are either approximations or limit cases, and it is not clear how they compare and which expressio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.12.2917

    authors: Rudolph M,Destexhe A

    Updated: 2006-12-01 00:00:00

  • Synchrony of neuronal oscillations controlled by GABAergic reversal potentials.

    abstract::GABAergic synapse reversal potential is controlled by the concentration of chloride. This concentration can change significantly during development and as a function of neuronal activity. Thus, GABA inhibition can be hyperpolarizing, shunting, or partially depolarizing. Previous results pinpointed the conditions under...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.3.706

    authors: Jeong HY,Gutkin B

    Updated: 2007-03-01 00:00:00

  • Making the error-controlling algorithm of observable operator models constructive.

    abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.10-08-878

    authors: Zhao MJ,Jaeger H,Thon M

    Updated: 2009-12-01 00:00:00

  • Optimality of Upper-Arm Reaching Trajectories Based on the Expected Value of the Metabolic Energy Cost.

    abstract::When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constra...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00757

    authors: Taniai Y,Nishii J

    Updated: 2015-08-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.6.1135

    authors: Casey M

    Updated: 1996-08-15 00:00:00

  • ParceLiNGAM: a causal ordering method robust against latent confounders.

    abstract::We consider learning a causal ordering of variables in a linear nongaussian acyclic model called LiNGAM. Several methods have been shown to consistently estimate a causal ordering assuming that all the model assumptions are correct. But the estimation results could be distorted if some assumptions are violated. In thi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00533

    authors: Tashiro T,Shimizu S,Hyvärinen A,Washio T

    Updated: 2014-01-01 00:00:00

  • Investigating the fault tolerance of neural networks.

    abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053723096

    authors: Tchernev EB,Mulvaney RG,Phatak DS

    Updated: 2005-07-01 00:00:00

  • The computational structure of spike trains.

    abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-07-678

    authors: Haslinger R,Klinkner KL,Shalizi CR

    Updated: 2010-01-01 00:00:00

  • Synaptic runaway in associative networks and the pathogenesis of schizophrenia.

    abstract::Synaptic runaway denotes the formation of erroneous synapses and premature functional decline accompanying activity-dependent learning in neural networks. This work studies synaptic runaway both analytically and numerically in binary-firing associative memory networks. It turns out that synaptic runaway is of fairly m...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/089976698300017836

    authors: Greenstein-Messica A,Ruppin E

    Updated: 1998-02-15 00:00:00

  • Solving stereo transparency with an extended coarse-to-fine disparity energy model.

    abstract::Modeling stereo transparency with physiologically plausible mechanisms is challenging because in such frameworks, large receptive fields mix up overlapping disparities, whereas small receptive fields can reliably compute only small disparities. It seems necessary to combine information across scales. A coarse-to-fine ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00722

    authors: Li Z,Qian N

    Updated: 2015-05-01 00:00:00

  • Reinforcement learning in continuous time and space.

    abstract::This article presents a reinforcement learning framework for continuous-time dynamical systems without a priori discretization of time, state, and action. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, we derive algorithms for estimating value functions and improv...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015961

    authors: Doya K

    Updated: 2000-01-01 00:00:00

  • Learning Hough transform: a neural network model.

    abstract::A single-layered Hough transform network is proposed that accepts image coordinates of each object pixel as input and produces a set of outputs that indicate the belongingness of the pixel to a particular structure (e.g., a straight line). The network is able to learn adaptively the parametric forms of the linear segm...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601300014501

    authors: Basak J

    Updated: 2001-03-01 00:00:00

  • Estimating functions of independent component analysis for temporally correlated signals.

    abstract::This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporally correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where spatial and temporal probab...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015079

    authors: Amari S

    Updated: 2000-09-01 00:00:00

  • Bayesian framework for least-squares support vector machine classifiers, gaussian processes, and kernel Fisher discriminant analysis.

    abstract::The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for class...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602753633411

    authors: Van Gestel T,Suykens JA,Lanckriet G,Lambrechts A,De Moor B,Vandewalle J

    Updated: 2002-05-01 00:00:00

  • Parameter Identifiability in Statistical Machine Learning: A Review.

    abstract::This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00947

    authors: Ran ZY,Hu BG

    Updated: 2017-05-01 00:00:00

  • Competition between synaptic depression and facilitation in attractor neural networks.

    abstract::We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behav...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.10.2739

    authors: Torres JJ,Cortes JM,Marro J,Kappen HJ

    Updated: 2007-10-01 00:00:00

  • Parsing Complex Sentences with Structured Connectionist Networks.

    abstract::A modular, recurrent connectionist network is taught to incrementally parse complex sentences. From input presented one word at a time, the network learns to do semantic role assignment, noun phrase attachment, and clause structure recognition, for sentences with both active and passive constructions and center-embedd...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.1.110

    authors: Jain AN

    Updated: 1991-04-01 00:00:00

  • Locality of global stochastic interaction in directed acyclic networks.

    abstract::The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602760805368

    authors: Ay N

    Updated: 2002-12-01 00:00:00

  • Neural integrator: a sandpile model.

    abstract::We investigated a model for the neural integrator based on hysteretic units connected by positive feedback. Hysteresis is assumed to emerge from the intrinsic properties of the cells. We consider the recurrent networks containing either bistable or multistable neurons. We apply our analysis to the oculomotor velocity-...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.12-06-416

    authors: Nikitchenko M,Koulakov A

    Updated: 2008-10-01 00:00:00