Invariant global motion recognition in the dorsal visual system: a unifying theory.

Abstract:

The motion of an object (such as a wheel rotating) is seen as consistent independent of its position and size on the retina. Neurons in higher cortical visual areas respond to these global motion stimuli invariantly, but neurons in early cortical areas with small receptive fields cannot represent this motion, not only because of the aperture problem but also because they do not have invariant representations. In a unifying hypothesis with the design of the ventral cortical visual system, we propose that the dorsal visual system uses a hierarchical feedforward network architecture (V1, V2, MT, MSTd, parietal cortex) with training of the connections with a short-term memory trace associative synaptic modification rule to capture what is invariant at each stage. Simulations show that the proposal is computationally feasible, in that invariant representations of the motion flow fields produced by objects self-organize in the later layers of the architecture. The model produces invariant representations of the motion flow fields produced by global in-plane motion of an object, in-plane rotational motion, looming versus receding of the object, and object-based rotation about a principal axis. Thus, the dorsal and ventral visual systems may share some similar computational principles.
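As a rough illustration of the short-term memory trace associative learning referred to in the abstract, the following is a minimal sketch of one rate-coded feedforward layer trained with a trace-style Hebbian update (weight change proportional to the product of a decaying trace of postsynaptic firing and the current presynaptic firing). The layer sizes, parameter values, and the omission of lateral competition and of the full V1-to-parietal hierarchy are simplifying assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """One training sweep of a trace-style associative update over a transform sequence.

    w      : (n_post, n_pre) weight matrix of a single feedforward layer
    x_seq  : (T, n_pre) presynaptic firing rates for T successive transforms of
             one stimulus (e.g. the same flow field at different retinal positions)
    alpha  : learning rate
    eta    : trace decay; eta = 0 reduces the rule to a plain Hebbian update
    """
    n_post = w.shape[0]
    y_trace = np.zeros(n_post)                  # short-term memory trace of output firing
    for x in x_seq:
        y = np.maximum(w @ x, 0.0)              # rate-coded output (threshold-linear)
        y_trace = (1.0 - eta) * y + eta * y_trace
        w = w + alpha * np.outer(y_trace, x)    # associate the trace with the current input
        w = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-12)  # keep weight vectors bounded
    return w

# Hypothetical usage: 64 input rates, 16 output cells, 5 transforms of one stimulus
rng = np.random.default_rng(0)
w = rng.random((16, 64))
x_seq = rng.random((5, 64))
w = trace_rule_update(w, x_seq)
```

Because the trace carries activity over from the preceding transforms of the same stimulus, the update associates successive views (for example, the same flow field at different retinal positions) with the same output neurons, which is the proposed basis for the invariant representations that self-organize in the later layers.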

journal_name: Neural Comput

journal_title: Neural computation

authors: Rolls ET,Stringer SM

doi: 10.1162/neco.2007.19.1.139

subject: Has Abstract

pub_date: 2007-01-01 00:00:00

pages: 139-69

issue: 1

issn: 0899-7667

eissn: 1530-888X

journal_volume: 19

pub_type: Journal Article
  • Fast population coding.

    abstract::Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the ...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2007.19.2.404

    authors: Huys QJ,Zemel RS,Natarajan R,Dayan P

    update_date: 2007-02-01 00:00:00

  • Feature selection in simple neurons: how coding depends on spiking dynamics.

    abstract::The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes and a nonlinear function that r...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.02-09-956

    authors: Famulare M,Fairhall A

    update_date: 2010-03-01 00:00:00

  • MISEP method for postnonlinear blind source separation.

    abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.9.2557

    authors: Zheng CH,Huang DS,Li K,Irwin G,Sun ZL

    update_date: 2007-09-01 00:00:00

  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-08-922

    authors: Morimura T,Uchibe E,Yoshimoto J,Peters J,Doya K

    update_date: 2010-02-01 00:00:00

  • Representation sharpening can explain perceptual priming.

    abstract::Perceiving and identifying an object is improved by prior exposure to the object. This perceptual priming phenomenon is accompanied by reduced neural activity. But whether suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. To address this problem, we developed a ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.04-09-999

    authors: Moldakarimov S,Bazhenov M,Sejnowski TJ

    update_date: 2010-05-01 00:00:00

  • A theory of slow feature analysis for transformation-based input signals with an application to complex cells.

    abstract::We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00072

    authors: Sprekeler H,Wiskott L

    update_date: 2011-02-01 00:00:00

  • Learning Precise Spike Train-to-Spike Train Transformations in Multilayer Feedforward Neuronal Networks.

    abstract::We derive a synaptic weight update rule for learning temporally precise spike train-to-spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00829

    authors: Banerjee A

    update_date: 2016-05-01 00:00:00

  • Investigating the fault tolerance of neural networks.

    abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053723096

    authors: Tchernev EB,Mulvaney RG,Phatak DS

    update_date: 2005-07-01 00:00:00

  • Classification of temporal patterns in dynamic biological networks.

    abstract::A general method is presented to classify temporal patterns generated by rhythmic biological networks when synaptic connections and cellular properties are known. The method is discrete in nature and relies on algebraic properties of state transitions and graph theory. Elements of the set of rhythms generated by a net...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017160

    authors: Roberts PD

    update_date: 1998-10-01 00:00:00

  • On the problem in model selection of neural network regression in overrealizable scenario.

    abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602760128090

    authors: Hagiwara K

    update_date: 2002-08-01 00:00:00

  • Parameter Sensitivity of the Elastic Net Approach to the Traveling Salesman Problem.

    abstract::Durbin and Willshaw's elastic net algorithm can find good solutions to the TSP. The purpose of this paper is to point out that for certain ranges of parameter values, the algorithm converges into local minima that do not correspond to valid tours. The key parameter is the ratio governing the relative strengths of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1991.3.3.363

    authors: Simmen MW

    update_date: 1991-10-01 00:00:00

  • Scalable Semisupervised Functional Neurocartography Reveals Canonical Neurons in Behavioral Networks.

    abstract::Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponential...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00852

    authors: Frady EP,Kapoor A,Horvitz E,Kristan WB Jr

    update_date: 2016-08-01 00:00:00

  • Accelerated spike resampling for accurate multiple testing controls.

    abstract::Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00399

    authors: Harrison MT

    update_date: 2013-02-01 00:00:00

  • A Gaussian attractor network for memory and recognition with experience-dependent learning.

    abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2010.02-09-957

    authors: Hu X,Zhang B

    update_date: 2010-05-01 00:00:00

  • Direct estimation of inhomogeneous Markov interval models of spike trains.

    abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.07-08-828

    authors: Wójcik DK,Mochol G,Jakuczun W,Wypych M,Waleszczyk WJ

    update_date: 2009-08-01 00:00:00

  • A general probability estimation approach for neural computation.

    abstract::We describe an analytical framework for the adaptations of neural systems that adapt its internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015862

    authors: Khaikine M,Holthausen K

    update_date: 2000-02-01 00:00:00

  • On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    abstract::In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00112

    authors: Kaabi MG,Tonnelier A,Martinez D

    update_date: 2011-05-01 00:00:00

  • Permitted and forbidden sets in symmetric threshold-linear networks.

    abstract::The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, conve...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321192103

    authors: Hahnloser RH,Seung HS,Slotine JJ

    update_date: 2003-03-01 00:00:00

  • Parameter learning for alpha integration.

    abstract::In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g.,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00445

    authors: Choi H,Choi S,Choe Y

    update_date: 2013-06-01 00:00:00

  • Topographic mapping of large dissimilarity data sets.

    abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00012

    authors: Hammer B,Hasenfuss A

    update_date: 2010-09-01 00:00:00

  • Computation in a single neuron: Hodgkin and Huxley revisited.

    abstract::A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660360675017

    authors: Agüera y Arcas B,Fairhall AL,Bialek W

    update_date: 2003-08-01 00:00:00

  • Effects of fast presynaptic noise in attractor neural networks.

    abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606775623342

    authors: Cortes JM,Torres JJ,Marro J,Garrido PL,Kappen HJ

    update_date: 2006-03-01 00:00:00

  • Changes in GABAB modulation during a theta cycle may be analogous to the fall of temperature during annealing.

    abstract::Changes in GABA modulation may underlie experimentally observed changes in the strength of synaptic transmission at different phases of the theta rhythm (Wyble, Linster, & Hasselmo, 1997). Analysis demonstrates that these changes improve sequence disambiguation by a neural network model of CA3. We show that in the fra...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017539

    authors: Sohal VS,Hasselmo ME

    update_date: 1998-05-15 00:00:00

  • The time-organized map algorithm: extending the self-organizing map to spatiotemporal signals.

    abstract::The new time-organized map (TOM) is presented for a better understanding of the self-organization and geometric structure of cortical signal representations. The algorithm extends the common self-organizing map (SOM) from the processing of purely spatial signals to the processing of spatiotemporal signals. The main ad...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603765202695

    authors: Wiemer JC

    update_date: 2003-05-01 00:00:00

  • Traveling waves of excitation in neural field models: equivalence of rate descriptions and integrate-and-fire dynamics.

    abstract::Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660260028656

    authors: Cremers D,Herz AV

    update_date: 2002-07-01 00:00:00

  • Supervised Determined Source Separation with Multichannel Variational Autoencoder.

    abstract::This letter proposes a multichannel source separation technique, the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class label...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01217

    authors: Kameoka H,Li L,Inoue S,Makino S

    update_date: 2019-09-01 00:00:00

  • Capturing the Forest but Missing the Trees: Microstates Inadequate for Characterizing Shorter-Scale EEG Dynamics.

    abstract::The brain is known to be active even when not performing any overt cognitive tasks, and often it engages in involuntary mind wandering. This resting state has been extensively characterized in terms of fMRI-derived brain networks. However, an alternate method has recently gained popularity: EEG microstate analysis. Pr...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco_a_01229

    authors: Shaw SB,Dhindsa K,Reilly JP,Becker S

    update_date: 2019-11-01 00:00:00

  • Competition between synaptic depression and facilitation in attractor neural networks.

    abstract::We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behav...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.10.2739

    authors: Torres JJ,Cortes JM,Marro J,Kappen HJ

    update_date: 2007-10-01 00:00:00

  • Mismatched training and test distributions can outperform matched ones.

    abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00697

    authors: González CR,Abu-Mostafa YS

    update_date: 2015-02-01 00:00:00

  • Neural associative memory with optimal Bayesian learning.

    abstract::Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00127

    authors: Knoblauch A

    update_date: 2011-06-01 00:00:00