A bio-inspired, computational model suggests velocity gradients of optic flow locally encode ordinal depth at surface borders and globally they encode self-motion.

Abstract:

Visual navigation requires the estimation of self-motion as well as the segmentation of objects from the background. We suggest a definition of local velocity gradients to compute types of self-motion, segment objects, and compute local properties of optical flow fields, such as divergence, curl, and shear. Such velocity gradients are computed as velocity differences measured locally tangent and normal to the direction of flow. These differences are then rotated according to the local direction of flow to achieve independence of that direction. We propose a bio-inspired model for the computation of these velocity gradients for video sequences. Simulation results show that local gradients encode ordinal surface depth, assuming self-motion in a rigid scene or object motions in a nonrigid scene. For translational self-motion, velocity gradients can be used to distinguish between static and moving objects. The information about ordinal surface depth and self-motion can help steering control for visual navigation.
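To make the gradient definition concrete, the sketch below computes flow-aligned velocity gradients for a dense optic flow field (u, v): spatial velocity differences are taken along the local tangent and normal of the flow and then rotated by the local flow direction, so the result is independent of that direction. This is a minimal NumPy illustration of the stated definition, not the authors' bio-inspired model; the function name, the use of np.gradient as the difference operator, and the returned divergence/curl/shear descriptors are assumptions for illustration.

```python
import numpy as np

def flow_aligned_gradients(u, v, step=1.0):
    """Sketch: velocity differences tangent and normal to the flow,
    rotated into flow-aligned coordinates (assumed interface)."""
    # Cartesian spatial derivatives of both flow components;
    # np.gradient returns derivatives along axis 0 (y), then axis 1 (x).
    du_dy, du_dx = np.gradient(u, step)
    dv_dy, dv_dx = np.gradient(v, step)

    # Local flow direction, with unit tangent t = (c, s), normal n = (-s, c).
    theta = np.arctan2(v, u)
    c, s = np.cos(theta), np.sin(theta)

    # Velocity differences along the tangent and normal directions.
    d_t = (du_dx * c + du_dy * s, dv_dx * c + dv_dy * s)
    d_n = (-du_dx * s + du_dy * c, -dv_dx * s + dv_dy * c)

    # Rotate each difference vector by -theta so the gradients
    # no longer depend on the local flow direction.
    rotate = lambda w: (c * w[0] + s * w[1], -s * w[0] + c * w[1])
    g_t, g_n = rotate(d_t), rotate(d_n)

    # Local first-order flow descriptors named in the abstract.
    divergence = du_dx + dv_dy
    curl = dv_dx - du_dy
    shear = np.hypot(du_dx - dv_dy, dv_dx + du_dy)
    return g_t, g_n, divergence, curl, shear
```

Under the rigid-scene assumption stated in the abstract, discontinuities of these flow-aligned gradients at surface borders carry the ordinal-depth cue, while their global pattern reflects the type of self-motion.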

journal_name: Neural Comput
journal_title: Neural computation
authors: Raudies F,Ringbauer S,Neumann H
doi: 10.1162/NECO_a_00479
pub_date: 2013-09-01 00:00:00
pages: 2421-2449
issue: 9
eissn: 1530-888X
issn: 0899-7667
journal_volume: 25
pub_type: Journal Article

Related articles:
  • A general probability estimation approach for neural comp.

    abstract::We describe an analytical framework for the adaptations of neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015862

    authors: Khaikine M,Holthausen K

    Update date: 2000-02-01 00:00:00

  • Binocular receptive field models, disparity tuning, and characteristic disparity.

    abstract::Disparity tuning of visual cells in the brain depends on the structure of their binocular receptive fields (RFs). Freeman and coworkers have found that binocular RFs of a typical simple cell can be quantitatively described by two Gabor functions with the same gaussian envelope but different phase parameters in the sin...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.8.1611

    authors: Zhu YD,Qian N

    Update date: 1996-11-15 00:00:00

  • Boosted mixture of experts: an ensemble learning scheme.

    abstract::We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016737

    authors: Avnimelech R,Intrator N

    Update date: 1999-02-15 00:00:00

  • Time-varying perturbations can distinguish among integrate-to-threshold models for perceptual decision making in reaction time tasks.

    abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.07-08-817

    authors: Zhou X,Wong-Lin K,Holmes P

    Update date: 2009-08-01 00:00:00

  • On the relation of slow feature analysis and Laplacian eigenmaps.

    abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00214

    authors: Sprekeler H

    Update date: 2011-12-01 00:00:00

  • A Reservoir Computing Model of Reward-Modulated Motor Learning and Automaticity.

    abstract::Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target re...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01198

    authors: Pyle R,Rosenbaum R

    Update date: 2019-07-01 00:00:00

  • Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.

    abstract::We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01329

    authors: Kent SJ,Frady EP,Sommer FT,Olshausen BA

    Update date: 2020-12-01 00:00:00

  • A graphical model framework for decoding in the visual ERP-based BCI speller.

    abstract::We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events....

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00066

    authors: Martens SM,Mooij JM,Hill NJ,Farquhar J,Schölkopf B

    Update date: 2011-01-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015439

    authors: Akaho S,Kappen HJ

    Update date: 2000-06-01 00:00:00

  • The computational structure of spike trains.

    abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-07-678

    authors: Haslinger R,Klinkner KL,Shalizi CR

    Update date: 2010-01-01 00:00:00

  • Neural Circuits Trained with Standard Reinforcement Learning Can Accumulate Probabilistic Information during Decision Making.

    abstract::Much experimental evidence suggests that during decision making, neural circuits accumulate evidence supporting alternative options. A computational model well describing this accumulation for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00917

    authors: Kurzawa N,Summerfield C,Bogacz R

    Update date: 2017-02-01 00:00:00

  • A Unifying Framework of Synaptic and Intrinsic Plasticity in Neural Populations.

    abstract::A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input sig...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01057

    authors: Leugering J,Pipa G

    Update date: 2018-04-01 00:00:00

  • Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses.

    abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00956

    authors: Elliott T

    Update date: 2017-06-01 00:00:00

  • A modified algorithm for generalized discriminant analysis.

    abstract::Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from linear domain to a nonlinear domain via the kernel trick. However, in the previous algorithm of GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976604773717612

    authors: Zheng W,Zhao L,Zou C

    Update date: 2004-06-01 00:00:00

  • Regularized neural networks: some convergence rate results.

    abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1995.7.6.1225

    authors: Corradi V,White H

    Update date: 1995-11-01 00:00:00

  • Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models.

    abstract::A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a s...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01021

    authors: Weber AI,Pillow JW

    Update date: 2017-12-01 00:00:00

  • Computing sparse representations of multidimensional signals using Kronecker bases.

    abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00385

    authors: Caiafa CF,Cichocki A

    Update date: 2013-01-01 00:00:00

  • Generalization and multirate models of motor adaptation.

    abstract::When subjects adapt their reaching movements in the setting of a systematic force or visual perturbation, generalization of adaptation can be assessed psychophysically in two ways: by testing untrained locations in the work space at the end of adaptation (slow postadaptation generalization) or by determining the influ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00262

    authors: Tanaka H,Krakauer JW,Sejnowski TJ

    Update date: 2012-04-01 00:00:00

  • Online Reinforcement Learning Using a Probability Density Estimation.

    abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00906

    authors: Agostini A,Celaya E

    Update date: 2017-01-01 00:00:00

  • Estimation and marginalization using the Kikuchi approximation methods.

    abstract::In this letter, we examine a general method of approximation, known as the Kikuchi approximation method, for finding the marginals of a product distribution, as well as the corresponding partition function. The Kikuchi approximation method defines a certain constrained optimization problem, called the Kikuchi problem,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054026693

    authors: Pakzad P,Anantharam V

    Update date: 2005-08-01 00:00:00

  • Extraction of Synaptic Input Properties in Vivo.

    abstract::Knowledge of synaptic input is crucial for understanding synaptic integration and ultimately neural function. However, in vivo, the rates at which synaptic inputs arrive are high, so that it is typically impossible to detect single events. We show here that it is nevertheless possible to extract the properties of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00975

    authors: Puggioni P,Jelitai M,Duguid I,van Rossum MCW

    Update date: 2017-07-01 00:00:00

  • Insect-inspired estimation of egomotion.

    abstract::Tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during egomotion. In this study, we examine whether a simplified linear model based on the organization principles in tangential neurons can be used to estimate egomotion from the optic flow. We present a theory for the cons...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041941899

    authors: Franz MO,Chahl JS,Krapp HG

    Update date: 2004-11-01 00:00:00

  • On the emergence of rules in neural networks.

    abstract::A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical kno...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976602320264079

    authors: Hanson SJ,Negishi M

    Update date: 2002-09-01 00:00:00

  • Discriminant component pruning. Regularization and interpretation of multi-layered back-propagation networks.

    abstract::Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/089976699300016665

    authors: Koene RA,Takane Y

    Update date: 1999-04-01 00:00:00

  • Learning Slowness in a Sparse Model of Invariant Feature Detection.

    abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00743

    authors: Chandrapala TN,Shi BE

    Update date: 2015-07-01 00:00:00

  • An internal model for acquisition and retention of motor learning during arm reaching.

    abstract::Humans have the ability to learn novel motor tasks while manipulating the environment. Several models of motor learning have been proposed in the literature, but few of them address the problem of retention and interference of motor memory. The modular selection and identification for control (MOSAIC) model, originall...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.03-08-721

    authors: Lonini L,Dipietro L,Zollo L,Guglielmelli E,Krebs HI

    Update date: 2009-07-01 00:00:00

  • Machine Learning: Deepest Learning as Statistical Data Assimilation Problems.

    abstract::We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has b...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01094

    authors: Abarbanel HDI,Rozdeba PJ,Shirman S

    Update date: 2018-08-01 00:00:00

  • Traveling waves of excitation in neural field models: equivalence of rate descriptions and integrate-and-fire dynamics.

    abstract::Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660260028656

    authors: Cremers D,Herz AV

    Update date: 2002-07-01 00:00:00

  • The relationship between synchronization among neuronal populations and their mean activity levels.

    abstract::In the past decade the importance of synchronized dynamics in the brain has emerged from both empirical and theoretical perspectives. Fast dynamic synchronous interactions of an oscillatory or nonoscillatory nature may constitute a form of temporal coding that underlies feature binding and perceptual synthesis. The re...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016287

    authors: Chawla D,Lumer ED,Friston KJ

    Update date: 1999-08-15 00:00:00

  • Variations on the Theme of Synaptic Filtering: A Comparison of Integrate-and-Express Models of Synaptic Plasticity for Memory Lifetimes.

    abstract::Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binar...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00889

    authors: Elliott T

    Update date: 2016-11-01 00:00:00