Density-weighted Nyström method for computing large kernel eigensystems.

abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the samples chosen in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observation, we extend the Nyström method to a more general, density-weighted version. We show that by introducing the probability density function as a natural weighting scheme, the approximation of the eigensystem can be greatly improved. An efficient algorithm is proposed to enforce such weighting in practice; it has the same complexity as the original Nyström method and hence is notably cheaper than several other alternatives. Experiments on kernel principal component analysis, spectral clustering, and image segmentation demonstrate the encouraging performance of our algorithm.

journal_title:Neural computation

authors: Zhang K,Kwok JT

update date: 2009-01-01 00:00:00
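The weighting idea in the abstract can be sketched in code. The following is a minimal illustration, not the authors' algorithm: it draws landmarks uniformly, weights each landmark by a crude kernel-density estimate (an assumption standing in for the paper's density weighting), solves the weighted m-by-m eigenproblem, and applies the usual Nyström extension to all n points. The function names `density_weighted_nystrom` and `rbf_kernel` are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def density_weighted_nystrom(X, m=50, gamma=1.0, seed=0):
    """Sketch of a density-weighted Nystrom eigensystem approximation.

    Landmarks are drawn uniformly; each landmark receives a weight from a
    simple kernel-density estimate (an illustrative stand-in for the
    density weighting derived from the integral equation). Cost stays at
    the level of plain Nystrom: one m x m eigendecomposition plus an
    n x m kernel block.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = X[rng.choice(n, size=m, replace=False)]

    # Density estimate q(z_i) at the landmarks (assumption: crude KDE).
    q = rbf_kernel(Z, X, gamma).mean(axis=1)
    q /= q.sum()

    # Symmetric weighted eigenproblem: W = D^{1/2} K_mm D^{1/2}, D = diag(q).
    K_mm = rbf_kernel(Z, Z, gamma)
    d = np.sqrt(q)
    W = d[:, None] * K_mm * d[None, :]
    eigvals, eigvecs = np.linalg.eigh(W)

    # Sort descending and drop numerically zero eigenvalues.
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1e-12
    eigvals, eigvecs = eigvals[keep], eigvecs[:, keep]

    # Nystrom extension of the landmark eigenvectors to all n points.
    K_nm = rbf_kernel(X, Z, gamma)
    U = (K_nm * d[None, :]) @ eigvecs / eigvals
    return eigvals, U
```

With uniform weights (q constant) this reduces to the standard Nyström approximation, which is why the density enters only through the diagonal scaling of the small eigenproblem and of the extension step.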

  • Robustness of connectionist swimming controllers against random variation in neural connections.

    abstract::The ability to achieve high swimming speed and efficiency is very important to both the real lamprey and its robotic implementation. In previous studies, we used evolutionary algorithms to evolve biologically plausible connectionist swimming controllers for a simulated lamprey. This letter investigates the robustness ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Or J

    update date: 2007-06-01 00:00:00

  • Statistical computer model analysis of the reciprocal and recurrent inhibitions of the Ia-EPSP in α-motoneurons.

    abstract::We simulate the inhibition of Ia-glutamatergic excitatory postsynaptic potential (EPSP) by preceding it with glycinergic recurrent (REN) and reciprocal (REC) inhibitory postsynaptic potentials (IPSPs). The inhibition is evaluated in the presence of voltage-dependent conductances of sodium, delayed rectifier potassium,...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Gradwohl G,Grossman Y

    update date: 2013-01-01 00:00:00

  • Propagating distributions up directed acyclic graphs.

    abstract::In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding o...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Baum EB,Smith WD

    update date: 1999-01-01 00:00:00

  • Making the error-controlling algorithm of observable operator models constructive.

    abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Zhao MJ,Jaeger H,Thon M

    update date: 2009-12-01 00:00:00

  • Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks.

    abstract::Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that, for anal...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Jabri M,Flower B

    update date: 1991-01-01 00:00:00

  • Feature selection in simple neurons: how coding depends on spiking dynamics.

    abstract::The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes and a nonlinear function that r...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Famulare M,Fairhall A

    update date: 2010-03-01 00:00:00

  • Classification of temporal patterns in dynamic biological networks.

    abstract::A general method is presented to classify temporal patterns generated by rhythmic biological networks when synaptic connections and cellular properties are known. The method is discrete in nature and relies on algebraic properties of state transitions and graph theory. Elements of the set of rhythms generated by a net...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Roberts PD

    update date: 1998-10-01 00:00:00

  • Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms.

    abstract::In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    authors: Wörgötter F,Porr B

    update date: 2005-02-01 00:00:00

  • Variations on the Theme of Synaptic Filtering: A Comparison of Integrate-and-Express Models of Synaptic Plasticity for Memory Lifetimes.

    abstract::Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binar...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Elliott T

    update date: 2016-11-01 00:00:00

  • Reinforcement learning in continuous time and space.

    abstract::This article presents a reinforcement learning framework for continuous-time dynamical systems without a priori discretization of time, state, and action. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, we derive algorithms for estimating value functions and improv...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Doya K

    update date: 2000-01-01 00:00:00

  • Online adaptive decision trees.

    abstract::Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Basak J

    update date: 2004-09-01 00:00:00

  • Dissociable forms of repetition priming: a computational model.

    abstract::Nondeclarative memory and novelty processing in the brain is an actively studied field of neuroscience, and reducing neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend-specifically, rising activity for unfamiliar stimuli-questi...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Makukhin K,Bolland S

    update date: 2014-04-01 00:00:00

  • STDP-Compatible Approximation of Backpropagation in an Energy-Based Model.

    abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Bengio Y,Mesnard T,Fischer A,Zhang S,Wu Y

    update date: 2017-03-01 00:00:00

  • Effects of fast presynaptic noise in attractor neural networks.

    abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Cortes JM,Torres JJ,Marro J,Garrido PL,Kappen HJ

    update date: 2006-03-01 00:00:00

  • Piecewise-linear neural networks and their relationship to rule extraction from data.

    abstract::This article addresses the topic of extracting logical rules from data by means of artificial neural networks. The approach based on piecewise linear neural networks is revisited, which has already been used for the extraction of Boolean rules in the past, and it is shown that this approach can be important also for t...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Holena M

    update date: 2006-11-01 00:00:00

  • Mirror symmetric topographic maps can arise from activity-dependent synaptic changes.

    abstract::Multiple adjacent, roughly mirror-image topographic maps are commonly observed in the sensory neocortex of many species. The cortical regions occupied by these maps are generally believed to be determined initially by genetically controlled chemical markers during development, with thalamocortical afferent activity su...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Schulz R,Reggia JA

    update date: 2005-05-01 00:00:00

  • Mismatched training and test distributions can outperform matched ones.

    abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: González CR,Abu-Mostafa YS

    update date: 2015-02-01 00:00:00

  • Fast recursive filters for simulating nonlinear dynamic systems.

    abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...

    journal_title:Neural computation

    pub_type: Letter

    authors: van Hateren JH

    update date: 2008-07-01 00:00:00

  • Parameter Sensitivity of the Elastic Net Approach to the Traveling Salesman Problem.

    abstract::Durbin and Willshaw's elastic net algorithm can find good solutions to the TSP. The purpose of this paper is to point out that for certain ranges of parameter values, the algorithm converges into local minima that do not correspond to valid tours. The key parameter is the ratio governing the relative strengths of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Simmen MW

    update date: 1991-10-01 00:00:00

  • Change-based inference in attractor nets: linear analysis.

    abstract::One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpret...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Moazzezi R,Dayan P

    update date: 2010-12-01 00:00:00

  • A general probability estimation approach for neural comp.

    abstract::We describe an analytical framework for the adaptations of neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Khaikine M,Holthausen K

    update date: 2000-02-01 00:00:00

  • Learning Slowness in a Sparse Model of Invariant Feature Detection.

    abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Chandrapala TN,Shi BE

    update date: 2015-07-01 00:00:00

  • Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression.

    abstract::Intracortical brain computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has been previously shown to degrade with signal nonstationarities-specifically, the changes in the statistics of the data between training and testin...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Brandman DM,Burkhart MC,Kelemen J,Franco B,Harrison MT,Hochberg LR

    update date: 2018-11-01 00:00:00

  • Minimizing binding errors using learned conjunctive features.

    abstract::We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors-Malsburg's classical binding problem. We begin by deriving an analytical model that make...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Mel BW,Fiser J

    update date: 2000-04-01 00:00:00

  • Spiking neural P systems with a generalized use of rules.

    abstract::Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by spiking neurons, where the spiking rules are usually used in a sequential way (an applicable rule is applied one time at a step) or an exhaustive way (an applicable rule is applied as many times as possible at a s...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Zhang X,Wang B,Pan L

    update date: 2014-12-01 00:00:00

  • An integral upper bound for neural network approximation.

    abstract::Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of n...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Kainen PC,Kůrková V

    update date: 2009-10-01 00:00:00

  • Bias/Variance Decompositions for Likelihood-Based Estimators.

    abstract::The bias/variance decomposition of mean-squared error is well understood and relatively straightforward. In this note, a similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood. ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Heskes T

    update date: 1998-07-28 00:00:00

  • Slow feature analysis: a theoretical analysis of optimal free responses.

    abstract::Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invar...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Wiskott L

    update date: 2003-09-01 00:00:00

  • Conductance-based integrate-and-fire models.

    abstract::A conductance-based model of Na+ and K+ currents underlying action potential generation is introduced by simplifying the quantitative model of Hodgkin and Huxley (HH). If the time course of rate constants can be approximated by a pulse, HH equations can be solved analytically. Pulse-based (PB) models generate action p...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Destexhe A

    update date: 1997-04-01 00:00:00

  • Locality of global stochastic interaction in directed acyclic networks.

    abstract::The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Ay N

    update date: 2002-12-01 00:00:00