  • Robust boosting algorithm against mislabeling in multiclass problems.

    abstract::We discuss robustness against mislabeling of multiclass labels in classification problems and propose two boosting algorithms, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence. Both algorithms are closely related to models of mislabeling in which a label is erroneously exchanged for another. We explore the theoretical aspects that support this robustness, apply both methods to synthetic and real data sets to investigate their performance with a focus on robustness, and confirm their validity.

    journal_title:Neural computation

    authors: Takenouchi T,Eguchi S,Murata N,Kanamori T

    Update date:2008-06-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Casey M

    Update date:1996-08-15 00:00:00

  • Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses.

    abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Elliott T

    Update date:2017-06-01 00:00:00

  • Scalable hybrid computation with spikes.

    abstract::We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moder...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Sarpeshkar R,O'Halloran M

    Update date:2002-09-01 00:00:00

  • Discriminant component pruning: regularization and interpretation of multi-layered back-propagation networks.

    abstract::Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    authors: Koene RA,Takane Y

    Update date:1999-04-01 00:00:00

  • The successor representation and temporal context.

    abstract::The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity o...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Gershman SJ,Moore CD,Todd MT,Norman KA,Sederberg PB

    Update date:2012-06-01 00:00:00
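    For background, the successor representation from Dayan (1993) that this abstract builds on has a simple tabular form learnable by TD(0). The sketch below is illustrative, not from the paper: the ring-world environment, step size, and discount factor are all assumptions chosen for the demonstration.

    ```python
    import numpy as np

    def sr_td_update(M, s, s_next, gamma=0.5, alpha=0.05):
        """One TD(0) update of the successor representation matrix M.

        M[s, j] estimates the expected discounted future occupancy of
        state j when starting from state s (Dayan, 1993).
        """
        onehot = np.eye(M.shape[0])[s]
        M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
        return M

    # Illustrative environment: a deterministic 3-state ring 0 -> 1 -> 2 -> 0.
    n, gamma = 3, 0.5
    M = np.zeros((n, n))
    s = 0
    for _ in range(20000):
        s_next = (s + 1) % n
        M = sr_td_update(M, s, s_next, gamma=gamma)
        s = s_next

    # For a known transition matrix P, the exact SR is (I - gamma * P)^-1.
    P = np.roll(np.eye(n), -1, axis=0)  # ring transitions
    M_exact = np.linalg.inv(np.eye(n) - gamma * P)
    ```

    Because the transitions are deterministic, the TD estimate converges to the closed-form matrix, which is what makes the SR useful for generalization between states with similar successors.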

  • Change-based inference in attractor nets: linear analysis.

    abstract::One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks' attractive manifolds. These points represent population codes for the stimulus values. However, this standard interpret...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Moazzezi R,Dayan P

    Update date:2010-12-01 00:00:00

  • Temporal coding: assembly formation through constructive interference.

    abstract::Temporal coding is studied for an oscillatory neural network model with synchronization and acceleration. The latter mechanism refers to increasing (decreasing) the phase velocity of each unit for stronger (weaker) or more coherent (decoherent) input from the other units. It has been demonstrated that acceleration gen...

    journal_title:Neural computation

    pub_type: Letter

    authors: Burwick T

    Update date:2008-07-01 00:00:00

  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Morimura T,Uchibe E,Yoshimoto J,Peters J,Doya K

    Update date:2010-02-01 00:00:00

  • Regularized neural networks: some convergence rate results.

    abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. W...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Corradi V,White H

    Update date:1995-11-01 00:00:00

  • Parameter Sensitivity of the Elastic Net Approach to the Traveling Salesman Problem.

    abstract::Durbin and Willshaw's elastic net algorithm can find good solutions to the TSP. The purpose of this paper is to point out that for certain ranges of parameter values, the algorithm converges into local minima that do not correspond to valid tours. The key parameter is the ratio governing the relative strengths of the ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Simmen MW

    Update date:1991-10-01 00:00:00

  • Synchrony in heterogeneous networks of spiking neurons.

    abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Neltner L,Hansel D,Mato G,Meunier C

    Update date:2000-07-01 00:00:00

  • Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation.

    abstract::Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding perfo...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Pagan M,Simoncelli EP,Rust NC

    Update date:2016-11-01 00:00:00

  • Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    abstract::We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Tanaka T,Aoyagi T,Kaneko T

    Update date:2012-10-01 00:00:00

  • Transmission of population-coded information.

    abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Renart A,van Rossum MC

    Update date:2012-02-01 00:00:00

  • Minimal model for intracellular calcium oscillations and electrical bursting in melanotrope cells of Xenopus laevis.

    abstract::A minimal model is presented to explain changes in frequency, shape, and amplitude of Ca2+ oscillations in the neuroendocrine melanotrope cell of Xenopus Laevis. It describes the cell as a plasma membrane oscillator with influx of extracellular Ca2+ via voltage-gated Ca2+ channels in the plasma membrane. The Ca2+ osci...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Cornelisse LN,Scheenen WJ,Koopman WJ,Roubos EW,Gielen SC

    Update date:2001-01-01 00:00:00

  • Statistical procedures for spatiotemporal neuronal data with applications to optical recording of the auditory cortex.

    abstract::This article presents new procedures for multisite spatiotemporal neuronal data analysis. A new statistical model - the diffusion model - is considered, whose parameters can be estimated from experimental data thanks to mean-field approximations. This work has been applied to optical recording of the guinea pig's audi...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: François O,Abdallahi LM,Horikawa J,Taniguchi I,Hervé T

    Update date:2000-08-01 00:00:00

  • Whence the Expected Free Energy?

    abstract::The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Millidge B,Tschantz A,Buckley CL

    Update date:2021-01-05 00:00:00

  • Rapid processing and unsupervised learning in a model of the cortical macrocolumn.

    abstract::We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Lücke J,von der Malsburg C

    Update date:2004-03-01 00:00:00

  • Generalization and multirate models of motor adaptation.

    abstract::When subjects adapt their reaching movements in the setting of a systematic force or visual perturbation, generalization of adaptation can be assessed psychophysically in two ways: by testing untrained locations in the work space at the end of adaptation (slow postadaptation generalization) or by determining the influ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Tanaka H,Krakauer JW,Sejnowski TJ

    Update date:2012-04-01 00:00:00

  • Nonmonotonic generalization bias of Gaussian mixture models.

    abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Akaho S,Kappen HJ

    Update date:2000-06-01 00:00:00

  • ASIC Implementation of a Nonlinear Dynamical Model for Hippocampal Prosthesis.

    abstract::A hippocampal prosthesis is a very large scale integration (VLSI) biochip that needs to be implanted in the biological brain to solve a cognitive dysfunction. In this letter, we propose a novel low-complexity, small-area, and low-power programmable hippocampal neural network application-specific integrated circuit (AS...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Qiao Z,Han Y,Han X,Xu H,Li WXY,Song D,Berger TW,Cheung RCC

    Update date:2018-09-01 00:00:00

  • On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    abstract::In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Kaabi MG,Tonnelier A,Martinez D

    Update date:2011-05-01 00:00:00

  • Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives.

    abstract::We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Inatsu Y,Sugita D,Toyoura K,Takeuchi I

    Update date:2020-10-01 00:00:00

  • Modeling short-term synaptic depression in silicon.

    abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Boegerhausen M,Suter P,Liu SC

    Update date:2003-02-01 00:00:00

  • An integral upper bound for neural network approximation.

    abstract::Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of n...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Kainen PC,Kůrková V

    Update date:2009-10-01 00:00:00

  • Parameter learning for alpha integration.

    abstract::In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g.,...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Choi H,Choi S,Choe Y

    Update date:2013-06-01 00:00:00
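    Amari's α-integration referenced in this abstract blends positive measures through a power transform whose exponent is set by α. A minimal sketch of the α-mean under the standard representation f(z) = z^((1-α)/2) follows; the function name, values, and weights are illustrative assumptions.

    ```python
    import numpy as np

    def alpha_mean(values, weights, alpha):
        """Amari's alpha-mean of positive values, with weights summing to one.

        Uses f(z) = z**((1 - alpha) / 2) for alpha != 1 and f(z) = log(z) in
        the limit alpha == 1; the mean is f^-1(sum_i w_i * f(z_i)).
        """
        v = np.asarray(values, dtype=float)
        w = np.asarray(weights, dtype=float)
        if alpha == 1:                       # limit case: weighted geometric mean
            return np.exp(np.sum(w * np.log(v)))
        p = (1.0 - alpha) / 2.0
        return np.sum(w * v**p) ** (1.0 / p)

    # One formula recovers the familiar means as special cases of alpha:
    vals, w = [1.0, 4.0], [0.5, 0.5]
    arith = alpha_mean(vals, w, alpha=-1)    # arithmetic mean: 2.5
    geom = alpha_mean(vals, w, alpha=1)      # geometric mean: 2.0
    harm = alpha_mean(vals, w, alpha=3)      # harmonic mean: 1.6
    ```

    Sweeping α interpolates continuously between these means, which is why learning α from data, as the paper proposes, amounts to choosing how conservatively the sources are combined.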

  • Time-varying perturbations can distinguish among integrate-to-threshold models for perceptual decision making in reaction time tasks.

    abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Zhou X,Wong-Lin K,Philip H

    Update date:2009-08-01 00:00:00

  • Methods for Assessment of Memory Reactivation.

    abstract::It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Liu S,Grosmark AD,Chen Z

    Update date:2018-08-01 00:00:00

  • Density-weighted Nyström method for computing large kernel eigensystems.

    abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Zhang K,Kwok JT

    Update date:2009-01-01 00:00:00
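    For context, the standard (equal-importance) Nyström approximation that this abstract contrasts with can be sketched as below; the density-weighted variant the paper proposes changes how samples are weighted and is not reproduced here. The rank-3 linear-kernel example and landmark indices are illustrative assumptions.

    ```python
    import numpy as np

    def nystrom(K, idx):
        """Standard Nystrom approximation of a PSD kernel matrix K.

        Uses m sampled columns: K ~= C W^+ C^T, where C = K[:, idx] is n x m
        and W = K[idx, idx] is the m x m kernel among the landmarks.
        """
        C = K[:, idx]
        W = K[np.ix_(idx, idx)]
        return C @ np.linalg.pinv(W) @ C.T

    # Illustrative example: a rank-3 linear kernel on random data. When the
    # landmark rows span the column space of K, the approximation is exact.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 3))
    K = X @ X.T                          # 50 x 50 PSD matrix of rank 3
    K_approx = nystrom(K, idx=np.array([0, 1, 2]))
    err = np.abs(K - K_approx).max()
    ```

    In practice K is full rank and the landmarks are sampled, so the approximation is inexact; the paper's observation is that weighting samples by the data density better matches the integral equation defining the kernel eigenfunctions.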

  • Scalable Semisupervised Functional Neurocartography Reveals Canonical Neurons in Behavioral Networks.

    abstract::Large-scale data collection efforts to map the brain are underway at multiple spatial and temporal scales, but all face fundamental problems posed by high-dimensional data and intersubject variability. Even seemingly simple problems, such as identifying a neuron/brain region across animals/subjects, become exponential...

    journal_title:Neural computation

    pub_type: Journal Article

    authors: Frady EP,Kapoor A,Horvitz E,Kristan WB Jr

    Update date:2016-08-01 00:00:00