abstract::We discuss robustness against mislabeling in multiclass labels for classification problems and propose two boosting algorithms, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence. These two boosting algorithms are closely related to models of mislabeling in which a label is erroneously exchanged for another. For both boosting algorithms, theoretical aspects supporting robustness against mislabeling are explored. We apply the two proposed boosting methods to synthetic and real data sets to investigate their performance, focusing on robustness, and confirm the validity of the proposed methods.
journal_name:Neural Comput
journal_title:Neural computation
authors:Takenouchi T, Eguchi S, Murata N, Kanamori T
doi:10.1162/neco.2007.11-06-400
subject:Has Abstract
pub_date:2008-06-01 00:00:00
pages:1596-1630
issue:6
issn:0899-7667
eissn:1530-888X
journal_volume:20
pub_type: Letter
abstract::We present formal specification and verification of a robot moving in a complex network, using temporal sequence learning to avoid obstacles. Our aim is to demonstrate the benefit of using a formal approach to analyze such a system as a complementary approach to simulation. We first describe a classical closed-loop si...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00493
Updated: 2013-11-01 00:00:00
abstract::Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00801
Updated: 2016-02-01 00:00:00
abstract::The motion of an object (such as a wheel rotating) is seen as consistent independent of its position and size on the retina. Neurons in higher cortical visual areas respond to these global motion stimuli invariantly, but neurons in early cortical areas with small receptive fields cannot represent this motion, not only...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.1.139
Updated: 2007-01-01 00:00:00
abstract::Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00127
Updated: 2011-06-01 00:00:00
abstract::Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invar...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603322297331
Updated: 2003-09-01 00:00:00
abstract::The dynamic formation of groups of neurons--neuronal assemblies--is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00502
Updated: 2013-11-01 00:00:00
abstract::We simulate the inhibition of Ia-glutamatergic excitatory postsynaptic potential (EPSP) by preceding it with glycinergic recurrent (REN) and reciprocal (REC) inhibitory postsynaptic potentials (IPSPs). The inhibition is evaluated in the presence of voltage-dependent conductances of sodium, delayed rectifier potassium,...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00375
Updated: 2013-01-01 00:00:00
abstract::Natural gradient learning is known to be efficient in escaping plateau, which is a main cause of the slow learning speed of neural networks. The adaptive natural gradient learning method for practical implementation also has been developed, and its advantage in real-world problems has been confirmed. In this letter, w...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604322742065
Updated: 2004-02-01 00:00:00
abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760128090
Updated: 2002-08-01 00:00:00
abstract::Temporal coding is studied for an oscillatory neural network model with synchronization and acceleration. The latter mechanism refers to increasing (decreasing) the phase velocity of each unit for stronger (weaker) or more coherent (decoherent) input from the other units. It has been demonstrated that acceleration gen...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2008.09-06-342
Updated: 2008-07-01 00:00:00
abstract::We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-call...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00789
Updated: 2015-12-01 00:00:00
abstract::We examined how the depression of intracortical inhibition due to a reduction in ambient GABA concentration impairs perceptual information processing in schizophrenia. A neural network model with a gliotransmission-mediated ambient GABA regulatory mechanism was simulated. In the network, interneuron-to-glial-cell and ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00519
Updated: 2013-12-01 00:00:00
abstract::The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760805368
Updated: 2002-12-01 00:00:00
abstract::When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00757
Updated: 2015-08-01 00:00:00
abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.12-07-678
Updated: 2010-01-01 00:00:00
abstract::We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events....
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00066
Updated: 2011-01-01 00:00:00
abstract::The mutual information between a set of stimuli and the elicited neural responses is compared to the corresponding decoded information. The decoding procedure is presented as an artificial distortion of the joint probabilities between stimuli and responses. The information loss is quantified. Whenever the probabilitie...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602317318947
Updated: 2002-04-01 00:00:00
abstract::It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01090
Updated: 2018-08-01 00:00:00
abstract::The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes and a nonlinear function that r...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.02-09-956
Updated: 2010-03-01 00:00:00
abstract::We present a system for the automatic interpretation of cluttered scenes containing multiple partly occluded objects in front of unknown, complex backgrounds. The system is based on an extended elastic graph matching algorithm that allows the explicit modeling of partial occlusions. Our approach extends an earlier sys...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.6.1441
Updated: 2006-06-01 00:00:00
abstract::Recent experimental findings have shown the presence of robust and cell-type-specific intraburst firing patterns in bursting neurons. We address the problem of characterizing these patterns under the assumption that the bursts exhibit well-defined firing time distributions. We propose a method for estimating these dis...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.07-07-571
Updated: 2009-04-01 00:00:00
abstract::We propose a scalable semiparametric Bayesian model to capture dependencies among multiple neurons by detecting their cofiring (possibly with some lag time) patterns over time. After discretizing time so there is at most one spike at each interval, the resulting sequence of 1s (spike) and 0s (silence) for each neuron ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00631
Updated: 2014-09-01 00:00:00
abstract::Independent component analysis (ICA) finds a linear transformation to variables that are maximally statistically independent. We examine ICA and algorithms for finding the best transformation from the point of view of maximizing the likelihood of the data. In particular, we discuss the way in which scaling of the unmi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016043
Updated: 1999-11-15 00:00:00
abstract::Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). Th...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766053723096
Updated: 2005-07-01 00:00:00
abstract::A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input sig...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01057
Updated: 2018-04-01 00:00:00
abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1997.9.4.701
Updated: 1997-05-15 00:00:00
abstract::Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01319
Updated: 2020-12-01 00:00:00
abstract::This article presents a reinforcement learning framework for continuous-time dynamical systems without a priori discretization of time, state, and action. Based on the Hamilton-Jacobi-Bellman (HJB) equation for infinite-horizon, discounted reward problems, we derive algorithms for estimating value functions and improv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015961
Updated: 2000-01-01 00:00:00
abstract::Real classification problems involve structured data that can be essentially grouped into a relatively small number of clusters. It is shown that, under a local clustering condition, a set of points of a given class, embedded in binary space by a set of randomly parameterized surfaces, is linearly separable from other...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601753196012
Updated: 2001-11-01 00:00:00
abstract::Due to many experimental reports of synchronous neural activity in the brain, there is much interest in understanding synchronization in networks of neural oscillators and its potential for computing perceptual organization. Contrary to Hopfield and Herz (1995), we find that networks of locally coupled integrate-and-f...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016160
Updated: 1999-10-01 00:00:00