Sufficient dimension reduction via squared-loss mutual information estimation.

Abstract:

The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant of mutual information as a dependency measure. We apply a density-ratio estimator for approximating squared-loss mutual information that is formulated as a minimum contrast estimator on parametric or nonparametric models. Since cross-validation is available for choosing an appropriate model, our method does not require any prespecified structure on the underlying distributions. We elucidate the asymptotic bias of our estimator on parametric models and the asymptotic convergence rate on nonparametric models. The convergence analysis utilizes the uniform tail-bound of a U-process, and the convergence rate is characterized by the bracketing entropy of the model. We then develop a natural gradient algorithm on the Grassmann manifold for sufficient subspace search. The analytic formula of our estimator allows us to compute the gradient efficiently. Numerical experiments show that the proposed method compares favorably with existing dimension-reduction approaches on artificial and benchmark data sets.

journal_name

Neural Comput

journal_title

Neural computation

authors

Suzuki T,Sugiyama M

doi

10.1162/NECO_a_00407

pub_date

2013-03-01 00:00:00

pages

725-758

issue

3

eissn

1530-888X

issn

0899-7667

journal_volume

25

pub_type

Journal Article

  • Permitted and forbidden sets in symmetric threshold-linear networks.

    abstract: The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, conve...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321192103

    authors: Hahnloser RH,Seung HS,Slotine JJ

    Update date: 2003-03-01 00:00:00

  • Learning only when necessary: better memories of correlated patterns in networks with bounded synapses.

    abstract: Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be av...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054615644

    authors: Senn W,Fusi S

    Update date: 2005-10-01 00:00:00

  • Parameter Identifiability in Statistical Machine Learning: A Review.

    abstract: This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00947

    authors: Ran ZY,Hu BG

    Update date: 2017-05-01 00:00:00

  • Regulation of ambient GABA levels by neuron-glia signaling for reliable perception of multisensory events.

    abstract: Activities of sensory-specific cortices are known to be suppressed when presented with a different sensory modality stimulus. This is referred to as cross-modal inhibition, for which the conventional synaptic mechanism is unlikely to work. Interestingly, the cross-modal inhibition could be eliminated when presented wi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00356

    authors: Hoshino O

    Update date: 2012-11-01 00:00:00

  • Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies.

    abstract: Event-driven simulation strategies were proposed recently to simulate integrate-and-fire (IF) type neuronal models. These strategies can lead to computationally efficient algorithms for simulating large-scale networks of neurons; most important, such approaches are more precise than traditional clock-driven numerical ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.9.2146

    authors: Rudolph M,Destexhe A

    Update date: 2006-09-01 00:00:00

  • An integral upper bound for neural network approximation.

    abstract: Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.04-08-745

    authors: Kainen PC,Kůrková V

    Update date: 2009-10-01 00:00:00

  • Multistability in spiking neuron models of delayed recurrent inhibitory loops.

    abstract: We consider the effect of the effective timing of a delayed feedback on the excitatory neuron in a recurrent inhibitory loop, when biological realities of firing and absolute refractory period are incorporated into a phenomenological spiking linear or quadratic integrate-and-fire neuron model. We show that such models...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.8.2124

    authors: Ma J,Wu J

    Update date: 2007-08-01 00:00:00

  • A neurocomputational approach to prepositional phrase attachment ambiguity resolution.

    abstract: A neurocomputational model based on emergent massively overlapping neural cell assemblies (CAs) for resolving prepositional phrase (PP) attachment ambiguity is described. PP attachment ambiguity is a well-studied task in natural language processing and is a case where semantics is used to determine the syntactic struc...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00290

    authors: Nadh K,Huyck C

    Update date: 2012-07-01 00:00:00

  • Discriminant component pruning: regularization and interpretation of multi-layered back-propagation networks.

    abstract: Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/089976699300016665

    authors: Koene RA,Takane Y

    Update date: 1999-04-01 00:00:00

  • Inhibition and Excitation Shape Activity Selection: Effect of Oscillations in a Decision-Making Circuit.

    abstract: Decision making is a complex task, and its underlying mechanisms that regulate behavior, such as the implementation of the coupling between physiological states and neural networks, are hard to decipher. To gain more insight into neural computations underlying ongoing binary decision-making tasks, we consider a neural...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01185

    authors: Bose T,Reina A,Marshall JAR

    Update date: 2019-05-01 00:00:00

  • Supervised learning in a recurrent network of rate-model neurons exhibiting frequency adaptation.

    abstract: For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054323017

    authors: Fortier PA,Guigon E,Burnod Y

    Update date: 2005-09-01 00:00:00

  • Topographic mapping of large dissimilarity data sets.

    abstract: Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00012

    authors: Hammer B,Hasenfuss A

    Update date: 2010-09-01 00:00:00

  • Slow feature analysis: a theoretical analysis of optimal free responses.

    abstract: Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invar...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603322297331

    authors: Wiskott L

    Update date: 2003-09-01 00:00:00

  • Higher-order statistics of input ensembles and the response of simple model neurons.

    abstract: Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321043702

    authors: Kuhn A,Aertsen A,Rotter S

    Update date: 2003-01-01 00:00:00

  • Propagating distributions up directed acyclic graphs.

    abstract: In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016881

    authors: Baum EB,Smith WD

    Update date: 1999-01-01 00:00:00

  • A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation.

    abstract: Bursting plays an important role in neural communication. At the population level, macroscopic bursting has been identified in populations of neurons that do not express intrinsic bursting mechanisms. For the analysis of phase transitions between bursting and non-bursting states, mean-field descriptions of macroscopic...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01300

    authors: Gast R,Schmidt H,Knösche TR

    Update date: 2020-09-01 00:00:00

  • A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory.

    abstract: We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01286

    authors: Helson P

    Update date: 2020-07-01 00:00:00

  • Visual Categorization with Random Projection.

    abstract: Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/NECO_a_00769

    authors: Arriaga RI,Rutter D,Cakmak M,Vempala SS

    Update date: 2015-10-01 00:00:00

  • Patterns of synchrony in neural networks with spike adaptation.

    abstract: We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of inte...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/08997660151134280

    authors: van Vreeswijk C,Hansel D

    Update date: 2001-05-01 00:00:00

  • Estimation and marginalization using the Kikuchi approximation methods.

    abstract: In this letter, we examine a general method of approximation, known as the Kikuchi approximation method, for finding the marginals of a product distribution, as well as the corresponding partition function. The Kikuchi approximation method defines a certain constrained optimization problem, called the Kikuchi problem,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054026693

    authors: Pakzad P,Anantharam V

    Update date: 2005-08-01 00:00:00

  • The computational structure of spike trains.

    abstract: Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-07-678

    authors: Haslinger R,Klinkner KL,Shalizi CR

    Update date: 2010-01-01 00:00:00

  • Synchrony of neuronal oscillations controlled by GABAergic reversal potentials.

    abstract: GABAergic synapse reversal potential is controlled by the concentration of chloride. This concentration can change significantly during development and as a function of neuronal activity. Thus, GABA inhibition can be hyperpolarizing, shunting, or partially depolarizing. Previous results pinpointed the conditions under...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2007.19.3.706

    authors: Jeong HY,Gutkin B

    Update date: 2007-03-01 00:00:00

  • NMDA Receptor Alterations After Mild Traumatic Brain Injury Induce Deficits in Memory Acquisition and Recall.

    abstract: Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disru...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01343

    authors: Gabrieli D,Schumm SN,Vigilante NF,Meaney DF

    Update date: 2021-01-01 00:00:00

  • Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives.

    abstract: We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01307

    authors: Inatsu Y,Sugita D,Toyoura K,Takeuchi I

    Update date: 2020-10-01 00:00:00

  • Gaussian process approach to spiking neurons for inhomogeneous Poisson inputs.

    abstract: This article presents a new theoretical framework to consider the dynamics of a stochastic spiking neuron model with general membrane response to input spike. We assume that the input spikes obey an inhomogeneous Poisson process. The stochastic process of the membrane potential then becomes a gaussian process. When a ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601317098529

    authors: Amemori KI,Ishii S

    Update date: 2001-12-01 00:00:00

  • Characterization of minimum error linear coding with sensory and neural noise.

    abstract: Robust coding has been proposed as a solution to the problem of minimizing decoding error in the presence of neural noise. Many real-world problems, however, have degradation in the input signal, not just in neural representations. This generalized problem is more relevant to biological sensory coding where internal n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00181

    authors: Doi E,Lewicki MS

    Update date: 2011-10-01 00:00:00

  • Sequential Tests for Large-Scale Learning.

    abstract: We argue that when faced with big data sets, learning and inference algorithms should compute updates using only subsets of data items. We introduce algorithms that use sequential hypothesis tests to adaptively select such a subset of data points. The statistical properties of this subsampling process can be used to c...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00796

    authors: Korattikara A,Chen Y,Welling M

    Update date: 2016-01-01 00:00:00

  • Transmission of population-coded information.

    abstract: As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00227

    authors: Renart A,van Rossum MC

    Update date: 2012-02-01 00:00:00

  • A finite-sample, distribution-free, probabilistic lower bound on mutual information.

    abstract: For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky-Kiefer-Wolfowitz inequality and is distribution free. A quadr...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00144

    authors: VanderKraats ND,Banerjee A

    Update date: 2011-07-01 00:00:00

  • Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms.

    abstract: In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/0899766053011555

    authors: Wörgötter F,Porr B

    Update date: 2005-02-01 00:00:00