Generalization and selection of examples in feedforward neural networks.

Abstract:

In this work, we study how the selection of examples affects learning in a Boolean neural network, and how this effect relates to the complexity of the target function and to the network architecture. We analyze the generalization capacity for different target functions with particular architectures through an analytical calculation of the minimum number of examples needed to obtain full generalization (i.e., zero generalization error). The analysis of the training sets associated with this parameter leads us to propose a general, architecture-independent criterion for the selection of training examples. The criterion was checked through numerical simulations for various particular target functions with particular architectures, as well as for random target functions in a nonoverlapping receptive field perceptron. In all cases, the selection sampling criterion led to an improvement in generalization capacity compared with pure random sampling. We also show that for the parity problem, one of the most commonly used problems for testing learning algorithms, only the use of the whole set of examples ensures global learning in a depth-two architecture. We show that this difficulty can be overcome by considering a tree-structured network of depth 2log2(N)-1.

journal_name

Neural Comput

journal_title

Neural computation

authors

Franco L,Cannas SA

doi

10.1162/089976600300014999

subject

Has Abstract

pub_date

2000-10-01 00:00:00

pages

2405-2426

issue

10

eissn

1530-888X

issn

0899-7667

journal_volume

12

pub_type

Journal Article
  • Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    abstract::Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distributio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-08-922

    authors: Morimura T,Uchibe E,Yoshimoto J,Peters J,Doya K

    update_date:2010-02-01 00:00:00

  • Connecting cortical and behavioral dynamics: bimanual coordination.

    abstract::For the paradigmatic case of bimanual coordination, we review levels of organization of behavioral dynamics and present a description in terms of modes of behavior. We briefly review a recently developed model of spatiotemporal brain activity that is based on short- and long-range connectivity of neural ensembles. Thi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300016954

    authors: Jirsa VK,Fuchs A,Kelso JA

    update_date:1998-11-15 00:00:00

  • On the use of analytical expressions for the voltage distribution to analyze intracellular recordings.

    abstract::Different analytical expressions for the membrane potential distribution of membranes subject to synaptic noise have been proposed and can be very helpful in analyzing experimental data. However, all of these expressions are either approximations or limit cases, and it is not clear how they compare and which expressio...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2006.18.12.2917

    authors: Rudolph M,Destexhe A

    update_date:2006-12-01 00:00:00

  • Kernels for longitudinal data with variable sequence length and sampling intervals.

    abstract::We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative models, for the time series. After demonstrating their utility in likelihood ratio classifiers (a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00164

    authors: Lu Z,Leen TK,Kaye J

    update_date:2011-09-01 00:00:00

  • Training nu-support vector classifiers: theory and algorithms.

    abstract::The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu on controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601750399335

    authors: Chang CC,Lin CJ

    update_date:2001-09-01 00:00:00

  • Does high firing irregularity enhance learning?

    abstract::In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism, which has been shown to be the most likely candidate to reflect the mechanism used in the brain for reproducing the highly irregular cortical neuron firing at high ra...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00090

    authors: Christodoulou C,Cleanthous A

    update_date:2011-03-01 00:00:00

  • Cortical spatiotemporal dimensionality reduction for visual grouping.

    abstract::The visual systems of many mammals, including humans, are able to integrate the geometric information of visual stimuli and perform cognitive tasks at the first stages of the cortical processing. This is thought to be the result of a combination of mechanisms, which include feature extraction at the single cell level ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00738

    authors: Cocci G,Barbieri D,Citti G,Sarti A

    update_date:2015-06-01 00:00:00

  • The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex.

    abstract::Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model par...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016511

    authors: Shinomoto S,Sakai Y,Funahashi S

    update_date:1999-05-15 00:00:00

  • Synchrony and desynchrony in integrate-and-fire oscillators.

    abstract::Due to many experimental reports of synchronous neural activity in the brain, there is much interest in understanding synchronization in networks of neural oscillators and its potential for computing perceptual organization. Contrary to Hopfield and Herz (1995), we find that networks of locally coupled integrate-and-f...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016160

    authors: Campbell SR,Wang DL,Jayaprakash C

    update_date:1999-10-01 00:00:00

  • Classification of temporal patterns in dynamic biological networks.

    abstract::A general method is presented to classify temporal patterns generated by rhythmic biological networks when synaptic connections and cellular properties are known. The method is discrete in nature and relies on algebraic properties of state transitions and graph theory. Elements of the set of rhythms generated by a net...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017160

    authors: Roberts PD

    update_date:1998-10-01 00:00:00

  • Normalization enables robust validation of disparity estimates from neural populations.

    abstract::Binocular fusion takes place over a limited region smaller than one degree of visual angle (Panum's fusional area), which is on the order of the range of preferred disparities measured in populations of disparity-tuned neurons in the visual cortex. However, the actual range of binocular disparities encountered in natu...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.05-07-532

    authors: Tsang EK,Shi BE

    update_date:2008-10-01 00:00:00

  • Spiking neural P systems with a generalized use of rules.

    abstract::Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by spiking neurons, where the spiking rules are usually used in a sequential way (an applicable rule is applied one time at a step) or an exhaustive way (an applicable rule is applied as many times as possible at a s...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00665

    authors: Zhang X,Wang B,Pan L

    update_date:2014-12-01 00:00:00

  • A Unifying Framework of Synaptic and Intrinsic Plasticity in Neural Populations.

    abstract::A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input sig...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01057

    authors: Leugering J,Pipa G

    update_date:2018-04-01 00:00:00

  • McCulloch-Pitts Brains and Pseudorandom Functions.

    abstract::In a pioneering classic, Warren McCulloch and Walter Pitts proposed a model of the central nervous system. Motivated by EEG recordings of normal brain activity, Chvátal and Goldsmith asked whether these dynamical systems can be engineered to produce trajectories that are irregular, disorderly, and apparently unpredict...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00841

    authors: Chvátal V,Goldsmith M,Yang N

    update_date:2016-06-01 00:00:00

  • Effects of fast presynaptic noise in attractor neural networks.

    abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606775623342

    authors: Cortes JM,Torres JJ,Marro J,Garrido PL,Kappen HJ

    update_date:2006-03-01 00:00:00

  • Learning Slowness in a Sparse Model of Invariant Feature Detection.

    abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00743

    authors: Chandrapala TN,Shi BE

    update_date:2015-07-01 00:00:00

  • Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses.

    abstract::Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of syn...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00956

    authors: Elliott T

    update_date:2017-06-01 00:00:00

  • Distributed control of uncertain systems using superpositions of linear operators.

    abstract::Control in the natural environment is difficult in part because of uncertainty in the effect of actions. Uncertainty can be due to added motor or sensory noise, unmodeled dynamics, or quantization of sensory feedback. Biological systems are faced with further difficulties, since control must be performed by networks o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00151

    authors: Sanger TD

    update_date:2011-08-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.6.1135

    authors: Casey M

    update_date:1996-08-15 00:00:00

  • On the relation of slow feature analysis and Laplacian eigenmaps.

    abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00214

    authors: Sprekeler H

    update_date:2011-12-01 00:00:00

  • Generalization and multirate models of motor adaptation.

    abstract::When subjects adapt their reaching movements in the setting of a systematic force or visual perturbation, generalization of adaptation can be assessed psychophysically in two ways: by testing untrained locations in the work space at the end of adaptation (slow postadaptation generalization) or by determining the influ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00262

    authors: Tanaka H,Krakauer JW,Sejnowski TJ

    update_date:2012-04-01 00:00:00

  • Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    abstract::We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00341

    authors: Tanaka T,Aoyagi T,Kaneko T

    update_date:2012-10-01 00:00:00

  • Optimal sequential detection of stimuli from multiunit recordings taken in densely populated brain regions.

    abstract::We address the problem of detecting the presence of a recurring stimulus by monitoring the voltage on a multiunit electrode located in a brain region densely populated by stimulus reactive neurons. Published experimental results suggest that under these conditions, when a stimulus is present, the measurements are gaus...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00257

    authors: Nossenson N,Messer H

    update_date:2012-04-01 00:00:00

  • Employing the zeta-transform to optimize the calculation of the synaptic conductance of NMDA and other synaptic channels in network simulations.

    abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976698300017061

    authors: Köhn J,Wörgötter F

    update_date:1998-10-01 00:00:00

  • Simultaneous rate-synchrony codes in populations of spiking neurons.

    abstract::Firing rates and synchronous firing are often simultaneously relevant signals, and they independently or cooperatively represent external sensory inputs, cognitive events, and environmental situations such as body position. However, how rates and synchrony comodulate and which aspects of inputs are effectively encoded...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976606774841521

    authors: Masuda N

    update_date:2006-01-01 00:00:00

  • Spiking neural P systems with weights.

    abstract::A variant of spiking neural P systems with positive or negative weights on synapses is introduced, where the rules of a neuron fire when the potential of that neuron equals a given value. The involved values-weights, firing thresholds, potential consumed by each rule-can be real (computable) numbers, rational numbers,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00022

    authors: Wang J,Hoogeboom HJ,Pan L,Păun G,Pérez-Jiménez MJ

    update_date:2010-10-01 00:00:00

  • Irregular firing of isolated cortical interneurons in vitro driven by intrinsic stochastic mechanisms.

    abstract::Pharmacologically isolated GABAergic irregular spiking and stuttering interneurons in the mouse visual cortex display highly irregular spike times, with high coefficients of variation approximately 0.9-3, in response to a depolarizing, constant current input. This is in marked contrast to cortical pyramidal cells, whi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.44

    authors: Englitz B,Stiefel KM,Sejnowski TJ

    update_date:2008-01-01 00:00:00

  • A general probability estimation approach for neural comp.

    abstract::We describe an analytical framework for the adaptations of neural systems that adapt its internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015862

    authors: Khaikine M,Holthausen K

    update_date:2000-02-01 00:00:00

  • Higher-order statistics of input ensembles and the response of simple model neurons.

    abstract::Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on t...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976603321043702

    authors: Kuhn A,Aertsen A,Rotter S

    update_date:2003-01-01 00:00:00

  • State-Space Representations of Deep Neural Networks.

    abstract::This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k -many skip connections into network architectures, such as residual networks and additive dense n...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01165

    authors: Hauser M,Gunn S,Saab S Jr,Ray A

    update_date:2019-03-01 00:00:00