Investigating the fault tolerance of neural networks.

Abstract:

Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). This letter investigates the method of achieving the highest PFT per network size (total number of units and connections) for classification problems. It concludes that for non-toy problems, there exists a normally trained network of optimal size that produces the smallest fully fault-tolerant network when replicated. In addition, it shows that for particular network sizes, the best level of PFT is achieved by training a network of that size for fault tolerance. The results and discussion demonstrate how the outcome depends on the levels of saturation of the network nodes when classifying data points. With simple training tasks, where the complexity of the problem and the size of the network are well within the ability of the training method, the hidden-layer nodes operate close to their saturation points, and classification is clean. Under such circumstances, replicating the smallest normally trained correct network yields the highest PFT for any given network size. For hard training tasks (difficult classification problems or network sizes close to the minimum), normal training obtains networks that do not operate close to their saturation points, and outputs are not as close to their targets. In this case, training a larger network for fault tolerance yields better PFT than replicating a smaller, normally trained network. However, since fault-tolerant training on its own produces networks that operate closer to their linear areas than normal training, replicating normally trained networks ultimately leads to better PFT than replicating fault-tolerant networks of the same initial size.
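The redundancy scheme the abstract describes can be sketched minimally: duplicate each hidden unit k times and divide its outgoing weight by k, so the fault-free output is unchanged while any single-unit fault perturbs the output by only 1/k of the unit's contribution. All names and the toy weights below are hypothetical illustrations, not taken from the paper.

```python
import math

def forward(x, W1, b1, w2, b2):
    # single hidden layer with tanh units, scalar linear output
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
         for row, bi in zip(W1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

def replicate(W1, b1, w2, k):
    # duplicate each hidden unit k times; scale its outgoing
    # weight by 1/k so the fault-free output is preserved
    W1r = [row for row in W1 for _ in range(k)]
    b1r = [bi for bi in b1 for _ in range(k)]
    w2r = [wi / k for wi in w2 for _ in range(k)]
    return W1r, b1r, w2r

# toy "trained" weights (assumed, for illustration only)
W1 = [[2.0, -0.5], [0.5, 1.5]]
b1 = [0.1, -0.2]
w2 = [1.0, -1.0]
b2 = 0.05
x = [0.3, 0.7]

y = forward(x, W1, b1, w2, b2)
W1r, b1r, w2r = replicate(W1, b1, w2, 3)
yr = forward(x, W1r, b1r, w2r, b2)
assert abs(y - yr) < 1e-12  # replication leaves the output unchanged

# fault model: one replicated unit's outgoing connection is lost;
# the output moves by only a third of that unit's contribution
w2f = list(w2r)
w2f[0] = 0.0
yf = forward(x, W1r, b1r, w2f, b2)
print(round(abs(y - yf), 4))  # prints 0.1121
```

Note that this illustrates only the redundancy route to PFT; the fault-tolerant training route the abstract compares it against would instead modify the training objective.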

journal_name: Neural Comput

journal_title: Neural computation

authors: Tchernev EB,Mulvaney RG,Phatak DS

doi: 10.1162/0899766053723096

subject: Has Abstract

pub_date: 2005-07-01 00:00:00

pages: 1646-1664

issue: 7

issn: 0899-7667

eissn: 1530-888X

journal_volume: 17

pub_type: Journal Article
  • Minimizing binding errors using learned conjunctive features.

    abstract::We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors-Malsburg's classical binding problem. We begin by deriving an analytical model that make...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015574

    authors: Mel BW,Fiser J

    update_date:2000-04-01 00:00:00

  • A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory.

    abstract::We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Th...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01286

    authors: Helson P

    update_date:2020-07-01 00:00:00

  • Online adaptive decision trees.

    abstract::Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766041336396

    authors: Basak J

    update_date:2004-09-01 00:00:00

  • Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms.

    abstract::In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward...

    journal_title:Neural computation

    pub_type: Journal Article, Review

    doi:10.1162/0899766053011555

    authors: Wörgötter F,Porr B

    update_date:2005-02-01 00:00:00

  • Supervised learning in a recurrent network of rate-model neurons exhibiting frequency adaptation.

    abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766054323017

    authors: Fortier PA,Guigon E,Burnod Y

    update_date:2005-09-01 00:00:00

  • Statistical computer model analysis of the reciprocal and recurrent inhibitions of the Ia-EPSP in α-motoneurons.

    abstract::We simulate the inhibition of Ia-glutamatergic excitatory postsynaptic potential (EPSP) by preceding it with glycinergic recurrent (REN) and reciprocal (REC) inhibitory postsynaptic potentials (IPSPs). The inhibition is evaluated in the presence of voltage-dependent conductances of sodium, delayed rectifier potassium,...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00375

    authors: Gradwohl G,Grossman Y

    update_date:2013-01-01 00:00:00

  • Learning Hough transform: a neural network model.

    abstract::A single-layered Hough transform network is proposed that accepts image coordinates of each object pixel as input and produces a set of outputs that indicate the belongingness of the pixel to a particular structure (e.g., a straight line). The network is able to learn adaptively the parametric forms of the linear segm...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601300014501

    authors: Basak J

    update_date:2001-03-01 00:00:00

  • Spikernels: predicting arm movements by embedding population spike rate patterns in inner-product spaces.

    abstract::Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this letter is the construction of biologically motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053019944

    authors: Shpigelman L,Singer Y,Paz R,Vaadia E

    update_date:2005-03-01 00:00:00

  • Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies.

    abstract::Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron sp...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300014872

    authors: Martignon L,Deco G,Laskey K,Diamond M,Freiwald W,Vaadia E

    update_date:2000-11-01 00:00:00

  • Synchrony and desynchrony in integrate-and-fire oscillators.

    abstract::Due to many experimental reports of synchronous neural activity in the brain, there is much interest in understanding synchronization in networks of neural oscillators and its potential for computing perceptual organization. Contrary to Hopfield and Herz (1995), we find that networks of locally coupled integrate-and-f...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016160

    authors: Campbell SR,Wang DL,Jayaprakash C

    update_date:1999-10-01 00:00:00

  • Deficient GABAergic gliotransmission may cause broader sensory tuning in schizophrenia.

    abstract::We examined how the depression of intracortical inhibition due to a reduction in ambient GABA concentration impairs perceptual information processing in schizophrenia. A neural network model with a gliotransmission-mediated ambient GABA regulatory mechanism was simulated. In the network, interneuron-to-glial-cell and ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00519

    authors: Hoshino O

    update_date:2013-12-01 00:00:00

  • The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.1996.8.6.1135

    authors: Casey M

    update_date:1996-08-15 00:00:00

  • Mismatched training and test distributions can outperform matched ones.

    abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00697

    authors: González CR,Abu-Mostafa YS

    update_date:2015-02-01 00:00:00

  • Irregular firing of isolated cortical interneurons in vitro driven by intrinsic stochastic mechanisms.

    abstract::Pharmacologically isolated GABAergic irregular spiking and stuttering interneurons in the mouse visual cortex display highly irregular spike times, with high coefficients of variation approximately 0.9-3, in response to a depolarizing, constant current input. This is in marked contrast to cortical pyramidal cells, whi...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.20.1.44

    authors: Englitz B,Stiefel KM,Sejnowski TJ

    update_date:2008-01-01 00:00:00

  • Simultaneous Estimation of Nongaussian Components and Their Correlation Structure.

    abstract::The statistical dependencies that independent component analysis (ICA) cannot remove often provide rich information beyond the linear independent components. It would thus be very useful to estimate the dependency structure from data. While such models have been proposed, they have usually concentrated on higher-order...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01006

    authors: Sasaki H,Gutmann MU,Shouno H,Hyvärinen A

    update_date:2017-11-01 00:00:00

  • Spike train decoding without spike sorting.

    abstract::We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used wit...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2008.02-07-478

    authors: Ventura V

    update_date:2008-04-01 00:00:00

  • Synchronized firings in the networks of class 1 excitable neurons with excitatory and inhibitory connections and their dependences on the forms of interactions.

    abstract::Synchronized firings in the networks of class 1 excitable neurons with excitatory and inhibitory connections are investigated, and their dependences on the forms of interactions are analyzed. As the forms of interactions, we treat the double exponential coupling and the interactions derived from it: pulse coupling, ex...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/0899766053630387

    authors: Kanamaru T,Sekine M

    update_date:2005-06-01 00:00:00

  • The computational structure of spike trains.

    abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco.2009.12-07-678

    authors: Haslinger R,Klinkner KL,Shalizi CR

    update_date:2010-01-01 00:00:00

  • Random embedding machines for pattern recognition.

    abstract::Real classification problems involve structured data that can be essentially grouped into a relatively small number of clusters. It is shown that, under a local clustering condition, a set of points of a given class, embedded in binary space by a set of randomly parameterized surfaces, is linearly separable from other...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601753196012

    authors: Baram Y

    update_date:2001-11-01 00:00:00

  • Propagating distributions up directed acyclic graphs.

    abstract::In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding o...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976699300016881

    authors: Baum EB,Smith WD

    update_date:1999-01-01 00:00:00

  • Evaluating auditory performance limits: II. One-parameter discrimination with random-level variation.

    abstract::Previous studies have combined analytical models of stochastic neural responses with signal detection theory (SDT) to predict psychophysical performance limits; however, these studies have typically been limited to simple models and simple psychophysical tasks. A companion article in this issue ("Evaluating Auditory P...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601750541813

    authors: Heinz MG,Colburn HS,Carney LH

    update_date:2001-10-01 00:00:00

  • Neuronal assembly dynamics in supervised and unsupervised learning scenarios.

    abstract::The dynamic formation of groups of neurons--neuronal assemblies--is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00502

    authors: Moioli RC,Husbands P

    update_date:2013-11-01 00:00:00

  • Enhanced stimulus encoding capabilities with spectral selectivity in inhibitory circuits by STDP.

    abstract::The ability to encode and transmit a signal is an essential property that must demonstrate many neuronal circuits in sensory areas in addition to any processing they may provide. It is known that an appropriate level of lateral inhibition, as observed in these areas, can significantly improve the encoding ability of a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00100

    authors: Coulon A,Beslon G,Soula HA

    update_date:2011-04-01 00:00:00

  • Parameter Identifiability in Statistical Machine Learning: A Review.

    abstract::This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00947

    authors: Ran ZY,Hu BG

    update_date:2017-05-01 00:00:00

  • Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.

    abstract::We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/neco_a_01329

    authors: Kent SJ,Frady EP,Sommer FT,Olshausen BA

    update_date:2020-12-01 00:00:00

  • Learning object representations using a priori constraints within ORASSYLL.

    abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976601300014583

    authors: Krüger N

    update_date:2001-02-01 00:00:00

  • A theory of slow feature analysis for transformation-based input signals with an application to complex cells.

    abstract::We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00072

    authors: Sprekeler H,Wiskott L

    update_date:2011-02-01 00:00:00

  • Synchrony in heterogeneous networks of spiking neurons.

    abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/089976600300015286

    authors: Neltner L,Hansel D,Mato G,Meunier C

    update_date:2000-07-01 00:00:00

  • Fast recursive filters for simulating nonlinear dynamic systems.

    abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...

    journal_title:Neural computation

    pub_type: Letter

    doi:10.1162/neco.2008.04-07-506

    authors: van Hateren JH

    update_date:2008-07-01 00:00:00

  • A Novel Reconstruction Framework for Time-Encoded Signals with Integrate-and-Fire Neurons.

    abstract::Integrate-and-fire neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding a...

    journal_title:Neural computation

    pub_type: Journal Article

    doi:10.1162/NECO_a_00764

    authors: Florescu D,Coca D

    update_date:2015-09-01 00:00:00