abstract::In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. We show that in the case of output data observed with noise, regularized networks are capable of learning and approximating (on compacta) elements of certain classes of Sobolev spaces, known as reproducing kernel Hilbert spaces (RKHS), at a nonparametric rate that optimally exploits the smoothness properties of the unknown mapping. In particular we show that the total squared error, given by the sum of the squared bias and the variance, will approach zero at a rate of n^(-2m/(2m+1)), where m denotes the order of differentiability of the true unknown function. On the other hand, if the unknown mapping is a continuous function but does not belong to an RKHS, then there still exists a unique regularized solution, but this is no longer guaranteed to converge in mean square to a well-defined limit. Further, even if such a solution converges, the total squared error is bounded away from zero for all n sufficiently large.
journal_name:Neural Comput
journal_title:Neural computation
authors:Corradi V, White H
doi:10.1162/neco.1995.7.6.1225
subject:Has Abstract
pub_date:1995-11-01 00:00:00
pages:1225-44
issue:6
eissn:0899-7667
issn:1530-888X
journal_volume:7
pub_type: Journal Article
abstract::In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism, which has been shown to be the most likely candidate to reflect the mechanism used in the brain for reproducing the highly irregular cortical neuron firing at high ra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00090
update_date:2011-03-01 00:00:00
abstract::A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660360675017
update_date:2003-08-01 00:00:00
abstract::A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology ex...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00615
update_date:2014-08-01 00:00:00
abstract::The brain is known to be active even when not performing any overt cognitive tasks, and often it engages in involuntary mind wandering. This resting state has been extensively characterized in terms of fMRI-derived brain networks. However, an alternate method has recently gained popularity: EEG microstate analysis. Pr...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01229
update_date:2019-11-01 00:00:00
abstract::Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this letter is the construction of biologically motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766053019944
update_date:2005-03-01 00:00:00
abstract::The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu for controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they a...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601750399335
update_date:2001-09-01 00:00:00
abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00683
update_date:2015-01-01 00:00:00
abstract::Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binar...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00889
update_date:2016-11-01 00:00:00
abstract::We consider the effect of the effective timing of a delayed feedback on the excitatory neuron in a recurrent inhibitory loop, when biological realities of firing and absolute refractory period are incorporated into a phenomenological spiking linear or quadratic integrate-and-fire neuron model. We show that such models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.8.2124
update_date:2007-08-01 00:00:00
abstract::This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporally correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where spatial and temporal probab...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015079
update_date:2000-09-01 00:00:00
abstract::The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014773
update_date:2000-12-01 00:00:00
abstract::In the past decade the importance of synchronized dynamics in the brain has emerged from both empirical and theoretical perspectives. Fast dynamic synchronous interactions of an oscillatory or nonoscillatory nature may constitute a form of temporal coding that underlies feature binding and perceptual synthesis. The re...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016287
update_date:1999-08-15 00:00:00
abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00214
update_date:2011-12-01 00:00:00
abstract::Experimental studies of reasoning and planned behavior have provided evidence that nervous systems use internal models to perform predictive motor control, imagery, inference, and planning. Classical (model-free) reinforcement learning approaches omit such a model; standard sensorimotor models account for forward and ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606776240995
update_date:2006-05-01 00:00:00
abstract::Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is ext...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00882
update_date:2016-10-01 00:00:00
abstract::This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distrib...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1995.7.5.867
update_date:1995-09-01 00:00:00
abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00385
update_date:2013-01-01 00:00:00
abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2008.04-07-506
update_date:2008-07-01 00:00:00
abstract::We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moder...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602320263971
update_date:2002-09-01 00:00:00
abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01275
update_date:2020-05-01 00:00:00
abstract::The new time-organized map (TOM) is presented for a better understanding of the self-organization and geometric structure of cortical signal representations. The algorithm extends the common self-organizing map (SOM) from the processing of purely spatial signals to the processing of spatiotemporal signals. The main ad...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603765202695
update_date:2003-05-01 00:00:00
abstract::Independent component analysis (ICA) aims at separating a multivariate signal into independent nongaussian signals by optimizing a contrast function with no knowledge of the mixing mechanism. Despite the availability of a constellation of contrast functions, a Hartley-entropy-based ICA contrast endowed with the discri...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00700
update_date:2015-03-01 00:00:00
abstract::The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for class...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602753633411
update_date:2002-05-01 00:00:00
abstract::The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their beh...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00422
update_date:2013-04-01 00:00:00
abstract::It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01090
update_date:2018-08-01 00:00:00
abstract::We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603321891846
update_date:2003-07-01 00:00:00
abstract::The ability to encode and transmit a signal is an essential property that many neuronal circuits in sensory areas must demonstrate in addition to any processing they may provide. It is known that an appropriate level of lateral inhibition, as observed in these areas, can significantly improve the encoding ability of a...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00100
update_date:2011-04-01 00:00:00
abstract::A minimal model is presented to explain changes in frequency, shape, and amplitude of Ca2+ oscillations in the neuroendocrine melanotrope cell of Xenopus laevis. It describes the cell as a plasma membrane oscillator with influx of extracellular Ca2+ via voltage-gated Ca2+ channels in the plasma membrane. The Ca2+ osci...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014655
update_date:2001-01-01 00:00:00
abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.11-07-651
update_date:2009-01-01 00:00:00
abstract::We describe an analytical framework for the adaptation of neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that al...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015862
update_date:2000-02-01 00:00:00