abstract::We argue that when faced with big data sets, learning and inference algorithms should compute updates using only subsets of data items. We introduce algorithms that use sequential hypothesis tests to adaptively select such a subset of data points. The statistical properties of this subsampling process can be used to control the efficiency and accuracy of learning or inference. In the context of learning by optimization, we test for the probability that the update direction is no more than 90 degrees in the wrong direction. In the context of posterior inference using Markov chain Monte Carlo, we test for the probability that our decision to accept or reject a sample is wrong. We experimentally evaluate our algorithms on a number of models and data sets.
journal_name:Neural Comput
journal_title:Neural computation
authors:Korattikara A,Chen Y,Welling M
doi:10.1162/NECO_a_00796
subject:Has Abstract
pub_date:2016-01-01 00:00:00
pages:45-70
issue:1
eissn:0899-7667
issn:1530-888X
pii:10.1162/NECO_a_00796
journal_volume:28
pub_type: Journal Article
abstract::Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01319
Update date:2020-12-01 00:00:00
abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760128090
Update date:2002-08-01 00:00:00
abstract::Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00801
Update date:2016-02-01 00:00:00
abstract::A necessary ingredient for a quantitative theory of neural coding is appropriate "spike kinematics": a precise description of spike trains. While summarizing experiments by complete spike time collections is clearly inefficient and probably unnecessary, the most common probabilistic model used in neurophysiology, the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-828
Update date:2009-08-01 00:00:00
abstract::In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00112
Update date:2011-05-01 00:00:00
abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.2.226
Update date:1991-07-01 00:00:00
abstract::Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarify...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00461
Update date:2013-07-01 00:00:00
abstract::A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities ma...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2008.04-07-506
Update date:2008-07-01 00:00:00
abstract::Integrate-and-fire neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding a...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00764
Update date:2015-09-01 00:00:00
abstract::In this letter, we investigate the fundamental limits on how the interspike time of a neuron oscillator can be perturbed by the application of a bounded external control input (a current stimulus) with zero net electric charge accumulation. We use phase models to study the dynamics of neurons and derive charge-balance...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00643
Update date:2014-10-01 00:00:00
abstract::Knowledge of synaptic input is crucial for understanding synaptic integration and ultimately neural function. However, in vivo, the rates at which synaptic inputs arrive are high, so that it is typically impossible to detect single events. We show here that it is nevertheless possible to extract the properties of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00975
Update date:2017-07-01 00:00:00
abstract::In this article, a biologically plausible and efficient object recognition system (called ORASSYLL) is introduced, based on a set of a priori constraints motivated by findings of developmental psychology and neurophysiology. These constraints are concerned with the organization of the input in local and corresponding ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014583
Update date:2001-02-01 00:00:00
abstract::We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cog...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00031
Update date:2010-11-01 00:00:00
abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773135104
Update date:2004-05-01 00:00:00
abstract::We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow and fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium p...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00322
Update date:2012-09-01 00:00:00
abstract::Attractor networks are widely believed to underlie the memory systems of animals across different species. Existing models have succeeded in qualitatively modeling properties of attractor dynamics, but their computational abilities often suffer from poor representations for realistic complex patterns, spurious attract...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.02-09-957
Update date:2010-05-01 00:00:00
abstract::A stochastic model of spike-timing-dependent plasticity (STDP) postulates that single synapses presented with a single spike pair exhibit all-or-none quantal jumps in synaptic strength. The amplitudes of the jumps are independent of spiking timing, but their probabilities do depend on spiking timing. By making the amp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-814
Update date:2010-01-01 00:00:00
abstract::Pairwise correlations among spike trains recorded in vivo have been frequently reported. It has been argued that correlated activity could play an important role in the brain, because it efficiently modulates the response of a postsynaptic neuron. We show here that a neuron's output firing rate critically depends on t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603321043702
Update date:2003-01-01 00:00:00
abstract::Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00769
Update date:2015-10-01 00:00:00
abstract::Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. Fo...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015439
Update date:2000-06-01 00:00:00
abstract::Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the ...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.19.2.404
Update date:2007-02-01 00:00:00
abstract::A modular, recurrent connectionist network is taught to incrementally parse complex sentences. From input presented one word at a time, the network learns to do semantic role assignment, noun phrase attachment, and clause structure recognition, for sentences with both active and passive constructions and center-embedd...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.1.110
Update date:1991-04-01 00:00:00
abstract::Temporal coding is studied for an oscillatory neural network model with synchronization and acceleration. The latter mechanism refers to increasing (decreasing) the phase velocity of each unit for stronger (weaker) or more coherent (decoherent) input from the other units. It has been demonstrated that acceleration gen...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2008.09-06-342
Update date:2008-07-01 00:00:00
abstract::The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their beh...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00422
Update date:2013-04-01 00:00:00
abstract::Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally ...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00840
Update date:2016-06-01 00:00:00
abstract::Calculation of the total conductance change induced by multiple synapses at a given membrane compartment remains one of the most time-consuming processes in biophysically realistic neural network simulations. Here we show that this calculation can be achieved in a highly efficient way even for multiply converging syna...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017061
Update date:1998-10-01 00:00:00
abstract::The ability to achieve high swimming speed and efficiency is very important to both the real lamprey and its robotic implementation. In previous studies, we used evolutionary algorithms to evolve biologically plausible connectionist swimming controllers for a simulated lamprey. This letter investigates the robustness ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.6.1568
Update date:2007-06-01 00:00:00
abstract::The visual systems of many mammals, including humans, are able to integrate the geometric information of visual stimuli and perform cognitive tasks at the first stages of the cortical processing. This is thought to be the result of a combination of mechanisms, which include feature extraction at the single cell level ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00738
Update date:2015-06-01 00:00:00
abstract::Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target re...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01198
Update date:2019-07-01 00:00:00
abstract::Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attracto...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1996.8.6.1135
Update date:1996-08-15 00:00:00