Abstract:
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity. First, like digital computation, which uses several one-bit-precise logical units to collectively compute a precise answer, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. Third, a state machine enables complex computations to be built from a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes, because spike-count codes are digital while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. Third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends the concept of a digital finite-state machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony.
We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
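The analog-digital feedback loop the abstract describes can be sketched in a few lines. This is a toy illustration, not the authors' chip: an integrate-and-fire analog unit emits spikes, each spike triggers a transition in a two-state digital machine, and the digital state feeds back a control-selected parameter pair that reconfigures the analog dynamics. The control table, thresholds, and dynamics below are all illustrative assumptions.

```python
def run_hsm(n_steps=2500, dt=0.001, threshold=1.0):
    """Simulate a toy hybrid state machine; returns spike times."""
    # Control table: digital state -> (drive, leak) for the analog unit.
    control = {0: (2.0, 0.0),   # state 0: constant charging, no leak
               1: (4.0, 2.0)}   # state 1: stronger drive with a leak term
    v, state = 0.0, 0
    spike_times = []
    for step in range(n_steps):
        drive, leak = control[state]     # digital state configures analog dynamics
        v += dt * (drive - leak * v)     # forward-Euler analog integration
        if v >= threshold:               # analog spike event
            spike_times.append(step * dt)
            v = 0.0                      # reset the membrane
            state = 1 - state            # spike drives a digital state transition
    return spike_times
```

Because the two digital states impose different analog dynamics, the interspike intervals alternate in a period-2 pattern, which is exactly the kind of digitally reconfigured analog behavior the HSM concept captures.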
journal_name: Neural Comput
journal_title: Neural computation
authors: Sarpeshkar R, O'Halloran M
doi: 10.1162/089976602320263971
subject: Has Abstract
pub_date: 2002-09-01 00:00:00
pages: 2003-38
issue: 9
issn: 0899-7667
eissn: 1530-888X
journal_volume: 14
pub_type: Journal Article

abstract::In this review, we compare methods for temporal sequence learning (TSL) across the disciplines of machine control, classical conditioning, neuronal models for TSL, and spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/0899766053011555
Update date: 2005-02-01 00:00:00
abstract::Knowledge of synaptic input is crucial for understanding synaptic integration and ultimately neural function. However, in vivo, the rates at which synaptic inputs arrive are high, so that it is typically impossible to detect single events. We show here that it is nevertheless possible to extract the properties of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00975
Update date: 2017-07-01 00:00:00
abstract::In this letter, we develop a gaussian process model for clustering. The variances of predictive values in gaussian processes learned from a training data are shown to comprise an estimate of the support of a probability density function. The constructed variance function is then applied to construct a set of contours ...
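The mechanism this abstract sketches, that the predictive variance of a gaussian process fit to training data is low near the data and high far away, so a variance threshold outlines the support of the data density, can be shown with a minimal zero-mean RBF-kernel GP. This is a generic illustration of that idea, not the letter's algorithm; the kernel, length scale, and noise level are assumptions.

```python
import numpy as np

def gp_predictive_variance(X_train, X_query, length_scale=1.0, noise=1e-3):
    """Predictive variance of a zero-mean RBF-kernel GP at the query points:
    var(x*) = k(x*, x*) - k(x*, X) K^{-1} k(X, x*)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))  # jittered Gram matrix
    Ks = k(X_query, X_train)
    # k(x*, x*) = 1 for the RBF kernel; subtract the explained variance.
    return 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
```

Querying a point inside the training cloud yields a variance near zero, while a distant point yields a variance near the prior value of 1, so contours of this function enclose the data support.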
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.11.3088
Update date: 2007-11-01 00:00:00
abstract::Volterra and Wiener series are perhaps the best-understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.12.3097
Update date: 2006-12-01 00:00:00
abstract::Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00769
Update date: 2015-10-01 00:00:00
abstract::The emergence of synchrony in the activity of large, heterogeneous networks of spiking neurons is investigated. We define the robustness of synchrony by the critical disorder at which the asynchronous state becomes linearly unstable. We show that at low firing rates, synchrony is more robust in excitatory networks tha...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015286
Update date: 2000-07-01 00:00:00
abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760128090
Update date: 2002-08-01 00:00:00
abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054323017
Update date: 2005-09-01 00:00:00
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603762552942
Update date: 2003-02-01 00:00:00
abstract::Reservoir computing is a biologically inspired class of learning algorithms in which the intrinsic dynamics of a recurrent neural network are mined to produce target time series. Most existing reservoir computing algorithms rely on fully supervised learning rules, which require access to an exact copy of the target re...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01198
Update date: 2019-07-01 00:00:00
abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00683
Update date: 2015-01-01 00:00:00
abstract::Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/089976699300016665
Update date: 1999-04-01 00:00:00
abstract::A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602317250870
Update date: 2002-03-01 00:00:00
abstract::We present formal specification and verification of a robot moving in a complex network, using temporal sequence learning to avoid obstacles. Our aim is to demonstrate the benefit of using a formal approach to analyze such a system as a complementary approach to simulation. We first describe a classical closed-loop si...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00493
Update date: 2013-11-01 00:00:00
abstract::Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from linear domain to a nonlinear domain via the kernel trick. However, in the previous algorithm of GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773717612
Update date: 2004-06-01 00:00:00
abstract::The ability to encode and transmit a signal is an essential property that many neuronal circuits in sensory areas must demonstrate, in addition to any processing they may provide. It is known that an appropriate level of lateral inhibition, as observed in these areas, can significantly improve the encoding ability of a...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00100
Update date: 2011-04-01 00:00:00
abstract::We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of inte...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660151134280
Update date: 2001-05-01 00:00:00
abstract::Synaptically generated subthreshold membrane potential (Vm) fluctuations can be characterized within the framework of stochastic calculus. It is possible to obtain analytic expressions for the steady-state Vm distribution, even in the case of conductance-based synaptic currents. However, as we show here, the analytic ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054796932
Update date: 2005-11-01 00:00:00
abstract::The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. I...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.10-07-623
Update date: 2009-02-01 00:00:00
abstract::Recent work suggests that synchronization of neuronal activity could serve to define functionally relevant relationships between spatially distributed cortical neurons. At present, it is not known to what extent this hypothesis is compatible with the widely supported notion of coarse coding, which assumes that feature...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1995.7.3.469
Update date: 1995-05-01 00:00:00
abstract::Modeling stereo transparency with physiologically plausible mechanisms is challenging because in such frameworks, large receptive fields mix up overlapping disparities, whereas small receptive fields can reliably compute only small disparities. It seems necessary to combine information across scales. A coarse-to-fine ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00722
Update date: 2015-05-01 00:00:00
abstract::The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for class...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602753633411
Update date: 2002-05-01 00:00:00
abstract::The visual systems of many mammals, including humans, are able to integrate the geometric information of visual stimuli and perform cognitive tasks at the first stages of the cortical processing. This is thought to be the result of a combination of mechanisms, which include feature extraction at the single cell level ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00738
Update date: 2015-06-01 00:00:00
abstract::Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specifica...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00198
Update date: 2011-11-01 00:00:00
abstract::We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-call...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00789
Update date: 2015-12-01 00:00:00
abstract::Independent component analysis (ICA) aims at separating a multivariate signal into independent nongaussian signals by optimizing a contrast function with no knowledge on the mixing mechanism. Despite the availability of a constellation of contrast functions, a Hartley-entropy-based ICA contrast endowed with the discri...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00700
Update date: 2015-03-01 00:00:00
abstract::We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used wit...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.02-07-478
Update date: 2008-04-01 00:00:00
abstract::The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow fea...
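The Laplacian eigenmap construction this abstract builds on can be stated compactly: embed graph nodes using the smallest nontrivial eigenvectors of the graph Laplacian L = D - W. The sketch below shows the standard unnormalized construction on a toy graph; it illustrates the general technique, not this letter's connection to slow feature analysis.

```python
import numpy as np

def laplacian_eigenmap(W, dim=1):
    """Embed nodes of a graph (symmetric weight matrix W) using the
    smallest nontrivial eigenvectors of the unnormalized Laplacian L = D - W."""
    D = np.diag(W.sum(axis=1))         # degree matrix
    L = D - W
    _, vecs = np.linalg.eigh(L)        # eigenvalues returned in ascending order
    return vecs[:, 1:1 + dim]          # drop the constant (eigenvalue-0) vector
```

On a path graph, the first nontrivial eigenvector (the Fiedler vector) varies monotonically along the chain, so the one-dimensional embedding recovers the nodes' ordering, which is the intuition behind using LEMs for low-dimensional state representations.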
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00214
Update date: 2011-12-01 00:00:00
abstract::We have created a network that allocates a new computational unit whenever an unusual pattern is presented to the network. This network forms compact representations, yet learns easily and rapidly. The network can be used at any time in the learning process and the learning patterns do not have to be repeated. The uni...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.2.213
Update date: 1991-07-01 00:00:00
abstract::In this work, we propose a two-layered descriptive model for motion processing from retina to the cortex, with an event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps in a simple...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01191
Update date: 2019-06-01 00:00:00