Abstract:
The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for models where the observation model p(observation | state) is nonlinear. We argue that in many cases, a model for p(state | observation) proves both easier to learn and more accurate for latent state estimation. Approximating p(state | observation) as gaussian leads to a new filtering algorithm, the discriminative Kalman filter (DKF), which can perform well even when p(observation | state) is highly nonlinear and/or nongaussian. The approximation, motivated by the Bernstein-von Mises theorem, improves as the dimensionality of the observations increases. The DKF has computational complexity similar to the Kalman filter, allowing it in some cases to perform much faster than particle filters with similar precision, while better accounting for nonlinear and nongaussian observation models than Kalman-based extensions. When the observation model must be learned from training data prior to filtering, off-the-shelf nonlinear and nonparametric regression techniques can provide a gaussian model for p(observation | state) that cleanly integrates with the DKF. As part of the BrainGate2 clinical trial, we successfully implemented gaussian process regression with the DKF framework in a brain-computer interface to provide real-time, closed-loop cursor control to a person with a complete spinal cord injury. In this letter, we explore the theory underlying the DKF, exhibit some illustrative examples, and outline potential extensions.
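The linear-gaussian baseline that the DKF extends can be illustrated with a standard Kalman filter predict/update recursion. This is a generic sketch of that baseline, not the article's DKF implementation; the matrices A, Q, H, R and the toy values below are illustrative placeholders.

```python
import numpy as np

# Standard Kalman filter for a linear-gaussian state-space model:
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (latent state dynamics)
#   z_t = H x_t     + v_t,  v_t ~ N(0, R)   (observation model)
# All matrices here are illustrative placeholders, not values from the article.

def kalman_step(mu, Sigma, z, A, Q, H, R):
    """One predict/update cycle; returns the posterior mean and covariance."""
    # Predict: propagate the previous posterior through the dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: condition on the new observation z.
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_post = mu_pred + K @ (z - H @ mu_pred)
    Sigma_post = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_post, Sigma_post

# Toy 1-D example: a near-static state observed with unit-variance noise.
A = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[1.0]])
mu, Sigma = np.array([0.0]), np.array([[1.0]])
for z in [0.9, 1.1, 1.0]:
    mu, Sigma = kalman_step(mu, Sigma, np.array([z]), A, Q, H, R)
# Posterior mean drifts toward ~1.0 and posterior variance shrinks below the prior.
```

The DKF replaces the observation-side update with a learned gaussian approximation to p(state | observation); the exact recursion is given in the article itself.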
journal_name: Neural Comput
journal_title: Neural computation
authors: Burkhart MC, Brandman DM, Franco B, Hochberg LR, Harrison MT
doi: 10.1162/neco_a_01275
subject: Has Abstract
pub_date: 2020-05-01 00:00:00
pages: 969-1017
issue: 5
issn: 0899-7667
eissn: 1530-888X
journal_volume: 32
pub_type: Letter