Abstract:
Intracortical brain-computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has previously been shown to degrade with signal nonstationarities: changes in the statistics of the data between the training and testing sets. These include changes to the neural tuning profiles and baseline shifts in the firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm itself more resilient to signal nonstationarities. Here, we describe how principled kernel selection with Gaussian process regression can be used within a Bayesian filtering framework to mitigate the effects of commonly encountered nonstationarities. Given a supervised training set of pairs of neural features and intended movement directions, we use Gaussian process regression to predict the intention from the neural data. We apply a kernel embedding to each neural feature using the standard radial basis function. The multiple kernels are then summed across the neural dimensions, which allows the kernel to effectively ignore large differences that occur in only a single feature. The summed kernel is used for real-time prediction of the posterior mean and variance under a Gaussian process framework. The predictions are then filtered with the discriminative Kalman filter to produce an estimate of the neural intention given the history of neural data. We refer to the multiple-kernel approach combined with the discriminative Kalman filter as the MK-DKF. We found that the MK-DKF decoder was more resilient to nonstationarities frequently encountered in real-world settings, yet provided performance similar to that of the currently used Kalman decoder.
These results demonstrate a method by which neural decoding can be made more resistant to nonstationarities.
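The pipeline described in the abstract (per-feature RBF kernels summed into one kernel, a Gaussian process posterior mean and variance computed under that kernel, and the result filtered over time) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, hyperparameters, and the simplified fusion step (which omits the discriminative Kalman filter's correction for the marginal prior of the latent state) are assumptions.

```python
import numpy as np


def summed_rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Sum of one-dimensional RBF kernels, one per neural feature.

    Because the features contribute additively, a large discrepancy confined
    to a single feature perturbs only that feature's term, rather than driving
    the whole similarity toward zero as a standard multivariate (product) RBF
    kernel would.
    """
    K = np.zeros((X.shape[0], Z.shape[0]))
    for d in range(X.shape[1]):
        diff = X[:, d:d + 1] - Z[:, d:d + 1].T  # pairwise differences in feature d
        K += variance * np.exp(-0.5 * (diff / lengthscale) ** 2)
    return K


def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean and variance under the summed kernel."""
    K = summed_rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = summed_rbf_kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(summed_rbf_kernel(X_test, X_test)) - np.sum(v ** 2, axis=0)
    return mean, np.maximum(var, 1e-12)  # clamp tiny negative values from round-off


def filter_step(mu_prev, p_prev, f_t, q_t, a=0.95, gamma=0.05):
    """Simplified scalar filtering step: propagate the previous state estimate
    through a linear model, then fuse it with the GP prediction (mean f_t,
    variance q_t) by precision weighting. The full DKF additionally subtracts
    the precision of the latent-state prior; that term is omitted here."""
    mu_pred, p_pred = a * mu_prev, a * p_prev * a + gamma
    p_post = 1.0 / (1.0 / p_pred + 1.0 / q_t)
    mu_post = p_post * (mu_pred / p_pred + f_t / q_t)
    return mu_post, p_post
```

At run time, each new feature vector would be pushed through `gp_posterior` against the stored training set, and the resulting (mean, variance) pair fed to `filter_step` as a Gaussian pseudo-observation of the current intention.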
journal_name: Neural Comput
journal_title: Neural computation
authors: Brandman DM, Burkhart MC, Kelemen J, Franco B, Harrison MT, Hochberg LR
doi: 10.1162/neco_a_01129
subject: Has Abstract
pub_date: 2018-11-01 00:00:00
pages: 2986-3008
issue: 11
issn: 0899-7667
eissn: 1530-888X
journal_volume: 30
pub_type: Journal Article
abstract::When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00757
Updated: 2015-08-01 00:00:00
abstract::Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from linear domain to a nonlinear domain via the kernel trick. However, in the previous algorithm of GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773717612
Updated: 2004-06-01 00:00:00
abstract::We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological find...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606775623342
Updated: 2006-03-01 00:00:00
abstract::Numerous animal behaviors, such as locomotion in vertebrates, are produced by rhythmic contractions that alternate between two muscle groups. The neuronal networks generating such alternate rhythmic activity are generally thought to rely on pacemaker cells or well-designed circuits consisting of inhibitory and excitat...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017449
Updated: 1998-07-01 00:00:00
abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00697
Updated: 2015-02-01 00:00:00
abstract::This article presents new procedures for multisite spatiotemporal neuronal data analysis. A new statistical model - the diffusion model - is considered, whose parameters can be estimated from experimental data thanks to mean-field approximations. This work has been applied to optical recording of the guinea pig's audi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015150
Updated: 2000-08-01 00:00:00
abstract::The statistical dependencies that independent component analysis (ICA) cannot remove often provide rich information beyond the linear independent components. It would thus be very useful to estimate the dependency structure from data. While such models have been proposed, they have usually concentrated on higher-order...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01006
Updated: 2017-11-01 00:00:00
abstract::Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disru...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01343
Updated: 2021-01-01 00:00:00
abstract::Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00385
Updated: 2013-01-01 00:00:00
abstract::We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moder...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602320263971
Updated: 2002-09-01 00:00:00
abstract::In "Isotropic Sequence Order Learning" (pp. 831-864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm into a formal nonevaluating (teacher free) environment, which establishes a sensor-motor feedback. The system is initially guided by a fixed r...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660360581930
Updated: 2003-04-01 00:00:00
abstract::Mechanisms influencing learning in neural networks are usually investigated on either a local or a global scale. The former relates to synaptic processes, the latter to unspecific modulatory systems. Here we study the interaction of a local learning rule that evaluates coincidences of pre- and postsynaptic action pote...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300015682
Updated: 2000-03-01 00:00:00
abstract::Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00801
Updated: 2016-02-01 00:00:00
abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_01067
Updated: 2018-07-01 00:00:00
abstract::Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specifica...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00198
Updated: 2011-11-01 00:00:00
abstract::Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660260028656
Updated: 2002-07-01 00:00:00
abstract::To date, Hebbian learning combined with some form of constraint on synaptic inputs has been demonstrated to describe well the development of neural networks. The previous models revealed mathematically the importance of synaptic constraints to reproduce orientation selectivity in the visual cortical neurons, but biolo...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.04-08-752
Updated: 2009-09-01 00:00:00
abstract::As neural activity is transmitted through the nervous system, neuronal noise degrades the encoded information and limits performance. It is therefore important to know how information loss can be prevented. We study this question in the context of neural population codes. Using Fisher information, we show how informat...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00227
Updated: 2012-02-01 00:00:00
abstract::A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a s...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01021
Updated: 2017-12-01 00:00:00
abstract::A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology ex...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00615
Updated: 2014-08-01 00:00:00
abstract::The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.11-07-651
Updated: 2009-01-01 00:00:00
abstract::The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014773
Updated: 2000-12-01 00:00:00
abstract::The ability to achieve high swimming speed and efficiency is very important to both the real lamprey and its robotic implementation. In previous studies, we used evolutionary algorithms to evolve biologically plausible connectionist swimming controllers for a simulated lamprey. This letter investigates the robustness ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.6.1568
Updated: 2007-06-01 00:00:00
abstract::The mutual information between a set of stimuli and the elicited neural responses is compared to the corresponding decoded information. The decoding procedure is presented as an artificial distortion of the joint probabilities between stimuli and responses. The information loss is quantified. Whenever the probabilitie...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602317318947
Updated: 2002-04-01 00:00:00
abstract::The pyloric network of the stomatogastric ganglion in crustacea is a central pattern generator that can produce the same basic rhythm over a wide frequency range. Three electrically coupled neurons, the anterior burster (AB) neuron and two pyloric dilator (PD) neurons, act as a pacemaker unit for the pyloric network. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.4.487
Updated: 1991-01-01 00:00:00
abstract::We discuss robustness against mislabeling in multiclass labels for classification problems and propose two algorithms of boosting, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence. Those two boosting algorithms are closely related to models of mislabeling in which the label is erroneously exchan...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.11-06-400
Updated: 2008-06-01 00:00:00
abstract::We describe a model of short-term synaptic depression that is derived from a circuit implementation. The dynamics of this circuit model is similar to the dynamics of some theoretical models of short-term depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also dep...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603762552942
Updated: 2003-02-01 00:00:00
abstract::We investigated a model for the neural integrator based on hysteretic units connected by positive feedback. Hysteresis is assumed to emerge from the intrinsic properties of the cells. We consider the recurrent networks containing either bistable or multistable neurons. We apply our analysis to the oculomotor velocity-...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.12-06-416
Updated: 2008-10-01 00:00:00
abstract::We consider the problem of training a linear feedforward neural network by using a gradient descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting to it a finite set of examples, so that the sum of the squares of the errors is minimized. Kohonen showed t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.2.226
Updated: 1991-07-01 00:00:00
abstract::Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invaria...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00743
Updated: 2015-07-01 00:00:00