abstract::In this letter, we develop a gaussian process model for clustering. The variances of predictive values in gaussian processes learned from training data are shown to provide an estimate of the support of a probability density function. The constructed variance function is then applied to build a set of contours that enclose the data points and correspond to cluster boundaries. To cluster the data points, an associated dynamical system is built, and its topological invariance property is investigated. The experimental results show that the proposed method works successfully for clustering problems with arbitrary cluster shapes.
journal_name:Neural Comput
journal_title:Neural computation
authors:Kim HC, Lee J
doi:10.1162/neco.2007.19.11.3088
subject:Has Abstract
pub_date:2007-11-01 00:00:00
pages:3088-3107
issue:11
issn:0899-7667
eissn:1530-888X
journal_volume:19
pub_type: Journal Article
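As a minimal numpy sketch of the quantity this abstract builds on (not the paper's code): the GP predictive variance under an RBF kernel, whose level sets can be read as contours enclosing the data. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_predictive_variance(X_train, X_query, length_scale=1.0, noise=1e-2):
    # var(x*) = k(x*, x*) - k*^T (K + sigma^2 I)^{-1} k*  with k(x*, x*) = 1.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_train, X_query, length_scale)
    v = np.linalg.solve(K, K_star)
    return 1.0 - np.einsum('ij,ij->j', K_star, v)

# Points inside the data support get low variance; a contour {x : var(x) = r}
# around the data can be read as a cluster boundary.
X = np.random.randn(100, 2)
grid = np.random.uniform(-3, 3, size=(5, 2))
print(gp_predictive_variance(X, grid))
```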
abstract::We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains two separate vectors for each memory cell: a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing stra...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01060
update_date:2018-04-01 00:00:00
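A hypothetical sketch of the addressing idea the abstract describes, assuming dot-product attention over cells that expose a trainable address part alongside their content; the actual D-NTM parameterization differs in detail.

```python
import numpy as np

def dntm_read_weights(key, content, address, beta=1.0):
    # Each memory cell i exposes [content_i ; address_i]; the controller's key
    # is matched against the concatenation, so the learned address part can
    # implement location-like addressing.
    mem = np.concatenate([content, address], axis=1)   # (cells, c_dim + a_dim)
    scores = beta * mem @ key                          # dot-product similarity
    e = np.exp(scores - scores.max())
    return e / e.sum()                                 # softmax attention

content = np.random.randn(8, 16)   # written by the controller over time
address = np.random.randn(8, 4)    # trainable, fixed across time steps
key = np.random.randn(20)          # emitted by the controller
print(dntm_read_weights(key, content, address).round(3))
```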
abstract::We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is giv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2010.05-09-1010
update_date:2010-07-01 00:00:00
abstract::A study of a general central pattern generator (CPG) is carried out by means of a measure of the gain of information between the number of available topology configurations and the output rhythmic activity. The neurons of the CPG are chaotic Hindmarsh-Rose models that cooperate dynamically to generate either chaotic o...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.4.974
update_date:2007-04-01 00:00:00
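The CPG neurons mentioned are Hindmarsh-Rose models; below is a plain Euler integration of those equations with common textbook parameters (I near 3.2 gives chaotic bursting). The network coupling studied in the paper is omitted.

```python
import numpy as np

def hindmarsh_rose(T=20000, dt=0.01, I=3.2, r=0.006, s=4.0, x0=-1.6):
    # Hindmarsh-Rose equations:
    #   x' = y + 3x^2 - x^3 - z + I
    #   y' = 1 - 5x^2 - y
    #   z' = r (s (x - x0) - z)
    x, y, z = -1.0, 0.0, 2.0
    trace = np.empty(T)
    for t in range(T):
        dx = y + 3 * x**2 - x**3 - z + I
        dy = 1 - 5 * x**2 - y
        dz = r * (s * (x - x0) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[t] = x
    return trace

v = hindmarsh_rose()
print(v.min(), v.max())   # membrane variable sweeps through bursts
```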
abstract::We explicitly analyze the trajectories of learning near singularities in hierarchical networks, such as multilayer perceptrons and radial basis function networks, which include permutation symmetry of hidden nodes, and show their general properties. Such symmetry induces singularities in their parameter space, where t...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.12-06-414
update_date:2008-03-01 00:00:00
abstract::The Kalman filter provides a simple and efficient algorithm to compute the posterior distribution for state-space models where both the latent state and measurement models are linear and gaussian. Extensions to the Kalman filter, including the extended and unscented Kalman filters, incorporate linearizations for model...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco_a_01275
update_date:2020-05-01 00:00:00
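For reference, one predict/update step of the basic Kalman filter the abstract starts from, for a linear-gaussian state-space model x_t = A x_{t-1} + q, y_t = C x_t + r; the matrix names are the usual textbook ones, not necessarily the paper's.

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    # Predict: push the gaussian posterior through the linear dynamics.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: condition the prediction on the new measurement y.
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new

# 1D random walk observed in noise.
A = np.eye(1); C = np.eye(1); Q = 0.01 * np.eye(1); R = 0.1 * np.eye(1)
m, P = np.zeros(1), np.eye(1)
for y in np.random.randn(50):
    m, P = kalman_step(m, P, np.array([y]), A, C, Q, R)
print(m, P)
```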
abstract::Nondeclarative memory and novelty processing in the brain is an actively studied field of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend, specifically rising activity for unfamiliar stimuli, questi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00569
update_date:2014-04-01 00:00:00
abstract::Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/089976699300016665
update_date:1999-04-01 00:00:00
abstract::We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with resp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00934
update_date:2017-03-01 00:00:00
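The inference procedure in question is plain Langevin dynamics on an energy function; a minimal sketch follows (the paper's point that early steps propagate error gradients like backpropagation is not reproduced here).

```python
import numpy as np

def langevin_step(x, grad_E, step=1e-2, rng=np.random):
    # One step of (unadjusted) Langevin dynamics on energy E:
    #   x <- x - step * dE/dx + sqrt(2 * step) * N(0, I)
    return x - step * grad_E(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)

# Example energy E(x) = ||x||^2 / 2, so grad E = x and samples approach N(0, I).
x = np.zeros(3)
for _ in range(1000):
    x = langevin_step(x, lambda z: z)
print(x)
```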
abstract::We considered a gamma distribution of interspike intervals as a statistical model for neuronal spike generation. A gamma distribution is a natural extension of the Poisson process taking the effect of a refractory period into account. The model is specified by two parameters: a time-dependent firing rate and a shape p...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.10.2359
update_date:2006-10-01 00:00:00
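A small sketch of the generative model described: interspike intervals drawn from a gamma distribution whose shape parameter interpolates between Poisson firing (kappa = 1) and more regular, refractory-like firing (kappa > 1). A constant rate is assumed here for simplicity; the paper treats a time-dependent rate.

```python
import numpy as np

def gamma_isi_spike_train(rate, kappa, duration, rng=np.random.default_rng()):
    # ISIs ~ Gamma(shape=kappa, mean=1/rate); coefficient of variation
    # of the ISIs is 1/sqrt(kappa).
    t, spikes = 0.0, []
    while t < duration:
        t += rng.gamma(shape=kappa, scale=1.0 / (kappa * rate))
        if t < duration:
            spikes.append(t)
    return np.array(spikes)

spikes = gamma_isi_spike_train(rate=10.0, kappa=4.0, duration=5.0)
isi = np.diff(spikes)
print(len(spikes), "spikes, CV =", isi.std() / isi.mean())
```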
abstract::A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a s...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01021
update_date:2017-12-01 00:00:00
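A minimal simulation of the model class named here, a Poisson GLM with a stimulus filter and a spike-history filter; the filters, bias, and bin size below are illustrative assumptions.

```python
import numpy as np

def simulate_poisson_glm(x, k, h, b, dt=1e-3, rng=np.random.default_rng()):
    # Conditional intensity lambda_t = exp(b + k * x_t + h . recent_spikes):
    # a stimulus filter k plus a (here negative, refractory) history filter h.
    T, H = len(x), len(h)
    spikes = np.zeros(T)
    for t in range(T):
        hist = spikes[max(0, t - H):t][::-1]       # most recent spike first
        lam = np.exp(b + k * x[t] + h[:len(hist)] @ hist)
        spikes[t] = rng.poisson(lam * dt) > 0
    return spikes

x = np.random.randn(2000)
spikes = simulate_poisson_glm(x, k=1.0, h=np.array([-5.0, -2.0, -1.0]), b=3.0)
print(spikes.sum(), "spikes")
```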
abstract::Recent experimental findings have shown the presence of robust and cell-type-specific intraburst firing patterns in bursting neurons. We address the problem of characterizing these patterns under the assumption that the bursts exhibit well-defined firing time distributions. We propose a method for estimating these dis...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.07-07-571
update_date:2009-04-01 00:00:00
abstract::We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behav...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.10.2739
update_date:2007-10-01 00:00:00
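One common formalization of the depression/facilitation competition studied here is Tsodyks-Markram-style synapse dynamics; a sketch under that assumption (the paper's mean-field network analysis is not reproduced).

```python
import numpy as np

def stp_synapse(spike_times, tau_rec=0.5, tau_fac=0.3, U=0.2):
    # x = available resources (depression), u = release probability
    # (facilitation); the efficacy of each spike is proportional to u * x.
    x, u, last_t, eff = 1.0, U, 0.0, []
    for t in spike_times:
        dt = t - last_t
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
        u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        u = u + U * (1.0 - u)                         # spike facilitates
        eff.append(u * x)
        x = x * (1.0 - u)                             # spike depletes
        last_t = t
    return np.array(eff)

print(stp_synapse(np.arange(0.05, 0.5, 0.05)).round(3))
```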
abstract::The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00407
update_date:2013-03-01 00:00:00
abstract::Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron sp...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014872
update_date:2000-11-01 00:00:00
abstract::For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky-Kiefer-Wolfowitz inequality and is distribution free. A quadr...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00144
update_date:2011-07-01 00:00:00
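The Dvoretzky-Kiefer-Wolfowitz inequality the bound is built on states that P(sup_x |F_n(x) - F(x)| > eps) <= 2 exp(-2 n eps^2) for any distribution; a small sketch of the resulting distribution-free confidence band:

```python
import numpy as np

def dkw_band(samples, delta=0.05):
    # With probability >= 1 - delta, the empirical CDF F_n is within
    # eps = sqrt(ln(2/delta) / (2 n)) of the true CDF, uniformly in x.
    n = len(samples)
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    xs = np.sort(samples)
    F_n = np.arange(1, n + 1) / n
    return xs, np.clip(F_n - eps, 0, 1), np.clip(F_n + eps, 0, 1)

xs, lo, hi = dkw_band(np.random.randn(1000))
print(hi[0] - lo[0])   # band width = 2 * eps wherever unclipped
```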
abstract::We investigated a model for the neural integrator based on hysteretic units connected by positive feedback. Hysteresis is assumed to emerge from the intrinsic properties of the cells. We consider recurrent networks containing either bistable or multistable neurons. We apply our analysis to the oculomotor velocity-...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.12-06-416
update_date:2008-10-01 00:00:00
abstract::A hierarchical dynamical map is proposed as the basic framework for sensory cortical mapping. To show how the hierarchical dynamical map works in cognitive processes, we applied it to a typical cognitive task known as priming, in which cognitive performance is facilitated as a consequence of prior experience. Prior to...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660152469341
update_date:2001-08-01 00:00:00
abstract::Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarify...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00461
update_date:2013-07-01 00:00:00
abstract::Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing it...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.12-07-678
update_date:2010-01-01 00:00:00
abstract::Although the commonly used quadratic Hebbian-anti-Hebbian rules lead to successful models of plasticity and learning, they are inconsistent with neurophysiology. Other rules, more physiologically plausible, fail to specify the biological mechanism of bidirectionality and the biological mechanism that prevents synapses...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017629
update_date:1998-04-01 00:00:00
abstract::The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu for controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they a...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601750399335
update_date:2001-09-01 00:00:00
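One known property of nu (from Schölkopf et al., 2000) can be checked empirically with scikit-learn's NuSVC: nu lower-bounds the fraction of support vectors (and upper-bounds the fraction of margin errors). A quick illustration on synthetic data:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel='rbf').fit(X, y)
    frac_sv = len(clf.support_) / len(X)
    print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f} >= nu")
```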
abstract::The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760805368
update_date:2002-12-01 00:00:00
abstract::Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneous clustering of data and inference of their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectori...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00012
update_date:2010-09-01 00:00:00
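For orientation, the basic vectorial neural gas update that the non-vectorial extensions discussed here generalize: every unit moves toward each sample with a step that decays exponentially in its distance rank. A compact sketch with illustrative hyperparameters (annealing of eps and lam is omitted for brevity):

```python
import numpy as np

def neural_gas_fit(X, n_units=10, epochs=20, eps=0.5, lam=2.0,
                   rng=np.random.default_rng(0)):
    # W holds the prototype vectors; rank 0 = closest unit to the sample.
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            ranks = np.argsort(np.argsort(((W - x) ** 2).sum(1)))
            W += eps * np.exp(-ranks / lam)[:, None] * (x - W)
    return W

X = np.random.randn(300, 2)
print(neural_gas_fit(X).round(2))
```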
abstract::Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be av...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054615644
update_date:2005-10-01 00:00:00
abstract::Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challen...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00683
update_date:2015-01-01 00:00:00
abstract::Many neurons that initially respond to a stimulus stop responding if the stimulus is presented repeatedly but recover their response if a different stimulus is presented. This phenomenon is referred to as stimulus-specific adaptation (SSA). SSA has been investigated extensively using oddball experiments, which measure...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00077
update_date:2011-02-01 00:00:00
abstract::Izhikevich (2003) proposed a new canonical neuron model of spike generation. The model was surprisingly simple yet able to accurately replicate the firing patterns of different types of cortical cell. Here, we derive a solution method that allows efficient simulation of the model. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.12.3216
update_date:2007-12-01 00:00:00
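The model in question is the two-variable Izhikevich (2003) neuron; a plain Euler implementation with the standard regular-spiking parameters is sketched below (the paper derives a more efficient solution method, which is not reproduced here).

```python
import numpy as np

def izhikevich(I, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    # Izhikevich (2003):  v' = 0.04 v^2 + 5v + 140 - u + I,  u' = a (b v - u);
    # on v >= 30 mV: v <- c, u <- u + d. Defaults give regular spiking.
    v, u = c, b * c
    vs = np.empty(T)
    for t in range(T):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            vs[t] = 30.0
            v, u = c, u + d
        else:
            vs[t] = v
    return vs

trace = izhikevich(I=10.0)
print((trace == 30.0).sum(), "spikes in", len(trace), "ms")
```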
abstract::We present a reduction of a Hodgkin-Huxley (HH)-style bursting model to a hybridized integrate-and-fire (IF) formalism based on a thorough bifurcation analysis of the neuron's dynamics. The model incorporates HH-style equations to evolve the subthreshold currents and includes IF mechanisms to characterize spike even...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603322518768
update_date:2003-12-01 00:00:00
abstract::We consider learning a causal ordering of variables in a linear nongaussian acyclic model called LiNGAM. Several methods have been shown to consistently estimate a causal ordering assuming that all the model assumptions are correct. But the estimation results could be distorted if some assumptions are violated. In thi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00533
update_date:2014-01-01 00:00:00
abstract::In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00697
update_date:2015-02-01 00:00:00