abstract::When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constraint determining locomotor parameters, such as stride frequency and stride length; however, other constraints have been proposed for a human upper-arm reaching task. In this study, we examined whether the minimum metabolic energy cost model can also explain the characteristics of the upper-arm reaching trajectories. Our results show that the optimal trajectory that minimizes the expected value of energy cost under the effect of signal-dependent noise on motor commands expresses not only the characteristics of reaching movements of typical speed but also those of slower movements. These results suggest that minimization of the energy cost would be a basic constraint not only in locomotion but also in upper-arm reaching.
journal_name:Neural Comput
journal_title:Neural computation
authors:Taniai Y, Nishii J
doi:10.1162/NECO_a_00757
subject:Has Abstract
pub_date:2015-08-01 00:00:00
pages:1721-37
issue:8
issn:0899-7667
eissn:1530-888X
journal_volume:27
pub_type: Journal Article
abstract::To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an i...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01239
Update date:2019-12-01 00:00:00
abstract::Real classification problems involve structured data that can be essentially grouped into a relatively small number of clusters. It is shown that, under a local clustering condition, a set of points of a given class, embedded in binary space by a set of randomly parameterized surfaces, is linearly separable from other...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601753196012
Update date:2001-11-01 00:00:00
abstract::For gradient descent learning to yield connectivity consistent with real biological networks, the simulated neurons would have to include more realistic intrinsic properties such as frequency adaptation. However, gradient descent learning cannot be used straightforwardly with adapting rate-model neurons because the de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766054323017
Update date:2005-09-01 00:00:00
abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773135104
Update date:2004-05-01 00:00:00
abstract::The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. I...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.10-07-623
Update date:2009-02-01 00:00:00
abstract::We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976603321891846
Update date:2003-07-01 00:00:00
abstract::In this work, we study how the selection of examples affects the learning procedure in a boolean neural network and its relationship with the complexity of the function under study and its architecture. We analyze the generalization capacity for different target functions with particular architectures through an analy...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014999
Update date:2000-10-01 00:00:00
abstract::In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g.,...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00445
Update date:2013-06-01 00:00:00
abstract::Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varyi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.07-08-817
Update date:2009-08-01 00:00:00
abstract::It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01090
Update date:2018-08-01 00:00:00
abstract::Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specifica...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00198
Update date:2011-11-01 00:00:00
abstract::Current research on discrete and rhythmic movements differs in both experimental procedures and theory, despite the ubiquitous overlap between discrete and rhythmic components in everyday behaviors. Models of rhythmic movements usually use oscillatory systems mimicking central pattern generators (CPGs). In contrast, m...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.03-08-720
Update date:2009-05-01 00:00:00
abstract::We discuss robustness against mislabeling in multiclass labels for classification problems and propose two algorithms of boosting, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence. Those two boosting algorithms are closely related to models of mislabeling in which the label is erroneously exchan...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.11-06-400
Update date:2008-06-01 00:00:00
abstract::In "Isotropic Sequence Order Learning" (pp. 831-864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm into a formal nonevaluating (teacher free) environment, which establishes a sensor-motor feedback. The system is initially guided by a fixed r...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/08997660360581930
Update date:2003-04-01 00:00:00
abstract::Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally ...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00840
Update date:2016-06-01 00:00:00
abstract::This article presents a new theoretical framework to consider the dynamics of a stochastic spiking neuron model with general membrane response to input spike. We assume that the input spikes obey an inhomogeneous Poisson process. The stochastic process of the membrane potential then becomes a gaussian process. When a ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601317098529
Update date:2001-12-01 00:00:00
abstract::We present a first-order nonhomogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.06-07-548
Update date:2009-06-01 00:00:00
abstract::This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrati...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00947
Update date:2017-05-01 00:00:00
abstract::Volterra and Wiener series are perhaps the best-understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2006.18.12.3097
Update date:2006-12-01 00:00:00
abstract::In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from [Formula: see text] and [Formula: see text] channel kinetics, in order to identify the source of [Formula: see text]-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise de...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_01067
Update date:2018-07-01 00:00:00
abstract::Durbin and Willshaw's elastic net algorithm can find good solutions to the TSP. The purpose of this paper is to point out that for certain ranges of parameter values, the algorithm converges into local minima that do not correspond to valid tours. The key parameter is the ratio governing the relative strengths of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.3.363
Update date:1991-10-01 00:00:00
abstract::Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (D...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/089976699300016665
Update date:1999-04-01 00:00:00
abstract::The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00407
Update date:2013-03-01 00:00:00
abstract::Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00801
Update date:2016-02-01 00:00:00
abstract::This letter proposes a multichannel source separation technique, the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class label...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01217
Update date:2019-09-01 00:00:00
abstract::This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760407973
Update date:2002-11-01 00:00:00
abstract::Natural gradient learning is known to be efficient in escaping plateau, which is a main cause of the slow learning speed of neural networks. The adaptive natural gradient learning method for practical implementation also has been developed, and its advantage in real-world problems has been confirmed. In this letter, w...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604322742065
Update date:2004-02-01 00:00:00
abstract::For the paradigmatic case of bimanual coordination, we review levels of organization of behavioral dynamics and present a description in terms of modes of behavior. We briefly review a recently developed model of spatiotemporal brain activity that is based on short- and long-range connectivity of neural ensembles. Thi...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300016954
Update date:1998-11-15 00:00:00
abstract::The ability to achieve high swimming speed and efficiency is very important to both the real lamprey and its robotic implementation. In previous studies, we used evolutionary algorithms to evolve biologically plausible connectionist swimming controllers for a simulated lamprey. This letter investigates the robustness ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.6.1568
Update date:2007-06-01 00:00:00
abstract::In this letter, a standard postnonlinear blind source separation algorithm is proposed, based on the MISEP method, which is widely used in linear and nonlinear independent component analysis. To best suit a wide class of postnonlinear mixtures, we adapt the MISEP method to incorporate a priori information of the mixtu...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2007.19.9.2557
Update date:2007-09-01 00:00:00