abstract::Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses, but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set containing 1.3 million training images in 1000 categories. We explore whether DNN neurons (units in DNNs) possess image object representational statistics similar to those of monkey IT neurons, particularly as the network becomes deeper and the number of image categories grows, using VGG19, a typical and widely used 19-layer deep network in computer vision. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the responses of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate three issues in this letter using the same measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) how the response statistics of VGG19 neurons vary across layers from low to high, and (3) how the response statistics of VGG19 neurons vary as the numbers of stimuli and neurons increase.
We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons differ fundamentally from those of IT neurons in most cases; that the response statistics of neurons at different layers, from low to high, do not change substantially as the numbers of neurons and stimuli increase; and that the estimated intrinsic dimensionality at the low convolutional layers of VGG19 is considerably larger than the value of approximately 100 reported for IT neurons in Lehky et al. (2014), whereas at the high fully connected layers it is close to or below 100. To the best of our knowledge, this work is the first attempt to analyze the response statistics of DNN neurons with respect to primate IT neurons in image object representation.
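Of the three measures named above, kurtosis is the simplest to illustrate: single-neuron selectivity is the kurtosis of one neuron's responses across stimuli, and population sparseness is the kurtosis of the population's responses to one stimulus. A minimal sketch, assuming a hypothetical gamma-distributed response matrix with the same shape as the Lehky et al. (2011) data set (674 neurons by 806 stimuli); the matrix, its distribution, and the function names are illustrative, not the authors' actual data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical response matrix: 674 "neurons" x 806 "stimuli".
# Gamma-distributed values stand in for nonnegative firing rates.
responses = rng.gamma(shape=2.0, scale=1.0, size=(674, 806))

def excess_kurtosis(x):
    """Excess kurtosis: fourth standardized moment minus 3 (0 for a gaussian)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# Single-neuron selectivity: kurtosis of each neuron's responses across stimuli.
selectivity = np.array([excess_kurtosis(row) for row in responses])

# Population sparseness: kurtosis of the population response to each stimulus.
sparseness = np.array([excess_kurtosis(responses[:, j])
                       for j in range(responses.shape[1])])

print(selectivity.mean(), sparseness.mean())
```

Higher kurtosis indicates a heavier-tailed response profile, i.e., a neuron driven strongly by few stimuli (selectivity) or a stimulus driving few neurons strongly (sparseness). The Pareto tail index and intrinsic dimensionality are likewise computed per neuron or per population but require tail fitting and eigenspectrum analysis, respectively.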
journal_name: Neural Comput
journal_title: Neural computation
authors: Dong Q, Wang H, Hu Z
doi: 10.1162/neco_a_01039
subject: Has Abstract
pub_date: 2018-02-01 00:00:00
pages: 447-476
issue: 2
issn: 0899-7667
eissn: 1530-888X
journal_volume: 30
pub_type: Journal Article
abstract::We present formal specification and verification of a robot moving in a complex network, using temporal sequence learning to avoid obstacles. Our aim is to demonstrate the benefit of using a formal approach to analyze such a system as a complementary approach to simulation. We first describe a classical closed-loop si...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00493
update_date:2013-11-01 00:00:00
abstract::Slightly modified versions of an early Hebbian/anti-Hebbian neural network are shown to be capable of extracting the sparse, independent linear components of a prefiltered natural image set. An explanation for this capability in terms of a coupling between two hypothetical networks is presented. The simple networks pr...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976606775093891
update_date:2006-02-01 00:00:00
abstract::In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected t...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976602760128090
update_date:2002-08-01 00:00:00
abstract::An iterative reweighted least squares (IRWLS) procedure recently proposed is shown to converge to the support vector machine solution. The convergence to a stationary point is ensured by modifying the original IRWLS procedure. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/0899766052530875
update_date:2005-01-01 00:00:00
abstract::We discuss robustness against mislabeling in multiclass labels for classification problems and propose two algorithms of boosting, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence. Those two boosting algorithms are closely related to models of mislabeling in which the label is erroneously exchan...
journal_title:Neural computation
pub_type: Letter
doi:10.1162/neco.2007.11-06-400
update_date:2008-06-01 00:00:00
abstract::The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976600300014773
update_date:2000-12-01 00:00:00
abstract::This article presents a new theoretical framework to consider the dynamics of a stochastic spiking neuron model with general membrane response to input spike. We assume that the input spikes obey an inhomogeneous Poisson process. The stochastic process of the membrane potential then becomes a gaussian process. When a ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601317098529
update_date:2001-12-01 00:00:00
abstract::In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also deriv...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976604773135104
update_date:2004-05-01 00:00:00
abstract::To date, Hebbian learning combined with some form of constraint on synaptic inputs has been demonstrated to describe well the development of neural networks. The previous models revealed mathematically the importance of synaptic constraints to reproduce orientation selectivity in the visual cortical neurons, but biolo...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.04-08-752
update_date:2009-09-01 00:00:00
abstract::We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00341
update_date:2012-10-01 00:00:00
abstract::The bias/variance decomposition of mean-squared error is well understood and relatively straightforward. In this note, a similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood. ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976698300017232
update_date:1998-07-28 00:00:00
abstract::Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algor...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.10-08-878
update_date:2009-12-01 00:00:00
abstract::A hippocampal prosthesis is a very large scale integration (VLSI) biochip that needs to be implanted in the biological brain to solve a cognitive dysfunction. In this letter, we propose a novel low-complexity, small-area, and low-power programmable hippocampal neural network application-specific integrated circuit (AS...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01107
update_date:2018-09-01 00:00:00
abstract::This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distrib...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1995.7.5.867
update_date:1995-09-01 00:00:00
abstract::A representational scheme under which the ranking between represented similarities is isomorphic to the ranking between the corresponding shape similarities can support perfectly correct shape classification because it preserves the clustering of shapes according to the natural kinds prevailing in the external world. ...
journal_title:Neural computation
pub_type: Journal Article, Review
doi:10.1162/neco.1997.9.4.701
update_date:1997-05-15 00:00:00
abstract::For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations on the channel. The bound is built on the Dvoretzky-Kiefer-Wolfowitz inequality and is distribution free. A quadr...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00144
update_date:2011-07-01 00:00:00
abstract::Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concen...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00906
update_date:2017-01-01 00:00:00
abstract::In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding o...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976699300016881
update_date:1999-01-01 00:00:00
abstract::Ohshiro, Hussain, and Weliky (2011) recently showed that ferrets reared with exposure to flickering spot stimuli, in the absence of oriented visual experience, develop oriented receptive fields. They interpreted this as refutation of efficient coding models, which require oriented input in order to develop oriented re...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00333
update_date:2012-09-01 00:00:00
abstract::The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity o...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00282
update_date:2012-06-01 00:00:00
abstract::We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-call...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00789
update_date:2015-12-01 00:00:00
abstract::The role of correlations between neuronal responses is crucial to understanding the neural code. A framework used to study this role comprises a breakdown of the mutual information between stimuli and responses into terms that aim to account for different coding modalities and the distinction between different notions...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00588
update_date:2014-06-01 00:00:00
abstract::Based on the dopamine hypotheses of cocaine addiction and the assumption of decrement of brain reward system sensitivity after long-term drug exposure, we propose a computational model for cocaine addiction. Utilizing average reward temporal difference reinforcement learning, we incorporate the elevation of basal rewa...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2009.10-08-882
update_date:2009-10-01 00:00:00
abstract::We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cog...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/NECO_a_00031
update_date:2010-11-01 00:00:00
abstract::Although the number of artificial neural network and machine learning architectures is growing at an exponential pace, more attention needs to be paid to theoretical guarantees of asymptotic convergence for novel, nonlinear, high-dimensional adaptive learning algorithms. When properly understood, such guarantees can g...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco_a_01117
update_date:2018-10-01 00:00:00
abstract::Durbin and Willshaw's elastic net algorithm can find good solutions to the TSP. The purpose of this paper is to point out that for certain ranges of parameter values, the algorithm converges into local minima that do not correspond to valid tours. The key parameter is the ratio governing the relative strengths of the ...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.3.363
update_date:1991-10-01 00:00:00
abstract::A single-layered Hough transform network is proposed that accepts image coordinates of each object pixel as input and produces a set of outputs that indicate the belongingness of the pixel to a particular structure (e.g., a straight line). The network is able to learn adaptively the parametric forms of the linear segm...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/089976601300014501
update_date:2001-03-01 00:00:00
abstract::Recurrent neural architectures having oscillatory dynamics use rhythmic network activity to represent patterns stored in short-term memory. Multiple stored patterns can be retained in memory over the same neural substrate because the network's state persistently switches between them. Here we present a simple oscillat...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.2008.02-08-715
update_date:2009-03-01 00:00:00
abstract::We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events....
journal_title:Neural computation
pub_type: Letter
doi:10.1162/NECO_a_00066
update_date:2011-01-01 00:00:00
abstract::A modular, recurrent connectionist network is taught to incrementally parse complex sentences. From input presented one word at a time, the network learns to do semantic role assignment, noun phrase attachment, and clause structure recognition, for sentences with both active and passive constructions and center-embedd...
journal_title:Neural computation
pub_type: Journal Article
doi:10.1162/neco.1991.3.1.110
update_date:1991-04-01 00:00:00