Deep Neural Networks (DNNs) are the foundation of deep learning and one of its core frameworks. A DNN is a neural network with at least one hidden layer. Like shallow neural networks, DNNs can model complex nonlinear systems, but the additional layers provide higher levels of abstraction and thereby increase the model's capacity.
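The definition above can be made concrete with a minimal sketch of a forward pass through one hidden layer (illustrative only; weights are random and untrained):

```python
import numpy as np

rng = np.random.default_rng(0)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer: 4 units -> 2 outputs

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU non-linearity
    return W2 @ h + b2                # linear output layer

y = forward(np.array([1.0, -0.5, 2.0]))
print(y.shape)  # (2,)
```

Stacking more such hidden layers is what makes the network "deep"; each layer re-represents its input at a higher level of abstraction.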
Proceedings of the National Academy of Sciences of the United States of America, no. 48 (2020): 30071-30078
The units reveal how the network decomposes the recognition of specific scene classes into particular visual concepts that are important to each scene class
Cited by 0 · Views 109
european conference on computer vision, pp.307-322, (2020)
We propose a novel patch-wise iterative algorithm – a black-box attack towards mainstream normally trained and defense models, which differs from the existing attack methods manipulating pixel-wise noise
Cited by 0 · Views 108
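For contrast with the paper's patch-wise approach, the pixel-wise baseline it departs from can be sketched as a generic iterative sign-gradient attack. The quadratic loss below is a stand-in for a real network's loss; everything else is an assumption for illustration:

```python
import numpy as np

def loss_grad(x, target):
    return 2.0 * (x - target)  # gradient of the toy loss ||x - target||^2

def iterative_attack(x, target, eps=0.3, alpha=0.05, steps=10):
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv, target)
        x_adv = x_adv + alpha * np.sign(g)        # step uphill on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

x0 = np.zeros(4)
adv = iterative_attack(x0, target=np.full(4, -1.0))
print(np.max(np.abs(adv - x0)))  # perturbation stays bounded by eps
```

The paper's contribution is to replace this independent per-pixel noise with patch-wise perturbations; that algorithm is not reproduced here.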
Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
Journal of Machine Learning Research, (2019)
To demonstrate the versatility of iNNvestigate, we provide an analysis of image classifications for a variety of state-of-the-art neural network architectures
Cited by 59 · Views 106
Materials & Design, (2019): 300-310
The study in this paper demonstrates that small/narrow deep neural networks with small datasets and special training methods have huge potential for extensive applications in materials study, especially for multivariable nonlinear problems
Cited by 48 · Views 58
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), (2019): 2053-2062
We provide an improved analysis of the global convergence of gradient descent for training deep neural networks, which only requires a milder over-parameterization condition than previous work in terms of the training sample size and other problem-dependent parameters
Cited by 43 · Views 59
Philipp Grohs, Dmytro Perekrestenko, Dennis Elbrächter, Helmut Bölcskei
arXiv: Learning, (2019)
Deep neural networks provide optimal approximation of a very wide range of functions and function classes used in mathematical signal processing
Cited by 28 · Views 53
Mohsen Imani, Saransh Gupta, Yeseong Kim, Tajana Rosing
Proceedings of the 46th International Symposium on Computer Architecture, pp.802-815, (2019)
Our evaluation shows that FloatPIM can achieve on average 4.3× and 15.8× higher speedup and energy efficiency as compared to PipeLayer, the state-of-the-art Processing In-Memory accelerator, during training
Cited by 23 · Views 119
arXiv: Neural and Evolutionary Computing, (2019)
The growth of convolutional and recurrent neural networks in practice has prompted the development of specialized methods that perform better than standard dropout on specific kinds of neural networks
Cited by 15 · Views 42
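The "standard dropout" that these specialized methods improve on can be sketched in a few lines (a minimal, inverted-dropout version; hyperparameters are illustrative): at training time each activation is zeroed with probability p and the survivors are rescaled by 1/(1-p), so inference needs no rescaling.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng(0)):
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p     # keep each unit with probability 1-p
    return x * mask / (1.0 - p)         # rescale survivors ("inverted" dropout)

h = np.ones(8)
print(dropout(h, p=0.5))           # some units zeroed, survivors scaled to 2.0
print(dropout(h, training=False))  # identity at inference time
```

Variants tailored to convolutional or recurrent layers change *what* is dropped (channels, connections, time steps), not this basic mechanism.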
Huang Jiaoyang, Yau Horng-Tzer
ICML, pp.4542-4551, (2019)
We show that the training dynamic is given by a data dependent infinite hierarchy of ordinary differential equations, i.e., the neural tangent hierarchy
Cited by 10 · Views 42
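The first levels of such a hierarchy can be sketched as follows (notation assumed from the neural-tangent-kernel literature: $f_t$ is the network function at training time $t$, $(x_j, y_j)$ the $n$ training pairs, and $K_t^{(r)}$ the $r$-th order kernels):

```latex
\partial_t f_t(x) = -\frac{1}{n}\sum_{j=1}^{n} K_t^{(2)}(x, x_j)\,\bigl(f_t(x_j) - y_j\bigr),
\qquad
\partial_t K_t^{(2)}(x, x') = -\frac{1}{n}\sum_{j=1}^{n} K_t^{(3)}(x, x', x_j)\,\bigl(f_t(x_j) - y_j\bigr)
```

The pattern continues: the evolution of each kernel is driven by the next-higher-order kernel, giving the infinite hierarchy of ODEs.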
Nature medicine, no. 1 (2019): 65-69
We validated the deep neural network on a test dataset that consisted of 328 ECG records collected from 328 unique patients, and which was annotated by a consensus committee of expert cardiologists
Cited by 3 · Views 134
international conference on software engineering, (2018): 303-314
Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working o...
Cited by 423 · Views 174
ICLR, (2018)
Use of a Gaussian process prior on functions enables exact Bayesian inference for regression from matrix computations, and we are able to obtain predictions and uncertainty estimates from deep neural networks without stochastic gradient-based training
Cited by 319 · Views 84
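The "exact Bayesian inference from matrix computations" referred to here is standard Gaussian-process regression, sketched below. The paper derives its kernel from the network architecture; an RBF kernel is substituted here purely for illustration:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    K = rbf(X, X) + noise * np.eye(len(X))      # train-train kernel + noise
    Ks = rbf(Xs, X)                             # test-train kernel
    Kss = rbf(Xs, Xs)                           # test-test kernel
    mean = Ks @ np.linalg.solve(K, y)           # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)   # posterior covariance
    return mean, np.sqrt(np.diag(cov))          # predictions and uncertainty

X = np.array([-1.0, 0.0, 1.0]); y = np.sin(X)
mean, std = gp_posterior(X, y, np.array([0.0, 2.0]))
print(mean, std)  # tight fit near the data, larger uncertainty far from it
```

Note that both the prediction and its uncertainty come from linear algebra alone, with no stochastic gradient-based training, which is the point the abstract makes.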
IEEE Transactions on Image Processing, no. 1 (2018): 206-219
The experimental results show that the proposed method outperforms other state-of-the-art approaches for NR as well as FR image quality assessment, and achieves generalization capabilities competitive with state-of-the-art data-driven approaches
Cited by 265 · Views 70
Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Xiyu Zhai
international conference on machine learning, (2018)
The current paper focuses on the training loss, but does not address the test loss
Cited by 237 · Views 97
Science (New York, N.Y.), (2018)
We introduce an all-optical deep learning framework, where the neural network is physically formed by multiple layers of diffractive surfaces that work in collaboration to optically perform an arbitrary function that the network can statistically learn
Cited by 224 · Views 44
SIGIR, (2018): 95-104
By combining the strengths of convolutional and recurrent neural networks and an autoregressive component, the proposed approach significantly improved the state-of-the-art results in time series forecasting on multiple benchmark datasets
Cited by 142 · Views 107
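The key design idea, combining a non-linear neural component with a linear autoregressive (AR) component so the forecast keeps sensitivity to the input scale, can be sketched as below. The tiny tanh network stands in for the actual convolutional/recurrent layers, and all weights are random, untrained placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
window = 5
W1, b1 = rng.normal(size=(8, window), scale=0.1), np.zeros(8)
W2, b2 = rng.normal(size=8, scale=0.1), 0.0
ar_w = np.full(window, 1.0 / window)  # AR part: here a simple moving average

def predict(history):
    x = history[-window:]
    neural = W2 @ np.tanh(W1 @ x + b1) + b2  # non-linear neural component
    ar = ar_w @ x                            # linear autoregressive component
    return neural + ar                       # forecast = sum of the two parts

series = np.sin(np.linspace(0, 3, 20))
print(predict(series))
```

The AR shortcut is the part that lets the model track level shifts that a saturating non-linear network handles poorly.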
national conference on artificial intelligence, (2018)
An acceleration framework – DeepRebirth is proposed to speed up the neural networks with satisfactory accuracy, which operates by re-generating new tensor layers from optimizing non-tensor layers and their neighborhood units
Cited by 53 · Views 55
national conference on artificial intelligence, (2018)
Inspired by the analysis in prior work, we propose a method to measure the effect of parameter quantization errors in individual layers on the overall model prediction accuracy
Cited by 45 · Views 38
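The per-layer perturbation being measured is weight quantization; a minimal sketch of uniform quantize-dequantize for one layer's weights is below (the paper's accuracy-impact measurement itself is not reproduced; bit-widths are illustrative):

```python
import numpy as np

def quantize(w, bits=8):
    # Symmetric uniform quantization: map weights onto 2^(bits-1)-1 levels
    # per sign, then back to floats ("quantize-dequantize").
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for bits in (8, 4, 2):
    err = np.mean((w - quantize(w, bits)) ** 2)
    print(bits, err)  # quantization error grows as the bit-width shrinks
```

Measuring how this per-layer error propagates to the final prediction accuracy is what lets one assign different bit-widths to different layers.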
Applied Soft Computing, (2018): 251-258
Experimental results show that the DBN approach can be used to determine the initial deep neural network parameters (biases and weights), which in most cases outperforms the plain DNN method
Cited by 42 · Views 42
IEEE Access, (2018): 9454-9463
Considering that the above-mentioned information may be difficult to obtain for most recommendation systems, in this paper we propose a recommendation model based on a deep neural network that does not need any extra information
Cited by 35 · Views 58
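A recommender that needs no extra information beyond user-item interactions can be sketched with learned embeddings scored by dot product. The embeddings below are random placeholders (training on the interaction matrix is omitted), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 3
user_emb = rng.normal(size=(n_users, dim))  # one latent vector per user
item_emb = rng.normal(size=(n_items, dim))  # one latent vector per item

def recommend(u, k=2):
    scores = user_emb[u] @ item_emb.T       # score every item for user u
    return np.argsort(scores)[::-1][:k]     # top-k items by predicted score

print(recommend(0))
```

A deep model replaces the dot product with a learned non-linear interaction function, but the inputs remain just user and item identities.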