Comparison Of Two Methods Of Adding Jitter To Artificial Neural Network

CARS 2004: COMPUTER ASSISTED RADIOLOGY AND SURGERY, PROCEEDINGS(2004)

Abstract
We compare two methods of training artificial neural networks (ANNs) that potentially reduce the risk of the neural network overfitting the training data set. We refer to these methods as training with jitter. In one method of training with jitter, a new random noise vector is added to each training-data vector between successive iterations. In this work, we propose a different method of training with jitter, in which instead of adding different random noise vectors between iterations, a number of random vectors are used to expand the training data set prior to training. This artificially expanded data set is then used to train the artificial neural network in the conventional manner. These two methods are compared to the conventional method of training artificial neural networks. We find that although training with a single expanded training data set does increase the performance of the neural networks, overfitting can still occur after a large number of training iterations. (C) 2004 CARS and Elsevier B.V. All rights reserved.
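The two jittering schemes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: a single logistic unit stands in for the ANN, and all function names, noise scales, and hyperparameters are illustrative. Method 1 draws a fresh noise vector for every training vector between successive iterations; method 2 (the paper's proposal) expands the training set once with jittered replicas and then trains conventionally.

```python
import numpy as np

# Toy stand-in for the training-data vectors (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train_jitter_per_iteration(X, y, epochs=50, sigma=0.1, lr=0.1):
    """Method 1: add a new random noise vector to each training-data
    vector between successive iterations."""
    noise_rng = np.random.default_rng(1)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Fresh noise every iteration, so the network never sees
        # exactly the same inputs twice.
        Xj = X + noise_rng.normal(scale=sigma, size=X.shape)
        p = 1.0 / (1.0 + np.exp(-Xj @ w))  # logistic unit as ANN stand-in
        w -= lr * Xj.T @ (p - y) / len(y)
    return w

def train_jitter_by_expansion(X, y, copies=10, epochs=50, sigma=0.1, lr=0.1):
    """Method 2: expand the training set once with jittered copies,
    then train on the fixed expanded set in the conventional manner."""
    noise_rng = np.random.default_rng(2)
    Xe = np.vstack([X + noise_rng.normal(scale=sigma, size=X.shape)
                    for _ in range(copies)])
    ye = np.tile(y, copies)
    w = np.zeros(Xe.shape[1])
    for _ in range(epochs):
        # Conventional training: the (expanded) data never changes.
        p = 1.0 / (1.0 + np.exp(-Xe @ w))
        w -= lr * Xe.T @ (p - ye) / len(ye)
    return w
```

Note that in method 2 the expanded set is fixed, which is consistent with the abstract's finding that overfitting can still occur after many iterations: the network can eventually memorize the finite expanded set, whereas per-iteration jitter presents unbounded input variation.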
Keywords
artificial neural networks, computer-aided diagnosis, classification