How to Limit Label Dissipation in Neural-network Validation: Exploring Label-free Early-stopping Heuristics

ACM Journal on Computing and Cultural Heritage (2023)

Abstract
In recent years, deep learning (DL) has achieved impressive successes in many application domains, including Handwritten Text Recognition. However, DL methods demand a long training process and a huge amount of human-labeled data. To address these issues, we explore several label-free heuristics for detecting the early-stopping point when training convolutional neural networks: (1) the cumulative distribution of the standard deviation of kernel weights (SKW); (2) the moving standard deviation of SKW; and (3) the standard deviation of the sum of weights over a window in the epoch series. We applied the proposed methods to the common RIMES and Bentham data sets as well as another highly challenging historical data set. Compared with the usual stopping criterion, which uses labels for validation, the label-free heuristics are at least 10 times faster per epoch when the same training set is used. Although the alternative stopping heuristics may require additional epochs, their total computing time never reaches that of the usual criterion. On the test set, the character error rate (%) of the label-free heuristics is about one percentage point lower than that of the usual stopping criterion. The label-free early-stopping methods have two benefits: they do not require a computationally intensive evaluation of a validation set in each epoch, and all labels can be used for training, which specifically benefits underrepresented word or letter classes.
Keywords
Deep learning, early-stopping criterion, convolutional neural networks, historical handwritten word recognition
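
To make heuristic (2) concrete, the sketch below tracks the standard deviation of the convolutional kernel weights (SKW) after each epoch and stops training once the moving standard deviation of SKW over a window of recent epochs falls below a threshold, so no validation pass over labeled data is needed. This is a minimal PyTorch sketch under assumptions of our own: the helper names (`skw_of`, `MovingStdStopper`), the window size, and the threshold are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch of heuristic (2): early stopping on the moving standard
# deviation of SKW. Window size and threshold are assumed values, not taken
# from the paper.
from collections import deque

import torch
import torch.nn as nn


def skw_of(model: nn.Module) -> float:
    """Standard deviation of all convolutional kernel weights, pooled."""
    kernels = [m.weight.detach().flatten()
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    return torch.cat(kernels).std().item()


class MovingStdStopper:
    """Signal a stop when SKW barely changes over the last `window` epochs."""

    def __init__(self, window: int = 10, threshold: float = 1e-4):
        self.history = deque(maxlen=window)  # SKW values of recent epochs
        self.threshold = threshold

    def should_stop(self, model: nn.Module) -> bool:
        self.history.append(skw_of(model))
        if len(self.history) < self.history.maxlen:
            return False  # not enough epochs observed yet
        moving_std = torch.tensor(list(self.history)).std().item()
        return moving_std < self.threshold
```

In a training loop, one would call `stopper.should_stop(model)` once per epoch after the optimizer step; because the check reads only the model's weights, the entire labeled set remains available for training.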