A novel shape augmentation approach in training neural networks using Branch Length Similarity entropy

Physica A: Statistical Mechanics and its Applications (2023)

Abstract
This study proposes a new approach to training deep neural networks for shape classification. The approach uses Branch Length Similarity (BLS) entropy, a measure defined on simple networks. The BLS entropy profile of a shape is obtained by computing entropy values while traversing the boundary pixels of the shape in either a clockwise or counterclockwise direction. Instead of training the deep neural network directly on the shape, the network is trained on the BLS entropy profile as a representation of the shape. To assess the effectiveness of this approach, we conducted experiments on the MPEG-7 dataset. We selected 40 shape classes, each consisting of 10 similar shapes, and created 20 deformed versions of each shape by modifying its aspect ratio. We trained GoogLeNet on the deformed shapes (referred to as GoogLeNet_Shape), on images of the entropy profiles derived from the deformed shapes (referred to as GoogLeNet_Entropy), and on circularly shifted images of those profiles (referred to as GoogLeNet_Shifted). On a classification task involving rotated shapes that were not included in the training set, GoogLeNet_Shape achieved 41.66% accuracy, GoogLeNet_Entropy 45.47%, and GoogLeNet_Shifted 82.26%. We likewise compared the test accuracy of GoogLeNet_Shape, GoogLeNet_Entropy, and GoogLeNet_Shifted under rotation and scaling transformations. Interestingly, GoogLeNet_Shape performed well on the transformations it was trained on, while GoogLeNet_Shifted exhibited superior performance on transformations that were not part of the training data, all under the condition of the same amount of training data. (c) 2023 Elsevier B.V. All rights reserved.
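The pipeline in the abstract (a per-boundary-pixel entropy profile plus circular-shift augmentation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the BLS entropy of a reference pixel is the normalized Shannon entropy of its distances (branch lengths) to the other boundary pixels, and that the circular-shift augmentation is a `np.roll` of the resulting profile; the function names are hypothetical.

```python
import numpy as np

def bls_entropy_profile(boundary):
    """One BLS entropy value per boundary pixel, traversed in order.

    For each boundary pixel taken as the reference node, the branch
    lengths are its Euclidean distances to the remaining boundary
    pixels; the normalized Shannon entropy of those lengths (in [0, 1])
    is recorded as that pixel's BLS entropy.
    """
    pts = np.asarray(boundary, dtype=float)   # shape (N, 2)
    n = len(pts)
    profile = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        d = d[d > 0]                          # drop the zero self-distance
        p = d / d.sum()                       # branch-length probabilities
        profile[i] = -(p * np.log(p)).sum() / np.log(len(p))
    return profile

def circular_shift_augment(profile, n_shifts):
    """Generate circularly shifted copies of an entropy profile.

    Shifting the 1-D profile corresponds to changing the starting pixel
    of the boundary traversal, which is how rotated versions of a shape
    appear in profile space.
    """
    step = max(1, len(profile) // n_shifts)
    return [np.roll(profile, k * step) for k in range(n_shifts)]
```

Because a rotation of the shape mostly changes where the boundary traversal starts, training on shifted profiles exposes the network to rotation-like variation without rendering any rotated images.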
Keywords
Shape classification, Branch Length Similarity (BLS) entropy, Shape augmentation, Training process