Deep Neural Networks with Mixture of Experts Layers for Complex Event Recognition from Images

Mingyao Li, Sei-ichiro Kamata

2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR) (2018)

Abstract
Driven by the needs of real-world applications, event recognition from static images has become increasingly popular in recent years. Despite solid progress, recognizing events in images with complex backgrounds, such as those in the WIDER dataset, remains difficult. In this paper, we show that this gap is probably caused by the large discrepancy within the data. Most existing methods address the problem through various modifications of pre-trained CNN models. While we follow this line of work, after reviewing existing methods we take two different approaches. First, we show that a deep single-channel model with an end-to-end structure suits this problem better than multi-channel or multi-task models, which leads us to propose a model built by modifying a single pre-trained ResNet channel. Second, we propose a Mixture of Experts (MoE) neural network layer to overcome the large discrepancy within the data. To improve performance and enhance the specialization of the MoE layer, we also apply a simple neural network transfer method, Elastic Weight Consolidation, to transfer knowledge from the SocEID dataset. The results show that our method improves accuracy on the WIDER dataset by 9.4% over the state of the art, with lower computation time and memory consumption. Experiments validating our method are also presented.
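The core idea of an MoE layer, as named in the abstract, is that a gating network computes a per-input weighting over several parallel "expert" sub-networks, and the layer output is the gate-weighted combination of the expert outputs. The following is a minimal NumPy sketch of that general mechanism; the dimensions, dense (non-sparse) gating, and linear experts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the gating weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Minimal dense Mixture-of-Experts layer (illustrative sketch):
    a gating network softmax-weights the outputs of several
    linear experts for each input in the batch."""

    def __init__(self, in_dim, out_dim, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per expert: (n_experts, in_dim, out_dim).
        self.experts = rng.normal(0.0, 0.1, (n_experts, in_dim, out_dim))
        # Gating network weights: (in_dim, n_experts).
        self.gate = rng.normal(0.0, 0.1, (in_dim, n_experts))

    def forward(self, x):
        # Gating weights per sample: (batch, n_experts), rows sum to 1.
        g = softmax(x @ self.gate)
        # Every expert's output for every sample: (n_experts, batch, out_dim).
        outs = np.einsum('bi,eio->ebo', x, self.experts)
        # Gate-weighted sum over experts: (batch, out_dim).
        return np.einsum('be,ebo->bo', g, outs)

# Hypothetical dimensions, for illustration only.
moe = MoELayer(in_dim=8, out_dim=4, n_experts=3)
x = np.ones((2, 8))
y = moe.forward(x)
print(y.shape)  # (2, 4)
```

Because each expert can specialize on a subset of the input distribution while the gate routes inputs to the right expert, a layer of this shape is a natural fit for data with large internal discrepancy, which is the motivation the abstract gives.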
Keywords
Task analysis,Image recognition,Neural networks,Memory management,Computational modeling,Feature extraction,Pattern recognition