
MoDLF: a model-driven deep learning framework for autonomous vehicle perception (AVP).

ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MoDELS), 2022

Abstract
Modern vehicles are extremely complex embedded systems that integrate software and hardware from a large set of contributors. Modeling standards like EAST-ADL have shown promising results in reducing complexity and expediting system development. However, such standards are unable to cope with the growing demands of the automotive industry. A typical example of this phenomenon is autonomous vehicle perception (AVP), where deep learning architectures (DLA) are required for computer vision (CV) tasks like real-time object recognition and detection. Existing modeling standards in the automotive industry are unable to manage such CV tasks at a higher abstraction level. Consequently, system development is currently accomplished through modeling approaches like EAST-ADL, while DLA-based CV features for AVP are implemented in isolation at a lower abstraction level. This significantly compromises productivity due to integration challenges. In this article, we introduce MoDLF, a Model-Driven Deep Learning Framework for designing deep convolutional neural network (DCNN) architectures for AVP tasks. In particular, Model Driven Architecture (MDA) is leveraged to propose a metamodel, along with a conformant graphical modeling workbench, to model DCNNs for CV tasks in AVP at a higher abstraction level. Furthermore, Model-To-Text (M2T) transformations are provided to generate executable code for MATLAB® and Python. The framework is validated via two case studies on benchmark datasets for key AVP tasks. The results show that MoDLF effectively enables model-driven architectural exploration of deep convnets for AVP system development while supporting integration with established standards like EAST-ADL.
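The abstract's central mechanism is a Model-To-Text (M2T) transformation: a DCNN described at a high abstraction level is turned into executable code. The sketch below illustrates that idea in plain Python; the layer schema, template strings, and function names are illustrative assumptions, not MoDLF's actual metamodel or generator.

```python
# Hypothetical M2T sketch: a declarative DCNN "model" (an abstract layer list,
# standing in for a metamodel instance) is transformed into executable
# Keras-style Python source. All names and fields here are assumptions.

DCNN_MODEL = [
    {"type": "Conv2D", "filters": 32, "kernel": 3, "activation": "relu"},
    {"type": "MaxPooling2D", "pool": 2},
    {"type": "Flatten"},
    {"type": "Dense", "units": 10, "activation": "softmax"},
]

# One text template per layer kind, in the spirit of an M2T template language.
TEMPLATES = {
    "Conv2D": "layers.Conv2D({filters}, {kernel}, activation='{activation}')",
    "MaxPooling2D": "layers.MaxPooling2D({pool})",
    "Flatten": "layers.Flatten()",
    "Dense": "layers.Dense({units}, activation='{activation}')",
}

def generate_code(model):
    """Emit Python source for a sequential network from the abstract model."""
    lines = [
        "from tensorflow.keras import layers, models",
        "",
        "net = models.Sequential([",
    ]
    for layer in model:
        # Fill the layer's template from its attributes; unused keys are ignored.
        lines.append("    " + TEMPLATES[layer["type"]].format(**layer) + ",")
    lines.append("])")
    return "\n".join(lines)

print(generate_code(DCNN_MODEL))
```

The same abstract model could drive a second template set targeting MATLAB, which is how one metamodel can feed both code-generation backends mentioned in the abstract.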