A Collaborative AI-Enabled Pretrained Language Model for AIoT Domain Question Answering

IEEE Transactions on Industrial Informatics (2022)

Citations: 8 | Views: 100
Abstract
Large-scale knowledge in the artificial intelligence of things (AIoT) field urgently needs effective models to understand human language and automatically answer questions. Pretrained language models achieve state-of-the-art performance on some question answering (QA) datasets, but few models can answer questions about AIoT domain knowledge. Currently, the AIoT domain lacks sufficient QA datasets and large-scale pretraining corpora. In this article, we propose RoBERTa(AIoT) to address the lack of high-quality, large-scale labeled AIoT QA datasets. We construct an AIoT corpus to further pretrain RoBERTa and BERT. RoBERTa(AIoT) and BERT(AIoT) leverage unsupervised pretraining on a large corpus composed of AIoT-oriented Wikipedia webpages to learn more domain-specific context and improve performance on AIoT QA tasks. To fine-tune and evaluate the models, we construct three AIoT QA datasets based on community QA websites. We evaluate our approach on these datasets, and the experimental results demonstrate its significant improvements.
Keywords
Artificial intelligence of things (AIoT), BERT, domain-specific, question answering (QA), RoBERTa