Artificial Intelligence Security in 5G Networks: Adversarial Examples for Estimating a Travel Time Task

IEEE Vehicular Technology Magazine (2020)

Abstract
With the rapid development of the Internet, the next-generation network (5G) has emerged. 5G can support a variety of new applications, such as the Internet of Things (IoT), virtual reality (VR), and the Internet of Vehicles. Most of these new applications depend on deep learning algorithms, which have made great advances in many areas of artificial intelligence (AI). However, researchers have found that AI algorithms based on deep learning pose numerous security problems. For example, deep learning is susceptible to a well-designed input sample formed by adding small perturbations to the original sample. This well-designed input with small perturbations, which are imperceptible to humans, is called an adversarial example. An adversarial example is similar to a truth example, but it can render the deep learning model invalid. In this article, we generate adversarial examples for spatiotemporal data. Based on the travel time estimation (TTE) task, we use two methods, white-box and black-box attacks, to invalidate deep learning models. Experimental results show that the adversarial examples successfully attack the deep learning model and thus that AI security is a major challenge for 5G.
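The abstract describes adversarial examples as small, human-imperceptible perturbations that flip a model's output. The paper's own TTE model and attack are not given here, so the following is only a generic sketch of the white-box idea (an FGSM-style perturbation) on a toy logistic-regression "model" whose input gradient has a closed form; all weights and inputs below are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a sample the model classifies correctly.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])  # true label y = 1
y = 1.0

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

# White-box attack: the attacker knows the model, so it can compute the
# gradient of the loss with respect to the *input*. For logistic regression
# with cross-entropy loss, dL/dx = (p - y) * w in closed form.
p = predict(x)
grad_x = (p - y) * w

# FGSM-style step: perturb each input coordinate by eps in the direction
# that increases the loss (the sign of the gradient).
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # original sample: probability above 0.5, correct
print(predict(x_adv))  # perturbed sample: probability pushed below 0.5
```

A black-box attack, by contrast, cannot compute `grad_x` directly and must estimate the gradient from queries or transfer adversarial examples from a substitute model; the perturbation step itself is the same idea.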
Keywords
adversarial example, truth example, deep learning model, travel time estimation task, AI security, artificial intelligence security, next-generation network, 5G networks, deep learning algorithms, AI algorithms, Internet, spatiotemporal data, black-box attacks, white-box attacks