
Safety of Multimodal Large Language Models on Images and Text

IJCAI 2024

Abstract
Attracted by the impressive power of Multimodal Large Language Models (MLLMs), the public is increasingly using them to improve the efficiency of daily work. Nonetheless, the vulnerability of MLLMs to unsafe instructions poses serious safety risks when these models are deployed in real-world scenarios. In this paper, we systematically survey current efforts on the evaluation, attack, and defense of MLLMs' safety on images and text. We begin with an overview of MLLMs on images and text and our understanding of safety, which clarifies the scope of this survey. Then, we review the evaluation datasets and metrics for measuring the safety of MLLMs. Next, we comprehensively present attack and defense techniques related to MLLMs' safety. Finally, we analyze several unsolved issues and discuss promising research directions. The relevant papers are collected at https://github.com/isXinLiu/Awesome-MLLM-Safety.
Keywords
AI Ethics, Trust, Fairness: General; AI Ethics, Trust, Fairness: ETF: Safety and Robustness