Image Safeguarding: Reasoning with Conditional Vision Language Model and Obfuscating Unsafe Content Counterfactually
CoRR (2024)
Abstract
Social media platforms are being increasingly used by malicious actors to
share unsafe content, such as images depicting sexual activity, cyberbullying,
and self-harm. Consequently, major platforms use artificial intelligence (AI)
and human moderation to obfuscate such images and make them safer. Two critical
needs for obfuscating unsafe images are that an accurate rationale for
obfuscating image regions must be provided, and that the sensitive regions should
be obfuscated (e.g., by blurring) for users' safety. This process involves
addressing two key problems: (1) the platform must provide an accurate rationale
for obfuscating an unsafe image that is grounded in attributes specific to that
image, and (2) the unsafe regions of the image must be minimally obfuscated
while keeping the safe regions visible. In this work,
we address these key issues in two steps. First, we design a vision language
model (VLM) conditioned on pre-trained unsafe image classifiers to perform
visual reasoning and provide an accurate rationale grounded in unsafe image
attributes. Second, we propose a counterfactual explanation algorithm that
minimally identifies and obfuscates unsafe regions for safe viewing: it uses
the unsafe image classifier's attribution matrix to guide segmentation into
more optimal subregions, and then applies an informed greedy search over the
attribution scores to determine the minimum number of subregions that must be
obfuscated to change the classifier's output. Extensive experiments on
uncurated data from social networks demonstrate the efficacy of our proposed
method. We make our code available at:
https://github.com/SecureAIAutonomyLab/ConditionalVLM
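
The attribution-guided greedy search described in the abstract can be sketched
roughly as follows. This is a minimal illustrative sketch, not the authors'
released implementation; the helpers segment_with_attribution, blur_region, and
unsafe_classifier are hypothetical placeholders for the attribution-guided
segmentation, the blurring operation, and the pre-trained unsafe image
classifier, respectively.

```python
# Illustrative sketch of attribution-guided greedy obfuscation.
# All helper names below are hypothetical placeholders, not the released API.

def greedy_obfuscate(image, unsafe_classifier, attribution_matrix,
                     segment_with_attribution, blur_region,
                     unsafe_threshold=0.5):
    """Blur the fewest attribution-ranked subregions needed to drop the
    unsafe classifier's score below `unsafe_threshold`."""
    # Use the classifier's attribution matrix to guide segmentation into
    # candidate subregions (assumed to return objects with a boolean `mask`).
    subregions = segment_with_attribution(image, attribution_matrix)

    # Rank subregions by total attribution score, most unsafe first.
    ranked = sorted(subregions,
                    key=lambda region: attribution_matrix[region.mask].sum(),
                    reverse=True)

    # Informed greedy search: blur one subregion at a time until the
    # classifier no longer labels the image as unsafe.
    obfuscated = image.copy()
    for region in ranked:
        obfuscated = blur_region(obfuscated, region)
        if unsafe_classifier(obfuscated) < unsafe_threshold:
            break  # minimal set of obfuscated subregions found
    return obfuscated
```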