Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search
WWW 2024 (2024)
Abstract
In mixed-initiative conversational search systems, clarifying questions are
used to help users who struggle to express their intentions in a single query.
These questions aim to uncover users' information needs and resolve query
ambiguities. We hypothesize that in scenarios where multimodal information is
pertinent, the clarification process can be improved by using non-textual
information. Therefore, we propose to add images to clarifying questions and
formulate the novel task of asking multimodal clarifying questions in
open-domain, mixed-initiative conversational search systems. To facilitate
research into this task, we collect a dataset named Melon that contains over 4k
multimodal clarifying questions, enriched with over 14k images. We also propose
a multimodal query clarification model named Marto, which adopts a prompt-based,
generative fine-tuning strategy that trains the different stages of the task
with different prompts. Several analyses are conducted to understand the
importance of multimodal content during the query clarification phase.
Experimental results indicate that the addition of images leads to significant
improvements of up to 90% compared to a baseline without
images. Extensive analyses are also performed to show the superiority of Marto
compared with discriminative baselines in terms of effectiveness and
efficiency.
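The multi-stage, prompt-based generative strategy described in the abstract can be sketched as follows. This is a minimal illustration of the general technique (formatting stage-specific prompts for a text-to-text model); the stage names and prompt templates here are assumptions for illustration, not the paper's actual prompts.

```python
def build_prompt(stage, query, question=None, answer=None):
    """Format one training input for a generative (seq2seq) model.

    Each fine-tuning stage uses its own prompt template, so a single
    model can learn distinct sub-tasks from differently prefixed inputs.
    Stage names and templates are hypothetical, not taken from the paper.
    """
    if stage == "clarification_need":
        # Stage 1: decide whether the query is ambiguous.
        return f"Does the query need clarification? query: {query}"
    if stage == "question_generation":
        # Stage 2: generate a (multimodal) clarifying question.
        return f"Generate a clarifying question. query: {query}"
    if stage == "document_retrieval":
        # Stage 3: use the question and the user's answer for retrieval.
        return (f"Rank documents. query: {query} "
                f"question: {question} answer: {answer}")
    raise ValueError(f"unknown stage: {stage}")


# Example: the same query yields different model inputs per stage.
print(build_prompt("question_generation", "jaguar"))
print(build_prompt("document_retrieval", "jaguar",
                   question="Do you mean the animal or the car?",
                   answer="the animal"))
```

In practice, each formatted string would be fed to a pretrained generative model (e.g., a T5-style encoder–decoder) whose target text differs by stage; casting all stages as text generation is what allows one model and one training objective to cover the full pipeline.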