Multimodal Data and Resource Efficient Device-Directed Speech Detection with Large Foundation Models
CoRR (2023)
Abstract
Interactions with virtual assistants typically start with a trigger phrase
followed by a command. In this work, we explore the possibility of making these
interactions more natural by eliminating the need for a trigger phrase. Our
goal is to determine whether a user addressed the virtual assistant based on
signals obtained from the streaming audio recorded by the device microphone. We
address this task by combining 1-best hypotheses and decoder signals from an
automatic speech recognition system with acoustic representations from an audio
encoder as input features to a large language model (LLM). In particular, we
are interested in data- and resource-efficient systems that require only a small
amount of training data and can operate in scenarios with only a single frozen
LLM available on a device. For this reason, our model is trained on 80k or fewer
examples of multimodal data using a combination of low-rank adaptation and
prefix tuning. We compare the proposed system to unimodal baselines and show
that the multimodal approach achieves lower equal-error-rates (EERs), while
using only a fraction of the training data. We also show that low-dimensional
specialized audio representations lead to lower EERs than high-dimensional
general audio representations.
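
The abstract describes the architecture only at a high level: modality-specific features are mapped into the frozen LLM's input space and the model is adapted with low-rank adaptation (LoRA) and prefix tuning. As a purely illustrative sketch of that fusion, the module below projects audio-encoder features and ASR decoder signals into the LLM embedding space and concatenates them with the token embeddings of the 1-best hypothesis. All names and dimensions here (`d_audio`, `d_decoder`, `n_prefix`, the classification head) are assumptions made for the sketch, not details from the paper, and the LLM is assumed to expose a HuggingFace-style `inputs_embeds` interface; the LoRA adapters would additionally be attached to the frozen LLM's attention weights (e.g. via the `peft` library) and are omitted here.

```python
import torch
import torch.nn as nn


class MultimodalDDSDetector(nn.Module):
    """Illustrative sketch (not the authors' code) of fusing audio,
    ASR decoder signals, and 1-best text into a frozen LLM for
    device-directed speech detection."""

    def __init__(self, llm, d_audio=512, d_decoder=16, d_model=4096,
                 n_prefix=8):
        super().__init__()
        self.llm = llm  # pretrained LLM, kept frozen on device
        for p in self.llm.parameters():
            p.requires_grad = False
        # Learned projections mapping each modality into the LLM
        # embedding dimension (dimensions are assumptions).
        self.audio_proj = nn.Linear(d_audio, d_model)
        self.decoder_proj = nn.Linear(d_decoder, d_model)
        # Trainable soft prefix (prefix tuning). LoRA adapters on the
        # frozen LLM are omitted from this sketch.
        self.prefix = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)
        # Binary head: device-directed vs. not device-directed.
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, text_emb, audio_feat, decoder_feat):
        # text_emb:     (B, T_text, d_model) embeddings of the 1-best hypothesis
        # audio_feat:   (B, T_audio, d_audio) audio-encoder representations
        # decoder_feat: (B, T_dec, d_decoder) ASR decoder signals
        batch = text_emb.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        fused = torch.cat(
            [prefix,
             self.audio_proj(audio_feat),
             self.decoder_proj(decoder_feat),
             text_emb],
            dim=1,
        )
        # Assumes a HuggingFace-style model accepting inputs_embeds.
        hidden = self.llm(inputs_embeds=fused).last_hidden_state
        # Score the final position; a sigmoid over this logit gives
        # the device-directedness probability used to compute EER.
        return self.classifier(hidden[:, -1])
```

In such a setup only the projections, the prefix, the classification head, and the LoRA adapters would receive gradients, which is consistent with the abstract's constraint of a single frozen LLM and a small (at most 80k-example) training set.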