VocalPrint: exploring a resilient and secure voice authentication via mmWave biometric interrogation

SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems, Virtual Event, Japan, November 2020

Abstract
With the continuing growth of voice-controlled devices, voice biometrics have been widely used for user identification. However, voice biometrics are vulnerable to replay attacks and ambient noise. We identify that the fundamental vulnerability of voice biometrics is rooted in its indirect sensing modality (e.g., the microphone). In this paper, we present VocalPrint, a resilient mmWave interrogation system that directly captures and analyzes vocal vibrations for user authentication. Specifically, VocalPrint exploits the unique disturbance of the skin-reflected radio frequency (RF) signals around the user's near-throat region caused by vocal vibrations during speech. Complex ambient noise is isolated from the RF signal using a novel resilience-aware clutter suppression approach that preserves fine-grained vocal biometric properties. Afterward, we extract text-independent vocal tract and vocal source features and feed them to an ensemble classifier for user authentication. VocalPrint is practical as it leverages low-cost, portable, and energy-efficient hardware, allowing an effortless transition to smartphones, while, owing to its non-contact nature, offering usability comparable to typical voice authentication systems. Our experimental results from 41 participants with different interrogation distances, orientations, and body motions show that VocalPrint achieves over 96% authentication accuracy even under unfavorable conditions. We demonstrate the resilience of our system against complex noise interference and spoofing attacks of various threat levels.
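To make the pipeline described above concrete, the sketch below illustrates one plausible arrangement of its three stages: clutter suppression on the reflected signal, text-independent feature extraction, and an ensemble classifier. This is not the paper's implementation; the moving-average background subtraction, the log band-energy features, the soft-voting ensemble, and all function names (suppress_clutter, extract_features, build_ensemble) are illustrative assumptions standing in for the resilience-aware clutter suppression and vocal tract/source features that VocalPrint actually uses.

import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def suppress_clutter(signal, window=64):
    # Crude stand-in for resilience-aware clutter suppression: subtract a
    # moving-average background so that only the fast, vibration-induced
    # component of the reflected signal remains.
    kernel = np.ones(window) / window
    background = np.convolve(signal, kernel, mode="same")
    return signal - background


def extract_features(vibration, n_bands=20):
    # Toy text-independent features: log band energies of the vibration
    # spectrum plus basic statistics, standing in for the vocal tract and
    # vocal source features described in the paper.
    spectrum = np.abs(np.fft.rfft(vibration))
    bands = np.array_split(spectrum, n_bands)
    band_energy = np.log1p([b.sum() for b in bands])
    return np.concatenate([band_energy, [vibration.mean(), vibration.std()]])


def build_ensemble():
    # A soft-voting ensemble, in the spirit of the ensemble classifier the
    # extracted features are fed into for authentication.
    return VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
            ("lr", make_pipeline(StandardScaler(),
                                 LogisticRegression(max_iter=1000))),
        ],
        voting="soft",
    )


# Hypothetical usage with synthetic data (not real mmWave captures):
rng = np.random.default_rng(0)
recordings = rng.standard_normal((40, 2048))   # 40 fake vibration recordings
X = np.stack([extract_features(suppress_clutter(r)) for r in recordings])
y = rng.integers(0, 2, size=40)                # 1 = enrolled user, 0 = other
model = build_ensemble().fit(X, y)
print(model.predict(X[:5]))

The soft-voting ensemble is chosen here only because it accepts any fixed-length feature vector and outputs an acceptance decision; the paper's own classifier and feature set should be consulted for a faithful reproduction.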