Discovering and Understanding Algorithmic Biases in Autonomous Pedestrian Trajectory Predictions

SenSys (2022)

Abstract
Pedestrian trajectory prediction is an important module in autonomous vehicles (AVs) to ensure safe and effective motion planning. Recently, many deep learning algorithms that achieve near real-time trajectory prediction have been developed. However, the artificial intelligence (AI) ethics community has raised critical concerns about the bias and fairness of many general deep learning algorithms. For example, most pedestrian trajectory data is collected from majority populations, and models learned from this data may not generalize well to the heterogeneous needs and behavior patterns of different pedestrian groups, especially vulnerable pedestrians such as the disabled, the elderly, and children. Biases present in trajectory prediction algorithms could mean that pedestrians from certain vulnerable demographics are more likely to be involved in vehicle crashes. In this work, we test two state-of-the-art pedestrian trajectory prediction models for age and gender biases across three different datasets. We design and utilize novel evaluation metrics for comparing model performance. We find that both models perform worse on children and the elderly compared to adults, while their performance is similar for men and women. We identify potential sources of these biases and discuss several limitations of our study. Our future work will consist of testing more models, refining our evaluation metrics, further differentiating dataset bias from algorithmic bias, and mitigating the algorithmic biases.
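The abstract does not spell out the paper's novel evaluation metrics, but a common baseline for this kind of audit is to disaggregate the standard trajectory errors (ADE and FDE) by demographic group and compare them across groups. Below is a minimal hypothetical sketch of that baseline; the function names `ade_fde` and `groupwise_errors` and the group labels are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: standard ADE/FDE trajectory errors broken down
# by a demographic label (e.g., age group), as one plausible way to
# surface group-level performance gaps. Not the paper's actual metrics.
import numpy as np

def ade_fde(pred, truth):
    """Average and Final Displacement Error for one trajectory.

    pred, truth: arrays of shape (T, 2) holding (x, y) positions
    over T predicted timesteps.
    """
    dists = np.linalg.norm(pred - truth, axis=1)  # per-step L2 error
    return dists.mean(), dists[-1]

def groupwise_errors(preds, truths, labels):
    """Aggregate mean ADE/FDE separately for each demographic label.

    preds, truths: lists of (T, 2) arrays; labels: list of group
    names (e.g., "child", "adult", "elderly") of the same length.
    """
    per_group = {}
    for p, t, g in zip(preds, truths, labels):
        ade, fde = ade_fde(p, t)
        per_group.setdefault(g, []).append((ade, fde))
    return {
        g: (np.mean([a for a, _ in v]), np.mean([f for _, f in v]))
        for g, v in per_group.items()
    }

# A between-group error ratio can then flag potential bias, e.g.:
# errors = groupwise_errors(preds, truths, labels)
# ratio = errors["child"][0] / errors["adult"][0]  # ADE ratio > 1 suggests worse performance on children
```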
Keywords
bias, fairness, trajectory prediction, algorithm evaluation