Calibration of human driving behavior and preference using vehicle trajectory data

Transportation Research Part C: Emerging Technologies (2022)

Abstract
In a recent work (Dai et al., 2021) we proposed a multi-agent computational framework in which each agent’s driving policy at the micro level is derived by maximizing its own utility function. Here we tackle the inverse problem of calibrating the utility parameters using observed vehicle trajectory data. Our starting point is to cast the state evolution as a state space model in which the driver’s decision is treated as the control input. The preference calibration can then be achieved with the maximum likelihood estimation technique of the associated Kalman filter. We explicitly illustrate our approach using the vehicle trajectory data from the Sugiyama experiment. Not only can the estimated state filter fit the observed data well for each individual vehicle, but the inferred utility functions can also quantitatively reproduce the observed collective traffic pattern. Because our forward model treats the driving decision process, which is intrinsically dynamic with multi-agent interactions, as a sequence of independent static optimization problems contingent on the state with finite look-ahead anticipation, we are able to sidestep solving interacting dynamic inversion problems. Consequently, our approach can be made computationally very efficient. We further demonstrate how the calibrated model can be used to gain insight into human driving behaviors via counterfactual simulations. A number of variations of the modeling setting show that our approach is robust and generalizes intuitively to similar driving contexts.
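To make the calibration idea concrete, the following is a minimal sketch (not the paper's actual model) of maximum likelihood estimation via a Kalman filter's prediction-error decomposition. It uses a toy scalar linear-Gaussian state-space model with one hypothetical dynamics parameter `a` standing in for the utility parameters; all names and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy stand-in for the calibration problem (illustrative, not the paper's model):
#   x_{t+1} = a * x_t + w_t,  w_t ~ N(0, q)   (state evolution)
#   y_t     = x_t + v_t,      v_t ~ N(0, r)   (observed trajectory)
# The parameter a is calibrated by maximizing the Kalman-filter
# prediction-error (innovations) likelihood.

rng = np.random.default_rng(0)
T, a_true, q, r = 300, 0.8, 0.05, 0.05

# Simulate a synthetic "trajectory"
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=T)

def neg_log_lik(a):
    """Negative log-likelihood from the Kalman filter innovations."""
    m, P = 0.0, 1.0          # filter mean and variance
    nll = 0.0
    for t in range(T):
        # Predict one step ahead
        m_pred = a * m
        P_pred = a * a * P + q
        # Innovation and its variance
        e = y[t] - m_pred
        S = P_pred + r
        nll += 0.5 * (np.log(2.0 * np.pi * S) + e * e / S)
        # Measurement update
        K = P_pred / S
        m = m_pred + K * e
        P = (1.0 - K) * P_pred
    return nll

res = minimize_scalar(neg_log_lik, bounds=(-0.99, 0.99), method="bounded")
a_hat = res.x
print(f"true a = {a_true}, estimated a = {a_hat:.3f}")
```

Because each filter step is a static update contingent on the current state estimate, the likelihood evaluation is a single linear pass over the data, which is what makes this style of calibration computationally cheap.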
Keywords
Behavioral modeling, Autonomous vehicles, Inverse Reinforcement Learning, Game theory