How to Cover up Anomalous Accesses to Electronic Health Records

Xiaojun Xu, Qingying Hao, Zhuolin Yang, Bo Li, David Liebovitz, Gang Wang, Carl A. Gunter

USENIX Security Symposium (2023)

Abstract
Systems that detect illegitimate accesses in hospital logs perform post hoc detection rather than runtime access restriction, so that broad access remains available in emergencies. We study the effectiveness of adversarial machine learning strategies against such detection systems on a large-scale dataset consisting of a year of access logs at a major hospital. We consider a range of graph-based anomaly detection systems, including heuristic-based and Graph Neural Network (GNN)-based models. We find that evasion attacks, in which covering accesses (that is, accesses made to disguise a target access) are injected during the evaluation period of the target access, can successfully fool the detection system. We also show that such evasion attacks can transfer among different detection algorithms. On the other hand, we find that poisoning attacks, in which adversaries inject covering accesses during the training phase of the model, do not effectively mislead the trained detection system unless the attacker is given unrealistic capabilities, such as injecting over 10,000 accesses or imposing a high weight on the covering accesses in the training algorithm. To examine the generalizability of these results, we also apply our attack to a state-of-the-art detection model on the LANL network lateral movement dataset and observe similar conclusions.
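
To make the covering-access idea concrete, here is a minimal, hypothetical Python sketch. It is not the paper's detectors or attack algorithm: the detector is an illustrative stand-in that scores a user-record access by the best Jaccard similarity between the accessing user and the users who already access that record, and the attacker injects covering accesses at evaluation time to inflate that similarity before making the target access. All names (AccessGraph, anomaly_score, "doc", "attacker", the record IDs) are invented for illustration.

from collections import defaultdict

class AccessGraph:
    """Bipartite graph of users and the EHR records they have accessed."""
    def __init__(self):
        self.user_records = defaultdict(set)   # user -> records accessed
        self.record_users = defaultdict(set)   # record -> users who accessed it

    def add_access(self, user, record):
        self.user_records[user].add(record)
        self.record_users[record].add(user)

    def anomaly_score(self, user, record):
        """Toy heuristic: 1 minus the best Jaccard similarity between `user`
        and any other user who already accessed `record`; 1.0 = most anomalous."""
        mine = self.user_records[user]
        peers = self.record_users[record] - {user}
        if not mine or not peers:
            return 1.0
        best = 0.0
        for p in peers:
            theirs = self.user_records[p]
            best = max(best, len(mine & theirs) / len(mine | theirs))
        return 1.0 - best

g = AccessGraph()
for r in ["pT", "p1", "p2", "p3"]:         # a clinician's normal patient panel
    g.add_access("doc", r)
g.add_access("attacker", "pX")             # attacker's unrelated access history

print(g.anomaly_score("attacker", "pT"))   # 1.0: no overlap with pT's users
for r in ["p1", "p2", "p3"]:               # inject covering accesses first
    g.add_access("attacker", r)
print(g.anomaly_score("attacker", "pT"))   # 0.4: target access now blends in

Note that in this toy setting the covering accesses are themselves extra events a detector could flag, which is part of what makes evasion nontrivial in practice; the heuristic and GNN-based detectors the paper studies use far richer graph features than this single similarity score.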