In-n-Out: Calibrating Graph Neural Networks for Link Prediction
CoRR (2024)
Abstract
Deep neural networks are notoriously miscalibrated, i.e., their outputs do
not reflect the true probability of the event we aim to predict. While networks
for tabular or image data are usually overconfident, recent works have shown
that graph neural networks (GNNs) show the opposite behavior for node-level
classification. But what happens when we are predicting links? We show that, in
this case, GNNs often exhibit a mixed behavior. More specifically, they may be
overconfident in negative predictions while being underconfident in positive
ones. Based on this observation, we propose IN-N-OUT, the first-ever method to
calibrate GNNs for link prediction. IN-N-OUT is based on two simple intuitions:
i) attributing true/false labels to an edge while respecting a GNN's prediction
should cause only small fluctuations in that edge's embedding; and, conversely,
ii) if we label that same edge contradicting our GNN, embeddings should change
more substantially. An extensive experimental campaign shows that IN-N-OUT
significantly improves the calibration of GNNs in link prediction, consistently
outperforming the baselines available – which are not designed for this
specific task.
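The two intuitions behind IN-N-OUT can be illustrated with a toy sketch (this is not the paper's actual method; the graph, node features, and single mean-aggregation layer are all hypothetical stand-ins). Forcing an edge label that agrees with the current graph structure leaves the edge embedding untouched, while forcing the contradicting label perturbs it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph (symmetric adjacency) and random node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))

def node_embeddings(adj):
    # One round of mean-aggregation message passing (a stand-in for a GNN layer).
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    H = (adj @ X + X) / deg  # aggregate neighbors plus self, then transform
    return np.tanh(H @ W)

def edge_embedding(adj, u, v):
    # Edge embedding as the concatenation of its endpoints' embeddings.
    H = node_embeddings(adj)
    return np.concatenate([H[u], H[v]])

def drift_if_labeled(u, v, label):
    """Embedding change when edge (u, v) is forced to `label` (1 = present)."""
    base = edge_embedding(A, u, v)
    A_mod = A.copy()
    A_mod[u, v] = A_mod[v, u] = float(label)
    return np.linalg.norm(edge_embedding(A_mod, u, v) - base)

# Edge (0, 1) is present: labeling it "true" agrees with the structure
# (zero drift), while labeling it "false" contradicts it (larger drift).
print(drift_if_labeled(0, 1, 1))
print(drift_if_labeled(0, 1, 0))
```

The gap between the two drift magnitudes is the kind of signal a calibrator could consume to adjust the GNN's confidence on that edge.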