Rigorous Assessment of Model Inference Accuracy using Language Cardinality
arXiv (2022)
Abstract
Models such as finite state automata are widely used to abstract the behavior
of software systems by capturing the sequences of events observable during
their execution. Nevertheless, models rarely exist in practice and, when they
do, get easily outdated; moreover, manually building and maintaining models is
costly and error-prone. As a result, a variety of model inference methods that
automatically construct models from execution traces have been proposed to
address these issues.
However, performing a systematic and reliable accuracy assessment of inferred
models remains an open problem. Even when a reference model is given, most
existing model accuracy assessment methods may return misleading and biased
results. This is mainly due to their reliance on statistical estimators computed
over a finite number of randomly generated traces, which introduces avoidable
uncertainty into the estimate and makes the results sensitive to the parameters
of the random trace generation process.
This paper addresses this problem by developing a systematic approach based
on analytic combinatorics that minimizes bias and uncertainty in model accuracy
assessment by replacing statistical estimation with deterministic accuracy
measures. We experimentally demonstrate the consistency and applicability of
our approach by assessing the accuracy of models inferred by state-of-the-art
inference tools against reference models from established specification mining
benchmarks.
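To make the core idea concrete: when both the inferred and the reference model are deterministic finite automata, language cardinalities can be computed exactly by dynamic programming over a product automaton, so a precision-style measure needs no random trace sampling at all. The sketch below is illustrative only; the DFA encoding, helper names, and the specific measure are assumptions for demonstration, not the paper's actual implementation.

```python
# Illustrative sketch only: the DFA encoding, the helper names, and the
# precision-style measure are assumptions, not the paper's implementation.

def count_words(dfa, n):
    """Count words of length exactly n accepted by `dfa`.

    A DFA is a tuple (start, accepting, delta, alphabet), where delta maps
    (state, symbol) -> state. Dynamic programming invariant after k steps:
    counts[q] = number of length-k words driving the DFA from start to q.
    """
    start, accepting, delta, alphabet = dfa
    counts = {start: 1}
    for _ in range(n):
        nxt = {}
        for q, c in counts.items():
            for sym in alphabet:
                if (q, sym) in delta:
                    r = delta[(q, sym)]
                    nxt[r] = nxt.get(r, 0) + c
        counts = nxt
    return sum(c for q, c in counts.items() if q in accepting)

def intersect(d1, d2):
    """Product automaton accepting L(d1) ∩ L(d2)."""
    s1, a1, t1, alphabet = d1
    s2, a2, t2, _ = d2
    delta = {}
    for (q1, sym), r1 in t1.items():
        for (q2, sym2), r2 in t2.items():
            if sym == sym2:
                delta[((q1, q2), sym)] = (r1, r2)
    accepting = {(p, q) for p in a1 for q in a2}
    return ((s1, s2), accepting, delta, alphabet)

def precision_at(inferred, reference, n):
    """Fraction of length-n words of the inferred model also accepted
    by the reference model -- a deterministic quantity, not an estimate."""
    both = count_words(intersect(inferred, reference), n)
    total = count_words(inferred, n)
    return both / total if total else 1.0

# Toy example: the reference accepts words with an even number of 'a's;
# the (deliberately over-general) inferred model accepts every word.
reference = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0,
                      (1, 'a'): 0, (1, 'b'): 1}, ['a', 'b'])
inferred = ('s', {'s'}, {('s', 'a'): 's', ('s', 'b'): 's'}, ['a', 'b'])

print(precision_at(inferred, reference, 2))  # prints 0.5: 2 of 4 length-2 words
```

Summing such counts over all lengths up to a bound (or deriving closed forms via analytic combinatorics, as the paper proposes) turns trace-sampling estimates into exact, parameter-free accuracy measures.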
Keywords
model inference accuracy, cardinality