AutoSMP: An Evaluation Platform for Sampling Algorithms

SPLC '21: Proceedings of the 25th ACM International Systems and Software Product Line Conference, Volume B (2021)

Abstract
Testing configurable systems is a challenging task due to the combinatorial explosion problem. Sampling is a promising approach to reduce the testing effort for product-based systems by finding a small but still representative subset (i.e., a sample) of all configurations for testing. The quality of a generated sample with respect to evaluation criteria such as run time of sample generation, feature coverage, sample size, and sampling stability depends on the subject systems and the sampling algorithm. Choosing the right sampling algorithm for practical applications is challenging because each sampling algorithm fulfills the evaluation criteria to a different degree. Researchers keep developing new sampling algorithms with improved performance or unique properties to satisfy application-specific requirements. Comparing sampling algorithms is therefore a necessary task for researchers. However, this task requires considerable effort because of the limited accessibility of existing algorithm implementations and benchmarks. Our platform AutoSMP eases the work of practitioners and researchers by automatically executing sampling algorithms on predefined benchmarks and evaluating the sampling results with respect to specific user requirements. In this paper, we introduce the open-source application AutoSMP together with a set of predefined benchmarks and a set of t-wise sampling algorithms as examples.
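As a minimal illustration of one of the evaluation criteria named above, the sketch below computes t-wise (here pairwise) feature-interaction coverage of a sample. It is a hypothetical example and not AutoSMP's actual API; the function and variable names are invented, and it ignores feature-model constraints, which a real evaluation would have to respect.

```python
# Hypothetical sketch (not AutoSMP's actual API): measuring t-wise coverage
# of a sample. Configurations are modeled as dicts mapping feature names to
# True/False (selected/deselected).
from itertools import combinations


def twise_coverage(sample, features, t=2):
    """Fraction of t-wise feature-value combinations covered by the sample.

    Note: this naive version ignores feature-model constraints, so invalid
    combinations count against coverage; a real evaluation would only count
    combinations that are valid with respect to the feature model.
    """
    covered = set()
    for config in sample:
        for feats in combinations(features, t):
            covered.add((feats, tuple(config[f] for f in feats)))

    # Every combination of t features has 2**t possible True/False assignments.
    total = sum(2 ** t for _ in combinations(features, t))
    return len(covered) / total


# Toy usage: three features, two sampled configurations.
features = ["A", "B", "C"]
sample = [
    {"A": True, "B": False, "C": True},
    {"A": False, "B": True, "C": True},
]
print(f"pairwise coverage: {twise_coverage(sample, features):.2f}")
```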
Keywords
product lines, sampling, sampling evaluation