Crowd intelligence enhances automated mobile testing.

ASE (2017)

Cited by 67 | Viewed 81 times
Abstract
We show that information extracted from crowd-based testing can enhance automated mobile testing. We introduce Polariz, which generates replicable test scripts from crowd-based testing by extracting cross-app 'motif' events: automatically inferred, reusable higher-level event sequences composed of lower-level observed event actions. Our empirical study used 434 crowd workers from 24 countries to perform 1,350 testing tasks on 9 popular Google Play apps, each with at least 1 million user installs. The findings reveal that the crowd achieved 60.5% unique activity coverage and proved complementary to automated search-based testing in 5 of the 9 subjects studied. Our leave-one-out evaluation demonstrates that coverage attainment can be improved by combining crowd-based and search-based testing (in 6 of 9 cases, with no degradation in the remaining 3).
Keywords
Crowdsourced Software Engineering, Mobile App Testing, Test Generation