
Trade-offs in continuous integration: assurance, security, and flexibility

ESEC/FSE 2017, pp. 197-207


Abstract

Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI being a widely used activity in software engineering, we do not know what motivates developers to use CI, and what barriers and unmet needs they face. Without such knowledge, developers make easily avoidable errors, tool builders in...

Introduction
  • Continuous integration (CI) systems automate the compilation, building, and testing of software.
  • In the previous work [19], the authors examine the usage of CI among open-source projects on GitHub, and show that projects that use CI release more frequently than projects that do not.
  • These studies do not present what barriers and needs developers face when using CI, or what trade-offs developers must make when using CI.
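The automation the first bullet describes can be sketched as a toy pipeline runner. The stage names and `make` targets below are hypothetical, not taken from the paper:

```python
import subprocess

# Hypothetical CI pipeline: each stage is a shell command, run in order;
# the build fails fast at the first stage that exits non-zero.
STAGES = [
    ("compile", "make"),          # compilation
    ("unit tests", "make test"),  # automated testing
    ("package", "make dist"),     # build artifacts
]

def run_pipeline(stages):
    """Run stages in order; return (passed, failed_stage_or_None)."""
    for name, cmd in stages:
        result = subprocess.run(cmd, shell=True, capture_output=True)
        if result.returncode != 0:
            return False, name
    return True, None
```

A CI service essentially wraps this loop with triggers (e.g., on every push), isolated build environments, and result reporting.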
Highlights
  • Continuous integration (CI) systems automate the compilation, building, and testing of software
  • We found some developers consider deployment to be a part of CI, and others consider continuous deployment (CD) to be a separate process
  • Despite the many benefits of CI, developers still encounter a wide variety of problems with CI
  • We hope that this paper motivates researchers to tackle the hard problems that developers face with CI
  • Flaky test identification tools could automatically detect flaky tests to help developers know if CI failures are due to flaky tests or legitimate test failures
  • CI is here to stay as a development practice, and we need continuous improvement (“CI” of a different kind) of CI to realize its full potential
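One way such a flaky-test identification tool could work (a common rerun heuristic, not a technique the paper proposes) is to rerun a failing test and compare outcomes:

```python
def classify_failure(run_test, reruns=3):
    """Classify a test that has just failed in CI.

    Rerun it a few times; if any rerun passes, the outcome is
    nondeterministic and the test is likely flaky; if every rerun
    fails, the failure is more likely legitimate.

    run_test: zero-argument callable returning True (pass) or False (fail).
    """
    outcomes = [run_test() for _ in range(reruns)]
    return "flaky" if any(outcomes) else "legitimate failure"
```

Rerunning only narrows the diagnosis; it cannot prove a test is deterministic, which is why dedicated analyses of flaky tests (e.g., [27]) go further.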
Methods
  • Interviews are a qualitative method and are effective at discovering the knowledge and experiences of the participants
  • They often have a limited sample size [41].
  • Surveys are a quantitative technique that summarizes information over a larger sample size and provides broader results.
  • Together, they provide a much clearer picture than either can provide alone.
  • The interview script, code set, survey questions, and the responses can be found on the companion site
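Applying a code set to interview transcripts usually involves multiple raters, and one standard agreement check (an illustration of intercoder reliability, not necessarily the paper's exact procedure) is Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed intercoder agreement corrected for the
    agreement two raters would reach by chance alone.

    codes_a, codes_b: equal-length lists of code labels assigned by
    two raters to the same transcript segments.
    """
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)  # 1.0 = perfect agreement
```

(The formula is undefined when both raters always assign the same single label, since chance agreement is then 1.)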
Results
  • 4.1 Barriers

    The authors answer RQ1: What barriers do developers face when using CI? They collected a list of barriers that prevent or hinder the adoption and use of CI, which the interview participants reported experiencing.
  • The authors asked the survey participants to select up to three problems that they had experienced.
  • When a CI build fails, some participants begin the process of identifying why the build failed.
  • Sometimes, this can be fairly straightforward.
  • For some build failures on the CI server, where the developer does not have the same access as they have when debugging locally, troubleshooting the failure can be quite challenging.
  • S4 described one such situation: if they get lucky, they can spot the cause of the problem right from the Jenkins reports, and if not, it becomes more complicated
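Tallying an "up to three selections" survey question is a frequency count over multi-select responses; the barrier labels below are illustrative stand-ins, not the paper's data:

```python
from collections import Counter

def tally(responses, limit=3):
    """Count how often each option was selected, honoring the
    up-to-`limit` selections cap per respondent."""
    counts = Counter()
    for picks in responses:
        counts.update(picks[:limit])  # enforce the selection cap
    return counts.most_common()

# Illustrative multi-select responses (hypothetical barrier labels).
responses = [
    ["troubleshooting build failures", "slow builds"],
    ["flaky tests", "troubleshooting build failures", "slow builds"],
    ["troubleshooting build failures"],
]
```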
Conclusion
  • The authors discuss the trade-offs developers face when using CI, the implications of those trade-offs, and the differences between the two surveys.

    5.1 CI Trade-Offs

    As with any technology, developers who use CI should be aware of the trade-offs that arise when using that technology.
  • Building and running all these additional tests causes the CI to slow down, which developers considered a problem.
  • Ensuring that their code is correctly tested, but keeping build times manageable, is a trade-off developers must be aware of.
  • Future work should examine the relationship between developers’ desired and actual build times when using CI.
  • CI is here to stay as a development practice, and the authors need continuous improvement (“CI” of a different kind) of CI to realize its full potential
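The build-time trade-off can be made concrete with a classic test-suite reduction heuristic: greedily pick the tests with the best new-coverage-per-second ratio that still fit a time budget. This is a textbook sketch under assumed inputs, not the authors' proposal:

```python
def select_tests(tests, budget):
    """Greedy test selection under a build-time budget.

    tests: dict mapping test name -> (runtime_seconds, set_of_covered_items)
    Returns (chosen_test_names, covered_items).
    """
    covered, chosen = set(), []
    remaining, time_left = dict(tests), budget
    while True:
        best, best_ratio = None, 0.0
        for name, (runtime, cov) in remaining.items():
            if runtime > time_left:
                continue  # would blow the time budget
            gain = len(cov - covered)  # only new coverage counts
            if gain and gain / runtime > best_ratio:
                best, best_ratio = name, gain / runtime
        if best is None:  # nothing affordable adds coverage
            return chosen, covered
        runtime, cov = remaining.pop(best)
        chosen.append(best)
        covered |= cov
        time_left -= runtime
```

Raising `budget` buys more assurance at the cost of slower feedback, which is exactly the tension between correct testing and manageable build times described above.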
Tables
  • Table1: Interview participants
  • Table2: Barriers developers encounter when using CI
  • Table3: Developer needs unmet by CI
  • Table4: Developers’ motivation for using CI
Related work
  • Continuous Integration Studies Vasilescu et al [50] performed a preliminary quantitative study of quality outcomes for open-source projects using CI. Our previous work [19] presented a quantitative study of the costs, benefits, and usage of CI in open-source software. These studies do not examine barriers or needs when using CI, nor do they address the trade-offs developers must contend with. In contrast to these studies, we develop a deep understanding of the barriers and unmet needs of developers through interviews and surveys. We also discover trade-offs users face when using CI.

    Debbiche et al [10] present a case study of challenges faced by a telecommunications company when adopting CI. They present barriers from a specific company, but provide no generalized findings and do not address needs, experiences, or benefits of CI.
Funding
  • This research was partially supported by NSF grants CCF-1421503, CCF-1438982, CCF-1439957, and CCF-1553741
References
  • [1] Shay Artzi, Julian Dolby, Simon Holm Jensen, Anders Møller, and Frank Tip. 2011. A Framework for Automated Testing of JavaScript Web Applications. In ICSE.
  • [2] Alberto Bacchelli and Christian Bird. 2013.
  • [3] Kent Beck. 1999. Embracing Change with Extreme Programming. IEEE Computer (1999).
  • [4] John Bible, Gregg Rothermel, and David S. Rosenblum. 2001. A Comparative Study of Coarse- and Fine-grained Safe Regression Test-selection Techniques. TOSEM (2001).
  • [5] Michael H. Birnbaum. 2004. Human Research and Data Collection via the Internet. Annual Review of Psychology (2004).
  • [6] Grady Booch. 1990. Object-Oriented Design with Applications. Benjamin-Cummings Publishing Co., Inc.
  • [7] John L. Campbell, Charles Quincy, Jordan Osserman, and Ove K. Pedersen. 2013. Coding In-depth Semistructured Interviews: Problems of Unitization and Intercoder Reliability and Agreement. Sociological Methods & Research (2013).
  • [8] Mihai Codoban, Sruti Srinivasa Ragavan, Danny Dig, and Brian Bailey. 2015. Software History Under the Lens: A Study on Why and How Developers Examine It. In ICSME.
  • [9] Michael de Jong and Arie van Deursen. 2015. Continuous Deployment and Schema Evolution in SQL Databases. In RELENG.
  • [10] Adam Debbiche, Mikael Dienér, and Richard Berntsson Svensson. 2014. Challenges When Adopting Continuous Integration: A Case Study. In PROFES.
  • [11] Prem Devanbu, Thomas Zimmermann, and Christian Bird. 2016.
  • [12] Nima Dini, Allison Sullivan, Milos Gligoric, and Gregg Rothermel. 2016. The Effect of Test Suite Type on Regression Test Selection. In ISSRE.
  • [13] Sebastian Elbaum, Gregg Rothermel, and John Penix. 2014. Techniques for Improving Regression Testing in Continuous Integration Development Environments. In FSE.
  • [14] Martin Fowler. 2006. Continuous Integration. http://martinfowler.com/articles/continuousIntegration.html. (2006).
  • [15] Lisa M. Given. 2008. The SAGE Encyclopedia of Qualitative Research Methods.
  • [16] Milos Gligoric, Lamyaa Eloussi, and Darko Marinov. 2015. Practical Regression
  • [17] Markus Gälli, Michele Lanza, Oscar Nierstrasz, and Roel Wuyts. 2004. Ordering
  • [18] Christopher Henard, Mike Papadakis, Mark Harman, Yue Jia, and Yves Le Traon. 2016. Comparing White-box and Black-box Test Prioritization. In ICSE.
  • [19] Michael Hilton, Timothy Tunnell, Kai Huang, Darko Marinov, and Danny Dig. 2016. Usage, Costs, and Benefits of Continuous Integration in Open-Source Projects. In ASE.
  • [20] Sheng Huang, Jun Zhu, and Yuan Ni. 2009. ORTS: A Tool for Optimized Regression Testing Selection. In OOPSLA.
  • [21] Matthew Jorde, Sebastian Elbaum, and Matthew B. Dwyer. 2008. Increasing Test Granularity by Aggregating Unit Tests. In ASE.
  • [22] Noureddine Kerzazi and Bram Adams. 2016. Botched Releases: Do We Need to Roll Back? Empirical Study on a Commercial Web App. In SANER.
  • [23] Noureddine Kerzazi, Foutse Khomh, and Bram Adams. 2014. Why Do Automated Builds Break? An Empirical Study. In ICSME.
  • [24] Thomas D. LaToza, Gina Venolia, and Robert DeLine. 2006. Maintaining Mental Models: A Study of Developer Work Habits. In ICSE.
  • [25] Lucas Layman, Madeline Diep, Meiyappan Nagappan, Janice Singer, Robert DeLine, and Gina Venolia. 2013. Debugging Revisited: Toward Understanding the Debugging Needs of Contemporary Software Developers. In ESEM.
  • [26] Marko Leppänen, Simo Mäkinen, Max Pagels, Veli-Pekka Eloranta, Juha Itkonen, Mika V. Mäntylä, and Tomi Männistö. 2015. The Highways and Country Roads to Continuous Deployment. IEEE Software (2015).
  • [27] Qingzhou Luo, Farah Hariri, Lamyaa Eloussi, and Darko Marinov. 2014. An Empirical Analysis of Flaky Tests. In FSE.
  • [28] Kıvanç Muşlu, Christian Bird, Nachiappan Nagappan, and Jacek Czerwonka. 2014. Transition from Centralized to Decentralized Version Control Systems: A Case
  • [29] Kıvanç Muşlu, Yuriy Brun, and Alexandra Meliou. 2015. Preventing Data Errors with Continuous Testing. In ISSTA.
  • [30] Helena Holmström Olsson, Hiva Alahyari, and Jan Bosch. 2012. Climbing the
  • [31] Shaun Phillips, Thomas Zimmermann, and Christian Bird. 2014. Understanding and Improving Software Build Teams. In ICSE.
  • [32] Gerald V. Post and Albert Kagan. 2007. Evaluating Information Security Trade-offs: Restricting Access Can Interfere with User Tasks. Computers & Security (2007).
  • [33] Puppet and DevOps Research and Assessments (DORA). 2016. 2016 State of DevOps Report. (2016).
  • [34] Gregg Rothermel, Roland H. Untch, Chengyun Chu, and Mary Jean Harrold. 1999. Test Case Prioritization: An Empirical Study. In ICSM.
  • [35] Johnny Saldaña. 2015. The Coding Manual for Qualitative Researchers (3rd ed.). SAGE Publications.
  • [36] Eddie Antonio Santos and Abram Hindle. 2016. Judging a Commit by Its Cover: Correlating Commit Message Entropy with Build Status on Travis-CI. In MSR.
  • [37] Tony Savor, Mitchell Douglas, Michael Gentili, Laurie Williams, Kent Beck, and Michael Stumm. 2016. Continuous Deployment at Facebook and OANDA. In ICSE.
  • [38] Gerald Schermann, Jürgen Cito, Philipp Leitner, and Harald C. Gall. 2016. Towards Quality Gates in Continuous Delivery and Deployment. In ICPC.
  • [39] Irving Seidman. 2006. Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences. Teachers College Press.
  • [40] August Shi, Alex Gyori, Milos Gligoric, Andrey Zaytsev, and Darko Marinov. 2014. Balancing Trade-offs in Test-suite Reduction. In FSE.
  • [41] Forrest Shull, Janice Singer, and Dag I. K. Sjøberg (Eds.). 2008. Guide to Advanced Empirical Software Engineering.
  • [42] Edward Smith, Robert Loftin, Emerson Murphy-Hill, Christian Bird, and Thomas Zimmermann. 2013. Improving Developer Participation Rates in Surveys. In CHASE.
  • [43] Hema Srikanth, Mikaela Cashman, and Myra B. Cohen. 2016. Test Case Prioritization of Build Acceptance Tests for an Enterprise Cloud Application: An Industrial Case Study. JSS (2016).
  • [44] Daniel Ståhl and Jan Bosch. 2014. Automated Software Integration Flows in Industry: A Multiple-Case Study. In ICSE Companion.
  • [45] Mark Staples, Liming Zhu, and John Grundy. 2016. Continuous Validation for Data Analytics Systems. In ICSE.
  • [46] Sean Stolberg. 2009. Enabling Agile Testing through Continuous Integration. In AGILE.
  • [47] Megan Sumrell. 2007. From Waterfall to Agile – How Does a QA Team Transition? In AGILE.
  • [48] Yida Tao, Yingnong Dang, Tao Xie, Dongmei Zhang, and Sunghun Kim. 2012. How Do Software Engineers Understand Code Changes? – An Exploratory Study in Industry. In FSE.
  • [49] Akond Ashfaque Ur Rahman and Laurie Williams. 2016. Security Practices in DevOps. In HotSos.
  • [50] Bogdan Vasilescu, Yue Yu, Huaimin Wang, Premkumar Devanbu, and Vladimir Filkov. 2015. Quality and Productivity Outcomes Relating to Continuous Integration in GitHub. In ESEC/FSE.
  • [51] VersionOne. 2016. 10th Annual State of Agile Report. (2016).
  • [52] Tanja Vos, Paolo Tonella, Wishnu Prasetya, Peter M. Kruse, Alessandra Bagnato, Mark Harman, and Onn Shehory. 2014. FITTEST: A New Continuous and Automated Testing Process for Future Internet Applications. In CSMR-WCRE.
  • [53] Tianyin Xu, Long Jin, Xuepeng Fan, Yuanyuan Zhou, Shankar Pasupathy, and Rukma Talwadker. 2015.
  • [54] Shin Yoo and Mark Harman. 2012. Regression Testing Minimization, Selection and Prioritization: A Survey. STVR (2012).
Author
Nicholas Nelson
Timothy Tunnell
Best Paper
Best Paper of ESEC/FSE 2017