Expectations, outcomes, and challenges of modern code review

International Conference on Software Engineering (ICSE), 2013: 712-721

Cited by 464 | Views 57
Abstract

Code review is a common software engineering practice employed both in open source and industrial contexts. Review today is less formal and more “lightweight” than the code inspections performed and studied in the 70s and 80s. We empirically explore the motivations, challenges, and outcomes of tool-based code reviews. We observed, interviewed, and surveyed developers and managers at Microsoft…

Introduction
  • Peer code review, a manual inspection of source code by developers other than the author, is recognized as a valuable tool for reducing software defects and improving the quality of software projects [2], [1].
  • Many organizations are now adopting more lightweight code review practices to limit the inefficiencies of formal inspections.
  • In the context of this paper, the authors define modern code review as review that is (1) informal, (2) tool-based, and (3) performed regularly in practice today, for example at companies such as Microsoft, Google [19], and Facebook [36], and in other organizations and open source software (OSS) projects [40].
Highlights
  • Peer code review, a manual inspection of source code by developers other than the author, is recognized as a valuable tool for reducing software defects and improving the quality of software projects [2], [1].
  • Although the top motivation driving code reviews is finding defects, the practice and its actual outcomes are less about finding errors than expected: defect-related comments comprise a small proportion of all comments and mainly cover minor, low-level logical issues.
  • Over the past two years, a common tool for code review at Microsoft, CodeFlow, has achieved widespread adoption. As it represents a common and growing solution for code review, we focused on developers using this tool.
  • We studied tool-based code review, uncovering both a wide range of motivations for review and outcomes that do not always match those motivations.
  • We identified understanding as a key component of code review and provided recommendations to both practitioners and researchers.
  • It is our hope that the insights we have discovered lead to more effective review in practice and to improved, research-based tools that aid developers in performing code reviews.
Methods
  • The authors define the research questions, describe the research setting, and outline the research method.
  • 1) What are the motivations and expectations for modern code review?
  • 2) What are the actual outcomes of modern code review?
  • 3) What are the main challenges experienced when performing modern code reviews, relative to the expectations and outcomes?
  • Over the past two years, a common tool for code review at Microsoft has achieved widespread adoption. As it represents a common and growing solution for code review, the authors focused on developers using this tool, CodeFlow.
Conclusion
  • The authors studied tool-based code review, uncovering both a wide range of motivations for review and outcomes that do not always match those motivations.
  • They identified understanding as a key component of code review and provided recommendations to both practitioners and researchers.
  • It is their hope that the insights they have discovered lead to more effective review in practice and to improved, research-based tools that aid developers in performing code reviews.
Related Work
  • Previous studies have examined the practices of code inspection and code review. Stein et al. conducted a study focusing specifically on distributed, asynchronous code inspections [33]; the study included the evaluation of a tool that allowed identification and sharing of code faults, which participants at separate locations could then discuss via the tool. Laitenberger conducted a survey of code inspection methods and presented a taxonomy of code inspection techniques [22]. Johnson investigated code review practices in OSS development and their effect on choices made by software project managers [18].
References
  • [1] A. Ackerman, L. Buchwald, and F. Lewski. Software inspections: An effective verification process. IEEE Software, 6(3):31-36, 1989.
  • [2] A. Ackerman, P. Fowler, and R. Ebenau. Software inspections and the industrial production of software. In Proc. of a symposium, pages 13-40. Elsevier North-Holland, Inc., 1984.
  • [3] J. Adair. The Hawthorne effect: A reconsideration of the methodological artifact. Journal of Applied Psychology, 69(2):334, 1984.
  • [4] S. Adolph, W. Hall, and P. Kruchten. Using grounded theory to study the experience of software development. Empirical Software Engineering, 16(4):487-513, 2011.
  • [5] N. Ayewah, W. Pugh, J. Morgenthaler, J. Penix, and Y. Zhou. Using FindBugs on production software. In Companion to the 22nd ACM SIGPLAN Conference on Object-Oriented Programming Systems and Applications, pages 805-806. ACM, 2007.
  • [6] A. Bacchelli and C. Bird.
  • [8] V. Basili, F. Shull, and F. Lanubile. Building knowledge through families of experiments. IEEE Transactions on Software Engineering, 25(4):456-473, 1999.
  • [9] B. Berg and H. Lune. Qualitative Research Methods for the Social Sciences. Pearson Boston, 2004.
  • [10] C. Bird, A. Gourley, P. Devanbu, A. Swaminathan, and G. Hsu. Open borders? Immigration in open source projects. In The Fourth International
  • [11] L. Brothers, V. Sembugamoorthy, and M. Muller. ICICLE: Groupware for code inspection. In Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, pages 169-181. ACM, 1990.
  • [15] J. Gintell, J. Arnold, M. Houde, J. Kruszelnicki, R. McKenney, and G. Memmi. Scrutiny: A collaborative inspection and review system. In Software Engineering (ESEC '93), pages 344-360, 1993.
  • [20] B. Kitchenham and S. Pfleeger. Personal opinion surveys. In Guide to Advanced Empirical Software Engineering, pages 63-92, 2008.
  • [23] T. LaToza, G. Venolia, and R. DeLine. Maintaining mental models: A study of developer work habits. In Proceedings of the 28th International Conference on Software Engineering, pages 492-501. ACM, 2006.
  • [24] T. Lethbridge, S. Sim, and J. Singer. Studying software engineers: Data collection techniques for software field studies. Empirical Software Engineering, 10(3):311-341, 2005.
  • [25] V. Mashayekhi, C. Feulner, and J. Riedl. CAIS: Collaborative asynchronous inspection of software. In ACM SIGSOFT Software Engineering Notes, volume 19, pages 21-34. ACM, 1994.
  • [26] A. Porter, H. Siy, and L. Votta. A review of software inspections. Advances in Computers, 42:39-76, 1996.
  • [27] T. Punter, M. Ciolkowski, B. Freimut, and I. John. Conducting online surveys in software engineering. In International Symposium on Empirical Software Engineering. IEEE, 2003.
  • [28] P. Rigby, B. Cleary, F. Painchaud, M. Storey, and D. German. Open source peer review: Lessons and recommendations for closed source. IEEE Software, 2012.
  • [29] P. Rigby, D. German, and M. Storey. Open source software peer review practices: A case study of the Apache server. In Proceedings of the 30th International Conference on Software Engineering. ACM, 2008.
  • [30] P. C. Rigby and M.-A. Storey. Understanding broadcast based peer review on open source software projects. In Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011), pages 541-550. ACM, 2011.
  • [31] J. E. Shade and S. J. Janis. Improving Performance Through Statistical Thinking. McGraw-Hill, 2000.
  • [32] F. Shull and C. Seaman. Inspecting the history of inspections: An example of evidence-based technology diffusion. IEEE Software, 25(1):88-90, 2008.
  • [33] M. Stein, J. Riedl, S. J. Harner, and V. Mashayekhi. A case study of distributed, asynchronous software inspection. In Proceedings of the International Conference on Software Engineering. ACM, 1997.
  • [34] A. Sutherland and G. Venolia. Can peer code reviews be exploited for later information needs? In International Conference on Software Engineering, New Ideas and Emerging Results Track, 2009.
  • [35] B. Taylor and T. Lindlof. Qualitative Communication Research Methods. Sage Publications, Incorporated, 2010.
  • [38] L. Votta Jr. Does every inspection need a meeting? ACM SIGSOFT Software Engineering Notes, 18(5):107-114, 1993.