Risk of Bias Assessment for Diversity Groups in Community-Based Primary Health Care Artificial Intelligence Systems: A Rapid Review Protocol (Preprint)

Crossref (2023)

Abstract
BACKGROUND: Current literature identifies several potential benefits of artificial intelligence (AI) in community-based primary health care (CBPHC). However, there is a lack of understanding of how the risk of bias is considered in the development of CBPHC AI-based algorithms and of the extent to which these algorithms perpetuate or introduce potential biases toward groups that could be considered vulnerable because of their characteristics (e.g., age, sex, gender identity, sexual orientation, race, ethnicity, religion, physical ability, socioeconomic status). To the best of our knowledge, no reviews are currently available that identify relevant methods to assess the risk of bias in CBPHC algorithms, and there is no overview of mitigation interventions or of the diversity groups in which they are considered.

OBJECTIVE: To identify (1) relevant methods (e.g., frameworks, tools, checklists) to assess the risk of bias toward diversity groups in the development and/or deployment of algorithms in CBPHC and (2) mitigation interventions deployed to promote and increase equity, diversity, and inclusion (EDI) in these algorithms.

METHODS: Rapid review of the literature published in the last 5 years in four databases (PubMed, CINAHL, Web of Science, and PsycINFO). Two reviewers will independently screen the titles and abstracts and then the full text of the identified records. We will include all studies on methods developed and/or tested to assess the risk of bias in algorithms relevant to CBPHC settings. Data extraction will use a validated extraction grid, and we will present results as structured narrative summaries.

RESULTS: In November 2022, an information specialist developed a search strategy based on the main concepts of our primary review question for the most relevant databases (PubMed, CINAHL, Web of Science, and PsycINFO), limited to the last 5 years. We completed the search in December 2022 and identified 1022 sources. We planned to start screening in February 2023 and to complete the review by June 2023.

CONCLUSIONS: This review will provide a comprehensive description of the risk-of-bias assessments used in CBPHC algorithms. This knowledge could help researchers and other CBPHC stakeholders identify potential sources of bias in algorithm development and, eventually, reduce or eliminate them.

CLINICALTRIAL: N/A
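To make concrete what "assessing the risk of bias toward diversity groups in an algorithm" can involve, the sketch below shows one common group-fairness check, the demographic parity difference, applied to hypothetical model outputs. This example is not part of the protocol or of any method the review will identify; the data, the sensitive attribute (age bands), and the function name are illustrative assumptions only.

```python
# Illustrative sketch (not from the protocol): a simple group-fairness check
# that a reviewer might see reported when an algorithm's risk of bias toward
# a diversity group is assessed. All data below are hypothetical.
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))


# Hypothetical predictions from a CBPHC triage model, with a binary
# sensitive attribute (two age bands) as the diversity characteristic.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["<65", "<65", "<65", "<65", "<65",
                  "65+", "65+", "65+", "65+", "65+"])

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.2f}")
```

In this toy case the model flags 60% of the younger group but only 40% of the older group, giving a difference of 0.20; frameworks and checklists of the kind the review targets typically ask whether such gaps were measured, reported, and mitigated.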