Current incident reporting and learning systems yield unreliable information on patient safety

Mari Plukka, Auvo Rauhala, Lisbeth Fagerström, Tuija Ikonen

Research Square (2022)

Abstract
Background: Incident reporting systems are being implemented throughout the world to record safety incidents in healthcare. The quality of the recording and analysis in these systems is important for the development of safety promotion measures.

Methods: To assess the reliability of incident reporting ratings collected in a hospital setting, a three-level interrater comparison was undertaken. The routine ratings of the frontline event handlers responsible for evaluating safety incident reports (n=495) were compared with parallel ratings by two trained patient safety coordinators. The two coordinators each separately reclassified about half of the 495 reports; each then reclassified a random subset of 60 reports that the other coordinator had handled in the first reclassification. Seven patient safety variables were included: nature of the incident, type of incident, patient impact, treating unit impact, circumstances and contributory factors, immediate actions taken, and risk category. Interrater agreement was tested with kappa, weighted kappa or iota.

Results: For the seven variables examined, event handlers had an average of 1.36 missing answers; patient safety coordinators, 0.32. In the first interrater comparison, averaged across the three ordinal-scale variables, ratings shifted towards a more serious classification in 29% (95% CI: 27%, 32%) of incidents and towards a less serious classification in 2% (95% CI: 0%, 5%), a net change of 27% (95% CI: 25%, 30%) towards more serious. Where selecting several categories was allowed, the average rate of multiple selections increased from 7% (95% CI: 6%, 8%) to 33% (95% CI: 31%, 35%). For all three paired interrater comparisons, the average interrater agreements were in the range of 0.44 to 0.53, considered moderate. Although patient safety coordinators should in theory represent a 'gold standard', the agreement between the coordinators themselves was only moderate.

Conclusion: Consensus at the national level on how to classify high-risk incidents is needed to improve the reliability of incident reporting. Continuous training in common practices, terminology and rating systems should also be given more attention. Having a patient safety coordinator reclassify incident reports can improve reporting accuracy and thereby corrective actions and learning.
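As a brief illustration of the agreement statistics named in the Methods (a sketch with made-up ratings, not the authors' analysis code), the following Python snippet computes unweighted and linearly weighted Cohen's kappa for two hypothetical raters using scikit-learn. Weighted kappa is the natural choice for an ordinal scale such as the risk category, since it penalises large disagreements more than adjacent ones.

    # Illustrative sketch of interrater agreement via Cohen's kappa.
    # The ratings below are hypothetical and do not come from the study data.
    from sklearn.metrics import cohen_kappa_score

    # Ordinal risk-category ratings (1 = lowest risk ... 5 = highest risk)
    # assigned to the same ten incident reports by two raters.
    rater_a = [1, 2, 2, 3, 3, 4, 4, 5, 2, 3]
    rater_b = [1, 2, 3, 3, 4, 4, 5, 5, 2, 2]

    # Unweighted kappa treats every disagreement as equally severe.
    kappa = cohen_kappa_score(rater_a, rater_b)

    # Linearly weighted kappa penalises disagreements in proportion to
    # their distance on the ordinal scale, suiting ordered categories.
    weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

    print(f"kappa = {kappa:.2f}, weighted kappa = {weighted_kappa:.2f}")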
Keywords
unreliable information, reliability, reporting, current incident