The Future of INCOG (Is Now).

The Journal of Head Trauma Rehabilitation (2023)

Abstract
It has been 8 years since the first iteration of the INCOG clinical practice guidelines (CPGs) was published. Much has happened since 2014, and a considerable body of evidence has been published in the various domains of cognitive rehabilitation research represented in this special issue. Over this time, significant developments in the science of identifying, appraising, and distilling research evidence into practically applicable CPGs have emerged, as well as implementation efforts to ensure meaningful change in care delivery.1,2 Many of these developments have been either driven or "supercharged" by the COVID-19 pandemic.3–5 The pandemic led to a global spotlight on science and—due to the importance of public health measures to control the virus—the role of science in guiding our day-to-day lives.5 Specifically, exponential increases in demand for science to support real-time decision-making led to a number of poorly designed and coordinated COVID trials and reviews. A more carefully planned evidence-to-practice pipeline would have been more helpful in guiding COVID responses.4 In this sense, the pandemic reinforces the original and ongoing mission of INCOG: to provide robust reviews of the best available cognitive rehabilitation research evidence to clinicians who want to facilitate and optimize patient recovery following traumatic brain injury (TBI). As such, this commentary provides key insights from the review and guideline sciences, highlighting their relevance to this and future INCOG updates.

Getting the question right: Codesigning and prioritizing research questions ensure that research effort is focused on areas where impact is most needed.

The original INCOG guidelines grew out of a series of codesign and evidence synthesis projects culminating in an international workshop in which 25 clinicians, researchers, and knowledge translation scientists representing 4 countries prioritized cognitive rehabilitation following TBI as an important area of knowledge translation focus.6

In parallel with the foundational work leading to INCOG 2014, the importance of creating, growing, and harnessing communities of practice has continued to emerge, but with an increasing emphasis on the participation of patients with lived experience of injury or illness and its consequences. For example, it has been more than a decade since the establishment of the Patient Centered Outcomes Research Institute (PCORI), which placed renewed focus on the importance of patient input into "practical questions, relevant outcomes and study populations, and the possibility that treatment effects may differ across patient populations."7(p1583) While experts in the field have command of scientific and medical knowledge, patients are best placed to provide perspectives on the issues that need to be addressed to optimize their function and quality of life. The addition of patient and other perspectives (eg, those of service deliverers, policy makers, and funders) builds valuable insights into the development of health research questions, approaches, and other areas of problem solving.7–9 For example, this can involve gathering qualitative insights into how the impairment is experienced, how clinical interventions are experienced, the extent to which these interventions meet real-world needs, and what tailoring may be required to better match those needs. Once the questions have been formulated, there are also opportunities for patients to be part of the research team.
Thomas et al10 described how "citizen scientists" (community members with an interest in contributing to science but without formal scientific training) can partner with review researchers to undertake some of the many tasks within a systematic review. They are identified and trained through the online "Cochrane Crowd" platform, which now has almost 24,000 contributors across the world.11 The underlying thread that connects PCORI and "Cochrane Crowd" is the idea of meaningful involvement in health research through research codesign. This means going beyond isolated activities without meaningful outcomes or clearly communicated outputs (such as a one-off workshop with a patient group) to specifying explicit roles and responsibilities for research partners such as patients, recognizing their contributions accordingly, and transparently reporting their contributions to the final research outputs. Although this paints a picture of what research codesign looks like, our review of 23 codesign reviews published in 2021 concluded that while the concept and importance of codesign are acknowledged, the actual codesign process is rarely described in detail or evaluated.9 This can create frustration for those who participate in activities badged as "codesign" but that fall short of respectful and meaningful involvement. Although potentially helpful frameworks and strategies for examining the extent of patient and family engagement have been developed over the past decade, these are not embedded in routine practice.12,13 Addressing the challenge of doing "true codesign" is not easy; it involves developing new ways of working and additional resources. However, if research effort is directed to high-priority areas of need, the impact gains far outweigh these costs. Our experience of codesign in research question development has underlined this, as it has yielded unexpected and important insights.14,15

Relative to other interventions such as medicines and surgery, cognitive rehabilitation is highly interactive. The collaborative nature of rehabilitation underlines the need for meaningful involvement of persons with lived experience, from the creation and development of effective treatment interventions to the implementation of CPGs. Such involvement has not been "business as usual" in CPGs, including INCOG. Therefore, our challenge is to explore methods of recognizing and harnessing this potential to ensure that each iteration of INCOG reflects the views and interests of the many groups it seeks to serve. This challenge extends beyond INCOG to the primary research that informs the guidelines, as there is little evidence of codesign in many published cognitive rehabilitation randomized controlled trials and other studies.

Streamlining the reviews that drive the guidelines: Technology has brought us closer to the "holy grail" of guidelines that are both comprehensive and up to date.

In parallel with the advances in cognitive rehabilitation that are reflected in INCOG 2022, there have been a number of developments in the science of developing CPGs.
Systematic reviews—a comprehensive assembly of the research literature in a defined area of medicine and the substrate of CPGs—are hundreds of years old, with James Lind's 1753 Treatise on Scurvy frequently acknowledged as the first example.16 Core systematic review activities—search, selection, synthesis, and interpretation—were progressively codified over the centuries that followed, culminating in the formation of the Cochrane Collaboration and the evidence-based practice movement in the late 20th century. CPGs that translate review findings into statements of recommended practice, graded according to the strength of their underlying evidence, have also continued to evolve as an essential component of the end game of implementation and practice change.17

The technological and informational revolutions of the last 20 to 25 years have created a double-edged sword. There is an abundance of evidence that is readily available—the number of journal articles in existence has been shown to be doubling approximately every 24 years18—and advances in technology have made this growing volume of research evidence instantly accessible; however, these vast amounts of information cannot be handled using traditional manual review approaches. Fortunately, technology has also produced advances in review methods that can accelerate systematic review processes. For example, the Covidence online platform can manage screening, selection, and data extraction tasks among 2 or more members of a research team, including automating the identification and resolution of conflicts between reviewers in selecting, appraising, and summarizing research studies.19,20 This is one of a staggering array of tools for every step of the review process; more can be found at http://systematicreviewtools.com/.

Several important manifestations of this over the last decade warrant mention. First, rapid reviews (in which systematic review methods are modified, for example, by focusing on review-level evidence or altering other review parameters) have evolved as a viable means of extracting key themes from published studies in much shorter time frames (generally several weeks) than previously required using traditional systematic review methods.21 Second, as indicated by "Cochrane Crowd" earlier, larger communities of practice can distribute high volumes of work more efficiently. Finally, technology itself can dramatically accelerate review processes, from online platforms that manage review workflows to the use of artificial intelligence and machine learning to replace time-consuming manual tasks with increasing precision.22,23

These technology developments have led to the advent of "living" reviews and CPGs, which harness distributed human resources and machine effort to create reviews and update them continuously.10,24,25 Living INCOG guidelines would facilitate a process to update recommendations as soon as relevant new information becomes available. With the move to virtual meetings stemming from the COVID-19 pandemic, living guideline panels are more feasible and acceptable than ever. This opportunity to maximize timeliness and relevance unlocks the potential to update INCOG in real time, rather than after a number of years.
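To make the role of machine learning in review screening concrete, the minimal sketch below trains a simple text classifier on records that human reviewers have already screened and then ranks unscreened abstracts by their likelihood of relevance, so reviewer time is spent on the most promising records first. It is an illustration only: the example titles, labels, and data are invented, and this is not how Covidence, Cochrane Crowd, or any specific living-review platform is actually implemented.

```python
# Minimal sketch: ranking unscreened abstracts with a text classifier.
# Illustrative only; the records and labels are hypothetical, and this does not
# reproduce the implementation of any real screening tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Records already screened by human reviewers (1 = include, 0 = exclude).
screened_texts = [
    "Randomized trial of memory retraining after traumatic brain injury",
    "Cognitive rehabilitation for attention deficits following TBI",
    "Surgical management of hip fracture in older adults",
    "Pharmacokinetics of a novel antihypertensive agent",
]
screened_labels = [1, 1, 0, 0]

# New, unscreened records retrieved by the search strategy.
unscreened_texts = [
    "Group-based executive function training after severe TBI",
    "Dietary interventions for type 2 diabetes",
]

# Vectorize the text and fit a simple classifier on the human decisions.
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(screened_texts)
model = LogisticRegression().fit(X_train, screened_labels)

# Score unscreened records and present the most likely includes first.
scores = model.predict_proba(vectorizer.transform(unscreened_texts))[:, 1]
for text, score in sorted(zip(unscreened_texts, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```

In a living review, a model of this kind would be retrained continuously as reviewers make decisions; the sketch only illustrates the ranking idea that lets machines absorb part of the manual screening burden.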
Although the COVID-19 pandemic has raised the profile of science and the role of research evidence, it has also laid bare preexisting and serious flaws in the evidence-to-practice pipeline. Poor coordination of COVID trials left many of them underpowered; similarly, review efforts were rushed, with insufficient attention to overlap between different review groups. Together, these problems led to "research waste at an unprecedented scale."4(p183) Existing systems such as the PROSPERO global systematic review registration platform go some way toward addressing these challenges, but COVID showed that review and trial registration alone is insufficient. Rather than individual tools and registration platforms, the vision for the future of evidence-based medicine is better thought of at the level of system design. There is bold, transformative thinking in this space, for example, efforts to link primary, review, and CPG research and associated data, rather than maintaining the existing, poorly connected "silos." Various models of fully integrated "evidence ecosystems" have been proposed.26 For example, Nakagawa et al27 describe a fully open-access platform that enables primary, review, and other researchers to share data based on "FAIR" principles—findable, accessible, interoperable (ie, data can be integrated with other data and used across applications and workflows), and reusable.

What could such a platform look like for INCOG? The possibilities are tantalizing. An "INCOG research ecosystem" could facilitate numerous connected efforts:

- Patients or stakeholders identifying an issue of importance to them (eg, "I want access to tools that can aid my memory");
- Clinicians posing interventions addressing this issue (an online memory portal or a new approach to memory retraining);
- A large, globally coordinated trial of the new intervention with a robust sample size;
- Access to research data and findings for everyone involved in developing the question and in undertaking or participating in the trial;
- Development of an INCOG recommendation based on the trial findings, fed into a connected guideline portal alongside related research inputs (such as ERABI; https://erabi.ca);
- Collection of audit data showing the extent to which the relevant recommendation is reflected in practice;
- Gathering of information about barriers and facilitators to adoption; and
- Planning of implementation or other follow-on research responding to the various research and audit findings.
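One way to picture the FAIR-style linkage that would underpin such an ecosystem is as a set of openly shared record types that reference one another through stable identifiers. The sketch below is purely hypothetical: the record types, fields, and identifiers are invented for illustration and do not describe any existing INCOG, ERABI, or other platform.

```python
# Hypothetical sketch of linked records in an "INCOG research ecosystem".
# All type names, fields, and identifiers are invented for illustration;
# no such platform or schema currently exists.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PriorityQuestion:
    """An issue raised by patients or stakeholders, findable via a stable ID."""
    question_id: str
    raised_by: str
    statement: str  # eg, "I want access to tools that can aid my memory"

@dataclass
class Trial:
    """A coordinated trial of an intervention addressing a priority question."""
    trial_id: str
    question_id: str       # link back to the question it addresses
    intervention: str
    data_access_url: str   # open, FAIR-style access to de-identified data

@dataclass
class Recommendation:
    """A guideline recommendation derived from one or more trials."""
    recommendation_id: str
    trial_ids: List[str]   # provenance: the evidence behind the statement
    text: str
    strength: str          # eg, "strong" or "weak", per the grading system used

@dataclass
class AuditRecord:
    """Practice audit data showing uptake of a recommendation, plus barriers."""
    recommendation_id: str
    setting: str
    adherence_rate: float  # proportion of eligible cases following the recommendation
    barriers: List[str] = field(default_factory=list)

# Because each record carries the identifier of the record it builds on, a query
# can walk from a patient-raised question to the trials, recommendations, and
# audit findings it generated, and back again.
```

The design point, rather than the specific fields, is what matters: when primary, review, guideline, and audit data share identifiers and open access, each step of the list above can feed directly into the next.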
Connecting recommendations to practice: The considerable research effort that goes into guidelines is wasted if implementation and connection to practice are inadequate or unsupported.

The challenge of connecting academic research to the clinical point of care, often characterized as "closing the evidence-practice gap," has long been recognized. There are multiple facets to implementing guideline recommendations, starting with the guideline recommendations themselves. A review by Kastner et al28 of factors associated with successful guideline implementation highlighted the importance of CPG content (process, evidence, clinical applicability, recommendation feasibility) and communication (simple, clear, and persuasive language; CPG format). This work led to the development of the Guideline Implementability for Decision Excellence Model (GUIDE-M) for CPG developers.29 In developing and designing these new INCOG recommendations, we have given consideration to CPG implementation. This reflects our belief that better assessment, treatment, and outcomes for individuals with TBI are only possible if CPG teams keep implementation at the forefront of their thinking, and it is further underlined by our efforts to measure the extent to which clinical practice reflects awareness and use of INCOG guideline recommendations.2,30 As we found in reviewing previous cognitive rehabilitation guidelines, this focus on auditability (evaluating whether recommendations have been translated into clinical practice) is an area traditionally neglected in CPGs.31

It has been more than a quarter of a century since David Sackett and colleagues designed and tested an actual "evidence cart" comprising a computer, CD-ROM, and hard-copy knowledge resources in a hospital setting. Their efforts were ultimately hampered by the sheer volume and weight of 1990s-era technology.32 Today, all of this information can be readily stored on a handheld smartphone. The "ViaTherapy" app, which supports clinical decision-making following upper-extremity stroke, is an example of how modern technology can be harnessed to achieve Sackett's ambition. The app provides evidence-based recommendations tailored to a person following an upper-extremity stroke, based on 4 questions asked of the treating clinician. The ViaTherapy recommendations are further prioritized on the basis of expert panel input, with a star rating indicating the most feasible and important therapies. Recommendations can also be filtered if the clinician wants to provide group rehabilitation. Video demonstrations of the therapy, along with potential outcome measures for tracking progress, are also provided.33,34
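The general pattern here (answers to a handful of clinician questions filtering a library of expert-rated recommendations) can be sketched in a few lines. The questions, therapies, and star ratings below are invented for illustration and are not ViaTherapy's actual content, logic, or implementation.

```python
# Toy sketch of question-driven filtering of guideline recommendations, in the
# spirit of decision-support apps such as ViaTherapy. The questions, therapies,
# and ratings are invented and do not reflect ViaTherapy's actual content or code.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TherapyRecommendation:
    text: str
    stars: int                 # expert-panel priority rating (higher = more feasible/important)
    requires: Dict[str, str]   # clinician answers under which this recommendation applies

RECOMMENDATIONS = [
    TherapyRecommendation("Task-specific repetitive training", 3, {"active_movement": "yes"}),
    TherapyRecommendation("Mirror therapy", 2, {"active_movement": "no"}),
    TherapyRecommendation("Group circuit-class therapy", 2, {"group_setting": "yes"}),
]

def recommend(answers: Dict[str, str]) -> List[TherapyRecommendation]:
    """Return recommendations consistent with the clinician's answers, best rated first."""
    matches = [
        r for r in RECOMMENDATIONS
        if all(answers.get(key) == value for key, value in r.requires.items())
    ]
    return sorted(matches, key=lambda r: r.stars, reverse=True)

# Example: a clinician answers two screening questions.
for rec in recommend({"active_movement": "yes", "group_setting": "yes"}):
    print(f"{'*' * rec.stars}  {rec.text}")
```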
However, as decades of implementation science have shown, the existence of a resource such as ViaTherapy is not sufficient to achieve implementation into routine, sustainable practice. Efforts to achieve this are more likely to be successful if barriers and facilitators to uptake across various contexts and settings are addressed.35 Two important considerations warrant mention in this regard. First, knowledge and clinical practice should be viewed as a 2-way exchange rather than a 1-way street of guideline dissemination. The importance of this 2-way exchange is reflected in the development of the "learning healthcare systems" concept, created by the Institute of Medicine (IOM) following an evidence-based medicine roundtable in 2006. This approach views evidence-based practice not just as the dissemination of information to support care (eg, using ViaTherapy) but also as a continuous process of learning through implementation and making refinements based on insights gained from caregivers, patients, and families.26 This reflects the Knowledge to Action framework developed by Graham and colleagues,36 which has guided INCOG from the beginning. In a learning healthcare system, ViaTherapy implementation would not end with downloading the app. Insights into its utility; the feasibility, acceptability, and affordability of the recommended therapies; and new questions and research needs would be gathered from clinicians, patients, and families and used to iterate and improve the ViaTherapy resource. Our own work on how INCOG 2014 has translated into practice indicates that this type of continuous learning is presently lacking.2,30

Second, the healthcare system must provide an enabling environment that embeds the importance of evidence-based practice in clinical training; sets (and funds) evidence-based practice as an expectation; supports clinicians to understand and realize this ambition; and recognizes efforts to achieve it. Junior clinicians in all fields, including cognitive rehabilitation, can be overwhelmed in various ways: by exposure to a health system that is almost continuously overstretched and underfunded; by the expectations of patients and their families; and by balancing the sheer volume of clinical work with a range of administrative and other workplace responsibilities. Opportunities to stay up to date with research evidence, reflect on what it means for practice, and learn new skills may be nonexistent, rare, or perceived as a low priority in environments that prioritize productivity (quantity of work) over quality of work. Furthermore, the new knowledge brought by recent graduates may be met by senior colleagues who are set in their approaches and routines of practice. However, investment in clinicians' participation in a larger evidence ecosystem offers substantial downstream benefits to patients. The onus is therefore on health services and clinical managers to facilitate this opportunity. If this opportunity is lost, the work of INCOG and other CPGs risks remaining on shelves and in unused apps, where it cannot improve the outcomes and quality of life of those in need of the best cognitive rehabilitation we can and should offer.
Keywords: incog, future