A systematic assessment has revealed that clinical practice guidelines for osteoporosis screening vary in quality, and their recommendations often differ. Lamia H. Hayawi, a research assistant at Pallium Canada, Ottawa, and colleagues used the Appraisal of Guidelines for Research and Evaluation (AGREE II) instrument and the Institute of Medicine standards for trustworthy guidelines to measure guideline quality. The researchers found that the quality of clinical practice guidelines did not improve over the 14-year period studied. Their findings were published online in PLoS One.1
Clinical practice guidelines should consist of recommendations for the assessment and/or management of a specific disease. In 2010, an international team of researchers developed the AGREE II instrument to define the essential components of a good guideline. This tool is comprehensive and covers implementation and dissemination issues related to guidelines; however, it does not assess the content of the guidelines themselves. In 2011, the Institute of Medicine standards were created to aid in the development of high-quality, evidence-based guidelines. Among other things, these standards evaluate the evidence on which the guidelines are founded. Both tools evaluate the influence of funding bodies and conflicts of interest.
The researchers identified and assessed 33 guidelines for osteoporosis screening, published in English between 2002 and 2016, from 13 countries. Although the guidelines were based on country-specific data and cost-effectiveness analyses, and would therefore naturally vary by country, the authors found the guidelines varied even within the same country. They found the most variability in recommendations for screening individuals without previous fractures and the most consistency in recommendations for the sites of bone mineral density testing.
When the authors analyzed the guidelines using the AGREE II instrument, they found the highest mean domain scores were for clarity of presentation and for scope and purpose, and the lowest domain scores were for applicability and editorial independence. Moreover, they found most guideline developers did not seek the views and preferences of patients when developing guidelines.
“By assessing the compliance of guidelines to the criteria of the [Institute of Medicine] standards, we found that 64–67% of guidelines fulfilled the standards for establishing evidence, strength of recommendations and systematic review standards,” write the authors. “However, most guidelines fell short in involving patients and public representatives in their guideline development and didn’t adequately describe the method for external review. Though, the [Institute of Medicine] standards were developed in 2011, we found few studies that assessed the quality of [clinical practice guidelines] using these standards.”
When the team compared guidelines published in 2002–2010 with those published in 2011–2016, they found no change in compliance with the criteria of the AGREE II instrument or the Institute of Medicine standards.
“Our systematic review emphasizes the variability in the use of the different grading systems to aggregate the level of evidence and rate the strength of recommendations,” they write. “Establishing the level of evidence that underlies the recommendations is essential in guideline development.”
The authors conclude that guideline developers should work together to improve the quality and consistency of recommendations, as well as the reporting of guideline development. Such an effort may increase the likelihood that the guidelines will be used in clinical practice.
Lara C. Pullen, PhD, is a medical writer based in the Chicago area.
1. Hayawi LM, Graham ID, Tugwell P, et al. Screening for osteoporosis: A systematic assessment of the quality and content of clinical practice guidelines, using the AGREE II instrument and the IOM Standards for Trustworthy Guidelines. PLoS One. 2018 Dec 6;13(12):e0208251. doi: 10.1371/journal.pone.0208251. eCollection 2018.