Over the past several decades, the medical community has been moving toward a model of shared decision making. In addition to its ethical advantages, shared decision making potentially yields such benefits as improved medication adherence and better health outcomes. As treatment options proliferate and cultural expectations shift, shared decision making has become even more important. Yet practical barriers leave many open questions about how best to implement the practice.
For several decades, modern medicine has been moving away from a paternalistic model of medical care and toward patient-centered medicine; the earliest mention of shared decision making dates from 1982.1 An influential article on clinical practice guidelines argued that interventions should be considered standard only if there is almost unanimous agreement among patients about the desired outcomes.2
For the majority of clinical decisions, no single intervention meets this standard. In most cases, more than one reasonable option is available, each with its own strengths and possible side effects. Thus, patient preferences and values must be considered in determining the optimal treatment strategy.3