Radiology, January 2003; Volume 226, Number 2

The Tyranny of Accuracy in Radiologic Education1


1 From the Department of Radiology, Indiana University School of Medicine, 702 Barnhill Dr, Room 1053, Indianapolis, IN 46202-5200. Received March 9, 2001; accepted March 20.

Index terms: Education • Perspectives • Radiology and radiologists

 

Her taste exact

For faultless fact

Amounts to a disease. 

W. S. Gilbert, The Mikado, act II

Like most physicians, radiologists take "getting it right" very seriously. They take pride in their work, enjoy using what they know to help patients, and do not like to be told that they are wrong. Moreover, they do not care much for uncertainty, particularly when they expect to be told whether something they have said is correct. At our institution, we recently tested this hunger for certainty in the venue of our department’s daily noon conference. Generally, when the residents are asked to discuss an unknown case, they are expected to focus their attention on this question: What is the abnormality? This question rests on at least two assumptions: (a) that the resident reviewing the case does not know the correct answer and (b) that there is a correct answer.

To test these assumptions, we showed the residents a series of cases in a way that departed from the usual routine in two crucial respects: In some cases, we told residents the diagnosis as soon as they saw the case, and in others, we did not tell them the diagnosis. This caused a good bit of discomfort and even consternation among some of the residents. "If you are going to tell us the diagnosis up front, why even show us the case?" some asked. Others demanded, "How are we ever going to learn anything if you show us cases without telling us what the diagnosis is?" Underlying their expectations was a set of assumptions about what it means to perform well as a radiologist. At the core of those assumptions lies the archetype of accuracy.

Before proceeding, we should state that we harbor no doubt that accuracy is a good thing. Given the choice, every radiologist should prefer to get the correct diagnosis in every case. No one wants to be wrong, not only out of regard for his or her own reputation but also from a laudable intent to help both the referring physician and the patient. It is frustrating to get a case wrong, especially when the failure to detect a lesion or to diagnose a finding correctly places the patient at risk.

Yet accuracy is not the only criterion, and in some cases perhaps not even the most important criterion, of radiologic performance. The tendency in many of our training programs to focus so intently on accuracy may in some respects do trainees a disservice by distracting their attention from other no less important aspects of the radiologist’s mission. It can also send a message about the nature of expertise in radiology that is at best misleading (1). Getting the diagnosis of a case right may not completely discharge the radiologist’s professional obligation, and if we suppose that it does so, we may be shortchanging not only our referring colleagues and the patients they serve but also ourselves and those we train. It is not wrong to want to be right, but too narrow a focus on getting the diagnosis of each case right may undermine the overall value of radiologic service.

The purpose of this article is threefold: (a) to explore the roots of accuracy’s dominance in radiologic thinking; (b) to examine some of the key shortcomings of accuracy as a criterion of radiologic performance, especially when it is regarded as the sole criterion; and (c) to offer an expanded view of radiologic performance for residency training programs. Our overarching purpose is not to supplant accuracy but to situate accuracy in a larger and more complete context of radiologic performance. It is only in this larger context that the true value of accuracy can be assessed.

 

Why Accuracy?
The allure of accuracy is multifactorial. It stems in part from generic features of the medical profession. As a group, physicians tend to prefer situations where their roles are clearly defined, where they have direct personal influence over the outcome of events, and where there is prompt and clear feedback on their performance (2). We yearn for the clarity of the classroom, where expectations were clearly specified at the beginning of the term, performance was regularly assessed with examinations, and we could readily determine how well we were doing. The goal in such settings was clear: to answer every examination question correctly. The higher one’s examination scores, the better one was doing.

A similar attitude permeates radiology training. We tend to take most seriously those aspects of radiologic performance that can be quantified, especially in the format of standardized multiple-choice examinations. In this setting, residents are given several choices from which they must select the best response. One answer is right, and the others are wrong. When residents review cases in a case conference format, they are informed whether they get the diagnosis of each case right. They are encouraged to focus on the images at hand, to describe the findings, and to offer a differential diagnosis, all the while looking for clues about whether they are "on the right track." The ideal case, from the point of view of many residents, seems to be the "Aunt Minnie," a case in which the imaging findings alone provide a clear diagnosis.

Accuracy has a certain methodologic appeal. If we ground our model of radiologic performance in a paradigm of accuracy, it becomes much easier to measure how well we are doing (3). One can show a radiologist 100 cases and calculate his or her missed diagnosis rate. One can calculate the sensitivity and specificity of the radiologist’s readings and plot a receiver operating characteristic curve. This paradigm of accuracy lends itself quite nicely to standardized quantitative assessments of radiologic performance. However, what tends to drop out of discussions like these is what Kundel et al (4) term "intelligence," that is, how images relate to the practical reasoning of clinical medicine. It often seems to be assumed in the literature that if these images are realistic (detailed) enough, this problem will take care of itself.
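To make this arithmetic concrete (a standard formulation of these measures, sketched here rather than taken from the original article), suppose the readings of those 100 cases are tallied against a reference standard as true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Then

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}.$$

A hypothetical reader who detects 45 of 50 diseased cases and correctly clears 40 of 50 normal cases thus has a sensitivity of 0.90 and a specificity of 0.80. The receiver operating characteristic curve plots sensitivity against 1 − specificity as the reader's confidence threshold varies, and the area under that curve summarizes accuracy across all thresholds (3).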

One of the most important lessons in all of pedagogy is to appreciate the motivational and attitudinal importance of testing. Students who wish to receive good evaluations will adapt their habits of preparation and performance to meet the feedback they receive from their teachers. Moreover, there is a spillover effect from training to practice. Habits of success inculcated over years of training will continue after training is over. Residents whose formative years steep them in this ideal of accuracy will likely continue to approach images as though their job were to review cases and get the right answer.

In the real-world practice of radiology, who holds the right answer? Frequently, the answer is hard to specify. In some cases, the pathologist serves as the arbiter of truth, using histologic findings and the principles of molecular biology to supply the answer. In a greater number of cases, the natural history of the disease and the response to therapy supply the answer, as when a fever resolves or fails to resolve with antibiotics. In a still greater number of cases, including the majority of cases with imaging findings that radiologists interpret as normal, no independent right or wrong answer is ever clearly specified.

When the resident and the attending radiologist disagree in regard to the interpretation of a particular image, it is the attending radiologist’s impression that appears in the final report. Neither really knows who is right, because there is no answer key to grade their responses. In most cases, no follow-up or pathologic verification is ever obtained.

The resident and the attending radiologist function like co-conspirators in a plot to protect their mutual faith in accuracy. The resident needs an answer key to guide his or her developing sense of how to interpret radiographs. The attending radiologist needs to believe in his or her own reliability. No one wants to be wrong. Yet, far more disconcerting than being wrong is to be repeatedly told that we really do not know the answer.

Does Accuracy Fit?
One of the problems with accuracy is its tendency to narrow the radiologist’s focus. Radiologists enamored of accuracy tend to think of themselves as a kind of computer, converting input, in the form of images, into output, in the form of diagnoses. As we have seen, this computational model lends itself very nicely to a certain kind of measurement and evaluation. Yet, it can so narrowly focus our attention on the degree of correspondence between what the radiologist sees on the image and what the pathologist sees under the microscope that we overlook other aspects of radiologic performance to which we very much need to attend.

One such aspect is the clinical context of the image. If the tests that we administer to residents repeatedly emphasize getting the right answer from the information on the image, then they are unlikely to develop skills as active investigators. After all, "the critical element of ‘learned expertise’ is the ability to understand the context of the diagnostic examination—to know what to look for in the images and why" (5). In many cases, key pieces to the diagnostic puzzle are not on the image and can be elicited only by investigating the patient’s medical history and physical examination findings, laboratory results, and even the results of other imaging studies (6).

In the language of perceptual psychology, if the image is the figure, then the clinical context is the ground. How we perceive, describe, and interpret a figure is powerfully influenced by the background against which it is projected (7). For example, the interpretation of an abnormal chest radiograph can and should be markedly influenced by such clinical information as a history of compromised immunity, the presence of leukocytosis, and the availability of an identical chest radiograph taken 2 years previously.

If misapplied, the accuracy model tends to promote a rather stripped-down view of radiologic consultation, in which the referring physician simply inserts an image into a slot labeled "radiologist," and out pops a diagnosis. In fact, however, the appropriate radiologic output should be determined in part by the nature of a more complex clinical input. The radiologist needs to know what question the referring physician is asking (8). The radiologist also needs to ask these questions: Why is the imaging study being ordered, and what effect will different radiologic interpretations have on patient treatment? Is the test that has been ordered really the optimal choice? Could the examination be better customized to fit this particular situation?

The computational ideal that emerges from an overly zealous pursuit of accuracy implies a radiologist operating in relative isolation, gleaning information from images and directing output into a microphone. In fact, however, radiologic interpretation is a deeply social enterprise (9). Residents learn how to be radiologists largely through apprenticeship, working side by side with instructors. Radiologic performance is powerfully shaped by the professional and institutional contexts in which it is practiced. The goal of interpretation is not to write as generic, bland, and medicolegally innocuous a report as possible but to communicate with the referring physician in a way that genuinely contributes to the treatment of the patient. To achieve that objective, one must know something about one’s colleagues and one’s patients.

Though both the patient and the referring physician have legitimate interests in the diagnostic accuracy of the radiologist’s report, mere accuracy is not their ultimate objective (Littenberg B, personal communication, 1999). The patient wants to regain health or remain healthy, and the referring physician wants the diagnostic information he or she needs to help bring that about. From the referring physician’s point of view, an accurate diagnosis is not the be-all and end-all of medicine, but a kind of tool, or useful information, that can help guide treatment. A model of radiologic practice that seeks only accuracy threatens to undermine the radiologist’s clinical importance. Radiologists need to be not only accurate but also relevant.

A radiologist could issue reports that, strictly speaking, are accurate but prove essentially useless. Consider an abdominal radiograph that is interpreted as manifesting a "nonspecific bowel gas pattern" (10). How much help does that provide the referring physician? In some cases, an accurate report can even harm a patient, such as when a computed tomographic scan is read as normal in a situation where magnetic resonance imaging should have been performed or when the report of an incidental finding results in an expensive and burdensome diagnostic work-up. Likewise, an imprecise report may provide all the information a clinician really needs. While "right infiltrate" may represent a rather sloppy description of a chest radiograph, it may give the referring clinician all the information he or she really needs to treat a previously healthy patient with new-onset cough and fever.

Radiology trainees need to develop a sense of proportion about accuracy. The clinical context must be taken into account in determining what degree of accuracy to pursue. It is vitally important not to miss a case of subarachnoid hemorrhage or child abuse. If there is doubt about such a diagnosis, obtaining additional images or even performing additional studies is likely to be warranted. On the other hand, whether a several-millimeter hypoechoic lesion in the kidney of an asymptomatic patient warrants further work-up is, at the very least, questionable (11). Mere accuracy for accuracy’s sake should not be the radiologist’s goal. Rather, the pursuit of accuracy should be shaped by the larger clinical importance of radiologic findings.

The accuracy paradigm also distracts attention from the importance of providing good "customer service." Whether or not we choose to regard our patients and colleagues as "customers," there is no doubt that most of us could increase the efficiency, friendliness, and relevance of the service we provide (12). Providing accurate readings of diagnostic images is an essential link in the radiology value chain, but a chain is only as strong as its weakest link. Poor performance in other areas can seriously undermine the value of an accurate report.

We also can do more to build the quality of collaborative relationships with colleagues than the accuracy paradigm would suggest. A request for an imaging study should be regarded as a request for a radiologic consultation, which requires a two-way flow of information and a sense of teamwork in meeting the needs of the patient. Building good collaborative relationships enriches the practice of radiology; it is simply more rewarding when one enjoys mutually respectful and stimulating relationships with colleagues. Such relationships provide a major impetus to continuing professional learning and growth.

The single-minded pursuit of accuracy also may threaten vital academic missions of radiologic practice. The radiologist who strives merely to maximize the correspondence between his or her diagnostic impressions and some hypothetical answer key may end up foregoing important opportunities to advance the field of radiology (13). The information in the radiology textbooks of today is inadequate, and more needs to be added.

We do not know everything we need to know about diagnostic imaging, and broadening our understanding will require that radiologists do more than simply categorize findings accurately according to existing standards. If radiology is to progress in the future, we need to educate radiologists to be capable of viewing their clinical material with an element of skepticism, curiosity, and creativity. To put it in terms that the partisans of accuracy would understand, we need to develop new and better "answer keys."

Taking Education beyond Accuracy
If our diagnostic radiology residency programs are to provide optimal training for the next generation of radiologists, we must replace the monolith of accuracy with a more complete model of radiologic performance. Residents should be encouraged to regard themselves not as diagnostic computers but as radiologic consultants. The pursuit of the right answer should be expanded to encompass such consultative factors as clinical relevance and collegiality. Social competence should receive as much attention as lesion detection and differential diagnosis. Faculty should strive to exemplify the characteristics of effective consultants and expect residents to do the same.

Especially in academic centers, we should encourage trainees to ask questions, even embarrassing ones, about the right and wrong answers and to insist on high standards of evidence, rather than merely to assume that the attending radiologist is right. An atmosphere that prizes conformity should be replaced by one that promotes rigorous standards of intellectual inquiry. Residents should be spoon-fed less and challenged more. The goal of a good education should be to form active and effective inquirers—not so much people who know all the answers as people who know how to ask good questions. They should be trained to recognize when they are not being given enough information, and they should know how to pursue what they need. During the course of routine clinical work, cases should also be mined for their potential value in research and education.

We should revisit our attitude toward error. Wherever possible, errors should be treated not as signs of failure but as learning opportunities (14). In the real world of radiologic practice, the good radiologists learn as much or more from their mistakes as they learn from their successes. The accuracy paradigm contributes to an atmosphere of intolerance toward error, in which trainees may soon see mistakes as embarrassments that should be ignored or even concealed. Such an attitude is inimical to the spirit of inquiry and the determination to improve the quality of one’s practice.

As a practical matter, how can we expand our educational programs beyond the narrow and inadequate confines of accuracy? One technique, already alluded to, is to present residents with cases in which the diagnosis is already known. Frequently, so much energy is devoted to figuring out the right answer that other important aspects of the case are neglected. By laying out the right answer initially, residents can devote more attention to other aspects of the case, such as the quality of the clinical history, the appropriateness of the examination, the accuracy of the description of the findings, an assessment of the severity of the process, the formulation of a differential diagnosis, an estimation of the degree of diagnostic confidence provided by an imaging examination, and suggestions for further evaluation.

Another technique, also already alluded to, is to show occasional cases for which the correct answer is not known or, at least, is not provided to the residents. Though at first this tends to prove rather disconcerting, it enables participants to begin to evaluate their performance according to criteria other than the final diagnosis. In real-world practice, one signs the radiology report in a setting of uncertainty, and the training environment should incorporate more of this perspective as well. Always supplying the correct answer can create an unrealistic expectation of certainty where uncertainty usually is the rule. We should encourage residents to ask this question again and again: How can I tell that I am doing a good job, even when I do not know whether I offered the correct diagnosis?

This is not to suggest that radiologists should not make every effort to determine the accuracy of their clinical work through systematic audits of their practice. We should continually strive to develop and refine diagnostic processes in ways that reduce error and improve performance. By always and immediately telling residents whether they were right, however, we may undermine the development of their own process-improvement initiatives.

Furthermore, we need to make sure that residents are evaluated in ways that transcend mere accuracy. Scores on standardized tests, which invariably overemphasize the ability to accurately apply a fund of knowledge, neglect vital factors, such as consultative effectiveness and intellectual rigor. Because most people will adapt their performance to meet the criteria according to which they are being evaluated and rewarded, we must avoid the tendency to allow the most easily evaluated factors of radiologic performance, such as accuracy, to dominate the system. Systems of evaluation and reward should be tailored to a complete and balanced view of excellence in the field. To do otherwise is to distort the educational process and, ultimately, its product.

In short, accuracy is good, but it is not good enough. It is the easiest performance factor for trainees and faculty to focus on, but it is not the only one, nor is it necessarily the best. By deliberately focusing attention on radiologic performance factors other than accuracy and by better situating the link of accuracy within the larger value chain of radiologic service, we can educate radiologists to more accurately represent our vision of what excellence in radiology should be.

REFERENCES 

  1. Nodine CF, Kundel HL, Mello-Thoms C, et al. How experience and training influence mammography expertise. Acad Radiol 1999; 6:575-585.

  2. Fox R. The sociology of medicine: a participant observer’s view. New York, NY: Prentice Hall, 1988.

  3. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982; 143:29-36.

  4. Kundel HL, Revesz G, Ziskin M, Shea F. The image and its influence on quantitative radiological data. Invest Radiol 1972; 7:187-198.

  5. Robinson PJ. Radiology’s Achilles’ heel: error and variation in the interpretation of the Röntgen image. Br J Radiol 1997; 70:1085-1098.

  6. Peterson C. Factors associated with success or failure in radiological interpretation: diagnostic thinking approaches. Med Educ 1999; 33:251-259.

  7. Berger J. Ways of seeing. New York, NY: Viking, 1995.

  8. Gunderman RB, Phillips MD, Cohen MD. Improving clinical histories on radiology requisitions. Acad Radiol 2001; 8:299-303.

  9. Kaplan B. Objectification and negotiation in interpreting clinical images: implications for computer-based patient records. Artif Intell Med 1995; 7:439-454.

  10. Flak B, Rowley VA. Acute abdomen: plain film utilization and analysis. Can Assoc Radiol J 1993; 44:423-428.

  11. Black WC. Overdiagnosis: an unrecognized cause of confusion and harm in cancer screening. J Natl Cancer Inst 2000; 92:1280-1282.

  12. Dacher JN, Charlin B, Bergeron D, Tardif J. Consultation skills in radiology: a qualitative study. Can Assoc Radiol J 1998; 3:167-171.

  13. Drayer BP. Balance between creativity and anatomic correlation in image interpretation (editorial). Radiology 1988; 168:214.

  14. Bosk CL. Forgive and remember. Chicago, Ill: University of Chicago Press, 1979.
Authors: Richard B. Gunderman, MD, PhD, and James M. Nyce, PhD