Study suggests diagnostic mammography’s effectiveness depends largely on the reader’s skill.

| December 12, 2007

I recently completed reading How Doctors Think by Dr. Jerome Groopman.  He holds the Dina and Raphael Recanati Chair of Medicine at Harvard Medical School and is chief of experimental medicine at Beth Israel Deaconess Medical Center in Boston.  He is also a staff writer at The New Yorker.

In the book he cites several examples of intra- and interobserver variability among clinicians, radiologists and pathologists, including examples from his own experience with the healthcare system and others supported by the literature.  I would recommend the book to any reader of this blog.

Below are a couple of news briefs about a study published in the December 11 issue of the Journal of the National Cancer Institute.  The study highlights the old adages that to err is human and that there is no substitute for experience, and perhaps speaks to the growing market, demand and perception for sub-specialty services in medicine….

The Chicago Tribune (12/12, Peres) reports on a new study that suggests a mammogram’s "ability to detect cancer is strongly dependent on who is reading it."  The study, which appears in this week’s edition of the Journal of the National Cancer Institute, found "wide variation in radiologists’ ability to detect cancer in breast X-rays, with some missing as many as seven out of 10 cases."  Overall, researchers from the Group Health Center for Health Studies concluded that radiologists "miss an average of two in every 10 cases of breast cancer."  The finding is a "stark reminder of the limitation of the common diagnostic test."  While previous studies have shown variations in cancer detection, most dealt with "screening mammograms."   This one focused instead on "diagnostic mammograms."  The researchers, led by Diana Miglioretti, studied "the performance of 123 radiologists who interpreted nearly 36,000 diagnostic mammograms between 1996 and 2003 at 72 U.S. facilities."

According to HealthDay (12/12, Gardner), radiologists participating in the study had a sensitivity rating ranging from 27 percent to a perfect 100 percent.  False-positive readings ranged from 0 to 16 percent among them.  The researchers determined that the "strongest factor linked to better accuracy was if the radiologist was affiliated with an academic medical center."  While these radiologists "were correct 88 percent of the time," others had an accuracy rating of 76 percent.  Similarly, those who spent at least 20 percent of their time reading mammograms "were more accurate than those who spent less time on breast imaging — 80 percent versus 70 percent."  However, the study indicated that experience is not always helpful.  "[M]ore experienced radiologists" in the study were found to have "missed more cancers than those with less experience."

Miglioretti suggested that the best way to reduce variation among radiologists was to "identify the radiologists who are least accurate at reading mammograms — and to improve their performance with extra training," ScienceDaily (12/12) added.

Category: General Healthcare News
