More Thoughts on JAMA Breast Biopsy Study

| May 12, 2015

Image courtesy of CLP

Over the weekend I saw an article published in Clinical Lab Products online entitled “Pathology Under Fire.” The article is a roundtable discussion moderated by Steve Halasey, who interviews Drs. Ken Bloom (Clarient Diagnostic Services), Eric Walk (Roche Diagnostics), and David Dabbs (University of Pittsburgh Medical Center), three well-known, respected pathologists in their respective areas of interest and companies/affiliations. Among the major issues discussed is the March JAMA breast biopsy concordance study. The “Elmore Study” of course showed poor concordance among practicing pathologists for atypical and in-situ diagnoses, and some disagreement among “expert” pathologists, based on independent review of single breast biopsy slides in which the participants were blinded to clinical findings, unable to use deeper sections and/or IHC stains, and limited to their own opinions.

Much is being made of the study design – namely, the inability to show cases around, get additional sections, correlate with clinical and imaging findings, perform IHC stains, etc… But isn’t that really the point? This was a research study. This is how studies go. Someone may pick 120 cases of a particular kind of tumor, pick 3 IHC stains, stain the tumors, observe the pattern, and report the findings. Over time, perhaps, the stains become recognized as having some utility in clinical practice through additional studies, public discussion at meetings, and further validation by other investigators. Other studies may look at cell-signaling pathways, perhaps as markers for more aggressive biology or as targets for therapy, or report novel findings that may prove useful in clinical practice if they are substantiated and further validated over time. Sometimes pathology studies focus on practice methods, such as “imprint cytology for sentinel lymph nodes in breast cancer” or the use of molecular techniques to differentiate tumors, and researchers report their experience. Again, over time these practices may or may not become more widely adopted with increasing use and experience by many folks.

When I was a resident, my attendings’ favorite stain was the Movat pentachrome. Without looking it up now, I couldn’t tell you the 5 stains or what color stained what, but it was their favorite stain because it took our lab 2 days to perform and gave them time to think about the case. This was before wide use of IHC in our lab and among our now long-retired attendings, who did not train with or significantly use IHC stains in the bulk of their years of practice. It would be hard to find a journal article proposing the use of a stain simply as an excuse to think about the case longer, without the stain actually providing any useful information when it was received. Some practices aren’t written about but still occur.

To the point – the study appeared in a well-respected medical journal (I think most physicians would agree JAMA is in this category), and the study was designed, perhaps to an exaggerated degree, to ask what would happen if you showed a bunch of practicing pathologists a bunch of slides and asked them to give a diagnosis on the slide alone.

This happens every year in places like Tampa, Florida. A bunch of people are seated in front of a microscope and computer with, as I recall, 50 glass slides and, at the time, 3 days of computer images and questions, and you are asked your opinion on a wide variety of cases with oftentimes limited clinical history, ancillary studies, imaging information, ability to show the cases around, etc…

Sometimes you can zoom in on the computer image and make what you don’t know more pixelated, and sometimes you can eliminate 1 or 2 choices out of 5 possible responses and increase your chances of a successful educated guess, SAT-style.

This process is called The Pathology Boards. We use it to certify pathologists as “board certified,” and if you are old enough, like me, your lifetime certification lets you avoid the MOC and SAM credit requirements to remain “certified.”

Of course, The Pathology Boards, certification, and practice are all different things requiring different skill sets. The point is that this study, just as if you showed 50 cardiologists 150 EKGs or 50 radiologists 150 CT scans and asked about diagnosis or management, would produce some level of disagreement, particularly if the studies were outside their realm of interest/expertise and/or the amount of information available to them beyond the images was limited.

The roundtable panelists in the article discuss how the study is not “real life,” as has been discussed here and elsewhere and commented on by pathologists who participated in the study. That is the point. It was a research study designed to leave folks to themselves and observe what happens.

There is an old joke about this, “If you show the same case to three pathologists, you are likely to get four opinions.”

Fortunately, this is not real practice. It is a real study but not real practice. Of course we get deeper levels, show the case internally, obtain second opinions and expert consultations, perform IHC stains, etc… Dr. Bloom mentions that increasing use of digital pathology could further facilitate second reviews.

I think it is time we stop criticizing the actual breast biopsy study and address best practices, breast biopsies or otherwise.

Fortunately, partly by choice and partly by default, pathology is a medical specialty that mandates quality assurance in our practices. While some specialties may do random “chart reviews” and “peer reviews” when required by bylaws or issues with a provider, we do this routinely. Through the course of training and practice we learn to know what we don’t know or can’t maintain. Sub-specialization has driven this further. While I used to be comfortable with medical kidney and muscle biopsies, I no longer am.

It is time for us to recognize our collective weaknesses and do what is in the best interests of the patient. Repeat the study in your own practices and see what the results are. Perhaps you do not need to because you have “breast pathologists” or all cases get “two sets of eyes” before sign-out with internally recorded high concordance.

This will reassure not only ourselves but our customers, our patients.


Category: Anatomic Pathology, Clinical Laboratories, Current Affairs, Education, General Healthcare News, Laboratory Compliance, Laboratory Management & Operations, Medical Research, Pathology News, Patient Advocacy, Reports

Comments (2)


  1. George Leonard says:

    KJK:

    With respect, I disagree. Of course I agree with the premise that we as pathologists are not optimally consistent when it comes to atypical ductal lesions. However, the vitriol that many pathologists (including myself) are expending to bring to light the flaws in this study is neither undue nor misplaced. We don’t need to rehash all the reasons why it’s not a valid study. The more important issue at hand is the court of public opinion. Once this study broke, the newspapers latched on and put some poor woman’s face on a hatchet article about how pathologists are as good as guessing with your breast biopsies.

    I don’t agree with the analogy of the board exam. The stakes are very different. If you don’t pass your boards you can take them again; no harm, no foul. Even if you never get it right, the worst that happens is that you don’t get board-certified. Get a breast biopsy wrong and the outcomes are different. I think a more apt analogy (and one that is pertinent) is the pap smear. A similar furor arose over the diagnoses rendered on cervical cytology specimens, and now we have a mandated annual exam we must take in order to diagnose cervical cytology specimens. I think we can all agree that particular test is not an assessment of how well we can diagnose paps. It’s a ridiculous regulatory hoop that takes time and money from practicing pathologists without ultimately benefiting patients.

    This brings me to the final point. If the article is taken seriously it’s not unreasonable to draw the conclusion that more regulation is needed with the diagnosis of breast biopsy specimens. It may be cynical of me, but precedent has been set as mentioned above with paps. One day there may be mandatory pathologist proficiency testing to sign out breast biopsy specimens. Time and money that doesn’t benefit patients, but rather lines the pockets of the regulatory organizations. Even more cynical of me would be to note that perhaps it will be required that all atypical cases are reviewed by an “expert”. And who certifies these experts? Regulatory organizations for a price. Or the ultimate in cynicism – you can send your cases to a national expert, such as one of the experts who was a pathology contributor to the paper.

    Bottom line – my opinion is that it’s in JAMA not because it’s good, but because it’s big and controversial. If we accept its conclusions we move further down a shady path to perdition for our specialty. I reject the paper’s conclusions because they are drawn from a poor study design. If we don’t speak out as a specialty, we risk being railroaded into more regulation without benefit to patients.

  2. Congratulations on a very well written article. I agree that with the many industry pressures facing all physicians, pathologists need to keep their eye on what is really important: accurate diagnoses and expanded focus on value based pathology services.