MedicalResearch.com Interview with:
Li Zhou, MD, PhD, FACMI
Associate Professor of Medicine
Division of General Internal Medicine and Primary Care
Brigham and Women’s Hospital, Harvard Medical School
Somerville, MA 02145
MedicalResearch.com: What is the background for this study? What are the main findings?
Response: Documentation is one of the most time-consuming and costly aspects of electronic health record (EHR) use.
Speech recognition (SR) technology, the automatic translation of voice to text, has been increasingly adopted to help clinicians complete their documentation in an efficient and cost-effective manner. One way in which SR can assist this process is commonly known as “back-end” SR: the clinician dictates into the telephone, the recorded audio is automatically transcribed to text by a speech recognition engine, and the text is then edited by a professional medical transcriptionist and sent back to the EHR for the clinician to review and sign.
In this study, we analyzed errors at different processing stages of clinical documents collected from two health care institutions that use the same back-end SR vendor. We defined a comprehensive schema to systematically classify and analyze these errors, focusing particularly on clinically significant errors (errors that could plausibly affect a patient’s future care). We found an average of 7 errors per 100 words in raw speech recognition transcriptions, and about 6% of those errors were clinically significant. Overall, 96.3% of the raw speech recognition transcriptions evaluated contained at least one error, and 63.6% contained at least one clinically significant error. However, the error rate fell significantly after review by a medical transcriptionist, and it fell further still after the clinician reviewed the edited transcript.