Accuracy of Google Translate™ for Medical Educational Material

1. University of California, San Francisco, San Francisco, CA
2. Carnegie Mellon Silicon Valley, Moffett Field, CA
3. University of California, San Francisco, San Francisco, CA
4. University of California, San Francisco, San Francisco, CA
5. University of California, San Francisco, San Francisco, CA

Meeting: Hospital Medicine 2010, April 8-11, Washington, D.C.

Abstract number: 76

Background:

Language barriers present a major challenge to patient education in the rapidly growing population with limited English proficiency. Although professional translations of standard educational material can help address this challenge, clinicians often encounter situations that require giving brief written instructions tailored to the needs of an individual patient. Automated online translation tools present an inexpensive and ubiquitous potential alternative, but their accuracy for medical information is unknown. We studied the accuracy of Google Translate™ compared with professional translation when applied to medical educational material.

Methods:

We obtained both English and professionally translated Spanish versions of a widely available patient pamphlet on warfarin use prepared by the Agency for Healthcare Research and Quality. The English text was copied and pasted verbatim into the Google Translate™ Web page to obtain an automated online Spanish translation of the document. Twenty sentences from the Google translation and 10 sentences from the professional translation were then randomly selected and scored on 4 domains of accuracy by a blinded native Spanish speaker. These domains were: fluency (accuracy of grammar); adequacy (how much information from the original was preserved); meaning (whether the translation meant exactly what the original meant); and, if there was an erroneous translation, the severity of the error, ranging from harmless to dangerous. Each domain was assessed on a 5-point Likert scale, with 5 the best score and 1 the worst. In addition, the rater was asked to state a blinded preference between the 2 translations (one the Google translation, the other the professional one) for 10 of the sentences.
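For illustration only, the sketch below mirrors the sampling and blinding steps described above; the sentence pools, pairing, and scoring record structure are placeholders rather than the actual study materials or rating instrument.

```python
# Minimal sketch of the sampling and blinded-pairing steps; all data are placeholders.
import random

random.seed(0)  # reproducibility of this illustration only

# Hypothetical pools of translated sentences (stand-ins for the pamphlet text).
google_sentences = [f"google_sentence_{i}" for i in range(50)]
professional_sentences = [f"professional_sentence_{i}" for i in range(50)]

# Randomly select 20 Google-translated and 10 professionally translated sentences.
google_sample = random.sample(google_sentences, 20)
professional_sample = random.sample(professional_sentences, 10)

# Domains scored on a 5-point Likert scale (5 = best, 1 = worst).
DOMAINS = ("fluency", "adequacy", "meaning", "error_severity")

def blank_score_sheet(sentences):
    """One empty scoring record per sentence, to be filled in by the rater."""
    return [{"sentence": s, **{d: None for d in DOMAINS}} for s in sentences]

google_sheet = blank_score_sheet(google_sample)
professional_sheet = blank_score_sheet(professional_sample)

# For the preference task, present 10 pairs in random order so the rater
# cannot tell which translation is which (here the pairing itself is mock).
blinded_pairs = []
for g, p in zip(google_sample[:10], professional_sample[:10]):
    options = [("google", g), ("professional", p)]
    random.shuffle(options)  # hide which translation is which
    blinded_pairs.append(options)

print(len(google_sheet), len(professional_sheet), len(blinded_pairs))
```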

Results:

Compared with the professional translation, the Google translation sentences had lower fluency (mean score 3.6 vs. 5.0, P = 0.003) but comparable adequacy (4.7 vs. 4.7, P = 0.82) and meaning (3.5 vs. 4.1, P = 0.35). Of the Google translation sentences, 80% were either perfect or had errors of low severity that were unlikely to affect patient care. One Google translation sentence was considered so nonsensical as to be potentially dangerous. In 8 of 10 comparisons, the professional translation was preferred over the Google translation; in the other 2, no preference was noted.
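As a worked illustration of this type of comparison only (the abstract does not state which statistical test or software was used), the sketch below assumes a two-sided Mann-Whitney U test on made-up Likert fluency scores; the score lists are invented and do not reproduce the reported results.

```python
# Illustrative comparison of Likert scores between two translation sources.
# The test choice (Mann-Whitney U) and all scores below are assumptions, not study data.
from statistics import mean
from scipy.stats import mannwhitneyu

# Hypothetical fluency scores on a 5-point Likert scale (5 = best).
google_fluency = [4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4]
professional_fluency = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

stat, p_value = mannwhitneyu(google_fluency, professional_fluency,
                             alternative="two-sided")
print(f"Google mean: {mean(google_fluency):.1f}, "
      f"professional mean: {mean(professional_fluency):.1f}, P = {p_value:.3f}")
```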

Conclusions:

In this preliminary study of the accuracy of Google Translate™, translations into Spanish had lower fluency but comparable adequacy and meaning compared with professional translations. However, at least 1 potentially dangerous error resulted from a nonsensical translation, and the blinded rater overwhelmingly preferred the professional translations. Further work is required to investigate the impact of impaired fluency on patient comprehension of information translated in this way.

Author Disclosure:

R. Khanna, none; M. Eck, none; C. Koenig, none; L. Karliner, none; M. Fang, none.

To cite this abstract:

Khanna R, Eck M, Koenig C, Karliner L, Fang M. Accuracy of Google Translate™ for Medical Educational Material. Abstract published at Hospital Medicine 2010, April 8-11, Washington, D.C. Abstract 76. Journal of Hospital Medicine. 2010; 5 (suppl 1). https://www.shmabstracts.com/abstract/accuracy-of-google-translate-for-medical-educational-material/. Accessed September 18, 2019.
