Several new studies have shown that computers
can outperform doctors in cancer screenings and disease diagnoses. What does
that mean for newly trained radiologists and pathologists?
A young Johns Hopkins University fellow recently asked that question while chatting with Elliot Fishman, MD, about artificial intelligence (AI). The two men were at opposite ends of the career spectrum: Fishman has been at Johns Hopkins Medicine since 1980 and a professor of radiology and oncology there since 1991; the fellow was preparing for his first job as a radiologist.
Fishman laughs when he tells the story, but he understands the concern. Over the past few years, many AI proponents and medical professionals have branded radiology and pathology as dinosaur professions, doomed to extinction. In 2016, a New England Journal of Medicine article predicted that “machine learning will displace much of the work of radiologists and anatomical pathologists,” adding that “it will soon exceed human accuracy.” That same year, Geoffrey Hinton, PhD, a professor emeritus at the University of Toronto who also designs machine-learning algorithms for Google (and who received the Association for Computing Machinery’s A.M. Turing Award, often called the Nobel Prize of computing, in 2019), declared, “We should stop training radiologists now.”
The reason for the predictions? AI’s tantalizing power to identify patterns and anomalies and to examine “pathologies that look certain ways,” says Fishman, who is among the enthusiasts: He’s studying the use of AI for early detection of pancreatic cancer.
“The hope is that if we could
pick up early tumors that are missed, we would have better outcomes,” he says.
An array of studies has offered glimpses of AI’s enormous potential. In a study published in Nature Medicine in May 2019, a Google algorithm outperformed six radiologists in determining whether patients had lung cancer. The algorithm, which was developed using 42,000 patient scans from a National Institutes of Health clinical trial, detected 5% more cancers than its human counterparts and reduced false positives by 11%. False positives are a particular problem with lung cancer: A study in JAMA Internal Medicine of 2,100 patients found a false positive rate of 97.5%.
Furthermore, AI performed comparably to breast screening radiologists in a study in the March 2019 Journal of the National Cancer Institute. At Stanford University, computer scientists developed an algorithm for diagnosing skin cancer, using a database of nearly 130,000 skin disease images. In diagnostic tests, the algorithm’s success rate was almost identical to that of 21 dermatologists, according to a study published in Nature in 2017. In another skin cancer study, AI surpassed the performance of 58 international dermatologists. The algorithm not only missed fewer melanomas but was also less likely to misdiagnose benign moles as malignant, the European Society for Medical Oncology found.