Speakers at a recent Radiological Society of North America course in Paris, France, highlighted a potential challenge for artificial intelligence (AI) in radiology.
A common message among experts speaking at the ‘Practical Applications in Artificial Intelligence’ course was that data quality could be an issue for systems that learn from human examples, and that radiologists may need to brush up on their annotation skills before AI can do its job.
Luke Oakden-Rayner, research associate at the Australian Institute for Machine Learning, noted that AI performance improves when the software is trained on more data: from 50,000 to a million examples, for instance.
The annotation of training data by doctors will be an important part of AI development, he pointed out. However, today AI algorithms are being trained on fixed data sets.
To allow AI to keep learning, he said, the limiting factors will be access to data and how willing and able radiologists are to build high-quality datasets through careful annotation.

Katherine Andriole, associate professor of radiology at Harvard Medical School, echoed this view.
Her team used its own annotation tool to evaluate three lung nodule AI systems. One algorithm matched its claimed accuracy, one came close, and the third fell short.
Andriole’s conclusion was that radiology teams may need to consider creating their own validation datasets so they can measure the accuracy of AI tools. This could involve at least two radiologists annotating between 300 and 3,000 cases per application, she said.
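To illustrate the kind of check such a locally built validation set enables, here is a minimal sketch of comparing an AI tool's per-case outputs against radiologist consensus labels. The case IDs, labels, and metrics shown are purely hypothetical, not drawn from any of the systems or datasets mentioned above.

```python
def evaluate(reference, predictions):
    """Compare AI predictions against a locally annotated validation set.

    reference   -- dict of case_id -> bool (nodule present, per consensus
                   of at least two annotating radiologists)
    predictions -- dict of case_id -> bool (AI tool output for that case)
    Returns (sensitivity, specificity, accuracy).
    """
    tp = fp = tn = fn = 0
    for case_id, truth in reference.items():
        pred = predictions[case_id]
        if truth and pred:
            tp += 1          # true positive: nodule found by both
        elif truth and not pred:
            fn += 1          # false negative: AI missed a real nodule
        elif not truth and pred:
            fp += 1          # false positive: AI flagged a clean case
        else:
            tn += 1          # true negative: both agree no nodule
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / len(reference)
    return sensitivity, specificity, accuracy


# Toy data: six annotated cases (a real validation set would hold
# hundreds to thousands of cases, as Andriole suggests)
reference = {"c1": True, "c2": True, "c3": False,
             "c4": False, "c5": True, "c6": False}
predictions = {"c1": True, "c2": False, "c3": False,
               "c4": True, "c5": True, "c6": False}

sens, spec, acc = evaluate(reference, predictions)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

Running the sketch on the toy data prints `sensitivity=0.67 specificity=0.67 accuracy=0.67`, which a team could then compare against a vendor's claimed figures.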
Claudio Silvestrin, head of the AI Centre of Excellence at Unilabs, acknowledged that data could be a stumbling block to the rapid uptake of AI in radiology.
“It is one of the current challenges to obtain enough well-annotated data for training and validation of AI algorithms,” he said.