From one brain scan, more information for medical artificial intelligence
07 October 2019
MIT researchers have developed a system that gleans far more labelled training data from unlabelled data, which could help machine-learning models better detect structural patterns in brain scans associated with neurological diseases. The system learns structural and appearance variations in unlabelled scans, and uses that information to shape and mould one labelled scan into thousands of new, distinct labelled scans. Image courtesy of the researchers.
System helps machine-learning models glean training information for diagnosing and treating brain conditions.
MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, including those that can analyse medical scans to help diagnose and treat brain conditions.
An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer’s disease and multiple sclerosis.
Hand-labelled by neurological experts
But collecting the training data is laborious: all anatomical structures in each scan must be separately outlined or hand-labelled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place.
In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labelled scan, along with unlabelled scans, to automatically synthesise a massive dataset of distinct training examples.
The dataset can be used to better train machine-learning models to find anatomical structures in new scans; the more training data available, the more accurate those predictions become.
The crux of the work is automatically generating data for the ‘image segmentation’ process, which partitions an image into regions of pixels that are more meaningful and easier to analyse.
To do so, the system uses a convolutional neural network (CNN), a machine-learning model that’s become a powerhouse for image-processing tasks. The network analyses a lot of unlabelled scans from different patients and different equipment to ‘learn’ anatomical, brightness, and contrast variations.
Then, it applies a random combination of those learned variations to a single labelled scan to synthesise new scans that are both realistic and accurately labelled. These newly synthesised scans are then fed into a different CNN that learns how to segment new images.
“We’re hoping this will make image segmentation more accessible in realistic situations where you don’t have a lot of training data,” says first author Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL).
‘Mimic the variations in unlabelled scans’
“In our approach, you can learn to mimic the variations in unlabelled scans to intelligently synthesise a large dataset to train your network.”
There is interest in using the system, for instance, to help train predictive-analytics models at Massachusetts General Hospital, where only one or two labelled scans may exist for particularly uncommon brain conditions in child patients, Zhao says.
Joining Zhao on the paper are Guha Balakrishnan, a postdoc in EECS and CSAIL; EECS professors Fredo Durand and John Guttag; and senior author Adrian Dalca, who is also a faculty member in radiology at Harvard Medical School.
The ‘Magic’ behind the system
Although now applied to medical imaging, the system actually started as a means to synthesise training data for a smartphone app that could identify and retrieve information about cards from the popular collectable card game, ‘Magic: The Gathering’.
Released in the early 1990s, ‘Magic’ has more than 20,000 unique cards — with more released every few months — that players can use to build custom playing decks.
Zhao, an avid ‘Magic’ player, wanted to develop a CNN-powered app that took a photo of any card with a smartphone camera and automatically pulled information such as price and rating from online card databases.
“When I was picking out cards from a game store, I got tired of entering all their names into my phone and looking up ratings and combos,” says Zhao. “Wouldn’t it be awesome if I could scan them with my phone and pull up that information?”
But she realised that’s a very tough computer-vision training task. “You’d need many photos of all 20,000 cards, under all different lighting conditions and angles. No one is going to collect that dataset,” says Zhao.
Instead, Zhao trained a CNN on a smaller dataset of about 200 cards, with 10 distinct photos of each card, to learn how to warp a card into various positions.
It computed different lighting, angles, and reflections (for when cards are placed in plastic sleeves) to synthesise realistic warped versions of any card in the dataset.
It was an exciting passion project, Zhao says: “But we realised this approach was really well suited for medical images, because this type of warping fits really well with MRIs.”
Magnetic resonance images (MRIs) are composed of three-dimensional pixels, called voxels. When segmenting MRIs, experts separate and label voxel regions based on the anatomical structure containing them.
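In code, that representation is straightforward. The sketch below is illustrative, not the researchers' code: an MRI is stored as a 3D intensity array, and its segmentation as an integer array of the same shape. The array size and the label value 17 (the left hippocampus in the FreeSurfer convention) are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical MRI volume: a depth x height x width grid of voxel intensities.
scan = np.random.rand(64, 64, 64).astype(np.float32)

# Segmentation label map of the same shape: each voxel holds the integer ID
# of the anatomical structure containing it (0 = background).
labels = np.zeros(scan.shape, dtype=np.int32)

# A binary mask for one structure, e.g. the left hippocampus (ID 17 in the
# FreeSurfer convention, used here only as an illustrative assumption).
hippocampus = labels == 17
print("hippocampus volume fraction:", hippocampus.sum() / labels.size)
```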
The diversity of scans, caused by variations in individual brains and equipment used, poses a challenge to using machine learning to automate this process.
Some existing methods can synthesise training examples from labelled scans using ‘data augmentation’, which warps labelled voxels into different positions.
But these methods require experts to hand-write various augmentation guidelines, and some synthesised scans look nothing like a realistic human brain, which may be detrimental to the learning process.
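As a point of contrast, a typical hand-written augmentation rule might look like the following sketch, a hypothetical example rather than code from any of the compared methods: a small random rotation, with a manually chosen angle range, applied identically to a scan and its label map.

```python
import numpy as np
from scipy.ndimage import rotate

def handwritten_augment(scan, labels, max_angle=10.0, rng=None):
    """Apply one manually specified augmentation: a small random rotation."""
    rng = rng if rng is not None else np.random.default_rng(0)
    angle = rng.uniform(-max_angle, max_angle)
    # Linear interpolation for intensities; nearest-neighbour (order=0) for
    # labels so class indices are never blended together.
    scan_aug = rotate(scan, angle, axes=(0, 1), reshape=False, order=1)
    labels_aug = rotate(labels, angle, axes=(0, 1), reshape=False, order=0)
    return scan_aug, labels_aug
```

The angle range here (plus or minus 10 degrees) is exactly the kind of expert-chosen guideline the article refers to, and nothing guarantees the result resembles a real brain.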
Instead, the researchers’ system automatically learns how to synthesise realistic scans. The researchers trained their system on 100 unlabelled scans from real patients to compute spatial transformations — anatomical correspondences from scan to scan.
How voxels move from one scan to another
This generated a ‘flow field’ for each unlabelled scan, modelling how voxels move from one scan to another. Simultaneously, the system computed intensity transformations, which capture appearance variations caused by image contrast, noise, and other factors.
In generating a new scan, the system applies a random flow field to the original labelled scan, which shifts around voxels until it structurally matches a real, unlabelled scan.
Then, it overlays a random intensity transformation. Finally, the system maps the labels to the new structures, by following how the voxels moved in the flow field. In the end, the synthesised scans closely resemble the real, unlabelled scans — but with accurate labels.
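The mechanics of that synthesis step can be sketched in a few lines of Python with NumPy and SciPy. In this illustration the learned flow field and intensity transformation are replaced by smoothed random noise so the example stays self-contained; in the actual system, both come from networks trained on the unlabelled scans.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthesise(scan, labels, rng=None):
    """Warp one labelled scan with a flow field, then overlay an intensity change."""
    rng = rng if rng is not None else np.random.default_rng(0)
    shape = scan.shape
    grid = np.indices(shape).astype(np.float32)

    # Stand-in for the learned flow field: smoothed random voxel
    # displacements, one (dz, dy, dx) vector per voxel.
    flow = gaussian_filter(rng.normal(0.0, 2.0, (3, *shape)),
                           sigma=(0, 4, 4, 4))
    coords = grid + flow

    # Warp intensities with linear interpolation; warp labels through the
    # same flow with nearest-neighbour (order=0) so every label moves
    # exactly as its voxel does and class indices stay discrete.
    new_scan = map_coordinates(scan, coords, order=1)
    new_labels = map_coordinates(labels, coords, order=0)

    # Stand-in for the learned intensity transformation: a smooth
    # multiplicative brightness field overlaid on the warped scan.
    gain = 1.0 + gaussian_filter(rng.normal(0.0, 0.1, shape), sigma=8)
    return new_scan * gain, new_labels
```

Mapping the labels through the same flow field as the voxels is the design choice that keeps the synthesised scans accurately labelled.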
To test their automated segmentation accuracy, the researchers used Dice scores, which measure how well one 3D shape fits over another, on a scale of 0 to 1.
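On two binary masks, the Dice score is twice the volume of their overlap divided by the sum of their volumes. A minimal implementation (variable names are illustrative, not from the released code) might look like:

```python
import numpy as np

def dice_score(pred_mask, true_mask):
    """Dice overlap of two binary 3D masks: 1.0 = perfect match, 0.0 = none."""
    pred = np.asarray(pred_mask, dtype=bool)
    truth = np.asarray(true_mask, dtype=bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())
```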
They compared their system to traditional segmentation methods — manual and automated — on 30 different brain structures across 100 held-out test scans.
Large structures were comparably accurate among all the methods. But the researchers’ system outperformed all other approaches on smaller structures, such as the hippocampus, which occupies only about 0.6 per cent of a brain, by volume.
“That shows that our method improves over other methods, especially as you get into the smaller structures, which can be very important in understanding disease,” says Zhao. “And we did that while only needing a single hand-labelled scan.”
In a nod to the work’s ‘Magic’ roots, the code is publicly available on GitHub under the name of one of the game’s cards, ‘Brainstorm’.