HYBRID Publications (4): Computers draw a liver
In April this year, Xikai Tang, one of our fellows, published a paper on deep learning for liver segmentation. The algorithm he developed is a big step towards clinical implementation of deep learning methods for patients with liver tumours (link to publication below).
Xikai’s project focuses on personalized treatment planning for patients with liver tumours who receive internal radiation therapy. He aims to improve the efficacy of the treatment. His goal is to develop prototype software for dose prediction, which can assist physicians in achieving more precise treatment.
In this paper, an algorithm for liver segmentation was developed and tested. Liver segmentation is the ‘drawing’ of the liver in a CT image. A correct segmentation of the liver is necessary to calculate the radiation dose that the liver and the tumour receive. This way, physicians can make sure that the liver is not damaged during the treatment, while the tumour receives enough radiation to be destroyed.
Selective internal radiation therapy (SIRT) is a treatment for liver tumours or metastases. A radioactive compound is injected into a liver artery. It is a very precise treatment - most of the radioactive compound ends up in the tumour, where it destroys the tumour cells.
During the treatment, PET and CT/MR scans are used to calculate the amount of radioactivity that the liver receives. This way, the effect of the radioactive compound on the healthy liver is assessed. In order to calculate this correctly, the liver needs to be identified on the CT scan. This is where segmentation comes in.
Liver segmentation is done manually by experts in medical imaging. They use computer programmes to help them.
But there is a problem: experts never ‘draw’ the liver in exactly the same way. Using machine learning algorithms for liver segmentation could decrease this variability between experts. That would improve the calculation of the amount of radioactivity in the liver and the tumour, thereby improving care for patients.
Deep learning is a form of artificial intelligence in which an algorithm learns to recognize a feature from many examples of that feature. Deep learning is used in areas such as object recognition, object tracking, and image segmentation.
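To make the idea of image segmentation concrete, here is a minimal sketch (not the method from the paper): a segmentation model outputs, for every pixel, a probability that the pixel belongs to the liver, and thresholding those probabilities gives the binary ‘drawing’. The probability values below are made up for illustration; in practice they come from a trained neural network.

```python
import numpy as np

# Hypothetical per-pixel probabilities, as a segmentation model might produce
# (illustrative values only; a real model computes these from the CT image)
probabilities = np.array([[0.10, 0.80, 0.90],
                          [0.20, 0.95, 0.70],
                          [0.05, 0.30, 0.10]])

# Thresholding the probabilities turns them into a binary liver mask:
# 1 = pixel belongs to the liver, 0 = pixel does not
liver_mask = probabilities > 0.5
print(liver_mask.astype(int))
```

The mask printed here marks four pixels as ‘liver’; a real CT segmentation does the same thing, just for millions of voxels in three dimensions.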
Xikai’s deep learning algorithm was trained on over 100 images of liver tumours. It was also trained on images from patients who received SIRT, since these patients often have damaged livers due to earlier treatments, which makes it more difficult to ‘draw’ the liver. The algorithm was then used for liver segmentation on 40 images of SIRT patients.
Xikai compared three different segmentations for each image. First, manual segmentation by two experts. Second, the segmentation proposed by the algorithm. And finally, an ‘adjusted’ segmentation, which was the one proposed by the algorithm, corrected by the medical imaging experts.
He found that the ‘adjusted’ segmentation was closer to the segmentation proposed by the algorithm than to the manual segmentation by the experts. In other words, the experts needed only small corrections, which suggests that the algorithm helps them make a better segmentation.
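How ‘comparable’ two segmentations are is commonly quantified with an overlap score such as the Dice similarity coefficient, which is 1 when two masks are identical and 0 when they do not overlap at all. The sketch below illustrates the idea on toy masks; it is a generic example, not the exact evaluation from the paper.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Two toy ‘liver’ segmentations of the same 4x4 image,
# differing in a single pixel at the edge
expert = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]], dtype=bool)
algorithm = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 1],
                      [0, 1, 1, 0],
                      [0, 0, 0, 0]], dtype=bool)

print(round(dice(expert, algorithm), 3))  # → 0.923
```

A high Dice score between the adjusted and the algorithm’s segmentation, as in the paper’s finding, means the experts changed very little of what the algorithm proposed.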
The experts also scored the segmentations by the algorithm on a scale of ‘not useful’ to ‘ready for clinical use’. Forty percent of the segmentations were considered ready for clinical use, and another forty-eight percent needed minor corrections from the experts. Only twelve percent were considered not useful.
The differences between the adjusted segmentations of the two medical experts were much smaller than the differences between their manual segmentations. This paper therefore shows that machine learning can decrease the variability between experts in liver segmentation, which will in turn lead to more standardized treatment for patients.
Publication: Tang, X., Jafargholi Rangraz, E., Coudyzer, W. et al. Whole liver segmentation based on deep learning and manual adjustment for clinical use in SIRT. Eur J Nucl Med Mol Imaging (2020). https://doi.org/10.1007/s00259-020-04800-3
Images: PET/CT image from iStock, not related to the publication