Brain-ID: Learning Contrast-agnostic Anatomical Representations for Brain Imaging
Peirong Liu*, Oula Puonti, Xiaoling Hu, Daniel C. Alexander, Juan E. Iglesias
Abstract
"Recent learning-based approaches have made astonishing advances in calibrated medical imaging like computerized tomography (CT). Yet, they struggle to generalize in uncalibrated modalities – notably magnetic resonance (MR) imaging, where performance is highly sensitive to the differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. We introduce Brain-ID, an anatomical representation learning model for brain imaging. With the proposed “mild-to-severe” intra-subject generation, Brain-ID is robust to the subject-specific brain anatomy regardless of the appearance of acquired images. Trained entirely on synthetic inputs, Brain-ID readily adapts to various downstream tasks through one layer. We present new metrics to validate the intra/inter-subject robustness of Brain-ID features, and evaluate their performance on four downstream applications, covering contrast-independent (anatomy reconstruction, brain segmentation), and contrast-dependent (super-resolution, bias field estimation) tasks (showcase). Extensive experiments on six public datasets demonstrate that Brain-ID achieves state-of-the-art performance in all tasks on different MR contrasts and CT, and more importantly, preserves its performance on low-resolution and small datasets. Code is available at https://github.com/peirong26/Brain-ID."