Introduction
Working through the paper ‘Med3D: Transfer Learning for 3D Medical Image Analysis’, I am going to explore 3D medical image segmentation.
This paper addresses the following challenge: “It is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging.” The authors build an image dataset called 3DSeg-8 by aggregating several existing datasets. They then design Med3D, a heterogeneous 3D network, to pre-train models. As the paper shows, Med3D can speed up training convergence and improve accuracy.
As they state, “The motivation of our work is to train a high performance DCNN model with a relatively large 3D medical dataset, that can be used as the backbone pre-trained model to boost other tasks with insufficient training data.” Their work consists of three steps.
Step1:
They collected several 3D segmentation datasets and combined them into a new dataset, 3DSeg-8. They then normalized 3DSeg-8 in both spatial and intensity distributions.
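The intensity part of this preprocessing can be sketched as follows. This is a minimal, hypothetical illustration (not the actual Med3D code, which also resamples voxel spacing): a volume is treated here as a flat list of voxel intensities and rescaled to zero mean and unit variance.

```python
import statistics

def normalize_intensity(voxels):
    """Rescale voxel intensities to zero mean and unit variance.

    `voxels` is a flat list of floats standing in for a 3D volume;
    this is an illustrative sketch, not the paper's exact pipeline.
    """
    mean = statistics.fmean(voxels)
    std = statistics.pstdev(voxels)
    if std == 0:
        # Constant volume: map everything to zero to avoid division by zero.
        return [0.0 for _ in voxels]
    return [(v - mean) / std for v in voxels]

volume = [10.0, 20.0, 30.0, 40.0]
normed = normalize_intensity(volume)
```

After normalization, intensities from different scanners and protocols share a common scale, which is what lets the aggregated 3DSeg-8 datasets be trained together.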
Step2:
They trained a DCNN model, which they call Med3D. This network shares a single encoder across all datasets and has a separate decoder branch for each specific dataset.
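The shared-encoder / per-dataset-decoder idea can be sketched framework-free. All names below (`shared_encoder`, `med3d_forward`, the dataset keys) are my own stand-ins, and the toy arithmetic replaces the real 3D ResNet encoder and segmentation heads:

```python
def shared_encoder(x):
    # Stand-in for the shared 3D encoder: every dataset passes through
    # the SAME function (i.e., the same weights). Here it just doubles inputs.
    return [2 * v for v in x]

# One lightweight decoder head per dataset in the aggregated training set.
decoders = {
    "liver": lambda feats: sum(feats),  # dataset-specific branch
    "brain": lambda feats: max(feats),  # dataset-specific branch
}

def med3d_forward(x, dataset_name):
    """Route a sample through the shared encoder, then its dataset's decoder."""
    feats = shared_encoder(x)
    return decoders[dataset_name](feats)
```

During training, gradients from every dataset update the shared encoder, while each decoder only sees its own dataset; that is what makes the encoder a general-purpose 3D backbone.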
Step3:
They extracted features from the pre-trained Med3D model and transferred them to other medical tasks to boost network performance on those tasks.
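The transfer step can be sketched as copying only the encoder weights into a new task's model. In the real MedicalNet repo this is done with PyTorch state dicts; here plain dicts of lists stand in for parameter tensors, and all parameter names are hypothetical:

```python
def transfer_encoder(pretrained, new_model):
    """Copy pre-trained encoder weights into a new model.

    Only parameters whose names start with "encoder." are copied;
    the new task-specific head keeps its own (fresh) initialization.
    """
    for name, weights in pretrained.items():
        if name.startswith("encoder.") and name in new_model:
            new_model[name] = list(weights)
    return new_model

# Hypothetical pre-trained Med3D parameters (encoder + an old decoder head).
pretrained = {"encoder.conv1": [0.1, 0.2], "decoder.liver": [0.9]}

# New downstream model: same encoder layout, a brand-new task head.
new_task_model = {"encoder.conv1": [0.0, 0.0], "decoder.lung": [0.5]}
new_task_model = transfer_encoder(pretrained, new_task_model)
```

Only the backbone is reused; the downstream head is trained from scratch on the small target dataset, which is where the convergence and accuracy gains reported in the paper come from.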
Detailed Description
References
https://arxiv.org/abs/1904.00625
https://github.com/Tencent/MedicalNet
If you have any questions, please contact tianluwu@gmail.com.