Deep learning algorithm to automate brain MRI contrast identification process

ricardopizarro/MRI-contrast

Deep learning to identify the brain MRI contrast

Background

We developed a deep learning (DL) algorithm with neural networks to automatically infer the contrast of an MRI scan based on the image intensities of multiple sagittal slices. The DL algorithm consisted of an initial convolutional neural network (CNN), which inferred the contrast on a single slice of the MRI, and a subsequent dense neural network (DNN), which relied on the CNN output to infer a contrast for the entire MRI volume.

We made the implementation openly available here on GitHub. We developed the algorithm in Python with Keras, a high-level software package that provides extensive flexibility to easily design and implement DL algorithms, using a Theano backend. We created a Python virtual environment named deep_env with the requirements found here. We empirically selected all the parameters that defined the network architecture, including the number and type of layers, the number of nodes per layer, and C, the number of final possible contrasts.

Implementation

We developed two sequential neural networks to identify the brain MRI contrast. The first network was a convolutional neural network that inferred the modality on a sagittal slice. The second network combined the slice-wise results generated by the first network to infer the modality on the entire volume.

We have uploaded Python scripts that can be used to reproduce our results. Unfortunately, the data cannot be released to the public, but we expect our results to be reproducible. The Python scripts can be used to (1) define the neural network (NN) architecture, (2) train the NN, and (3) test the DNN.

(1) Define the neural network

The modality.save_NNarch_toJson.py script was used to define the architectures of the two networks and save them as .json string files to later be loaded during the training and testing phases. The architecture can be modified to meet different requirements, including data size, number of classes (MRI contrasts) to model, and architecture parameters.

(deep_env) $ python modality.save_NNarch_toJson.py

Upon execution, this script saves .json files that specify the neural network architectures.
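
The real architectures are defined in modality.save_NNarch_toJson.py; as a minimal sketch of the save-to-JSON step, the following defines a small Keras CNN and serializes it. The layer sizes, the 256×256 slice shape, and C = 4 contrast classes are assumptions for illustration, not the empirically selected values from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical dimensions: the real slice size and the number of
# contrasts C were selected empirically by the authors.
C = 4
model = keras.Sequential([
    layers.Input(shape=(256, 256, 1)),        # one sagittal slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(C, activation="softmax"),    # one probability per contrast
])

# Save the architecture (without weights) as a .json string file,
# to be reloaded later with keras.models.model_from_json.
with open("cnn_architecture.json", "w") as f:
    f.write(model.to_json())
```

Serializing only the architecture keeps the .json file small; the trained weights are stored separately during training.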

(2) Train the neural network

The modality.train_CNN.py script was used to train the CNN and the modality.train_DNN.py script was used to train the DNN. The model architectures were loaded from the .json string files saved using modality.save_NNarch_toJson.py. The dataset was divided into three sets: training, validation, and testing. The training set was used to estimate the model parameters. The validation set was used to estimate performance after each epoch completed. The testing set was used in a different script to test the model after training completed.
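
A three-way split like the one described above can be sketched as follows; the 70/15/15 proportions and the use of file paths as the unit of splitting are assumptions, since the actual split sizes are not stated here.

```python
import numpy as np

def split_dataset(volume_paths, seed=0):
    """Shuffle and split MRI volume paths into training, validation,
    and testing sets (hypothetical 70/15/15 proportions)."""
    rng = np.random.default_rng(seed)
    paths = list(volume_paths)
    rng.shuffle(paths)
    n = len(paths)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```

Splitting at the volume level (rather than the slice level) keeps all slices of one scan in the same set, which avoids leaking a volume's appearance between training and testing.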

The training dataset was too large to load at once, so a file_generator was used to generate data as needed by Keras' fit_generator. The training parameters, such as the number of epochs, the number of steps per epoch, and the number of samples per step, were determined empirically and can easily be changed by the user. The file_generator populates the numpy arrays needed to train the model from randomly selected MRI volumes within the set. During training, the weights for the model parameters are saved if performance (accuracy and loss) improves.
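
The generator pattern above can be sketched as a Python generator yielding batches indefinitely, as fit_generator expects. The loader and labeling helpers below (load_random_slice, contrast_label), the batch shapes, and the 4-class count are placeholders, since the real data and loading code are not public.

```python
import numpy as np

def load_random_slice(path, shape=(256, 256, 1)):
    # Placeholder loader: the real code reads a sagittal slice from an
    # MRI volume on disk; random data stands in for the private dataset.
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.random(shape, dtype=np.float32)

def contrast_label(path):
    # Placeholder: the real label is the scan's known contrast class.
    return abs(hash(path)) % 4

def file_generator(volume_paths, batch_size=8, n_classes=4):
    """Yield (X, y) batches forever, as Keras' fit_generator requires."""
    while True:
        X = np.zeros((batch_size, 256, 256, 1), dtype=np.float32)
        y = np.zeros((batch_size, n_classes), dtype=np.float32)
        for i in range(batch_size):
            # Randomly select an MRI volume within the set.
            path = volume_paths[np.random.randint(len(volume_paths))]
            X[i] = load_random_slice(path)
            y[i, contrast_label(path)] = 1.0   # one-hot contrast label
        yield X, y
```

Because the generator loads only one batch at a time, peak memory stays constant regardless of dataset size.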

(deep_env) $ python modality.train_CNN.py
(deep_env) $ python modality.train_DNN.py

Upon completion, each script saves the final weights of the model parameters after the specified number of epochs.
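
The save-on-improvement behaviour described above is commonly handled with Keras' ModelCheckpoint callback; a sketch under that assumption follows (the filename and monitored metric are illustrative, not taken from the repository).

```python
from tensorflow import keras

# Hypothetical checkpoint: write weights to disk only when the monitored
# validation loss improves, mirroring the save-on-improvement behaviour
# described above.
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath="cnn_weights.best.h5",
    monitor="val_loss",
    save_best_only=True,
)
# Passed to training via callbacks=[checkpoint] in model.fit
# (or the legacy fit_generator).
```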

(3) Test the dense neural network

The modality.test_DNN.py script was used to test the entire DL algorithm. We used the output generated by the CNN to test the second network, the DNN. The model architecture was loaded from the .json string file saved using modality.save_NNarch_toJson.py. The dataset was previously divided into three sets: training, validation, and testing. The training and validation sets were used while training the algorithm. The testing set was kept separate from training and used in this script to test the model after training completed.

(deep_env) $ python modality.test_DNN.py

The MRI volumes in the testing set were iteratively loaded and tested. After iterating through the entire testing set, the average accuracy and a confusion matrix were computed as a way to characterize the performance of the algorithm.
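
As a minimal sketch of that evaluation step, accuracy and a confusion matrix can be computed from per-volume true and predicted class indices with NumPy; the function name and class count are illustrative.

```python
import numpy as np

def confusion_and_accuracy(y_true, y_pred, n_classes):
    """Confusion matrix (rows = true contrast, columns = predicted
    contrast) and average accuracy over the testing set."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    accuracy = np.trace(cm) / cm.sum()
    return cm, accuracy
```

The diagonal of the matrix counts correctly classified volumes, so accuracy is simply the trace divided by the total number of test volumes; off-diagonal cells show which contrasts are confused with which.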
