
DKI dataset with only one phase-encoding direction #323

Open
abhishekpatil32 opened this issue Apr 15, 2024 · 4 comments
abhishekpatil32 commented Apr 15, 2024

Hello all,

Thank you for introducing me to PyDesigner.
I am new to DKI preprocessing with MRtrix/FSL and PyDesigner.

Specifically, my collected data consists of only three files: *.nii, *.bvec, and *.bval.
PS: I do not have any reverse phase-encoded (PA or AP) image files.

I followed the video tutorial to perform my analysis, but I am encountering some warnings and errors with my dataset. Here's the command I used:

pydesigner --standard --verbose -o /mnt/hgfs/try/DKI/HC1/try_HC1/ /mnt/hgfs/try/DKI/HC1/try_HC1/10670802_SE19_DKI_64dir_DKI_64dir_20160603170653_19.nii

I get a few warnings, and a single dataset takes around 5-6 hours to complete.
Is there a way to resolve this? Any specific recommendations?

Regards,
Abhishek

@gonzoBlackMamba (Collaborator) commented:

Hi Abhishek,

The longest portion of the pipeline is the eddy call, which does the motion correction and outlier detection/replacement. That is not an unreasonable time frame for one subject to complete if each b-value has 64 directions. Does the program fail completely, or do you get dwi_preprocessed.nii/.bvec/.bval files at the end?
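A quick way to check whether the run produced those outputs (a sketch only: the output directory is the one from the pydesigner command above, and the filenames are the ones just mentioned):

```shell
# Check whether PyDesigner wrote the preprocessed DWI files.
# Adjust outdir to the -o directory you passed to pydesigner.
outdir=/mnt/hgfs/try/DKI/HC1/try_HC1
missing=0
for f in dwi_preprocessed.nii dwi_preprocessed.bvec dwi_preprocessed.bval; do
    if [ -e "$outdir/$f" ]; then
        echo "found:   $f"
    else
        echo "missing: $f"
        missing=$((missing + 1))
    fi
done
echo "$missing file(s) missing"
```

If any file is missing, the run likely failed partway through, and the pydesigner log should show where.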

-Hunter

abhishekpatil32 commented Apr 19, 2024

Hello @gonzoBlackMamba,

Thank you for your reply. This is an amazing tool for analyzing DKI datasets.

I think the problem stemmed from limited RAM and the lack of a GPU. Initially, I ran all the packages on VMWare, but after migrating them, including Pydesigner, to the server, processing has improved significantly.

I want to analyse the data further to obtain structural connectivity matrices. Can you tell me how this can be done?

@gonzoBlackMamba (Collaborator) commented:

Hi @abhishekpatil32,

I'm glad you got the server to do the processing; that will save lots of time. For obtaining structural connectivity, you can use MRtrix3 or DSI Studio. I am no expert on the proper way to generate a structural connectivity matrix. However, PyDesigner outputs a file called dki_odf.nii (for use with MRtrix3) and a dki_dsistudio.fib file that works with DSI Studio. I would recommend MRtrix3, since it is well maintained by a development team. You would want to start at the Tissue Segmentation section of this page: https://mrtrix.readthedocs.io/en/latest/quantitative_structural_connectivity/act.html#tissue-segmentation

Basically, you will need to:

  1. Get the subject's T1 into DWI space by registering it to the DKI b0 image. (I do this with FSL's FLIRT, something like flirt -dof 6 -in <T1_image> -ref <B0_image> -out <T1_to_B0>.)
  2. Using MRtrix3, run the tissue segmentation; note that 5ttgen takes both an input and an output image: 5ttgen fsl <T1_to_B0 image> <5tt_output> -nocrop
  3. Then, with MRtrix3 again, run the anatomically constrained tractography (ACT); tckgen needs the ODF source, an output track file, and an image argument for -seed_image:
    tckgen dki_odf.nii <output_tracks.tck> -algorithm SD_STREAM -act <T12B0_5tt image> -seed_image <seed or brain mask> -seeds <# seed points; I use 250000 to check, then rerun with more like 10^6>
  4. Afterwards, you would need to follow the MRtrix3 documentation on SIFT and then on building a connectome. I have not gotten that far yet, but the prior steps should get you close.
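The steps above can be sketched as a single script. This is an illustrative sketch only: every filename below is a hypothetical placeholder I chose, and it assumes FSL and MRtrix3 are installed and that PyDesigner's dki_odf.nii is in the working directory.

```shell
#!/bin/sh
# Sketch of the T1-registration -> 5tt-segmentation -> ACT-tractography steps.
# All subject filenames are placeholders; adjust to your own data.

# 1. Rigid-body (6 DOF) registration of the T1 to the DKI b0 with FSL FLIRT
flirt -dof 6 -in sub01_T1.nii -ref sub01_b0.nii -out sub01_T1_in_DWI.nii.gz

# 2. Five-tissue-type segmentation with MRtrix3 (input image, then output image)
5ttgen fsl sub01_T1_in_DWI.nii.gz sub01_5tt.mif -nocrop

# 3. Anatomically constrained deterministic tractography from the DKI ODFs;
#    start with a small seed count to check, then rerun with ~10^6 seeds
tckgen dki_odf.nii sub01_tracks.tck \
    -algorithm SD_STREAM \
    -act sub01_5tt.mif \
    -seed_image sub01_brain_mask.nii \
    -seeds 250000

# 4. From here, follow the MRtrix3 docs on SIFT (tcksift2) and tck2connectome.
```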

I am happy to share a Jupyter Notebook with you that does these steps, but you will have to modify some of the paths and filenames to fit your setup. Let me know!

-Hunter

@abhishekpatil32 (Author) commented:

Hello @gonzoBlackMamba,

Thank you for your response and I apologize for not getting back to you sooner.

It took me a long time to preprocess the data, since I had a large number of datasets.
I have now preprocessed the data and am ready to perform the next stage of my analysis.

If you don't mind, it would be great if you could share the Jupyter Notebook. It would give me a clear idea of how to proceed with the data.
Thanking you in advance.

Regards,
Abhishek
