
hcp data model training #2

Open
dgai91 opened this issue Sep 17, 2024 · 4 comments

dgai91 commented Sep 17, 2024

Hello, I applied for all the HCP (Human Connectome Project) young adult data and used the 17-network ROI (regions of interest) to extract features from the fMRI data. I also hope to use your model to differentiate between working memory tasks with different loads and categories. However, the performance of the model I trained is poor. Could you provide the parameters and data used for training your model?


Dandy5721 (Owner) commented Sep 17, 2024

> Hello, I applied for all the HCP (Human Connectome Project) young adult data and used the 17-network ROI (regions of interest) to extract features from the fMRI data. I also hope to use your model to differentiate between working memory tasks with different loads and categories. However, the performance of the model I trained is poor. Could you provide the parameters and data used for training your model?

Hi, the main parameters are set as follows: the convolution kernel size k = 5, the scalar α = 0, ε = 10^-4, and the kernel width σ is initialized to 0.1; for SGD, the learning rate is 0.005, weight decay is 10^-5, and momentum is 0.9. I noticed that you use only 17 ROIs (we used 268), which is quite small and something we haven't tried before. You could try adjusting the window size, convolution kernel, and number of network layers, or test our other method, "Uncovering shape signatures of resting-state functional connectivity by geometric deep learning on Riemannian manifold", which has fewer parameters.

BTW, we have provided some simulated data. For the real training data, as mentioned, we used the Shen functional atlas [28], so you can process the data following this atlas. Otherwise, please contact the corresponding author ([email protected]) by email. Thank you for your understanding.
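Since the reply above suggests tuning the window size, here is a minimal sketch (my own illustration, not code from this repo) of building sliding-window correlation FC matrices from an ROI time series with NumPy; the `window` and `stride` values are placeholders to be tuned:

```python
import numpy as np

def sliding_window_fc(ts, window=30, stride=1):
    """Correlation-based FC matrices over a sliding window.

    ts: (T, R) array, T time points x R ROIs.
    Returns an array of shape (n_windows, R, R).
    """
    T, R = ts.shape
    mats = []
    for start in range(0, T - window + 1, stride):
        seg = ts[start:start + window]        # (window, R) segment
        mats.append(np.corrcoef(seg.T))       # (R, R) correlation matrix
    return np.stack(mats)

# toy example: 100 time points, 17 ROIs
rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 17))
fc = sliding_window_fc(ts, window=30, stride=5)
print(fc.shape)  # (15, 17, 17)
```

Each output matrix is symmetric with a unit diagonal, so downstream models that expect SPD-like inputs (as in the Riemannian-manifold method mentioned above) can consume them after the usual regularization.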

dgai91 (Author) commented Sep 18, 2024

Hi Dr. Dan! Thanks for your response.
I still want to understand the process of generating the HCP time series in more detail. I haven't run ICA-AROMA yet, but I have already constructed the masker according to the description in your paper:

```python
import nilearn as nil
import nilearn.image  # makes nil.image available
from nilearn.maskers import NiftiLabelsMasker

atlas_path = roi_label_root + 'shen_1mm_268_parcellation_MNI152NLin2009cAsym.nii.gz'
atlas_image = nil.image.load_img(atlas_path)
# nearest-neighbor interpolation keeps the parcel labels intact when resampling
atlas_image = nil.image.resample_to_img(atlas_image, target_atlas_image,
                                        interpolation='nearest')
atlas_masker = NiftiLabelsMasker(atlas_image, low_pass=0.08, high_pass=0.009,
                                 standardize=True, t_r=0.72)
```

In addition, the data I am using is 'HCP_xxxxxx_tfMRI_WM_LR.nii.gz', and each file has 405 time points. Before processing, I remove the first five dummy scans. However, I noticed that the fMRI data in your paper has only 393 time points. My code for constructing the time series is as follows:

```python
mri_file = 'HCP_xxxxxx_tfMRI_WM_LR.nii.gz'
scan_image = nil.image.load_img(mri_root + mri_file)
# extract ROI time series, then drop the first five dummy scans
roi_timeseries = atlas_masker.fit_transform(scan_image)[5:]
```

My question is: apart from ICA-AROMA, is the above processing consistent with the processing in your paper? And how were the 393 time points obtained?
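For bookkeeping, the frame counts quoted in this thread don't line up, which is exactly the discrepancy being asked about; a trivial check (numbers taken from the messages above, nothing assumed beyond them):

```python
# frame counts quoted in this thread (tfMRI_WM_LR run)
n_frames_raw = 405          # time points per file
n_dummy = 5                 # dummy scans dropped in the snippet above
n_paper = 393               # time points reported in the paper

remaining = n_frames_raw - n_dummy
print(remaining)            # 400
print(remaining - n_paper)  # 7 frames still unaccounted for
```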

Dandy5721 (Owner) commented:

Hi, for the data question, can you please contact the corresponding author ([email protected]), Prof. Wu will give you some detailed responses. Thanks.
