Merge pull request #234 from GhazalehManj/old_containers
Old containers
GhazalehManj authored Sep 18, 2024
2 parents 92bb7fb + c68c9db commit 959a3ac
Showing 7 changed files with 28 additions and 292 deletions.
2 changes: 1 addition & 1 deletion Quick_start_workflow_automation.md
@@ -32,7 +32,7 @@ git pull #in case you need to pull new code
 source ./stage_2.sh
 ```
 
-## stage 3 (xcp_d, xcp_noGSR, qsirecon2, parcellation_ciftify):
+## stage 3 (xcp_d, xcp_noGSR, qsirecon2):
 
 ```sh
 # note step one is to make sure you are on one of the login nodes
23 changes: 0 additions & 23 deletions README.md
@@ -27,7 +27,6 @@ ${BASEDIR}
 │   │   ├── freesurfer_long        # freesurfer longitudinal
 │   │   ├── qsiprep                # full qsiprep derivatives
 │   │   ├── ciftify                # ciftify derivatives
-│   │   ├── parcellated_ciftify    # parcellation-ciftify derivatives
 │   │   ├── ENIGMA_extract         # extracted cortical and subcortical csv files
 │   │   ├── dtifit                 #dtifit
 │   │   ├── enigmaDTI              #enigmadti
@@ -41,7 +40,6 @@ ${BASEDIR}
 │   ├── fmriprep                   # contains only qc images and metadata
 │   ├── qsiprep                    # contains only qc images and metadata
 │   ├── ciftify                    # contains only qc images and metadata
-│   ├── parcellated_ciftify        # contains only qc images and metadata
 │   ├── ENIGMA_extract             # extracted cortical and subcortical csv files
 │   ├── enigmaDTI                  # enigmaDTI
 │   ├── xcp-d                      # contains xcp results with GSR
@@ -91,7 +89,6 @@ Currently this repo is going to be set up for running things on SciNet Niagara c
 |stage 3 | 03a | [Run xcp-d](#Running-xcp-d) | 10 hours on slurm |
 |^ | 03b | [Run xcp no GSR](#Running-xcp-noGSR) | 10 hours on slurm |
 |^ | 03c | [Run qsirecon step2](#Running-qsirecon-step2) | 1 hour of slurm |
-|^ | 03d | [Running the parcellation-ciftify step](#Running-the-parcellation-ciftify-step) | 20 mins on slurm |
 |stage 4 | 04a | [Run enigma-dti](#Running-enigma-dti) | 1 hours on slurm |
 |^ | 04b | [Check tsv files](#Check-tsv-files) | |
 |stage 5 | 05a | [Run extract and share to move to data to sharable folder](#Syncing-the-data-with-to-the-share-directory) | 30 min in terminal |
@@ -590,26 +587,6 @@ echo "number of array is: ${array_job_length}"
 sbatch --array=0-${array_job_length} ./code/03_qsirecon_step2_scinet.sh
 ```
 
-## Running the parcellation-ciftify step
-
-```sh
-## note step one is to make sure you are on one of the login nodes
-ssh nia-login07
-
-## go to the repo and pull new changes
-cd ${SCRATCH}/SCanD_project
-git pull
-
-## figuring out appropriate array-job size
-SUB_SIZE=10 # for func the sub size is moving to 1 participant because there are two runs and 8 tasks per run..
-N_SUBJECTS=$(( $( wc -l ./data/local/bids/participants.tsv | cut -f1 -d' ' ) - 1 ))
-array_job_length=$(echo "$N_SUBJECTS/${SUB_SIZE}" | bc)
-echo "number of array is: ${array_job_length}"
-
-## submit the array job to the queue
-sbatch --array=0-${array_job_length} ./code/03_parcellate_ciftify_scinet.sh
-```
-
 ## Running enigma-dti
 
 ```sh
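Note: the deleted README block above sizes the sbatch array from participants.tsv, and the surviving stage-3 steps (xcp-d, xcp_noGSR, qsirecon step2) use the same shell arithmetic with `wc` and `bc`. A minimal Python sketch of that sizing logic, with an illustrative path and chunk size:

```python
def array_job_length(participants_tsv: str, sub_size: int) -> int:
    """Mirror the shell arithmetic: (line count - header) / SUB_SIZE, floored."""
    with open(participants_tsv) as f:
        n_subjects = sum(1 for _ in f) - 1  # subtract the TSV header row, like the "- 1" in the shell
    return n_subjects // sub_size  # bc's "/" truncates toward zero, so floor here too

# Example: 25 participants with SUB_SIZE=10 gives 2, so sbatch --array=0-2
# launches three array tasks of up to 10 subjects each, covering everyone.
```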
221 changes: 0 additions & 221 deletions code/03_parcellate_ciftify_scinet.sh

This file was deleted.

4 changes: 0 additions & 4 deletions code/05_extract_to_share.sh
@@ -82,10 +82,6 @@ rsync -a ${PROJECT_DIR}/data/local/ciftify/qc_recon_all ${PROJECT_DIR}/data/sha
 echo "copying over the ENIGMA extracted cortical and subcortical files"
 rsync -a ${PROJECT_DIR}/data/local/ENIGMA_extract ${PROJECT_DIR}/data/share/
 
-## copy over the parcellated_ciftify files
-echo "copying over the parcellated_ciftify files"
-rsync -a ${PROJECT_DIR}/data/local/ciftify/parcellated_ciftify ${PROJECT_DIR}/data/share/
-
 
 ## copy over the enigmaDTI files
 echo "copying over the enigmaDTI files"
50 changes: 26 additions & 24 deletions code/gen_qsiprep_motion_metrics.py
@@ -1,33 +1,35 @@
-import pandas as pd
+import json
+import csv
 import os
-import glob
 
-# Get the SCRATCH directory
-scratch_dir = os.getenv('SCRATCH')
+# Define the path to the JSON file and the output CSV file
+json_file_path = 'data/local/qsiprep/dwiqc.json'
+csv_file_path = 'data/local/qsiprep/qsiprep_metrics.csv'
 
-# Expand the path with the SCRATCH directory
-search_pattern = f"{scratch_dir}/SCanD_project/data/local/qsiprep/*/*/dwi/*desc-ImageQC_dwi.csv"
+# Create the directory for the CSV file if it doesn't exist
+os.makedirs(os.path.dirname(csv_file_path), exist_ok=True)
 
-# Get list of all relevant CSV files
-dwi_metrics_files = glob.glob(search_pattern, recursive=True)
+# Load JSON data from the file
+with open(json_file_path, 'r') as json_file:
+    data = json.load(json_file)
 
-# Function to read each CSV and add the filename (without .csv extension)
-def read_and_add_filename(filepath):
-    df = pd.read_csv(filepath)
-    df['filename'] = os.path.basename(filepath).replace('.csv', '')
-    return df
+# Extract the list of subjects from the JSON data
+subjects = data.get('subjects', [])
 
-# Read all CSV files into a single DataFrame
-dwi_metrics = pd.concat([read_and_add_filename(f) for f in dwi_metrics_files], ignore_index=True)
+# Check if there are subjects in the data
+if subjects:
+    # Open a CSV file to write the data
+    with open(csv_file_path, 'w', newline='') as csv_file:
+        writer = csv.writer(csv_file)
 
-# Separate 'filename' column into 'subject' and 'session'
-# Extract subject and session from the filename
-dwi_metrics['subject'] = dwi_metrics['filename'].str.split('_').str[0].replace('sub-', '')
-dwi_metrics['session'] = dwi_metrics['filename'].str.split('_').str[1].replace('ses-', '')
+        # Extract headers (keys) from the first subject dictionary
+        headers = list(subjects[0].keys())
+        writer.writerow(headers)  # Write header row
 
-output_dir = f"{scratch_dir}/SCanD_project/data/local/qsiprep"
-
-# Write the combined DataFrame to a new CSV file
-output_file = os.path.join(output_dir, "qsiprep_metrics.csv")
-dwi_metrics.to_csv(output_file, index=False)
+        # Iterate through each subject and write its values to the CSV
+        for subject in subjects:
+            writer.writerow([subject.get(header, '') for header in headers])  # Write row values for each subject
 
+    print(f"Data has been written to {csv_file_path}")
+else:
+    print("No subjects found in the JSON data.")
17 changes: 0 additions & 17 deletions share folder.md
@@ -33,23 +33,6 @@ ${BASEDIR}/data/share
 │   ├── group_bold.tsv
 │   ├── group_T1w.tsv
 │   └── group_T2w.tsv
-├── parcellated_ciftify
-│   ├── atlas-4S1056Parcels
-│   ├── atlas-4S156Parcels
-│   ├── atlas-4S256Parcels
-│   ├── atlas-4S356Parcels
-│   ├── atlas-4S456Parcels
-│   ├── atlas-4S556Parcels
-│   ├── atlas-4S656Parcels
-│   ├── atlas-4S756Parcels
-│   ├── atlas-4S856Parcels
-│   ├── atlas-4S956Parcels
-│   ├── atlas-aparc
-│   ├── atlas-Glasser
-│   ├── atlas-Gordon
-│   ├── atlas-HCP
-│   ├── atlas-Tian
-│   ├── cifti_dense_anat
 ├── qsiprep
 │   └── qc images and meta data for each scan
 ├── xcp-d
3 changes: 1 addition & 2 deletions stage_3.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-## stage 3 (xcp-d, xcp_noGSR, qsirecon_step2, parcellation_ciftify):
+## stage 3 (xcp-d, xcp_noGSR, qsirecon_step2):
 
 submit_array_job() {
     local script=$1
@@ -29,4 +29,3 @@ run_pipeline() {
 run_pipeline "xcp-d" "code/03_xcp_scinet.sh" 1
 run_pipeline "xcp-noGSR" "code/03_xcp_noGSR_scinet.sh" 1
 run_pipeline "qsirecon_step2" "code/03_qsirecon_step2_scinet.sh" 1
-run_pipeline "parcellation_ciftify" "./code/03_parcellate_ciftify_scinet.sh" 10