
casting nii to int16 in preprocessfmri.m #1

Open
tommysprague opened this issue Mar 6, 2017 · 4 comments

Comments

@tommysprague

For nii's where intensity values are high, this seems like a reasonable step. However, when the intensity values are small (e.g., <= 400 over the entire brain), this step aggressively compresses the dynamic range of the image, especially in the time dimension. My resulting .nii's after processing end up with only 5-20 unique values over time for any given voxel.

Perhaps a check could be added somewhere for whether the cast to int16 would compress the data, based on its range?
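
A minimal MATLAB sketch of the kind of check being suggested here (not code from preprocessfmri.m; the variable name `vol` and the 1000-level threshold are illustrative assumptions):

```matlab
% Hypothetical check: warn when an int16 cast would quantize the data too
% coarsely. <vol> is assumed to be a single/double fMRI volume.
datarange = double(max(vol(:)) - min(vol(:)));
if datarange < 1000   % assumed threshold: fewer than ~1000 integer levels would survive the cast
  warning('int16 cast would leave only ~%d distinct intensity levels; consider rescaling first.', ...
          round(datarange));
end
vol16 = int16(vol);   % the cast itself
```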

@kendrickkay
Member

kendrickkay commented Mar 6, 2017 via email

@tommysprague
Author

I was thinking the easiest way would be to scale the input image to match the range of the int16 data format before casting; if the same scaling is applied across an entire session, I don't think this would cause clipping, etc.

And I actually didn't track it down to a line number. I just noticed that time series plotted from the output nii's take on only a small number of discrete values, so the problem arises wherever the int16 conversion from the NIfTI float/single-precision data happens.

I can send you some example files if you like.
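
A minimal MATLAB sketch of the scaling approach proposed above (illustrative only, not the actual preprocessfmri.m change; the variable name `data` and the use of a single session-wide scale factor are assumptions):

```matlab
% Hypothetical fix: rescale the whole session into the int16 range before
% casting, so that small-valued inputs keep their dynamic range.
% <data> is assumed to be a 4-D single/double array (X x Y x Z x time);
% applying one scale factor to the whole session avoids clipping and keeps
% runs mutually comparable.
scalefactor = double(intmax('int16')) / max(abs(double(data(:))));  % map the session max to 32767
data16 = int16(double(data) * scalefactor);
% The scale factor would need to be recorded (e.g., its inverse in the NIfTI
% scl_slope header field) so the original units can be recovered downstream.
```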

@kendrickkay
Member

kendrickkay commented Mar 8, 2017 via email

@tommysprague
Author

Yes - a few labs at NYU (the Winawer lab, and parts of the Curtis lab) use a wrapper around it for preprocessing.

And the first option sounds sensible if there's an agreed-upon minimum time-series precision (8 bits?). I can't imagine a scale-factor difference inducing any large problems, but I'm likely overlooking something.
