Replies: 1 comment
Your data appears to be quite challenging (e.g., many sessions, complex design). The issues you raise, I think, need a longer discussion and deeper thought about the exact scientific/analysis approach. The benefits of HRF tailoring depend on the experiment... In general, the more "event-like" your data and modeling approach, the more benefit there will be (since there will be a relatively large number of high-frequency transients, events, etc.). The more "long-block-like" your data and modeling approach, the less benefit there will be (since most of the signal will be carried at low frequencies). In your case, it sounds like the data are intrinsically continuous (a movie), so the issue becomes a bit trickier: what sorts of neural signals do you expect to evoke in your data, and how exactly are you choosing to model these signals?
Hello,
I’m working on how neural representations of characters change over time during a longitudinal fMRI TV-watching study. I have annotations of each main character’s face (whether the character is present in a given shot), so I have a dataframe where each row is the start time of a shot change and the columns are [“char1”, “char2”, “char3”, …, “unknown”].
I reformatted the dataframe to n_tr x n_condition, where the conditions are char1, char2, char3, char1 x char2, and char1 x char2 x char3.
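For concreteness, here is a minimal sketch of how I build that design matrix from the shot annotations (the onsets, character columns, and run length are toy placeholders):

```python
import numpy as np
import pandas as pd

# Toy shot-level annotations: one row per shot change, onset in seconds,
# one binary column per main character indicating presence in that shot.
shots = pd.DataFrame({
    "onset": [0.0, 12.5, 30.0, 47.2],
    "char1": [1, 0, 1, 0],
    "char2": [0, 1, 1, 0],
})

tr = 1.49      # repetition time (s)
n_tr = 300     # number of volumes in this run (placeholder)

# Conditions: single characters plus co-occurrences, as described above.
conditions = ["char1", "char2", "char1 x char2"]
design = np.zeros((n_tr, len(conditions)), dtype=int)

for _, row in shots.iterrows():
    t = int(round(row["onset"] / tr))          # TR index of the shot onset
    if t >= n_tr:
        continue
    present = [c for c in ["char1", "char2"] if row[c] == 1]
    if present:
        name = " x ".join(present)             # e.g. "char1 x char2"
        design[t, conditions.index(name)] = 1
```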
I tested GLMsingle on a sub-dataset, and here are some questions before I run it on the full dataset:
I used fMRIPrep for preprocessing, and I have been removing confounds with the nilearn function load_confounds_strategy (“compcor”) (https://nilearn.github.io/dev/modules/generated/nilearn.interfaces.fmriprep.load_confounds_strategy.html). I guess I should not do this before fitting GLMsingle. Should I instead use the data directly from fMRIPrep to fit GLMsingle? Do you recommend any signal cleaning (detrending, z-scoring; https://nilearn.github.io/dev/modules/generated/nilearn.signal.clean.html#nilearn.signal.clean)?
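For reference, this is roughly the cleaning step I currently apply (the file path is a placeholder); my question is whether to drop it and hand the fMRIPrep output to GLMsingle as-is:

```python
from nilearn.image import clean_img, load_img
from nilearn.interfaces.fmriprep import load_confounds_strategy

# Placeholder path to one fMRIPrep-preprocessed run.
bold = "sub-01_task-tv_run-01_space-T1w_desc-preproc_bold.nii.gz"

# Pull the CompCor confound set that fMRIPrep generated for this run...
confounds, sample_mask = load_confounds_strategy(bold, denoise_strategy="compcor")

# ...and regress it out (with detrending, no standardization) before further modeling.
cleaned = clean_img(load_img(bold), confounds=confounds, detrend=True, standardize=False)
```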
I saw the question about “duration” on the wiki. My TR is 1.49 seconds, and I set the duration to 2 seconds, which is somewhat arbitrary. As I mentioned before, I first segment the video into shots (using pyScenes) and then track character faces within each shot, so my character annotation is at the shot level. Shot lengths vary, roughly from 1 second to 1 minute. Given the HRF, 2 seconds is probably the smallest duration that can still work reasonably (?). Do you have any recommendation for choosing the duration?
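For context, this is roughly how the duration enters my call, assuming the Python GLMsingle interface (GLM_single(...).fit(design, data, stimdur, tr)); the design/data arrays here are random toy shapes only to show the signature, not real inputs:

```python
import numpy as np
from glmsingle.glmsingle import GLM_single

tr = 1.49       # repetition time (s)
stimdur = 2.0   # assumed event duration (s); actual shots range from ~1 s to ~1 min

# Toy inputs only to illustrate shapes:
# design: list of (n_TR x n_conditions) onset matrices, one per run
# data:   list of (X x Y x Z x n_TR) arrays, one per run, same order as design
design = [(np.random.rand(300, 5) < 0.02).astype(float) for _ in range(2)]
data = [np.random.randn(10, 10, 10, 300) for _ in range(2)]

opt = {"wantmemoryoutputs": [1, 1, 1, 1]}  # keep all model outputs in memory
results = GLM_single(opt).fit(design, data, stimdur, tr, outputdir="glmsingle_test")
```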
The fraction values for all 33 vertices are 0.05. In your wiki, it says, “This does indicate that signals appear to be weak; however, it does not necessarily indicate that there is a problem with the data or analysis per se.” That sounds reasonable; my signal is probably weak. Maybe it would improve if I added more runs, but I modeled each session separately, so adding more runs probably won't help. I do want to model all runs together using sessionindicator, but I'm not sure whether modeling 150 runs together will overwhelm the compute node (I could try).
Speaking of runs, I have ~150 runs in total, belonging to ~30 sessions, with 1 to 10 runs per session. The runs contain no repeated watching; they all belong to the same TV show. So I’m thinking of running GLMsingle on each session and then normalizing the betas across sessions as described in the wiki. My ultimate goal is to understand how the neural representation of each character changes over time. Would you suggest doing something else?
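To be explicit about the normalization I have in mind, here is a hedged sketch (z-scoring each vertex’s single-trial betas within a session before pooling across sessions); the array names and shapes are placeholders, not GLMsingle output names:

```python
import numpy as np

def zscore_within_session(betas):
    """z-score each vertex's betas across trials, separately per session.

    betas: array of shape (n_vertices, n_trials_in_session)
    """
    mu = betas.mean(axis=1, keepdims=True)
    sd = betas.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0                     # guard against constant vertices
    return (betas - mu) / sd

# Placeholder shapes: one single-trial beta array per session.
betas_by_session = [np.random.randn(5000, 120), np.random.randn(5000, 90)]
normalized = np.concatenate(
    [zscore_within_session(b) for b in betas_by_session], axis=1
)
```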
I also want to derive individual HRFs from our data. If time permits, I might do it. Would it improve the results much?
Thank you!