Handle cases where a contrast is missing from the first level #380

Open
effigies opened this issue Jul 21, 2022 · 4 comments

@effigies
Collaborator

Environment

  • Python version: 3.9
  • fitlins version: 0.10.1

Expected Behavior

My understanding of the situation: @sjshim has a dataset with a demeaned_RT regressor, but this regressor is non-NaN only if the subject responds at least once during a run. For a run in which no responses occurred, the regressor will be missing from the design matrix, and the corresponding contrast will be missing from the L1 model outputs.

When these outputs are passed to the L2 model, it may expect 5 input stat maps but receive only 4. We should detect this case and remove the corresponding rows from the design matrix when an expected statistical map is missing.
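
A minimal sketch of what that detection could look like, assuming the second-level design matrix is a pandas DataFrame with one row per expected first-level input; `drop_missing_rows`, `expected_inputs`, and `found_inputs` are illustrative names, not actual fitlins internals:

```python
import warnings

def drop_missing_rows(design_matrix, expected_inputs, found_inputs):
    """Drop design-matrix rows whose expected first-level stat map is missing."""
    found = set(found_inputs)
    missing = [inp for inp in expected_inputs if inp not in found]
    if missing:
        # "Do it all the time and warn" variant: trim silently would hide errors
        warnings.warn(f"Dropping rows for missing stat maps: {missing}")
    keep = [i for i, inp in enumerate(expected_inputs) if inp in found]
    # iloc keeps the surviving rows positionally aligned with the surviving inputs
    return design_matrix.iloc[keep]
```

The trimmed matrix and the list of surviving inputs would then be passed to the second-level model together, so rows and stat maps stay aligned.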

@adelavega
Collaborator

adelavega commented Jul 21, 2022

Is this what --drop-missing is for? Or is this to ensure that if you do --drop-missing the next level handles the new shape of predictors correctly?

Also, maybe --drop-missing should be default behavior? This seems to trip people up frequently.

@effigies
Collaborator Author

> Is this what --drop-missing is for? Or is this to ensure that if you do --drop-missing the next level handles the new shape of predictors correctly?

Yes, we need to handle the new shape correctly. Could make it contingent on --drop-missing or do it all the time and warn?

> Also, maybe --drop-missing should be default behavior? This seems to trip people up frequently.

I would worry that this would make us silently ignore errors in the model spec.

@adelavega
Collaborator

I'm still not 100% sure when --drop-missing works and when it doesn't, because it always seems to work for me. For example, if one subject is missing a predictor in run 1 but not in runs 2-3, it seems to handle the new shape fine.

I think that's a valid worry. Maybe for now we could make it contingent on --drop-missing, but also throw a useful error suggesting --drop-missing if mismatched shapes are detected.
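
A hedged sketch of what that error could look like, under the assumption that the second level knows how many stat maps it expects; `check_n_inputs`, `n_inputs`, and `design_matrix` are illustrative names, not actual fitlins internals:

```python
def check_n_inputs(n_inputs, design_matrix):
    """Fail loudly when the number of stat maps doesn't match the design matrix."""
    n_rows = design_matrix.shape[0]
    if n_inputs != n_rows:
        raise ValueError(
            f"Found {n_inputs} input stat maps but the design matrix has "
            f"{n_rows} rows. A first-level contrast may be missing; "
            "consider re-running with --drop-missing."
        )
```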

@sjshim

sjshim commented Jul 21, 2022

When I ran this job, I did include --drop-missing. Happy to share other details of our dataset if that would help with implementing more options.
