2024
Demidenko MI, Mumford JA, Poldrack RA (2024). Impact of analytic decisions on test-retest reliability of individual and group estimates in functional magnetic resonance imaging: a multiverse analysis using the monetary incentive delay task. bioRxiv.
diff --git a/docs/search.json b/docs/search.json
index 4bb6a0a..d0f18b1 100644
--- a/docs/search.json
+++ b/docs/search.json
@@ -267,7 +267,7 @@
     "href": "labguide/research/intellectual_property.html",
     "title": "Intellectual property",
     "section": "",
-    "text": "All products of research at Stanford (including data and code) are the property of the University. However, faculty have wide latitude to release software as open source under the University’s open source policy, as long as it doesn’t conflict with any other obligations. Students are allowed to release code under an open source license at Stanford with faculty permission; students in the Poldracklab have blanket permission to do so, as all of our code is intended to be made open available, as discussed in the section on code sharing. After leaving the lab, trainees can continue to reuse any code or other research materials (e.g. stimuli) developed as part of their work in the lab as long as the code has been released under an open source license and they continue to abide by the terms of the license.\nAs discussed in the section on data management and sharing, all data collected within our laboratory is meant to be shared upon submission of the related paper. In cases where these data can be deidentified they will be shared under a public domain dedication (CC0), which places no restrictions on their use by other researchers. Thus, any researcher can continue to use those data once they leave the lab. In other cases it may be necessary to restrict data sharing (e.g. when the data cannot be deidentified), in which case researchers will need to obtain a data use agreement from Stanford in order to access those data at their new institution. Dr. Poldrack commits to supporting any such requests, unless they violate other obligations of his or the University.\nIn many cases we use data within the lab under Data Use Agreements (such as the ABCD or HCP datasets), and any researcher wishing to use such datasets must be explcitly listed on the DUA. Once a researcher is no longer included in the Stanford DUA for a particular dataset, they must no longer access the dataset via Stanford computer systems.\n\n\nWhen a trainee leaves the lab, there is often uncertainty about which ideas are theirs to pursue independently and which must be pursued in collaboration with their former mentor. It is difficult to specify a blanket policy on this issue, as it will depend on many different circumstances. Trainees are encouraged to discuss this issue openly with Dr. Poldrack prior to departing the lab, to prevent any misunderstandings. In many cases Dr. Poldrack will be happy for the trainee to take the ideas and pursue them independently in their new position; this has happened on a number of occasions. In other cases, particularly when the ideas were developed in close collaboration with Dr. Poldrack, there may be an expectation that further development of the ideas will occur through collaborative efforts.",
+    "text": "All products of research at Stanford (including data and code) are the property of the University. However, faculty have wide latitude to release software as open source under the University’s open source policy, as long as it doesn’t conflict with any other obligations. Students are allowed to release code under an open source license at Stanford with faculty permission; students in the Poldracklab have blanket permission to do so, as all of our code is intended to be made open available, as discussed in the section on code sharing. After leaving the lab, trainees can continue to reuse any code or other research materials (e.g. stimuli) developed as part of their work in the lab as long as the code has been released under an open source license and they continue to abide by the terms of the license.\nAs discussed in the section on data management and sharing, all data collected within our laboratory is meant to be shared upon submission of the related paper. In cases where these data can be deidentified they will be shared under a public domain dedication (CC0), which places no restrictions on their use by other researchers. Thus, any researcher can continue to use those data once they leave the lab. In other cases it may be necessary to restrict data sharing (e.g. when the data cannot be deidentified), in which case researchers will need to obtain a data use agreement from Stanford in order to access those data at their new institution. Dr. Poldrack commits to supporting any such requests, unless they violate other obligations of his or the University.\nIn many cases we use data within the lab under Data Use Agreements (such as the ABCD or HCP datasets), and any researcher wishing to use such datasets must be explicitly listed on the DUA. Once a researcher is no longer included in the Stanford DUA for a particular dataset, they must no longer access the dataset via Stanford computer systems.\n\n\nWhen a trainee leaves the lab, there is often uncertainty about which ideas are theirs to pursue independently and which must be pursued in collaboration with their former mentor. It is difficult to specify a blanket policy on this issue, as it will depend on many different circumstances. Trainees are encouraged to discuss this issue openly with Dr. Poldrack prior to departing the lab, to prevent any misunderstandings. In many cases Dr. Poldrack will be happy for the trainee to take the ideas and pursue them independently in their new position; this has happened on a number of occasions. In other cases, particularly when the ideas were developed in close collaboration with Dr. Poldrack, there may be an expectation that further development of the ideas will occur through collaborative efforts.",
     "crumbs": [
       "Lab guide",
       "Research practices",
@@ -552,7 +552,7 @@
     "href": "labguide/research/statistical_methods.html",
     "title": "Statistical methods",
     "section": "",
-    "text": "Do not hesitate to ask Jeanette Mumford for help with any statistical analyses.\n\n\n\n\n\nMethods for multiple testing correction should be chosen prior to data analysis and reported in the preregistration document.\nThe nature of grouping for multiple testing corrections (e.g., parameter within single models, sets ot models, etc) should be pre-specified.\nFor fMRI data, if using cluster-based thresholding the cluster forming threshold must be Z ≥ 3.1 if using parametric thresholding. There are no restrictions on threshold when using nonparametric thresholding, but the threshold should be pre-specified.\n\n\n\n\n\nProper cross-validation practices should be considered. Refer to https://pubmed.ncbi.nlm.nih.gov/31774490/ for more detail.\n\nIn-sample model fit indices should not be reported as evidence for predictive accuracy\nThe cross-validation procedure should encompass all operations applied to the data\nPrediction analyses should not be performed with samples smaller than several hundred observations\nMultiple measures of prediction accuracy should be examined and reported\nThe coefficient of determination should be computed using the sums of squares formulation and not the squared correlation coefficient\nLeave-one-out cross-validation should be avoided in favor of shuffle-split or K-fold cross-validation.\n\n\n\n\n\n\nMixed effects models should include all possible random effects, subject to the constraint of adequate model convergence.\nConvergence should be checked for all models, and models should be checked for highly influential observations.\nSimplification or random effects structure can be done if the p-value > 0.2 (https://www.google.com/url?q=https://www.sciencedirect.com/science/article/pii/S0749596X17300013&sa=D&source=editors&ust=1623857125982000&usg=AOvVaw2E_twuab7e56Nn_i1yTdmn) assuming convergence.\n\n\n\n\n\nDoes the model properly test the hypothesis of interest?\nDo the data conform to the assumptions of the model?\n\nAll model fits should be criticized to identify violation of any assumptions or presence of outliers.\n\n\n\n\n\nEffect size, parameter estimate, standard error, p-value and confidence interval should all be reported.\nIn cases where null effects are reported, some measure of evidence for the null hypothesis (e.g. Bayes factors, equivalence tests) should be reported.\nResults for all analyses that were preregistered should be reported.",
+    "text": "Do not hesitate to ask Jeanette Mumford for help with any statistical analyses.\n\n\n\n\n\nMethods for multiple testing correction should be chosen prior to data analysis and reported in the preregistration document.\nThe nature of grouping for multiple testing corrections (e.g., parameter within single models, sets of models, etc) should be pre-specified.\nFor fMRI data, if using cluster-based thresholding the cluster forming threshold must be Z ≥ 3.1 if using parametric thresholding. There are no restrictions on threshold when using nonparametric thresholding, but the threshold should be pre-specified.\n\n\n\n\n\nProper cross-validation practices should be considered. Refer to https://pubmed.ncbi.nlm.nih.gov/31774490/ for more detail.\n\nIn-sample model fit indices should not be reported as evidence for predictive accuracy\nThe cross-validation procedure should encompass all operations applied to the data\nPrediction analyses should not be performed with samples smaller than several hundred observations\nMultiple measures of prediction accuracy should be examined and reported\nThe coefficient of determination should be computed using the sums of squares formulation and not the squared correlation coefficient\nLeave-one-out cross-validation should be avoided in favor of shuffle-split or K-fold cross-validation.\n\n\n\n\n\n\nMixed effects models should include all possible random effects, subject to the constraint of adequate model convergence.\nConvergence should be checked for all models, and models should be checked for highly influential observations.\nSimplification or random effects structure can be done if the p-value > 0.2 (https://www.google.com/url?q=https://www.sciencedirect.com/science/article/pii/S0749596X17300013&sa=D&source=editors&ust=1623857125982000&usg=AOvVaw2E_twuab7e56Nn_i1yTdmn) assuming convergence.\n\n\n\n\n\nDoes the model properly test the hypothesis of interest?\nDo the data conform to the assumptions of the model?\n\nAll model fits should be criticized to identify violation of any assumptions or presence of outliers.\n\n\n\n\n\nEffect size, parameter estimate, standard error, p-value and confidence interval should all be reported.\nIn cases where null effects are reported, some measure of evidence for the null hypothesis (e.g. Bayes factors, equivalence tests) should be reported.\nResults for all analyses that were preregistered should be reported.",
     "crumbs": [
       "Lab guide",
       "Research practices",
@@ -576,7 +576,7 @@
     "href": "labguide/research/statistical_methods.html#multiple-testing",
     "title": "Statistical methods",
     "section": "",
-    "text": "Methods for multiple testing correction should be chosen prior to data analysis and reported in the preregistration document.\nThe nature of grouping for multiple testing corrections (e.g., parameter within single models, sets ot models, etc) should be pre-specified.\nFor fMRI data, if using cluster-based thresholding the cluster forming threshold must be Z ≥ 3.1 if using parametric thresholding. There are no restrictions on threshold when using nonparametric thresholding, but the threshold should be pre-specified.",
+    "text": "Methods for multiple testing correction should be chosen prior to data analysis and reported in the preregistration document.\nThe nature of grouping for multiple testing corrections (e.g., parameter within single models, sets of models, etc) should be pre-specified.\nFor fMRI data, if using cluster-based thresholding the cluster forming threshold must be Z ≥ 3.1 if using parametric thresholding. There are no restrictions on threshold when using nonparametric thresholding, but the threshold should be pre-specified.",
     "crumbs": [
       "Lab guide",
       "Research practices",
@@ -707,7 +707,7 @@
     "href": "labguide/environment/code_of_conduct.html",
     "title": "Code of conduct",
     "section": "",
-    "text": "All lab members and guests are expected to adhere to the following code of conduct. In addition, all Stanford University affiliates are required to adhere to the University Code of Conduct. For definitions of an act of intolerance & hate crime escalation policy please see this page on University AOI. If you wish to speak with someone outside of the lab regarding an incident that occured in or outside of the lab, the university has thislist of resources.",
+    "text": "All lab members and guests are expected to adhere to the following code of conduct. In addition, all Stanford University affiliates are required to adhere to the University Code of Conduct. For definitions of an act of intolerance & hate crime escalation policy please see this page on University AOI. If you wish to speak with someone outside of the lab regarding an incident that occurred in or outside of the lab, the university has thislist of resources.",
     "crumbs": [
       "Lab guide",
       "Lab environment",
diff --git a/docs/software.html b/docs/software.html
index 60cd46b..9331b0a 100644
--- a/docs/software.html
+++ b/docs/software.html
@@ -424,7 +424,7 @@
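The statistical-methods text carried in the diff above specifies that the coefficient of determination be computed from the sums-of-squares formulation rather than as a squared correlation coefficient. A minimal sketch of why the two diverge, assuming NumPy is available; the data are illustrative, not from the lab guide:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = y_true + 2.0  # perfectly correlated but systematically biased predictions

# Sums-of-squares formulation: R^2 = 1 - SS_res / SS_tot.
# This penalizes bias and can go negative for poor predictions.
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
r2_ss = 1.0 - ss_res / ss_tot

# Squared-correlation formulation: insensitive to bias and scale,
# so it reports a perfect fit here despite the constant offset.
r2_corr = np.corrcoef(y_true, y_pred)[0, 1] ** 2

print(f"R^2 (sums of squares):     {r2_ss:.3f}")    # -1.000
print(f"R^2 (squared correlation): {r2_corr:.3f}")  # 1.000
```

The squared correlation rewards any linear relationship, so a predictor that is systematically off still scores 1.0; the sums-of-squares form correctly reports worse-than-baseline accuracy.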
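The same text prefers shuffle-split or K-fold cross-validation over leave-one-out. A hedged sketch of both recommended schemes, assuming scikit-learn; the Ridge estimator and synthetic data are placeholders, not the lab's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # placeholder features
y = X @ rng.normal(size=10) + rng.normal(size=500)   # placeholder target
# n = 500 observations, consistent with the guidance against
# prediction analyses on samples smaller than several hundred.

model = Ridge()

# K-fold: every observation is held out exactly once across K test folds.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
kfold_scores = cross_val_score(model, X, y, cv=kfold, scoring="r2")

# Shuffle-split: repeated random train/test partitions of a fixed size.
shuffle = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
shuffle_scores = cross_val_score(model, X, y, cv=shuffle, scoring="r2")

print(f"5-fold R^2:        {kfold_scores.mean():.3f} +/- {kfold_scores.std():.3f}")
print(f"shuffle-split R^2: {shuffle_scores.mean():.3f} +/- {shuffle_scores.std():.3f}")
```

Note that scikit-learn's "r2" scorer already uses the sums-of-squares formulation, so this setup also satisfies the R^2 guidance above.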