adding missing info to wedge and ring
ferponcem committed Mar 11, 2024
1 parent 33bca34 commit 789523f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions ibc_data/ibc_tasks.tsv
@@ -27,9 +27,9 @@ Self "The Self task was adapted from (`Genom et al., 2014 <https://doi.org/10.10
Bang "The Bang task was adapted from the study (`Campbell et al., 2015 <https://doi.org/10.1016/j.neurobiolaging.2015.07.028>`__), dedicated to investigate aging effects on neural responsiveness during naturalistic viewing. The task relies on watching - viewing and listening - of an edited version of the episode ""Bang! You're Dead"" from the TV series ""Alfred Hitchcock Presents"". The original black-and-white, 25-minute episode was condensed to seven minutes and fifty five seconds while preserving its narrative. The plot of the final movie includes scenes with characters talking to each other as well as scenes with no verbal communication. This task was performed during a single run in one unique session. Participants were never informed of the title of the movie before the end of the session. Ten seconds of acquisition were added at the end of the run. The total duration of the run was thus eight minutes and five seconds. :raw-html:`<br />` **Note:** We used the MagnaCoil (Magnacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII." Expyriment 0.9.0 (Python 2.7) MagnaCoil (Magnacoustics) 1024x769
Clips The Clips battery is an adaptation of (`Nishimoto et al., 2011 <https://doi.org/10.1016/j.cub.2011.08.031>`__), in which participants viewed naturalistic scenes edited into video clips of ten and a half minutes each. Each run was always dedicated to data collection for one video clip at a time. As in the original study, runs were grouped into two tasks pertaining to the acquisition of training data and test data, respectively. Scenes from the training-clips (ClipsTrn) task were shown only once. In contrast, scenes from the test-clips (ClipsVal) task were composed of approximately one-minute-long excerpts extracted from the clips presented during training. Excerpts were concatenated to construct the sequence of every ClipsVal run; each sequence was predetermined by randomly permuting the excerpts, each of which was repeated ten times across all runs. The same randomized sequences, employed across ClipsVal runs, were used to collect data from all participants. :raw-html:`<br />` Twelve and nine runs were dedicated to the collection of the ClipsTrn and ClipsVal tasks, respectively. Data from nine runs of each task were acquired, interleaved, in three full sessions; the three remaining runs devoted to training-data collection were acquired during half of one last session, before the `Wedge`_ and `Ring`_ tasks. To ensure the same topographic reference of the visual field for all participants, a colored fixation point was always presented at the center of the images. This point changed color three times per second to ensure that it remained visible regardless of the colors in the movie. Ten and twenty extra seconds of acquisition were respectively added at the beginning and end of every run. The total duration of each run was thus ten minutes and fifty seconds. Note that images from the test-clips task (ClipsVal) were presented three times to each participant. More precisely, in a given session, three test runs showing the same images were acquired, with the order of images varying between runs. Regardless of the session, the order of images can be found on our GitHub repository for the `first <https://github.com/individual-brain-charting/public_protocols/blob/master/clips/protocol/valseq3minby10_01.index>`__, `second <https://github.com/individual-brain-charting/public_protocols/blob/master/clips/protocol/valseq3minby10_02.index>`__ and `third <https://github.com/individual-brain-charting/public_protocols/blob/master/clips/protocol/valseq3minby10_03.index>`__ test-clips runs. Lastly, the `order of images for the training-clips <https://github.com/individual-brain-charting/public_protocols/blob/master/clips/protocol/trnseq.index>`__ is the same in all training runs and can also be found on our GitHub repository. Python 2.7 MRConfon MKII 800x600 `See demo <https://www.youtube.com/watch?v=sWuZdJoZrms&t=38s>`__
WedgeClock The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. Within the Wedge protocol, the **Wedge Clock** task consists of visual stimuli depicting a checkerboard slowly rotating clockwise. The phase of the periodic response at the rotation frequency, measured at each voxel, yields an assessment of the perimetric parameters related to polar angle (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, two runs were dedicated to this task (one run for each phase-encoding direction). Each run was five-and-a-half minutes long. Both runs were acquired in the same session, following the last three *training-data* runs of the `Clips`_ task. As in the Clips task, a point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after each run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of every run. Psychopy (Python 2.7) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080
- WedgeAnti The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. Within the Wedge protocol, the **Wedge Anticlock** task consists of visual stimuli depicting a checkerboard slowly rotating counterclockwise. The phase of the periodic response at the rotation frequency, measured at each voxel, yields an assessment of the perimetric parameters related to polar angle (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, two runs were dedicated to this task (one run for each phase-encoding direction). Each run was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after each run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of every run.
+ WedgeAnti The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. Within the Wedge protocol, the **Wedge Anticlock** task consists of visual stimuli depicting a checkerboard slowly rotating counterclockwise. The phase of the periodic response at the rotation frequency, measured at each voxel, yields an assessment of the perimetric parameters related to polar angle (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, two runs were dedicated to this task (one run for each phase-encoding direction). Each run was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after each run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of every run. Psychopy (Python 2.7) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1080
ContRing The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. The **Contracting Ring** task consists of visual stimuli depicting a thick, contracting ring. The phase of the periodic response at the contraction frequency, measured at each voxel, yields an assessment of the perimetric parameters related to eccentricity (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, one run was dedicated to this task (*ap* phase-encoding direction), which was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after the run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of the run. Psychopy (Python 2.7) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1081
- ExpRing The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. The **Expanding Ring** task consists of visual stimuli depicting a thick, dilating ring. The phase of the periodic response at the dilation frequency, measured at each voxel, yields an assessment of the perimetric parameters related to eccentricity (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, one run was dedicated to this task (*pa* phase-encoding direction), which was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after the run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of the run.
+ ExpRing The retinotopy protocols in IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. The **Expanding Ring** task consists of visual stimuli depicting a thick, dilating ring. The phase of the periodic response at the dilation frequency, measured at each voxel, yields an assessment of the perimetric parameters related to eccentricity (`Sereno et al., 1995 <https://doi.org/10.1126/science.7754376>`__). Under IBC, one run was dedicated to this task (*pa* phase-encoding direction), which was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across participants. Participants were thus instructed to fixate this point continuously; its color flickered between red, green, blue and yellow throughout the entire run. To keep participants engaged in the task, they were told that, after the run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of the run. Psychopy (Python 2.7) Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 1920x1081
Raiders The Raiders task was adapted from (`Haxby et al., 2011 <http://doi.org/10.1016/j.neuron.2011.08.026>`__), in which the full-length action movie Raiders of the Lost Ark was presented to the participants. The main goal of the original study was the estimation of hyperalignment parameters that transform the voxel space of functional data into a feature space of brain responses linked to the visual characteristics of the displayed movie. Similarly, the movie was shown here to the IBC participants in contiguous runs corresponding to the chapters of the movie as defined on the DVD. This task was completed in two sessions. In order to use the acquired fMRI data in train-test split and cross-validation experiments, we performed three extra runs at the end of the second session, in which the first three chapters of the movie were repeated. To account for stabilization of the BOLD signal, ten seconds of acquisition were added at the end of the run. Expyriment 0.9.0 (Python 2.7) MRConfon MKII 1024x768
Lec2 This task belongs to a battery of eight localizers, provided to us by the `Labex Cortex group <https://labex-cortex.universite-lyon.fr/>`__ at the University of Lyon, that tap into a wide array of cognitive functions. Originally described in (`Perrone-Bertolotti et al., 2012 <https://doi.org/10.1523/JNEUROSCI.2982-12.2012>`__), this task focuses on silent reading. During the task, participants were presented with two intermixed stories, shown word by word at a rapid rate. One of the stories was written in black (on a gray screen) and the other in white. Consecutive words of the same color formed a meaningful, simple short story in French. Participants were instructed to read the black story, in order to report it at the end of the block, while ignoring the white one. Each block comprised 400 words, with 200 black words (attend condition) and 200 white words (ignore condition) for the two stories. The time sequence of colors within the 400-word series was randomized, so that participants could not predict whether the subsequent word was to be attended or not; however, the randomization was constrained to forbid series of more than three consecutive words of the same color. Data were acquired in two runs, and each word was presented for 100 ms, with a jittered inter-stimulus interval centered around 700 ms. Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA) 1024x768
Audi "This task belongs to a battery of 8 different localizers that tap on a wide array of cognitive functions provided to us by the `Labex Cortex group <https://labex-cortex.universite-lyon.fr/>`__ at the University of Lyon. This task was originally described in (`Perrone-Bertolotti et al., 2012 <https://doi.org/10.1523/JNEUROSCI.2982-12.2012>`__) together with the `Lec2`_ localizer. Participants listened to sounds of several categories with the instruction that three of them would be presented again at the end of the task, together with three novel sounds and that they should be able to detect previously played items. There were three speech and speech-like categories, including sentences told by a computerized voice in a language familiar to the participant (French) or unfamiliar (Suomi), and reversed speech, originally in French (the same sentences as the ""French"" category, played backwards). These categories were compared with nonspeech-like human sounds (coughing and yawning), music, environmental sounds, and animal sounds. Participants were instructed to close their eyes while listening to three sounds of each category, with a duration of 12s each, along with three 12 s intervals with no stimulation, serving as a baseline (Silence). Consecutive sounds were separated by a 3 s silent interval. The sequence was pseudorandom, to ensure that two sounds of the same category did not follow each other." Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA) MagnaCoil (Magnacoustics) 1024x768
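
The Raiders entry mentions hyperalignment (`Haxby et al., 2011 <http://doi.org/10.1016/j.neuron.2011.08.026>`__), which maps each subject's voxel responses to a shared movie into a common feature space. As a rough, hedged illustration of the core step only (not the IBC pipeline nor the original implementation), the sketch below aligns two simulated subjects with an orthogonal Procrustes transform learned on training time points and evaluated on held-out ones, mirroring the train-test splits enabled by the repeated chapters; all array sizes are made up::

    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    # Assumed, illustrative shapes: 300 time points of the same movie, 50 voxels per subject.
    n_timepoints, n_voxels = 300, 50
    rng = np.random.default_rng(0)

    # Simulated responses: subject B is an orthogonally rotated, noisy copy of subject A.
    subj_a = rng.normal(size=(n_timepoints, n_voxels))
    true_rotation = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))[0]
    subj_b = subj_a @ true_rotation + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

    # Learn the voxel-space mapping on "training" time points only.
    train, test = slice(0, 200), slice(200, 300)
    rotation, _ = orthogonal_procrustes(subj_a[train], subj_b[train])

    # Apply it to held-out time points and measure the residual misalignment.
    aligned = subj_a[test] @ rotation
    residual = np.linalg.norm(aligned - subj_b[test]) / np.linalg.norm(subj_b[test])
    print(f"relative alignment error on held-out data: {residual:.3f}")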
