adding audio info to new LPPLocalizer task
ferponcem committed Sep 18, 2024
1 parent dcbd946 commit 7540c0d
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion ibc_data/ibc_tasks.tsv
@@ -50,7 +50,7 @@ ColumbiaCards This task is a part of a battery of several tasks coming from the
DotPatterns This task is a part of a battery of several tasks coming from the `experiment factory <https://github.com/expfactory/expfactory-experiments>`__ published in (`Eisenberg et al., 2017 <https://doi.org/10.1016/j.brat.2017.09.014>`__) and presented using the `expfactory-python <https://github.com/expfactory/expfactory-python>`__ package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. The adjustments concerned the translation of all written stimuli and instructions into French, as well as fixing a total time limit for the experiments that still allowed the participants to respond at their own pace. All these modifications were made with extreme care not to alter the psychological state that the original tasks were designed to capture during scanning. :raw-html:`<br />` The DotPatterns task presents the participant with pairs of stimuli separated by a fixation cross. The participant has to press a button (index finger) as fast as possible after the presentation of the probe, and only one specific cue-probe combination is instructed to be responded to differently. This task was designed to capture activation related to the expectancy of the probe elicited by the correct cue. The task is composed of 160 trials divided into 4 blocks of 40 trials each. Each cue and probe lasted 500ms, separated by a fixation cross lasting 2000ms. It was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions. JavaScript, Python 2.7 Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 3200x1800
WardAndAllport "This task is a part of a battery of several tasks coming from the `experiment factory <https://github.com/expfactory/expfactory-experiments>`__ published in (`Eisenberg et al., 2017 <https://doi.org/10.1016/j.brat.2017.09.014>`__) and presented using `expfactory-python <https://github.com/expfactory/expfactory-python>`__ package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. The adjustments concerned the translation to all written stimuli and instructions into french, as well as fixing a total time limit for experimentsthat allowed the participants their own pace for responding. All these modifications were done with extreme care of not altering the psychological state that the original tasks were designed to capture during scanning. :raw-html:`<br />` WardAndAllport task is a digital version of the WATT3 task (`Ward, Allport, 1997 <https://doi.org/10.1080/713755681>`__, `Shallice, 1982 <https://doi.org/10.1098/rstb.1982.0082>`__), and its main purpose is to capture activation related to planning abilities. For this, the task uses a factorial manipulation of 2 task parameters: search depth and goal hierarchy. Search depth involves mentally constructing the steps necessary to reach the goal state, and the interdependecy between steps in order to do so. This is expressed by the presence or absence of intermediate movements necessary for an optimal solution of each problem. Goal hierarchy refers to whether the order in which the three balls have to be put in their goal positions can be completely extracted from looking at the goal state or if it requires the participant to integrate information between goal and starting states (which result in unambiguous or partially ambiguous goal states, respectively). Detailed explanations and examples of each one of the four categories can be found in `Kaller et al., 2011 <https://doi.org/10.1093/cercor/bhq096>`__. :raw-html:`<br />` The task was divided in 4 practice trials, followed by 48 test trials divided in 3 blocks of 14 trials each, separated by 10 seconds of resting period. Data was only acquired during the test trials, although the practice trials were also performed inside the scanner with its corresponding equipment. In each trial, the participant would see two configurations of the towers: the test towers on the left, and the target towers on the right. The towers of the right showed the final configuration of balls required to complete the trial. Three buttons were assigned to the left (index finger' button), middle (middle finger's button) and right (ring finger's button) columns respectively, and each button press would either take the upper ball of the selected column or drop the ball in hand at the top of the selected column. On the upper left corner, a gray square with the text ""Ball in hand"" would show the ball currently in hand. All trials could be solved in 3 movements, considering taking a ball and putting it elsewhere as a single movement. The time between the end of one trial and the beginning of the next one was 1000 ms." JavaScript, Python 2.7 Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 3200x1800
LePetitPrince This experiment is a natural language comprehension protocol, originally implemented by (`Bhattasali et al., 2019 <https://doi.org/10.1080/23273798.2018.1518533>`__, `Hale et al., 2022 <https://doi.org/10.1146/annurev-linguistics-051421-020803>`__). Each run of this task comprised three chapters of The Little Prince (Le Petit Prince) by Antoine de Saint-Exupéry, in French. During each run, the participant was presented with the audio of the story. In between runs, the experimenters would ask some multiple-choice questions, as well as two or three open-ended questions about the contents of the previous run, to keep participants engaged. The length of the runs varied between nine and thirteen minutes. Data were acquired in two sessions, comprising five and four runs, respectively. The protocol also included a six-minute localizer at the end of the second acquisition, in order to accurately map the language areas of each participant; see :ref:`LPPLocalizer` for details. :raw-html:`<br />` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics) 1920x1080
LPPLocalizer **Le Petit Prince Localizer** was included as part of the :ref:`LePetitPrince` task and was performed at the end of the second acquisition. It aimed to accurately map the language areas of each participant, which would later be used for further analysis. The stimuli consisted of two types of audio clips: phrases and their reversed versions. The phrases were 2-second voice recordings (audio only) of context-free sentences in French. The reversed stimuli used the same clips but played backward, making the content unintelligible. The run consisted of alternating blocks of 3 trials with phrases (French trials) and 3 trials with reversed phrases (control trials). This localizer was conducted in a single run, lasting 6 minutes and 32 seconds. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics)
LPPLocalizer **Le Petit Prince Localizer** was included as part of the :ref:`LePetitPrince` task and was performed at the end of the second acquisition. It aimed to accurately map the language areas of each participant, which would later be used for further analysis. The stimuli consisted of two types of audio clips: phrases and their reversed versions. The phrases were 2-second voice recordings (audio only) of context-free sentences in French. The reversed stimuli used the same clips but played backward, making the content unintelligible. The run consisted of alternating blocks of 3 trials with phrases (French trials) and 3 trials with reversed phrases (control trials). This localizer was conducted in a single run, lasting 6 minutes and 32 seconds. :raw-html:`<br />` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics)
BiologicalMotion1 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 <https://doi.org/10.3758/BF03212378>`__), and consisted in point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__). During the task, the participants were shown a point-light ""walker"", and they had to decide if the walker's orientation was to the left or right, by pressing on the response box respectively on the index finger's button or the middle finger's button. The stimuli were divided in 6 different categories: three types of walkers, as well as their reversed versions. The division of the categories focuses on three types of information that the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__ for more details about the stimuli. The data was acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500ms and the inter-stimulus interval 1500ms (total 16s per block). Each of the blocks was followed by a fixation block, that also lasted 16s. Each run contained 4 of the six conditions, repeated 3 times each. There were 2 different types of runs: type 1 and 2. This section refers to run type 1, which contained both global types (natural and inverted) and both local naturals. For run type 2 refer to :ref:`BiologicalMotion2`." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 1024x768 `See demo <https://www.youtube.com/watch?v=VDfsUzu8_Gw>`__
BiologicalMotion2 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 <https://doi.org/10.3758/BF03212378>`__), and consisted in point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__). During the task, the participants were shown a point-light ""walker"", and they had to decide if the walker's orientation was to the left or right, by pressing on the response box respectively on the index finger's button or the middle finger's button. The stimuli was divided in 6 different categories: three types of walkers, as well as their reversed versions. The division of the categories focuses on three types of information that the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__ for more details about the stimuli. The data was acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500ms and the inter-stimulus interval 1500ms (total 16s per block). Each of the blocks was followed by a fixation block, that also lasted 16s. Each run contained 4 of the six conditions, repeated 3 times each. This section refers to run type 2, which contained both local naturals and both local modified." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 1024x769
MathLanguage The **Mathematics and Language** protocol was taken from (`Amalric et al., 2016 <https://doi.org/10.1073/pnas.1603205113>`__). This task aims to comprehensively capture the activation related to several types of mathematical and other kinds of facts, presented as sentences. During the task, the participants are presented with a series of sentences, each one in one of two modalities: auditory or visual. Some of the categories include theory-of-mind statements, arithmetic facts and geometry facts. After each sentence, the participant has to indicate whether they believe the presented fact to be true or false, by pressing the button in the left or right hand, respectively. A second version of each run (runs *B*) was generated by reversing the modality of each trial, so that trials that were visual in the original runs (runs *A*) would be auditory in their corresponding *B* version, and vice versa. Each participant performed four A-type runs, followed by three B-type runs due to time constraints. Each run had an equal number of trials of each category, and the order of the trials was the same for all subjects. :raw-html:`<br />` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-05* and *subject-08*, who completed the session using MRConfon MKII. Expyriment 0.9.0 (Python 3.6) In-house custom-made sticks featuring one top button, one to be used in each hand OptoACTIVE (Optoacoustics) 1920x1080 `See demo <https://youtu.be/FuOiQHS2764>`__ `Repository <https://github.com/individual-brain-charting/public_protocols/tree/master/MathLanguage>`__
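
As a rough, hypothetical illustration of the LPPLocalizer design described in the changed row above (alternating blocks of 3 French-phrase trials and 3 reversed-phrase trials, with 2-second audio clips, in a single run), the following plain-Python sketch only builds the trial sequence; the number of block pairs is an assumption chosen for illustration and is not taken from the actual protocol files::

    # Hypothetical sketch (not the actual protocol code) of the LPPLocalizer
    # structure: alternating blocks of 3 French-phrase trials and 3
    # reversed-phrase trials, each clip lasting about 2 seconds.
    N_BLOCK_PAIRS = 8        # assumed number of French/reversed block pairs
    TRIALS_PER_BLOCK = 3     # from the task description
    CLIP_DURATION_S = 2.0    # 2-second voice recordings

    def build_trial_sequence(n_block_pairs=N_BLOCK_PAIRS):
        """Return a list of (block_index, condition, trial_index) tuples."""
        sequence, block = [], 0
        for _ in range(n_block_pairs):
            for condition in ("french", "reversed"):
                for trial in range(TRIALS_PER_BLOCK):
                    sequence.append((block, condition, trial))
                block += 1
        return sequence

    if __name__ == "__main__":
        trials = build_trial_sequence()
        n_blocks = trials[-1][0] + 1
        print("%d trials in %d blocks, ~%.0f s of audio"
              % (len(trials), n_blocks, len(trials) * CLIP_DURATION_S))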
