Jon Language Study #86

Open · wants to merge 43 commits into main

Conversation

Jayeon6550
No description provided.

@github-actions github-actions bot left a comment


Thank you for your contribution; we will shortly be reviewing your pull request.

@Hzaatiti
Owner

Hzaatiti commented Nov 8, 2024

Hi Jayeon, thank you for your contribution.

Let us start by testing the trigger script; if that works, then we can start looking into the response box to make it work with a PsychoPy experiment. I will also try to run the faceinversion experiment on the stimulus computer, which would give us a first running PsychoPy visual experiment.

@Jayeon6550
Author

Hi Hadi, I came to the lab and logged in to the vpixx desktop, but I cannot find a way to connect to the wifi. The desktop also looks very different from what I remember. Also, to check whether the trigger script and response box work, I should set up the MEG room (as if I were collecting empty-room data), right?

@Hzaatiti
Owner

Hi Jayeon, please WhatsApp me on +33 6 30 47 27 36.
Let's have a quick call.

@Hzaatiti
Owner

wkc267 and others added 20 commits November 14, 2024 15:18
These are the scripts for each of the different CSV files. I wanted to compare the code you completed today with what I have as of now. These scripts are working on my laptop. It turns out the problem was caused by some empty rows in the CSV files.
line 112 for screen setting
…tudy' into language_Jon-study

# Conflicts:
#	experiments/psychopy/jon_language_study/RSVP_egyptian_backward.py
#	experiments/psychopy/jon_language_study/RSVP_egyptian_bound.py
#	experiments/psychopy/jon_language_study/RSVP_emirati_backward.py
#	experiments/psychopy/jon_language_study/RSVP_emirati_bound.py
#	experiments/psychopy/jon_language_study/RSVP_mandarin_bound.py
#	experiments/psychopy/jon_language_study/egyptian_backward.csv
#	experiments/psychopy/jon_language_study/egyptian_bound.csv
#	experiments/psychopy/jon_language_study/emirati_backward.csv
#	experiments/psychopy/jon_language_study/emirati_bound.csv
#	experiments/psychopy/jon_language_study/mandarin_bound.csv
@Hzaatiti
Owner

@Jayeon6550 the script experiments/psychopy/general/trigger_test_psychopy_digital_out.py works for sending triggers to the KIT.
You can use this code in your experiment to trigger the channels.
The other script, which uses pixel mode, doesn't work yet because it requires full screen.
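
For reference, here is a minimal sketch of sending one digital-out trigger with the low-level pypixxlib API; this is an illustration under assumptions, not the contents of trigger_test_psychopy_digital_out.py, and the event code, mask, and pulse duration are placeholders:

    from pypixxlib import _libdpx as dp
    import time

    dp.DPxOpen()                            # open the connection to the VPixx device

    TRIGGER_CODE = 1                        # hypothetical event code for this trial type
    MASK = 0xFFFF                           # drive all 16 digital-out lines

    dp.DPxSetDoutValue(TRIGGER_CODE, MASK)  # raise the trigger bits
    dp.DPxUpdateRegCache()                  # push the register change to the device
    time.sleep(0.01)                        # hold the pulse ~10 ms so the KIT registers it

    dp.DPxSetDoutValue(0, MASK)             # reset all lines to zero
    dp.DPxUpdateRegCache()
    dp.DPxClose()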

@Hzaatiti
Owner

Hzaatiti commented Nov 25, 2024

@Jayeon6550

  • Full screen needs fixing (partially fixed; needs VPixx support; use a black cloth to cover the border)
  • The first word of each sentence is being shown slowly
  • Diacritic fix (investigate using a different Arabic font, rewriting the text in the CSV, or changing the encoding of the .csv file; another solution is displaying an image that contains the text rather than the text itself; for images, try this package: https://github.com/dotping-me/arabic-word-to-image/blob/main/pyarabic_word_to_image.py)
  • Trigger issue (need to fix the if/else condition)
  • Add saving of the response-box button presses to the experiment script (see the sketch after this list)

Perspective for the test run:

  • ensure triggers are correct (the number of trials is correct)
  • responses and data saved by the experiment script are correct
  • the visual display and its timing are good
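
As a sketch of the response-saving point above, one simple approach is to collect one dictionary per trial and write everything to a CSV at the end of the run; the column names, the responses list, and the output file name below are placeholders, not the repository's conventions:

    import csv

    responses = []  # filled inside the trial loop, one dict per trial
    # e.g. responses.append({"trial": trialIndex, "sentence": text, "button": button, "rt": rt})

    def save_responses(responses, filename="responses.csv"):
        """Write every recorded trial response to a CSV file."""
        with open(filename, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["trial", "sentence", "button", "rt"])
            writer.writeheader()
            writer.writerows(responses)

    save_responses(responses)  # call once at the end of the experiment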


@Hzaatiti
Owner

For the diacritic issue, it would be best to:

  • use the script in general/convert_text_image.py to generate images from all your CSV texts

  • name the generated images consistently, e.g. egyptian_sentence_1, egyptian_sentence_2, ...

  • then, in your CSV, add a new column called Image Path and put the path of the corresponding image for each sentence

  • use an image display instead of text: image = visual.ImageStim(win, image=trialList[trialIndex]['Image Path'], pos=(0, 0)) (a fuller sketch follows below)
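
Here is a minimal sketch of the resulting trial loop, assuming the CSV now contains the Image Path column described above; the file name, window size, and presentation duration are placeholders:

    import pandas as pd
    from psychopy import visual, core

    win = visual.Window(size=(1920, 1080), color="black", units="pix")

    # drop fully empty rows so blank CSV lines do not create empty trials
    trialList = pd.read_csv("egyptian_bound.csv").dropna(how="all").to_dict("records")

    for trial in trialList:
        stim = visual.ImageStim(win, image=trial["Image Path"], pos=(0, 0))
        stim.draw()
        win.flip()
        core.wait(0.5)  # presentation duration per sentence image

    win.close()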

…cipant is expected to press a button, either to move on to the next sentence or to answer yes or no. However, the response is either very slow or does not work.

    response = getbutton()  # listen to a button
    responses.append(response)  # every time we get a response we add it to the table
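
One possible non-blocking implementation of the getbutton() placeholder above, assuming the response box is read through the VPixx digital-in lines via pypixxlib; this is a sketch under those assumptions, not the code currently used in the experiment:

    from pypixxlib import _libdpx as dp

    def getbutton():
        """Return the current digital-in bit mask, or None if no button is pressed."""
        dp.DPxUpdateRegCache()       # refresh the local copy of the device registers
        value = dp.DPxGetDinValue()  # bit mask of currently pressed buttons
        return value if value != 0 else None

    # polled every frame inside the trial loop, so the display keeps updating
    # while waiting for a press instead of blocking:
    # response = getbutton()
    # if response is not None:
    #     responses.append(response)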