Annotation Tool software for the project Ethically-driven Multimodal Emotion Detection for Children with Autism
This web-based system supports the annotation task of creating an annotated dataset. The annotator accesses an authentication-protected URL hosting the study session recordings that need to be annotated. The system records the video time, the emotion class assigned by the annotator, and the annotator's identification. This information is saved in a database and can be retrieved later when creating the working dataset with the annotations.
This repository includes the web-based annotation task system and a Python script to create working annotated datasets from the annotations collected during annotation sessions.
- Python 3.8
- Docker
- Prepare the virtual environment (create and activate a virtual environment with venv):

```shell
python -m venv ./venv
source ./venv/bin/activate
```
- Run the script:

```shell
python app.py
```
- Build the images:

```shell
docker compose build
```

- Start the services:

```shell
docker compose up -d
```
A user can configure each task's content and time by adding or changing the variable values in a `.env` file. The `example.env` file contains examples of the variable and value formats.
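As an illustration only, a `.env` file holds one `KEY=value` pair per line. The variable names below are hypothetical — consult `example.env` for the actual names and formats used by this project:

```
# Hypothetical names for illustration only -- see example.env for the real ones
TASK_VIDEO_FILE=session_01_01.mp4
TASK_DURATION_SECONDS=300
```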
(Extract from SQLite3 to a CSV file)

The original `.db` file is stored in the `db_from_server` folder by session number.

Open the database with:

```shell
sqlite3 video_annotation_with_annotator_session_xx_xx.db
```

And then run:

```
.headers on
.mode csv
.output data_annotation_session_XX_XX.csv
select id, video_file_name, annotator, emotion_zone, time_of_video_seconds, timestamp_annotation from emotion_indices_annotation;
```

Save the `.csv` file in the folder `working_dataset_creation/output_from_db`.
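If many session databases need exporting, the same extraction can be scripted with Python's standard `sqlite3` and `csv` modules instead of the interactive shell. This is a minimal sketch, assuming the table and column names shown in the query above:

```python
import csv
import sqlite3

# Columns of the emotion_indices_annotation table, in export order.
COLUMNS = ["id", "video_file_name", "annotator", "emotion_zone",
           "time_of_video_seconds", "timestamp_annotation"]

def export_annotations(db_path, csv_path):
    """Dump the emotion_indices_annotation table to a CSV file with a header row."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            f"SELECT {', '.join(COLUMNS)} FROM emotion_indices_annotation"
        ).fetchall()
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerows(rows)
```

Point it at each `video_annotation_with_annotator_session_*.db` file in turn and write the output into `working_dataset_creation/output_from_db`.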
- Change the values of the following variables to reflect the desired working dataset and session:
  - `annotation_file`
  - `target_session_video`
- Run the file:

```shell
python working_dataset_creation/generate_working_dataset.py
```
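The actual dataset-creation logic lives in `generate_working_dataset.py`. Purely as an illustration of consuming the exported CSV, the sketch below groups rows by annotator; the function name and the grouping choice are assumptions, not the script's real behaviour:

```python
import csv
from collections import defaultdict

def annotations_by_annotator(csv_path):
    """Group (time, emotion) pairs by annotator from an exported annotation CSV.

    Illustrative only: not the logic of generate_working_dataset.py.
    """
    grouped = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            grouped[row["annotator"]].append(
                (float(row["time_of_video_seconds"]), row["emotion_zone"])
            )
    return dict(grouped)
```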
This repository is released under dual licensing:

For non-commercial use of the Software, it is released under the 3-Clause BSD Licence.

For commercial use of the Software, you are required to contact the University of Galway to arrange a commercial licence.
Please refer to the LICENSE.md file for details on the licence.
If you use any of the resources provided in this repository in your publications, we ask you to cite the following work:
Sousa, Annanda, et al. "Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection in Children with Autism." International Conference on Human-Computer Interaction. Cham: Springer Nature Switzerland, 2023.
Author: Annanda Sousa
Author's contact: [email protected]