
WMT15 Eye Tracking Results - How do Humans Evaluate Machine Translation


qcri/wmt15eyetracking

 
 

Data and source code for Guzman et al. WMT2015

This repository contains the data gathered through our experiments and the analysis performed for our paper. We also include the sources for the paper and the presentation.

CONTENT:

- README.md: this file.
- data
  - exported-tasks-2014-11-25.ver4.xml (raw data)
  - data-4.1+trans.dat (preprocessed data)
  - wmt12.RNK_results.filtered.human.noties (human eval scores)
  - wmt12.spanish_quality (translations selected based on their quality)
- analysis
  - lib.R (general R functions)
  - wmt15_time_final.R (code for reproducing the analysis; see the usage sketch below)
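
A minimal sketch of how one might run the analysis from the repository root, assuming data-4.1+trans.dat is a tab-delimited table with a header row and that the R scripts can be sourced directly (the actual file format and any required working directory may differ):

```r
# Minimal sketch, not the authors' documented workflow.
# Assumes the preprocessed data is tab-delimited with a header row;
# adjust sep/header if the actual format differs.
eye_data <- read.delim("data/data-4.1+trans.dat",
                       sep = "\t", header = TRUE,
                       stringsAsFactors = FALSE)
str(eye_data)  # inspect the columns before running the analysis

# Load the shared helper functions, then the analysis script.
source("analysis/lib.R")
source("analysis/wmt15_time_final.R")
```

The analysis script presumably loads the data files it needs on its own; the read.delim call above is only for inspecting the preprocessed data.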

Related publications

F. Guzman, A. Abdelali, H. Sajjad, I. Temnikova, and S. Vogel, "How do Humans Evaluate Machine Translation". In Proceedings of the Tenth Workshop on Statistical Machine Translation (WMT 2015), 17-18 September 2015, Lisbon, Portugal.

Contacts

If you have any questions about the corpus, please direct your inquiries to Ahmed Abdelali or Francisco Guzman.

License

Developed by: Qatar Computing Research Institute, Arabic Language Technologies Group
