
E2E NLG Challenge Evaluation metrics

The metrics used for the challenge include:

  • BLEU + NIST from MT-Eval,
  • METEOR, ROUGE-L and CIDEr from the Microsoft COCO Caption Evaluation scripts.

Running the evaluation

Requirements/Installation

The metrics script requires the following dependencies:

  • Java (needed by the METEOR scorer),
  • Python with the packages listed in requirements.txt,
  • Perl with the XML::Twig module.

To install the required Python packages, run (assuming root access or virtualenv):

pip install -r requirements.txt
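
If you prefer not to install the packages system-wide, you can, for example, create and activate a virtualenv first (the environment name e2e-env below is arbitrary):

virtualenv e2e-env
source e2e-env/bin/activate
pip install -r requirements.txt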

To install the required Perl module, run (assuming root access or perlbrew/plenv):

curl -L https://cpanmin.us | perl - App::cpanminus  # install cpanm
cpanm XML::Twig
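
To check that the Perl module is available, you can, for example, try loading it and printing its version:

perl -MXML::Twig -e 'print $XML::Twig::VERSION, "\n"'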

Usage

The main entry point is measure_scores.py. To get a listing of all available options, run:

./measure_scores.py -h

The system outputs and human references can either be in a TSV/CSV format, or in plain text. The format is detected from the file extension: files ending in .tsv or .csv are treated as TSV/CSV, anything else as plain text.

For TSV/CSV, the script assumes that the first column contains the source MRs/texts and the second column contains system outputs or references. Multiple references for the same source MR/text are grouped automatically (either by matching the sources in the system output file, if that file is also TSV/CSV, or by consecutive identical sources). If the TSV/CSV file has a header row with reasonably identifiable labels (e.g. “MR”, “source”, “system output”, “reference”; some guessing is involved), the columns are identified automatically, and the file then does not need to have exactly two columns in this exact order. An illustrative TSV reference file is shown below.
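
For instance, a TSV reference file might look like this (the MRs and sentences are purely illustrative; columns are separated by tabs, and the repeated MR groups the first two references together):

MR	reference
name[Alimentum], area[city centre]	Alimentum is located in the city centre.
name[Alimentum], area[city centre]	You can find Alimentum in the city centre.
name[Aromi], eatType[coffee shop]	Aromi is a coffee shop.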

For plain text files, the script assumes one instance per line in the system output file. In the reference file, it assumes either one reference per line, or multiple references per instance, with the reference groups for different instances separated by empty lines (see TGen data conversion).
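
For example, a plain text reference file with two references for each of two instances could look as follows (sentences invented for illustration):

Alimentum is located in the city centre.
You can find Alimentum in the city centre.

Aromi is a coffee shop in the city centre.
Aromi serves coffee in the centre of town.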

Example human reference and system output files are provided in the example-inputs subdirectory -- you can try the script on them using this command:

./measure_scores.py example-inputs/devel-conc.txt example-inputs/baseline-output.txt

Source metrics scripts

MT-Eval

We used the NIST MT-Eval v13a script adapted for significance tests, from http://www.cs.cmu.edu/~ark/MT/, and adapted it further to allow a variable number of references.

Microsoft COCO Caption Evaluation

These scripts provide a different variant of BLEU (not used for evaluation in the E2E challenge), as well as METEOR, ROUGE-L, and CIDEr. We used the GitHub code for these metrics, unchanged apart from removing support for images and some of the dependencies.

Acknowledgements

Original developers of the MSCOCO evaluation scripts:

Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, David Chiang, Michael Denkowski, Alexander Rush
