Understanding Minimum Bayes Risk Decoding

This repo provides code and documentation for the following paper:

Müller and Sennrich (2021): Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation.

@inproceedings{muller2021understanding,
    title = {Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation},
    author = {M{\"u}ller, Mathias and Sennrich, Rico},
    year = {2021},
    eprint = {2105.08504},
    booktitle = {Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)}
}

Basic Setup

Clone this repo in the desired place:

git clone https://github.com/ZurichNLP/understanding-mbr
cd understanding-mbr

then install the required software before running any experiments.

Install required software

Create a new virtualenv that uses Python 3. Please make sure to run this command outside of any virtual Python environment:

./scripts/create_venv.sh

Important: then activate the environment by executing the source command that the script above prints.
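For illustration only (the actual path may differ; always use the exact command printed by create_venv.sh), the activation command will look something like:

# hypothetical path -- use the exact source command printed by create_venv.sh
source venvs/understanding-mbr/bin/activate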

Download and install required software:

./scripts/download.sh

The download script makes several important assumptions, among them:

  • your OS is Linux,
  • CUDA 10.2 is installed,
  • you have access to a GPU for training and translation,
  • your folder for temporary files is /var/tmp.

Edit the script to fit your needs before running it.
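As a rough sketch (these variable names are illustrative, not the script's actual contents), the kind of settings you may need to adapt look like this:

# illustrative only -- check scripts/download.sh for the real variable names
CUDA_VERSION=10.2    # adjust if a different CUDA version is installed
TMP_DIR=/var/tmp     # adjust if your temp files live elsewhere
USE_GPU=true         # training and translation assume a GPU is available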

Running experiments in general

Definition of "run"

We define a "run" as one complete experiment, in the sense that a run executes a pipeline of steps. Every run is completely self-contained: it does everything from downloading the data to evaluating a trained model.

The series of steps executed in a run is defined in

scripts/tatoeba/run_tatoeba_generic.sh

This script is generic and is never called on its own (many variables would be undefined); instead, all our scripts eventually call it.
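As an illustration, a concrete wrapper might look roughly like the sketch below; the variable names are ours and not necessarily those used by the actual scripts:

#!/bin/bash
# sketch of a concrete run script; variable names are illustrative

export SRC=dan                # source language
export TRG=epo                # target language
export MODEL_NAME=baseline    # identifies this particular run

# the generic script reads these variables and executes the pipeline steps
. scripts/tatoeba/run_tatoeba_generic.sh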

SLURM jobs

Individual steps in runs are submitted to a SLURM system. The generic run script:

scripts/tatoeba/run_tatoeba_generic.sh

will submit each individual step (such as translation or model training) as a separate SLURM job. Depending on the nature of the task, the script submits to a different cluster or requests different resources.
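For instance, a training step needs a GPU while evaluation can run on CPU, so the corresponding sbatch calls might differ roughly as follows (a sketch with hypothetical script names, not the repo's exact invocations):

# GPU-heavy step: request a GPU partition and a long time limit
sbatch --partition=gpu --gres=gpu:1 --time=24:00:00 train_step.sh

# light CPU step: a CPU partition and a short time limit suffice
sbatch --partition=generic --cpus-per-task=4 --time=01:00:00 evaluate_step.sh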

IMPORTANT: if

  • you do not work on a cluster that uses SLURM for job management, or
  • your cluster layout, resource naming etc. is different,

you absolutely need to modify or replace the generic script scripts/tatoeba/run_tatoeba_generic.sh before running anything. If you do not use SLURM at all, it might be possible to simply replace calls to scripts/tatoeba/run_tatoeba_generic.sh with scripts/tatoeba/run_tatoeba_generic_no_slurm.sh.

scripts/tatoeba/run_tatoeba_generic_no_slurm.sh is provided for convenience, but we have not tested it ourselves and cannot guarantee that it runs without error.
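If you go down that route, the substitution can be done mechanically, for example like this (a sketch; inspect the changes before running anything):

# find all call sites, then swap in the no-SLURM variant
grep -rl 'run_tatoeba_generic\.sh' scripts/tatoeba/ \
    | xargs sed -i 's/run_tatoeba_generic\.sh/run_tatoeba_generic_no_slurm.sh/g'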

Dry run

Before you run actual experiments, it can be useful to perform a dry run. A dry run attempts to execute all commands and create all files, but finishes within minutes and uses only the CPU. Dry runs help catch some bugs (such as file permission problems) early.

To dry-run a baseline system for the language pair DAN-EPO, run:

./scripts/tatoeba/dry_run_baseline.sh

Single (non-dry!) example run

To run the entire pipeline (from downloading the data to evaluating the trained model) for a single language pair from Tatoeba, run

./scripts/tatoeba/run_baseline.sh

This will train a model for the language pair DAN-EPO, and also execute all steps before and after model training.

Starting a group of runs

It is possible to submit several runs at the same time, using the same shell script. For instance, to run all required steps for a number of medium-resource language pairs, run

./scripts/tatoeba/run_mediums.sh

Recovering partial runs

Steps within a run pipeline depend on each other (in most cases via SLURM's sbatch --dependency=afterok mechanism). This means that if a job X fails, subsequent jobs that depend on X will never start. Completed steps exit immediately when re-run, so you can always safely re-run an entire pipeline if any step fails.
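The mechanics look roughly like the following sketch (hypothetical script and file names, not the repo's actual code):

# chain two steps: the second only starts if the first succeeds
prepare_id=$(sbatch --parsable prepare_step.sh)
sbatch --dependency=afterok:${prepare_id} train_step.sh

# inside a step script: a completed step exits immediately on re-run
if [[ -f "$MODEL_DIR/training.done" ]]; then    # MODEL_DIR is illustrative
    echo "step already completed, exiting"
    exit 0
fi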

Reproducing the results presented in our paper in particular

Training and evaluating the models

To create all models and statistics necessary to compare MBR with different utility functions:

scripts/tatoeba/run_compare_risk_functions.sh

To reproduce experiments on domain robustness:

scripts/tatoeba/run_robustness_data.sh

To reproduce experiments on copy noise in the training data:

scripts/tatoeba/run_copy_noise.sh

Creating visualizations and result tables

To reproduce exactly the tables and figures we show in the paper, use our Google Colab here:

https://colab.research.google.com/drive/1GYZvxRB1aebOThGllgb0teY8A4suH5j-?usp=sharing

This is possible only because the results of our experiments are hosted on our servers, from which Colab can retrieve the files.

Browse MBR samples

We also provide example pools of MBR samples for your perusal, as HTML files that can be viewed in any browser. The example HTML files are created by running the following script:

./scripts/tatoeba/local_html.sh

and are available at the following URLs:

Domain robustness

language pair | domain test set | link
DEU-ENG | it | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/deu-eng.domain_robustness.it.html
DEU-ENG | koran | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/deu-eng.domain_robustness.koran.html
DEU-ENG | law | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/deu-eng.domain_robustness.law.html
DEU-ENG | medical | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/deu-eng.domain_robustness.medical.html
DEU-ENG | subtitles | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/deu-eng.domain_robustness.subtitles.html

Copy noise in training data

language pair | amount of copy noise | link
ARA-DEU | 0.001 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.001.slice-test.html
ARA-DEU | 0.005 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.005.slice-test.html
ARA-DEU | 0.01 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.01.slice-test.html
ARA-DEU | 0.05 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.05.slice-test.html
ARA-DEU | 0.075 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.075.slice-test.html
ARA-DEU | 0.1 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.1.slice-test.html
ARA-DEU | 0.25 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.25.slice-test.html
ARA-DEU | 0.5 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/ara-deu.copy_noise.0.5.slice-test.html

language pair | amount of copy noise | link
ENG-MAR | 0.001 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.001.slice-test.html
ENG-MAR | 0.005 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.005.slice-test.html
ENG-MAR | 0.01 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.01.slice-test.html
ENG-MAR | 0.05 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.05.slice-test.html
ENG-MAR | 0.075 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.075.slice-test.html
ENG-MAR | 0.1 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.1.slice-test.html
ENG-MAR | 0.25 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.25.slice-test.html
ENG-MAR | 0.5 | https://files.ifi.uzh.ch/cl/archiv/2020/clcontra/eng-mar.copy_noise.0.5.slice-test.html