Unicorn on Rainbow

Neural models of common sense.

This repository is for the paper: Unicorn on Rainbow: A Universal Commonsense Reasoning Model on a New Multitask Benchmark. Unicorn on Rainbow introduces a new evaluation, the cost equivalent curve, which compares models in terms of their cost-benefit trade-offs. Using cost equivalent curves, we conduct a large-scale empirical study of intermediate-task transfer for common sense on a new benchmark collection of commonsense reasoning datasets, Rainbow. With findings from this study, we create a new state-of-the-art model for commonsense reasoning: Unicorn.

Jump to a section of the readme to accomplish different goals:

  • Rainbow: Read about and download data for Rainbow, our new commonsense reasoning benchmark.
  • Unicorn: Get up and running with Unicorn, our state-of-the-art commonsense reasoning model.
  • Cost Equivalent Curves: Learn how to generate cost equivalent curves for your own predictions.
  • Experimental Results: Download and analyze the results from our hundreds of experiments.
  • Setup: Get set up to run the code in this repository.
  • Quickstart: Run the code in this repo.
  • Citation: Cite the Unicorn on Rainbow paper.
  • Contact: Reach out with questions or comments.

Note: This repository is intended for research. There are no plans for ongoing maintenance.

Rainbow

Rainbow brings together six pre-existing commonsense reasoning benchmarks: aNLI, Cosmos QA, HellaSWAG, Physical IQa, Social IQa, and WinoGrande. These commonsense reasoning benchmarks span both social and physical common sense.

Note: Rainbow pins these datasets to specific versions. To make sure you're using the correct data, please download those versions below.

Getting the Data

Rainbow preprocesses all of the datasets into a text-to-text format for ease of modeling.

Alternatively, you can download the individual tasks and preprocess them yourself.

All checksums are sha256. To compute the checksum with openssl, run:

$ openssl sha256 $FILE_PATH
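
If you'd rather verify a download from Python, the standard library's hashlib computes the same digest. This is a small sketch, not part of the repository's tooling:

import hashlib
import sys

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    # Read the file in chunks so large archives don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256sum(sys.argv[1]))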

Submitting to the Leaderboard

If you develop a model for Rainbow, please feel free to submit to the leaderboard!

Unicorn

Unicorn (a UNIversal COmmonsense Reasoning Model) solves commonsense reasoning tasks in the text-to-text format. In principle, Unicorn may be trained on any NLP task: simply feed it text input and ask it to predict text output. Unicorn derives from T5, supercharging it for commonsense reasoning tasks and achieving state-of-the-art results across a number of popular benchmarks, including Rainbow and CommonsenseQA.
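
To make the text-to-text setup concrete, here is an illustrative sketch of how a multiple-choice commonsense question becomes a single input string and a target string. The actual serialization is defined by this repository's preprocessing (see bin/prepare.py); the field names and formatting below are hypothetical:

# Illustrative sketch only: the real input/target serialization is produced by
# this repository's preprocessing, so these names and formats are made up.
question = "To stay warm on a cold night, which would you rather have?"
choices = ["a wool blanket", "a paper towel"]

# The model receives a single string as input...
model_input = "question: " + question + " choices: " + " ".join(
    f"({i}) {choice}" for i, choice in enumerate(choices, start=1)
)

# ...and is trained to generate the answer as a string.
target = "(1) a wool blanket"

print(model_input)
print(target)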

To try Unicorn on your own data, first download the weights, then fine-tune and evaluate the model on your data.

Downloading the Weights

To run Unicorn, you'll first need to download its weight files into a local directory or a Google Cloud Storage path. Using gsutil:

gsutil cp -r \
  gs://ai2-mosaic-public/projects/rainbow/v1.0/unicorns/lr-2e-3_batch-size-32 \
  $DST

Where $DST is the destination directory.

Reproducing our Results

In Unicorn on Rainbow, we trained different Unicorns that were first multitasked on Rainbow using different hyper-parameters. The checkpoint we've made available had the best performance most often. If you need the other checkpoints, please email the authors.

Cost Equivalent Curves

Cost equivalent curves compare the cost-benefit trade-offs that different techniques offer. In particular, a cost equivalent curve plots the baseline's and the new technique's equivalent costs, i.e. the costs at which they achieve the same performance. For example, if cost is measured as the number of training examples and performance is measured by accuracy, then the cost equivalent curve shows how many examples the baseline needs to match the new technique's accuracy.

The plot_cost_equivalent_curves function in bin/create-multi-experiment-figures.py offers example code for how to create cost equivalent curves in Python.
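
For a rough sense of the underlying computation, here is a minimal sketch that finds, for each cost of the new technique, the baseline cost needed to reach the same performance via linear interpolation. This is not the repository's implementation, and the numbers are made up:

import numpy as np

def equivalent_baseline_cost(baseline_costs, baseline_perf, target_perf):
    # Baseline cost needed to reach each target performance, by linear
    # interpolation. Assumes baseline_perf increases with baseline_costs.
    return np.interp(target_perf, baseline_perf, baseline_costs)

# Made-up example: cost = number of training examples, performance = accuracy.
baseline_costs = np.array([1_000, 2_000, 4_000, 8_000, 16_000])
baseline_perf = np.array([0.55, 0.60, 0.66, 0.71, 0.75])
new_costs = baseline_costs
new_perf = np.array([0.60, 0.65, 0.70, 0.73, 0.75])

# The cost equivalent curve plots new_costs (x-axis) against the baseline
# costs that reach the same performance (y-axis).
equivalent_costs = equivalent_baseline_cost(baseline_costs, baseline_perf, new_perf)
print(list(zip(new_costs, equivalent_costs)))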

Stay Tuned! We'll soon be releasing an easy-to-use, standalone package for creating cost equivalent curves. Check back here for it in the future.

Experimental Results

For Unicorn on Rainbow, we ran hundreds of experiments. We've made available the results from all those experiments in order to facilitate future research. For example, you may want those thousands of training curves to study hyper-parameter tuning or how loss evolves over training.

Among other things, you'll find:

  • predictions on dev from every checkpoint saved during training
  • training curves (training step vs. loss)
  • learning curves (dataset size vs. accuracy)
  • hyper-parameter tuning
  • all tables and figures from the paper
  • and more...

Our hope is that researchers can reuse this large collection of experiments to derive new practical and research insights.

Downloading the Results

Five collections of results are available, corresponding to the artifacts used in the analysis pipeline described below:

  • rainbow-predictions.tar.gz
  • rainbow-experiments.tar.gz
  • rainbow-results.tar.gz
  • rainbow-figures.tar.gz
  • rainbow-latex-tables.tar.gz

All checksums are sha256. To compute the checksum with openssl, run:

$ openssl sha256 $FILE_PATH

NOTE: The learning curve experiments varied the number of training examples up to 16,000; however, CommonsenseQA has fewer than 16,000 training examples. Thus, for CommonsenseQA, sizes above 9,741 are truncated to that value. This subtlety is taken care of by the data processing pipeline when the experiments are processed into the results tables, so it only affects rainbow-predictions.tar.gz and rainbow-experiments.tar.gz.
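
If you analyze rainbow-predictions.tar.gz or rainbow-experiments.tar.gz yourself, you may want to apply the same truncation. A one-function sketch (the dataset identifier string here is hypothetical; match whatever naming the extracted files actually use):

# CommonsenseQA's training set has 9,741 examples, so requested learning-curve
# sizes above that amount to training on the full dataset.
COMMONSENSEQA_TRAIN_SIZE = 9_741

def effective_train_size(dataset: str, requested_size: int) -> int:
    # "commonsenseqa" is a hypothetical identifier used only for illustration.
    if dataset == "commonsenseqa":
        return min(requested_size, COMMONSENSEQA_TRAIN_SIZE)
    return requested_size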

Replicating Our Analysis Pipeline

All the scripts to replicate our analysis pipeline reside in bin/. In order to run the scripts, you'll need to get set up for development.

The overall pipeline is as follows:

+----------------------------+
| rainbow-predictions.tar.gz |
+----------------------------+
              |
              | (bin/organize-experiments)
              V
+----------------------------+
| rainbow-experiments.tar.gz |
+----------------------------+
              |
              | (bin/generate-tables.py)
              V
  +------------------------+
  | rainbow-results.tar.gz |
  +------------------------+
         |         |
         |         | (bin/generate-latex-tables.py)
         |         V
         |     +-----------------------------+
         |     | rainbow-latex-tables.tar.gz |
         |     +-----------------------------+
         |
         | (bin/create-single-experiment-figures.py)
         | (bin/create-multi-experiment-figures.py)
         V
+------------------------+
| rainbow-figures.tar.gz |
+------------------------+

To run the pipeline, start by downloading rainbow-predictions.tar.gz (see Downloading the Results above).

Use bin/organize-experiments to produce rainbow-experiments.tar.gz:

$ tar -xf rainbow-predictions.tar.gz
$ bin/organize-experiments rainbow-predictions $DST

Where $DST is the desired destination directory (for example the current directory, .).

Use bin/generate-tables.py to produce rainbow-results.tar.gz:

$ bin/generate-tables.py rainbow-experiments rainbow-results

Use bin/create-single-experiment-figures.py and bin/create-multi-experiment-figures.py to create rainbow-figures.tar.gz:

$ bin/create-single-experiment-figures.py rainbow-results rainbow-figures/single-experiment
$ bin/create-multi-experiment-figures.py rainbow-results rainbow-figures/multi-experiment

And use bin/generate-latex-tables.py to produce rainbow-latex-tables.tar.gz:

$ bin/generate-latex-tables.py rainbow-results rainbow-latex-tables

All scripts except bin/organize-experiments are also self-documenting, so pass --help to any of them for more information.

Setup

This project requires Python 3.6 or above.

First, install the project's dependencies:

./bin/install

Next, make sure you have the following environment variables set:

  1. RAINBOW_DATASETS_DIR: The directory for storing all relevant datasets.
  2. RAINBOW_PREPROCESSED_DATASETS_DIR: The directory for storing the preprocessed dataset split files.
  3. RAINBOW_TFDS_DATASETS_DIR: The directory for storing the TFDS (tensorflow datasets) datasets.

Training requires TPUs. For training, all directories should point to Google Cloud Storage prefixes. Additionally, you'll need the following environment variables:

  1. PROJECT: Your Google Cloud project's ID.
  2. ZONE: Your Google Cloud virtual machine's zone.
  3. TPU_NAME: Your TPU's name.
  4. TPU_TOPOLOGY: Your TPU's topology.
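
Before launching anything, it can help to confirm that everything is set. A small sanity-check sketch (not part of the repository's scripts):

import os

REQUIRED = [
    "RAINBOW_DATASETS_DIR",
    "RAINBOW_PREPROCESSED_DATASETS_DIR",
    "RAINBOW_TFDS_DATASETS_DIR",
    # Only needed when training on TPUs:
    "PROJECT",
    "ZONE",
    "TPU_NAME",
    "TPU_TOPOLOGY",
]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit("Missing environment variables: " + ", ".join(missing))
print("All required environment variables are set.")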

Then, download and prepare all the datasets for text-to-text modeling:

$ ./bin/prepare.py --help
Usage: prepare.py [OPTIONS]

  Prepare all relevant datasets for text-to-text modeling.

  Download to and read the datasets from --src, transform them into CSVs
  suitable for text-to-text models, then write the results to --dst. Google
  storage paths are supported.

Options:
  --src TEXT        The directory to which to download all the relevant
                    datasets. Defaults to the RAINBOW_DATASETS_DIR environment
                    variable.  [required]
  --dst TEXT        The directory to which to write the preprocessed dataset
                    files. Defaults to the RAINBOW_PREPROCESSED_DATASETS_DIR
                    environment variable.  [required]
  --force-download  Force downloads of all the datasets, otherwise only
                    missing datasets will be downloaded.
  --help            Show this message and exit.

Finally, verify your installation:

./bin/verify

Quickstart

Before following this section, make sure you've done the Setup.

Fine-tuning

To fine-tune the model, use bin/fine-tune.py:

$ ./bin/fine-tune.py --help
Usage: fine-tune.py [OPTIONS] MIXTURE RESULTS_DIR

  Fine-tune the model on MIXTURE, writing results to RESULTS_DIR.

Options:
  --pretrained-model TEXT         The path to or name of the pretrained model.
                                  Defaults to 3B.
  --n-steps INTEGER               The number of gradient updates. Defaults to
                                  25,000.
  --learning-rate FLOAT           The learning rate to use for training.
                                  Defaults to 3e-3.
  --batch-size INTEGER            The batch size to use for training. For
                                  efficient training on the TPU, choose a
                                  multiple of either 8 or 128. Defaults to 16.
  --model-parallelism INTEGER     The degree of model parallelism to use.
                                  Defaults to 8.
  --save-checkpoints-steps INTEGER
                                  The number of steps to take before saving a
                                  checkpoint. Defaults to 5000.
  --n-checkpoints-to-keep INTEGER
                                  The number of checkpoints to keep during
                                  fine-tuning. Defaults to 4.
  --tpu-name TEXT                 The name of the TPU. Defaults to the
                                  TPU_NAME environment variable.  [required]
  --tpu-topology TEXT             The topology of the TPU. Defaults to the
                                  TPU_TOPOLOGY environment variable.
                                  [required]
  --help                          Show this message and exit.

Evaluation

To evaluate the model, use bin/evaluate.py:

$ ./bin/evaluate.py --help
Usage: evaluate.py [OPTIONS] MIXTURE RESULTS_DIR

  Evaluate the model located at RESULTS_DIR on MIXTURE.

Options:
  --batch-size INTEGER         The batch size to use for prediction. For
                               efficient prediction on the TPU, choose a
                               multiple of either 8 or 128. Defaults to 64.
  --model-parallelism INTEGER  The degree of model parallelism to use.
                               Defaults to 8.
  --tpu-name TEXT              The name of the TPU. Defaults to the TPU_NAME
                               environment variable.  [required]
  --tpu-topology TEXT          The topology of the TPU. Defaults to the
                               TPU_TOPOLOGY environment variable.  [required]
  --help                       Show this message and exit.

Tests and Code Quality

The code is formatted with black. You can run the formatter using the bin/format script:

$ ./bin/format

To run code quality checks, use the bin/verify script:

$ ./bin/verify

For fine-grained control of which tests to run, use pytest directly:

$ pytest

You can also skip slower tests by passing the --skip-slow (-s) flag:

$ pytest --skip-slow

Citation

Unicorn on Rainbow is an AAAI 2021 paper. Please check back here soon for the bibtex citation.

Contact

For public, non-sensitive questions and concerns, please file an issue on this repository.

For private or sensitive inquiries, email Mosaic through the allenai.org website.
