
Generating Symbolic Reasoning Problems with Transformer GANs

This is the implementation of the paper Generating Symbolic Reasoning Problems with Transformer GANs.

Constructing training data for symbolic reasoning domains is challenging: on the one hand, existing instances are typically hand-crafted and too few to train on directly; on the other hand, synthetically generated instances are often hard to evaluate in terms of their meaningfulness.

We provide a GAN and a Wasserstein GAN, both equipped with Transformer encoders, to generate sensible and challenging training data for symbolic reasoning domains. Even without autoregression, the GAN models produce syntactically correct problem instances. The generated data can be used as a substitute for real training data; in particular, training data can be generated from a real dataset that is by itself too small to be trained on directly.

For example, the models produced correct mathematical expressions and correct Linear-time Temporal Logic (LTL) formulas used in verification (the example images are omitted here; see the paper).
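To make the setup concrete, the core generator idea can be sketched as follows: a Transformer encoder maps a latent vector to logits for every token position in parallel, so the whole formula is emitted in one shot rather than token by token. Below is a minimal illustrative sketch in tensorflow; the class name, layer sizes, and vocabulary size are assumptions, not the repository's actual model.

import tensorflow as tf

class GeneratorSketch(tf.keras.Model):
    # Non-autoregressive generator sketch: latent vector -> logits for all
    # sequence positions at once. Hyperparameters are illustrative only.
    def __init__(self, seq_len=50, vocab_size=32, dim=128, heads=4, n_layers=6):
        super().__init__()
        self.proj = tf.keras.layers.Dense(seq_len * dim)
        self.reshape = tf.keras.layers.Reshape((seq_len, dim))
        self.pos = self.add_weight(name="pos", shape=(1, seq_len, dim),
                                   initializer="random_normal")
        self.attn = [tf.keras.layers.MultiHeadAttention(heads, dim // heads)
                     for _ in range(n_layers)]
        self.ffn = [tf.keras.Sequential([
            tf.keras.layers.Dense(4 * dim, activation="relu"),
            tf.keras.layers.Dense(dim)]) for _ in range(n_layers)]
        self.ln1 = [tf.keras.layers.LayerNormalization() for _ in range(n_layers)]
        self.ln2 = [tf.keras.layers.LayerNormalization() for _ in range(n_layers)]
        self.out = tf.keras.layers.Dense(vocab_size)

    def call(self, z):
        # spread the latent code over all positions and add positional embeddings
        h = self.reshape(self.proj(z)) + self.pos
        for attn, ffn, ln1, ln2 in zip(self.attn, self.ffn, self.ln1, self.ln2):
            h = ln1(h + attn(h, h))  # self-attention with residual connection
            h = ln2(h + ffn(h))      # position-wise feed-forward with residual
        return self.out(h)           # shape (batch, seq_len, vocab_size)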

Installation

The code is shipped as a Python package that can be installed by executing

pip install -e .

in the impl directory (where setup.py is located). Python version 3.6 or higher is required. Additional dependencies such as tensorflow are installed automatically. To generate datasets or to solve instances immediately after generation, the LTL satisfiability checking tool aalta is required as a binary. It can be obtained from Bitbucket (use the earliest commit in that repository). After compiling it, ensure that the aalta binary resides in the bin folder.
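A quick sanity check that the binary is in place (a sketch; assumes it is run from the impl directory):

import os
from pathlib import Path

aalta = Path("bin") / "aalta"  # expected location of the compiled binary
assert aalta.is_file(), f"{aalta} not found"
assert os.access(aalta, os.X_OK), f"{aalta} is not executable"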

Datasets

A zip file containing our original datasets can be downloaded from here. Unpack its contents to the datasets directory.

Dataset generation

Alternatively, datasets can be generated from scratch. The following procedure describes how to construct a dataset similar to the main base dataset (LTLbase):

First, generate a raw dataset by running

python -m tgan_sr.data_generation.generator -od datasets/LTLbase --splits all_raw:1 --timeout 2 -nv 10 -ne 1600000 -ts 50 --log-each-x-percent 1 --frac-unsat None

(You may want to rename the output directory so it does not overwrite the supplied dataset.) Then enter the newly created directory.

Optional: Visualize the dataset (like Figures 5 and 6 in the paper)

python -m tgan_sr.utils.analyze_dataset all_raw.txt formula,sat

To filter out duplicates and balance the classes per formula size

python -m tgan_sr.utils.update_dataset all_raw.txt unique - | python -m tgan_sr.utils.update_dataset - balance_per_size all_balanced.txt
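Conceptually, this filtering step first drops duplicate formulas and then subsamples so that every formula-size bucket contains equally many instances of each class. The following rough sketch illustrates the idea; it is not the actual tgan_sr.utils.update_dataset implementation, and the (formula, label) sample format is an assumption.

import random
from collections import defaultdict

def unique(samples):
    # keep only the first occurrence of each formula
    seen, out = set(), []
    for formula, label in samples:
        if formula not in seen:
            seen.add(formula)
            out.append((formula, label))
    return out

def balance_per_size(samples):
    # group by formula size, then keep equally many instances per class
    groups = defaultdict(lambda: defaultdict(list))
    for formula, label in samples:
        groups[len(formula)][label].append((formula, label))
    out = []
    for by_label in groups.values():
        n = min(len(v) for v in by_label.values())
        for v in by_label.values():
            out.extend(random.sample(v, n))
    return out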

Optional: Calculate relaxed satisfiability

python -m tgan_sr.utils.update_dataset all_balanced.txt relaxed_sat all_balanced_rs.txt

Optional: Visualize the dataset (like Figures 7 and 8 in the paper)

python -m tgan_sr.utils.analyze_dataset all_balanced_rs.txt formula,sat+relaxed

Split the data into training, validation, and test sets

python -m tgan_sr.utils.update_dataset all_balanced_rs.txt shuffle+split=train:8,val:1,test:1
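The 8:1:1 shuffle-and-split behaves like the following sketch (illustrative only, not the actual implementation):

import random

def shuffle_split(samples, weights=(("train", 8), ("val", 1), ("test", 1))):
    # shuffle once, then cut the list proportionally to the given weights
    random.shuffle(samples)
    total = sum(w for _, w in weights)
    splits, start = {}, 0
    for name, w in weights:
        end = start + len(samples) * w // total
        splits[name] = samples[start:end]
        start = end
    splits[weights[-1][0]].extend(samples[start:])  # remainder joins the last split
    return splits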

Experiments (training)

The folder configs contains a JSON file for each type of experiment in the paper. Settings for the different hyperparameters can easily be adjusted.
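Because the configs are plain JSON, variants are easy to derive programmatically. A small sketch (the input file name exists in configs; the output name and the σ value are arbitrary examples):

import json

# derive a noise-sigma variant of the standard WGAN config
with open("configs/wgan_gp10_nl6-4_nc2_bs1024.json") as f:
    params = json.load(f)
params["gan_sigma_real"] = 0.1  # example value for the added-noise experiment below
with open("configs/wgan_gp10_sigma0.1.json", "w") as f:
    json.dump(params, f, indent=4)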

A model can be trained like this:

python -m tgan_sr.train.gan --run-name NAME --params-file configs/CONFIG.json

During training, relevant metrics are logged to train_custom in the run's directory and can be viewed with TensorBoard afterwards.
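For example (assuming the run directory NAME was created under the current working directory):

tensorboard --logdir NAME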

A list of all configurations and corresponding JSON files:

  • Standard WGAN: wgan_gp10_nl6-4_nc2_bs1024.json
  • Standard GAN: gan_nl6-4_nc2_bs1024.json
  • Different σ for added noise: add the parameter "gan_sigma_real" and assign the desired value
  • WGAN on 10K-sized base dataset: n10k_wgan_gp10_nl6-4_nc2_bs512.json
  • Sample data from the trained WGAN: sample_n10k_wgan_gp10_nl6-4_nc2_bs512.json (ensure the "load_from" field matches your trained run name)
  • Classifier on default dataset: class_nl4_bs1024.json
  • Classifier on generated dataset: class_Generated_nl4_bs1024.json
  • WGAN with included classifier: wgan+class_nl6-3s1_nc2_bs1024.json
  • WGAN with absolute uncertainty objective: wgan+class+uncert-abs_nl6-3s1_nc2_bs1024.json (ensure the "load_from" field matches your pre-trained run name)
  • WGAN with entropy uncertainty objective: wgan+class+uncert-entr_nl6-3s1_nc2_bs1024.json (ensure the "load_from" field matches your pre-trained run name)
  • Sample data from the trained WGAN with entropy uncertainty objective: sample_wgan+class+uncert-entr_nl6-3s1_nc2_bs1024.json (ensure the "load_from" field matches your trained run name)
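For reference, the gp10 in the WGAN config names above corresponds to the standard gradient-penalty coefficient λ = 10. Below is a generic sketch of the WGAN-GP critic loss (Gulrajani et al., 2017); it is not this repository's code, and it assumes real and fake are (batch, length, channels) tensors such as soft one-hot token sequences.

import tensorflow as tf

def critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein critic loss with gradient penalty on random interpolates
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        score = critic(interp)
    grads = tape.gradient(score, interp)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
    penalty = tf.reduce_mean(tf.square(norms - 1.0))
    return (tf.reduce_mean(critic(fake)) - tf.reduce_mean(critic(real))
            + gp_weight * penalty)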

Evaluation

To test a trained classifier on an arbitrary dataset (validation split):

python -m tgan_sr.train.gan --run-name NAME --test --ds-name DATASET_NAME

The model will be automatically loaded from the latest checkpoint in the run's directory.

How to Cite

@article{TGAN-SR,
    title = {Generating Symbolic Reasoning Problems with Transformer GANs},
    author = {Kreber, Jens U and Hahn, Christopher},
    journal = {arXiv preprint},
    year = {2021}
}