Open-L2O: A Comprehensive and Reproducible Benchmark for Learning to Optimize Algorithms


This repository establishes the first comprehensive benchmark of existing learning-to-optimize (L2O) approaches across a range of problems and settings. We release our software implementation and data as the Open-L2O package, for reproducible research and fair benchmarking in the L2O field. [Paper]

License: MIT

Overview

What is learning to optimize (L2O)?

L2O (learning to optimize) aims to replace manually designed analytic optimization algorithms (SGD, RMSProp, Adam, etc.) with update rules that are learned from data.

How does L2O work?

An L2O method parameterizes the optimizer as a learnable function and fits it from data: it gains experience from a set of training optimization tasks in a principled, automatic way.
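
For intuition, here is a minimal PyTorch sketch of what such a learned update rule can look like, assuming a coordinate-wise LSTM in the spirit of L2O-DM; the class and variable names are illustrative, not this package's API.

import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Coordinate-wise LSTM update rule (L2O-DM-style sketch)."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, num_layers=2)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, grad, state=None):
        # Treat each parameter coordinate as an independent batch element;
        # detaching the input gradient drops second-order terms, a common
        # simplification when training L2O models.
        g = grad.detach().view(1, -1, 1)       # (seq=1, batch=n_coords, feat=1)
        out, state = self.lstm(g, state)
        update = self.head(out).view_as(grad)  # learned per-coordinate step
        return update, state

# Hand-designed rule:  theta = theta - lr * grad
# Learned rule:        theta = theta + update, with (update, state) as above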

What can L2O do for you?

L2O is particularly suitable when the same type of optimization problem, over a specific distribution of data, must be solved repeatedly. Compared to classic methods, L2O has been shown to find higher-quality solutions and/or to converge much faster on many problems.
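
A sketch of how that experience is gained, reusing the LearnedOptimizer above: sample optimizees from a task distribution (random quadratics here, purely for illustration), unroll the learned optimizer for a few steps, and backpropagate the accumulated loss through the trajectory.

import torch

def sample_quadratic(n=10):
    """Draw one training task: f(theta) = ||A theta - b||^2."""
    A, b = torch.randn(n, n), torch.randn(n)
    return lambda theta: ((A @ theta - b) ** 2).sum()

opt_net = LearnedOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for _ in range(1000):                          # meta-training iterations
    f = sample_quadratic()
    theta = torch.zeros(10, requires_grad=True)
    state, meta_loss = None, 0.0
    for _ in range(20):                        # unrolled inner steps
        loss = f(theta)
        # retain_graph keeps the trajectory alive for the meta-backward below
        grad, = torch.autograd.grad(loss, theta, retain_graph=True)
        update, state = opt_net(grad, state)
        theta = theta + update
        meta_loss = meta_loss + loss
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()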

What are the open research questions?

  • There remain significant gaps in theory and practicality between manually designed optimizers and existing L2O models.

Main Results

Learning to optimize sparse recovery

Learning to optimize Lasso functions

Learning to optimize non-convex Rastrigin functions

Learning to optimize neural networks

Supported Model-based Learnable Optimizers

All code is available here. A minimal sketch of the unrolled LISTA iteration follows this list.

  1. LISTA (feed-forward form) from Learning fast approximations of sparse coding [Paper]
  2. LISTA-CP from Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds [Paper]
  3. LISTA-CPSS from Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds [Paper]
  4. LFISTA from Understanding Trainable Sparse Coding via Matrix Factorization [Paper]
  5. LAMP from AMP-Inspired Deep Networks for Sparse Linear Inverse Problems [Paper]
  6. ALISTA from ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA [Paper]
  7. GLISTA from Sparse Coding with Gated Learned ISTA [Paper]
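
These networks unroll a fixed number of ISTA-style iterations and make the matrices and thresholds learnable. Below is an illustrative PyTorch sketch of the base LISTA iteration x_{k+1} = soft_threshold(W_k y + S_k x_k, theta_k), assuming untied per-layer weights; shapes and names are not the repository's API.

import torch
import torch.nn as nn

def soft_threshold(z, theta):
    return torch.sign(z) * torch.relu(z.abs() - theta)

class LISTA(nn.Module):
    """K unrolled, learnable ISTA iterations (feed-forward form)."""
    def __init__(self, m, n, n_layers=16):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(n_layers))
        self.S = nn.ModuleList(nn.Linear(n, n, bias=False) for _ in range(n_layers))
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))
        self.n = n

    def forward(self, y):                    # y: (batch, m) measurements
        x = y.new_zeros(y.shape[0], self.n)  # x: (batch, n) sparse code
        for W, S, theta in zip(self.W, self.S, self.theta):
            x = soft_threshold(W(y) + S(x), theta)
        return x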

Supported Model-free Learnable Optimizers

  1. L2O-DM from Learning to learn by gradient descent by gradient descent [Paper] [Code]
  2. L2O-RNNProp from Learning Gradient Descent: Better Generalization and Longer Horizons [Paper] [Code]
  3. L2O-Scale from Learned Optimizers that Scale and Generalize [Paper] [Code]
  4. L2O-enhanced from Training Stronger Baselines for Learning to Optimize [Paper] [Code]
  5. L2O-Swarm from Learning to Optimize in Swarms [Paper] [Code]
  6. L2O-Jacobian from HALO: Hardware-Aware Learning to Optimize [Paper] [Code]
  7. L2O-Minmax from Learning A Minimax Optimizer: A Pilot Study [Paper] [Code]

Supported Optimizees

Convex Functions:

  • Quadratic
  • Lasso

Non-convex Functions:

  • Rastrigin

Minmax Functions:

  • Saddle
  • Rotated Saddle
  • Seesaw
  • Matrix Game

Neural Networks:

  • MLPs on MNIST
  • ConvNets on MNIST and CIFAR-10
  • LeNet
  • NAS-searched architectures
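
For reference, here are sketches of a few of these optimizees under their conventional textbook definitions; the package's own interfaces and exact parameterizations may differ.

import torch

def quadratic(theta, A, b):
    """Convex:  f(theta) = ||A theta - b||^2."""
    return ((A @ theta - b) ** 2).sum()

def lasso(x, A, b, lam=0.1):
    """Convex, non-smooth:  f(x) = 0.5 ||A x - b||^2 + lam ||x||_1."""
    return 0.5 * ((A @ x - b) ** 2).sum() + lam * x.abs().sum()

def rastrigin(theta):
    """Non-convex with many local minima:
    f(theta) = 10 d + sum_i (theta_i^2 - 10 cos(2 pi theta_i))."""
    d = theta.numel()
    return 10.0 * d + (theta ** 2 - 10.0 * torch.cos(2 * torch.pi * theta)).sum()

def saddle(x, y):
    """One common minmax instance:  min_x max_y  ||x||^2 - ||y||^2."""
    return (x ** 2).sum() - (y ** 2).sum()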

Other Resources

  • This is a PyTorch implementation of L2O-DM. [Code]
  • This is the original L2O-Swarm repository. [Code]
  • This is the original L2O-Jacobian repository. [Code]

Future Work

  • A TensorFlow 2.0 implementation of the toolbox (v2) with a unified framework and consolidated library dependencies.

Cite

@misc{chen2021learning,
      title={Learning to Optimize: A Primer and A Benchmark}, 
      author={Tianlong Chen and Xiaohan Chen and Wuyang Chen and Howard Heaton and Jialin Liu and Zhangyang Wang and Wotao Yin},
      year={2021},
      eprint={2103.12828},
      archivePrefix={arXiv},
      primaryClass={math.OC}
}