A deep learning framework for historical document image analysis

Overview

DIVA-DAF

Built on PyTorch Lightning and configured with Hydra.

Description

A deep learning framework for historical document image analysis.

How to run

Install dependencies

# clone project
git clone https://github.com/DIVA-DIA/unsupervised_learning.git
cd unsupervised_learning

# create conda environment (IMPORTANT: needs Python 3.8+)
conda env create -f conda_env_gpu.yaml

# activate the environment using .autoenv
source .autoenv

# install requirements
pip install -r requirements.txt

Train the model with the default configuration. Care: you need to change the value of data_dir in config/datamodule/cb55_10_cropped_datamodule.yaml so that it points to your dataset.
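
For example, the relevant entry might look like this (the path is illustrative — point it to wherever your copy of the dataset lives):

# config/datamodule/cb55_10_cropped_datamodule.yaml (excerpt)
data_dir: /data/cb55-10-cropped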

# default run based on config/config.yaml
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1

Train using GPU

# [default] train on all available GPUs
python run.py trainer.gpus=-1

# train on one GPU
python run.py trainer.gpus=1

# train on two GPUs
python run.py trainer.gpus=2

# train on CPU
python run.py trainer.accelerator=ddp_cpu

Train using CPU for debugging

# train on CPU
python run.py trainer.accelerator=ddp_cpu trainer.precision=32

Train the model with a chosen experiment configuration from configs/experiment/

python run.py +experiment=experiment_name

You can override any parameter from the command line like this:

python run.py trainer.max_epochs=20 datamodule.batch_size=64

Setup PyCharm

  1. Fork this repo
  2. Clone the repo to your local filesystem (git clone CLONELINK)
  3. Clone the repo onto your remote machine
  4. Move into the folder on your remote machine and create the conda environment (conda env create -f conda_env_gpu.yaml)
  5. Run source .autoenv in the root folder on your remote machine (activates the environment)
  6. Open the folder in PyCharm (File -> open)
  7. Add the interpreter (Preferences -> Project -> Python interpreter -> top left gear icon -> add... -> SSH Interpreter) and follow the instructions (set the correct mapping to enable deployment)
  8. Upload the files (deployment)
  9. Create a wandb account (wandb.ai)
  10. Log in to your remote machine via SSH
  11. Go to the root folder of the framework and activate the environment (source .autoenv OR conda activate unsupervised_learning)
  12. Log into wandb. Execute wandb login and follow the instructions
  13. Now you should be able to run the basic experiment from PyCharm

Loading models

You can load the different model parts (backbone or header) as well as the whole task. To load the backbone or the header, add the field path_to_weights to your experiment config, e.g.

model:
    header:
        path_to_weights: /my/path/to/the/pth/file
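
The same field works for the backbone, e.g. to load pre-trained backbone weights (the path is a placeholder):

model:
    backbone:
        path_to_weights: /my/path/to/the/backbone/pth/file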

To load the whole task, provide the path to the task checkpoint to the trainer via the field resume_from_checkpoint, e.g.

trainer:
    resume_from_checkpoint: /path/to/.ckpt/file
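
Assuming the Hydra override syntax used elsewhere in this README, the same can also be set from the command line (the path is a placeholder; add a leading + if the key is not already present in the trainer config):

python run.py trainer.resume_from_checkpoint=/path/to/checkpoint.ckpt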

Freezing model parts

You can freeze either part of the model (backbone or header) with the freeze flag in the config. For example, to freeze the backbone, on the command line:

python run.py +model.backbone.freeze=True

In the config (e.g. model/backbone/baby_unet_model.yaml):

...
freeze: True
...

CARE: You cannot train a model that has no trainable parameters (e.g. when both backbone and header are frozen).

Selection in datasets

If you use the selection key, you can either pass an int, which takes the first n files, or a list of strings to filter the dataset. If you are using a full-page dataset, be aware that the selection list contains file names without the extension.
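
A minimal sketch of both forms (where exactly the key lives depends on the datamodule config; the file names are illustrative):

# take only the first n files
selection: 5

# or: filter a full-page dataset by file name (without the extension)
selection:
    - page_0001
    - page_0002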

Cite us

@misc{vögtlin2022divadaf,
      title={DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis}, 
      author={Lars Vögtlin and Paul Maergner and Rolf Ingold},
      year={2022},
      eprint={2201.08295},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Not working with ddp_cpu

    Describe the bug If we want to run the framework with ddp_cpu as accelerator, it won't work because of a working directory problem.

    To Reproduce python run.py trainer.accelerator='ddp_cpu' trainer.precision=32

    Expected behavior We can use ddp_cpu to debug our system

    Additional context To avoid this problem at the moment we can just use the full path to the run.py file ($PWD/run.py).
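
    For example, the workaround command looks like this:

    python $PWD/run.py trainer.accelerator=ddp_cpu trainer.precision=32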

    Checklist

    • [ ] Add a warning if ddp_cpu and not precision=32
    bug If time Pipeline 
    opened by lvoegtlin 3
  • Use deepspeed to speed up the training

    Is your feature request related to a problem? Please describe. To accelerate the training we could use the deepspeed plugin

    Describe the solution you'd like Make it possible to activate deepspeed through the config

    Checklist

    • [x] Test deepspeed
    • [ ] Include it into the config system
    wontfix If time Pipeline 
    opened by lvoegtlin 3
  • Load model checkpoint instead of default init

    Differentiate between train, test, and train-and-test. Already started with two parameters, train and test, to define which part of the process should be done. We still need to include loading from a checkpoint for fine-tuning or just testing.

    https://pytorch-lightning.readthedocs.io/en/stable/common/weights_loading.html

    Possibly, weights_only would work.

    We need to make our own callback which inherits from ModelCheckpoint and overrides/adds a model-only checkpoint save (https://github.com/PyTorchLightning/pytorch-lightning/blob/bca5adf6de1ae74c7103839aac54c8648464bee6/pytorch_lightning/callbacks/model_checkpoint.py#L485)
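
    A minimal sketch of the weights_only idea using the built-in callback (arguments shown are illustrative; a custom subclass would only be needed if this is not enough):

    from pytorch_lightning.callbacks import ModelCheckpoint

    # save_weights_only stores only the model state_dict, not the optimizer/trainer state
    checkpoint_callback = ModelCheckpoint(
        dirpath="checkpoints/",
        monitor="val_loss",
        save_weights_only=True,
    )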

    Checklist

    • [x] test check if path_to_weights is set
    • [x] load model state from path
    • [x] create a generic model which takes an encoder and a header (configs)
    • [x] #15
    • [x] save model with a callback (create callback)
    • [x] if we are just testing we need a path_to_weights for both
    Important Module Pipeline 
    opened by lvoegtlin 3
  • Updating dependencies

    Description

    Updates PL, torchmetrics, and pytest to the newest versions. Also introduces code coverage with SonarCloud. Each PR will now be checked for test coverage.

    How to Test/Run?

    pytest

    opened by lvoegtlin 2
  • Fixed problem with multiple empty folders in checkpoints

    Description

    The checkpoint callback created the checkpoints in a dedicated epoch folder. The folder should get deleted once it is no longer the best, but this also did not work with the built-in version of the model checkpoint callback. Solved it by doing a clean-up at the end of the experiment.

    How to Test/Run?

    python run.py trainer.max_epochs=20

    Something missing?

    opened by lvoegtlin 2
  • Feature/datamodule for gif imgs

    Description

    A datamodule that takes advantage of the index format. It no longer determines the classes by the color but takes the classes directly from the raw image and uses the palette as class encoding.
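
    A rough sketch of the idea with an indexed ("P"-mode) image (the file name is hypothetical):

    from PIL import Image
    import numpy as np

    img = Image.open("page_gt.gif")   # indexed ground-truth image
    class_ids = np.array(img)         # H x W array of class indices, no colour lookup needed
    palette = img.getpalette()        # flat [r, g, b, ...] list acting as the class encoding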

    How to Test/Run?

    pytest or python run.py experiment=development_baby_unet_indexed.yaml

    opened by lvoegtlin 2
  • DDP metric bias

    Is your feature request related to a problem? Please describe. When running an experiment with DDP we have a little data bias if the dataset is not divisible by batch_size * num_processors. To make the users aware of this problem we can add a warning if num_samples % (batch_size * num_processors) != 0. Problem described here

    Describe the solution you'd like Raising an error if the condition from above is not met. Also, add a flag to ignore this error (ignore_ddp_bias)

    Describe alternatives you've considered Solve it with the ddp join function from PyTorch but it is very hard to hack that into pl.
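
    A minimal sketch of the proposed check (function and flag names follow the wording above and are not an existing API):

    import warnings

    def check_ddp_bias(num_samples: int, batch_size: int, num_processes: int,
                       ignore_ddp_bias: bool = False) -> None:
        # DDP pads/duplicates samples when the dataset does not split evenly across
        # processes, which slightly biases the reported metrics.
        if num_samples % (batch_size * num_processes) != 0:
            msg = (f"num_samples={num_samples} is not divisible by "
                   f"batch_size * num_processes = {batch_size * num_processes}; "
                   "metrics will be slightly biased.")
            if ignore_ddp_bias:
                warnings.warn(msg)
            else:
                raise ValueError(msg)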

    Checklist

    • [x] Create check and warning
    • [x] Add shuffle and drop_last_batch options to datamodule config
    • [x] Add shuffle/drop_last_batch to default config files
    enhancement Pipeline 
    opened by lvoegtlin 2
  • Add the strict parameter to make it possible to load non-fitting models

    Describe the feature

    Make it possible to transfer weights between similar models

    Describe the solution you'd like

    A parameter strict in the models which defines how to load the weights if the model does not exactly fit the weights file
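
    This presumably maps to the strict argument of PyTorch's load_state_dict; a rough sketch (model and path are placeholders):

    import torch

    model = torch.nn.Linear(10, 2)  # placeholder model
    state_dict = torch.load("/my/path/to/weights.pth", map_location="cpu")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)        # log missed keys
    print("unexpected keys:", unexpected)  # log unexpected keys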

    Checklist

    • [x] Add this parameter in the model config
    • [x] Use it to load the model
    • [x] Add log for missed/unexpected keys
    If time Module Pipeline 
    opened by lvoegtlin 2
  • Loss function as config

    Is your feature request related to a problem? Please describe. Make it possible to define the loss function in the config.

    Describe the solution you'd like Define some default loss functions and create a config for them. Then hand the criterion object over to the task at the beginning of training.
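
    A possible sketch of such a config, using the Hydra instantiation mechanism the framework already relies on (the file path is illustrative):

    # configs/loss/crossentropy.yaml
    _target_: torch.nn.CrossEntropyLoss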

    Checklist

    • [ ] define 4 basic losses (Xentropy, L1, MSE, BCE)
    • [ ] create configs
    • [ ] hand over the loss function as a parameter to the task
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Specify metric via callback

    Is your feature request related to a problem? Please describe. To make the system more flexible we have to implement the metrics as callbacks, so that we can combine multiple metrics and also reuse them in other tasks.

    Describe the solution you'd like Implement mIoU (jar fashion), precision, recall, and accuracy as metric callbacks. Call the metrics at the end of the steps. Also make sure that when we are testing with DDP we run it on only one GPU or with join (see the documentation of join).

    Checklist

    • [x] Implement DIVA HisDB metric class (our metric)
    • [x] Metric which is exactly like the jar
    • [x] Create config for mIoU
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Feature/add fcn

    Description

    UNet now has a swappable classifier. This makes working with it much easier, as we can easily fine-tune it on a dataset with more or fewer classes.

    How to Test/Run?

    pytest or python run.py

    opened by lvoegtlin 1
  • Training/validation and test time

    Is your feature request related to a problem? Please describe. Get the exact time for the training (incl. validation) and the testing in seconds. This can be reported overall as well as for an epoch. The setup time of the framework should be excluded.

    Describe the solution you'd like Log these times into the used loggers and report them in the experiment summary file.
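
    Regarding the first checklist item below: PyTorch Lightning ships a Timer callback that might already cover part of this, e.g.:

    from pytorch_lightning.callbacks import Timer

    timer = Timer()  # pass it to the Trainer's callbacks
    # after fit/test, per-stage times are available, e.g.:
    # timer.time_elapsed("train"), timer.time_elapsed("validate"), timer.time_elapsed("test")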

    Checklist

    • [ ] Check if PL already provides such a feature
    • [ ] Create timers for the different phases
    • [ ] Report these times
    • [ ] Test
    • [ ] PR
    opened by lvoegtlin 1
  • More complex return

    Is your feature request related to a problem? Please describe. Let the framework return more information, like the best model path, metrics, etc., as a dictionary, so that calling files can chain together multiple framework runs.

    Describe the solution you'd like With a dictionary

    Checklist

    • [ ] Check what return information is needed
    • [ ] Add it to the execution class
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Rework the backbone header model

    Is your feature request related to a problem? Please describe. Think about the current BackboneHeader model and try to adapt it to the new needs, possibly changing it into a new model.

    Checklist

    • [ ] Evaluate the existing model with the new needs
    • [ ] Think about solutions
    • [ ] Prototype the solutions
    • [ ] Implementation (models, workflow, callbacks)
    • [ ] Config adaption
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Test if possible conf_mat from base_task into a callback

    Is your feature request related to a problem? Please describe. The problem before with the conf mat callback was that it had a semaphore leak. As described here (https://github.com/ashleve/lightning-hydra-template/issues/189#issuecomment-1003532448), it should work now with the usage of torchmetrics.

    Checklist

    • [ ] Factor the conf mat log into callback
    • [ ] Extensive testing
    • [ ] Tests
    • [ ] PR
    enhancement Config 
    opened by lvoegtlin 0
  • Update hydra to 1.2

    Is your feature request related to a problem? Please describe. Update hydra to the newest version

    Checklist

    • [ ] update
    • [ ] adapt code
    • [ ] test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
  • Hyperparameter optimization

    Is your feature request related to a problem? Please describe. Create a possibility to do hyperparameter optimization with the framework
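
    One lightweight option, since the framework is already Hydra-based, is a Hydra multirun sweep (the swept parameters below are illustrative):

    python run.py -m trainer.max_epochs=10,20 datamodule.batch_size=16,32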

    Checklist

    • [ ] Check out which one works best
    • [ ] Integrate it or use it as a script
    • [ ] Test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
Releases (version_0.2.2)
  • version_0.2.2(Jun 24, 2022)

    What's Changed

    • Experiment for rotnet with unet backbone by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/101
    • Created additional tests by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/100
    • Updated the version on PL to 1.5.10 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/112
    • Added tests for RolfFormat datamodule and RGB takes by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/114
    • Release 0.2.2 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/113

    Full Changelog: https://github.com/DIVA-DIA/DIVA-DAF/compare/version_0.2.1...version_0.2.2

  • version_0.2.1(Dec 2, 2021)

    What's Changed

    • Fixed selection parameter, removed todos, improved print_config, added self to configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/87
    • Added tests for tasks and fixed merge scripts by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/89
    • New log folder structure by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/91
    • Replacing numpy with torch in divahisdb functional by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/93
    • Rename config saved during a run, and print commands to rerun a run by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/95
    • Release 0.2.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/98

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.2.0...version_0.2.1

  • version_0.2.0(Nov 25, 2021)

    Some new things

    • new architectures (resnet)
    • new datamodules (rolf format, RGB, full-page, and SSL)
    • different bug fixes
    • experiment configs
    • refactoring and deletion of unused code
    • callback to check the compatibility of backbone and header
    • inference/prediction stage (list of files with regex)
    • freezing header or backbone
    • improved readme
    • improved testing

    What's Changed

    • Dev data refactoring by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/74
    • Dev rgb encoding by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/76
    • RotNet by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/75
    • log more by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/77
    • More architectures by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/78
    • Dev fixing tests by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/79
    • Created resnet FCN header by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/83
    • Dev rolf data format by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/84
    • Introduce inference/prediction and refactoring by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/85
    • release 0.2.0 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/86

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.1...version_0.2.0

  • version_0.1.1(Oct 22, 2021)

    Changelog:

    • fixed conf mat
    • optimized test and validation step
    • improved merging of crops
    • more metrics and optimizers
    • updated requirements

    What's Changed

    • made tests running also in the terminal by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/60
    • fixed evaluation tool problem by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/62
    • adding new optimiser configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/64
    • removed unused dependency by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/65
    • Dev improve datamodule tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/66
    • Dev fixing conf and f1 heatmap by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/68
    • :art: each worker of the dl gets now an own seed by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/69
    • Dev reduce gpu memory by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/71
    • upload run config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/72
    • release version 0.1.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/73

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.0...version_0.1.1

  • version_0.1.0(Oct 6, 2021)

    The first version of the framework

    What's Changed

    • Dev 38 create hydra configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/1
    • Dev 47 better logger name by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/3
    • Dev 43 configurable optimizers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/2
    • Dev 44 load model checkpoint by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/16
    • dev synced metric logging by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/17
    • When DDP num_workers = 0 was forced by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/19
    • Resolve ddp warning by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/20
    • Add strict parameter by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/21
    • Config refinement by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/23
    • Save config file for each run by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/28
    • add env by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/29
    • Dev 25 torchmetric introduction by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/30
    • Removed custom hydra config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/32
    • Dev 24 abstract task class by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/33
    • Dev 26 loading warning improvements by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/34
    • update pl to 1.4.4 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/36
    • Loss functions as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/37
    • ddp cpu not working by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/39
    • Dev shuffle data option by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/44
    • Dev dataset selected pages by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/49
    • Dev 9 metric as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/47
    • Fix conf mat and extend by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/51
    • Save metrics to csv by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/52
    • Check backbone header compatibility by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/53
    • abstract datamodule and resolvers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/56
    • Dev refactoring and tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/57
    • Dev 34 refactoring semantic segmentation by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/58
    • Version 0.1.0 of the fw by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/59

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/commits/version_0.1.0
