Can we visualize a large scientific data set with a surrogate model? We're building a GAN for the Earth's Mantle Convection data set to see if we can!

Overview

EarthGAN - Earth Mantle Surrogate Modeling

Can a surrogate model of the Earth's Mantle Convection data set be built such that it runs readily in a web browser and produces high-fidelity results? We're trying to do just that with a generative adversarial network -- we call ours EarthGAN.

See how EarthGAN currently works! Open up the Colab notebook and create results from the preliminary generator: Open In Colab

(Sample output from the preliminary generator: comparison at epoch 41, radial index 165, Mollweide projection.)

Progress updates, along with my thoughts, can be found in the devlog. The preliminary results were presented at VIS 2021 as part of the SciVis Contest; see the paper on arXiv.

This is active research. If you have any thoughts, suggestions, or would like to collaborate, please reach out! You can also post questions/ideas in the discussions section.


Current Approach

We're leveraging the excellent work of Li et al. who have implemented a GAN for creating super-resolution cosmological simulations. The general method is in their map2map repository. We've used their GAN implementation as it works on 3D data. Please cite their work if you find it useful!

The current approach is based on the StyleGAN2 model. In addition, a conditional-GAN (cGAN) is used to produce results that are partially deterministic.
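As a rough illustration of the cGAN idea (this is a made-up sketch, not the map2map or EarthGAN code): the generator receives both a deterministic conditioning input (here, a flattened low-resolution field) and a random latent vector, so the output is partly determined by the input field and partly by the latent noise.

```python
import random

def make_generator_input(lowres_field, latent_dim=8, seed=None):
    """Concatenate a low-res conditioning field with a random latent vector.

    In a cGAN, the conditioning part fixes the deterministic content of the
    output, while the latent part supplies the stochastic variation.
    Names and shapes here are illustrative only.
    """
    rng = random.Random(seed)
    latent = [rng.gauss(0.0, 1.0) for _ in range(latent_dim)]
    return list(lowres_field) + latent

# 3 conditioning values + 4 latent values -> a length-7 generator input
x = make_generator_input([0.1, 0.2, 0.3], latent_dim=4, seed=0)
print(len(x))  # -> 7
```

With the same seed (or, in practice, the same low-resolution input and latent code), the generator input is reproducible, which is what makes the results "partially deterministic."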

Setup

Works best if you are in an HPC environment (I used Compute Canada). Also tested locally on Linux (macOS should also work). If you run Windows, you'll have to do much of the environment setup and data download/preprocessing manually.

To reproduce the data pipeline and begin training:*

  1. Clone this repo: `git clone https://github.com/tvhahn/EarthGAN.git`

  2. Create virtual environment. Assumes that Conda is installed when on a local computer.

    • HPC: make create_environment will detect HPC environment and automatically create environment from make_hpc_venv.sh. Tested on Compute Canada. Modify make_hpc_venv.sh for your own HPC cluster.

    • Linux/MacOS: use the command from the Makefile: `make create_environment`

  3. Download raw data.

    • HPC: use make download. Will automatically detect HPC environment.

    • Linux/MacOS: use make download. Will automatically download to appropriate data/raw directory.

  4. Extract raw data.

    • HPC: use make extract. Will automatically detect HPC environment. Again, modify for your HPC cluster.
    • Linux/MacOS: use make extract. Will automatically extract to appropriate data/raw directory.
  5. Ensure the virtual environment is activated: `conda activate earth`

  6. From the root directory of EarthGAN, run `pip install -e .` -- this will give the Python scripts access to the src folders.

  7. Create the processed data that will be used for training.

    • HPC: use make data. Will automatically detect HPC environment and create the processed data.

      ๐Ÿ“ Note: You will have to modify the make_hpc_data.sh in the ./bash_scripts/ folder to match the requirements of your HPC environment

    • Linux/MacOS: use make data.

  8. Copy the processed data to the scratch folder if you're on the HPC. Modify copy_processed_data_to_scratch.sh in ./bash_scripts/ folder.

  9. Train!

    • HPC: use make train. Again, modify for your HPC cluster. Not yet optimized for multi-GPU training, so be warned, it will be SLOW!

    • Linux/MacOS: use make train.

* Let me know if you run into any problems! This is still in development.
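For a local Linux/macOS run, the steps above can be sketched as a single shell session (assuming conda and make are installed; HPC users should adapt the bash_scripts/*.sh files to their cluster first):

```shell
# Sketch of the full local pipeline described above.
git clone https://github.com/tvhahn/EarthGAN.git
cd EarthGAN
make create_environment   # build the conda env from envearth.yml
make download             # fetch the raw Earth Mantle data into data/raw
make extract              # unzip the raw files
conda activate earth      # activate the environment before installing
pip install -e .          # expose the src/ package to the scripts
make data                 # build the processed training data
make train                # start training (slow on a single GPU)
```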

Project Organization

├── Makefile           <- Makefile with commands like `make data` or `make train`
│
├── bash_scripts       <- Bash scripts used for training models or setting up the environment
│   ├── train_model_hpc.sh       <- Bash/SLURM script used to train models on HPC (you will need
│   │                               to modify this to work on your HPC). Called with `make train`
│   └── train_model_local.sh     <- Bash script used to train models locally. Called with `make train`
│
├── data
│   ├── interim        <- Intermediate data before we've applied any scaling.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- Original data from the Earth Mantle Convection simulation.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│   ├── interim        <- Interim models and summaries
│   └── final          <- Final, canonical models
│
├── notebooks          <- Jupyter notebooks. Generally used for explaining various components
│   │                     of the code base.
│   └── scratch        <- Rough-draft notebooks, of questionable quality. Be warned!
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- Recommend using `make create_environment`. However, this file can be
│                         used to recreate the environment with pip
├── envearth.yml       <- Used to create the conda environment. Use `make create_environment`
│                         when on a local computer
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── make_dataset.py      <- Script for making downsampled data from the original
│   │   ├── data_prep_utils.py   <- Misc. functions used in data prep
│   │   ├── download.sh          <- Bash script to download the entire Earth Mantle data set
│   │   │                           (used when `make download` is called)
│   │   └── extract.sh           <- Bash script to extract all Earth Mantle data set files
│   │                               from zip (used when `make extract` is called)
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   │
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
├── LICENSE
└── README.md          <- README describing the project.

Releases

  • v1.0.0 (Nov 4, 2021)

Owner

Tim
Data science. Innovation. ML practitioner.