Code for the paper "Seamless Satellite-image Synthesis".

Overview

Seamless Satellite-image Synthesis

by Jialin Zhu and Tom Kelly.

Project site. The code for our models borrows heavily from the BicycleGAN and SPADE repositories; any details missing here can be found in those original repositories.

Watch the video

YouTube video

Web UI system

Watch the video

  • The UI system is built with the Django web framework.
  • Clone the code and cd web_ui
  • Install the required packages (mainly Django 3.1 and PyTorch 1.7.1).
    • These are easy to install, so we do not provide a requirements.txt file.
    • Any other missing packages can be installed one by one, following the error logs.
  • Download pre-trained weights and put them in web_ui/sss_ui/checkpoints.
  • Run python manage.py migrate and python manage.py makemigrations.
  • Run python runserver.py.
  • Access 127.0.0.1/index through a web browser.
  • Start playing with the UI system. (A consolidated command sequence is sketched below.)
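
A minimal sketch of the whole sequence, assuming the repository is cloned and the pre-trained weights are already in web_ui/sss_ui/checkpoints:

```bash
# Web UI setup sketch (assumes Django 3.1 and PyTorch 1.7.1 are already installed)
cd web_ui
python manage.py migrate
python manage.py makemigrations
python runserver.py
# then open 127.0.0.1/index in a web browser
```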

Pre-trained weights are available here: Mega link

We provide some preset map data; if you want more extensive or different map data, you will need to supply it yourself. Some features have not yet been implemented; please report bugs as GitHub issues.

SSS pipeline

The full SSS pipeline allows users to generate a set of satellite images from map data at three different scale levels.

  • Clone the code and cd SPADE.
  • Install the required packages (mainly PyTorch 1.7.1).
  • Run bash scit_m.sh [level_1_dataset_dir] [raw_data_dir] [results_output_dir] (an example invocation is sketched below).
  • The generated satellite images are written to the [results_output_dir] folder.
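
For example, an invocation with hypothetical directory names (substitute the paths to your own level-1 dataset, raw map data, and output folder):

```bash
cd SPADE
# ./datasets/level_1, ./datasets/raw and ./results are placeholder paths
bash scit_m.sh ./datasets/level_1 ./datasets/raw ./results
```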

We provide some preset map data; if you want more extensive or different map data, you will need to supply it yourself.

Training

You can also re-train the whole pipeline or train with your own data. For copyright reasons we do not provide download links for the data we use, but it is easy to obtain, especially for academic institutions such as universities. Our training data comes from Digimap: we use the OS MasterMap® Topography Layer, rendered to map images with GDAL and GeoPandas, and satellite images from the Aerial collection via Getmapping.

To train map2sat for level 1:

  • Clone the code and cd SPADE.
  • Run python train.py --name [z1] --dataset_mode ins --label_dir [label_dir] --image_dir [image_dir] --instance_dir [instance_dir] --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 --use_vae --ins_edge --gpu_ids 0,1,2,3 --batchSize 16.
  • We recommend using a larger batch size so that the encoder can generate results with greater style differences. A filled-in example command is sketched below.
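
A sketch of the level-1 command with the placeholders filled in; the experiment name z1 and the dataset directories below are hypothetical, while the flags are exactly those listed above:

```bash
cd SPADE
# hypothetical experiment name and dataset layout
python train.py --name z1 --dataset_mode ins \
    --label_dir ./datasets/level_1/labels \
    --image_dir ./datasets/level_1/images \
    --instance_dir ./datasets/level_1/instances \
    --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 \
    --use_vae --ins_edge --gpu_ids 0,1,2,3 --batchSize 16
```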

To train map2sat for level z (z > 1):

  • Clone the code and cd SPADE.
  • Run python trainCG.py --name [z2_cg] --dataset_mode insgb --label_dir [label_dir] --image_dir [image_dir] --instance_dir [instance_dir] --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 --ins_edge --cg --netG spadebranchn --cg_size 256 --gbk_size 8. A filled-in example is sketched below.
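
A sketch for a level-2 model, again with a hypothetical experiment name and dataset layout (flags as above):

```bash
cd SPADE
# hypothetical experiment name and dataset layout
python trainCG.py --name z2_cg --dataset_mode insgb \
    --label_dir ./datasets/level_2/labels \
    --image_dir ./datasets/level_2/images \
    --instance_dir ./datasets/level_2/instances \
    --label_nc 13 --load_size 256 --crop_size 256 --niter_decay 20 \
    --ins_edge --cg --netG spadebranchn --cg_size 256 --gbk_size 8
```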

To train seam2cont:

  • Clone the code and cd BicycleGAN.
  • Run python train.py --dataroot [dataset_dir] --name [z1sn] --model sn --direction AtoB --load_size 256 --save_epoch_freq 201 --lambda_ml 0 --input_nc 8 --dataset_mode sn --seams_map --batch_size 1 --ndf 32 --conD --forced_mask. A filled-in example is sketched below.
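
A sketch with a hypothetical dataset root and experiment name (flags as above):

```bash
cd BicycleGAN
# hypothetical dataset root and experiment name
python train.py --dataroot ./datasets/z1_seams --name z1sn --model sn \
    --direction AtoB --load_size 256 --save_epoch_freq 201 --lambda_ml 0 \
    --input_nc 8 --dataset_mode sn --seams_map --batch_size 1 --ndf 32 \
    --conD --forced_mask
```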

Citation

@inproceedings{zhu2021seamless,
  title={Seamless Satellite-image Synthesis},
  author={Zhu, J and Kelly, T},
  booktitle={Computer Graphics Forum},
  year={2021},
  organization={Wiley}
}

Acknowledgements

We would like to thank Nvidia Corporation for hardware and Ordnance Survey Mapping for map data which made this project possible. This work was undertaken on ARC4, part of the High Performance Computing facilities at the University of Leeds, UK. This work made use of the facilities of the N8 Centre of Excellence in Computationally Intensive Research (N8 CIR) provided and funded by the N8 research partnership and EPSRC (Grant No. EP/T022167/1).
