Fully Convolutional DenseNet (a.k.a. the 100-layer Tiramisu) for semantic segmentation of images, implemented in TensorFlow.

FC-DenseNet-Tensorflow

Overview

This is a re-implementation of the 100-layer Tiramisu, technically a Fully Convolutional DenseNet, in TensorFlow. The aim of the repository is to break down the working modules of the network, as presented in the paper, for ease of understanding. To facilitate this, the network is defined in a class, with a function for each block in the network. This promotes a modular view and an understanding of what each component does individually. Making the model code more readable is the main goal of this repository.

Network Architecture

Submodules

The "submodules" that build up the Tiramisu are explained here. Note: The graphics are just a redrawing of the ones from the original paper.

The Conv Layer:

The "conv layer" is the most atomic unit of the FC-DenseNet, it is the building block of all other modules. The following image shows the conv layer:

In code, it is implemented as:
def conv_layer(self, x, training, filters, name):
    with tf.name_scope(name):
        # Pre-activation ordering: batch norm -> ReLU -> 3x3 convolution -> dropout
        x = self.batch_norm(x, training, name=name+'_bn')
        x = tf.nn.relu(x, name=name+'_relu')
        x = tf.layers.conv2d(x,
                             filters=filters,
                             kernel_size=[3, 3],
                             strides=[1, 1],
                             padding='SAME',
                             dilation_rate=[1, 1],
                             activation=None,
                             kernel_initializer=tf.contrib.layers.xavier_initializer(),
                             name=name+'_conv3x3')
        # Dropout is only active during training
        x = tf.layers.dropout(x, rate=0.2, training=training, name=name+'_dropout')

    return x

As can be seen, each "convolutional" layer is actually a four-step procedure: batch normalization -> ReLU -> 2D convolution -> dropout.

The Dense Block

The dense block is a sequence of convolutions followed by concatenations. The output of a conv layer is concatenated depth-wise with its input; this forms the input to the next layer, and the process is repeated for all layers in the block. For the final output, i.e. the output of the dense block, the outputs of all the conv layers in the block are concatenated together, as shown:

In code, it is implemented as:

def dense_block(self, x, training, block_nb, name):
    dense_out = []
    with tf.name_scope(name):
        for i in range(self.layers_per_block[block_nb]):
            # Each conv layer sees the block input concatenated with all previous layer outputs
            conv = self.conv_layer(x, training, self.growth_k, name=name+'_layer_'+str(i))
            x = tf.concat([conv, x], axis=3)
            dense_out.append(conv)

        # The block output concatenates only the per-layer outputs, not the block input
        x = tf.concat(dense_out, axis=3)

    return x
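
As the code shows, only the per-layer outputs are gathered in dense_out for the final concatenation, so a dense block with n conv layers and growth rate growth_k produces a feature map with n * growth_k channels; the block's input is only re-concatenated internally to feed the next conv layer.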

How to Run

To run the network on your own dataset, do the following:

  1. Clone this repository.
  2. Open up your terminal and navigate to the cloned repository
  3. Type in the following:
python main.py --mode=train --train_data=path/to/train/data --val_data=path/to/validation/data \
--ckpt=path/to/save/checkpoint/model.ckpt --layers_per_block=4,5,7,10,12,15 \
--batch_size=8 --epochs=10 --growth_k=16 --num_classes=2 --learning_rate=0.001

The "layers_per_block" argument is only specified for the downsample path, upto the final bottleneck dense block, the upsample path is then automatically built by mirroring the downsample path.

Run with trained checkpoint

To run the code with a trained checkpoint file on images, use the infer mode in the command line options, like so:

python main.py --mode=infer --infer_data=path/to/infer/data --batch_size=4 \
--ckpt=models/model.ckpt-20 --output_folder=outputs

Tests

The Python files ending with "*_test.py" are unit test files. If you make changes or have just cloned the repo, it is a good idea to run them once, either from the command line (see the example below) or in your favorite Python IDE; they should let you know if your changes break anything. Currently, the test coverage is not very high, but I plan to keep adding more tests in the future.
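
For example, assuming the tests are written with Python's built-in unittest module (this command is not part of the original instructions), they can all be run from the repository root with:

python -m unittest discover -p "*_test.py"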

TODOs:

  1. Add some more functionality in the code.
  2. Add more detail into this readme.
  3. Save model graph.
  4. Rework command line arguments.
  5. Update with some examples of performance once trained.
  6. Increase test coverage.
  7. Save loss summaries for Tensorboard.