DilatedNet in Keras for image segmentation

Overview

Keras implementation of DilatedNet for semantic segmentation

A native Keras implementation of semantic segmentation, following Multi-Scale Context Aggregation by Dilated Convolutions (2016). It can optionally use the authors' pretrained weights.

The code has been tested on TensorFlow 1.3, Keras 1.2, and Python 3.6.

Using the pretrained model

Download and extract the pretrained model:

curl -L https://github.com/nicolov/segmentation_keras/releases/download/model/nicolov_segmentation_model.tar.gz | tar xvf -

Install dependencies and run:

pip install -r requirements.txt
# For GPU support
pip install tensorflow-gpu==1.3.0

python predict.py --weights_path conversion/converted/dilation8_pascal_voc.npy

The output image will be under images/cat_seg.png.
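
If you want to reuse the converted weights outside of predict.py, the snippet below is a minimal sketch of how such a .npy file might be loaded into a Keras model by layer name. It assumes the file stores a pickled dict mapping layer names to lists of weight arrays; that is an assumption about the converted format rather than a documented contract, so check it against your own file.

# Minimal sketch (assumption: the .npy file holds a pickled dict
# {layer_name: [kernel, bias, ...]} whose shapes match the model definition).
import numpy as np

def load_converted_weights(model, weights_path):
    weights = np.load(weights_path, allow_pickle=True).item()
    for layer in model.layers:
        if layer.name in weights:
            layer.set_weights(weights[layer.name])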

Converting the original Caffe model

Follow the instructions in the conversion folder to convert the weights to the TensorFlow format that can be used by Keras.

Training

Download the augmented Pascal VOC dataset:

curl -L http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz | tar -xvf -

This will create a benchmark_RELEASE directory in the root of the repo. Use the convert_masks.py script to convert the provided .mat masks to RGB PNGs (an illustrative sketch of this kind of conversion follows the command):

python convert_masks.py \
    --in-dir benchmark_RELEASE/dataset/cls \
    --out-dir benchmark_RELEASE/dataset/pngs
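
For reference, the snippet below sketches one way a single .mat mask could be converted to a PNG of class indices; it is an illustration, not the repo's convert_masks.py. The 'GTcls'/'Segmentation' keys follow the usual layout of the SBD benchmark files, but verify them against your download.

# Illustrative sketch, not the actual convert_masks.py script.
# Assumes each SBD .mat file stores a 'GTcls' struct with a 'Segmentation' array.
import numpy as np
import scipy.io
from PIL import Image

def mat_to_png(mat_path, png_path):
    mat = scipy.io.loadmat(mat_path)
    seg = mat['GTcls']['Segmentation'][0][0].astype(np.uint8)  # HxW class indices
    img = Image.fromarray(seg)  # 8-bit image of class labels
    # The repo's script outputs colourised masks; a Pascal VOC colour palette
    # could be attached here with img.putpalette(...) before saving.
    img.save(png_path)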

Start training:

python train.py --batch-size 2

Model checkpoints are saved under trained/, and can be used with the predict.py script for testing.

The training code is currently limited to the frontend module, and thus only outputs 16x16 segmentation maps. The augmentation pipeline does mirroring but not cropping or rotation.
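
If you want to extend the augmentation pipeline yourself, the fragment below sketches how a paired random crop (image and label cropped with the same offsets) could be added alongside the existing mirroring. The helper name is hypothetical and not part of this repo.

# Hypothetical helper, not part of this repo: crop an image and its label
# map with identical offsets so pixels and classes stay aligned.
import numpy as np

def random_crop_pair(image, label, crop_h, crop_w):
    h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_h + 1)
    left = np.random.randint(0, w - crop_w + 1)
    return (image[top:top + crop_h, left:left + crop_w],
            label[top:top + crop_h, left:left + crop_w])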


Fisher Yu and Vladlen Koltun, Multi-Scale Context Aggregation by Dilated Convolutions, 2016

Comments
  • training and validation loss nan

    training and validation loss nan

    First of all, I just want to thank you for the great work. I am having an issue during training: my loss and val_loss are NaN, although I am still getting values for accuracy and val_acc. I am training on the PASCAL VOC 2012 dataset with the segmentation class PNGs. (screenshot attached)

    Keras 1.2.1 & 2.0.6, tensorflow-gpu 1.2.1, Python 3.6.1

    opened by Barfknecht 9
  • Fine tuning ...

    Fine tuning ...

    Hello,

    You have provided the pre-trained model for VOC. I have a small dataset with 2 classes, which I annotated based on VOC, and I want to fine-tune the model on it. Would you please guide me through the process?

    opened by MyVanitar 8
  • Modifying number of class

    Modifying number of class

    Hi Nicolov,

    Thanks for the great work! I tried to train on a new dataset by generating my own set of JPGs and PNG masks. However, I realized it only works for the 20 pre-defined classes. For example, I wanted to re-train this network to segment screws from the background, but I wasn't able to find a way to add new classes other than reusing an existing color (0x181818) that was originally assigned to cats. After training, it did segment the screws. However, I'm still wondering: is there any way to change the number of classes and specify which color value is associated with each class?

    opened by francisbitontistudio 7
  • Black image after segmentation

    Black image after segmentation

    Hi! I have val accuracy = 1, but when I try to predict a mask for an image from the training set it gives me a black image. Does anybody know the reason for this behaviour?

    opened by dimaxano 7
  • docker running error

    docker running error

    Hi, @nicolov ,

    For the caffe weight conversion, I got the following error:

    (tf_1.0) [email protected]:/data/code/segmentation_keras/conversion# docker run -v $(pwd):/workspace -ti `docker build -q .`
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    "docker run" requires at least 1 argument(s).
    See 'docker run --help'.
    
    Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
    
    Run a command in a new container
    (tf_1.0) ro[email protected]:/data/code/segmentation_keras/conversion#
    
    

    It says that the Docker daemon is not running. Is there any other command I should run first?

    Thanks

    opened by amiltonwong 7
  • the way of loading the weight

    the way of loading the weight

    Hi nicolov,

    In the post, you explained how to do the weight conversion. Due to development environment constraints, it is a little bit hard for me to follow your steps exactly.

    In the Keras blog, the author also shows a way to load VGG16 weights from Keras directly. Do you think these weights can be used for your implementation? Do we have to use the converted Caffe model weights for Pascal VOC? The dataset I will be using is from a different domain than the dataset used in the paper. Thanks for your advice.

    (screenshot attached)

    opened by wenouyang 5
  • Problems with CuDNN library

    Problems with CuDNN library

    While running train.py, this is the error message:

    Epoch 1/20
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN library: 6021 (compatibility version 6000) but source was compiled with 5105 (compatibility version 5100). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.

    Since I don't have the root account, I can't install CuDNN v5. Do you know how I can fix this? Thanks!

    opened by Yuren-Zhong 4
  • IoU results

    IoU results

    Have you by any chance compared this to the original implementation with regards to the mean IoU? If so, what implementation of IoU did you use and what were your results?

    opened by Barfknecht 4
  • about the required pre-trained vgg model

    about the required pre-trained vgg model

    Hi, @nicolov ,

    According to this line, vgg_conv.npy is needed as the pre-trained VGG model for training. Could you list the download location for the corresponding caffemodel and prototxt files? And is the conversion step the same as here?

    Thanks!

    opened by amiltonwong 4
  • regarding loading_weights

    regarding loading_weights

    Hi nicolov,

    In train.py, you have included the function load_weights(model, weights_path). My understanding is that it loads a pre-trained VGG model. If I do not want to use this pretrained model, because the problem I am working on may belong to a totally different domain, should I just skip calling this load_weights function? Or is using a pre-trained model always preferable? I am kind of confused about this.

    In the notes, you mentioned that "the training code is currently limited to the frontend module, and thus only outputs 16x16 segmentation maps." If I would like to use this code on my own dataset, what modifications do I have to make? Do I still have to load the weights?

    Thank you very much!

    opened by wenouyang 4
  • Cannot locate Dockerfile: Dockerfile

    Cannot locate Dockerfile: Dockerfile

    Probably a rookie error, but when I try to run the conversion step in the conversion folder by running Docker, I get the following error:

    $sudo docker run -v $(pwd):/workspace -ti `docker build -q .`
    time="2017-02-09T09:15:11-08:00" level=fatal msg="Cannot locate Dockerfile: Dockerfile" 
    docker: "run" requires a minimum of 1 argument. See 'docker run --help'.
    
    opened by mongoose54 4
  • Training freezes

    Training freezes

    On executing the command python train.py --batch-size 2, training freezes at the last step of the first epoch.

    All the libraries are installed according to the requirements.txt file.

    opened by ghost 1
  • AtrousConvolution2D vs. Conv2DTranspose

    AtrousConvolution2D vs. Conv2DTranspose

    Hi @nicolov, I was wondering whether your model wouldn't need a Conv2DTranspose or UpSampling layer to compensate for the max pooling and obtain predictions with the same size as the input image?

    opened by tinalegre 0
  • How to handle high resolution images

    How to handle high resolution images

    Hello @nicolov ,

    Let me first express my appreciation for your work on image segmentation; it's great (Y)

    A small suggestion: I just want to notify you that there is a missing -- in the input parsing. A very minor change:

    parser.add_argument('--input_path', nargs='?', default='images/cat.jpg',
                            help='Required path to input image') 
    

    I'm hoping you can help me understand how to handle high-resolution images such as 1028 and 4K.

    Also, in the code I found that you set input_width, input_height = 900, 900 and label_margin = 186. Can you please explain the reason for these static numbers and how they affect the output height and width?

    output_height = input_height - 2 * label_margin
    output_width = input_width - 2 * label_margin
    
    opened by engahmed1190 2
  • Context module training implementation plans

    Context module training implementation plans

    Thanks for creating this implementation. Do you have any plans to implement training of the context module (to allow producing full resolution segmentation maps)?

    opened by OliverColeman 3
  • palette conversion not needed

    palette conversion not needed

    https://github.com/nicolov/segmentation_keras/blob/master/convert_masks.py isn't necessary.

    Just use Pillow and you can load the classes separately from the color palette, which means it will already be in the format you want!

    from https://github.com/aurora95/Keras-FCN/blob/master/utils/SegDataGenerator.py#L203

                    from PIL import Image
                    label = Image.open(label_filepath)
                    if self.save_to_dir and self.palette is None:
                        self.palette = label.palette
    

    cool right?
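
    For context, here is a stand-alone version of the same idea (the file path is just an example): opening a paletted PNG with Pillow and converting it to a NumPy array gives per-pixel class indices directly, while the colour palette stays available separately.

    from PIL import Image
    import numpy as np

    label = Image.open('benchmark_RELEASE/dataset/pngs/2007_000032.png')  # example path
    class_map = np.array(label)   # for a 'P'-mode PNG: HxW array of class indices
    palette = label.getpalette()  # flat [R, G, B, ...] list for the colour palette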

    opened by ahundt 6