SCALoss: Side and Corner Aligned Loss for Bounding Box Regression (AAAI 2022)


SCALoss

PyTorch implementation of the paper "SCALoss: Side and Corner Aligned Loss for Bounding Box Regression" (AAAI 2022).

Introduction

(Figure: corner vs. center distance comparison.)

  • IoU-based losses suffer from vanishing gradients for bounding boxes with low overlap, which leads to slow convergence.
  • Side Overlap puts a larger penalty on low-overlap cases, while Corner Distance speeds up convergence.
  • SCALoss, which combines Side Overlap and Corner Distance, serves as a comprehensive similarity measure, yielding better localization performance and faster convergence (see the sketch below).
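
The loss itself is implemented inside the repository's training code; the snippet below is only a minimal PyTorch sketch of the idea, not the paper's exact formulation. It assumes axis-wise side overlap, corner distances normalized by the squared diagonal of the enclosing box, and an illustrative weight of 0.2 on the corner term; the function name sca_style_loss and its signature are hypothetical.

# Minimal sketch of an SCA-style loss term (illustrative, not the repo's exact code).
# Boxes are (x1, y1, x2, y2) tensors of shape (N, 4).
import torch

def sca_style_loss(pred, target, eps=1e-9):
    # Side overlap along x and y: intersection of the side projections
    # divided by the union of the projections, per axis.
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
    union_w = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    union_h = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    side_overlap = inter_w / (union_w + eps) + inter_h / (union_h + eps)

    # Corner distance: squared distances between matching corners,
    # normalized by the squared diagonal of the smallest enclosing box.
    corners_p = pred[:, [0, 1, 2, 1, 2, 3, 0, 3]].view(-1, 4, 2)
    corners_t = target[:, [0, 1, 2, 1, 2, 3, 0, 3]].view(-1, 4, 2)
    enclose_diag2 = union_w ** 2 + union_h ** 2 + eps
    corner_dist = ((corners_p - corners_t) ** 2).sum(-1).sum(-1) / enclose_diag2

    # Combine: penalize missing side overlap plus a (scaled) corner-distance term.
    # The 0.2 weight is an assumption made for illustration only.
    return (2 - side_overlap) + 0.2 * corner_dist

Consult the repository's loss code for the authoritative formulation and weighting.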

Prerequisites

Install

Conda is not required, but the installation steps below use it to create an isolated environment.

$ conda create -n sca-yolo python=3.8 -y
$ conda activate sca-yolo
$ git clone https://github.com/Turoad/SCALoss
$ cd SCALoss
$ pip install -r requirements.txt

Getting started

Train a model:

python train.py --data [dataset config] --cfg [model config] --weights [path of pretrained weights] --batch-size [batch size]

For example, to train YOLOv3-tiny on the COCO dataset from scratch with a batch size of 128:

python train.py --data coco.yaml --cfg yolov3-tiny.yaml --weights '' --batch-size 128
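
Similarly, to fine-tune from a pretrained checkpoint rather than training from scratch (this assumes the fork keeps ultralytics' yolov3-tiny.pt release weights, which are downloaded automatically if not found locally):

python train.py --data coco.yaml --cfg yolov3-tiny.yaml --weights yolov3-tiny.pt --batch-size 128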

For multi-GPU training, it is recommended to use:

python -m torch.distributed.launch --nproc_per_node 4 train.py --img 640 --batch 32 --epochs 300 --data coco.yaml --weights '' --cfg yolov3.yaml --device 0,1,2,3

Test a model:

python val.py --data coco.yaml --weights runs/train/exp15/weights/last.pt --img 640 --iou-thres=0.65
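
Since the code base is modified from ultralytics/yolov3 (see Acknowledgement below), it presumably also ships that repository's detect.py script; if so, inference on a folder of images should look roughly like:

python detect.py --weights runs/train/exp15/weights/last.pt --source data/images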

Results and Checkpoints

YOLOv3-tiny

| Model | mAP 0.5:0.95 | AP 0.5 | AP 0.65 | AP 0.75 | AP 0.8 | AP 0.9 |
| --- | --- | --- | --- | --- | --- | --- |
| IoU | 18.8 | 36.2 | 27.2 | 17.3 | 11.6 | 1.9 |
| GIoU (relative improv. %) | 18.8 (0%) | 36.2 (0%) | 27.1 (-0.37%) | 17.6 (1.73%) | 11.8 (1.72%) | 2.1 (10.53%) |
| DIoU (relative improv. %) | 18.8 (0%) | 36.4 (0.55%) | 26.9 (-1.1%) | 17.2 (-0.58%) | 11.8 (1.72%) | 1.9 (0%) |
| CIoU (relative improv. %) | 18.9 (0.53%) | 36.6 (1.1%) | 27.3 (0.37%) | 17.2 (-0.58%) | 11.6 (0%) | 2.1 (10.53%) |
| SCA (relative improv. %) | 19.9 (5.85%) | 36.6 (1.1%) | 28.3 (4.04%) | 19.1 (10.4%) | 13.3 (14.66%) | 2.7 (42.11%) |

(Figure: convergence curves of the different losses on YOLOv3-tiny.)

YOLOv3

| Model | mAP 0.5:0.95 | AP 0.5 | AP 0.65 | AP 0.75 | AP 0.8 | AP 0.9 |
| --- | --- | --- | --- | --- | --- | --- |
| IoU | 44.8 | 64.2 | 57.5 | 48.8 | 41.8 | 20.7 |
| GIoU (relative improv. %) | 44.7 (-0.22%) | 64.4 (0.31%) | 57.5 (0%) | 48.5 (-0.61%) | 42.0 (0.48%) | 20.4 (-1.45%) |
| DIoU (relative improv. %) | 44.7 (-0.22%) | 64.3 (0.16%) | 57.5 (0%) | 48.9 (0.2%) | 42.1 (0.72%) | 19.8 (-4.35%) |
| CIoU (relative improv. %) | 44.7 (-0.22%) | 64.3 (0.16%) | 57.5 (0%) | 48.9 (0.2%) | 41.7 (-0.24%) | 19.8 (-4.35%) |
| SCA (relative improv. %) | 45.3 (1.12%) | 64.1 (-0.16%) | 57.9 (0.7%) | 49.9 (2.25%) | 43.3 (3.59%) | 21.4 (3.38%) |

YOLOv5s

Coming soon.

Citation

If you find our paper and code useful in your work, please consider citing:

@inproceedings{zheng2022scaloss,
  title={SCALoss: Side and Corner Aligned Loss for Bounding Box Regression},
  author={Zheng, Tu and Zhao, Shuai and Liu, Yang and Liu, Zili and Cai, Deng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}

Acknowledgement

The code is modified from ultralytics/yolov3.
