As part of the HAKE project, this repository includes the reproduced SOTA models and the corresponding HAKE-enhanced versions (CVPR2020).

Overview

HAKE-Action

HAKE-Action (TensorFlow) is a project to open-source the SOTA action understanding studies based on our Human Activity Knowledge Engine. It includes reproduced SOTA models and their HAKE-enhanced versions. HAKE-Action is authored by Yong-Lu Li, Xinpeng Liu, Liang Xu, and Cewu Lu. Currently, it is maintained by Yong-Lu Li, Xinpeng Liu, and Liang Xu.

News:

  • (2021.10.06) Our extended version of SymNet is accepted by TPAMI! Paper and code are coming soon.
  • (2021.2.7) The upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description] Full demo: [YouTube], [bilibili]
  • (2021.1.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI! The new paper and code will be released soon.
  • (2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!
  • (2020.6.16) Our larger version HAKE-Large (>120K images, activity and part state labels) is released!

We released the HAKE-HICO (image-level part state labels upon HICO) and HAKE-HICO-DET (instance-level part state labels upon HICO-DET). The corresponding data can be found here: HAKE Data.

  • Paper is here.
  • More data and part states (e.g., upon AVA, more kinds of action categories, more rare actions...) are coming.
  • We will keep updating HAKE-Action to include more SOTA models and their HAKE-enhanced versions.

Data Mode

  • HAKE-HICO (PaStaNet* mode in the paper): image-level. We aggregate all part states in an image (belonging to one or multiple active persons); compared with the original HICO, the only additional labels are image-level human body part states.

  • HAKE-HICO-DET (PaStaNet* in the paper): instance-level. We add part states for each annotated person in all images of HICO-DET; the only additional labels are instance-level human body part states.

  • HAKE-Large (PaStaNet in the paper): contains more than 120K images with action labels and the corresponding part state labels. The images come from existing action datasets and crowdsourcing, and we manually annotated all the active persons with our novel part-level semantics.

  • GT-HAKE (GT-PaStaNet* in the paper): GT-HAKE-HICO and GT-HAKE-HICO-DET. In this mode we use the ground-truth part state labels as the part state predictions, i.e., we assume the body part states of a person can be estimated perfectly and then use them to infer the instance activities (a small sketch of this protocol follows this list). This mode can be seen as the upper bound of our HAKE-Action. From the results below we can see that this upper bound is far beyond the current SOTA performance. Thus, besides the conventional instance-level methods, continuing to promote part-level methods based on HAKE is a very promising direction.
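
A minimal sketch of this upper-bound protocol, assuming a linear PaSta-R classifier over a binary part-state vector; the names and sizes below (infer_activities, NUM_PASTA, NUM_ACTS) are hypothetical placeholders, not the released code:

```python
import numpy as np

# Hypothetical sizes: NUM_PASTA part-state classes, NUM_ACTS activity classes.
NUM_PASTA, NUM_ACTS = 10, 600

def infer_activities(weights, part_states):
    """Linear PaSta-R sketch: map a part-state vector to activity logits."""
    w, b = weights                      # w: (NUM_PASTA, NUM_ACTS), b: (NUM_ACTS,)
    return part_states @ w + b          # (NUM_ACTS,)

weights = (np.random.randn(NUM_PASTA, NUM_ACTS), np.zeros(NUM_ACTS))

# Normal mode: feed the part-state probabilities predicted by Activity2Vec.
part_state_probs = np.random.rand(NUM_PASTA)
scores_pred = infer_activities(weights, part_state_probs)

# GT-HAKE mode: replace the prediction with the ground-truth binary labels,
# i.e., a "perfect" part-state estimator; everything downstream is unchanged.
gt_part_states = np.zeros(NUM_PASTA)
gt_part_states[[3, 7]] = 1.0
scores_gt = infer_activities(weights, gt_part_states)
```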

Notion

Activity2Vec and PaSta-R are our part-state-based modules, which perform action inference from part-level semantics instead of the instance-level semantics used by previous methods. For example, Pairwise + HAKE-HICO pre-trained Activity2Vec + Linear PaSta-R (the seventh row in the table below) achieves 45.9 mAP on HICO. More details can be found in our CVPR2020 paper: PaStaNet: Toward Human Activity Knowledge Engine.
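
As a rough illustration (not the released TensorFlow code), the Keras sketch below shows how a Linear or MLP PaSta-R head could sit on top of the part-level representation produced by Activity2Vec; all names and sizes (NUM_PARTS, PART_DIM, the 1024-unit hidden layer) are assumptions for the example only:

```python
import tensorflow as tf

# Hypothetical sizes: NUM_PARTS body parts, a PART_DIM-d representation per
# part from Activity2Vec, and NUM_ACTS activity classes.
NUM_PARTS, PART_DIM, NUM_ACTS = 10, 256, 600

part_repr = tf.keras.Input(shape=(NUM_PARTS, PART_DIM))   # Activity2Vec output
flat = tf.keras.layers.Flatten()(part_repr)

# "Linear" PaSta-R: a single fully-connected layer over all part representations.
linear_logits = tf.keras.layers.Dense(NUM_ACTS)(flat)
linear_pasta_r = tf.keras.Model(part_repr, linear_logits)

# "MLP" PaSta-R: one hidden layer before the activity classifier.
hidden = tf.keras.layers.Dense(1024, activation="relu")(flat)
mlp_logits = tf.keras.layers.Dense(NUM_ACTS)(hidden)
mlp_pasta_r = tf.keras.Model(part_repr, mlp_logits)
```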

Code

The two versions of HAKE-Action are released in two branches of this repo.

Models on HICO

| Instance-level | +Activity2Vec | +PaSta-R | mAP | Few@1 | Few@5 | Few@10 |
| --- | --- | --- | --- | --- | --- | --- |
| R*CNN | - | - | 28.5 | - | - | - |
| Girdhar et al. | - | - | 34.6 | - | - | - |
| Mallya et al. | - | - | 36.1 | - | - | - |
| Pairwise | - | - | 39.9 | 13.0 | 19.8 | 22.3 |
| - | HAKE-HICO | Linear | 44.5 | 26.9 | 30.0 | 30.7 |
| Mallya et al. | HAKE-HICO | Linear | 45.0 | 26.5 | 29.1 | 30.3 |
| Pairwise | HAKE-HICO | Linear | 45.9 | 26.2 | 30.6 | 31.8 |
| Pairwise | HAKE-HICO | MLP | 45.6 | 26.0 | 30.8 | 31.9 |
| Pairwise | HAKE-HICO | GCN | 45.6 | 25.2 | 30.0 | 31.4 |
| Pairwise | HAKE-HICO | Seq | 45.9 | 25.3 | 30.2 | 31.6 |
| Pairwise | HAKE-HICO | Tree | 45.8 | 24.9 | 30.3 | 31.8 |
| Pairwise | HAKE-Large | Linear | 46.3 | 24.7 | 31.8 | 33.1 |
| Pairwise | GT-HAKE-HICO | Linear | 65.6 | 47.5 | 55.4 | 56.6 |

Models on HICO-DET

Using Object Detections from iCAN

| Instance-level | +Activity2Vec | +PaSta-R | Full (def) | Rare (def) | Non-Rare (def) | Full (ko) | Rare (ko) | Non-Rare (ko) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| iCAN | - | - | 14.84 | 10.45 | 16.15 | 16.26 | 11.33 | 17.73 |
| TIN | - | - | 17.03 | 13.42 | 18.11 | 19.17 | 15.51 | 20.26 |
| iCAN | HAKE-HICO-DET | Linear | 19.61 | 17.29 | 20.30 | 22.10 | 20.46 | 22.59 |
| TIN | HAKE-HICO-DET | Linear | 22.12 | 20.19 | 22.69 | 24.06 | 22.19 | 24.62 |
| TIN | HAKE-Large | Linear | 22.65 | 21.17 | 23.09 | 24.53 | 23.00 | 24.99 |
| TIN | GT-HAKE-HICO-DET | Linear | 34.86 | 42.83 | 32.48 | 35.59 | 42.94 | 33.40 |

(def = Default setting, ko = Known Object setting of HICO-DET.)

Models on AVA (Frame-based)

| Method | +Activity2Vec | +PaSta-R | mAP |
| --- | --- | --- | --- |
| AVA-TF-Baseline | - | - | 11.4 |
| LFB-Res-50-baseline | - | - | 22.2 |
| LFB-Res-101-baseline | - | - | 23.3 |
| AVA-TF-Baseline | HAKE-Large | Linear | 15.6 |
| LFB-Res-50-baseline | HAKE-Large | Linear | 23.4 |
| LFB-Res-101-baseline | HAKE-Large | Linear | 24.3 |

Models on V-COCO

| Method | +Activity2Vec | +PaSta-R | AP(role), Scenario 1 | AP(role), Scenario 2 |
| --- | --- | --- | --- | --- |
| iCAN | - | - | 45.3 | 52.4 |
| TIN | - | - | 47.8 | 54.2 |
| iCAN | HAKE-Large | Linear | 49.2 | 55.6 |
| TIN | HAKE-Large | Linear | 51.0 | 57.5 |

Training Details

We first pre-train Activity2Vec and PaSta-R with the activity and PaSta labels. Then we replace the last FC layer in PaSta-R so that it matches the activity categories of the target dataset. Finally, we freeze Activity2Vec and fine-tune PaSta-R on the training set of the target dataset. Here, HAKE works like ImageNet, and Activity2Vec is used as a pre-trained knowledge engine to promote other tasks.
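
The recipe above can be summarized with a minimal Keras sketch; the module names (activity2vec, pasta_r_body, new_last_fc) and all sizes are hypothetical stand-ins, not the actual released models:

```python
import tensorflow as tf

FEAT_DIM, PART_DIM, NUM_TARGET_ACTS = 2048, 256, 80   # e.g. 80 classes on AVA

# Stand-in for the pre-trained Activity2Vec: frozen during transfer.
activity2vec = tf.keras.Sequential(
    [tf.keras.layers.Dense(PART_DIM, activation="relu", input_shape=(FEAT_DIM,))])
activity2vec.trainable = False

# Stand-in for PaSta-R pre-trained on HAKE: keep its body, but replace the
# last FC with a new one matching the target dataset's activity categories.
pasta_r_body = tf.keras.layers.Dense(1024, activation="relu")
new_last_fc = tf.keras.layers.Dense(NUM_TARGET_ACTS)

inputs = tf.keras.Input(shape=(FEAT_DIM,))
outputs = new_last_fc(pasta_r_body(activity2vec(inputs)))
model = tf.keras.Model(inputs, outputs)

# Fine-tune only PaSta-R on the target training set (Activity2Vec stays frozen).
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(target_train_features, target_train_labels, epochs=...)
```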

Citation

If you find our work useful, please consider citing:

@inproceedings{li2020pastanet,
  title={PaStaNet: Toward Human Activity Knowledge Engine},
  author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{li2019transferable,
  title={Transferable Interactiveness Knowledge for Human-Object Interaction Detection},
  author={Li, Yong-Lu and Zhou, Siyuan and Huang, Xijie and Xu, Liang and Ma, Ze and Fang, Hao-Shu and Wang, Yanfeng and Lu, Cewu},
  booktitle={CVPR},
  year={2019}
}
@inproceedings{lu2018beyond,
  title={Beyond holistic object recognition: Enriching image understanding with part states},
  author={Lu, Cewu and Su, Hao and Li, Yonglu and Lu, Yongyi and Yi, Li and Tang, Chi-Keung and Guibas, Leonidas J},
  booktitle={CVPR},
  year={2018}
}

HAKE

HAKE [website] is a new large-scale knowledge base and engine for human activity understanding. HAKE provides elaborate and abundant body part state labels for active human instances in a large number of images and videos. With HAKE, we boost the action understanding performance on widely-used human activity benchmarks. We are still enlarging and enriching it, and look forward to working with outstanding researchers around the world on its applications and further improvements. If you have any advice or interest, please feel free to contact Yong-Lu Li ([email protected]).

If you run into any problems or find any bugs, don't hesitate to comment on GitHub or make a pull request!

HAKE-Action is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop us an e-mail and we will send you the detailed agreement.
