👐 OpenHands: Making Sign Language Recognition Accessible (WiP 🚧👷‍♂️🏗)

Overview

👐 OpenHands: Sign Language Recognition Library

Making Sign Language Recognition Accessible

Check the documentation on how to use the library:
ReadTheDocs: 👐 OpenHands
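
For a quick sense of the API, config-based training looks like this (a minimal sketch based on the usage reported in the issues below; the config path is one of the repo's example configs):

    import omegaconf
    from openhands.apis.classification_model import ClassificationModel
    from openhands.core.exp_utils import get_trainer

    # Load an experiment config and build the Lightning trainer
    cfg = omegaconf.OmegaConf.load("examples/configs/lsa64/decoupled_gcn.yaml")
    trainer = get_trainer(cfg)

    # Train a pose-based classifier, resuming from a checkpoint if one exists
    model = ClassificationModel(cfg=cfg, trainer=trainer)
    model.init_from_checkpoint_if_available()
    model.fit()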

License

This project is released under the Apache 2.0 license.

Citation

If you find our work useful in your research, please consider citing us:

@misc{2021_openhands_slr_preprint,
      title={OpenHands: Making Sign Language Recognition Accessible with Pose-based Pretrained Models across Languages}, 
      author={Prem Selvaraj and Gokul NC and Pratyush Kumar and Mitesh Khapra},
      year={2021},
      eprint={2110.05877},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Comments
  • Question about GSL dataset

    I can't figure out how to obtain the isolated gloss sign language recognition (GSL isol.) data (xxx_signerx_repx_glosses); I can only find the continuous sign language recognition data (xxx_signerx_repx_sentences) at https://zenodo.org/record/3941811.

    Thank you very much for any information about this.

    opened by snorlaxse 6
  • Question about 'Config-based training'

    I tried the code from 'Config-based training' as shown below.

    import os

    import omegaconf
    from openhands.apis.classification_model import ClassificationModel
    from openhands.core.exp_utils import get_trainer

    # Select GPUs 2 and 3
    os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

    # Load the experiment config and build the Lightning trainer
    cfg = omegaconf.OmegaConf.load("examples/configs/lsa64/decoupled_gcn.yaml")
    trainer = get_trainer(cfg)

    model = ClassificationModel(cfg=cfg, trainer=trainer)
    model.init_from_checkpoint_if_available()
    model.fit()
    
    /raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:747: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer(accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you.
      "You requested multiple GPUs but did not specify a backend, e.g."
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    /raid/xxx/OpenHands/openhands/apis/inference.py:21: LightningDeprecationWarning: The `LightningModule.datamodule` property is deprecated in v1.3 and will be removed in v1.5. Access the datamodule through using `self.trainer.datamodule` instead.
      self.datamodule.setup(stage=stage)
    Found 64 classes in train splits
    Found 64 classes in test splits
    Train set size: 2560
    Valid set size: 320
    /raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/core/datamodule.py:424: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
      f"DataModule.{name} has already been called, so it will not be called again. "
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2,3]
    Traceback (most recent call last):
      File "study_train.py", line 15, in <module>
        model.fit()
      File "/raid/xxx/OpenHands/openhands/apis/classification_model.py", line 104, in fit
        self.trainer.fit(self, self.datamodule)
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
        self._run(model)
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 917, in _run
        self._dispatch()
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 985, in _dispatch
        self.accelerator.start_training(self)
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 158, in start_training
        mp.spawn(self.new_process, **self.mp_spawn_kwargs)
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/raid/xxx/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 148, in start_processes
        process.start()
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
        self._popen = self._Popen(self)
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
        return Popen(process_obj)
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
        super().__init__(process_obj)
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
        self._launch(process_obj)
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
        reduction.dump(process_obj, fp)
      File "/raid/xxx/anaconda3/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    AttributeError: Can't pickle local object 'DecoupledGCN_TCN_unit.__init__.<locals>.<lambda>'
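    A hedged workaround sketch (not an official fix): PyTorch Lightning's ddp_spawn backend has to pickle the model to start worker processes, and the lambda inside DecoupledGCN_TCN_unit.__init__ is not picklable. Training on a single GPU avoids the spawn-based backend entirely:

    import os

    # With a single visible GPU, Lightning does not fall back to ddp_spawn,
    # so the model never needs to be pickled
    os.environ["CUDA_VISIBLE_DEVICES"] = "2"

    Alternatively, replacing the lambda in DecoupledGCN_TCN_unit.__init__ with a named function or torch.nn.Identity() should make the module picklable; that would be a change in the library code rather than in the training script.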
    
    opened by snorlaxse 4
  • installation issue

    Hello, thank you for providing such a great framework, but I get an error when I import the module. Could you please help me? Code:

    import omegaconf
    from openhands.apis.classification_model import ClassificationModel
    from openhands.core.exp_utils import get_trainer
    
    cfg = omegaconf.OmegaConf.load("1.yaml")
    trainer = get_trainer(cfg)
    
    model = ClassificationModel(cfg=cfg, trainer=trainer)
    model.init_from_checkpoint_if_available()
    model.fit()
    

    ERROR:

    Traceback (most recent call last):
      File "/home/hxz/project/pose_SLR/main.py", line 3, in <module>
        from openhands.apis.classification_model import ClassificationModel
    ModuleNotFoundError: No module named 'openhands.apis'
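    This usually means the interpreter is picking up a different `openhands` distribution than this library. A hedged check (the PyPI package name `OpenHands` and the install-from-source URL are assumptions):

    # pip install --upgrade OpenHands
    # or, from source:
    # pip install git+https://github.com/AI4Bharat/OpenHands

    import openhands
    print(openhands.__file__)  # confirm which installation is actually imported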

    opened by Xiaolong-han 4
  • visibility object

    https://github.com/narVidhai/SLR/blob/2f26455c7cb530265618949203859b953224d0aa/scripts/mediapipe_extract.py#L48

    Doesn't this object contain a visibility value as well? If so, we could add some conditioning logic and merge it with the above function, as sketched below.
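
    A hedged sketch of how the visibility score could be threaded through; MediaPipe's pose landmarks expose a `visibility` field, but the function and variable names here are illustrative, not the repo's actual extraction code:

    import mediapipe as mp

    def extract_pose_with_visibility(rgb_frame):
        """Return (x, y, visibility) per pose landmark for one RGB frame."""
        with mp.solutions.holistic.Holistic(static_image_mode=False) as holistic:
            results = holistic.process(rgb_frame)  # rgb_frame: HxWx3 RGB array
            if not results.pose_landmarks:
                return []
            # e.g., low-visibility joints could be zeroed out before saving
            return [(lm.x, lm.y, lm.visibility)
                    for lm in results.pose_landmarks.landmark]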

    enhancement 
    opened by grohith327 3
  • About the wrong st_gcn checkpoint files provided for GSL

    import omegaconf
    from openhands.apis.inference import InferenceModel
    
    cfg = omegaconf.OmegaConf.load("GSL/gsl/st_gcn/config.yaml")
    model = InferenceModel(cfg=cfg)
    model.init_from_checkpoint_if_available()
    if cfg.data.test_pipeline.dataset.inference_mode:
        model.test_inference()
    else:
        model.compute_test_accuracy()
    
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /tmp/ipykernel_6585/2983784194.py in <module>
          4 cfg = omegaconf.OmegaConf.load("GSL/gsl/st_gcn/config.yaml")
          5 model = InferenceModel(cfg=cfg)
    ----> 6 model.init_from_checkpoint_if_available()
          7 if cfg.data.test_pipeline.dataset.inference_mode:
          8     model.test_inference()
    
    ~/OpenHands/openhands/apis/inference.py in init_from_checkpoint_if_available(self, map_location)
         47         print(f"Loading checkpoint from: {ckpt_path}")
         48         ckpt = torch.load(ckpt_path, map_location=map_location)
    ---> 49         self.load_state_dict(ckpt["state_dict"], strict=False)
         50         del ckpt
         51 
    
    ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
       1050         if len(error_msgs) > 0:
       1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    -> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
       1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
       1054 
    
    RuntimeError: Error(s) in loading state_dict for InferenceModel:
    	size mismatch for model.encoder.A: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.st_gcn_networks.0.gcn.conv.weight: copying a param with shape torch.Size([128, 2, 1, 1]) from checkpoint, the shape in current model is torch.Size([192, 2, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.0.gcn.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
    	size mismatch for model.encoder.st_gcn_networks.1.gcn.conv.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([192, 64, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.1.gcn.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
    	size mismatch for model.encoder.st_gcn_networks.2.gcn.conv.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([192, 64, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.2.gcn.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
    	size mismatch for model.encoder.st_gcn_networks.3.gcn.conv.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([192, 64, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.3.gcn.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
    	size mismatch for model.encoder.st_gcn_networks.4.gcn.conv.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([384, 64, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.4.gcn.conv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([384]).
    	size mismatch for model.encoder.st_gcn_networks.5.gcn.conv.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([384, 128, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.5.gcn.conv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([384]).
    	size mismatch for model.encoder.st_gcn_networks.6.gcn.conv.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([384, 128, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.6.gcn.conv.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([384]).
    	size mismatch for model.encoder.st_gcn_networks.7.gcn.conv.weight: copying a param with shape torch.Size([512, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([768, 128, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.7.gcn.conv.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
    	size mismatch for model.encoder.st_gcn_networks.8.gcn.conv.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([768, 256, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.8.gcn.conv.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
    	size mismatch for model.encoder.st_gcn_networks.9.gcn.conv.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([768, 256, 1, 1]).
    	size mismatch for model.encoder.st_gcn_networks.9.gcn.conv.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
    	size mismatch for model.encoder.edge_importance.0: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.1: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.2: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.3: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.4: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.5: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.6: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.7: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.8: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
    	size mismatch for model.encoder.edge_importance.9: copying a param with shape torch.Size([2, 27, 27]) from checkpoint, the shape in current model is torch.Size([3, 27, 27]).
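
    The [2, 27, 27] vs. [3, 27, 27] shapes suggest the released checkpoint was trained with a different graph partitioning strategy (two adjacency subsets instead of the three the provided config builds), i.e. a config/checkpoint mismatch. A hedged way to confirm, continuing from the snippet above (the checkpoint path is a placeholder):

    import torch

    # Diff the checkpoint's parameter shapes against the freshly built model
    ckpt = torch.load("path/to/st_gcn_checkpoint.ckpt", map_location="cpu")
    ckpt_shapes = {k: tuple(v.shape) for k, v in ckpt["state_dict"].items()}
    model_shapes = {k: tuple(v.shape) for k, v in model.state_dict().items()}

    for name in sorted(set(ckpt_shapes) & set(model_shapes)):
        if ckpt_shapes[name] != model_shapes[name]:
            print(name, ckpt_shapes[name], "vs", model_shapes[name])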
    
    opened by snorlaxse 2
  • Refactoring code to remove bugs, code smells

    The following changes were made as part of this PR:

    • Refactored data loading component of the package
    • Created/Renamed files under slr.datasets.isolated to allow for modularization
    • Added __init__.py files for importing
    opened by grohith327 1
  • ST-GCN does not work for mediapipe

    Currently, the OpenPose layout seems to be hardcoded in graph_utils.py's Graph class. Should we also add a layout for MediaPipe, or pass the joints via YAML? A sketch follows below.
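
    A hedged sketch of what a configurable layout might look like; the node count matches the 27-keypoint graph seen in the checkpoint logs elsewhere in this page, but the edge list is a placeholder, not the library's actual graph:

    import numpy as np

    # A layout dict like this could be read from the experiment YAML instead of
    # being hardcoded in graph_utils.py (all values illustrative)
    MEDIAPIPE_LAYOUT = {
        "num_nodes": 27,
        "center": 0,
        "edges": [(0, 1), (1, 2), (2, 3)],  # placeholder edge list
    }

    def build_adjacency(layout):
        """Build a symmetric adjacency matrix from a layout dict."""
        n = layout["num_nodes"]
        A = np.zeros((n, n))
        for i, j in layout["edges"]:
            A[i, j] = A[j, i] = 1.0
        return A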

    bug 
    opened by GokulNC 1
  • Scale normalization for pose

    For example, if the signer is moving forward or backward in the video, this augmentation will help normalize the scale throughout the video: https://github.com/AmitMY/pose-format#data-normalization

    This will involve explicitly specifying the joint (edge) based on which the scaling has to be performed, as in the sketch below.
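
    A hedged sketch using pose-format's normalization API as described in its README; the component and point names are assumptions and depend on the pose file's header:

    from pose_format import Pose

    with open("example.pose", "rb") as f:
        pose = Pose.read(f.read())

    # Normalize scale by the shoulder-to-shoulder edge in every frame
    info = pose.header.normalization_info(
        p1=("POSE_LANDMARKS", "LEFT_SHOULDER"),
        p2=("POSE_LANDMARKS", "RIGHT_SHOULDER"),
    )
    pose.normalize(info)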

    enhancement 
    opened by GokulNC 1
  • Function not called

    https://github.com/narVidhai/SLR/blob/2f26455c7cb530265618949203859b953224d0aa/scripts/mediapipe_extract.py#L129

    Is this function not called anywhere?

    question 
    opened by grohith327 1
  • Support for GCN + BERT model

    Add the model proposed in

    https://openaccess.thecvf.com/content/WACV2021W/HBU/papers/Tunga_Pose-Based_Sign_Language_Recognition_Using_GCN_and_BERT_WACVW_2021_paper.pdf

    enhancement 
    opened by Prem-kumar27 0
  • Sinusoidal Train/Val Accuracy

    I'm noticing that the transformer and the SL-GCN architectures, while learning on WLASL2000, have an accuracy curve that resembles a sine curve with a period of about 20 epochs and an amplitude of about 5-10%. I am using the example config provided in the repo, and have verified that the batches are being shuffled. I have also played around with logging on_step=True in case this is an artifact of torch.nn.log, but that didn't help either. Any ideas why this is happening?

    opened by leekezar 1
  • Lower accuracy when inferring a single video

    Hello,

    When I supply the inference model with multiple videos, the model predicts all of them correctly. But if I supply only one video, the prediction is wrong. What could be the cause of this? Can anyone please explain?

    Thank you!

    opened by burakkaraceylan 1
  • Using `pose-format` for consistent `.pose` files

    It seems that for pose data you are using pkl and h5 files, and that you have a custom MediaPipe Holistic extraction script.

    Personally, I believe it would be more shareable, and faster, to use a binary format like https://github.com/AmitMY/pose-format. Every pose file also declares its content, so you can transfer files between projects or convert them to different formats with relative ease.

    Besides having a Holistic loading script and support for multiple OpenPose formats, it is a binary format that is faster to load, supports loading into numpy, torch, and tensorflow, and can perform several operations on poses.

    It also allows the visualization of pose files, separately or on top of videos, and while admittedly this repository is not perfect, in my opinion it is better than having json or pkl files.
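
    For reference, a hedged sketch of the pose-format reading/visualization API as described in its README (file names are placeholders):

    from pose_format import Pose
    from pose_format.pose_visualizer import PoseVisualizer

    with open("example.pose", "rb") as f:
        pose = Pose.read(f.read())

    # Render the pose standalone (it can also be overlaid on a video)
    v = PoseVisualizer(pose)
    v.save_video("example.mp4", v.draw())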

    opened by AmitMY 9
  • Consistent Dataset Handling

    Very nice repo and documentation!

    I think this repository can benefit from using https://github.com/sign-language-processing/datasets as data loaders.

    It is fast, consistent across datasets, and allows loading videos / poses from multiple datasets. If a dataset you are using is not there, you can request it or add it yourself; it is a breeze.

    The repo supports many datasets, multiple pose estimation formats, binary pose files, fps and resolution manipulations, and dataset disk mapping.

    Finally, this would make this repo less complex. This repo does pre-training and fine-tuning, the other repo does datasets, and they could be used together.

    Please consider :)
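
    For reference, a hedged sketch of that loader API as described in its README (the dataset name is an example; availability is an assumption):

    import tensorflow_datasets as tfds
    import sign_language_datasets.datasets  # registers the datasets with tfds

    dataset = tfds.load(name="rwth_phoenix2014_t")
    for example in dataset["train"]:
        ...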

    opened by AmitMY 5
  • Resume training, but load only parameters

    That is, load only the model parameters, not the entire training state stored by Lightning.

    Use an option called pretrained to achieve it, like this: https://github.com/AI4Bharat/OpenHands/blob/26c17ed0fca2ac786950d1f4edfa5a88419d06e6/examples/configs/include/decoupled_gcn.yaml#L1
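
    A hedged sketch of the weights-only load, mirroring the pattern already used in openhands/apis/inference.py above (the checkpoint path and `model` are placeholders):

    import torch

    # Load only the parameters from a Lightning checkpoint, not the optimizer
    # or trainer state, so training restarts fresh from the given weights
    ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")
    model.load_state_dict(ckpt["state_dict"], strict=False)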

    important feature 
    opened by GokulNC 1
Owner
AI4Bhārat
Building open-source AI solutions for India!