Deep Learning to Create StepMania SM Files

Overview

StepCOVNet is a deep learning model for generating StepMania step files from audio.

Running Audio to SM File Generator

Currently this only produces .txt files; use SMDataTools to convert the .txt output to .sm.

python stepmania_note_generator.py -i --input <string> -o --output <string> -m --model <string> -v --verbose <int>
  • -i --input input directory path to audio files
  • -o --output output directory path to .txt files
  • -m --model input directory path to StepCOVNet model
  • OPTIONAL: -v --verbose 1 enables verbose output, 0 disables it; default is 0
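
For example, to generate step files for every audio file in a directory (the paths below are placeholders):

python stepmania_note_generator.py --input ./songs --output ./generated_txt --model ./stepcovnet_model -v 1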

Creating Training Dataset

Link to training data: https://drive.google.com/open?id=1eCRYSf2qnbsSOzC-KmxPWcSbMzi1fLHi

To create a training dataset, you need to parse the .sm files and convert sound files into .wav files:

  • SMDataTools should be used to parse the .sm files into .txt files.
  • wav_converter.py can be used to convert the audio files into .wav files. The default sample rate is 16000 Hz (a minimal sketch of this conversion follows the list).
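
If you want to replicate the conversion step yourself, here is a minimal Python sketch assuming librosa and soundfile are installed; it is illustrative only, not the repository's wav_converter.py implementation:

import librosa
import soundfile as sf

def to_wav(in_path, out_path, sample_rate=16000):
    # librosa resamples to the target rate and downmixes to mono in one call
    audio, _ = librosa.load(in_path, sr=sample_rate, mono=True)
    sf.write(out_path, audio, sample_rate)

to_wav("song.mp3", "song.wav")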

Once the parsed .txt files and .wav files are generated, place the .txt files and the .wav files into separate directories, then run training_data_collection.py.

python training_data_collection.py -w --wav <string> -t --timing <string> -o --output <string> --multi <int> --limit <int> --cores <int> --name <string> --distributed <int>
  • -w --wav input directory path to .wav files
  • -t --timing input directory path to timing files
  • -o --output output directory path to output dataset
  • OPTIONAL: --multi 1 collects STFTs using frame_size of [2048, 1024, 4096], 0 collects STFTs using frame_size of [2048]; default is 0
  • OPTIONAL: --limit > 0 stops data collection at limit, -1 means unlimited; default is -1
  • OPTIONAL: --cores > 0 sets the number of cores to use when collecting data; -1 uses the number of physical cores; default is 1
  • OPTIONAL: --name name to give the dataset; default names dataset based on the configuration parameters
  • OPTIONAL: --distributed 0 creates a single dataset, 1 creates a distributed dataset; default is 0
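
For example, a run that collects multi-frame-size STFTs on 4 cores (the paths below are placeholders):

python training_data_collection.py --wav ./wavs --timing ./timing_txt --output ./dataset --multi 1 --cores 4

To illustrate what --multi 1 collects, here is a rough Python sketch of multi-frame-size STFT extraction; the hop length and function names are assumptions for illustration, not the repository's code:

import librosa
import numpy as np

def collect_stfts(wav_path, frame_sizes=(2048, 1024, 4096), sample_rate=16000):
    audio, _ = librosa.load(wav_path, sr=sample_rate, mono=True)
    # one magnitude spectrogram per frame size; a shared hop length keeps the
    # time axis aligned across scales while frequency resolution varies
    return [np.abs(librosa.stft(audio, n_fft=n, hop_length=512)) for n in frame_sizes]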

Training Model

Once the training dataset has been created, run train.py.

python train.py -i --input <string> -o --output <string> -d --difficulty <int> --lookback <int> --limit <int> --name <string> --log <string>
  • -i --input input directory path to training dataset
  • -o --output output directory path to save model
  • OPTIONAL: -d --difficulty [0, 1, 2, 3, 4] sets the song difficulty to use when training to ["challenge", "hard", "medium", "easy", "beginner"], respectively; default is 0 or "challenge"
  • OPTIONAL: --lookback > 2 sets how many consecutive frames form each timeseries input when modeling; default is 3 (see the sketch after this list)
  • OPTIONAL: --limit > 0 limits the amount of training samples used during training, -1 uses all the samples; default is -1
  • OPTIONAL: --name name to give the finished model; default names the model based on the dataset used
  • OPTIONAL: --log output directory path to store tensorboard data
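
For example, training a "challenge"-difficulty model with TensorBoard logging (the paths below are placeholders):

python train.py --input ./dataset --output ./models --difficulty 0 --lookback 3 --log ./tensorboard_logs

To illustrate how the lookback value shapes the model input, here is a minimal sketch; the array shapes are assumptions, and the repository's actual feature layout may differ:

import numpy as np

def make_lookback_windows(features, lookback=3):
    # features: (num_frames, num_features)
    # returns:  (num_frames - lookback + 1, lookback, num_features)
    return np.stack([features[i:i + lookback]
                     for i in range(len(features) - lookback + 1)])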

TODO

  • End-to-end unit tests for all modules

Credits

  • Owner: Chimezie Iwuanyanwu
  • Software Engineer: Chimezie Iwuanyanwu