Search and filter videos based on objects that appear in them using convolutional neural networks

Overview

Thingscoop: Utility for searching and filtering videos based on their content

Description

Thingscoop is a command-line utility for analyzing videos semantically - that means searching, filtering, and describing videos based on objects, places, and other things that appear in them.

When you first run thingscoop on a video file, it uses a convolutional neural network to create an "index" of what's contained in the every second of the input by repeatedly performing image classification on a frame-by-frame basis. Once an index for a video file has been created, you can search (i.e. get the start and end times of the regions in the video matching the query) and filter (i.e. create a supercut of the matching regions) the input using arbitrary queries. Thingscoop uses a very basic query language that lets you to compose queries that test for the presence or absence of labels with the logical operators ! (not), || (or) and && (and). For example, to search a video the presence of the sky and the absence of the ocean: thingscoop search 'sky && !ocean' <file>.

Thingscoop currently ships with a few models: vgg_imagenet uses the architecture described in "Very Deep Convolutional Networks for Large-Scale Image Recognition" to recognize objects from the ImageNet database, and googlenet_places uses the architecture described in "Going Deeper with Convolutions" to recognize settings and places from the MIT Places database. You can specify which model you'd like to use by running thingscoop models use <model>, where <model> is the model's name (e.g. vgg_imagenet or googlenet_places). More models will be added soon.
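
For example, to switch to the MIT Places model:

$ thingscoop models use googlenet_places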

Thingscoop is based on Caffe, an open-source deep learning framework.

Installation

  1. Install ffmpeg, imagemagick, and ghostscript: brew install ffmpeg imagemagick ghostscript (Mac OS X) or apt-get install ffmpeg imagemagick ghostscript (Ubuntu).
  2. Follow the installation instructions on the Caffe Installation page.
  3. Make sure you build the Python bindings by running make pycaffe (in Caffe's directory).
  4. Set the environment variable CAFFE_ROOT to point to Caffe's directory: export CAFFE_ROOT=[Caffe's directory].
  5. Install thingscoop: easy_install thingscoop or pip install thingscoop.
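
Putting these steps together, a typical Mac OS X setup might look like the following (the Caffe path is illustrative):

$ brew install ffmpeg imagemagick ghostscript
$ export CAFFE_ROOT=~/caffe
$ pip install thingscoop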

Usage

thingscoop search <query> <files...>

Print the start and end times (in seconds) of the regions in <files> that match <query>. Creates an index for each file using the current model if it does not exist.

Example output:

$ thingscoop search violin waking_life.mp4
/Users/anastasis/Downloads/waking_life.mp4 148.000000 162.000000
/Users/anastasis/Downloads/waking_life.mp4 176.000000 179.000000
/Users/anastasis/Downloads/waking_life.mp4 180.000000 186.000000
/Users/anastasis/Downloads/waking_life.mp4 189.000000 190.000000
/Users/anastasis/Downloads/waking_life.mp4 192.000000 200.000000
/Users/anastasis/Downloads/waking_life.mp4 211.000000 212.000000
/Users/anastasis/Downloads/waking_life.mp4 222.000000 223.000000
/Users/anastasis/Downloads/waking_life.mp4 235.000000 243.000000
/Users/anastasis/Downloads/waking_life.mp4 247.000000 249.000000
/Users/anastasis/Downloads/waking_life.mp4 251.000000 253.000000
/Users/anastasis/Downloads/waking_life.mp4 254.000000 258.000000

thingscoop filter <query> <files...>

Generate a video compilation of the regions in <files> that match <query>. Creates an index for each file using the current model if it does not exist.

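For example, to compile every region matching "ocean" into a single supercut (the output name is illustrative):

$ thingscoop filter ocean koyaanisqatsi.mp4 -o ocean_supercut.mp4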

thingscoop sort <file>

Create a compilation video showing examples for every label recognized in the video (in alphabetical order). Creates an index for <file> using the current model if it does not exist.

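For example (the flag value is illustrative):

$ thingscoop sort koyaanisqatsi.mp4 --max-section-length 3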

thingscoop describe <file>

Print every label that appears in <file> along with the number of times it appears. Creates an index for <file> using the current model if it does not exist.

Example output:

$ thingscoop describe koyaanisqatsi.mp4 -m googlenet_places
sky 405
skyscraper 363
canyon 141
office_building 130
highway 78
lighthouse 66
hospital 64
desert 59
shower 49
volcano 45
underwater 44
airport_terminal 43
fountain 39
runway 36
assembly_line 35
aquarium 34
fire_escape 34
music_studio 32
bar 28
amusement_park 28
stage 26
wheat_field 25
butchers_shop 25
engine_room 24
slum 20
butte 20
igloo 20
...etc

thingscoop preview <file>

Create a window that plays the input video <file> while also displaying the labels the model recognizes on every frame.

thingscoop index <file>

Create an index for <file> using the current model if it does not exist.
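
For example, to index a file while classifying two frames per second (the file name is illustrative):

$ thingscoop index waking_life.mp4 -s 2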

thingscoop models list

List all models currently available in Thingscoop.

$ thingscoop models list
googlenet_imagenet            Model described in the paper "Going Deeper with Convolutions" trained on the ImageNet database
googlenet_places              Model described in the paper "Going Deeper with Convolutions" trained on the MIT Places database
vgg_imagenet                  16-layer model described in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" trained on the ImageNet database

thingscoop models info <model>

Print more detailed information about <model>.

$ thingscoop models info googlenet_places
Name: googlenet_places
Description: Model described in the paper "Going Deeper with Convolutions" trained on the MIT Places database
Dataset: MIT Places

thingscoop models freeze

List all models that have already been downloaded.

$ thingscoop models freeze
googlenet_places
vgg_imagenet

thingscoop models current

Print the model that is currently in use.

$ thingscoop models current
googlenet_places

thingscoop models use <model>

Set the current model to <model>. Downloads that model locally if it hasn't been downloaded already.

thingscoop models download <model>

Download the model <model> locally.

thingscoop models remove <model>

Remove the locally stored model <model>.

thingscoop models clear

Remove all models stored locally.

thingscoop labels list

Print all the labels used by the current model.

$ thingscoop labels list
abacus
abaya
abstraction
academic gown
accessory
accordion
acorn
acorn squash
acoustic guitar
act
actinic radiation
action
activity
adhesive bandage
adjudicator
administrative district
admiral
adornment
adventurer
advocate
...

thingscoop labels search <regexp>

Print all the labels supported by the current model that match the regular expression <regexp>.

$ thingscoop labels search instrument$
beating-reed instrument
bowed stringed instrument
double-reed instrument
free-reed instrument
instrument
keyboard instrument
measuring instrument
medical instrument
musical instrument
navigational instrument
negotiable instrument
optical instrument
percussion instrument
scientific instrument
stringed instrument
surveying instrument
wind instrument
...

Full usage options

thingscoop - Command-line utility for searching and filtering videos based on their content

Usage:
  thingscoop filter <query> <files>... [-o <output_path>] [-m <model>] [-s <sr>] [-c <mc>] [--recreate-index] [--gpu-mode] [--open]
  thingscoop search <query> <files>... [-o <output_path>] [-m <model>] [-s <sr>] [-c <mc>] [--recreate-index] [--gpu-mode] 
  thingscoop describe <file> [-n <words>] [-m <model>] [--recreate-index] [--gpu-mode] [-c <mc>]
  thingscoop index <files> [-m <model>] [-s <sr>] [-c <mc>] [-r <ocr>] [--recreate-index] [--gpu-mode] 
  thingscoop sort <file> [-m <model>] [--gpu-mode] [--min-confidence <ct>] [--max-section-length <ms>] [-i <ignore>] [--open]
  thingscoop preview <file> [-m <model>] [--gpu-mode] [--min-confidence <ct>]
  thingscoop labels list [-m <model>]
  thingscoop labels search <regexp> [-m <model>]
  thingscoop models list
  thingscoop models info <model>
  thingscoop models freeze
  thingscoop models current
  thingscoop models use <model>
  thingscoop models download <model>
  thingscoop models remove <model>
  thingscoop models clear

Options:
  --version                       Show version.
  -h --help                       Show this screen.
  -o --output <dst>               Output file for supercut
  -s --sample-rate <sr>           How many frames to classify per second (default = 1)
  -c --min-confidence <mc>        Minimum prediction confidence required to consider a label (default depends on model)
  -m --model <model>              Model to use (use 'thingscoop models list' to see all available models)
  -n --number-of-words <words>    Number of words to describe the video with (default = 5)
  -t --max-section-length <ms>    Max number of seconds to show examples of a label in the sorted video (default = 5)
  -r --min-occurrences <ocr>      Minimum number of occurrences of a label in video required for it to be shown in the sorted video (default = 2)
  -i --ignore-labels <labels>     Labels to ignore when creating the sorted video
  --title <title>                 Title to show at the beginning of the video (sort mode only)
  --gpu-mode                      Enable GPU mode
  --recreate-index                Recreate object index for file if it already exists
  --open                          Open filtered video after creating it (OS X only)
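
As a fuller example combining several of these options (the file and output names are illustrative):

$ thingscoop filter 'sky && !skyscraper' koyaanisqatsi.mp4 -m googlenet_places --gpu-mode -o sky_supercut.mp4 --open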

CHANGELOG

0.2 (8/16/2015)

  • Added sort option for creating a video compilation of all labels appearing in a video
  • Now using JSON for the index files

0.1 (8/5/2015)

  • Conception

License

MIT

Owner
Anastasis Germanidis