TensorFlow-LiveLessons - "Deep Learning with TensorFlow" LiveLessons

Overview


Note that the second edition of this video series is now available here. The second edition contains all of the content from this (first) edition plus quite a bit more, as well as updated library versions.

This repository is home to the code that accompanies Jon Krohn's:

  1. Deep Learning with TensorFlow LiveLessons (summary blog post here)
  2. Deep Learning for Natural Language Processing LiveLessons (summary blog post here)
  3. Deep Reinforcement Learning and GANs LiveLessons (summary blog post here)

The above order is the recommended sequence in which to undertake these LiveLessons. That said, Deep Learning with TensorFlow provides a sufficient theoretical and practical background for the other LiveLessons.

Prerequisites

Command Line

Working through these LiveLessons will be easiest if you are familiar with Unix command-line basics. A tutorial covering these fundamentals can be found here.

Python for Data Analysis

In addition, if you're unfamiliar with using Python for data analysis (e.g., the pandas, scikit-learn, matplotlib packages), the data analyst path of DataQuest will quickly get you up to speed -- steps one (Introduction to Python) and two (Intermediate Python and Pandas) provide the bulk of the essentials.

Installation

Step-by-step guides for running the code in this repository can be found in the installation directory.

Notebooks

All of the code that I cover in the LiveLessons can be found in this directory as Jupyter notebooks.

Below is the lesson-by-lesson sequence in which I covered them:

Deep Learning with TensorFlow LiveLessons

Lesson One: Introduction to Deep Learning

1.1 Neural Networks and Deep Learning
  • via analogy to their biological inspirations, this section introduces Artificial Neural Networks and how they developed into the predominantly deep architectures of today
1.2 Running the Code in These LiveLessons
1.3 An Introductory Artificial Neural Network
  • get your hands dirty with a simple-as-possible neural network (shallow_net_in_keras.ipynb; a minimal sketch follows this list) for classifying handwritten digits
  • introduces Jupyter notebooks and their most useful hot keys
  • introduces a gentle quantity of deep learning terminology by whiteboarding through:
    • the MNIST digit data set
    • the preprocessing of images for analysis with a neural network
    • a shallow network architecture
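
A minimal sketch in the spirit of shallow_net_in_keras.ipynb (the hidden-layer size, optimizer, and training settings below are illustrative assumptions, not necessarily the notebook's exact choices):

```python
# shallow MNIST classifier: preprocess images, then train a one-hidden-layer net
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# flatten the 28x28 images and scale pixel values to [0, 1]
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_test = X_test.reshape(10000, 784).astype('float32') / 255

# one-hot encode the ten digit classes
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# a single hidden layer is what makes this net "shallow"
model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=20,
          validation_data=(X_test, y_test))
```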

Lesson Two: How Deep Learning Works

2.1 The Families of Deep Neural Nets and their Applications
  • talk through the function and popular applications of the predominant modern families of deep neural nets:
    • Dense / Fully-Connected
    • Convolutional Networks (ConvNets)
    • Recurrent Neural Networks (RNNs) / Long Short-Term Memory units (LSTMs)
    • Reinforcement Learning
    • Generative Adversarial Networks
2.2 Essential Theory I -- Neural Units
  • the following essential deep learning concepts are explained with intuitive, graphical explanations:
    • neural units and activation functions
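
As a concrete illustration, a sketch of a single sigmoid neural unit in plain Python (the input and weight values are arbitrary):

```python
# a single sigmoid neural unit: a weighted sum of inputs plus a bias,
# squashed into (0, 1) by the activation function
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])  # inputs (arbitrary example values)
w = np.array([0.9, 0.2, -0.5])  # weights (learned during training in a real net)
b = 0.1                         # bias

a = sigmoid(np.dot(w, x) + b)   # the unit's activation
print(a)
```
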
2.3 Essential Theory II -- Cost Functions, Gradient Descent, and Backpropagation
2.4 TensorFlow Playground -- Visualizing a Deep Net in Action
2.5 Data Sets for Deep Learning
  • overview of canonical data sets for image classification and meta-resources for data sets ideally suited to deep learning
2.6 Applying Deep Net Theory to Code I
  • apply the theory learned throughout Lesson Two to create an intermediate-depth image classifier (intermediate_net_in_keras.ipynb)
  • builds on, and greatly outperforms, the shallow architecture from Section 1.3

Lesson Three: Convolutional Networks

3.1 Essential Theory III -- Mini-Batches, Unstable Gradients, and Avoiding Overfitting
  • add to our state-of-the-art deep learning toolkit by delving further into essential theory, specifically:
    • weight initialization
      • uniform
      • normal
      • Xavier Glorot
    • stochastic gradient descent
      • learning rate
      • batch size
      • momentum-based and adaptive optimizers
        • momentum
        • Adam
    • unstable gradients
      • vanishing
      • exploding
    • avoiding overfitting / model generalization
      • L1/L2 regularization
      • dropout
      • artificial data set expansion
    • batch normalization
    • more layers
      • max-pooling
      • flatten
3.2 Applying Deep Net Theory to Code II
  • apply the theory learned in the previous section to create a deep, dense net for image classification (deep_net_in_keras.ipynb)
  • builds on, and outperforms, the intermediate architecture from Section 2.6
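
A sketch of how the Section 3.1 toolkit could come together in a deep_net_in_keras.ipynb-style model; the layer sizes and dropout rate are illustrative assumptions:

```python
# deeper dense MNIST classifier applying Glorot weight initialization,
# batch normalization, and dropout from Section 3.1
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization

model = Sequential()
model.add(Dense(64, activation='relu',
                kernel_initializer='glorot_uniform', input_shape=(784,)))
model.add(BatchNormalization())
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))  # randomly silence 20% of units to curb overfitting
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```
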
3.3 Introduction to Convolutional Neural Networks for Visual Recognition
  • whiteboard through an intuitive explanation of what convolutional layers are and why they're so effective
3.4 Classic ConvNet Architectures -- LeNet-5
  • apply the theory learned in the previous section to create a deep convolutional net for image classification (lenet_in_keras.ipynb) that is inspired by the classic LeNet-5 neural network introduced in Section 1.1
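
A hedged sketch of a LeNet-5-inspired convnet along the lines of lenet_in_keras.ipynb; the filter counts and dense-layer size are assumptions:

```python
# LeNet-5-inspired convnet for MNIST: conv and pooling layers extract
# spatial features, then dense layers classify them
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(28, 28, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))  # downsample the feature maps
model.add(Dropout(0.25))
model.add(Flatten())                       # 2-D feature maps -> 1-D vector
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```
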
3.5 Classic ConvNet Architectures -- AlexNet and VGGNet
3.6 TensorBoard and the Interpretation of Model Outputs
  • return to the networks from the previous section, adding code to output results to the TensorBoard deep learning results-visualization tool
  • explore TensorBoard and explain how to interpret model results within it
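
Continuing from the LeNet sketch above, logging to TensorBoard is a one-line Keras callback (the log-directory name is an arbitrary choice):

```python
# write training and validation metrics to disk for TensorBoard to display
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='logs/lenet')
model.fit(X_train, y_train, batch_size=128, epochs=10,
          validation_data=(X_test, y_test), callbacks=[tensorboard])
# then launch the viewer from the command line:  tensorboard --logdir=logs
```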

Lesson Four: Introduction to TensorFlow

4.1 Comparison of the Leading Deep Learning Libraries
  • discuss the relative strengths, weaknesses, and common applications of the leading deep learning libraries:
    • Caffe
    • Torch
    • Theano
    • TensorFlow
    • and the high-level APIs TFLearn and Keras
  • conclude that, for the broadest set of applications, TensorFlow is the best option
4.2 Introduction to TensorFlow
4.3 Fitting Models in TensorFlow
4.4 Dense Nets in TensorFlow
4.5 Deep Convolutional Nets in TensorFlow
  • create a deep convolutional neural net (lenet_in_tensorflow.ipynb) in TensorFlow with an architecture identical to the LeNet-inspired one built in Keras in Section 3.4
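
A sketch of the first convolutional block in low-level TensorFlow, assuming the TensorFlow 1.x graph API (reachable via tf.compat.v1 on modern installs); shapes mirror the Keras LeNet of Section 3.4:

```python
# low-level TensorFlow: explicitly create the weights, convolution, and
# pooling that Keras's Conv2D/MaxPooling2D layers wrap
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # batch of MNIST images

# 32 filters of size 3x3, Glorot-initialized as in the Keras version
W1 = tf.get_variable('W1', [3, 3, 1, 32],
                     initializer=tf.glorot_uniform_initializer())
b1 = tf.get_variable('b1', [32], initializer=tf.zeros_initializer())

conv1 = tf.nn.relu(tf.nn.conv2d(x, W1, strides=[1, 1, 1, 1],
                                padding='VALID') + b1)
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1],
                       strides=[1, 2, 2, 1], padding='VALID')
# ...the remaining conv, flatten, and dense layers follow the same pattern,
# with training run inside a tf.Session
```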

Lesson Five: Improving Deep Networks

5.1 Improving Performance and Tuning Hyperparameters
  • detail systematic steps for improving the performance of deep neural nets, including by tuning hyperparameters
5.2 How to Build Your Own Deep Learning Project
  • specific steps for designing and evaluating your own deep learning project
5.3 Resources for Self-Study
  • topics worth investing time in to become an expert deployer of deep learning models


Deep Learning for Natural Language Processing

Lesson One: The Power and Elegance of Deep Learning for NLP

1.1 Introduction to Deep Learning for Natural Language Processing
  • high-level overview of deep learning as it pertains to Natural Language Processing (NLP)
  • influential examples of industrial applications of NLP
  • timeline of contemporary breakthroughs that have brought Deep Learning approaches to the forefront of NLP research and development
1.2 Computational Representations of Natural Language Elements
  • introduce the elements of natural language
  • contrast how these elements are represented by traditional machine-learning models and emergent deep-learning models
1.3 NLP Applications
  • specify common NLP applications and bucket them into three tiers of relative complexity
1.4 Installation, Including GPU Considerations
1.5 Review of Prerequisite Deep Learning Theory
1.6 A Sneak Peek
  • take a tantalising look ahead at the capabilities developed over the course of these LiveLessons

Lesson Two: Word Vectors

2.1 Vector-Space Embedding
  • leverage interactive demos to enable an intuitive understanding of vector-space embeddings of words, nuanced quantitative representations of word meaning
2.2 word2vec
  • key papers that led to the development of word2vec, a technique for transforming natural language into vector representations
  • essential word2vec theory introduced:
    • architectures:
      1. Skip-Gram
      2. Continuous Bag of Words
    • training algorithms:
      1. hierarchical softmax
      2. negative sampling
    • evaluation perspectives:
      1. intrinsic
      2. extrinsic
    • hyperparameters:
      1. number of dimensions
      2. context-word window size
      3. number of iterations
      4. size of data set
  • contrast word2vec with its leading alternative, GloVe
2.3 Data Sets for NLP
2.4 Creating Word Vectors with word2vec
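
For Section 2.4, word vectors can be trained in a few lines with gensim; this sketch assumes gensim 4.x (in gensim 3.x the arguments are size= and iter= instead) and maps directly onto the Section 2.2 hyperparameters:

```python
# train skip-gram word2vec with negative sampling on a toy tokenized corpus
from gensim.models import Word2Vec

sentences = [['deep', 'learning', 'is', 'fun'],
             ['natural', 'language', 'processing', 'is', 'fun']]

model = Word2Vec(sentences,
                 vector_size=64,  # number of dimensions
                 window=5,        # context-word window size
                 sg=1,            # 1 = Skip-Gram, 0 = Continuous Bag of Words
                 negative=5,      # negative sampling (hs=1, negative=0 for hierarchical softmax)
                 epochs=5,        # number of iterations over the corpus
                 min_count=1)     # keep every word in this tiny corpus

print(model.wv.most_similar('fun'))  # a quick intrinsic evaluation
```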

Lesson Three: Modeling Natural Language Data

3.1 Best Practices for Preprocessing Natural Language Data
  • in natural_language_preprocessing_best_practices.ipynb, apply the following recommended best practices to clean up a corpus of natural language data prior to modeling (sketched after this list):
    • tokenize
    • convert all characters to lowercase
    • remove stopwords
    • remove punctuation
    • stem words
    • handle bigram (and trigram) word collocations
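
A sketch of these steps with NLTK and gensim (it assumes the NLTK 'punkt' and 'stopwords' data packages; the toy corpus is illustrative):

```python
# tokenize, lowercase, strip stopwords/punctuation, stem, and detect bigrams
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from gensim.models.phrases import Phrases, Phraser

nltk.download('punkt')
nltk.download('stopwords')

corpus = ["The quick brown foxes are jumping over the lazy dogs.",
          "I visited New York and New York was wonderful."]

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

cleaned = []
for doc in corpus:
    tokens = nltk.word_tokenize(doc.lower())           # tokenize + lowercase
    tokens = [t for t in tokens
              if t not in stop_words                   # remove stopwords
              and t not in string.punctuation]         # remove punctuation
    cleaned.append([stemmer.stem(t) for t in tokens])  # stem words

# detect bigram collocations (e.g., new_york) across the whole corpus
bigram = Phraser(Phrases(cleaned, min_count=1, threshold=1))
cleaned = [bigram[doc] for doc in cleaned]
print(cleaned)
```
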
3.2 The Area Under the ROC Curve
  • detail the calculation and functionality of the area under the Receiver Operating Characteristic curve summary metric, which is used throughout the remainder of the LiveLessons for evaluating model performance
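
As a tiny worked example of the metric (the values are made up):

```python
# ROC AUC with scikit-learn: 0.5 is chance, 1.0 is a perfect ranking
from sklearn.metrics import roc_auc_score

y_valid = [0, 0, 1, 1]         # true binary labels
y_hat = [0.1, 0.4, 0.35, 0.8]  # a model's predicted probabilities
print(roc_auc_score(y_valid, y_hat))  # 0.75
```
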
3.3 Dense Neural Network Classification
3.4 Convolutional Neural Network Classification

Lesson Four: Recurrent Neural Networks

4.1 Essential Theory of RNNs
  • provide an intuitive understanding of Recurrent Neural Networks (RNNs), which are trained via backpropagation through time and are well suited to sequential data such as natural language and financial time series
4.2 RNNs in Practice
  • incorporate simple RNN layers into a model that classifies documents by their sentiment (rnn_in_keras.ipynb)
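
A sketch in the spirit of rnn_in_keras.ipynb; the vocabulary size, sequence length, and layer sizes are illustrative assumptions:

```python
# sentiment classifier: embed word indices, run a simple RNN over the
# sequence, and predict positive vs. negative with a sigmoid output
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

model = Sequential()
model.add(Embedding(5000, 64, input_length=100))  # 5000-word vocabulary
model.add(SimpleRNN(256))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```
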
4.3 Essential Theory of LSTMs and GRUs
  • develop familiarity with the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) varieties of RNNs, which provide markedly more productive modeling of sequential data with deep learning models
4.4 LSTMs and GRUs in Practice

Lesson Five: Advanced Models

5.1 Bi-Directional LSTMs
  • Bi-directional LSTMs are an especially potent variant of the LSTM
  • high-level theory on Bi-LSTMs before leveraging them in practice (bidirectional_lstm.ipynb)
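
In Keras, the change from the Section 4.2 classifier is essentially a one-line wrapper (sizes again illustrative):

```python
# Bidirectional runs one LSTM forwards and one backwards over the sequence,
# concatenating their outputs
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

model = Sequential()
model.add(Embedding(5000, 64, input_length=100))
model.add(Bidirectional(LSTM(256)))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```
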
5.2 Stacked LSTMs
5.3 Parallel Network Architectures
  • advanced data modeling capabilities are possible with non-sequential architectures, e.g., parallel convolutional layers, each with unique hyperparameters (multi_convnet_architectures.ipynb)
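
A sketch of such a non-sequential architecture with the Keras functional API, in the spirit of multi_convnet_architectures.ipynb; the filter lengths and counts are assumptions:

```python
# three parallel convolutional "towers", each with a different filter length,
# merged before classification
from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          concatenate, Dense)

inputs = Input(shape=(100,))
embedded = Embedding(5000, 64)(inputs)

towers = []
for k in (2, 3, 4):  # a unique kernel size per parallel branch
    conv = Conv1D(filters=128, kernel_size=k, activation='relu')(embedded)
    towers.append(GlobalMaxPooling1D()(conv))

merged = concatenate(towers)  # join the parallel branches
outputs = Dense(1, activation='sigmoid')(merged)

model = Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```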


Deep Reinforcement Learning and GANs

Lesson One: The Foundations of Artificial Intelligence

1.1 The Contemporary State of AI
  • examine what the term "Artificial Intelligence" means and how it relates to deep learning
  • define narrow, general, and super intelligence
1.2 Applications of Generative Adversarial Networks
  • uncover the rapidly improving quality of Generative Adversarial Networks for creating compelling novel imagery in the style of images created by humans
  • involves the fun, interactive pix2pix tool
1.3 Applications of Deep Reinforcement Learning
  • distinguish supervised and unsupervised learning from reinforcement learning
  • provide an overview of the seminal contemporary deep reinforcement learning breakthroughs, including:
    • the Deep Q-Learning algorithm
    • AlphaGo
    • AlphaGo Zero
    • AlphaZero
    • robotics advances
  • introduce the most popular deep reinforcement learning environments
1.4 Running the Code in these LiveLessons
1.5 Review of Prerequisite Deep Learning Theory

Lesson Two: Generative Adversarial Networks (GANs)

2.1 Essential GAN Theory
  • cover the high-level theory of what GANs are and how they are able to generate realistic-looking images
2.2 The “Quick, Draw!” Game Dataset
  • show the Quick, Draw! game, which we use as the source of hundreds of thousands of hand-drawn images from a single class for a GAN to learn to imitate
2.3 A Discriminator Network
2.4 A Generator Network
2.5 Training an Adversarial Network
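
A compact skeleton tying Sections 2.3 through 2.5 together; every layer size and hyperparameter below is an illustrative assumption, not the notebooks' exact architecture:

```python
# GAN skeleton: a discriminator (2.3), a generator (2.4), and the combined
# adversarial model that trains the generator to fool the discriminator (2.5)
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense, LeakyReLU, Input
from keras.optimizers import Adam

z_dim, img_dim = 100, 784  # noise-vector length; flattened 28x28 image

discriminator = Sequential([
    Dense(128, input_dim=img_dim), LeakyReLU(0.2),
    Dense(1, activation='sigmoid')])  # real vs. fake
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

generator = Sequential([
    Dense(128, input_dim=z_dim), LeakyReLU(0.2),
    Dense(img_dim, activation='tanh')])  # noise in, image out

# freeze the discriminator inside the combined model so only the
# generator's weights update when the adversarial model trains
discriminator.trainable = False
z = Input(shape=(z_dim,))
gan = Model(z, discriminator(generator(z)))
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

def train_step(real_images, batch_size=64):
    """One illustrative training step on a batch of scaled real images."""
    noise = np.random.normal(0, 1, (batch_size, z_dim))
    fake_images = generator.predict(noise)
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    gan.train_on_batch(noise, np.ones((batch_size, 1)))  # generator aims for "real"
```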

Lesson Three: Deep Q-Learning Networks (DQNs)

3.1 The Cartpole Game
  • introduce the Cartpole Game, an environment provided by OpenAI and used throughout the remainder of these LiveLessons to train deep reinforcement learning algorithms
3.2 Essential Deep RL Theory
  • delve into the essential theory of deep reinforcement learning in general
3.3 Essential DQN Theory
  • delve into the essential theory of Deep Q-Learning networks, a particularly popular type of deep reinforcement learning algorithm
3.4 Defining a DQN Agent
3.5 Interacting with an OpenAI Gym Environment
  • leverage OpenAI Gym to enable our Deep Q-Learning agent to master the Cartpole Game (completing cartpole_dqn.ipynb)
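
A minimal DQN-style sketch for CartPole, assuming the classic OpenAI Gym API ('CartPole-v0', with env.reset() returning the state alone); the hyperparameters are illustrative:

```python
# DQN essentials: a Q-network, an epsilon-greedy policy, and experience replay
import random
from collections import deque
import numpy as np
import gym
from keras.models import Sequential
from keras.layers import Dense

env = gym.make('CartPole-v0')
state_size = env.observation_space.shape[0]  # 4 state variables
action_size = env.action_space.n             # 2 actions: push left or right

model = Sequential([
    Dense(32, activation='relu', input_dim=state_size),
    Dense(32, activation='relu'),
    Dense(action_size, activation='linear')])  # one Q-value per action
model.compile(loss='mse', optimizer='adam')

memory = deque(maxlen=2000)  # experience-replay buffer
gamma, epsilon = 0.95, 1.0   # discount factor; exploration rate (decayed in practice)

def act(state):
    if np.random.rand() <= epsilon:
        return env.action_space.sample()            # explore
    return int(np.argmax(model.predict(state)[0]))  # exploit

def replay(batch_size=32):
    for state, action, reward, next_state, done in random.sample(memory, batch_size):
        target = reward if done else \
            reward + gamma * np.amax(model.predict(next_state)[0])
        target_f = model.predict(state)
        target_f[0][action] = target  # update only the taken action's Q-value
        model.fit(state, target_f, epochs=1, verbose=0)
```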

Lesson Four: OpenAI Lab

4.1 Visualizing Agent Performance
  • use the OpenAI Lab to visualise our Deep Q-Learning agent's performance in real-time
4.2 Modifying Agent Hyperparameters
  • learn to straightforwardly optimise a deep reinforcement learning agent's hyperparameters
4.3 Automated Hyperparameter Experimentation and Optimization
  • automate the search through hyperparameters to optimize our agent’s performance
4.4 Fitness
  • calculate summary metrics to gauge our agent’s overall fitness

Lesson Five: Advanced Deep Reinforcement Learning Agents

5.1 Policy Gradients and the REINFORCE Algorithm
  • at a high level, discover Policy Gradient algorithms in general and the classic REINFORCE implementation in particular
5.2 The Actor-Critic Algorithm
  • cover how Policy Gradients can be combined with Deep Q-Learning to form Actor-Critic algorithms
5.3 Software 2.0
  • discuss how deep learning is ushering in a new era of software development driven by data in place of hard-coded rules
5.4 Approaching Artificial General Intelligence
  • return to our discussion of Artificial Intelligence, specifically addressing the limitations of modern deep learning approaches

