Simple and ready-to-use tutorials for TensorFlow

Overview

TensorFlow World

To support the maintenance and upgrading of this project, please consider sponsoring the project developer.

Any level of support is a great contribution here ❤️

This repository aims to provide simple and ready-to-use tutorials for TensorFlow. The explanations are provided in the wiki associated with this repository.

Each tutorial includes source code and associated documentation.

Slack Group

Table of Contents

Motivation

There are different motivations for this open source project. TensorFlow (as of this writing) is one of the best deep learning frameworks available. The question worth asking is: why has this repository been created when there are so many other TensorFlow tutorials available on the web?

Why use TensorFlow?

Deep learning is attracting enormous interest these days, and there is a crucial need for fast, optimized implementations of its algorithms and architectures. TensorFlow is designed to facilitate exactly that.

The strong advantage of TensorFlow is its flexibility in designing highly modular models, which can also be a disadvantage for beginners, since many pieces must be considered together when creating a model.

This issue is mitigated by high-level APIs such as Keras and Slim, which abstract away many of the pieces used in designing machine learning algorithms.
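
For instance, a minimal sketch (assuming tf.keras is available; this is not code from this repository) shows how a high-level API hides graph construction, variable initialization, and the training loop behind a few calls:

    import numpy as np
    import tensorflow as tf

    # Define, compile, and fit a tiny model; the high-level API handles the
    # session, variable initialization, and the optimization loop internally.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse')

    # Fit on random toy data just to illustrate the workflow.
    x = np.random.rand(128, 4).astype('float32')
    y = np.random.rand(128, 1).astype('float32')
    model.fit(x, y, epochs=2, verbose=0)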

The interesting thing about TensorFlow is that it can be found almost everywhere these days. Many researchers and developers use it, and its community is growing rapidly. Most issues can be resolved quickly because, given the size of the TensorFlow community, someone else has usually run into the same problem already.

What's the point of this repository?

Developing an open source project just for the sake of developing something is not the reason behind this effort. Considering the large number of tutorials being added to this community, this repository was created to break the jump-in-and-jump-out pattern that afflicts most open source projects. But why, and how?

First of all, what is the point of putting effort into something that most people will never stop by and look at? What is the point of creating something that does not help anyone in the developer and researcher community? Why spend time on something that is easily forgotten? So how do we try to do it differently? Even at this very moment there are countless TensorFlow tutorials, covering either model design or the TensorFlow workflow.

Most of them are too complicated or suffer from a lack of documentation. Only a few available tutorials are concise and well-structured and provide enough insight into the specific models they implement.

The goal of this project is to help the community with structured tutorials and simple, optimized code implementations that provide better insight into how to use TensorFlow quickly and effectively.

It is worth noting that the main goal of this project is to provide well-documented tutorials and uncomplicated code!

TensorFlow Installation and Environment Setup

In order to install TensorFlow, please refer to the official TensorFlow installation guide.

Installation within a virtual environment is recommended in order to prevent package conflicts and to give you the ability to customize your working environment.
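
Once TensorFlow is installed inside the virtual environment, a quick sanity check can confirm that everything works. The following is a minimal sketch assuming the TensorFlow 1.x API that the tutorials in this repository use:

    import tensorflow as tf

    # Print the installed version and run a trivial graph as a sanity check.
    print('TensorFlow version:', tf.__version__)
    hello = tf.constant('Hello, TensorFlow!')
    with tf.Session() as sess:
        print(sess.run(hello))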

TensorFlow Tutorials

The tutorials in this repository are partitioned into relevant categories.


Warm-up

#  | Topic     | Source Code        | Documentation
1  | Start-up  | Welcome / IPython  | Documentation

Basics

#  | Topic              | Source Code                      | Documentation
2  | TensorFlow Basics  | Basic Math Operations / IPython  | Documentation
3  | TensorFlow Basics  | TensorFlow Variables / IPython   | Documentation
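
As a flavor of what these basics cover, here is a minimal sketch (not the tutorials' exact code; it assumes the TensorFlow 1.x API used throughout this repository) of basic math operations and variable initialization:

    import tensorflow as tf

    # Constant tensors and a couple of simple math operations.
    a = tf.constant(5.0)
    b = tf.constant(3.0)
    add_op = tf.add(a, b)
    mul_op = tf.multiply(a, b)

    # Variables must be initialized before they can be evaluated.
    weights = tf.Variable(tf.zeros([2, 2]), name='weights')
    init_op = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init_op)
        print(sess.run([add_op, mul_op]))
        print(sess.run(weights))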

Basic Machine Learning

#  | Topic                    | Source Code                      | Documentation
4  | Linear Models            | Linear Regression / IPython      | Documentation
5  | Predictive Models        | Logistic Regression / IPython    | Documentation
6  | Support Vector Machines  | Linear SVM / IPython             |
7  | Support Vector Machines  | MultiClass Kernel SVM / IPython  |

Neural Networks

#   | Topic                         | Source Code                              | Documentation
8   | Multi Layer Perceptron        | Simple Multi Layer Perceptron / IPython  |
9   | Convolutional Neural Network  | Simple Convolutional Neural Networks     | Documentation
10  | Autoencoder                   | Undercomplete Autoencoder                | Documentation
11  | Recurrent Neural Network      | RNN / IPython                            |

Some Useful Tutorials

Contributing

When contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change. For typos, please do not create a pull request. Instead, declare them in issues or email the repository owner.

Please note we have a code of conduct, please follow it in all your interactions with the project.

Pull Request Process

Please consider the following criteria to help us handle pull requests effectively:

  • The pull request is mainly expected to be a code script suggestion or improvement.
  • A pull request related to non-code-script sections is expected to make a significant difference to the documentation; otherwise, it should be raised in the issues section instead.
  • Ensure any install or build dependencies are removed before the end of the layer when doing a build and before creating a pull request.
  • Add comments with details of changes to the interface; this includes new environment variables, exposed ports, useful file locations, and container parameters.
  • You may merge the pull request once you have the sign-off of at least one other developer, or, if you do not have permission to do that, you may request the owner to merge it for you once you believe all checks have passed.

Final Note

We look forward to your kind feedback. Please help us improve this open source project and make our work better. To contribute, please create a pull request and we will review it promptly. Once again, we appreciate your kind feedback and thorough code reviews.

Acknowledgement

I have put great effort into this project in the hope of being a small part of the TensorFlow world. However, it would not have been possible without the kind support and help of my friend and colleague Domenick Poster and his valuable advice. He helped me gain a better understanding of TensorFlow, and my special appreciation goes to him.

Comments
  • TensorFlow

    $ git clone https://github.com/TensorFlow-World.git
    Cloning into 'TensorFlow-World'...
    remote: Not Found
    fatal: repository 'https://github.com/TensorFlow-World.git/' not found

    opened by ashu-22 13
  • RE: Policy regarding typos in codebase.

    This issue is regarding your policy regarding typos in your codebase. Here is the relevant section in your CONTRIBUTING.rst: For typos, please do not create a pull request. Instead, declare them in issues or email the repository owner.

    I suggest this policy be revised as it creates an extra step for you, the maintainer of this repo. For example, here is your current process:

    1. Contributor finds a typo.
    2. Contributor opens an issue.
    3. Repo owner reads the issue.
    4. Repo owner decides to create a code change to fix the typo and pushes the change.

    Here is the suggested process:

    1. Contributor finds a typo.
    2. Contributor creates a code change to fix the typo and creates a pull request
    3. Repo owner decides to accept the pull request and merges the changes.

    If typos can be discussed within a pull request, I don't see the point for a contributor to create an issue and then the repo owner creates a code change to fix the typo. I suggest using Github Issues to discuss lengthy proposals, but typos should be handled directly within a pull request. For example, see this Contributing guide for Github's open source guide.

    opened by adyavanapalli 4
  • Look for Python syntax errors or undefined names

    • http://flake8.pycqa.org will find syntax errors and undefined names that can halt your program.
      • --select=E901,E999,F821,F822,F823 focuses the tool on the most critical issues
    • Fxxx codes are here: http://flake8.pycqa.org/en/latest/user/error-codes.html
    • Other codes are here: https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
    • The output is here: https://travis-ci.org/astorfi/TensorFlow-World/builds/272817787

    F821 is really helpful for finding Python 2 / 3 differences but also for typos, copy/paste errors, etc.

    opened by cclauss 4
  • Update README.rst

    So, I cleaned up the grammar / spelling and got to the section about contributing to this repository.

    Based on this - it's definitely going to the top.

    Also. No. Here's your pull request.

    opened by razodactyl 3
  • logits is an undefined name in this context, should it be logits_last?

    Undefined names can raise NameError at runtime.

    https://travis-ci.org/astorfi/TensorFlow-World/jobs/272817788#L623-L626

    https://github.com/astorfi/TensorFlow-World/blob/master/codes/3-neural_networks/multi-layer-perceptron/code/test_classifier.py#L113

    opened by cclauss 2
  • train_op in linear regression

    Is defining train_op for each data point and epoch anew really needed? I'm new to TensorFlow so I can't tell why or why not this would make sense. For me, the regression seems to work fine (and much faster) if the line is removed.
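
    For reference, here is a minimal sketch of the alternative being asked about: building train_op once before the loop. It reuses the tutorial's loss() and train() helpers and assumes an already-open session sess, so it is a sketch rather than the repository's exact code.

    # Build the graph once, before the training loop.
    train_loss = loss(X, Y)
    train_op = train(train_loss)

    for epoch_num in range(num_epochs):
        for x, y in data:
            # Only execute the existing ops; no new graph nodes are created per step.
            loss_value, _ = sess.run([train_loss, train_op], feed_dict={X: x, Y: y})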

    opened by mzur 2
  • sudo apt-get install nvidia-current-updates nvidia-settings-updates error

    Hello, just wanted to say this is a great guide, but when I execute sudo apt-get install nvidia-current-updates nvidia-settings-updates it says: E: Unable to locate package nvidia-settings-updates

    can someone help me with this?

    opened by ghost 1
  • linear regression tutorial cost only reported for last data point

    I noticed in the notebook for the linear regression that the cost was only being calculated for the last piece of data in each epoch.

    with tf.Session() as sess:
    
        # Initialize the variables[w and b].
        sess.run(tf.global_variables_initializer())
    
        # Get the input tensors
        X, Y = inputs()
    
        # Return the train loss and create the train_op.
        train_loss = loss(X, Y)
        train_op = train(train_loss)
    
        # Step 8: train the model
        for epoch_num in range(num_epochs): # run 100 epochs
            for x, y in data:
              train_op = train(train_loss)
    
              # Session runs train_op to minimize loss
              loss_value,_ = sess.run([train_loss,train_op], feed_dict={X: x, Y: y})
    
            # Displaying the loss per epoch.
            print('epoch %d, loss=%f' %(epoch_num+1, loss_value))
    
            # save the values of weight and bias
            wcoeff, bias = sess.run([W, b])
    

    data is being iterated over, and the loss_value that is calculated is overwritten each time through the loop. Thus, the loss is only for the last piece of data. Since the loss needs to be computed over all of the data being used for training, the cost function should probably be something more like the following:

    def loss(X, Y):
        '''
        compute the loss by comparing the predicted value to the actual label.
        :param X: The inputs.
        :param Y: The labels.
        :return: The loss over the samples.
        '''
    
        # Making the prediction.
        Y_predicted = inference(X)
        return tf.reduce_sum(tf.squared_difference(Y, Y_predicted))/(2*data.shape[0])
    

    With this change above, the training section could be changed to the following (with the looping over data removed completely):

    with tf.Session() as sess:
    
        # Initialize the variables[w and b].
        sess.run(tf.global_variables_initializer())
    
        # Get the input tensors
        X, Y = inputs()
    
        # Return the train loss and create the train_op.
        train_loss = loss(X, Y)
        train_op = train(loss(X, Y))
    
        # Step 8: train the model
        for epoch_num in range(num_epochs): # run 100 epochs
            loss_value, _ = sess.run([train_loss,train_op], feed_dict={X: data[:,0], Y: data[:,1]})
    
            # Displaying the loss per epoch.
            print('epoch %d, loss=%f' %(epoch_num+1, loss_value))
    
            # save the values of weight and bias
            wcoeff, bias = sess.run([W, b])
    

    This would result in output like the following:

    epoch 1, loss=1573.599976
    epoch 2, loss=1332.513916
    epoch 3, loss=1128.868408
    epoch 4, loss=956.848999
    epoch 5, loss=811.544067
    

    I would be glad to submit a pull request with these and other minor changes. Please let me know if I have some misunderstanding.

    opened by mulhod 1
  • No Transformer Notebook

    Hey,

    I see that there are no tutorial notebooks for Transformer implementations in this repository yet. Transformers are used primarily in the field of natural language processing. Like recurrent neural networks, Transformers are designed to handle sequential data, such as natural language, for tasks such as translation and text summarization.

    I would like to add such tutorial notebooks.

    opened by SauravMaheshkar 0
  • docs: fix simple typo, visualiaing -> visualising

    There is a small typo in docs/tutorials/1-basics/basic_math_operations/README.rst.

    Should read visualising rather than visualiaing.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • a small mistake in doc

    In the tutorial doc of chapter 1 "Basics/variables", there might be a mistake here:

    # "variable_list_custom" is the list of variables that we want to initialize.
    variable_list_custom = [weights, custom_variable]
    
    # The initializer
    init_custom_op = tf.variables_initializer(var_list=all_variables_list)
    

    The last line of the code above should probably use var_list=variable_list_custom, not all_variables_list.
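
    In other words, the corrected initializer line would presumably read:

    # Initialize only the variables in the custom list.
    init_custom_op = tf.variables_initializer(var_list=variable_list_custom)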

    Here's the URL of the doc: https://github.com/astorfi/TensorFlow-World/tree/master/docs/tutorials/1-basics/variables#initializing-specific-variables Thank you for your repo, it helps me a lot.

    opened by Xiaokeai18 0