
Overview

Kaggle Competition: Forest Cover Type Prediction

In this project we predict the forest cover type (the predominant kind of tree cover) using the cartographic variables given in the training/test datasets. You can find more about this project at Forest Cover Type Prediction.

This project and its detailed notebooks were created and published on Kaggle.

Project Objective

  • We are given raw unscaled data with both numerical and categorical variables.
  • First, we performed Exploratory Data Analysis to visualize the characteristics of the given variables.
  • We trained various models on the data, using Optuna hyperparameter tuning to find parameters that maximize model accuracy.
  • Using feature engineering techniques, we built new variables to help improve the accuracy of our models.
  • Using the strategies above, we built our final model and generated forest cover type predictions for the test dataset.

Links to Detailed Notebooks

EDA Summary

The purpose of the EDA is to provide an overview of how Python visualization tools can be used to understand this large, complex dataset. EDA is the first step in the workflow, where the decision-making process for feature selection begins. Valuable insights can be obtained by looking at the distribution of the target, each feature's relationship to the target, and the links between features.

Visualize Numerical Variables

  • Using histograms, we can visualize the spread and values of the 10 numeric variables.
  • Slope, Vertical Distance to Hydrology, and the Horizontal Distances to Hydrology, Roadways, and Fire Points are all skewed right.
  • Hillshade 9am, Noon, and 3pm are all skewed left.

[Figure: histograms of the numerical variables]
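The histograms can be produced directly with pandas' built-in plotting. A minimal sketch, assuming the standard Kaggle train.csv with its published column names:

```python
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # assumed path to the competition training data

# The ten numeric (non one-hot) cartographic variables
numeric_cols = [
    "Elevation", "Aspect", "Slope",
    "Horizontal_Distance_To_Hydrology", "Vertical_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways", "Hillshade_9am", "Hillshade_Noon",
    "Hillshade_3pm", "Horizontal_Distance_To_Fire_Points",
]

# One histogram per variable to show spread and skew
train[numeric_cols].hist(bins=50, figsize=(12, 10))
plt.tight_layout()
plt.show()
```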

Visualize Categorical Variables

  • The plots below show the number of observations of the different Wilderness Areas and Soil Types.
  • Wilderness Areas 3 and 4 have the most presence.
  • Wilderness Area 2 has the fewest observations.
  • Soil Type 10 has the most observations, followed by Soil Type 29.
  • Soil Types 7 and 15 have the fewest observations.

[Figures: number of observations per Wilderness Area and per Soil Type]
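Because Wilderness_Area and Soil_Type arrive one-hot encoded, these counts can be read off by summing each indicator column. A minimal sketch, assuming the same train.csv:

```python
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # assumed path to the competition training data
wilderness = [c for c in train.columns if c.startswith("Wilderness_Area")]
soils = [c for c in train.columns if c.startswith("Soil_Type")]

# Summing a 0/1 indicator column counts the observations in that category
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
train[wilderness].sum().plot.bar(ax=axes[0], title="Observations per Wilderness Area")
train[soils].sum().plot.bar(ax=axes[1], title="Observations per Soil Type")
plt.tight_layout()
plt.show()
```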

Feature Correlation

The heatmap, which excludes the binary variables, helps us visualize the correlations between the features. We also provide scatterplots for four pairs of features with a positive correlation greater than 0.5. These are some of the many visualizations that helped us understand the characteristics of the features for later feature engineering and model selection.

[Figures: correlation heatmap and scatterplots of highly correlated feature pairs]
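A minimal sketch of how the heatmap and the correlated-pair scatterplots can be produced, assuming the Kaggle train.csv; seaborn is used here as one common choice:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # assumed path to the competition training data

# Drop the binary one-hot columns, the Id, and the target before correlating
binary = [c for c in train.columns if c.startswith(("Wilderness_Area", "Soil_Type"))]
corr = train.drop(columns=binary + ["Id", "Cover_Type"]).corr()

sns.heatmap(corr, cmap="coolwarm", center=0)
plt.show()

# Keep only the upper triangle so each pair appears once, then scatter
# the pairs with positive correlation above 0.5
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
strong_pairs = upper.stack().loc[lambda s: s > 0.5]
for (a, b), r in strong_pairs.items():
    train.plot.scatter(x=a, y=b, s=2, title=f"{a} vs {b} (r = {r:.2f})")
plt.show()
```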

Summary of Challenges

EDA Challenges

  • This project involves a large amount of data, with countless patterns and details to examine.
  • The training data was not a simple random sample of the entire dataset, but a stratified sample of the seven forest cover type classes which may not represent the final predictions well.
  • Creating a "story" that could be easily incorporated into the corresponding notebooks, such as Feature Engineering, Models, etc.
  • Manipulating Wilderness_Area and Soil_Type (one-hot encoded variables) to visualize their distributions against Cover_Type (see the sketch after this list).
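One way to handle that last challenge is to collapse the one-hot columns back into a single categorical column. A minimal sketch, assuming the standard column names:

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed path to the competition training data
soil_cols = [c for c in train.columns if c.startswith("Soil_Type")]

# idxmax returns, per row, the name of the one-hot column holding the 1
train["Soil"] = train[soil_cols].idxmax(axis=1)

# Distribution of soil types within each cover type
print(pd.crosstab(train["Soil"], train["Cover_Type"]))
```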

Feature Engineering Challenges

  • Adding new variables during feature engineering often produced lower accuracy.
  • Automated feature engineering using entities and transformations amongst existing columns from a single dataset created many new columns that did not positively contribute to the model's accuracy - even after feature selection.
  • Testing the new features produced was very time consuming, even with the GPU accelerator.
  • After experimenting with several different sets of new features, we found that including only manually created features yielded the best results (see the sketch below).
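To illustrate the kind of manually created features we mean, here is a minimal sketch; the distance combinations below are common choices for this competition and are assumptions here, not necessarily the exact features we kept (see the Feature_Engineering Notebook for those):

```python
import numpy as np
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Add hand-crafted distance features (illustrative assumptions only)."""
    df = df.copy()
    # Straight-line distance to water from the two hydrology distances
    df["Euclidean_Distance_To_Hydrology"] = np.hypot(
        df["Horizontal_Distance_To_Hydrology"],
        df["Vertical_Distance_To_Hydrology"],
    )
    # Interactions between the long-range horizontal distances
    df["Hydrology_Plus_Fire"] = (df["Horizontal_Distance_To_Hydrology"]
                                 + df["Horizontal_Distance_To_Fire_Points"])
    df["Hydrology_Minus_Road"] = (df["Horizontal_Distance_To_Hydrology"]
                                  - df["Horizontal_Distance_To_Roadways"])
    return df
```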

Modeling Challenges

  • Ensemble and stacking methods initially resulted in models yielding higher accuracy on the test set, but as we added features and refined the parameters for each individual model, an individual model yielded a better score on the test set.
  • Performing hyperparameter tuning and training for several of the models was computationally expensive. While we were able to enable GPU acceleration for the XGBoost model, activating the GPU accelerator seemed to increase tuning and training time for the other models in the training notebook.
  • Optuna reduced the time needed to process hyperparameter trials, but some of the hyperparameters it identified yielded weaker models than those found through GridSearchCV; a balance between the two was needed (a sketch of the Optuna setup follows this list).
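As a reference point for the Optuna approach, here is a minimal sketch of a study for one model; the search space shown is hypothetical, and the ranges we actually used are in the ModelParams Notebook:

```python
import optuna
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")  # assumed path to the competition training data
X = train.drop(columns=["Id", "Cover_Type"])
y = train["Cover_Type"]

def objective(trial):
    # Hypothetical search space for illustration only
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 5, 50),
        "min_samples_split": trial.suggest_int("min_samples_split", 2, 10),
    }
    model = RandomForestClassifier(**params, n_jobs=-1, random_state=0)
    # Mean cross-validated accuracy is the value Optuna maximizes
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```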

Summary of Modeling Techniques

We used several modeling techniques for this project. We began by training simple, standard models and applying the predictions to the test set. This resulted in models with only 50%-60% accuracy, necessitating more complex methods. The following process was used to develop the final model:

  • Scaling the training data to perform PCA and identify the most important features (see the Feature_Engineering Notebook for more detail).
  • Preprocessing the training data to add in new features.
  • Performing GridSearchCV and the Optuna approach (see the ModelParams Notebook for more detail) to identify optimal parameters for the following models, with corresponding training-set accuracy scores:
    • Logistic Regression (.7126)
    • Decision Tree (.9808)
    • Random Forest (1.0)
    • Extra Tree Classifier (1.0)
    • Gradient Boosting Classifier (1.0)
    • Extreme Gradient Boosting Classifier (using GPU acceleration; 1.0)
    • AdaBoost Classifier (.5123)
    • Light Gradient Boosting Classifier (.8923)
    • Ensemble/Voting Classifiers (assorted combinations of the above models; 1.0)
  • Saving and exporting the preprocessor/scaler and each version of the model with the highest accuracy on the training set and the highest cross-validation score (see the Training notebook for more detail, and the sketch after this list).
  • Calculating each model's predictions for the test set and submitting to determine accuracy on the test set:
    • Logistic Regression (.6020)
    • Decision Tree (.7102)
    • Random Forest (.7465)
    • Extra Tree Classifier (.7962)
    • Gradient Boosting Classifier (.7905)
    • Extreme Gradient Boosting Classifier (using GPU acceleration; .7803)
    • AdaBoost Classifier (.1583)
    • Light Gradient Boosting Classifier (.6891)
    • Ensemble/Voting Classifier (assorted combinations of the above models; .7952)
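Putting the pieces together, a minimal sketch of the end-to-end flow for the best single model: fit a scaler-plus-classifier pipeline, persist it with joblib, and write a submission file. The hyperparameters and file names here are assumptions; the exact preprocessing lives in the notebooks:

```python
import joblib
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

train = pd.read_csv("train.csv")  # assumed paths to the competition data
X = train.drop(columns=["Id", "Cover_Type"])
y = train["Cover_Type"]

# Bundle preprocessing and model so both are saved/loaded together
model = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", ExtraTreesClassifier(n_estimators=500, n_jobs=-1, random_state=0)),
])
model.fit(X, y)
joblib.dump(model, "extra_trees_final.joblib")  # hypothetical file name

# Predict on the test set and write a Kaggle submission file
test = pd.read_csv("test.csv")
pred = model.predict(test.drop(columns=["Id"]))
pd.DataFrame({"Id": test["Id"], "Cover_Type": pred}).to_csv(
    "submission.csv", index=False)
```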

Summary of Final Results

The model with the highest accuracy on the out-of-sample (test set) data was selected as our final model. Notably, the model with the highest accuracy according to 10-fold cross-validation was not the most accurate model on the out-of-sample data (although it was close). The best model was the Extra Trees Classifier, with an accuracy of .7962 on the test set. It outperformed our Ensemble model (.7952), which had been our best model for several weeks. See the Submission Notebook and FinalModelEvaluation Notebook for additional detail.

Owner
Marianne Joy Leano
A recent graduate with a Master's in Data Science. Excited to explore data and create projects!