A machine learning template for projects based on the scikit-learn library.

Overview

Scikit-learn-project-template

About the project

  • Folder structure suitable for many machine learning projects, especially those with a small amount of available training data.
  • .json config file support for convenient parameter tuning.
  • Customizable command line options for more convenient parameter tuning.
  • Abstract base classes for faster development:
    • BaseOptimizer handles execution of grid search, saving and loading of models, and formation of train and test reports.
    • BaseDataLoader handles splitting of training and testing data. The split is performed according to the settings provided in the config file.
    • BaseModel handles construction of the consecutive pipeline steps defined in the config file.

Getting Started

To get a local copy up and running, follow the steps below.

Requirements

  • Python >= 3.7
  • Packages included in requirements.txt file
  • (Anaconda for easy installation)

Install dependencies

Create and activate a virtual environment:

conda create -n yourenvname python=3.7
conda activate yourenvname

Install packages:

python -m pip install -r requirements.txt

Folder Structure

sklearn-project-template/
│
├── main.py - main script to start training and (optionally) testing
│
├── base/ - abstract base classes
│   ├── base_data_loader.py
│   ├── base_model.py
│   └── base_optimizer.py
│
├── configs/ - holds configuration for training and testing
│   ├── config_classification.json
│   └── config_regression.json
│
├── data/ - default directory for storing input data
│
├── data_loaders/ - anything about data loading goes here
│   └── data_loaders.py
│
├── models/ - models
│   ├── __init__.py - defines models by name
│   └── models.py
│
├── optimizers/ - optimizers
│   └── optimizers.py
│
├── saved/ - config, model and reports are saved here
│   ├── Classification
│   └── Regression
│
├── utils/ - utility functions
│   ├── parse_config.py - class to handle config file and cli options
│   └── utils.py
│
└── wrappers/ - wrappers of modified sklearn models or self defined transforms
    ├── data_transformations.py
    └── wrappers.py

Usage

Models in this repo are trained on two well-known datasets: iris and boston. The first is used for a classification problem and the second for a regression problem.

Run classification:

python main.py -c configs/config_classification.json

Run regression:

python main.py -c configs/config_regression.json

Config file format

Config files are in .json format. An example config is shown below:

{
    "name": "Classification",   // session name

    "model": {
        "type": "Model",    // model name
        "args": {
            "pipeline": ["scaler", "PLS", "pf", "SVC"]     // pipeline of methods
        }
    },

    "tuned_parameters":[{   // parameters to be tuned with search method
                        "SVC__kernel": ["rbf"],
                        "SVC__gamma": [1e-5, 1e-6, 1],
                        "SVC__C": [1, 100, 1000],
                        "PLS__n_components": [1,2,3]
                    }],

    "optimizer": "OptimizerClassification",    // name of optimizer

    "search_method":{
        "type": "GridSearchCV",    // method used to search through parameters
        "args": {
            "refit": false,
            "n_jobs": -1,
            "verbose": 2,
            "error_score": 0
        }
    },

    "cross_validation": {
        "type": "RepeatedStratifiedKFold",     // type of cross-validation used
        "args": {
            "n_splits": 5,
            "n_repeats": 10,
            "random_state": 1
        }
    },

    "data_loader": {
        "type": "Classification",      // name of dataloader class
        "args":{
            "data_path": "data/path-to-file",    // path to data
            "shuffle": true,    // if data shuffled before optimization
            "test_split": 0.2,  // use split method for model testing
            "stratify": true,   // if data stratified before optimization
            "random_state":1    // random state for repeaded output
        }
    },

    "score": "max balanced_accuracy",     // mode and metrics used for scoring
    "test_model": true,     // if model is tested after training
    "save_dir": "saved/"    // directory of saved reports, models and configs

}

Additional parameters can be added to the config file. See the scikit-learn documentation for descriptions of the tuned parameters, search methods and cross-validation. Possible metrics for model evaluation are listed there as well.
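
Once loaded, a config is just a nested Python dict, so any value can be read with chained keys. A minimal illustration is shown below; it assumes the shipped config is plain JSON (the // comments in the example above are annotations for this readme only), while utils/parse_config.py additionally handles the CLI overrides:

import json

with open("configs/config_classification.json") as f:
    config = json.load(f)

print(config["model"]["args"]["pipeline"])              # e.g. ['scaler', 'PLS', 'pf', 'SVC']
print(config["cross_validation"]["args"]["n_repeats"])  # e.g. 10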

Pipeline

Methods added to the config pipeline must first be defined in the models/__init__.py file. For the previous example config, the following must be added:

from wrappers import *
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures

methods_dict = {
    'pf': PolynomialFeatures,
    'scaler': StandardScaler,
    'PLS': PLSRegressionWrapper,
    'SVC': SVC,
}

The majority of algorithms implemented in the scikit-learn library can be imported and used directly. Some algorithms need a small modification before usage; one such example is partial least squares (PLS), whose modification is implemented in wrappers/wrappers.py. You can also implement your own method. An example wrapper for a Savitzky-Golay filter is shown in wrappers/data_transformations.py. The implementation must satisfy the standard method calls, e.g. fit(), transform(), etc.
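
For orientation, a scikit-learn compatible transformer in the spirit of that Savitzky-Golay wrapper could look like the sketch below. The class name, parameters and use of scipy.signal.savgol_filter are illustrative assumptions; the actual implementation in wrappers/data_transformations.py may differ:

from scipy.signal import savgol_filter
from sklearn.base import BaseEstimator, TransformerMixin

class SavitzkyGolay(BaseEstimator, TransformerMixin):
    def __init__(self, window_length=7, polyorder=2):
        self.window_length = window_length
        self.polyorder = polyorder

    def fit(self, X, y=None):
        # The filter is stateless, so there is nothing to learn.
        return self

    def transform(self, X):
        # Smooth each sample (row) with a Savitzky-Golay filter.
        return savgol_filter(X, self.window_length, self.polyorder, axis=1)

Once such a class is defined and registered in methods_dict under a name of your choice (e.g. 'savgol'), it can be used as a step in the config pipeline and its parameters can be tuned via tuned_parameters (e.g. "savgol__window_length").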

Customization

Custom CLI options

Changing values in the config file is a clean, safe and easy way of tuning hyperparameters. However, sometimes it is better to have command line options when some values need to be changed often or quickly.

This template uses the configurations stored in the json file by default, but by registering custom options as follows you can change some of them using CLI flags.

import collections

# simple class-like object having 3 attributes, `flags`, `type`, `target`.
CustomArgs = collections.namedtuple('CustomArgs', 'flags type target')
options = [
    CustomArgs(['-cv', '--cross_validation'], type=int, target='cross_validation;args;n_repeats'),
    # options added here can be modified by command line flags.
]

The target argument should be a sequence of keys used to access that option in the config dict. In this example, the target for the number of repeats in cross-validation is ('cross_validation', 'args', 'n_repeats') because config['cross_validation']['args']['n_repeats'] points to the number of repeats.
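
With the option above registered (assuming the options list lives in main.py), the number of repeats from the config can then be overridden at run time, for example:

python main.py -c configs/config_classification.json -cv 20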

Data Loader

  • Writing your own data loader
  1. Inherit BaseDataLoader

    BaseDataLoader handles:

    • Train/test split procedure
    • Data shuffling
  • Usage

    Loaded data must be assigned to the data handler (dh) in an appropriate manner. If dh.X_data_test and dh.y_data_test are not assigned in advance, a train/test split can be created by the base data loader. If "test_split": 0.0 is set in the config file, the whole dataset is used for training. Another option is to assign both the train and test sets as shown below; in this case the train data is used for optimization and the test data for evaluation of the model. A sketch of a complete loader is given after this list.

    data_handler.X_data = X_train
    data_handler.y_data = y_train
    data_handler.X_data_test = X_test
    data_handler.y_data_test = y_test
  • Example

    Please refer to data_loaders/data_loaders.py for a data loading example.
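
For orientation only, a custom loader might look roughly like the sketch below. The data handler attributes (X_data, y_data) and the constructor arguments mirror this readme; everything else, including the class name, how the data handler is obtained and the CSV loading, is a hypothetical illustration, so check data_loaders/data_loaders.py for the real interface:

import pandas as pd
from base import BaseDataLoader

class MyDataLoader(BaseDataLoader):
    def __init__(self, data_path, shuffle, test_split, stratify, random_state):
        super().__init__(shuffle, test_split, stratify, random_state)  # assumed signature

        # Hypothetical input: a CSV file with a 'target' column.
        df = pd.read_csv(data_path)

        # Assign features and labels; X_data_test/y_data_test are left unset,
        # so the base data loader creates the train/test split from the config.
        self.data_handler.X_data = df.drop(columns=["target"])
        self.data_handler.y_data = df["target"]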

Optimizer

  • Writing your own optimizer
  1. Inherit BaseOptimizer

    BaseOptimizer handles:

    • Optimization procedure
    • Model saving and loading
    • Report saving
  2. Implementing abstract methods

    You need to implement fitted_model(), which must return the fitted model. Optionally, you can customize the format of the train/test reports with create_train_report() and create_test_report(). A skeleton is sketched after this list.

  • Example

    Please refer to optimizers/optimizers.py for an optimizer example.
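
A bare-bones skeleton is shown below. Only the three method names come from the template; the attributes used inside (self.model, self.X_train, self.y_train) are placeholders for whatever the base class and data loader actually provide, so see optimizers/optimizers.py for the real thing:

from base import BaseOptimizer

class MyOptimizer(BaseOptimizer):
    def fitted_model(self):
        # Must return the fitted model, e.g. the estimator refitted on the
        # training data with the best parameters found by the search
        # (attribute names below are assumptions).
        return self.model.fit(self.X_train, self.y_train)

    def create_train_report(self):
        # Optional: customize the content and format of the train report.
        return "train report"

    def create_test_report(self):
        # Optional: customize the content and format of the test report.
        return "test report"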

Model

  • Writing your own model
  1. Inherit BaseModel

    BaseModel handles:

    • Initialization of the steps defined in the config pipeline
    • Modification of steps
  2. Implementing abstract methods

    You need to implement created_model(), which must return the created model. A minimal example is sketched after this list.

  • Usage

    Initialization of the pipeline methods is performed with create_steps(). Steps can later be modified with change_step(). An example of how to change a step is shown below, where a sequential feature selector is added to the pipeline.

    # module-level imports needed by this snippet
    import numpy as np
    from sklearn.compose import TransformedTargetRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.pipeline import Pipeline

    def __init__(self, pipeline):
        steps = self.create_steps(pipeline)

        rf = RandomForestRegressor(random_state=1)
        clf = TransformedTargetRegressor(regressor=rf,
                                         func=np.log1p,
                                         inverse_func=np.expm1)
        sfs = SequentialFeatureSelector(clf, n_features_to_select=2, cv=3)

        steps = self.change_step('sfs', sfs, steps)

        self.model = Pipeline(steps=steps)

    Beware that in this case 'sfs' needs to be added to the pipeline in the config file; otherwise, no step in the pipeline is changed.

  • Example

    Please refer to models/models.py for a model example.
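
Putting the pieces together, a minimal model without any step modification might look like the sketch below (the class name is hypothetical and the actual models/models.py may differ):

from base import BaseModel
from sklearn.pipeline import Pipeline

class MyModel(BaseModel):
    def __init__(self, pipeline):
        # Build the steps from the names listed in the config pipeline.
        steps = self.create_steps(pipeline)
        self.model = Pipeline(steps=steps)

    def created_model(self):
        # Must return the created model.
        return self.model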

Roadmap

See open issues to request a feature or report a bug.

Contribution

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

How to start contributing:

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Feel free to contribute any kind of function or enhancement.

License

This project is licensed under the MIT License. See LICENSE for more details.

Acknowledgements

This project is inspired by the project pytorch-template by Victor Huang. I would like to confess that some functions, the architecture and some parts of the readme were directly copied from that repo. But to be honest, what should I do - the project is absolutely amazing!

Consider supporting

Do you feel generous today? I am still a student and would make good use of some extra money :P
