direpack: a Python 3 library for state-of-the-art statistical dimension reduction techniques

direpack is a scikit-learn compatible Python 3 package that provides a suite of state-of-the-art multivariate statistical methods, with a focus on dimension reduction.

The categories of methods delivered in this package are:

  • Projection pursuit dimension reduction (ppdire)
  • Sufficient dimension reduction (sudire)
  • Robust M-estimators for dimension reduction (sprm)

Each of these is provided as a scikit-learn compatible object in the corresponding folder.

We hope that this package contributes to your scientific success. If it does, we kindly ask that you cite the direpack vignette [0], as well as the original publication of the corresponding method.

The package also contains a set of tools for pre- and postprocessing:

  • The preprocessing folder provides classical and robust centring and scaling, as well as spatial sign transforms [4].
  • The dicomo folder contains a versatile class to access a wide variety of moment and co-moment statistics, and statistics derived from those. Check out the dicomo Documentation file and the dicomo Examples Notebook, as well as the minimal sketch after this list.
  • Plotting utilities in the plot folder
  • Cross-validation utilities in the cross-validation folder
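
As a small, hedged illustration of the dicomo class (the mode='var' argument mirrors the ppdire usage reported in the Comments section below; the scikit-learn style fit call is an assumption, so consult the dicomo Documentation file and Examples Notebook for the exact interface and output attribute names):

    import numpy as np
    from direpack import dicomo

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)

    # 'mode' selects the (co-)moment statistic to estimate; 'var' is the
    # variance and is also the mode used as a projection index in the
    # ppdire sketch further below.
    est = dicomo(mode='var')

    # Assumption: dicomo follows the scikit-learn fit convention on a 1D
    # sample; the attribute holding the estimated statistic is documented
    # in the dicomo Documentation file.
    est.fit(x)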

[Figure: AIG sprm score space plot]

Methods in the sprm folder

  • The SPRM estimator (sprm.py) [1] (see also the usage sketch after this list)
  • The Sparse NIPALS (SNIPLS) estimator (snipls.py) [3]
  • Robust M regression estimator (rm.py)
  • Ancillary functions for M-estimation (_m_support_functions.py)
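
Purely as a hedged sketch of fitting the SPRM estimator on synthetic data (the constructor arguments n_components and eta, as well as the predict call, are assumptions based on the SPRM publications; the exact signature is given in the sprm documentation and examples notebook):

    import numpy as np
    from direpack import sprm

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 8))
    y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(100)

    # Assumed arguments: the number of latent components and the sparsity
    # parameter eta; robustness tuning and preprocessing options are
    # described in the sprm documentation.
    model = sprm(n_components=2, eta=0.5)
    model.fit(X, y)           # scikit-learn style fit on predictors and response
    y_hat = model.predict(X)  # assumption: a standard predict method is exposed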

Methods in the ppdire folder

The ppdire class gives access to a wide range of projection pursuit dimension reduction techniques. These include slower, approximate estimates of well-established methods such as PCA, PLS and continuum regression. However, the class also provides unique access to a set of robust options, such as robust continuum regression (RCR) [5], through its native grid optimization algorithm, which was itself first published for RCR [6]. Moreover, ppdire also serves as a gateway to calculating generalized betas, using the CAPI projection index [7]. A minimal usage sketch follows the file list below.

The code is organized in:

  • ppdire.py - the main PP dimension reduction class
  • capi.py - the co-moment analysis projection index.
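
The sketch below, adapted from the reproducible example in the Comments section further down, performs an (approximate) PCA by maximizing the dicomo variance index; swapping the dicomo mode and pi_arguments gives access to the other projection indices described above. Only x_loadings_ is inspected here, since that is the output demonstrated in that example:

    import numpy as np
    from direpack import dicomo, ppdire

    rng = np.random.default_rng(2)
    X = rng.standard_normal((50, 5))

    # Constructor call mirrored from the example in the Comments section.
    pca_like = ppdire(projection_index=dicomo,
                      pi_arguments={'mode': 'var'},
                      n_components=2,
                      optimizer='SLSQP')
    pca_like.fit(X)
    print(pca_like.x_loadings_)  # estimated loadings, shape (p, n_components)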

Methods in the sudire folder

The sudire folder gives access to an extensive set of methods that fall under the umbrella of sufficient dimension reduction. These range from long-standing, well-accepted approaches, such as sliced inverse regression (SIR) and the closely related SAVE [8,9], through methods such as directional regression [10] and principal Hessian directions [11], and more. The package also contains some of the most recently developed, state-of-the-art sufficient dimension reduction techniques that require no distributional assumptions. The options provided in this category are based on energy statistics (distance covariance [12] or martingale difference divergence [13]) and ball statistics (ball covariance [14]). All of these options can be called by setting the corresponding parameters in the sudire class; cf. the docs. Note: the ball covariance option requires some lines to be uncommented, as indicated in the source. We decided not to make that option generally available, since it depends on the Ball package, which appears to be difficult to install on certain architectures.
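
Purely for illustration, here is a hedged sketch of what such a call could look like; the method identifier string, the positional constructor argument and the output attribute below are assumptions, and the exact spelling of the available options is given in the sudire documentation and examples notebook:

    import numpy as np
    from direpack import sudire

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 6))
    y = (X[:, 0] + X[:, 1]) ** 2 + 0.1 * rng.standard_normal(200)

    # Hypothetical call: 'dcov-sdr' stands in for the distance covariance
    # based option; SIR, SAVE, directional regression, PHD, MDD and ball
    # covariance are selected analogously (see the docs). Note that the
    # energy statistics based options rely on the IPOPT solver mentioned
    # in the installation notes below.
    sdr = sudire('dcov-sdr', n_components=2)
    sdr.fit(X, y)
    print(sdr.x_loadings_)  # assumed attribute holding the estimated basis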

How to install

The package is distributed through PyPI and can be installed via:

    pip install direpack

Note that some of the key methods in the sudire subpackage rely on the IPOPT optimization package which, according to its maintainers' recommendation, is best installed directly through conda:

    conda install -c conda-forge cyipopt

Documentation

  • Detailed documentation can be found on the ReadTheDocs page.
  • A more extensive description of the background is presented in the direpack vignette.
  • Examples on how to use each of the dicomo, ppdire, sprm and sudire classes are presented as Jupyter notebooks in the examples folder.
  • Furthermore, the docs folder contains a few markdown files on the usage of the classes.

References

  0. direpack: A Python 3 package for state-of-the-art statistical dimension reduction methods
  1. Sparse partial robust M regression, Irene Hoffmann, Sven Serneels, Peter Filzmoser, Christophe Croux, Chemometrics and Intelligent Laboratory Systems, 149 (2015), 50-59.
  2. Partial robust M regression, Sven Serneels, Christophe Croux, Peter Filzmoser, Pierre J. Van Espen, Chemometrics and Intelligent Laboratory Systems, 79 (2005), 55-64.
  3. Sparse and robust PLS for binary classification, I. Hoffmann, P. Filzmoser, S. Serneels, K. Varmuza, Journal of Chemometrics, 30 (2016), 153-162.
  4. Spatial Sign Preprocessing: A Simple Way To Impart Moderate Robustness to Multivariate Estimators, Sven Serneels, Evert De Nolf, Pierre J. Van Espen, Journal of Chemical Information and Modeling, 46 (2006), 1402-1409.
  5. Robust Continuum Regression, Sven Serneels, Peter Filzmoser, Christophe Croux, Pierre J. Van Espen, Chemometrics and Intelligent Laboratory Systems, 76 (2005), 197-204.
  6. Robust Multivariate Methods: The Projection Pursuit Approach, Peter Filzmoser, Sven Serneels, Christophe Croux and Pierre J. Van Espen, in: From Data and Information Analysis to Knowledge Engineering, Spiliopoulou, M., Kruse, R., Borgelt, C., Nuernberger, A. and Gaul, W., eds., Springer Verlag, Berlin, Germany, 2006, pages 270-277.
  7. Projection pursuit based generalized betas accounting for higher order co-moment effects in financial market analysis, Sven Serneels, in: JSM Proceedings, Business and Economic Statistics Section, Alexandria, VA: American Statistical Association, 2019, 3009-3035.
  8. Sliced Inverse Regression for Dimension Reduction, K.-C. Li, Journal of the American Statistical Association, 86 (1991), 316-327.
  9. Sliced Inverse Regression for Dimension Reduction: Comment, R.D. Cook and Sanford Weisberg, Journal of the American Statistical Association, 86 (1991), 328-332.
  10. On directional regression for dimension reduction, B. Li and S. Wang, Journal of the American Statistical Association, 102 (2007), 997-1008.
  11. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma, K.-C. Li, Journal of the American Statistical Association, 87 (1992), 1025-1039.
  12. Sufficient Dimension Reduction via Distance Covariance, Wenhui Sheng and Xiangrong Yin, Journal of Computational and Graphical Statistics, 25 (2016), 91-104.
  13. A martingale-difference-divergence-based estimation of central mean subspace, Yu Zhang, Jicai Liu, Yuesong Wu and Xiangzhong Fang, Statistics and Its Interface, 12 (2019), 489-501.
  14. Robust Sufficient Dimension Reduction Via Ball Covariance, Jia Zhang and Xin Chen, Computational Statistics and Data Analysis, 140 (2019), 144-154.

Release Notes can be checked out in the repository.

A list of possible topics for further development is provided as well. Additions and comments are welcome!

Comments
  • `p` should never be smaller than `n_components` in `sprm.fit`

    The variable p should never be smaller than n_components in sprm.fit; otherwise an error occurs. This is checked for at the top of fit, but p can be redefined at line 185.

    Inserting the following as line 186:

                self.n_components = min(p, self.n_components)
    

    ...appears to fix the issue, but I have not done extensive testing. It may also be advisable to raise a warning if n_components is reduced in this way.
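
    A self-contained sketch of that suggestion, written as a hypothetical standalone helper for illustration (the actual change would sit inside sprm.fit at the point where p is redefined):

        import warnings

        def clip_n_components(n_components, p):
            """Clip n_components to at most p, warning when a reduction occurs.

            Hypothetical helper mirroring the suggestion above.
            """
            if n_components > p:
                warnings.warn('n_components reduced from %d to %d so that it '
                              'does not exceed the number of variables p'
                              % (n_components, p), UserWarning)
                return p
            return n_components

        # Example: 6 components requested, but only 4 variables available.
        print(clip_n_components(6, 4))  # warns and prints 4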

    opened by MattWenham
  • gsspp.GenSpatialSignPrePprocessor().transform() is not working

    Dear sirs,

    I wanted to apply a spatial sign transform to my data when I came across your module, and found that it does not work. My code is the following:

        scaler = gsspp.GenSpatialSignPrePprocessor(center = 'kstepLTS', fun = 'ball').fit(X_train)
        X_scaled = scaler.transform(X_train)

    It does not work because scaler does not have a transform method: fit appears to return None, so there is no fitted object to call transform on. The error message is the following:

        AttributeError: 'NoneType' object has no attribute 'transform'

    maurice

    opened by shinhongwu
  • coef_ attribute expected but missing when using ppdire

    Below is reproducible code for the error. The cells marked # NB code are code blocks, while the others are Jupyter outputs.

    # NB code
    import numpy as np
    from direpack import dicomo, ppdire
    
    X = np.random.rand(5,5)
    
    reducer = ppdire(
        projection_index = dicomo,
        # mode of projection_index class defines dim reduction 'method'
        pi_arguments = {'mode' : 'var'},
        n_components=4,
        optimizer='SLSQP'
    )
    reducer.fit(X)
    reducer.x_loadings_
    
    array([[-0.36157257,  0.59084429,  0.31816485, -0.13799567],
           [-0.59046145, -0.14633256,  0.28087908, -0.57627361],
           [ 0.52330409,  0.27622013, -0.27929959, -0.75601132],
           [ 0.09839508,  0.72132604,  0.11781207,  0.27450752],
           [-0.48692072,  0.18133122, -0.85322337,  0.04425411]])
    
    # NB code
    reducer.transform(X)
    
    
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    /tmp/ipykernel_63144/911793123.py in <module>
    ----> 1 reducer.transform(X)
    
    ~/.conda/envs/prod3/lib/python3.9/site-packages/direpack/ppdire/ppdire.py in transform(self, Xn)
        759         Xn = convert_X_input(Xn)
        760         (n,p) = Xn.shape
    --> 761         if p!= self.coef_.shape[0]:
        762             raise(ValueError('New data must have seame number of columns as the ones the model has been trained with'))
        763         Xnc = scale_data(Xn,self.x_loc_,self.x_sca_)
    
    AttributeError: 'ppdire' object has no attribute 'coef_'
    

    I looked into the code, and the issue seems to come from this attribute only being created when the one-block flag is not set, while a data check in the transform and predict functions uses that attribute regardless.

    opened by nikml
  • A possible mistake in the estimation basis of SDR

    Thanks for the package you provide; I found a confusing problem. In src/direpack/sudire/sudire.py, line 489: when scaling is used, x_loadings should be set to N2 multiplied by P, not P alone, because the data have been scaled. I notice you intended to do so at line 225 in src/direpack/sudire/_sudire_utils.py (taking SIR as an example), but the x passed to that function has already been scaled, so the variable "signsqrt" in that function is always the identity matrix and cannot work as intended.

    opened by I-zhouqh