A package and script to perform imaging transcriptomics on a neuroimaging scan.

Overview

Imaging Transcriptomics



Imaging transcriptomics is a methodology for identifying patterns of correlation between gene expression and some property of brain structure or function as measured by neuroimaging (e.g., MRI, fMRI, PET).


The imaging-transcriptomics package allows you to perform imaging transcriptomics analysis on a neuroimaging scan (e.g., PET, MRI, fMRI, ...).

The software is implemented in Python 3 (v3.7). Its source code is available on GitHub, it can be installed via PyPI, and it is released under the GPL v3 license.

NOTE Versions from v1.0.0 onwards are, or will be, maintained. The original script linked in the bioRxiv preprint (v0.0) is still available on GitHub, but no changes will be made to that code. If you have downloaded or used that script, please update by installing this newer version.

Installation

NOTE We recommend installing the package in a dedicated environment of your choice (e.g., venv or anaconda). Once you have created and activated your environment, you can follow the guide below to install the package and its dependencies. This avoids clashes between conflicting packages during/after installation.
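
For example, a dedicated environment can be created and activated with venv as follows (a minimal sketch, assuming a Unix-like shell and that python3 points to a Python 3.7+ interpreter; the environment name imt-env is arbitrary):

python3 -m venv imt-env
source imt-env/bin/activate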

To install the imaging-transcriptomics Python package, you first need to install a package that can't be installed directly from PyPI and must instead be downloaded from GitHub: pypyls. To install it you can follow the instructions in its documentation, or simply run the command

pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls

to download the package and its dependencies directly from GitHub using pip.

Once this package is installed, you can install the imaging-transcriptomics package by running

pip install imaging-transcriptomics
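
As an optional sanity check, you can verify that the package imports correctly from your activated environment:

python -c "import imaging_transcriptomics"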

Usage

Once installed, the software can be used in two ways:

  • as a standalone script
  • as part of a Python script

WARNING Before running the script, make sure the Python environment where you have installed the package is activated.

Standalone script


To run the standalone script from the terminal use the command:

imagingtranscriptomics options

The options available are:

  • -i (--input): Path to the imaging file to analyse. The path should be given to the program as an absolute path (e.g., /Users/myusername/Documents/my_scan.nii), since a relative path could cause permission errors and crashes. The script only accepts imaging files in the NIfTI format (.nii, .nii.gz).
  • -v (--variance): Amount of variance that the PLS components must explain. This MUST be in the range 0-100.

    NOTE: if the variance given as input is in the range 0-1, the script treats it as a fraction and handles it exactly as it would the equivalent percentage in the range 1-100 (e.g., the script treats the inputs -v 30 and -v 0.3 in the exact same way, and the resulting components will explain 30% of the variance).

  • -n (--ncomp): Number of components to be used in the PLS regression. The number MUST be in the range 1-15.
  • --corr: Run the analysis using Spearman correlation instead of PLS.

    NOTE: if you run with the --corr flag, no other input is required apart from the input scan (-i).

  • -o (--output) (optional): Path where the results should be saved. If none is provided, the results will be saved in the same directory as the input scan.

WARNING: The -i flag is MANDATORY to run the script, and so is exactly one of the -n or -v flags. These two are mutually exclusive: ONLY one of them can be given as input.
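
For example, two illustrative invocations using the flags described above (the scan path is hypothetical; replace it with the absolute path to your own NIfTI file):

imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii -n 1
imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii -v 30

The first runs the PLS analysis with 1 component; the second keeps as many components as needed to explain 30% of the variance.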

Part of a Python script


When used as part of a Python script the library can be imported as:

import imaging_transcriptomics as imt

The core class of the package is the ImagingTranscriptomics class, which gives access to the methods used in the standalone script. To use the analysis in your scripts, you can initialise the class and then simply call its run() method.

import numpy as np
import imaging_transcriptomics as imt

my_data = np.ones(41)  # MUST be of size 41
                       # (corresponds to the regions in the left hemisphere of the DK atlas)

analysis = imt.ImagingTranscriptomics(my_data, n_components=1)
analysis.run()
# If, instead of running PLS, you want to analyse the data with correlation, run the analysis with:
analysis.run(method="corr")

Once the run has completed, the results are stored in the analysis object and can be accessed with analysis.gene_results.
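
For example, continuing the script above, the results can be retrieved once run() has finished (how to interpret their contents is described in the official documentation):

# retrieve the results object after analysis.run() has completed
results = analysis.gene_results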

Importing the imaging_transcriptomics package also imports other helpful functions for input and reporting. For a complete explanation, please refer to the official documentation of the package.

Documentation

The documentation of the script is available at imaging-transcriptomics.rtfd.io/.

Troubleshooting

For any problems with the software, you can open an issue on GitHub or contact the maintainer of the package.

Citing

If you publish work using imaging-transcriptomics as part of your analysis please cite:

Imaging transcriptomics: Convergent cellular, transcriptomic, and molecular neuroimaging signatures in the healthy adult human brain. Daniel Martins, Alessio Giacomel, Steven CR Williams, Federico Turkheimer, Ottavia Dipasquale, Mattia Veronese, PET templates working group. bioRxiv 2021.06.18.448872; doi: https://doi.org/10.1101/2021.06.18.448872

Imaging-transcriptomics: Second release update (v1.0.2). Alessio Giacomel & Daniel Martins. (2021). Zenodo. https://doi.org/10.5281/zenodo.5726839

Comments
  • pip installation can not resolve enigmatoolbox dependencies

    After running pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls and pip install imaging-transcriptomics in a new conda environment with Python=3.8, an error occurred when importing the imaging-transcriptomics package: it could not find the module named enigmatoolbox. I figured out that the enigmatoolbox package apparently cannot be resolved by pip automatically, so I had to install it from GitHub manually, with the code below, following the enigmatoolbox documentation:

    git clone https://github.com/MICA-MNI/ENIGMA.git
    cd ENIGMA
    python setup.py install
    
    opened by YCHuang0610 4
  • DK atlas regions

    Dear alegiac95,

    thanks for providing the scripts! I have just gone through the paper and the description of this GitHub repo and I want to adapt your software to my project. However, I use the typical implementation of the DK atlas from FreeSurfer with 34 cortical DK ROIs instead of the 41 ROIs that you have used and, if I'm not mistaken, 41 ROIs are required to run the script as it is. Is it possible to change the input to other cortical parcellations as well (i.e., DK-34)?

    Cheers, Melissa

    enhancement 
    opened by Melissa1909 3
  • Script not calling the correct python version

    The script in version v1.0.0 invokes the #!/usr/bin/env python interpreter, which could cause issues if your default python is python2 (e.g., in older macOS versions).

    bug 
    opened by alegiac95 1
  • Version 1.1.0

    Updated the scripts with:

    • support for both full brain analysis and cortical regions only
    • GSEA analysis (both during the analysis and as a separate script)
    • pdf report of the analysis
    opened by alegiac95 0
  • clean code and fix test

    This commit does an extensive code cleanup following the PEP 8 standard. It also fixes a test that was most probably intended for previous, unstable versions of the software.

    Still to do:

    • Remove logging
    opened by matteofrigo 0
  • Add mathematical background on PLS

    A more detailed explanation on PLS model and regression is required in the docs.

    • [ ] Add a general mathematical formulation of PLS
    • [ ] Use of PLS in neuroimaging applications
    • [ ] Description of the SIMPLS algorithm used by pypls

    In addition, provide some background on correlation, since it has now been added to the methods available in the Python package/script.

    documentation 
    opened by alegiac95 0