Source code of the NeurIPS 2021 paper "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration"

CaGCN

This repo contains the source code of the NeurIPS 2021 paper "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration".

Paper Link: https://arxiv.org/abs/2109.14285

Environment

  • python == 3.8.8
  • pytorch == 1.8.1
  • dgl-cuda11.1 == 0.6.1
  • networkx == 2.5
  • numpy == 1.20.2

GPU: GeForce RTX 2080 Ti

CPU: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz

Confidence Calibration

CaGCN

python CaGCN.py --model GCN --hidden 64 --dataset dataset --labelrate labelrate --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
python CaGCN.py --model GAT --hidden 8 --dataset dataset --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
  • dataset: one of [Cora, Citeseer, Pubmed], required.
  • labelrate: one of [20, 40, 60], required.

e.g.,

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3
python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 5e-3

For CoraFull,

python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate labelrate --stage 1 --lr_for_cal 0.01 --l2_for_cal 0.03
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --l2_for_cal 0.03
  • labelrate: one of [20, 40, 60], required.
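
These CaGCN commands fit the paper's calibration model: a small GCN that takes the frozen base model's logits as node features and predicts a positive, node-wise temperature used to rescale those logits before the softmax. The sketch below illustrates the idea only; the class name CalibrationGCN, the hidden size, the softplus variant, and whether the logits are multiplied or divided by the temperature are assumptions that should be checked against CaGCN.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibrationGCN(nn.Module):
    # Illustrative calibration head: two dense GCN propagation steps over the
    # (normalized, self-looped) adjacency, ending in one temperature per node.
    def __init__(self, n_class, hidden=16):
        super().__init__()
        self.w1 = nn.Linear(n_class, hidden)
        self.w2 = nn.Linear(hidden, 1)

    def forward(self, logits, adj_norm):
        # logits: [N, C] from the frozen GCN/GAT; adj_norm: dense [N, N]
        h = F.relu(adj_norm @ self.w1(logits))
        t = F.softplus(adj_norm @ self.w2(h))    # positive node-wise temperature, [N, 1]
        return logits * t                        # rescaled logits, fed to softmax/NLL

# Training sketch: the base model stays frozen and only the calibration head is fitted
# on the labelled nodes, e.g.
#   calibrated = CalibrationGCN(n_class)(base_logits.detach(), adj_norm)
#   loss = F.cross_entropy(calibrated[train_mask], labels[train_mask])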

Uncalibrated Model

python train_others.py --model GCN --hidden 64 --dataset dataset --labelrate labelrate --stage 1 
python train_others.py --model GAT --hidden 8 --dataset dataset --labelrate labelrate --stage 1 --dropout 0.6 --lr 0.005
  • dataset: one of [Cora, Citeseer, Pubmed, CoraFull], required.
  • labelrate: one of [20, 40, 60], required.

e.g.,

python train_others.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 1
python train_others.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --stage 1 --dropout 0.6 --lr 0.005

Temperature Scaling & Matrix Scaling

python train_others.py --model GCN --scaling_method method --hidden 64 --dataset dataset --labelrate labelrate --stage 1 --lr_for_cal 0.01 --max_iter 50
python train_others.py --model GAT --scaling_method method --hidden 8 --dataset dataset --labelrate labelrate --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --max_iter 50
  • method: one of [TS, MS], required.
  • dataset: one of [Cora, Citeseer, Pubmed, CoraFull], required.
  • labelrate: one of [20, 40, 60], required.

e.g.,

python train_others.py --model GCN --scaling_method TS --hidden 64 --dataset Cora --labelrate 20 --stage 1 --lr_for_cal 0.01 --max_iter 50
python train_others.py --model GAT --scaling_method TS --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 1 --lr_for_cal 0.01 --max_iter 50
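
Here --scaling_method TS selects classic temperature scaling (Guo et al., 2017): a single scalar temperature T is fitted on held-out nodes by minimizing the NLL, and all logits are divided by T before the softmax; MS (matrix scaling) instead learns a per-class affine map of the logits. A minimal TS sketch, assuming an LBFGS fit in log-temperature space (lr and max_iter echo the flags above; the repo's exact parameterization may differ):

import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, val_mask, lr=0.01, max_iter=50):
    # logits: [N, C] from the frozen, uncalibrated model; labels: [N]; val_mask: bool [N]
    logits = logits.detach()
    log_t = torch.zeros(1, requires_grad=True)    # optimize log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=lr, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(logits[val_mask] / log_t.exp(), labels[val_mask])
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# usage: T = fit_temperature(logits, labels, val_mask); probs = F.softmax(logits / T, dim=1)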

Self-Training

GCN L/C=20

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 20 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 20 --stage 5 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.9
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 20 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 20 --stage 4 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.85

GCN L/C=40

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 40 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 40 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.85
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 40 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 40 --stage 4 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.99

GCN L/C=60

python CaGCN.py --model GCN --hidden 64 --dataset Cora --labelrate 60 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Citeseer --labelrate 60 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GCN --hidden 64 --dataset Pubmed --labelrate 60 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.6
python CaGCN.py --model GCN --hidden 64 --dataset CoraFull --labelrate 60 --stage 5 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.9

GAT L/C=20

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 20 --dropout 0.6 --lr 0.005 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 20 --dropout 0.6 --lr 0.005 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.7
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 20 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 20 --dropout 0.6 --lr 0.005 --stage 5 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95

GAT L/C=40

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 40 --dropout 0.6 --lr 0.005 --stage 4 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.9
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 40 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 40 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.8 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 40 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95

GAT L/C=60

python CaGCN.py --model GAT --hidden 8 --dataset Cora --labelrate 60 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 200 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Citeseer --labelrate 60 --dropout 0.6 --lr 0.005 --stage 6 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 150 --threshold 0.8
python CaGCN.py --model GAT --hidden 8 --dataset Pubmed --labelrate 60 --dropout 0.6 --lr 0.005 --weight_decay 1e-3 --stage 3 --lr_for_cal 0.001 --l2_for_cal 5e-3 --epoch_for_st 100 --threshold 0.85 
python CaGCN.py --model GAT --hidden 8 --dataset CoraFull --labelrate 60 --dropout 0.6 --lr 0.005 --stage 2 --lr_for_cal 0.001 --l2_for_cal 0.03 --epoch_for_st 500 --threshold 0.95
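
Each of these commands runs several self-training rounds: the base GNN is trained, calibrated with CaGCN, and unlabeled nodes whose calibrated confidence exceeds --threshold are pseudo-labeled and added to the training set before the next round (--stage rounds in total, --epoch_for_st epochs per calibration step). A minimal sketch of that loop follows; train_and_calibrate is an assumed stand-in callable for the repo's train-then-calibrate step, not a function exposed by this code.

import torch

def self_training(train_and_calibrate, labels, train_mask, unlabeled_mask,
                  stages=4, threshold=0.8):
    # train_and_calibrate(pseudo_labels, train_mask) -> calibrated probabilities [N, C]
    train_mask = train_mask.clone()
    unlabeled_mask = unlabeled_mask.clone()
    pseudo_labels = labels.clone()
    probs = None
    for _ in range(stages):
        probs = train_and_calibrate(pseudo_labels, train_mask)
        conf, pred = probs.max(dim=1)
        picked = unlabeled_mask & (conf > threshold)    # confident, still-unlabeled nodes
        pseudo_labels[picked] = pred[picked]            # pseudo-label them
        train_mask = train_mask | picked                # enlarge the training set
        unlabeled_mask = unlabeled_mask & ~picked
    return probs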

More Parameters

For more parameters of the baselines, please refer to Parameter.md.

Contact

If you have any questions, please feel free to contact me at [email protected].
