Official PyTorch Implementation of "Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs". NeurIPS 2020.

Overview

Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs

This repository is the implementation of SELAR.

Dasol Hwang*, Jinyoung Park*, Sunyoung Kwon, Kyung-min Kim, Jung-Woo Ha, Hyunwoo J. Kim, Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs, In Advances in Neural Information Processing Systems (NeurIPS 2020).

Data Preprocessing

We use the datasets from KGNN-LS and RippleNet for link prediction. Download the meta-path labels (meta_labels/) from this link. A short loading sketch follows the file list below.

  • data/music/
    • ratings_final.npy : preprocessed rating file released by KGNN-LS
    • kg_final.npy : knowledge graph file
    • meta_labels/
      • pos_meta{}_{}.pickle : positive meta-path labels for the auxiliary task
      • neg_meta{}_{}.pickle : negative meta-path labels for the auxiliary task
  • data/book/
    • ratings_final.npy : preprocessed rating file released by RippleNet
    • kg_final.npy : knowledge graph file
    • meta_labels/
      • pos_meta{}_{}.pickle : positive meta-path labels for the auxiliary task
      • neg_meta{}_{}.pickle : negative meta-path labels for the auxiliary task
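
The sketch below shows one way these files could be inspected after downloading. The exact array layouts and pickle contents depend on the KGNN-LS/RippleNet preprocessing, so the field descriptions and the meta-path indices in the file name are assumptions for illustration only.

```python
import pickle
import numpy as np

# ratings_final.npy: preprocessed user-item interactions
# (assumed to be rows of [user, item, label] triples).
ratings = np.load('data/music/ratings_final.npy')
print('ratings:', ratings.shape)

# kg_final.npy: knowledge-graph triples (assumed head, relation, tail).
kg = np.load('data/music/kg_final.npy')
print('knowledge graph:', kg.shape)

# meta_labels/pos_meta{}_{}.pickle and neg_meta{}_{}.pickle hold the positive
# and negative meta-path labels for the auxiliary tasks. The indices below
# are hypothetical; substitute the ones in the downloaded meta_labels/ folder.
with open('data/music/meta_labels/pos_meta0_1.pickle', 'rb') as f:
    pos_meta = pickle.load(f)
print('positive meta-path labels:', type(pos_meta))
```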

Required packages

Several dependencies must be installed before running the code. We provide a conda environment file (env.yml):

$ conda env create -f env.yml
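
After the environment is created, activate it with conda activate, using the environment name defined in env.yml, before running the scripts below.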

Running the code

# check optional arguments [-h]
$ python main_music.py
$ python main_book.py

Overview of the link prediction results

Last-FM (Music)

| Base GNNs | Vanilla | w/o MP | w/ MP  | SELAR  | SELAR+Hint |
|-----------|---------|--------|--------|--------|------------|
| GCN       | 0.7963  | 0.7899 | 0.8235 | 0.8296 | 0.8121     |
| GAT       | 0.8115  | 0.8115 | 0.8263 | 0.8294 | 0.8302     |
| GIN       | 0.8199  | 0.8217 | 0.8242 | 0.8361 | 0.8350     |
| SGC       | 0.7703  | 0.7766 | 0.7718 | 0.7827 | 0.7975     |
| GTN       | 0.7836  | 0.7744 | 0.7865 | 0.7988 | 0.8067     |

Book-Crossing (Book)

| Base GNNs | Vanilla | w/o MP | w/ MP  | SELAR  | SELAR+Hint |
|-----------|---------|--------|--------|--------|------------|
| GCN       | 0.7039  | 0.7031 | 0.7110 | 0.7182 | 0.7208     |
| GAT       | 0.6891  | 0.6968 | 0.7075 | 0.7345 | 0.7360     |
| GIN       | 0.6979  | 0.7210 | 0.7338 | 0.7526 | 0.7513     |
| SGC       | 0.6860  | 0.6808 | 0.6792 | 0.6902 | 0.6926     |
| GTN       | 0.6732  | 0.6758 | 0.6724 | 0.6858 | 0.6850     |
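
The paper evaluates link prediction with AUC. Below is a minimal sketch of how such a score could be computed from predicted link scores and binary labels using scikit-learn; the variable names and values are illustrative and not part of this repository.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative only: scores for candidate user-item links and their
# ground-truth labels (1 = observed link, 0 = negative sample).
pred_scores = np.array([0.91, 0.15, 0.78, 0.42, 0.66])
true_labels = np.array([1, 0, 1, 0, 1])

# AUC over predicted link scores, as reported in the tables above.
auc = roc_auc_score(true_labels, pred_scores)
print(f'link prediction AUC: {auc:.4f}')
```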

Citation

@inproceedings{NEURIPS2020_74de5f91,
 author = {Hwang, Dasol and Park, Jinyoung and Kwon, Sunyoung and Kim, KyungMin and Ha, Jung-Woo and Kim, Hyunwoo J},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {10294--10305},
 publisher = {Curran Associates, Inc.},
 title = {Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs},
 url = {https://proceedings.neurips.cc/paper/2020/file/74de5f915765ea59816e770a8e686f38-Paper.pdf},
 volume = {33},
 year = {2020}
}

License

Copyright (c) 2020-present NAVER Corp. and Korea University 