Graph2Feat: Inductive Link Prediction via Knowledge Distillation
KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Structures. ORCID iD: 0000-0002-7697-9150
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-7898-0879
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0003-4516-7317
Number of Authors: 3
2023 (English)
In: ACM Web Conference 2023: Companion of the World Wide Web Conference, WWW 2023, Association for Computing Machinery (ACM), 2023, p. 805-812
Conference paper, Published paper (Refereed)
Abstract [en]

Link prediction between two nodes is a critical task in graph machine learning. Most approaches are based on variants of graph neural networks (GNNs) that focus on transductive link prediction and have high inference latency. However, many real-world applications require fast inference over new nodes in inductive settings, where no connectivity information is available for these nodes. In such settings, node features provide the only viable signal. To that end, we propose Graph2Feat, which enables inductive link prediction by exploiting knowledge distillation (KD) through the student-teacher learning framework. In particular, Graph2Feat learns to match the representations of a lightweight student multi-layer perceptron (MLP) with those of a more expressive teacher GNN while learning to predict missing links based on node features, thus attaining both the GNN's expressiveness and the MLP's fast inference. Furthermore, our approach is general: it is suitable for transductive and inductive link prediction on different types of graphs, regardless of whether they are homogeneous or heterogeneous, directed or undirected. We carry out extensive experiments on seven real-world datasets, including homogeneous and heterogeneous graphs. Our experiments demonstrate that Graph2Feat significantly outperforms SOTA methods in terms of AUC and average precision on homogeneous and heterogeneous graphs. Finally, Graph2Feat achieves the lowest inference time among the SOTA methods, with a 100x speedup over GNNs. The code and datasets are available on GitHub.
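The abstract's core idea can be sketched in a few lines: a feature-only student MLP is trained against two objectives, matching precomputed teacher-GNN embeddings and predicting observed links. The sketch below is illustrative only; all names, dimensions, the MSE matching loss, and the trade-off weight `alpha` are assumptions, not the paper's actual implementation.

```python
# Minimal numpy sketch of student-teacher distillation for link prediction.
# Hypothetical setup: the teacher GNN's node embeddings are assumed
# precomputed; the student MLP sees only node features (no adjacency),
# so it can score links for brand-new nodes (the inductive setting).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, in_dim, hid_dim, emb_dim = 6, 8, 16, 4
X = rng.normal(size=(n_nodes, in_dim))           # node features
Z_teacher = rng.normal(size=(n_nodes, emb_dim))  # teacher GNN embeddings

# Student MLP: two layers with ReLU, small random init.
W1 = rng.normal(size=(in_dim, hid_dim)) * 0.1
W2 = rng.normal(size=(hid_dim, emb_dim)) * 0.1

def student(features):
    """Embed nodes from features alone."""
    return np.maximum(features @ W1, 0.0) @ W2

def link_prob(Z, u, v):
    """Score a candidate edge (u, v): sigmoid over the embedding dot product."""
    return 1.0 / (1.0 + np.exp(-(Z[u] @ Z[v])))

Z_student = student(X)

# (a) Distillation loss: match the teacher's representations (MSE stand-in).
kd_loss = np.mean((Z_student - Z_teacher) ** 2)

# (b) Link loss: binary cross-entropy on observed positive/negative edges.
pos, neg = [(0, 1), (2, 3)], [(0, 5), (1, 4)]
eps = 1e-9
bce = -np.mean(
    [np.log(link_prob(Z_student, u, v) + eps) for u, v in pos]
    + [np.log(1.0 - link_prob(Z_student, u, v) + eps) for u, v in neg]
)

alpha = 0.5  # assumed trade-off between the two objectives
total_loss = alpha * kd_loss + (1 - alpha) * bce
print(f"kd={kd_loss:.3f}  bce={bce:.3f}  total={total_loss:.3f}")
```

In training, `total_loss` would be minimized over `W1` and `W2`; at inference time only the cheap `student` forward pass and a dot product are needed, which is the source of the speedup over running a full GNN.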

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM) , 2023. p. 805-812
Keywords [en]
graph representation learning, heterogeneous networks, inductive link prediction, knowledge distillation
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-333310
DOI: 10.1145/3543873.3587596
ISI: 001124276300163
Scopus ID: 2-s2.0-85159575698
OAI: oai:DiVA.org:kth-333310
DiVA, id: diva2:1784945
Conference
2023 World Wide Web Conference, WWW 2023, Austin, United States of America, Apr 30 2023 - May 4 2023
Note

Part of ISBN 9781450394161

QC 20230801

Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2024-03-05
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Bonvalet, Manon
Kefato, Zekarias Tilahun
Girdzijauskas, Sarunas

Search in DiVA

By author/editor
Bonvalet, Manon
Kefato, Zekarias Tilahun
Girdzijauskas, Sarunas
By organisation
Structures
Software and Computer systems, SCS
Computer Sciences

Search outside of DiVA

Google
Google Scholar
