For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit:
Jure Leskovec
Computer Science, PhD
In previous lectures we saw how to use machine learning with hand-engineered features to make predictions at the node, link, and graph level. In this video we focus on graph representation learning, a technique that can alleviate the need for manual feature engineering. The idea is to map nodes into an embedding space such that the similarity of nodes in the graph is reflected by the distance between their embeddings. We introduce the general components of a node embedding algorithm, namely the encoder and the decoder, and discuss how to define the similarity function.
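As a rough illustration of this encoder–decoder framing (a sketch, not code from the lecture), the snippet below implements the "shallow" version in PyTorch: the encoder is a plain embedding lookup (one learnable vector per node) and the decoder is a dot product between embeddings. The node count, embedding dimension, and example node IDs are assumptions chosen only for the example.

```python
import torch
import torch.nn as nn

# Shallow encoder: one learnable d-dimensional vector per node,
# stored in an embedding lookup table.
num_nodes, dim = 100, 16          # assumed sizes for illustration
embed = nn.Embedding(num_nodes, dim)

def encoder(node_ids: torch.Tensor) -> torch.Tensor:
    """ENC(v): look up each node's embedding vector z_v."""
    return embed(node_ids)

def decoder(z_u: torch.Tensor, z_v: torch.Tensor) -> torch.Tensor:
    """DEC(z_u, z_v): dot product of embeddings, used as the
    similarity score in embedding space."""
    return (z_u * z_v).sum(dim=-1)

# Example: score two node pairs (0, 2) and (1, 3).
u = torch.tensor([0, 1])
v = torch.tensor([2, 3])
scores = decoder(encoder(u), encoder(v))
print(scores)
```

Training would then adjust the lookup table so that decoder(ENC(u), ENC(v)) approximates similarity(u, v) in the original graph, for whichever definition of node similarity one chooses.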
To follow along with the course schedule and syllabus, visit:
0:00 Introduction
0:12 Recap: Traditional ML for Graphs
1:31 Graph Representation Learning
2:40 Why Embedding?
3:26 Example Node Embedding
5:03 Setup
5:35 Embedding Nodes
7:19 Learning Node Embeddings
7:53 Two Key Components
8:47 “Shallow” Encoding
11:27 Framework Summary
12:26 How to Define Node Similarity?
13:24 Note on Node Embeddings
Related lectures in this series:
CS224W: Machine Learning with Graphs | 2021 | Lecture 2.2 - Traditional Feature-based Methods: Link
CS224W: Machine Learning with Graphs | 2021 | Lecture 2.3 - Traditional Feature-based Methods: Graph
CS224W: Machine Learning with Graphs | 2021 | Lecture 3.2 - Random Walk Approaches for Node Embeddings