Data doesn’t always fit neatly into rows and columns. Often, data follows a graph structure: think of social networks, protein structures, recommendation systems, or transportation networks. Leaving the graph topology out of a machine learning model can drastically hurt its performance. Luckily, there is a way to include this information.
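To make "graph-structured data" concrete, here is a minimal sketch (not part of the original post) of a tiny, made-up social network represented as nodes and edges using the networkx library; the user names are purely illustrative.

```python
# A tiny illustrative social network: nodes are users, edges are friendships.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"),
    ("bob", "carol"),
    ("carol", "alice"),
    ("carol", "dave"),
])

# The same topology expressed as an adjacency matrix
# (rows/columns follow the order of G.nodes).
A = nx.to_numpy_array(G)
print(list(G.nodes))
print(A)
```

This kind of connectivity information is exactly what a tabular model discards and what a GNN is built to exploit.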
Graph Neural Networks (GNNs) are designed to learn from data represented as nodes and edges. GNNs have evolved over the years, and in this post you will learn about Graph Convolutional Networks (GCNs). My next post will cover Graph Attention Networks (GATs). GCNs and GATs are two fundamental architectures on which current state-of-the-art models are built, so if you want to learn about GNNs, this is a good place to start. Let’s dive in!
New to graphs? The first part of this post (Graph Basics) covers the essentials. You should also be familiar with neural networks (a short recap is provided in the Datasets and Prerequisites section of this article).