Relational inductive biases, deep learning, and graph networks (arXiv)

An interesting review of graph-based neural networks from a who's who of Deep Learning researchers. The review could have been a bit deeper IMO; instead, it chooses to cast many existing approaches as instances of a "new" framework. While it gives a nice summary of existing relational inductive biases in neural networks, I missed deeper insight into how to actually perform relational reasoning, e.g. for text. It broaches important questions, such as how to generate a graph from an input, but doesn't answer them. An interesting---if not novel---observation is that self-attention (as in the Transformer) performs some form of relational reasoning.
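That last observation is easy to make concrete: the softmax attention scores form a soft, fully connected graph over the input tokens, and each output is a message aggregated along those edges. A minimal sketch (my own illustration, not from the paper; it uses identity projections instead of learned query/key/value weights):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a set of n token vectors.

    Minimal sketch with identity projections (no learned W_q, W_k, W_v):
    the row-normalized score matrix acts as a soft adjacency matrix over
    the tokens, which is why self-attention can be read as a form of
    relational reasoning on a fully connected graph.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # pairwise "edge" strengths
    # softmax over each row: every token distributes attention over all tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X, weights  # aggregated messages, soft adjacency

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))  # 4 tokens, 8-dim embeddings
out, A = self_attention(X)
# A is a 4x4 row-stochastic matrix: a soft, dense "graph" over the tokens
```

Seen this way, a Transformer layer is message passing on a complete graph whose edge weights are recomputed at every layer, which is exactly the connection the review draws.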
