Neural Networks for Relational Data
Navdeep Kaur; Gautam Kunapuli; Saket Joshi; Kristian Kersting; Sriraam Natarajan
In: Dimitar Kazakov; Can Erten (Eds.). Inductive Logic Programming - 29th International Conference, Proceedings. International Conference on Inductive Logic Programming (ILP-2019), September 3-5, Plovdiv, Bulgaria, Pages 62-71, Lecture Notes in Computer Science (LNAI), Vol. 11770, Springer, 2019.
While deep networks have been enormously successful over the last decade, they rely on flat feature-vector representations, which makes them unsuitable for richly structured domains such as those arising in applications like social network analysis. Such domains rely on relational representations to capture complex relationships between entities and their attributes. Thus, we consider the problem of learning neural networks for relational data. We distinguish ourselves from current approaches that rely on expert hand-coded rules by learning relational random-walk-based features to capture local structural interactions, as well as the resulting network architecture. We further exploit parameter tying of the network weights of the resulting relational neural network, where instances of the same type share parameters. Our experimental results across several standard relational data sets demonstrate the effectiveness of the proposed approach over multiple neural network baselines as well as state-of-the-art statistical relational models.
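To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch of relational random-walk features: sampling walks over a toy knowledge graph and counting relation sequences (path types), which could then serve as input features for a neural network. The graph, entity names, and all function names here are illustrative assumptions, not the paper's actual datasets or implementation.

```python
import random
from collections import Counter

# Toy relational graph as (subject, relation, object) triples.
# Entities and relations are invented for illustration only.
triples = [
    ("alice", "advisedBy", "bob"),
    ("bob", "worksAt", "uni"),
    ("alice", "coauthor", "carol"),
    ("carol", "worksAt", "uni"),
]

# Index outgoing edges for fast walk sampling.
out_edges = {}
for s, r, o in triples:
    out_edges.setdefault(s, []).append((r, o))

def relational_random_walk(start, length, rng):
    """Sample one walk from `start`; return its relation sequence (a path type)."""
    path, node = [], start
    for _ in range(length):
        if node not in out_edges:
            break
        r, node = rng.choice(out_edges[node])
        path.append(r)
    return tuple(path)

def path_features(start, n_walks=200, length=2, seed=0):
    """Count how often each path type occurs starting at `start`.
    These counts play the role of learned structural features."""
    rng = random.Random(seed)
    counts = Counter(relational_random_walk(start, length, rng)
                     for _ in range(n_walks))
    counts.pop((), None)  # discard walks that could not take a step
    return counts

features = path_features("alice")
```

In this sketch, all walks from "alice" yield one of two path types, ("advisedBy", "worksAt") or ("coauthor", "worksAt"); in the paper's setting, such path-type counts would feed the relational neural network, with weights tied across instances of the same type.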