Feature Expansion for Graph Neural Networks
Abstract
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, which makes it convenient to study the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of the proposed, more comprehensive feature space, which achieves inference time comparable to the baseline and converges efficiently.
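The abstract's claim that repeated aggregations make the feature space linearly correlated can be illustrated with a small numerical experiment. The sketch below is not from the paper: the random graph, dimensions, and GCN-style normalization (A_hat = D^{-1/2}(A + I)D^{-1/2}) are illustrative assumptions. It checks how much of each new hop's features A_hat^k X already lies in the span of the earlier hops.

```python
# Minimal sketch (not from the paper): illustrating that features produced by
# repeated aggregation, A_hat^k X, tend to become linearly correlated with
# earlier hops. Graph, dimensions, and normalization are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
n, d, hops = 300, 8, 8

# Random adjacency with self-loops and GCN-style symmetric normalization
# A_hat = D^{-1/2} (A + I) D^{-1/2}.
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = rng.standard_normal((n, d))

# For each hop k, measure how much of A_hat^k X already lies in the span of
# [X, A_hat X, ..., A_hat^{k-1} X]. A small relative residual means the new
# hop adds little linearly independent information to the feature space.
blocks = [X]
for k in range(1, hops + 1):
    new_block = A_hat @ blocks[-1]
    basis = np.hstack(blocks)
    coeffs, *_ = np.linalg.lstsq(basis, new_block, rcond=None)
    residual = np.linalg.norm(new_block - basis @ coeffs) / np.linalg.norm(new_block)
    print(f"hop {k}: relative residual outside previous span = {residual:.3f}")
    blocks.append(new_block)
```

Under these assumptions, the relative residual shrinks as the hop count grows, which is the kind of shrinking, linearly correlated feature space that the proposed feature subspaces flattening and structural principal components are meant to expand.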