Large-Scale Sparse Structural Node Representation
In the Big Data era, large graph datasets are becoming increasingly popular due to their capability to integrate and interconnect large sources of data in many fields, e.g., social media, biology, communication networks, etc. Graph representation learning is a flexible tool that automatically extracts features from graph nodes. These features can be directly used for machine learning tasks. Graph representation learning approaches that produce features preserving the structural information of graphs remain an open problem, especially in the context of large-scale graphs. In this paper, we propose a new fast and scalable structural representation learning approach called SparseStruct. Our approach uses a sparse internal representation for each node, and we formally prove its ability to preserve structural information. Thanks to a lightweight algorithm where each iteration costs time only linear in the number of edges, SparseStruct is able to easily process large graphs. In addition, it improves on the state of the art in terms of prediction and classification accuracy, while also providing strong robustness to noisy data.
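To illustrate how a sparse per-node representation can be refined in time linear in the number of edges, here is a minimal sketch. This is a generic neighbor-aggregation pass, not the actual SparseStruct algorithm; the dictionary-based sparse vectors, the degree-based initialization, and the function name are all illustrative assumptions.

```python
from collections import Counter

def sparse_structural_iteration(adj, reps):
    """One refinement pass (illustrative, not the paper's algorithm):
    each node's new sparse representation aggregates its neighbors'
    current sparse representations. Each edge is touched once per
    iteration, so the cost is linear in the number of edges."""
    new_reps = {}
    for node, neighbors in adj.items():
        agg = Counter()  # Counter acts as a sparse feature vector
        for nb in neighbors:
            agg.update(reps[nb])  # merge the neighbor's sparse features
        new_reps[node] = agg
    return new_reps

# Toy graph: a path 0 - 1 - 2; initialize each node's sparse
# representation with its degree as the only nonzero entry.
adj = {0: [1], 1: [0, 2], 2: [1]}
reps = {n: Counter({len(adj[n]): 1}) for n in adj}
reps = sparse_structural_iteration(adj, reps)
```

After one pass, the two structurally equivalent endpoints (nodes 0 and 2) hold identical representations, while the center node differs, which is the kind of structural distinction such representations are meant to capture.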
Serra, Edoardo; Joaristi, Mikel; and Cuzzocrea, Alfredo. (2020). "Large-Scale Sparse Structural Node Representation". 2020 IEEE International Conference on Big Data (Big Data), 5247-5253. https://doi.org/10.1109/BigData50022.2020.9377854