L2G2G: a Scalable Local-to-Global Network Embedding with Graph Autoencoders
CoRR (2024)
Abstract
For analysing real-world networks, graph representation learning is a popular
tool. These methods, such as a graph autoencoder (GAE), typically rely on
low-dimensional representations, also called embeddings, which are obtained
through minimising a loss function; these embeddings are used with a decoder
for downstream tasks such as node classification and edge prediction. While
GAEs tend to be fairly accurate, they suffer from scalability issues. For
improved speed, a Local2Global approach, which combines graph patch embeddings
based on eigenvector synchronisation, was shown to be fast and achieve good
accuracy. Here we propose L2G2G, a Local2Global method which improves GAE
accuracy without sacrificing scalability. This improvement is achieved by
dynamically synchronising the latent node representations, while training the
GAEs. It also benefits from the decoder computing only a local patch loss.
Hence, aligning the local embeddings in each epoch utilises more information
from the graph than a single post-training alignment does, while maintaining
scalability. We illustrate on synthetic benchmarks, as well as real-world
examples, that L2G2G achieves higher accuracy than the standard Local2Global
approach and scales efficiently on the larger data sets. We find that for large
and dense networks, it even outperforms the slow, but assumed more accurate,
GAEs.
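The abstract describes aligning local patch embeddings into a shared global coordinate frame. As a rough illustration of what such an alignment step involves, the sketch below aligns one patch embedding to a reference frame via orthogonal Procrustes on the nodes the two share. This is a hedged, simplified sketch, not the paper's method: Local2Global uses eigenvector synchronisation across many patches, and L2G2G performs the synchronisation dynamically during GAE training; the function name and arguments here are hypothetical.

```python
import numpy as np

def align_patch(reference, patch, shared_ref_idx, shared_patch_idx):
    """Map a local patch embedding into a reference coordinate frame.

    Illustrative sketch only: solves the orthogonal Procrustes problem
    on the embeddings of nodes shared by both patches, then applies the
    resulting rotation/reflection and translation to the whole patch.
    """
    A = reference[shared_ref_idx]   # shared nodes, reference frame
    B = patch[shared_patch_idx]     # same nodes, local patch frame
    # Translation: centre both sets of shared coordinates.
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    # Rotation/reflection: SVD of the cross-covariance of the centred sets.
    U, _, Vt = np.linalg.svd((B - mu_b).T @ (A - mu_a))
    R = U @ Vt
    # Transform the entire patch, not just the shared nodes.
    return (patch - mu_b) @ R + mu_a
```

In a multi-patch setting, pairwise transformations like this would be combined consistently across all overlapping patches, which is where the eigenvector-synchronisation machinery of Local2Global comes in.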