
TransConv: Relationship Embedding in Social Networks

Thirty-Third AAAI Conference on Artificial Intelligence / Thirty-First Innovative Applications of Artificial Intelligence Conference, pp. 4130–4138 (2019)


Abstract

Representation learning (RL) for social networks facilitates real-world tasks such as visualization, link prediction and friend recommendation. Traditional knowledge graph embedding models learn continuous low-dimensional embeddings of entities and relations. However, when applied to social networks, existing approaches do not consider the…

Introduction
  • Representation learning has been applied widely in different areas to extract useful information from data when building classifiers for inferring node attributes or predicting links in graphs.
  • A user could be close to one set of friends because they were college classmates but close to another because they are colleagues at work.
  • To capture this information, it is important to consider the characteristics of relationships between users when learning representations of social networks
Highlights
  • Representation learning has been applied widely in different areas to extract useful information from data when building classifiers for inferring node attributes or predicting links in graphs
  • The experimental results show that our approach outperforms other state-of-the-art models on two real-world social network datasets, and notably it improves prediction accuracy for both frequent and infrequent relations
  • Since our study focuses on relationship and user embeddings in social networks, we conjecture that the textual communication between users plays an especially crucial role
  • We evaluate TransConv against several knowledge graph embedding models: TransE, TransH, TransR, TransD, and DKRL
  • We proposed a novel relationship embedding model, TransConv, which is built upon structural translation on relationship hyperplane and further optimized through conversation factors established from textual communications
  • Our experiments show that TransConv outperforms the state-of-the-art relationship embedding models in the tasks of social network completion, triplets classification and multilabel classification
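The "translation on relationship hyperplane" idea underlying TransConv can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the hyperplane projection follows TransH, and multiplying the score by a per-pair conversation factor (`conv_factor`) is only an assumption about how conversation information could enter the score.

```python
import numpy as np

def hyperplane_score(h, t, r, w_r, conv_factor=1.0):
    """Score a (head user, relation, tail user) triplet by translation on
    the relation's hyperplane, TransH-style; lower scores indicate more
    plausible triplets. `conv_factor` is a hypothetical per-pair
    conversation weight, assumed here for illustration."""
    w_r = w_r / np.linalg.norm(w_r)       # unit normal of the hyperplane
    h_perp = h - np.dot(w_r, h) * w_r     # project head onto the hyperplane
    t_perp = t - np.dot(w_r, t) * w_r     # project tail onto the hyperplane
    return conv_factor * np.linalg.norm(h_perp + r - t_perp)
```

With identical users and a zero relation vector the score is zero, and it grows as the projected translation h + r drifts away from t.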
Methods
  • The authors evaluate the approach and related methods on three tasks: social network completion, triplet classification and multilabel classification.
  • The first dataset is the public Purdue Facebook network data from March 2007 to March 2008, which includes 3 million post activities.
  • The authors construct 41 relationships from user attributes, groups and top friends information.
  • The authors' Twitter dataset is sampled from the dataset collected by Kwak et al. (2010).
  • It contains 20 million post activities from June to July 2009.
  • The 42 relationship types are constructed from user profiles and follower/following information
Results
  • The experimental results show that the approach outperforms other state-of-the-art models on two real-world social network datasets, and notably it improves prediction accuracy for both frequent and infrequent relations.
  • When examining the bottom-3 relationships, TransConv still achieves nearly 60% in Hits@10 and outperforms the others, while the performance of other models, including TransH, drops significantly to below 20%.
  • TransConv and DKRL achieve over 10% on the bottom-1 relationship.
  • The authors' experiments show that TransConv outperforms the state-of-the-art relationship embedding models in the tasks of social network completion, triplet classification and multilabel classification
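Hits@10 figures like those above come from the standard link-prediction protocol: for each test triplet, rank the true user among all candidates by the model's score and report the fraction ranked in the top 10. A minimal sketch (the `score_fn` interface is hypothetical):

```python
import numpy as np

def rank_of_true_tail(score_fn, h, r, candidates, true_idx):
    """Rank of the true tail among candidate entities; lower score = better."""
    scores = np.array([score_fn(h, r, c) for c in candidates])
    # rank = 1 + number of candidates scored strictly better than the truth
    return 1 + int(np.sum(scores < scores[true_idx]))

def hits_at_k(ranks, k=10):
    """Fraction of test triplets whose true entity ranks within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

The "Filter" setting mentioned in the tables additionally excludes candidates that form other known-true triplets before ranking.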
Conclusion
  • The authors proposed a novel relationship embedding model, TransConv, which is built upon structural translation on relationship hyperplane and further optimized through conversation factors established from textual communications.
  • To the best of our knowledge, TransConv is the first model that considers both the intensity and similarity of textual communications between users.
  • The authors' experiments show that TransConv outperforms the state-of-the-art relationship embedding models in the tasks of social network completion, triplets classification and multilabel classification
Summary
  • Objectives:

    The authors aim to leverage ideas from the knowledge graph completion problem to jointly learn representations of entities and relations.
  • Given a pair of users and their relation as a triplet, the goal of this work is to learn a joint embedding for users and relationships, such that every relation can be viewed as a translation of users in the embedding space.
  • The authors aim to exploit the message information among users to improve the learned embedding, e.g., by automatically identifying content relevant to particular relations.
  • Since the goal is to leverage the interaction and communication between users in social networks, user descriptions alone do not provide sufficient detail to describe these relationships
Tables
  • Table 1: Score functions of embedding models
  • Table 2: Statistics of datasets
  • Table 3: Most- and least-frequent relationships in the Facebook and Twitter datasets
  • Table 4: Evaluation results of link prediction on the Facebook dataset
  • Table 5: Evaluation results of link prediction on the Twitter dataset
  • Table 6: Detailed results by relationship category with the Filter setting on the Facebook dataset
  • Table 7: Detailed results by relationship category with the Filter setting on the Twitter dataset
  • Table 8: Mean accuracy (%) for triplet binary classification on selected relationships with different negative sampling strategies
  • Table 9: Results of multilabel 8-relationship classification
Related work
  • Network Embedding Models

    There has been increasing attention on low-dimensional graph embedding recently, and many approaches have been proposed for data visualization, node classification, link prediction, and recommendation. DeepWalk (Perozzi, Al-Rfou, and Skiena 2014) learns graph embeddings by predicting the local neighborhood of each node. LINE (Tang et al. 2015) learns feature representations that preserve first-order and second-order proximity respectively. GraRep (Cao, Lu, and Xu 2015) learns graph representations by optimizing k-step loss functions. Node2Vec (Grover and Leskovec 2016) extends DeepWalk with a more sophisticated random walk procedure that explores diverse neighborhoods. Although many studies have reported their performance on social network datasets, we argue that actual social networks are more complicated: users can have different neighbor structures based on different relationships. Jointly learning representations for users and relationships can help describe users in social networks more precisely.
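The DeepWalk-style random walks mentioned above can be sketched as follows (the adjacency-list interface `adj` is illustrative); the resulting walks are fed to skip-gram training as node "sentences":

```python
import random

def random_walk(adj, start, length, rng=random):
    """Uniform random walk of a given length over an adjacency-list
    graph, as used by DeepWalk to generate node 'sentences'."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj.get(walk[-1], [])
        if not neighbors:          # dead end: stop early
            break
        walk.append(rng.choice(neighbors))
    return walk
```

Node2Vec replaces the uniform `rng.choice` with a biased second-order transition that trades off breadth-first and depth-first exploration.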
Funding
  • This research is supported by NSF and AFRL under contract numbers IIS-1546488, IIS-1618690, and FA8650-18-27879
Study subjects and analysis
Data

We analyze two social network datasets in our experiments:

  • The public Purdue Facebook network data from March 2007 to March 2008, which includes 3 million post activities. There are 211,166 triplets with 19,409 users. For every triplet (ui, r, uj), ui posts at least one message (conversation) on uj's timeline and vice versa.
  • The Twitter dataset contains 20 million post activities from June to July 2009. There are 300,985 triplets with 22,729 users. We use posts with user mentions (e.g., "@david happy birthday!") as textual interactions.

Specifically, when modeling people's relationships in social networks, we consider a model that utilizes the "interaction" between two users rather than designing a complicated hyperplane projection. For example, u1, u2, and u3 are three users who describe themselves as supporters of the same political party, but (u1, u2) discuss politics extensively while (u1, u3) rarely discuss it. Let us denote rpolitics as "the same political party".
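A conversation factor combining the intensity and similarity of textual communication between a pair of users might look like the following sketch. The paper's exact definition is not reproduced on this page, so the log-scaled message count and bag-of-words cosine here are assumptions for illustration only.

```python
import math
from collections import Counter

def conversation_factor(msgs_ij, msgs_ji):
    """Illustrative conversation factor for a user pair: intensity is a
    log-scaled count of exchanged messages; similarity is a bag-of-words
    cosine between the two directions of the conversation. This is a
    hedged sketch, not TransConv's actual formula."""
    intensity = math.log1p(len(msgs_ij) + len(msgs_ji))
    a = Counter(w for m in msgs_ij for w in m.lower().split())
    b = Counter(w for m in msgs_ji for w in m.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    similarity = dot / norm if norm else 0.0
    return intensity * similarity
```

Under this sketch, the (u1, u2) pair that discusses politics extensively receives a large factor for rpolitics, while the (u1, u3) pair that rarely discusses it receives a factor near zero.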

References
  • Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, 2787–2795.
  • Boutell, M. R.; Luo, J.; Shen, X.; and Brown, C. M. 2004. Learning multi-label scene classification. Pattern Recognition 37(9):1757–1771.
  • Cao, S.; Lu, W.; and Xu, Q. 2015. GraRep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 891–900. ACM.
  • Garcia-Duran, A.; Gonzalez, R.; Onoro-Rubio, D.; Niepert, M.; and Li, H. 2018. TransRev: Modeling reviews as translations from users to items. arXiv preprint arXiv:1801.10095.
  • Godbole, S., and Sarawagi, S. 2004. Discriminative methods for multi-labeled classification. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 22–30. Springer.
  • Grover, A., and Leskovec, J. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 855–864. ACM.
  • Ji, G.; He, S.; Xu, L.; Liu, K.; and Zhao, J. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, 687–696.
  • Kwak, H.; Lee, C.; Park, H.; and Moon, S. 2010. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, 591–600. ACM.
  • Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2181–2187.
  • Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 3111–3119.
  • Perozzi, B.; Al-Rfou, R.; and Skiena, S. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 701–710. ACM.
  • Salton, G. 1991. Developments in automatic text retrieval. Science 253(5023):974–980.
  • Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; and Mei, Q. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, 1067–1077. ACM.
  • Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 1112–1119.
  • Wang, D.; Cui, P.; and Zhu, W. 2016. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1225–1234. ACM.
  • Xiao, H.; Huang, M.; Meng, L.; and Zhu, X. 2017. SSP: Semantic space projection for knowledge graph embedding with text descriptions. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 3104–3110.
  • Xie, R.; Liu, Z.; Jia, J.; Luan, H.; and Sun, M. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2659–2665.