A Transformer-based Embedding Model for Personalized Product Search

SIGIR '20: The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1521–1524 (2020)


Abstract

Product search is an important way for people to browse and purchase items on E-commerce platforms. While customers tend to make choices based on their personal tastes and preferences, analysis of commercial product search logs has shown that personalization does not always improve product search quality. Most existing product search techniques...

Introduction
  • Product search systems play an important role in serving customers who shop on online e-commerce platforms in their daily lives.
  • Ai et al. [2] propose to control the influence of personalization by representing the user's purchase intent as a convex combination of the query embedding and the user embedding.
  • This method applies undifferentiated personalization to all search sessions, since the coefficient of the combination is fixed.
  • Despite its ability to adaptively personalize each query-user pair, the most the zero-attention model (ZAM) [1] can do is weigh the user information against the query, which may not be enough when user preference dominates the purchase decision (both formulations are sketched below).
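To make the two baselines concrete, here is a hedged sketch of the formulations referenced above; the notation is ours, not reproduced verbatim from [1] or [2]:

```latex
% Sketch only; symbols are our own shorthand.
% HEM [2]: purchase intent as a fixed convex combination of query q and user u
M_{uq} = \lambda\, q + (1 - \lambda)\, u, \qquad \lambda \in [0, 1] \text{ fixed}

% ZAM [1]: attention over the purchased items I_u plus a zero vector, so the
% total weight placed on personalization can shrink toward zero
u_q = \sum_{i \in I_u}
      \frac{\exp\!\big(f(q, i)\big)}
           {\exp\!\big(f(q, \mathbf{0})\big) + \sum_{i' \in I_u} \exp\!\big(f(q, i')\big)} \; i
```

Under [2] every session receives the same fixed dose of personalization; the zero vector in [1] lets that dose shrink, but the user representation is still a weighted average of static item embeddings.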
Highlights
  • Product search systems play an important role in serving customers who shop on online e-commerce platforms in their daily lives
  • Similar to previous studies [1, 2, 3], we observe that the Latent Semantic Entity model (LSE) and the Query Embedding Model (QEM) perform worse than personalized product search baselines in most cases
  • From the improvements in Precision, NDCG, and MRR, we can infer that the transformer-based embedding model (TEM) retrieves more ideal items within the top 20 results and promotes them to higher positions. This demonstrates that TEM improves the effectiveness of personalized models through a more flexible mechanism for controlling the influence of personalization and by learning dynamic item representations that take interactions between items into account
  • We propose a transformer-based embedding model, abbreviated as TEM, that can conduct query-dependent personalization.
  • Because TEM encodes the query and the user's purchase history as one sequence with a transformer architecture, the effect of personalization can vary from none at all to dominant (a minimal sketch follows this list).
  • Our experiments empirically demonstrate the effectiveness of TEM by showing that it significantly outperforms state-of-the-art personalized product search baselines.
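The following is a minimal PyTorch sketch of the encoding step described in this list; the module layout, dimensions, and the choice of the query position's output as the joint query-user representation are our illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class TEMSketch(nn.Module):
    """Hedged sketch: encode [query; purchased items] with a transformer
    encoder and score candidate items against the contextualized query."""

    def __init__(self, num_items: int, dim: int = 128,
                 heads: int = 4, layers: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, query_emb, history_ids, candidate_ids):
        # query_emb: (B, 1, dim) pooled embedding of the query words
        # history_ids: (B, m) ids of the user's previously purchased items
        seq = torch.cat([query_emb, self.item_emb(history_ids)], dim=1)
        enc = self.encoder(seq)                # (B, 1 + m, dim)
        intent = enc[:, 0]                     # output at the query position
        cand = self.item_emb(candidate_ids)    # (B, n, dim) candidates
        return torch.einsum('bd,bnd->bn', intent, cand)  # ranking scores
```

Self-attention lets the query position draw on the purchase history anywhere from not at all to almost exclusively, which is one way to read the claim that personalization can vary from none to dominant.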
Methods
  • A query string for each purchased item is formed by concatenating the words in the item's multi-level category and removing stopwords as well as duplicate words.
  • In this way, there can be multiple queries for each item, since an item may belong to multiple categories (a sketch of this procedure follows the list).
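A minimal Python sketch of this query-construction step; the stopword set and the category-path format are assumptions made for illustration:

```python
# Hedged sketch: build query strings from an item's multi-level categories.
STOPWORDS = {"and", "of", "the", "for", "&"}  # assumed list, not the paper's

def queries_for_item(category_paths):
    """One query per category path: concatenate the level names, then
    drop stopwords and duplicate words (keeping the first occurrence)."""
    queries = []
    for path in category_paths:
        words, seen = [], set()
        for level in path:
            for w in level.lower().split():
                if w in STOPWORDS or w in seen:
                    continue
                seen.add(w)
                words.append(w)
        queries.append(" ".join(words))
    return queries

# An item filed under two category paths yields two distinct queries.
print(queries_for_item([
    ["Electronics", "Cell Phones & Accessories", "Cases"],
    ["Electronics", "Accessories"],
]))
```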
Results
  • HEM and AEM achieve better results than ZAM on Cell Phones but worse results on the other two datasets.
  • This indicates that, while adjusting the influence of personalization through attention weights on the zero vector can benefit ZAM's retrieval performance, the ceiling this places on personalization can harm search quality on datasets where personalization is essential (the zero-attention idea is sketched below).
  • This demonstrates that TEM improves the effectiveness of personalized models through a more flexible mechanism for controlling the influence of personalization and by learning dynamic item representations that take interactions between items into account
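For contrast, a minimal sketch of the zero-attention idea being discussed (our notation, assuming a simple dot-product scoring function): attention mass assigned to an appended zero vector shrinks the personalization signal, but the history is still collapsed into one static weighted average rather than re-encoded in context as in TEM.

```python
import torch

def zero_attention(query: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
    """Sketch of the zero-attention mechanism of [1].
    query: (dim,); history: (m, dim). Returns a user vector of shape (dim,)."""
    zero = torch.zeros(1, history.size(1))
    keys = torch.cat([history, zero], dim=0)      # (m + 1, dim)
    weights = torch.softmax(keys @ query, dim=0)  # attention incl. zero vector
    # Mass placed on the zero vector contributes nothing to the sum, so it
    # effectively dials the personalization signal down toward zero.
    return weights[:-1] @ history
```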
Conclusion
  • In this paper, the authors propose a transformer-based embedding model, abbreviated as TEM, that can conduct query-dependent personalization.
  • Because TEM encodes the query and the user's purchase history with a transformer architecture, the effect of personalization can vary from none at all to dominant.
  • The attention scores in TEM indicate the degree of personalization and which historical items draw more attention when retrieving a result.
  • This information could be helpful for users to make purchase decisions.
  • The authors are interested in incorporating other product information, such as price, ratings, and images, into a transformer architecture to facilitate personalized product search.
Tables
  • Table 1: Statistics of the Amazon datasets
  • Table 2: Comparison between the baselines and our proposed TEM. '*' marks the best baseline performance. '†' indicates significant improvements over all the baselines in a paired Student's t-test with p < 0.05
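The '†' marker in Table 2 corresponds to a per-query paired significance test; below is a minimal sketch of how such a check is commonly run with SciPy (the metric values are illustrative placeholders, not the paper's numbers):

```python
from scipy import stats

# Per-query metric values (e.g., NDCG@20) for the best baseline and TEM.
baseline = [0.21, 0.34, 0.18, 0.29, 0.25]  # illustrative only
tem      = [0.26, 0.36, 0.22, 0.31, 0.30]  # illustrative only

t_stat, p_value = stats.ttest_rel(tem, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # mark '†' if p < 0.05
```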
Related work
  • Product Search. Earlier work on product search mainly considers products as structured entities and uses facets for the task [13]. Language-model-based approaches have been studied for keyword search [7]. To alleviate word-mismatch problems, Van Gysel et al. [12] more recently introduce a latent semantic entity model that matches products and queries in a latent semantic space. Learning-to-rank techniques have also been investigated [9]. In the scope of personalized product search, Ai et al. [2] use a convex combination of query and user embeddings for personalization; Guo et al. [8] represent users' long- and short-term preferences with an attention mechanism; and Ai et al. [1] provide insight into when personalization can be beneficial and propose a zero-attention model to control how personalization takes effect. Personalization has also been studied in multi-page product search [4].
Funding
  • This work was supported in part by the Center for Intelligent Information Retrieval.
References
  • [1] Qingyao Ai, Daniel N. Hill, S. V. N. Vishwanathan, and W. Bruce Croft. 2019. A zero attention model for personalized product search. In CIKM '19, 379–388.
  • [2] Qingyao Ai, Yongfeng Zhang, Keping Bi, Xu Chen, and W. Bruce Croft. 2017. Learning a hierarchical embedding model for personalized product search. In SIGIR '17, ACM, 645–654.
  • [3] Keping Bi, Qingyao Ai, Yongfeng Zhang, and W. Bruce Croft. 2019. Conversational product search based on negative feedback. In CIKM '19, 359–368.
  • [4] Keping Bi, Choon Hui Teo, Yesh Dattatreya, Vijai Mohan, and W. Bruce Croft. 2019. A study of context dependencies in multi-page product search. In CIKM '19.
  • [5] Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In SIGIR '19, 985–988.
  • [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • [7] Huizhong Duan, ChengXiang Zhai, Jinxing Cheng, and Abhishek Gattani. 2013. A probabilistic mixture model for mining and analyzing product search log. In CIKM '13, ACM, 2179–2188.
  • [8] Yangyang Guo, Zhiyong Cheng, Liqiang Nie, Yinglong Wang, Jun Ma, and Mohan Kankanhalli. 2019. Attentive long short-term preference modeling for personalized product search. TOIS 37, 2 (2019), 1–27.
  • [9] Shubhra Kanti Karmaker Santu, Parikshit Sondhi, and ChengXiang Zhai. 2017. On application of learning to rank for e-commerce search. In SIGIR '17, ACM, 475–484.
  • [10] Julian McAuley, Rahul Pandey, and Jure Leskovec. 2015. Inferring networks of substitutable and complementary products. In SIGKDD '15, ACM, 785–794.
  • [11] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085.
  • [12] Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2016. Learning latent vector spaces for product search. In CIKM '16, ACM, 165–174.
  • [13] Damir Vandic, Flavius Frasincar, and Uzay Kaymak. 2013. Facet selection algorithms for web product search. In CIKM '13, ACM, 2327–2332.
  • [14] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998–6008.