The Effects of a Platform-Initiated Reviewer Incentive Program on Regular Review Generation
(2017)
Abstract
To stimulate product reviews, many large e-commerce platforms have launched reviewer incentive programs in which free products are provided to reviewers in exchange for reviews of those products (incentivized reviews). Prior studies have explored the effects of incentives on review generation mainly by comparing incentivized reviews with non-incentivized reviews (regular reviews). In this study, we focus on an unexplored aspect of platform-initiated reviewer incentive programs: the impact of participation in such programs on reviewers' regular reviews. We find that after receiving free products via the program, reviewers generate 46.03% more regular reviews, increase their regular-review length by 3.40%, and increase their average regular-review rating by 2.18%. We attribute these findings to the norm of reciprocity toward the platform evoked by the received free products. Consistent with our theorizing, reviewers change their regular-review activity only after they receive a sufficient number of free products, and the magnitude of the changes is positively moderated by the number and monetary value of the received free products. Our results demonstrate that apart from motivating incentivized reviews, platform-initiated incentives can also trigger evident changes in recipients' regular reviews.
Key words
Product reviews, Norm of reciprocity, Marketing, Incentive program, Incentive, Engineering, Advertising