
Myersonian Regression

NeurIPS 2020 (2020)


Abstract

Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function f : R^d → R that well approximates a set of points (x_i, v_i) ∈ R^d × [0, 1] in the following sense: we receive a loss of v_i …
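The abstract is truncated, but the revenue structure it refers to is the standard posted-price one: posting price p to a buyer with value v earns p if p ≤ v and nothing otherwise. A minimal sketch of this discontinuous objective (function names are illustrative, not from the paper):

```python
import numpy as np

def posted_price_revenue(p, v):
    """Revenue from posting price p to a buyer with value v:
    the sale happens only if the price does not exceed the value."""
    return p if p <= v else 0.0

def policy_revenue(w, X, values):
    """Total revenue of the linear pricing policy p(x) = <w, x>.
    The objective is discontinuous: nudging a price just past a
    buyer's value drops that term from <w, x_i> to 0."""
    prices = X @ w
    return sum(posted_price_revenue(p, v) for p, v in zip(prices, values))
```

The discontinuity is what separates this problem from ordinary linear regression: the revenue of a price jumps to zero the instant it exceeds the buyer's value.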

Introduction
  • The Myerson price of a distribution is the price that maximizes the revenue when selling to a buyer whose value is drawn from that distribution.
  • In many modern applications such as online marketplaces and advertising, the seller doesn’t just set one price p but must instead price a variety of differentiated products.
  • In these settings, a seller must design a policy to price items based on their features in order to optimize revenue.
  • One would train a pricing policy on historical bids and apply this policy to future products.
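For a buyer whose value is drawn from an empirical distribution of samples, the Myerson price can be computed by checking each sample value as a candidate, since the revenue curve p · #{v_i ≥ p} only changes at those points. A small illustrative sketch (not the paper's algorithm):

```python
def empirical_myerson_price(values):
    """Revenue-maximizing posted price against the empirical
    distribution of the given sample values. Only the sample values
    themselves need to be checked as candidate prices."""
    def revenue(p):
        return p * sum(1 for v in values if v >= p)
    return max(values, key=revenue)
```

For example, `empirical_myerson_price([0.2, 0.5, 0.9])` returns 0.5: price 0.5 sells to two buyers for revenue 1.0, beating 0.2 · 3 = 0.6 and 0.9 · 1 = 0.9.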
Highlights
  • In economics, the Myerson price of a distribution is the price that maximizes the revenue when selling to a buyer whose value is drawn from that distribution
  • While it is not surprising that solving Myersonian regression exactly is NP-hard given the discontinuity in the reward function, this has been left open by several previous works
  • The same reduction implies that under the Exponential Time Hypothesis (ETH) any algorithm approximating it within an εm additive factor must run in time at least e^{Ω(poly(1/ε))}, ruling out a fully polynomial-time approximation scheme (FPTAS) for the problem
  • We show that (UMR) is unstable in the sense that arbitrarily small perturbations in the input can lead to completely different solutions
  • We show that under the Exponential Time Hypothesis (ETH), any algorithm that achieves an εm-additive approximation for Myersonian regression must run in time at least exp(Õ(ε^{-1/6}))
  • An interesting avenue for future work is to understand how strategic buyers would change their bids in response to a contextual batch learning algorithm, and how to design algorithms that are aware of this strategic response
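The instability of (UMR) claimed above can already be seen in one dimension, where a linear policy is a single price. The hypothetical brute-force solver below (an illustrative construction, not taken from the paper) shows that perturbing one value by 0.001 moves the optimal price from 1.0 to 0.501:

```python
def best_single_price(values):
    """Optimal posted price for a 1-D instance (all features x_i = 1),
    found by brute force over the candidate prices v_1, ..., v_n.
    Python's max keeps the first candidate in case of a revenue tie."""
    return max(values, key=lambda p: p * sum(1 for v in values if v >= p))

# With values [1.0, 0.5], both candidate prices earn revenue 1.0, and
# the tie resolves to 1.0; raising the second value to 0.501 makes the
# low price strictly better (revenue 1.002), so the solution jumps.
```

An arbitrarily small perturbation thus moves the optimizer by a constant amount, which is exactly the instability described above.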
Results
  • The authors' main result is a polynomial time approximation scheme (PTAS) using dimensionality reduction.
  • The same reduction implies that under the Exponential Time Hypothesis (ETH) any algorithm approximating it within an εm additive factor must run in time at least e^{Ω(poly(1/ε))}, ruling out a fully polynomial-time approximation scheme (FPTAS) for the problem
  • This hardness of approximation perfectly complements the algorithmic results, showing that the guarantees are essentially the best that one can hope for
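The paper's exact dimensionality-reduction construction is not reproduced in this summary; the sketch below shows the generic ingredient such a PTAS relies on, a Johnson–Lindenstrauss-style random projection that approximately preserves inner products, so a pricing policy learned in the low-dimensional space remains meaningful:

```python
import numpy as np

def jl_project(X, k, seed=0):
    """Project the rows of X from d down to k dimensions using a random
    Gaussian matrix scaled by 1/sqrt(k), so that norms and inner
    products are preserved up to small distortion with high probability."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ G
```

After projecting, a search over pricing policies can be carried out in k dimensions instead of d, which is the source of the running-time savings in dimensionality-reduction-based schemes.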
Conclusion
  • The authors give the first approximation algorithm for learning a linear pricing function without any assumption on the data other than normalization.
  • An interesting avenue for future work is to understand how strategic buyers would change their bids in response to a contextual batch learning algorithm, and how to design algorithms that are aware of this strategic response
  • This is a well-studied problem in non-contextual online learning (Amin et al [2013], Medina and Mohri [2014b], Drutsa [2017], Vanunts and Drutsa [2019], Nedelec et al [2019]) as well as in online contextual learning (Amin et al [2014], Golrezaei et al [2019]).
  • Formulating a model of strategic response to batch learning algorithms is itself open
Related Work
  • Our work is in the broad area of learning for revenue optimization. The papers in this area can be categorized along two axes: online vs batch learning, and contextual vs non-contextual. In the online non-contextual setting, Kleinberg and Leighton [2003] give the optimal algorithm for a single buyer, which was later extended to optimal reserve pricing in auctions by Cesa-Bianchi et al [2013]. In the online contextual setting there is a stream of recent work deriving optimal regret bounds for pricing (Amin et al [2014], Cohen et al [2016], Javanmard and Nazerzadeh [2016], Javanmard [2017], Lobel et al [2017], Mao et al [2018], Leme and Schneider [2018], Shah et al [2019]). For batch learning in non-contextual settings there is a long line of work establishing tight sample complexity bounds for revenue optimization (Cole and Roughgarden [2014], Morgenstern and Roughgarden [2015, 2016]) as well as approximation algorithms for reserve price optimization (Paes Leme et al [2016], Roughgarden and Wang [2019], Derakhshan et al [2019]).

    Our paper is in the setting of contextual batch learning. Medina and Mohri [2014a] started the work on this setting by showing generalization bounds via Rademacher complexity. They also observe that the loss function is discontinuous and non-convex and propose the use of a surrogate loss; they bound the difference between the pricing loss and the surrogate loss and design algorithms for minimizing the surrogate loss. Medina and Vassilvitskii [2017] design a pricing algorithm based on clustering, where the features are first clustered and then a non-contextual pricing algorithm is used on each cluster. Shen et al [2019] replace the pricing loss with a convex loss function derived from the theory of market equilibrium and argue that the clearing price is a good approximation of the optimal price on real datasets. A common theme in the previous papers is to replace the pricing loss by a more amenable loss function and give conditions under which the new loss approximates the pricing loss. Here, instead, we study the pricing loss directly. We give the first hardness proof in this setting and also give a (1 − ε)-approximation without any conditions on the data other than bounded norm.
References
  • K. Amin, A. Rostamizadeh, and U. Syed. Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems, pages 1169–1177, 2013.
  • K. Amin, A. Rostamizadeh, and U. Syed. Repeated contextual auctions with strategic buyers. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 622–630, 2014.
  • R. Calderbank, S. Jafarpour, and R. Schapire. Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain. Preprint, 2009.
  • N. Cesa-Bianchi, C. Gentile, and Y. Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of the twenty-fourth annual ACM-SIAM symposium on Discrete algorithms, pages 1190–1204, 2013.
  • M. C. Cohen, I. Lobel, and R. Paes Leme. Feature-based dynamic pricing. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 817–817. ACM, 2016.
  • R. Cole and T. Roughgarden. The sample complexity of revenue maximization. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 243–252, 2014.
  • M. Derakhshan, N. Golrezaei, and R. Paes Leme. LP-based approximation for personalized reserve prices. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 589–589, 2019.
  • A. Drutsa. Horizon-independent optimal pricing in repeated auctions with truthful and strategic buyers. In Proceedings of the 26th International Conference on World Wide Web, pages 33–42, 2017.
  • N. Golrezaei, A. Javanmard, and V. Mirrokni. Dynamic incentive-aware learning: Robust pricing in contextual auctions. In Advances in Neural Information Processing Systems, pages 9756–9766, 2019.
  • S. Har-Peled, P. Indyk, and R. Motwani. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(1):321–350, 2012.
  • A. Javanmard. Perishability of data: dynamic pricing under varying-coefficient models. The Journal of Machine Learning Research, 18(1):1714–1744, 2017.
  • A. Javanmard and H. Nazerzadeh. Dynamic pricing in high-dimensions. Working paper, University of Southern California, 2016.
  • R. Kleinberg and T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.
  • R. P. Leme and J. Schneider. Contextual search via intrinsic volumes. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, Paris, France, October 7-9, 2018, pages 268–282, 2018.
  • N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215–245, 1995.
  • I. Lobel, R. P. Leme, and A. Vladu. Multidimensional binary search for contextual decision-making. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC '17, Cambridge, MA, USA, June 26-30, 2017, page 585, 2017.
  • J. Mao, R. P. Leme, and J. Schneider. Contextual pricing for Lipschitz buyers. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 5648–5656, 2018.
  • A. M. Medina and M. Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 262–270, 2014a.
  • A. M. Medina and M. Mohri. Revenue optimization in posted-price auctions with strategic buyers. arXiv preprint arXiv:1411.6305, 2014b.
  • A. M. Medina and S. Vassilvitskii. Revenue optimization with approximate bid predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1856–1864. Curran Associates Inc., 2017.
  • J. Morgenstern and T. Roughgarden. Learning simple auctions. In Conference on Learning Theory, pages 1298–1318, 2016.
  • J. H. Morgenstern and T. Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Advances in Neural Information Processing Systems, pages 136–144, 2015.
  • T. Nedelec, N. El Karoui, and V. Perchet. Learning to bid in revenue maximizing auction. In Companion Proceedings of The 2019 World Wide Web Conference, pages 934–935, 2019.
  • R. Paes Leme, M. Pal, and S. Vassilvitskii. A field guide to personalized reserve prices. In Proceedings of the 25th International Conference on World Wide Web, pages 1093–1102, 2016.
  • T. Roughgarden and J. R. Wang. Minimizing regret with multiple reserves. ACM Transactions on Economics and Computation (TEAC), 7(3):1–18, 2019.
  • V. Shah, R. Johari, and J. Blanchet. Semi-parametric dynamic contextual pricing. In Advances in Neural Information Processing Systems, pages 2360–2370, 2019.
  • W. Shen, S. Lahaie, and R. P. Leme. Learning to clear the market. In International Conference on Machine Learning (ICML), 2019.
  • A. Vanunts and A. Drutsa. Optimal pricing in repeated posted-price auctions with different patience of the seller and the buyer. In Advances in Neural Information Processing Systems, pages 939–951, 2019.