th margin generalization bound by Gao and Zhou (2013), which was recently proved to be near-tight for some data distributions (Grønlund et al. 2019). In this work, we first demonstrate that the $k$'
th margin bound is inadequate in explaining the performance of state-of-the-art gradient boosters. We then explain the shortcomings of the $k$'
th margin bound and prove a stronger and more refined margin-based generalization bound for boosted classifiers that indeed succeeds in explaining the performance of modern gradient boosters. Finally, we improve upon the recent generalization lower bound by Gr{\\o}nlund et al. (2019). ","authors":[{"name":"Allan Grønlund"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"53f4459bdabfaee1c0ae9880","name":"Kasper Green Larsen"}],"id":"5f7fdd328de39f0828397f57","num_citation":0,"order":1,"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fupload\u002Fpdf\u002F1452\u002F1072\u002F888\u002F5f7fdd328de39f0828397f57_0.pdf","title":"Margins are Insufficient for Explaining Gradient Boosting","urls":["https:\u002F\u002Fneurips.cc\u002FConferences\u002F2020\u002FAcceptedPapersInitial","https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04998","https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fnips\u002FGronlundKL20","https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html","https:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fjournals\u002Fcorr\u002Fcorr2011.html#abs-2011-04998","https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html"],"venue":{"info":{"name":"NIPS 2020"},"volume":"33"},"versions":[{"id":"5f7fdd328de39f0828397f57","sid":"neurips2020#1470","src":"conf_neurips","year":2020},{"id":"5fabb57891e0110281fdaab9","sid":"2011.04998","src":"arxiv","year":2020},{"id":"5ff8844791e011c8326763cf","sid":"conf\u002Fnips\u002FGronlundKL20","src":"dblp","vsid":"conf\u002Fnips","year":2020},{"id":"5ff68ca1d4150a363cd2f308","sid":"3103017293","src":"mag","vsid":"1127325140","year":2020}],"year":2020},{"abstract":"Boosting is one of the most successful ideas in machine learning. The most well-accepted explanations for the low generalization error of boosting algorithms such as AdaBoost stem from margin theory. 
The study of margins in the context of boosting algorithms was initiated by Schapire, Freund, Bartlett and Lee (1998) and has inspired numerous boosting algorithms and generalization bounds. To date, the strongest known generalization (upper bound) is the kth margin bound of Gao and Zhou (2013). Despite the numerous generalization upper bounds that have been proved over the last two decades, nothing is known about the tightness of these bounds. In this paper, we give the first margin-based lower bounds on the generalization error of boosted classifiers. Our lower bounds nearly match the kth margin bound and thus almost settle the generalization performance of boosted classifiers in terms of margins.","authors":[{"name":"Allan Grønlund"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"53f4459bdabfaee1c0ae9880","name":"Kasper Green Larsen"},{"id":"53f451c9dabfaee4dc7ff847","name":"Alexander Mathiasen"},{"id":"53f43becdabfaeee229de758","name":"Jelani Nelson"}],"doi":"","id":"5db92a0247c8f766461fdbba","num_citation":1,"order":1,"pages":{"end":"11949","start":"11940"},"pdf":"\u002F\u002Fstatic.aminer.cn\u002Fmisc\u002Fpdf\u002FNIPS 2019\u002F9365-margin-based-generalization-lower-bounds-for-boosted-classifiers.pdf","title":"Margin-Based Generalization Lower Bounds for Boosted Classifiers","venue":{"info":{"name":"ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)"},"issue":"","volume":"32"},"versions":[{"id":"5db92a0247c8f766461fdbba","sid":"2971065486","src":"mag","vsid":"1127325140","year":2019},{"id":"5e15adcb3a55ac47ab5b0812","sid":"conf\u002Fnips\u002FGronlundKLMN19","src":"dblp","vsid":"conf\u002Fnips","year":2019},{"id":"5d91d22d3a55acb3c9c57b53","sid":"1909.12518","src":"arxiv","year":2019},{"id":"5f817c5cc6c3b86a5061821e","sid":"WOS:000535866903056","src":"wos","vsid":"ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)","year":2019}],"year":2019},{"abstract":" Multiplication is one of the most fundamental computational 
problems, yet its true complexity remains elusive. The best known upper bound, by F\\\"{u}rer, shows that two $n$-bit numbers can be multiplied via a boolean circuit of size $O(n \\lg n \\cdot 4^{\\lg^*n})$, where $\\lg^*n$ is the very slowly growing iterated logarithm. In this work, we prove that if a central conjecture in the area of network coding is true, then any constant degree boolean circuit for multiplication must have size $\\Omega(n \\lg n)$, thus almost completely settling the complexity of multiplication circuits. We additionally revisit classic conjectures in circuit complexity, due to Valiant, and show that the network coding conjecture also implies one of Valiant's conjectures. ","authors":[{"id":"53f436ccdabfaee0d9b687c9","name":"Peyman Afshani"},{"name":"Casper Benjamin Freksen"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"53f4459bdabfaee1c0ae9880","name":"Kasper Green Larsen"}],"doi":"","id":"5cede0f6da562983788d5973","lang":"en","num_citation":2,"order":2,"pages":{"end":"","start":""},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F19\u002F1902\u002F1902.10935.pdf","title":"Lower Bounds for Multiplication via Network 
Coding.","urls":["db\u002Fjournals\u002Fcorr\u002Fcorr1902.html#abs-1902-10935","http:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10935","https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Ficalp\u002FAfshaniFKL19","https:\u002F\u002Fdoi.org\u002F10.4230\u002FLIPIcs.ICALP.2019.10","https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10935"],"venue":{"info":{"name":"ICALP"},"issue":"","volume":"abs\u002F1902.10935"},"versions":[{"id":"5e8d92889fced0a24b60e144","sid":"journals\u002Fcorr\u002Fabs-1902-10935","src":"dblp","vsid":"journals\u002Fcorr","year":2019},{"id":"5ce2d1f2ced107d4c6483e86","sid":"2917327790","src":"mag","vsid":"2595449451","year":2019},{"id":"5d25b94f3a55ac5d42537a7c","sid":"conf\u002Ficalp\u002FAfshaniFKL19","src":"dblp","vsid":"conf\u002Ficalp","year":2019},{"id":"5db9265a47c8f766461aed82","sid":"2965168983","src":"mag","vsid":"1141821850","year":2019},{"id":"5c790c97e1cd8ea577770d22","sid":"1902.10935","src":"arxiv","year":2019}],"year":2019},{"abstract":"The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC’15]. In recent work, Larsen and Williams [SODA’17] attacked the problem from the upper bound side and gave a surprising cell probe data structure (that is, we only charge for memory accesses, while computation is free). Their cell probe data structure answers queries in O(n^{7\u002F4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional O(n^{7\u002F4}) bits on the side. In this paper, we essentially settle the cell probe complexity of succinct Boolean matrix-vector multiplication. We present a new cell probe data structure with query time O(n^{3\u002F2}) storing just O(n^{3\u002F2}) bits on the side. 
We then complement our data structure with a lower bound showing that any data structure storing r bits on the side, with n ≤ r ≤ n^2, must have query time t satisfying t·r = Ω(n^3). For r ≤ n, any data structure must have t = Ω(n^2). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We also prove similar lower bounds for matrix-vector multiplication over F_2.","authors":[{"id":"53f453b3dabfaeb22f4f650a","name":"Diptarka Chakraborty"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"53f4459bdabfaee1c0ae9880","name":"Kasper Green Larsen"}],"doi":"10.1145\u002F3188745.3188830","id":"5d9edc1b47c8f766460336c1","num_citation":12,"order":1,"pages":{"end":"1306","start":"1297"},"title":"Tight cell probe bounds for succinct Boolean matrix-vector multiplication","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1711.04467","http:\u002F\u002Fdoi.acm.org\u002F10.1145\u002F3188745.3188830","https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.04467","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"STOC '18: Symposium on Theory of Computing\n\t\t Los Angeles\n\t\t CA\n\t\t USA\n\t\t June, 2018"},"issue":"","volume":"abs\u002F1711.04467"},"versions":[{"id":"5d9edc1b47c8f766460336c1","sid":"2963650845","src":"mag","vsid":"1190910084","year":2018},{"id":"5e8f41319fced0a24b8d021a","sid":"10.1145\u002F3188745.3188830","src":"acm","vsid":"stoc","year":2018},{"id":"5e8d92a59fced0a24b62e4c0","sid":"journals\u002Fcorr\u002Fabs-1711-04467","src":"dblp","vsid":"journals\u002Fcorr","year":2017},{"id":"5b3d986f17c44a510f7fa6b9","sid":"conf\u002Fstoc\u002FChakrabortyKL18","src":"dblp","vsid":"conf\u002Fstoc","year":2018},{"id":"5c757231f56def97987ed857","sid":"2768191234","src":"mag","vsid":"1190910084","year":2018},{"id":"5efe19e9dfae548d33e5ab39","sid":"1711.04467","src":"arxiv","year":2017},{"id":"5fa9dc4765ca82c27fa61e7b","sid":"WOS:000458175600111","src":"wos","vsid":"STOC'18: 
PROCEEDINGS OF THE 50TH ANNUAL ACM SIGACT SYMPOSIUM ON THEORY OF COMPUTING","year":2018}],"year":2018},{"abstract":"Feature hashing, also known as {\\em the hashing trick}, introduced by Weinberger et al. (2009), is one of the key techniques used in scaling-up machine learning algorithms. Loosely speaking, feature hashing uses a random sparse projection matrix A: R^n → R^m (where m ≪ n) in order to reduce the dimension of the data from n to m while approximately preserving the Euclidean norm. Every column of A contains exactly one non-zero entry, equal to either −1 or 1. Weinberger et al. showed tail bounds on ‖Ax‖_2^2. Specifically, they showed that for every ε,δ, if ‖x‖∞\u002F‖x‖_2 is sufficiently small, and m is sufficiently large, then Pr[|‖Ax‖_2^2−‖x‖_2^2| < ε‖x‖_2^2] ≥ 1−δ. These bounds were later extended by Dasgupta et al. (2010) and most recently refined by Dahlgaard et al. (2017); however, the true nature of the performance of this key technique, and specifically the correct tradeoff between the pivotal parameters ‖x‖∞\u002F‖x‖_2, m, ε, δ, remained an open question. We settle this question by giving tight asymptotic bounds on the exact tradeoff between the central parameters, thus providing a complete understanding of the performance of feature hashing. 
We complement the asymptotic bound with empirical data, which shows that the constants hiding in the asymptotic notation are, in fact, very close to 1, thus further illustrating the tightness of the presented bounds in practice.","authors":[{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"name":"Casper Benjamin Freksen"},{"id":"53f4459bdabfaee1c0ae9880","name":"Kasper Green Larsen"}],"doi":"","id":"5c2348ceda562935fc1d56a4","lang":"en","num_citation":0,"order":0,"pages":{"end":"5404","start":"5394"},"pdf":"\u002F\u002Fstatic.aminer.cn\u002Fmisc\u002Fpdf\u002FNIPS\u002F2018\u002F5c2348ceda562935fc1d56a4.pdf","title":"Fully Understanding The Hashing Trick.","urls":["https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fnips\u002FKammaFL18","http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7784-fully-understanding-the-hashing-trick"],"venue":{"info":{"name":"neural information processing systems"},"issue":"","volume":""},"versions":[{"id":"5c2348ceda562935fc1d56a4","sid":"conf\u002Fnips\u002FKammaFL18","src":"dblp","vsid":"conf\u002Fnips","year":2018},{"id":"5c757550f56def97989dcdee","sid":"2804552715","src":"mag","vsid":"1127325140","year":2018},{"id":"5d9edc7847c8f76646040b13","sid":"2964064010","src":"mag","vsid":"1127325140","year":2018}],"year":2018},{"abstract":"We introduce a \\emph{batch} version of sparse recovery, where the goal is to report a sequence of vectors $A_1',\\ldots,A_m' \\in \\mathbb{R}^n$ that estimate unknown signals $A_1,\\ldots,A_m \\in \\mathbb{R}^n$ using a few linear measurements, each involving exactly one signal vector, under an assumption of \\emph{average sparsity}. More precisely, we want to have $(1)\\;\\;\\; \\sum_{j \\in [m]}{\\|A_j - A_j'\\|_p^p} \\le C \\cdot \\min \\Big\\{ \\sum_{j \\in [m]}{\\|A_j - A_j^*\\|_p^p} \\Big\\}$ for predetermined constants $C \\ge 1$ and $p$, where the minimum is over all $A_1^*,\\ldots,A_m^* \\in \\mathbb{R}^n$ that are $k$-sparse on average. 
We assume $k$ is given as input, and ask for the minimal number of measurements required to satisfy $(1)$. The special case $m=1$ is known as stable sparse recovery and has been studied extensively. We resolve the question for $p =1$ up to polylogarithmic factors, by presenting a randomized adaptive scheme that performs $\\tilde{O}(km)$ measurements and with high probability has output satisfying $(1)$, for arbitrarily small $C > 1$. Finally, we show that adaptivity is necessary for every non-trivial scheme.","authors":[{"id":"544838a4dabfae87b7deb9c6","name":"Alexandr Andoni"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"5406ccdbdabfae44f085d023","name":"Robert Krauthgamer"},{"id":"53f42ebddabfaee43ebd620c","name":"Eric Price"}],"doi":"","id":"5b8c9f4a17c44af36f8b6de2","lang":"en","num_citation":0,"order":1,"pages":{"end":"","start":""},"title":"Batch Sparse Recovery, or How to Leverage the Average Sparsity.","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1807.08478","https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.08478"],"venue":{"info":{"name":"arXiv: Data Structures and Algorithms"},"issue":"","volume":"abs\u002F1807.08478"},"versions":[{"id":"5e8d92819fced0a24b606ba4","sid":"journals\u002Fcorr\u002Fabs-1807-08478","src":"dblp","vsid":"journals\u002Fcorr","year":2018},{"id":"5ce2ceadced107d4c62ffea2","sid":"2884092275","src":"mag","vsid":"2595449451","year":2018},{"id":"5f043c0fdfae54570ec4b508","sid":"1807.08478","src":"arxiv","year":2018}],"year":2018},{"abstract":"By a classical result of Gomory and Hu (1961), in every edge-weighted graph $G=(V,E,w)$, the minimum st-cut values, when ranging over all $s,t\\\\in V$, take at most $|V|-1$ distinct values. That is, these $\\\\left({\\\\begin{array}{c}|V|\\\\\\\\ 2\\\\end{array}}\\\\right)$ instances exhibit redundancy factor ${\\\\varOmega }(|V|)$. They further showed how to construct from G a tree $(V,E',w')$ that stores all minimum st-cut values. 
Motivated by this result, we obtain tight bounds for the redundancy factor of several generalizations of the minimum st-cut problem. 1. Group-Cut: Consider the minimum $(A,B)$-cut, ranging over all subsets $A,B\\\\subseteq V$ of given sizes $|A|=\\\\alpha $ and $|B|=\\\\beta $. The redundancy factor is ${\\\\varOmega }_{\\\\alpha ,\\\\beta }(|V|)$. 2. Multiway-Cut: Consider the minimum cut separating every two vertices of $S\\\\subseteq V$, ranging over all subsets of a given size $|S|=k$. The redundancy factor is ${\\\\varOmega }_{k}(|V|)$. 3. Multicut: Consider the minimum cut separating every demand-pair in $D\\\\subseteq V\\\\times V$, ranging over collections of $|D|=k$ demand pairs. The redundancy factor is ${\\\\varOmega }_{k}(|V|^k)$. This result is a bit surprising, as the redundancy factor is much larger than in the first two problems. A natural application of these bounds is to construct small data structures that store all relevant cut values, à la the Gomory-Hu tree. We initiate this direction by giving some upper and lower bounds.","authors":[{"id":"53f45151dabfaeee22a212f5","name":"Rajesh Chitnis"},{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"5406ccdbdabfae44f085d023","name":"Robert Krauthgamer"}],"doi":"10.1007\u002F978-3-662-53536-3_12","id":"5736960e6e3b12023e5211f0","lang":"en","num_citation":2,"order":1,"pages":{"end":"144","start":"133"},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F15\u002F1511\u002F1511.08647.pdf","title":"Tight Bounds for Gomory-Hu-like Cut 
Counting","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.08647","http:\u002F\u002Fdx.doi.org\u002F10.1007\u002F978-3-662-53536-3_12","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=3081387&preflayout=flat","https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.08647"],"venue":{"info":{"name":"WG"},"issue":"","volume":"abs\u002F1511.08647"},"versions":[{"id":"56d867e2dabfae2eeeaeddec","sid":"2179083756","src":"mag","year":2015},{"id":"5e8d928a9fced0a24b60ffe6","sid":"journals\u002Fcorr\u002FChitnisKK15","src":"dblp","vsid":"journals\u002Fcorr","year":2015},{"id":"599c7bb4601a182cd27591ff","sid":"conf\u002Fwg\u002FChitnisKK16","src":"dblp","vsid":"conf\u002Fwg","year":2016},{"id":"59a249370cf267ab141d18df","sid":"3081387","src":"acm","year":2016},{"id":"5ce2abdbced107d4c6b50744","sid":"2275085302","src":"mag","vsid":"1165841369","year":2016},{"id":"5d9edba147c8f76646021814","sid":"2963094095","src":"mag","vsid":"1165841369","year":2016},{"id":"5c61095cda56297340b72e17","sid":"1511.08647","src":"arxiv","year":2015}],"year":2016},{"abstract":"Our main result is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which resolves in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, we prove that for every edge-weighted graph G = (V, E, w) and a subset of terminals T ⊆ V, there is a graph G' = (T, E', w') that is isomorphic to a minor of G, such that for every two terminals u, v ∈ T, the shortest-path distances between them in G and in G' satisfy [EQUATION]. Our existence proof actually gives a randomized polynomial-time algorithm. Our proof features a new variant of metric decomposition. 
It is well-known that every finite metric space (X, d) admits a β-separating decomposition for β = O(log|X|), which roughly means for every desired diameter bound Δ \u003E 0 there is a randomized partitioning of X, which satisfies the following separation requirement: for every x, y ∈ X, the probability they lie in different clusters of the partition is at most βd(x, y)\u002FΔ. We introduce an additional requirement, which is the following tail bound: for every shortest-path P of length d(P) ≤ Δ\u002Fβ, the number of clusters of the partition that meet the path P, denoted Z_P, satisfies Pr[Z_P \u003E t] ≤ 2e^{−Ω(t)} for all t \u003E 0.","abstract_zh":"","authors":[{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"5406ccdbdabfae44f085d023","name":"Robert Krauthgamer"},{"id":"53f43450dabfaee4dc76ef77","name":"Huy L. Nguyen"}],"doi":"10.5555\u002F2634074.2634151","id":"53e99b78b7602d970241f531","lang":"en","num_citation":26,"order":0,"pages":{"end":"1040","start":"1029"},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F13\u002F1304\u002F1304.1449.pdf","title":"Cutting corners cheaply, or how to remove Steiner points","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1304.1449","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2634074.2634151&COLL=DL&DL=acm&preflayout=flat","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2634074.2634151&coll=DL&dl=GUIDE&CFID=521561572&CFTOKEN=81175169&preflayout=flat","http:\u002F\u002Fdx.doi.org\u002F10.1137\u002F140951382","https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F2634074.2634151","https:\u002F\u002Farxiv.org\u002Fabs\u002F1304.1449","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"SIAM J. 
Comput."},"issue":"4","volume":"abs\u002F1304.1449"},"versions":[{"id":"558b0586612c41e6b9d40952","sid":"2634151","src":"acm","year":2014},{"id":"5e8d92899fced0a24b60eb7b","sid":"journals\u002Fcorr\u002Fabs-1304-1449","src":"dblp","vsid":"journals\u002Fcorr","year":2013},{"id":"599c7e7e601a182cd28a69c9","sid":"conf\u002Fsoda\u002FKammaKN14","src":"dblp","vsid":"conf\u002Fsoda","year":2014},{"id":"56d8135bdabfae2eee610dee","sid":"2199259801","src":"mag","year":2014},{"id":"56d8fed1dabfae2eeec9f6ea","sid":"1960942702","src":"mag","year":2013},{"id":"599c7a95601a182cd26c8371","sid":"journals\u002Fsiamcomp\u002FKammaKN15","src":"dblp","vsid":"journals\u002Fsiamcomp","year":2015},{"id":"5c756b9af56def97983fb67d","sid":"2569472613","src":"mag","vsid":"153560523","year":2015},{"id":"5e8ec6779fced0a24b6e9662","sid":"10.5555\u002F2634074.2634151","src":"acm","vsid":"soda","year":2014},{"id":"5c61082cda56297340b2c119","sid":"1304.1449","src":"arxiv","year":2015},{"id":"5ff5a79abf33bee3baf8fe85","sid":"WOS:000360654100004","src":"wos","vsid":"SIAM JOURNAL ON COMPUTING","year":2015}],"year":2015},{"abstract":"Given a graph H = (U,E) and connectivity requirements r = {r(u,v): u,v ∈ R ⊆ U }, we say that H satisfies r if it contains r(u,v) pairwise internally-disjoint uv-paths for all u,v ∈ R. We consider the Survivable Network with Minimum Number of Steiner Points (SN-MSP) problem: given a finite set V of points in a normed space $(M, \\left\\| \\cdot \\right\\|)$, and connectivity requirements, find a minimum size set S ⊂ M − V of additional points, such that the unit disc graph induced by V ∪ S satisfies the requirements. In the (node-connectivity version of the) Survivable Network Design Problem (SNDP) we are given a graph G = (V,E) with edge costs and connectivity requirements, and seek a min-cost subgraph H of G that satisfies the requirements. 
Let $k = \\mathop{ \\max }\\limits_{u,v \\in V}{r(u,v)}$ denote the maximum connectivity requirement. We will show a natural transformation of an SN-MSP instance (V,r) into an SNDP instance (G = (V,E),c,r), such that an α-approximation for the SNDP instance implies an α·O(k^2)-approximation algorithm for the SN-MSP instance. In particular, for the most interesting case of uniform requirement r(u,v) = k for all u,v ∈ V, we obtain for SN-MSP the ratio O(k^2 ln k), which solves an open problem from [3].","authors":[{"id":"53f43b3bdabfaec09f1ac28d","name":"Lior Kamma"},{"id":"5405da2edabfae450f3da1e8","name":"Zeev Nutov"}],"doi":"10.1007\u002F978-3-642-18318-8_14","id":"53e9b422b7602d9703f1cf51","lang":"en","num_citation":4,"order":0,"pages":{"end":"165","start":"154"},"title":"Approximating survivable networks with minimum number of Steiner points","urls":["http:\u002F\u002Fdx.doi.org\u002F10.1007\u002F978-3-642-18318-8_14","http:\u002F\u002Fdx.doi.org\u002F10.1002\u002Fnet.21466","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"Workshop on Approximation and Online Algorithms"},"issue":"","volume":"6534"},"versions":[{"id":"599c7e75601a182cd28a2e3a","sid":"conf\u002Fwaoa\u002FKammaN10","src":"dblp","vsid":"conf\u002Fwaoa","year":2010},{"id":"5390aeba20f70186a0ecabf8","sid":"1946254","src":"acm","year":2010},{"id":"56d81435dabfae2eee667f88","sid":"1498454938","src":"mag","year":2010},{"id":"53e2a99a20f7fff678df3be0","sid":"39280282","src":"msra","year":2010},{"id":"56d91493dabfae2eee514dbe","sid":"2022303267","src":"mag","year":2012},{"id":"5fc6eaecd75e2ac63d4fbaf8","sid":"WOS:000296363200014","src":"wos","vsid":"LECTURE NOTES IN COMPUTER 
SCIENCE","year":2011}],"year":2012}],"profilePubsTotal":10,"profilePatentsPage":1,"profilePatents":[],"profilePatentsTotal":0,"profilePatentsEnd":true,"profileProjectsPage":0,"profileProjects":null,"profileProjectsTotal":null,"newInfo":null,"checkDelPubs":[]}};