"An extension of the angular synchronization problem to the heterogeneous setting." Mihai Cucuringu, Hemant Tyagi. arXiv:2012.14932, 2020.
[abstract beginning truncated in source] ...s denote the subgraphs of edges corresponding to each group. We propose a probabilistic generative model for this problem, along with a spectral algorithm for which we provide a detailed theoretical analysis in terms of robustness against both sampling sparsity and noise. The theoretical findings are complemented by a comprehensive set of numerical experiments, showcasing the efficacy of our algorithm under various parameter regimes. Finally, we consider an application of bi-synchronization to the graph realization problem, and provide along the way an iterative graph disentangling procedure that uncovers the subgraphs $G_i$, $i=1,\ldots,k$, which is of independent interest, as it is shown to improve the final recovery accuracy across all the experiments considered.

"Ranking and synchronization from pairwise measurements via SVD." Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi. CoRR abs/1906.02746, 2019.
Given a measurement graph $G = (V,E)$ and an unknown signal $r \in \mathbb{R}^n$, we investigate algorithms for recovering $r$ from pairwise measurements of the form $r_i - r_j$, $\{i,j\} \in E$. This problem arises in a variety of applications, such as ranking teams in sports data and time synchronization of distributed networks. Framed in the context of ranking, the task is to recover the ranking of $n$ teams (induced by $r$) given a small subset of noisy pairwise rank offsets. We propose a simple SVD-based algorithmic pipeline for both the problem of time synchronization and ranking. We provide a detailed theoretical analysis in terms of robustness against both sampling sparsity and noise perturbations with outliers, using results from matrix perturbation and random matrix theory. Our theoretical findings are complemented by a detailed set of numerical experiments on both synthetic and real data, showcasing the competitiveness of our proposed algorithms with other state-of-the-art methods.
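The rank-2 structure behind the SVD pipeline above can be sketched on synthetic data. This is a minimal illustration with a dense, complete measurement matrix; handling sparse measurement graphs and outliers is the paper's contribution and is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
r = rng.normal(size=n)                       # hidden scores to recover

# Noisy complete matrix of pairwise offsets: C[i, j] ~ r_i - r_j
C = r[:, None] - r[None, :] + 0.1 * rng.normal(size=(n, n))
C = (C - C.T) / 2                            # enforce skew-symmetry

# Noiseless C = r 1^T - 1 r^T has rank 2 with column span {r, 1};
# take the top-2 left singular vectors and project out the all-ones direction.
U, _, _ = np.linalg.svd(C)
span2 = U[:, :2]
ones = np.ones(n) / np.sqrt(n)
proj = span2 - np.outer(ones, ones @ span2)  # remove the 1-direction
u, _, _ = np.linalg.svd(proj, full_matrices=False)
r_hat = u[:, 0]                              # proportional to centered r, up to sign
if r_hat @ (r - r.mean()) < 0:
    r_hat = -r_hat

corr = np.corrcoef(r_hat, r)[0, 1]
```

The recovered vector determines the induced ranking up to the global shift that pairwise differences cannot identify.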
"SPONGE: A generalized eigenproblem for clustering signed networks." Mihai Cucuringu, Peter Davies, Aldo Glielmo, Hemant Tyagi. AISTATS 2019; arXiv:1904.08575.
We introduce a principled and theoretically sound spectral method for $k$-way clustering in signed graphs, where the affinity measure between nodes takes either positive or negative values. Our approach is motivated by social balance theory, where the task of clustering aims to decompose the network into disjoint groups such that individuals within the same group are connected by as many positive edges as possible, while individuals from different groups are connected by as many negative edges as possible. Our algorithm relies on a generalized eigenproblem formulation inspired by recent work on constrained clustering. We provide theoretical guarantees for our approach in the setting of a signed stochastic block model, by leveraging tools from matrix perturbation theory and random matrix theory. An extensive set of numerical experiments on both synthetic and real data shows that our approach compares favorably with state-of-the-art methods for signed clustering, especially for a large number of clusters and sparse measurement graphs.

"Algorithms for Learning Sparse Additive Models with Interactions in High Dimensions." Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause. arXiv:1605.00609, 2016.
A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a Sparse Additive Model (SPAM) if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}} \phi_l(x_l)$, where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$. Assuming the $\phi_l$'s and $\mathcal{S}$ to be unknown, there exists extensive work for estimating $f$ from its samples. In this work, we consider a generalized version of SPAMs that also allows for the presence of a sparse number of second-order interaction terms. For some $\mathcal{S}_1 \subset [d]$, $\mathcal{S}_2 \subset \binom{[d]}{2}$, with $|\mathcal{S}_1| \ll d$, $|\mathcal{S}_2| \ll d^2$, the function $f$ is now assumed to be of the form $\sum_{p \in \mathcal{S}_1} \phi_p(x_p) + \sum_{(l,l') \in \mathcal{S}_2} \phi_{(l,l')}(x_l, x_{l'})$. Assuming we have the freedom to query $f$ anywhere in its domain, we derive efficient algorithms that provably recover $\mathcal{S}_1, \mathcal{S}_2$ with finite sample bounds. Our analysis covers the noiseless setting, where exact samples of $f$ are obtained, and also extends to the noisy setting, where the queries are corrupted with noise. For the noisy setting in particular, we consider two noise models, namely i.i.d. Gaussian noise and arbitrary but bounded noise. Our main methods for identification of $\mathcal{S}_2$ essentially rely on estimation of sparse Hessian matrices, for which we provide two novel compressed sensing based schemes. Once $\mathcal{S}_1, \mathcal{S}_2$ are known, we show how the individual components $\phi_p$, $\phi_{(l,l')}$ can be estimated via additional queries of $f$, with uniform error bounds. Lastly, we provide simulation results on synthetic data that validate our theoretical findings.
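Why Hessian information identifies the interaction set $\mathcal{S}_2$ can be seen with a toy model: the mixed partial $\partial^2 f / \partial x_l \partial x_{l'}$ vanishes unless $(l,l')$ interacts. The sketch below probes every pair with a mixed central difference; the function $f$ is a hypothetical example, and the paper's actual schemes recover the sparse Hessian from far fewer queries via compressed sensing rather than probing all $\binom{d}{2}$ pairs.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
d = 10
# Hypothetical SPAM with one interaction pair (1, 7): f = sin(x_2) + x_5^2 + x_1 * x_7
f = lambda x: np.sin(x[2]) + x[5] ** 2 + x[1] * x[7]

# Second-order mixed central difference approximates d^2 f / dx_l dx_m,
# which is nonzero only for interacting pairs.
x0 = rng.uniform(-1, 1, size=d)
h = 1e-3
found = set()
for l, m in combinations(range(d), 2):
    el, em = np.zeros(d), np.zeros(d)
    el[l], em[m] = h, h
    mixed = (f(x0 + el + em) - f(x0 + el - em)
             - f(x0 - el + em) + f(x0 - el - em)) / (4 * h * h)
    if abs(mixed) > 1e-4:
        found.add((l, m))
```

For purely additive pairs the difference is exactly zero up to floating-point rounding, so a small threshold separates the two cases cleanly in the noiseless setting.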
"On denoising modulo 1 samples of a function." Mihai Cucuringu, Hemant Tyagi. AISTATS 2018, pp. 1868–1876; arXiv:1710.10210.
Consider an unknown smooth function $f: [0,1] \rightarrow \mathbb{R}$, and say we are given $n$ noisy mod 1 samples of $f$, i.e., $y_i = (f(x_i) + \eta_i) \bmod 1$ for $x_i \in [0,1]$, where $\eta_i$ denotes noise. Given the samples $(x_i, y_i)_{i=1}^{n}$, our goal is to recover smooth, robust estimates of the clean samples $f(x_i) \bmod 1$. We formulate a natural approach for solving this problem, which works with angular embeddings of the noisy mod 1 samples over the unit complex circle, inspired by the angular synchronization framework. Our approach amounts to solving a quadratically constrained quadratic program (QCQP), which is NP-hard in its basic form; we therefore consider its relaxation, which is a trust region subproblem and hence solvable efficiently. We demonstrate its robustness to noise via extensive numerical simulations on several synthetic examples, along with a detailed theoretical analysis. To the best of our knowledge, we provide the first algorithm for denoising mod 1 samples of a smooth function that comes with robustness guarantees.
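The angular-embedding idea above can be illustrated with a much simpler moving-average variant (not the paper's QCQP/trust-region method): mapping each sample to $e^{2\pi i y_j}$ removes the wrap-around discontinuity, so neighboring embeddings can be averaged and the angle read back off. The test function and window size are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = np.sort(rng.uniform(0, 1, size=n))
f = lambda t: t + 0.3 * np.sin(2 * np.pi * t)   # smooth test function (assumed)
y = (f(x) + 0.05 * rng.normal(size=n)) % 1      # noisy mod-1 samples

# Embed on the unit circle; the mod-1 jumps disappear in this representation.
z = np.exp(2j * np.pi * y)
w = 10                                          # one-sided neighbor window (assumed)
z_smooth = np.array([z[max(i - w, 0):i + w + 1].mean() for i in range(n)])
y_hat = (np.angle(z_smooth) / (2 * np.pi)) % 1

clean = np.exp(2j * np.pi * (f(x) % 1))
raw_err = np.abs(z - clean).mean()              # error of the noisy embeddings
mean_err = np.abs(np.exp(2j * np.pi * y_hat) - clean).mean()
```

Errors are measured as chordal distances on the circle, which is the natural metric for mod-1 quantities; the smoothed estimates should beat the raw samples whenever $f$ varies slowly relative to the window.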
"Learning Sparse Additive Models with Interactions in High Dimensions." Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause. AISTATS 2016, JMLR Workshop and Conference Proceedings, vol. 51, pp. 111–120; arXiv:1604.05307.
A function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is referred to as a Sparse Additive Model (SPAM) if it is of the form $f(\mathbf{x}) = \sum_{l \in \mathcal{S}} \phi_l(x_l)$, where $\mathcal{S} \subset [d]$, $|\mathcal{S}| \ll d$. Assuming the $\phi_l$'s and $\mathcal{S}$ to be unknown, the problem of estimating $f$ from its samples has been studied extensively. In this work, we consider a generalized SPAM, allowing for second-order interaction terms. For some $\mathcal{S}_1 \subset [d]$, $\mathcal{S}_2 \subset \binom{[d]}{2}$, the function $f$ is assumed to be of the form $f(\mathbf{x}) = \sum_{p \in \mathcal{S}_1} \phi_p(x_p) + \sum_{(l,l') \in \mathcal{S}_2} \phi_{(l,l')}(x_l, x_{l'})$. Assuming $\phi_p$, $\phi_{(l,l')}$, $\mathcal{S}_1$ and $\mathcal{S}_2$ to be unknown, we provide a randomized algorithm that queries $f$ and exactly recovers $\mathcal{S}_1, \mathcal{S}_2$. Consequently, this also enables us to estimate the underlying $\phi_p$, $\phi_{(l,l')}$. We derive sample complexity bounds for our scheme and also extend our analysis to include the situation where the queries are corrupted with noise, either stochastic, or arbitrary but bounded. Lastly, we provide simulation results on synthetic data that validate our theoretical findings.

"On Two Continuum Armed Bandit Problems in High Dimensions." Hemant Tyagi, Sebastian U. Stich, Bernd Gärtner. Theory of Computing Systems 58(1):191–222, 2016; doi:10.1007/s00224-014-9570-8.
We consider the problem of continuum armed bandits where the arms are indexed by a compact subset of $\mathbb{R}^d$. For large $d$, it is well known that mere smoothness assumptions on the reward functions lead to regret bounds that suffer from the curse of dimensionality. A typical way to tackle this in the literature has been to make further assumptions on the structure of reward functions. In this work we assume the reward functions to be intrinsically of low dimension $k \ll d$ and consider two models: (i) the reward functions depend on only an unknown subset of $k$ coordinate variables, and (ii) a generalization of (i), where the reward functions depend on an unknown $k$-dimensional subspace of $\mathbb{R}^d$. By placing suitable assumptions on the smoothness of the rewards, we derive randomized algorithms for both problems that achieve nearly optimal regret bounds in terms of the number of rounds $n$.

"Learning Non-Parametric Basis Independent Models from Point Queries via Low-Rank Methods." Hemant Tyagi, Volkan Cevher. Applied and Computational Harmonic Analysis 37(3):389–412; arXiv:1310.1826; doi:10.1016/j.acha.2014.01.002.
We consider the problem of learning multi-ridge functions of the form $f(\mathbf{x}) = g(\mathbf{A}\mathbf{x})$ from point evaluations of $f$. We assume that the function $f$ is defined on an $\ell_2$-ball in $\mathbb{R}^d$, $g$ is twice continuously differentiable almost everywhere, and $\mathbf{A} \in \mathbb{R}^{k \times d}$ is a rank-$k$ matrix, where $k \ll d$. We propose a randomized, polynomial-complexity sampling scheme for estimating such functions. Our theoretical developments leverage recent techniques from low rank matrix recovery, which enables us to derive a polynomial time estimator of the function $f$ along with uniform approximation guarantees. We prove that our scheme can also be applied for learning functions of the form $f(\mathbf{x}) = \sum_{i=1}^{k} g_i(\mathbf{a}_i^T \mathbf{x})$, provided $f$ satisfies certain smoothness conditions in a neighborhood around the origin. We also characterize the noise robustness of the scheme. Finally, we present numerical examples to illustrate the theoretical bounds in action.
"Stochastic continuum armed bandit problem of few linear parameters in high dimensions." Hemant Tyagi, Sebastian U. Stich, Bernd Gärtner. CoRR abs/1312.0232, 2013.
We consider a stochastic continuum armed bandit problem where the arms are indexed by the $\ell_2$ ball $B_{d}(1+\nu)$ of radius $1+\nu$ in $\mathbb{R}^d$. The reward functions $r : B_{d}(1+\nu) \rightarrow \mathbb{R}$ are considered to intrinsically depend on $k \ll d$ unknown linear parameters, so that $r(\mathbf{x}) = g(\mathbf{A}\mathbf{x})$, where $\mathbf{A}$ is a full rank $k \times d$ matrix. Assuming the mean reward function to be smooth, we make use of results from the low-rank matrix recovery literature and derive an efficient randomized algorithm which achieves a regret bound of $O(C(k,d)\, n^{\frac{1+k}{2+k}} (\log n)^{\frac{1}{2+k}})$ with high probability. Here $C(k,d)$ is at most polynomial in $d$ and $k$, and $n$ is the number of rounds, or the sampling budget, which is assumed to be known beforehand.

"Efficient Sampling for Learning Sparse Additive Models in High Dimensions." Hemant Tyagi, Andreas Krause, Bernd Gärtner. Advances in Neural Information Processing Systems 27 (NIPS 2014).
We consider the problem of learning sparse additive models, i.e., functions of the form $f(\mathbf{x}) = \sum_{l \in S} \phi_l(x_l)$, $\mathbf{x} \in \mathbb{R}^d$, from point queries of $f$. Here $S$ is an unknown subset of coordinate variables with $|S| = k \ll d$. Assuming the $\phi_l$'s to be smooth, we propose a set of points at which to sample $f$ and an efficient randomized algorithm that recovers a uniform approximation to each unknown $\phi_l$. We provide a rigorous theoretical analysis of our scheme along with sample complexity bounds. Our algorithm utilizes recent results from compressive sensing theory along with a novel convex quadratic program for recovering robust uniform approximations to univariate functions from point queries corrupted with arbitrary bounded noise. Lastly, we theoretically analyze the impact of noise, either arbitrary but bounded, or stochastic, on the performance of our algorithm.
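The support-identification step behind sparse additive models can be shown on a toy instance: coordinate $l$ is active iff $\partial f/\partial x_l \not\equiv 0$, which a central difference detects at a random point. The model below is hypothetical, and this brute-force probe ignores the query efficiency and noise robustness that are the actual contributions of the entry above.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 30
# Hypothetical sparse additive model with active set S = {3, 11, 27}
phis = {3: np.sin, 11: lambda t: t ** 2, 27: np.tanh}
f = lambda x: sum(phi(x[l]) for l, phi in phis.items())

# Probe each coordinate with a central difference at a few random points;
# inactive coordinates give exactly zero since f does not depend on them.
h, n_probes = 1e-4, 5
active = set()
for _ in range(n_probes):
    x = rng.uniform(-1, 1, size=d)
    for l in range(d):
        e = np.zeros(d)
        e[l] = h
        if abs(f(x + e) - f(x - e)) / (2 * h) > 1e-3:
            active.add(l)
```

Several probe points are used because a single point can land where some $\phi_l'$ happens to vanish (e.g. $x_l \approx 0$ for $\phi_l(t) = t^2$).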
"Continuum armed bandit problem of few variables in high dimensions." Hemant Tyagi, Bernd Gärtner. WAOA 2013, Lecture Notes in Computer Science, vol. 8447, pp. 108–119; doi:10.1007/978-3-319-08001-7_10.
We consider the stochastic and adversarial settings of continuum armed bandits where the arms are indexed by $[0,1]^d$. The reward functions $r : [0,1]^d \rightarrow \mathbb{R}$ are assumed to intrinsically depend on at most $k$ coordinate variables, implying $r(x_1, \ldots, x_d) = g(x_{i_1}, \ldots, x_{i_k})$ for distinct and unknown $i_1, \ldots, i_k \in \{1, \ldots, d\}$ and some locally Hölder continuous $g : [0,1]^k \rightarrow \mathbb{R}$ with exponent $\alpha \in (0,1]$. Firstly, assuming $(i_1, \ldots, i_k)$ to be fixed across time, we propose a simple modification of the CAB1 algorithm, where we construct the discrete set of sampling points to obtain a bound of $O\big(n^{\frac{\alpha+k}{2\alpha+k}} (\log n)^{\frac{\alpha}{2\alpha+k}} C(k,d)\big)$ on the regret, with $C(k,d)$ depending at most polynomially in $k$ and sub-logarithmically in $d$. The construction is based on creating partitions of $\{1, \ldots, d\}$ into $k$ disjoint subsets and is probabilistic; hence our result holds with high probability. Secondly, we extend our results to also handle the more general case where $(i_1, \ldots, i_k)$ can change over time, and derive regret bounds for the same.
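The CAB1-style reduction underlying the entry above discretizes the continuum of arms and runs a finite-armed bandit algorithm over the resulting bins. A minimal one-dimensional sketch, with an assumed mean-reward function and UCB1 as the finite-armed algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Discretize [0, 1] into K bins and run UCB1 on the bin centers.
K, n = 10, 20000
arms = (np.arange(K) + 0.5) / K
mean_reward = lambda x: 1 - 4 * (x - 0.37) ** 2   # unknown smooth reward (assumed)

counts = np.zeros(K)
sums = np.zeros(K)
for t in range(n):
    if t < K:
        a = t                                      # play each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    reward = mean_reward(arms[a]) + 0.1 * rng.normal()
    counts[a] += 1
    sums[a] += reward

best = arms[int(np.argmax(counts))]                # most-played bin center
```

The discretization error decreases with $K$ while the finite-armed regret grows with it; balancing the two gives bounds of the form quoted in the abstract, and the paper's contribution is making the discretization depend only mildly on the ambient dimension $d$.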
"Tangent space estimation for smooth embeddings of Riemannian manifolds." Hemant Tyagi, Elif Vural, Pascal Frossard. Information and Inference: A Journal of the IMA 2(1):69–114, 2013; doi:10.1093/imaiai/iat003.
Numerous dimensionality reduction problems in data analysis involve the recovery of low-dimensional models or the learning of manifolds underlying sets of data. Many manifold learning methods require the estimation of the tangent space of the manifold at a point from locally available data samples. Local sampling conditions such as (i) the size of the neighborhood (sampling width) and (ii) the num... [abstract truncated in source]

"Active Learning of Multi-Index Function Models." Volkan Cevher, Hemant Tyagi. NIPS 2012, pp. 1475–1483. (The source lists three version records of this paper: the NIPS proceedings entry, a 2012 record, and an ETH research-collection entry, https://www.research-collection.ethz.ch/handle/20.500.11850/61702.)

"Learning ridge functions with randomized sampling in high dimensions." Hemant Tyagi, Volkan Cevher. ICASSP 2012, pp. 2025–2028; doi:10.1109/ICASSP.2012.6288306.
We study the problem of learning ridge functions of the form $f(\mathbf{x}) = g(\mathbf{a}^T\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^d$, from random samples. Assuming $g$ to be a twice continuously differentiable function, we leverage techniques from the low rank matrix recovery literature to derive a uniform approximation guarantee for estimation of the ridge function $f$. Our new analysis removes the de facto compressibility assumption on the parameter $\mathbf{a}$ for learning in the existing literature. Interestingly, the price to pay in high dimensional settings is not major. For example, when $g$ is thrice continuously differentiable in an open neighbourhood of the origin, the sampling complexity changes from $O(\log d)$ to $O(d)$, or from $O(d^{\frac{2+q}{2-q}})$ to $O(d^4)$, depending on the behaviour of $g'$ and $g''$ at the origin, with $0 < q < 1$ characterizing the sparsity of $\mathbf{a}$.
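The tangent-space estimation problem in the entry above is commonly attacked by local PCA: take the samples in a small neighborhood of a base point and read the tangent directions off the leading principal components. A minimal sketch on an assumed toy manifold (a circle embedded in $\mathbb{R}^3$); the paper's analysis concerns exactly how the neighborhood size and sample count must scale for such an estimate to be accurate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sample a 1-D manifold (unit circle in the xy-plane of R^3) with small noise.
t = rng.uniform(0, 2 * np.pi, size=2000)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
X += 0.001 * rng.normal(size=X.shape)

p = np.array([1.0, 0.0, 0.0])                   # base point on the circle
nbr = X[np.linalg.norm(X - p, axis=1) < 0.1]    # local neighborhood (width assumed)
nbr_c = nbr - nbr.mean(axis=0)
_, _, Vt = np.linalg.svd(nbr_c, full_matrices=False)
t_hat = Vt[0]                                   # leading direction ~ tangent at p

true_tangent = np.array([0.0, 1.0, 0.0])        # exact tangent at (1, 0, 0)
align = abs(t_hat @ true_tangent)               # |cosine| of the angle between them
```

Too wide a neighborhood lets curvature bleed into the leading components, too narrow a one lets noise dominate; quantifying this trade-off is the subject of the paper.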