NeurIPS Best Papers Collection

The purpose of the Neural Information Processing Systems annual meeting is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. The core focus is peer-reviewed novel research, which is presented and discussed in the general session, along with invited talks by leaders in their field.
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NeurIPS 2019), (2019): 4751-4762
It is a plausible conjecture that obtaining better error guarantees is computationally intractable; we leave this as an interesting open problem for future work. Another open question is whether there is an efficient proper learner matching the error guarantees of our algorithm.
Cited by 12
Annual Conference on Neural Information Processing Systems, (2015): 2989-2997
Our results suggest a number of interesting questions and directions for future research, including the following. Convergence rates for vanilla Hedge: the fast rates of our paper do not apply to algorithms such as Hedge without modification.
Cited by 80
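
For context, vanilla Hedge is the classical multiplicative-weights baseline. A minimal sketch, assuming the standard experts setup (the step size and toy data are illustrative; the paper's modified fast-rate variants are not shown):

```python
import numpy as np

def hedge(losses, eta=0.5):
    """Vanilla Hedge on a (T, K) array of expert losses in [0, 1].

    Returns the sequence of probability distributions played.
    """
    T, K = losses.shape
    w = np.ones(K)                      # uniform initial weights
    plays = []
    for t in range(T):
        p = w / w.sum()                 # play the normalized weights
        plays.append(p)
        w *= np.exp(-eta * losses[t])   # exponential multiplicative update
    return np.array(plays)

# Toy run: expert 0 incurs smaller losses, so Hedge concentrates on it.
rng = np.random.default_rng(0)
losses = rng.uniform(size=(500, 3))
losses[:, 0] *= 0.5
print(hedge(losses)[-1])                # final distribution favors expert 0
```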
Annual Conference on Neural Information Processing Systems, (2015)
Probability estimation over large alphabets has long been the subject of extensive research, both by practitioners deriving practical estimators and by theorists searching for optimal estimators.
Cited by 52
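
As a concrete example of the practical estimators this line of work studies, here is a minimal sketch of the classical (unsmoothed) Good-Turing estimator; the function name and toy sample are illustrative, not from the paper:

```python
from collections import Counter

def good_turing(sample):
    """Unsmoothed Good-Turing estimates over a large alphabet.

    A symbol seen r times gets probability (r + 1) * N_{r+1} / (n * N_r),
    where N_r is the number of distinct symbols seen exactly r times;
    the total mass of unseen symbols is estimated as N_1 / n.  Practical
    estimators smooth the N_r counts to avoid zeros at large r.
    """
    n = len(sample)
    counts = Counter(sample)                 # r for each observed symbol
    freq_of_freq = Counter(counts.values())  # N_r
    probs = {
        x: (r + 1) * freq_of_freq.get(r + 1, 0) / (n * freq_of_freq[r])
        for x, r in counts.items()
    }
    return probs, freq_of_freq.get(1, 0) / n

probs, missing_mass = good_turing("abracadabra")
print(probs, missing_mass)
```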
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), (2014): 2321-2329
We develop Asymmetric LSH (ALSH), which generalizes the existing Locality Sensitive Hashing framework by applying asymmetric transformations to the input query vector and the data vectors in the repository.
Cited by 285
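
A sketch of the asymmetric transformation as we reconstruct it (the constants U and m are illustrative, not verbatim from the paper): the data transform P appends powers of the vector norm while the query transform Q appends constants, so that L2 distance between transformed vectors tracks the inner product and standard L2 LSH applies:

```python
import numpy as np

def preprocess(X, U=0.83, m=3):
    """Data transform P(x): rescale so the max norm is U < 1, then append
    ||x||^2, ||x||^4, ..., ||x||^(2^m) as extra coordinates."""
    X = X * (U / np.linalg.norm(X, axis=1).max())
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    tails = np.concatenate([norms ** (2 ** (i + 1)) for i in range(m)], axis=1)
    return np.concatenate([X, tails], axis=1)

def query_transform(q, m=3):
    """Query transform Q(q): normalize q, then append m constants 1/2."""
    return np.concatenate([q / np.linalg.norm(q), np.full(m, 0.5)])

# ||Q(q) - P(x)||^2 = 1 + m/4 - 2 q.x + ||x||^(2^(m+1)); the last term
# vanishes as m grows, so L2-nearest-neighbor search over the transformed
# vectors approximates maximum inner product search.
rng = np.random.default_rng(0)
X, q = rng.normal(size=(1000, 16)), rng.normal(size=16)
P, Q = preprocess(X), query_transform(q)
print(np.argmin(np.linalg.norm(P - Q, axis=1)), np.argmax(X @ q))  # usually agree
```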
Neural Information Processing Systems, (2013): 3147-3155
We propose a randomized algorithm for influence estimation in continuous-time diffusion networks.
Cited by 257
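
For reference, the estimand here is the influence σ(source, T): the expected number of nodes whose earliest transmission time from the source falls within the window T. A naive Monte Carlo sketch (sample exponential edge delays, run Dijkstra, count reached nodes) illustrates the quantity, not the paper's faster randomized scheme; the graph encoding is illustrative:

```python
import heapq
import random

def influence(graph, source, T, n_samples=2000):
    """Monte Carlo influence estimate in a continuous-time diffusion network.

    graph: dict node -> list of (neighbor, transmission rate) pairs.
    Each sample draws exponential transmission times on the edges and
    counts the nodes whose shortest-path infection time is at most T.
    """
    total = 0
    for _ in range(n_samples):
        dist, heap = {source: 0.0}, [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                             # stale heap entry
            for v, rate in graph[u]:
                nd = d + random.expovariate(rate)    # sampled edge delay
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        total += sum(1 for d in dist.values() if d <= T)
    return total / n_samples

g = {0: [(1, 1.0), (2, 0.5)], 1: [(2, 2.0)], 2: []}
print(influence(g, source=0, T=1.0))
```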
NIPS, (2013): 2436-2444
We provide a number of iterative algorithms that are practical and scalable, as well as algorithms such as Ellipsoidal-Approximation-based Submodular Set Cover and Ellipsoidal-Approximation-based Submodular Cost Knapsack, which, though more computationally intensive, obtain tight approximation bounds.
Cited by 179
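
For contrast with those ellipsoidal-approximation algorithms, the simplest scalable iterative baseline for submodular set cover is the classical cost-benefit greedy rule; a minimal sketch over a toy coverage instance (the instance and names are illustrative):

```python
def greedy_submodular_cover(ground, f, cost, target):
    """Greedy submodular set cover: repeatedly add the element with the
    largest marginal gain in f per unit cost until f(S) >= target.

    f: monotone submodular set function on frozensets; cost: modular costs.
    """
    S = frozenset()
    while f(S) < target:
        best = max((e for e in ground if e not in S),
                   key=lambda e: (f(S | {e}) - f(S)) / cost[e])
        if f(S | {best}) == f(S):   # no element makes progress; infeasible
            break
        S |= {best}
    return S

# Toy instance: f(S) is the number of items covered by the chosen sets.
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_submodular_cover(sets, f, {"a": 1, "b": 2, "c": 1}, target=5))
```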
NIPS, pp.1034-1042, (2013)
Classical models of synaptic plasticity treat synaptic efficacy as an analog scalar value, denoting the size of the postsynaptic potential injected into one neuron by another.
Cited by 44
NIPS, pp.3248-3256, (2012)
Discriminative training allows a wider variety of sum-product network (SPN) architectures than generative training, because completeness and consistency do not have to be maintained over evidence variables.
Cited by 220
NIPS, pp.2096-2104, (2012)
We investigate a curious relationship between the structure of a discrete graphical model and the support of the inverse of a generalized covariance matrix.
Cited by 149
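
The Gaussian analogue of this relationship is easy to check numerically: for a Gauss-Markov chain, the precision matrix is supported exactly on the chain's edges. A minimal sketch using the standard AR(1) covariance (the paper itself concerns discrete models and generalized covariance matrices):

```python
import numpy as np

# Stationary AR(1) chain: Sigma_ij proportional to rho^|i-j|.  The model's
# graph is a path, so the precision matrix Sigma^{-1} should be tridiagonal:
# zeros exactly at the non-edges of the graph.
rho, p = 0.6, 6
idx = np.arange(p)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
print(np.round(np.linalg.inv(Sigma), 3))  # nonzeros only on/next to diagonal
```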
NIPS, (2011): 109-117
We have presented a highly efficient approximate inference algorithm for fully connected conditional random field models.
Cited by 2228
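
The core of that algorithm is a mean-field fixed-point update; the paper's speed comes from evaluating its message-passing step with fast high-dimensional Gaussian filtering rather than a quadratic-time sum. A naive O(N^2) sketch of the update itself, assuming a single Gaussian kernel and Potts compatibility (variable names are illustrative):

```python
import numpy as np

def dense_crf_meanfield(unary, feats, w=1.0, sigma=1.0, n_iters=5):
    """Naive mean-field for a fully connected CRF.

    unary: (N, L) unary energies; feats: (N, D) per-node features.
    Pairwise potential: Potts compatibility weighted by a Gaussian kernel
    on the features.  The O(N^2) kernel product below is the step the
    paper accelerates with permutohedral-lattice filtering.
    """
    sq = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    K = w * np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                    # no self-messages
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        msg = K @ Q                             # message passing, O(N^2)
        pairwise = msg.sum(axis=1, keepdims=True) - msg   # Potts: other labels
        Q = np.exp(-unary - pairwise)
        Q /= Q.sum(axis=1, keepdims=True)       # normalize per node
    return Q

rng = np.random.default_rng(0)
print(dense_crf_meanfield(rng.uniform(size=(50, 3)), rng.normal(size=(50, 2))).shape)
```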
NIPS, pp.2375-2383, (2011)
The algorithm can use the entire available memory to read from the stream, writing the 3k log k means it computes to disk; when the stream is exhausted, this file is itself treated as a stream, and the process repeats until an iteration produces a file that fits entirely into main memory.
Cited by 170
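
A sketch of this divide-and-conquer pattern, keeping the 3k log k intermediate means from the description above; everything else is illustrative (scikit-learn's KMeans stands in for the in-memory clustering step, and in-memory arrays stand in for the on-disk files):

```python
import math
import numpy as np
from sklearn.cluster import KMeans

def reduce_chunk(chunk, weights, n_mid):
    """Cluster one memory-sized weighted chunk into <= n_mid weighted means."""
    km = KMeans(n_clusters=min(n_mid, len(chunk)), n_init=3)
    km.fit(chunk, sample_weight=weights)
    # Each mean carries the total weight of the points it absorbed.
    return km.cluster_centers_, np.bincount(km.labels_, weights=weights,
                                            minlength=km.n_clusters)

def streaming_kmeans(stream, k, memory_limit=10_000):
    """Reduce each chunk of the stream to ~3k log k weighted means; if the
    collected means are still too large for memory, treat them as a new
    stream and repeat; finish with one weighted k-means in memory."""
    n_mid = max(k, int(3 * k * math.log(k + 1)))
    X, w = [], []
    for chunk in stream:
        c, cw = reduce_chunk(chunk, np.ones(len(chunk)), n_mid)
        X.append(c)
        w.append(cw)
    X, w = np.concatenate(X), np.concatenate(w)
    while len(X) > memory_limit:                 # the "file" is still too big
        parts = np.array_split(np.arange(len(X)), len(X) // memory_limit + 1)
        pairs = [reduce_chunk(X[i], w[i], n_mid) for i in parts]
        X = np.concatenate([c for c, _ in pairs])
        w = np.concatenate([cw for _, cw in pairs])
    return KMeans(n_clusters=k, n_init=5).fit(X, sample_weight=w).cluster_centers_

rng = np.random.default_rng(0)
stream = (rng.normal(loc=c, scale=0.5, size=(5000, 2)) for c in ((0, 0), (5, 5), (0, 5)))
print(streaming_kmeans(stream, k=3))
```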
NIPS, pp.2052-2060, (2011)
The model has attractive properties, and we show that the posterior computations can be done efficiently using a sampler based on particle MCMC methods.
Cited by 29
NIPS, pp.1396-1404, (2010)
In contrast to most DP-based approaches, our construction is motivated by the intrinsic relation between Dirichlet processes and compound Poisson processes.
Cited by 97
NIPS, pp.1660-1668, (2009)
We propose fast subtree kernels on graphs.
Cited by 247
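
The fast subtree kernels here are based on Weisfeiler-Lehman relabeling; a minimal sketch of the idea (our simplified reconstruction: nested tuples stand in for the compressed integer relabeling used in efficient implementations):

```python
from collections import Counter

def wl_features(adj, labels, n_iters=3):
    """Weisfeiler-Lehman subtree features.  At each iteration every node is
    relabeled by (old label, sorted multiset of neighbor labels), and all
    labels seen across iterations are counted.  adj: node -> neighbor list."""
    feats = Counter(labels.values())
    for _ in range(n_iters):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        feats.update(labels.values())
    return feats

def wl_kernel(g1, g2, n_iters=3):
    """Subtree kernel value: dot product of the two WL label histograms."""
    f1, f2 = wl_features(*g1, n_iters), wl_features(*g2, n_iters)
    return sum(f1[key] * f2[key] for key in f1.keys() & f2.keys())

triangle = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: "a", 1: "a", 2: "b"})
path = ({0: [1], 1: [0, 2], 2: [1]}, {0: "a", 1: "b", 2: "a"})
print(wl_kernel(triangle, triangle), wl_kernel(triangle, path))
```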
NIPS, pp.567-575, (2009)
We provide a simple characterization of it for tree-structured graphs and show how it can be used for approximations in non-tree graphs.
Cited by 79