We introduce X-Engine, an OLTP storage engine optimized for Alibaba's e-commerce platform, the largest in the world, serving more than 600 million active customers globally.
All networks are trained on strictly binary silhouettes, but we overlay a checkerboard texture on the source shape to visualize the smoothness of the estimated mappings and to demonstrate the texture-transfer capability of our system.
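As a concrete illustration of this visualization step, the snippet below overlays a simple checkerboard pattern on a binary silhouette; the function name, cell size, and gray levels are illustrative choices rather than the paper's exact rendering.

```python
import numpy as np

def checkerboard_overlay(silhouette, cell=16):
    """Overlay a checkerboard texture on a binary silhouette so that a learned
    mapping applied to the textured source makes its smoothness visible.
    silhouette: (H, W) array in {0, 1}; returns a float image in [0, 1]."""
    H, W = silhouette.shape
    yy, xx = np.mgrid[0:H, 0:W]
    checker = ((yy // cell + xx // cell) % 2).astype(float)
    # Foreground pixels alternate between two gray levels; background stays black.
    return np.where(silhouette > 0, 0.25 + 0.75 * checker, 0.0)

sil = np.zeros((64, 64)); sil[16:48, 16:48] = 1.0
textured_source = checkerboard_overlay(sil)   # feed this through the estimated mapping
```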
We propose iBTune, which adjusts DBMS buffer pool sizes by applying large deviation analysis to least-recently-used (LRU) caching models and by leveraging instances with similar performance metrics to determine tolerable miss ratios.
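A minimal sketch of the final sizing decision, assuming a predicted miss-ratio curve and a tolerable miss ratio are already available; the placeholder curve below is invented for illustration and is not the paper's large deviation analysis of LRU caching.

```python
def recommend_buffer_pool_size(miss_ratio_of, tolerable_miss_ratio, candidate_sizes):
    """Return the smallest candidate buffer pool size whose predicted miss ratio
    stays within the tolerable miss ratio derived from similar instances."""
    for size in sorted(candidate_sizes):
        if miss_ratio_of(size) <= tolerable_miss_ratio:
            return size
    return max(candidate_sizes)  # target unreachable: keep the largest size

# Placeholder miss-ratio model, monotonically decreasing in buffer size (GB).
curve = lambda gb: 0.2 / (1.0 + gb)
print(recommend_buffer_pool_size(curve, tolerable_miss_ratio=0.02, candidate_sizes=range(1, 65)))  # -> 9
```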
To further improve query latency and concurrency, we enhance the optimizer and execution engine in AnalyticDB to fully exploit the advantages of our storage and indexes.
Assuming the Exponential Time Hypothesis, there is an ε > 0 such that any strong simulation that determines whether ⟨0|C|0⟩ = 0 for a polynomial-sized quantum circuit C formed from the Clifford+T gate set with N T-gates takes time at least 2^{εN}.
Facilitated by the automatic differentiation technique widely used in deep learning, we propose a unified framework of differentiable tensor renormalization group that can be applied to improve various tensor renormalization group methods in an automatic fashion.
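As a toy demonstration of the idea, the JAX sketch below builds the local tensor of the zero-field 2D Ising model, contracts it exactly on a 2x2 periodic lattice, and obtains the internal energy by differentiating log Z with respect to the inverse temperature; the exact small-lattice contraction stands in for the TRG coarse-graining steps, so this is only a sketch of differentiating through tensor contractions, not the proposed framework itself.

```python
import jax
import jax.numpy as jnp

def log_Z(beta):
    """log of the partition function from a small tensor-network contraction
    (2x2 periodic square lattice of the zero-field Ising model)."""
    # W W^T equals the bond Boltzmann matrix [[e^b, e^-b], [e^-b, e^b]].
    w = jnp.array([[jnp.sqrt(jnp.cosh(beta)),  jnp.sqrt(jnp.sinh(beta))],
                   [jnp.sqrt(jnp.cosh(beta)), -jnp.sqrt(jnp.sinh(beta))]])
    A = jnp.einsum('su,sl,sd,sr->uldr', w, w, w, w)        # site tensor A_{uldr}
    Z = jnp.einsum('fbea,hagb,edfc,gchd->', A, A, A, A)    # trace over the 2x2 torus
    return jnp.log(Z)

beta = 0.4
energy = -jax.grad(log_Z)(beta)   # internal energy via automatic differentiation
print(float(log_Z(beta)), float(energy))
```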
To compare our Quantum Approximate Optimization Algorithm (QAOA) simulator with existing quantum simulation packages equipped with QAOA functionality, we choose MAX-CUT problems on a random regular graph and compare the time spent on a single energy function query.
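For reference, a single energy function query in a dense statevector simulation of depth-1 QAOA for MAX-CUT can be sketched as below; the fixed 3-regular graph (a triangular prism) and the parameter values are illustrative stand-ins for the random regular instances used in the comparison.

```python
import numpy as np

# Hypothetical fixed 3-regular graph on 6 vertices (triangular prism).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]
n = 6

# Diagonal of the MAX-CUT cost operator: C(z) = number of cut edges.
spins = 1 - 2 * ((np.arange(2**n)[:, None] >> np.arange(n)) & 1)   # (2^n, n) of +-1
cost = sum(0.5 * (1 - spins[:, i] * spins[:, j]) for i, j in edges)

def qaoa_energy(gamma, beta):
    """One p=1 QAOA energy query <psi(gamma, beta)| C |psi(gamma, beta)>."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)        # |+>^n
    psi = np.exp(-1j * gamma * cost) * psi                 # phase separator e^{-i gamma C}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])    # mixer e^{-i beta X} per qubit
    for q in range(n):
        psi = psi.reshape(2**q, 2, 2**(n - q - 1))
        psi = np.einsum('ab,ibj->iaj', rx, psi).reshape(-1)
    return float(np.real(np.vdot(psi, cost * psi)))

print(qaoa_energy(0.4, 0.3))
```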
Beyond surveying the advances in each aspect of Convolutional Neural Networks, we have introduced the applications of Convolutional Neural Networks to many tasks, including image classification, object detection, object tracking, pose estimation, text detection, and visual saliency detection, among others.
One question that may arise here is whether the advantage of the “spatio-temporal long short-term memory” model is due only to the longer and more redundant sequence of joints fed to the network, rather than to the proposed semantic relations between the joints.
We propose a novel progressive attention-guided recurrent network, which selectively integrates contextual information from multi-level features to generate powerful attentive features.
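The following numpy fragment gives one rough picture of how such a progressive, attention-gated fusion of multi-level features can look; the sigmoid gate over channel-averaged responses is a hand-written stand-in for the learned attention modules, and all levels are assumed to share the same spatial size.

```python
import numpy as np

def progressive_attentive_fusion(features):
    """Fuse feature maps from deepest to shallowest; at each step a spatial
    attention map gates the accumulated context before merging the next level.
    features: list of (C, H, W) arrays with identical shapes."""
    fused = features[0]
    for feat in features[1:]:
        attn = 1.0 / (1.0 + np.exp(-fused.mean(axis=0, keepdims=True)))  # (1, H, W) gate
        fused = feat + attn * fused        # selectively pass context onward
    return fused
```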
Deep Convolutional Neural Networks designed for image classification tend to extract abstract features of the dominant objects; as a result, some essentially discriminative information about inconspicuous objects and stuff is weakened or even disregarded.
We have proposed an end-to-end trainable framework, namely the Dual ATtention Matching network, to learn context-aware feature sequences and to perform dually attentive comparison for person ReID.
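One way to picture the attentive comparison is the numpy sketch below, in which every element of one feature sequence softly attends over the other sequence and the distance is taken against its aligned counterpart; the function is illustrative and does not reproduce the paper's actual attention blocks.

```python
import numpy as np

def attentive_sequence_distance(seq_a, seq_b):
    """Attention-based comparison of two feature sequences.
    seq_a: (La, F), seq_b: (Lb, F) L2-normalized frame or body-part features."""
    sim = seq_a @ seq_b.T                                   # (La, Lb) similarities
    scores = sim - sim.max(axis=1, keepdims=True)
    attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    aligned_b = attn @ seq_b                                # soft-aligned counterpart of each a_i
    return float(np.mean(np.linalg.norm(seq_a - aligned_b, axis=1)))
```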
This paper presents a new deep learning framework for hierarchical shared-specific component factorization to analyze RGB+D features of human action.
We have proposed a framework for instance-wise feature selection via mutual information, and a method, L2X, which seeks a variational approximation of the mutual information and makes use of a Gumbel-softmax relaxation of discrete subset sampling during training.
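A minimal numpy sketch of the subset-sampling relaxation alone: k independent Gumbel-softmax samples over the d features are combined by an elementwise maximum to give an approximately k-hot, differentiable mask; the explainer network that produces the logits and the variational objective are omitted.

```python
import numpy as np

def gumbel_softmax_subset(logits, k, tau=0.5, rng=np.random.default_rng(0)):
    """Relaxed sample of a k-hot feature mask.
    logits: (d,) unnormalized feature scores; returns a mask in [0, 1]^d."""
    d = logits.shape[0]
    scores = (logits[None, :] + rng.gumbel(size=(k, d))) / tau   # perturb and scale
    soft = np.exp(scores - scores.max(axis=1, keepdims=True))
    soft /= soft.sum(axis=1, keepdims=True)                      # k relaxed one-hot rows
    return soft.max(axis=0)                                      # approx k-hot mask

mask = gumbel_softmax_subset(np.array([2.0, -1.0, 0.5, 0.1, 3.0]), k=2)
print(mask)   # peaks near the top-scoring features; used to gate the input as x * mask
```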
Our algorithms can achieve convergence speed comparable with the exact algorithm even when the neighbor sampling size D^(l) = 2, so that the per-epoch cost of training graph convolutional networks is comparable with training a multi-layer perceptron.
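A simple sketch of what neighbor-sampled graph convolution with a small per-node budget D looks like, assuming an adjacency-list graph representation; it shows plain neighbor sampling only and omits whatever additional machinery the full algorithms use to match the exact algorithm's convergence.

```python
import numpy as np

def sampled_gcn_layer(H, adj_list, W, D=2, rng=np.random.default_rng(0)):
    """One graph-convolution layer where each node averages over at most D
    sampled neighbors plus itself, so the per-layer cost scales like an MLP.
    H: (N, F) node features, adj_list: neighbor index lists, W: (F, F') weights."""
    agg = np.zeros_like(H)
    for v in range(H.shape[0]):
        nbrs = list(adj_list[v])
        if len(nbrs) > D:
            nbrs = list(rng.choice(nbrs, size=D, replace=False))
        idx = np.array(nbrs + [v], dtype=int)      # sampled neighbors plus self-loop
        agg[v] = H[idx].mean(axis=0)
    return np.maximum(agg @ W, 0.0)                # ReLU

H = np.random.default_rng(1).normal(size=(4, 8))
adj = [[1, 2, 3], [0, 2], [0, 1, 3], [0, 2]]
W = np.random.default_rng(2).normal(size=(8, 16))
print(sampled_gcn_layer(H, adj, W, D=2).shape)     # (4, 16)
```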