Related papers published by Alibaba DAMO Academy
Gui Huang, Xuntao Cheng, Jianying Wang, Yujie Wang, Dengcheng He, Tieying Zhang, Feifei Li, Sheng Wang, Wei Cao, Qiang Li
Proceedings of the 2019 International Conference on Management of Data, pp.651-665, (2019)
We introduce X-Engine, an OLTP storage engine optimized for Alibaba's e-commerce platform, the largest in the world, serving more than 600 million active customers globally.
ACM Transactions on Graphics (TOG), no. 1 (2019)
All networks are trained on strictly binary silhouettes, but we overlay a checkerboard texture on the source shape to clearly visualize the smoothness of the estimated mappings and display the texture transfer abilities facilitated by our system
Jian Tan, Tieying Zhang, Feifei Li, Jie Chen, Qixing Zheng, Ping Zhang, Honglin Qiao, Yue Shi, Wei Cao, Rui Zhang
Proceedings of the VLDB Endowment, no. 10 (2019): 1221-1234
We propose iBTune to adjust DBMS buffer pool sizes by using a large deviation analysis for least recently used caching models and leveraging the similar instances based on performance metrics to find tolerable miss ratios
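The buffer-pool adjustment described here can be sketched as a simple feedback loop (a minimal illustration assuming a toy linear LRU miss-ratio model; the function names and the model are illustrative, not iBTune's actual analysis):

```python
# Hypothetical sketch: shrink a buffer pool toward the smallest size whose
# predicted LRU miss ratio stays within a tolerable target.

def predicted_miss_ratio(pool_mb, working_set_mb):
    """Toy LRU model: misses vanish once the pool covers the working set."""
    if pool_mb >= working_set_mb:
        return 0.0
    return 1.0 - pool_mb / working_set_mb

def tune_pool(pool_mb, working_set_mb, tolerable_miss_ratio, step_mb=64):
    """Shrink the pool while the predicted miss ratio stays tolerable."""
    while (pool_mb - step_mb > 0 and
           predicted_miss_ratio(pool_mb - step_mb, working_set_mb)
           <= tolerable_miss_ratio):
        pool_mb -= step_mb
    return pool_mb

# A 4 GB pool serving a 2 GB working set can be cut roughly in half
# once a 5% miss ratio is deemed tolerable.
print(tune_pool(4096, 2048, 0.05))  # 1984
```

The real system infers the tolerable miss ratio from similar instances' performance metrics rather than taking it as a constant.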
Chaoqun Zhan, Maomeng Su, Chuangxian Wei, Xiaoqiang Peng, Liang Lin, Sheng Wang, Zhe Chen, Feifei Li, Yue Pan, Fang Zheng, Chengliang Chai
PVLDB, no. 12 (2019): 2059-2070
To further improve query latency and concurrency, we enhance the optimizer and execution engine in AnalyticDB to fully utilize the advantages of our storage and indexes
Cupjin Huang, Michael Newman, Mario Szegedy
arXiv: Quantum Physics, (2019)
Assuming the Exponential Time Hypothesis, there is an ε > 0 such that any strong simulation that can determine whether ⟨0|C|0⟩ = 0 for a polynomial-sized quantum circuit C formed from the Clifford+T gate set with N T-gates takes time at least 2^{εN}.
Fang Zhang, Jianxin Chen
arXiv: Quantum Physics, (2019)
We present a new optimization technique to reduce the number of T gates in Clifford+T circuits by treating every gate conjugated by...
World Wide Web, no. 1 (2019): 153-184
We propose a weighted Heterogeneous Information Network and weighted meta-paths to account for attribute values on links in information networks.
Chen Bin-Bin, Gao Yuan, Guo Yi-Bin, Liu Yuzhi, Zhao Hui-Hai, Liao Hai-Jun, Wang Lei, Xiang Tao, Li Wei, Xie Z. Y.
Facilitated by the automatic differentiation technique widely used in deep learning, we propose a unified framework of differentiable tensor renormalization group (TRG) that can be applied to improve various TRG methods in an automatic fashion.
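A toy illustration of the underlying idea, that a tensor-network contraction is a differentiable function of its tensor entries (this is not the paper's TRG code; the quantity and names are made up for illustration, with the gradient written analytically where an autodiff framework would derive it):

```python
import numpy as np

# Partition-function-like quantity Z = Tr(A^n) and its gradient in A,
# d Tr(A^n) / dA = n * (A^(n-1)).T, which automatic differentiation
# would produce without manual derivation.

def z_and_grad(a, n=3):
    z = np.trace(np.linalg.matrix_power(a, n))
    grad = n * np.linalg.matrix_power(a, n - 1).T
    return z, grad

a = np.array([[1.0, 2.0], [0.5, 1.5]])
z, grad = z_and_grad(a)

# Sanity check: compare the analytic gradient to a finite difference.
eps = 1e-6
da = np.zeros_like(a)
da[0, 1] = eps
fd = (z_and_grad(a + da)[0] - z) / eps
print(abs(fd - grad[0, 1]) < 1e-4)  # True
```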
Huang Cupjin, Szegedy Mario, Zhang Fang, Gao Xun, Chen Jianxin, Shi Yaoyun
To compare our Quantum Approximate Optimization Algorithm (QAOA) simulator with existing quantum simulation packages equipped with QAOA functionality, we choose MAX-CUT problems on random regular graphs and compare the time spent on a single energy function query.
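The "energy function query" being benchmarked amounts to evaluating the MAX-CUT objective for one assignment (a minimal sketch; the graph and function names are assumptions, not the simulator's API):

```python
# Count the edges whose endpoints land on opposite sides of the cut;
# MAX-CUT seeks the assignment maximizing this value.

def cut_value(edges, assignment):
    """Number of cut edges for a 0/1 side assignment per vertex."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# A 3-regular graph on 4 vertices (the complete graph K4).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(cut_value(edges, [0, 0, 1, 1]))  # 4 of the 6 edges are cut
```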
Zhang Fang, Huang Cupjin, Newman Michael, Cai Junjie, Yu Huanjun, Tian Zhengxiong, Yuan Bo, Xu Haihong, Wu Junyin, Gao Xun, Chen Jianxin, Szegedy Mario
Several variables affect the performance of our simulator.
Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, Jianfei Cai, Tsuhan Chen
Pattern Recognition, (2018): 354-377
Beyond surveying the advances of each aspect of Convolutional Neural Network, we have introduced the application of Convolutional Neural Network on many tasks, including image classification, object detection, object tracking, pose estimation, text detection, visual saliency dete...
CVPR, (2018)
Degradation maps are obtained by a simple dimensionality stretching of the degradation parameters
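The stretching step can be sketched as follows (an illustrative NumPy stand-in: a t-dimensional vector of degradation parameters is broadcast into t constant H x W maps so it can be concatenated with the image as extra input channels; shapes and names are assumptions):

```python
import numpy as np

def stretch_degradation(params, h, w):
    """Stretch a (t,) parameter vector into a (t, h, w) degradation map."""
    t = params.shape[0]
    return np.broadcast_to(params.reshape(t, 1, 1), (t, h, w)).copy()

# E.g. a blur-kernel coefficient, a scale factor, and a noise level
# become three constant channels over a 4x4 image.
maps = stretch_degradation(np.array([0.2, 1.3, 15.0]), 4, 4)
print(maps.shape)  # (3, 4, 4)
```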
IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 12 (2018): 3007-3021
One question that may arise here is whether the advantage of the "spatio-temporal long short-term memory" model could be due only to the longer, redundant sequence of joints fed to the network, and not to the proposed semantic relations between the joints.
ICML, (2018): 1123-1132
We study the adversarial attack on graph structured data
CVPR, pp.714-722, (2018)
We propose a novel progressive attention guided recurrent network, which selectively integrates contextual information from multi-level features to generate powerful attentive features
CVPR, pp.2393-2402, (2018)
Deep Convolutional Neural Networks designed for image classification tend to extract abstract features of dominant objects, while essentially discriminative information for inconspicuous objects and stuff is weakened or even disregarded.
CVPR, (2018): 5363-5372
We proposed an end-to-end trainable framework, namely Dual ATtention Matching network, to learn context-aware feature sequences and to perform dually attentive comparison for person ReID
IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 5 (2018): 1045-1058
This paper presents a new deep learning framework for a hierarchical shared-specific component factorization, to analyze RGB+D features of human action
ICML, (2018): 882-891
We have proposed a framework for instancewise feature selection via mutual information, and a method L2X which seeks a variational approximation of the mutual information, and makes use of a Gumbel-softmax relaxation of discrete subset sampling during training
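The Gumbel-softmax relaxation mentioned here can be sketched in a few lines (a NumPy stand-in for illustration; L2X itself applies this inside a trained neural network, and the names below are assumptions):

```python
import numpy as np

# Gumbel-softmax: a continuous, differentiable relaxation of sampling
# one index from softmax(logits). Lower temperature -> closer to one-hot.

def gumbel_softmax(logits, temperature, rng):
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = y - y.max()            # shift for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax(np.array([2.0, 1.0, 0.1]), temperature=0.5, rng=rng)
print(sample.sum())  # the relaxed sample is a probability vector
```

For subset selection, L2X draws k such relaxed samples and combines them, so the whole selection step stays differentiable during training.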
ICML, (2018)
Our algorithms achieve convergence speed comparable to the exact algorithm even when the neighbor sampling size is D(l) = 2, so that the per-epoch cost of training graph convolutional networks is comparable to that of training a multi-layer perceptron.
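The neighbor-sampling step being referred to can be sketched as follows (an illustrative fragment, not the paper's code; per layer, each node aggregates over a fixed small number of sampled neighbors instead of all of them, which bounds the cost per node):

```python
import random

def sample_neighbors(adj, node, d):
    """Return d uniformly sampled neighbors of `node` (all, if fewer exist)."""
    neighbors = adj[node]
    if len(neighbors) <= d:
        return list(neighbors)
    return random.sample(neighbors, d)

# A star graph: node 0 has four neighbors, each leaf has one.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(sample_neighbors(adj, 0, 2))  # two of node 0's four neighbors
```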