Continuous Surface Embeddings

NeurIPS 2020


Abstract

In this work, we focus on the task of learning and representing dense correspondences in deformable object categories. While this problem has been considered before, solutions so far have been rather ad-hoc for specific object types (i.e., humans), often with significant manual work involved. However, scaling the geometry understanding to...

Introduction
  • Understanding the geometry of natural objects, such as humans and other animals, must start from the notion of correspondence.
  • Given a new object category to model with DensePose, one must start by defining a canonical shape S, a sort of ‘average’ 3D shape used as a reference to express correspondences.
  • A dataset of images of the object must be collected and annotated with millions of manual point correspondences between the images and the canonical 3D model.
  • This process must be repeated from scratch for every new object category.
Highlights
  • Understanding the geometry of natural objects, such as humans and other animals, must start from the notion of correspondence
  • The model must be manually partitioned into a number of parts, or charts, and a deep neural network must be trained to segment the image and regress the uv coordinates for each chart, guided by the manual annotations, yielding a DensePose predictor
  • One important contribution of this paper is to introduce a better and more flexible representation of correspondences that can be used as a drop-in replacement in architectures such as DensePose
  • We propose a new approach for representing continuous correspondences between an image and points in a 3D object
  • We found it beneficial to modify this loss to account for the geometry of the problem, minimizing the cross entropy between a ‘Gaussian-like’ distribution centered on the ground-truth point X_k and the predicted posterior p(X | x_k, I), summed over the K annotated points (see the sketch after this list)
  • We have demonstrated that training joint predictors in the image space with simultaneous alignment of canonical surfaces in 3D results in an efficient transfer of knowledge between different classes even when the amount of ground truth annotations is severely limited
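A minimal PyTorch sketch of the geometry-aware cross entropy described above. The function and argument names (cse_cross_entropy, scores, geodesic_dist_to_gt, sigma) and the kernel width are illustrative assumptions, not the authors' implementation; the target is a Gaussian-like distribution over mesh vertices centered on the annotated ground-truth vertex.

```python
import torch
import torch.nn.functional as F

def cse_cross_entropy(scores, geodesic_dist_to_gt, sigma=0.1):
    """Cross entropy between a 'Gaussian-like' target distribution centered on
    the annotated ground-truth vertex and the predicted posterior over mesh
    vertices, for a single annotated image point.

    scores: (N,) inner products <e_X, phi_x(I)> for all N mesh vertices.
    geodesic_dist_to_gt: (N,) geodesic distances from each vertex to the
        ground-truth vertex X_k (assumed precomputed on the mesh).
    sigma: width of the Gaussian-like kernel (illustrative default).
    """
    target = torch.exp(-geodesic_dist_to_gt ** 2 / (2.0 * sigma ** 2))
    target = target / target.sum()                # normalize to a distribution over vertices
    log_posterior = F.log_softmax(scores, dim=0)  # log p(X | x_k, I, e, phi)
    return -(target * log_posterior).sum()        # cross entropy for this annotated point

# Toy usage: 100 vertices with random scores and distances.
loss = cse_cross_entropy(torch.randn(100), torch.rand(100))
```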
Methods
  • The authors propose a new approach for representing continuous correspondences between an image and points in a 3D object.
  • To this end, let S ⊂ R³ be a canonical surface.
  • The authors recover the corresponding canonical 3D point X ∈ S probabilistically, via a softmax-like function: p(X | x, I, e, φ) = exp(⟨e_X, φ_x(I)⟩) / ∫_S exp(⟨e_Y, φ_x(I)⟩) dY.
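To make the softmax-like function in the last bullet concrete, here is a minimal PyTorch sketch; the sizes and variable names (vertex_embeddings, pixel_embedding) are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes: N mesh vertices, D-dimensional embeddings.
N, D = 7000, 16

vertex_embeddings = torch.randn(N, D)  # e_X for every vertex X of the canonical surface S
pixel_embedding = torch.randn(D)       # phi_x(I): embedding predicted for image pixel x

# p(X | x, I, e, phi) is a softmax over vertices of the inner products <e_X, phi_x(I)>.
scores = vertex_embeddings @ pixel_embedding  # (N,) one score per vertex
posterior = F.softmax(scores, dim=0)          # (N,) non-negative, sums to 1 over the mesh

# The predicted canonical correspondence for pixel x is, e.g., the argmax vertex.
best_vertex = int(torch.argmax(posterior))
```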
Results
  • Note that, compared to the original DensePose-COCO labelling effort that produced 5 million annotated points for the human category (96% coverage of the SMPL mesh), the new annotations are three orders of magnitude smaller in number, and only 18% of the vertices of the animal meshes, on average, have at least one ground truth annotation.
Conclusion
  • The authors have made an important step towards designing universal networks for learning dense correspondences within and across different object categories.
  • The authors note that the approach cannot be considered a form of biometrics, because from pose alone, even if dense, it is not possible to ascertain the identity of an individual.
  • This mitigates the potential risk when the method is applied to humans.
Tables
  • Table1: Annotation statistics of the DensePose-LVIS dataset. ‘Coverage’ is expressed as the number of vertices in a given class mesh with at least one corresponding ground truth annotation. The corresponding animal meshes are shown on the right (source: hum3d.com)
  • Table2: Performance on DensePose-COCO, with IUV (top) and CSE (bottom) training (GPSm scores, minival). First block: published SOTA DensePose methods, second block: our optimized architectures + IUV training, third block: our optimized architectures + CSE training. All CSE models are trained with loss L (eq 4), LBO size M = 256, embedding size D = 16
  • Table3: Hyperparameter search and performance in low data regimes (AP, DensePose-COCO, minival): (left) LBO basis size, M (D = 16), (center) embedding size, D (M = 256), (right) comparison of IUV and CSE training in small data regimes. DP-RCNN* (R50) predictor
  • Table4: Performance on the DensePose-Chimps dataset with CSE training (AP, GPSm scores, measured on both chimp and SMPL meshes wrt the GT mapping S_chimp → S_smpl from [43])
  • Table5: Performance on the DensePose-LVIS dataset with CSE training (AP, GPSm scores)
Related work
  • Human pose recognition. With deep learning, image-based human pose estimation has made substantial progress [52, 37, 12], also thanks to the availability of large datasets such as COCO [31], MPII [3], the Leeds Sports Pose Dataset (LSP) [23, 24], PennAction [58], or PoseTrack [2]. Our work is most closely related to DensePose [17], which introduced a method to establish dense correspondences between image pixels and points on the surface of the average SMPL human mesh model [32].

    Unsupervised pose recognition. Most pose estimators [5, 49, 7, 50, 43, 48, 33, 59, 22] require full supervision, which is expensive to collect, especially for a model such as DensePose. A handful of works have tackled this issue by seeking unsupervised and weakly-supervised objectives, using cues such as equivariance to synthetic image transformations. The most relevant to us is Slim DensePose [36], which showed that DensePose annotations can be significantly reduced without incurring a large performance penalty, but did not address the issue of scaling to multiple classes.
Study subjects and analysis
For the multi-class setting, we make use of a recent DensePose-Chimps [44] test benchmark containing a small number of annotated correspondences for chimpanzees. We split the set of annotated instances of [44] into 500 training and 430 test samples containing 1354 and 1151 annotated correspondences respectively. Additionally, we collect correspondence annotations on a set of 9 animal categories of the LVIS dataset [21]

References
  • Yonathan Aflalo, Anastasia Dubrovina, and Ron Kimmel. Spectral Generalized Multidimensional Scaling. International Journal of Computer Vision, 118(3):380–392, 2016.
  • Mykhaylo Andriluka, Umar Iqbal, Anton Milan, Eldar Insafutdinov, Leonid Pishchulin, Juergen Gall, and Bernt Schiele. PoseTrack: A Benchmark for Human Pose Estimation and Tracking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5167–5176, 2018.
  • Mykhaylo Andriluka, Leonid Pishchulin, Peter V. Gehler, and Bernt Schiele. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3686–3693, 2014.
  • Mathieu Aubry, Ulrich Schlickewei, and Daniel Cremers. The wave kernel signature: A quantum mechanical approach to shape analysis. In IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pages 1626–1633, 2011.
  • Miguel Ángel Bautista, Artsiom Sanakoyeu, Ekaterina Tikhoncheva, and Björn Ommer. CliqueCNN: Deep Unsupervised Examplar Learning. In Advances in Neural Information Processing Systems (NIPS), pages 3846–3854, 2016.
  • Benjamin Biggs, Thomas Roddick, Andrew Fitzgibbon, and Roberto Cipolla. Creatures Great and SMAL: Recovering the Shape and Motion of Animals from Video. In Asian Conference on Computer Vision (ACCV), pages 3–19, 2018.
  • Biagio Brattoli, Uta Büchler, Anna-Sophia Wahl, Martin E. Schwab, and Björn Ommer. LSTM Self-Supervision for Detailed Behavior Analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3747–3756, 2017.
  • Alexander M. Bronstein, Michael M. Bronstein, Leonidas J. Guibas, and Maks Ovsjanikov. Shape google: Geometric words and expressions for invariant shape retrieval. ACM Transactions on Graphics (TOG), 30(1):1–20, 2011.
  • Alexander M. Bronstein, Michael M. Bronstein, and Ron Kimmel. Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. Proceedings of the National Academy of Sciences (PNAS), 103(5):1168–1172, 2006.
  • Alexander M. Bronstein, Michael M. Bronstein, Ron Kimmel, Mona Mahmoudi, and Guillermo Sapiro. A Gromov-Hausdorff framework with Diffusion Geometry for Topologically-Robust Non-rigid Shape Matching. International Journal of Computer Vision, 89(2–3):266–286, 2010.
  • Michael M. Bronstein and Iasonas Kokkinos. Scale-invariant heat kernel signatures for nonrigid shape recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1704–1711, 2010.
  • Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1302–1310, 2017.
  • Wenzheng Chen, Huan Ling, Jun Gao, Edward J. Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer. In Advances in Neural Information Processing Systems (NeurIPS), pages 9605–9616, 2019.
  • Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
  • Asi Elad (Elbaz) and Ron Kimmel. On Bending Invariant Signatures for Surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10):1285–1295, 2003.
  • Danielle Ezuz and Mirela Ben-Chen. Deblurring and Denoising of Maps between Shapes. Computer Graphics Forum, 36(5):165–174, 2017.
  • Rıza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense Human Pose Estimation in the Wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7297–7306, 2018.
  • Riza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense human pose estimation in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • Semih Günel, Helge Rhodin, Daniel Morales, João Campagnolo, Pavan Ramdya, and Pascal Fua. DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila. eLife, 2019.
  • Yuyu Guo, Lianli Gao, Jingkuan Song, Peng Wang, Wuyuan Xie, and Heng Tao Shen. Adaptive Multi-Path Aggregation for Human DensePose Estimation in the Wild. In ACM International Conference on Multimedia, pages 356–364, 2019.
  • Agrim Gupta, Piotr Dollár, and Ross Girshick. LVIS: A Dataset for Large Vocabulary Instance Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5356–5364, 2019.
  • Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. Unsupervised Learning of Object Landmarks through Conditional Image Generation. In Advances in Neural Information Processing Systems (NeurIPS), pages 4020–4031, 2018.
  • Sam Johnson and Mark Everingham. Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation. In British Machine Vision Conference (BMVC), pages 1–11, 2010.
  • Sam Johnson and Mark Everingham. Learning effective human pose estimation from inaccurate annotation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1465–1472, 2011.
  • Angjoo Kanazawa, David W. Jacobs, and Manmohan Chandraker. WarpNet: Weakly Supervised Matching for Single-View Reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3253–3261, 2016.
  • Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning categoryspecific mesh reconstruction from image collections. In European Conference on Computer Vision (ECCV), pages 386–402, 2018.
  • Artiom Kovnatsky, Michael M. Bronstein, Xavier Bresson, and Pierre Vandergheynst. Functional correspondence by matrix completion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 905–914, 2015.
  • Nilesh Kulkarni, Abhinav Gupta, David Fouhey, and Shubham Tulsiani. Articulation-aware Canonical Surface Mapping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  • Nilesh Kulkarni, Shubham Tulsiani, and Abhinav Gupta. Canonical Surface Mapping via Geometric Cycle Consistency. In International Conference on Computer Vision (ICCV), pages 2202–2211, 2019.
  • Shuyuan Li, Jianguo Li, Weiyao Lin, and Hanlin Tang. Amur tiger re-identification in the wild. arXiv e-prints arXiv:1906.05586, 2019.
  • Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision (ECCV), pages 740–755, 2014.
  • Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (TOG), 34(6):248, 2015.
  • Dominik Lorenz, Leonard Bereska, Timo Milbich, and Björn Ommer. Unsupervised partbased disentangling of object shape and appearance. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10955–10964, 2019.
  • Simone Melzi, Jing Ren, Emanuele Rodolà, Abhishek Sharma, Peter Wonka, and Maks Ovsjanikov. ZoomOut: spectral upsampling for efficient shape correspondence. ACM Transaction on Graphics, 38(6):155:1–155:14, 2019.
  • Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie Weygandt Mathis. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 2019.
  • Natalia Neverova, James Thewlis, Rıza Alp Güler, Iasonas Kokkinos, and Andrea Vedaldi. Slim DensePose: Thrifty Learning from Sparse Annotations and Motion Cues. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10915–10923, 2019.
  • Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked Hourglass Networks for Human Pose Estimation. In European Conference on Computer Vision (ECCV), pages 483–499, 2016.
  • David Novotny, Nikhila Ravi, Benjamin Graham, Natalia Neverova, and Andrea Vedaldi. C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion. In International Conference on Computer Vision (ICCV), pages 7687–7696, 2019.
  • Maks Ovsjanikov, Mirela Ben-Chen, Justin Solomon, Adrian Butscher, and Leonidas J. Guibas. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG), 31(4):1–11, 2012.
  • Jonathan Pokrass, Alexander M. Bronstein, Michael M. Bronstein, Pablo Sprechmann, and Guillermo Sapiro. Sparse modeling of intrinsic correspondences. Computer Graphics Forum, 32(2):459–468, 2013.
  • Maheen Rashid, Xiuye Gu, and Yong Jae Lee. Interspecies knowledge transfer for facial keypoint detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6894–6903, 2017.
  • Raif M. Rustamov. Laplace-Beltrami eigenfunctions for deformation invariant shape representation. In Symposium on Geometry Processing, pages 225–233, 2007.
  • Artsiom Sanakoyeu, Miguel Ángel Bautista, and Björn Ommer. Deep unsupervised learning of visual similarities. Pattern Recognition, 78:331–343, 2018.
  • Artsiom Sanakoyeu, Vasil Khalidov, Maureen S. McCarthy, Andrea Vedaldi, and Natalia Neverova. Transferring Dense Pose to Proximal Animal Classes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  • Saurabh Singh, Derek Hoiem, and David A. Forsyth. Learning to Localize Little Landmarks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 260–269, 2016.
  • Josef Sivic and Andrew Zisserman. Video Google: A Text Retrieval Approach to Object Matching in Videos. In International Conference on Computer Vision (ICCV), pages 1470–1477, 2003.
  • Jian Sun, Maks Ovsjanikov, and Leonidas J. Guibas. A Concise and Provably Informative Multi-Scale Signature Based on Heat Diffusion. Computer Graphics Forum, 28(5):1383–1392, 2009.
  • James Thewlis, Samuel Albanie, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of landmarks by descriptor vector exchange. In International Conference on Computer Vision (ICCV), 2019.
  • James Thewlis, Hakan Bilen, and Andrea Vedaldi. Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings. In International Conference on Computer Vision (ICCV), pages 3229–3238, 2017.
  • James Thewlis, Hakan Bilen, and Andrea Vedaldi. Unsupervised object learning from dense invariant image labelling. In Advances in Neural Information Processing Systems (NIPS), pages 844–855, 2017.
  • Shubham Tulsiani, João Carreira, and Jitendra Malik. Pose Induction for Novel Object Categories. In IEEE International Conference on Computer Vision (ICCV), pages 64–72, 2015.
  • Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional Pose Machines. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724–4732, 2016.
  • Peter Welinder, Steve Branson, Takeshi Mita, Catherine Wah, Florian Schroff, Serge Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
  • Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
  • Heng Yang, Renqiao Zhang, and Peter Robinson. Human and sheep facial landmarks localisation by triplet interpolated features. In IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–8, 2015.
  • Lu Yang, Qing Song, Zhihui Wang, and Ming Jiang. Parsing R-CNN for Instance-Level Human Analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 364–373, 2019.
  • Ning Zhang, Jeff Donahue, Ross B. Girshick, and Trevor Darrell. Part-Based R-CNNs for FineGrained Category Detection. In European Conference on Computer Vision (ECCV), pages 834–849, 2014.
  • Weiyu Zhang, Menglong Zhu, and Konstantinos G. Derpanis. From Actemes to Action: A Strongly-Supervised Representation for Detailed Action Understanding. International Conference on Computer Vision (ICCV), pages 2248–2255, 2013.
  • Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, and Honglak Lee. Unsupervised Discovery of Object Landmarks as Structural Representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2694–2703, 2018.
  • Silvia Zuffi, Angjoo Kanazawa, Tanya Y. Berger-Wolf, and Michael J. Black. Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild". In International Conference on Computer Vision (ICCV), pages 5358–5367, 2019.
  • Silvia Zuffi, Angjoo Kanazawa, David W. Jacobs, and Michael J. Black. 3D Menagerie: Modeling the 3D Shape and Pose of Animals. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5524–5532, 2017.
  • Silvia Zuffi, Angjoo Kanazawa, David W. Jacobs, and Michael J. Black. Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3955–3963, 2018.
Authors
David Novotny
Marc Szafraniec
Patrick Labatut