Semi-Supervised Action Recognition With Temporal Contrastive Learning.

CVPR, pp. 10389–10399, 2021

Abstract

Learning to recognize actions from only a handful of labeled videos is a challenging problem due to the scarcity of tediously collected activity labels. We approach this problem by learning a two-pathway temporal contrastive model using unlabeled videos at two different speeds, leveraging the fact that changing video speed does not change an action.

Introduction
  • Supervised deep learning approaches have shown remarkable progress in video action recognition recently [7, 16, 17, 18, 35, 48].
  • Being supervised, these models are critically dependent on large datasets requiring tedious human annotation effort.
  • Semi-supervised representation learning models [10, 29, 37, 49] have performed very well, even surpassing their supervised counterparts in the case of images [22, 46].
Highlights
  • Supervised deep learning approaches have shown remarkable progress in video action recognition recently [7, 16, 17, 18, 35, 48]
  • Motivated by the success of using slow and fast versions of videos for supervised action recognition, as well as by the success of contrastive learning frameworks [26, 40], we propose Temporal Contrastive Learning (TCL) for semi-supervised action recognition in videos, where consistent features representing both slow and fast versions of the same videos are learned
  • We differ from [46] as we propose a temporal contrastive learning framework for semi-supervised action recognition by modeling temporal aspects using two pathways at different speeds instead of augmenting images
  • Equipped with an initial backbone trained with limited supervision, our goal is to learn a model that can use a large pool of unlabeled videos for better activity understanding
  • We present a novel temporal contrastive learning framework for semi-supervised action recognition by maximizing the similarity between encoded representations of the same unlabeled video at two different speeds, as well as minimizing the similarity between different unlabeled videos run at different speeds (see the sketch after this list)
  • The improvement is 8.14% when 5% of the data is labeled
  • We demonstrate the effectiveness of our approach on four standard benchmark datasets, significantly outperforming several competing methods
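To make the two-pathway idea above concrete, here is a minimal PyTorch-style sketch of one shared encoder applied to the same video at two speeds. The names (TwoPathwayTCL, base_encoder, proj_dim) and the projection-head details are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the two-pathway idea (hypothetical names such as
# TwoPathwayTCL and base_encoder; not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathwayTCL(nn.Module):
    def __init__(self, base_encoder: nn.Module, feat_dim: int = 512, proj_dim: int = 128):
        super().__init__()
        # A single shared encoder processes both the fast and the slow clip,
        # so both pathways live in the same representation space.
        self.encoder = base_encoder
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, fast_clip: torch.Tensor, slow_clip: torch.Tensor):
        # fast_clip carries more frames than slow_clip; the encoder is assumed
        # to pool over time, so each clip maps to one feature vector.
        z_fast = F.normalize(self.projector(self.encoder(fast_clip)), dim=1)
        z_slow = F.normalize(self.projector(self.encoder(slow_clip)), dim=1)
        return z_fast, z_slow
```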
Methods
  • The authors present a novel semi-supervised approach to efficiently learn video representations.
  • The authors' aim is to address semi-supervised activity recognition where only a small set of videos ($D_l$) has labels, but a large number of unlabeled videos ($D_u$) is assumed to be present alongside.
  • The set $D_l = \{V^i, y^i\}_{i=1}^{N_l}$ comprises $N_l$ videos, where the $i$-th video and the corresponding activity label are denoted by $V^i$ and $y^i$ respectively.
  • The authors use the unlabeled videos at two different frame rates and refer to them as fast and slow videos.
  • The frames are sampled from the video following Wang et al. [51], where a random frame is sampled uniformly from each of several consecutive non-overlapping segments; a minimal sketch of this sampling scheme follows.
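The sketch below illustrates that TSN-style segment sampling under stated assumptions: the video length (64 frames) and the 8-segment fast / 4-segment slow split are illustrative numbers, not values taken from the paper.

```python
import random

def sample_segment_frames(num_frames: int, num_segments: int) -> list:
    """TSN-style sampling [51]: split the video into consecutive
    non-overlapping segments and draw one frame uniformly from each.
    Any leftover frames at the end are ignored for simplicity."""
    seg_len = num_frames // num_segments
    return [i * seg_len + random.randrange(seg_len) for i in range(num_segments)]

# The fast pathway samples more segments (a higher frame rate) than the
# slow pathway; the concrete numbers below are illustrative only.
fast_indices = sample_segment_frames(num_frames=64, num_segments=8)
slow_indices = sample_segment_frames(num_frames=64, num_segments=4)
```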
Results
  • The authors' temporal contrastive learning approach is able to correctly recognize different hand gestures from the Jester dataset even with only 1% of the labels, while the supervised baseline and the best competing approach (S4L) fail to recognize such actions.
  • Example predictions for one such gesture: Supervised: "Pulling Hand In"; S4L: "Pulling Two Fingers In"; TCL: "Thumb Down".
  • The authors perform extensive ablation studies on Mini-Something-V2 with 5% labeled data and a ResNet-18 backbone to better understand the effect of the different losses and components.
Conclusion
  • The authors present a novel temporal contrastive learning framework for semi-supervised action recognition by maximizing the similarity between encoded representations of the same unlabeled video at two different speeds as well as minimizing the similarity between different unlabeled videos run at different speeds.
  • The authors demonstrate the effectiveness of the approach on four standard benchmark datasets, significantly outperforming several competing methods
Tables
  • Table1: Performance Comparison in Mini-Something-V2. Numbers show average Top-1 accuracy values with standard deviations over 3 random trials for different percentages of labeled data. TCL significantly outperforms all the compared methods in both cases
  • Table2: Performance Comparison on Jester and Kinetics-400. Numbers show the top-1 accuracy values using ResNet-18 on both datasets; our approach TCL achieves the best performance across different percentages of labeled data. Table 2 (right) summarizes the results on Kinetics-400, one of the most widely used action recognition datasets, consisting of 240K videos across 400 classes. TCL outperforms FixMatch by margins of 1.31% and 4.63% in the 1% and 5% scenarios respectively, showing the superiority of our approach on large-scale datasets. The top-1 accuracy achieved by TCL with finetuning and pretraining is almost twice that of the supervised approach when only 1% of the labeled data is used. The results also show that off-the-shelf extensions of sophisticated state-of-the-art semi-supervised image classification methods offer little benefit for action classification on videos
  • Table3: Semi-supervised action recognition under domain shift (Charades-Ego). Numbers show mean average precision (mAP) with ResNet-18 backbone across three different proportions of unlabeled data (ρ) between third and first person videos. TCL achieves the best mAP, even on this challenging dataset
  • Table4: Ablation Studies on Mini-Something-V2. Numbers show top-1 accuracy with ResNet-18 and 5% labeled data
Related work
  • Action Recognition. Action recognition is a challenging problem with great application potential. The emergence of large-scale video datasets such as Kinetics-400 [31] and Something-Something [23] has led to the application of deep neural networks that effectively learn features and provide end-to-end action recognition frameworks. Conventional approaches are mostly built over a two-stream CNN-based framework [45], with one stream processing a single RGB frame and the other processing optical flow input, to analyze spatial and temporal information respectively. Many variants of 3D-CNNs, such as C3D [48], I3D [7] and ResNet3D [27], that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. The SlowFast network [18] employs two pathways for recognizing actions by processing a video at both slow and fast frame rates. Recent works also utilize 2D-CNNs for efficient video classification by modeling temporal causality using different aggregation modules, such as temporal averaging in TSN [51], bag of features in TRN [59], channel shifting in TSM [35], and depthwise convolutions in TAM [16]. Despite promising results on common benchmarks, these models are critically dependent on large datasets that require careful and tedious human annotation effort. In contrast, we propose a simple yet effective temporal contrastive learning framework for semi-supervised action recognition that alleviates the data annotation limitation of supervised methods.
Funding
  • This work was partially supported by the SERB Grant SRG/2019/001205
  • This work is also supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341
Study subjects and analysis
We term the contrastive loss considering only individual instances the instance-contrastive loss, and the corresponding loss between groups the group-contrastive loss. We perform extensive experiments on four standard datasets and demonstrate that TCL achieves superior performance over extended baselines of state-of-the-art image-domain semi-supervised approaches. Figure 1 shows comparisons between the performance of TCL and a fully supervised strategy [35] that uses 100% labeled data.
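As a rough illustration of the instance-contrastive loss just described, here is a minimal NT-Xent-style sketch in PyTorch: each video's slow clip is the positive for its fast clip, and all other videos in the batch serve as negatives. The function name and temperature default are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z_fast: torch.Tensor, z_slow: torch.Tensor,
                              temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent-style instance-contrastive loss: the slow version of video i
    is the positive for its fast version; every other video in the batch is
    a negative. The temperature value is an illustrative default."""
    z_fast = F.normalize(z_fast, dim=1)
    z_slow = F.normalize(z_slow, dim=1)
    logits = z_fast @ z_slow.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(z_fast.size(0), device=z_fast.device)
    # Symmetrize over the fast->slow and slow->fast directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```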

A novel group-contrastive loss is pioneered to couple discriminative motion representation with pace-invariance, which significantly improves semi-supervised action recognition performance. We demonstrate, through experimental results on four datasets, TCL's superiority over extended baselines of successful image-domain semi-supervised approaches. The versatility and robustness of our approach when training with unlabeled videos from a different domain is shown, along with an in-depth ablation analysis pinpointing the role of the different components.
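A hedged sketch of how such a group-contrastive loss could look, following the description above: videos sharing a pseudo-label are averaged into one group embedding per pathway, and matching groups across the two pathways are treated as positives. The pseudo-labeling step itself and the temperature value are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def group_contrastive_loss(z_fast: torch.Tensor, z_slow: torch.Tensor,
                           pseudo_labels: torch.Tensor,
                           temperature: float = 0.5) -> torch.Tensor:
    """Videos sharing a pseudo-label are averaged into one group embedding
    per pathway; matching groups across the two pathways act as positives
    and all other groups as negatives (simplified from the description
    above, not the reference implementation)."""
    groups_fast, groups_slow = [], []
    for c in pseudo_labels.unique():
        mask = pseudo_labels == c
        groups_fast.append(z_fast[mask].mean(dim=0))
        groups_slow.append(z_slow[mask].mean(dim=0))
    g_fast = F.normalize(torch.stack(groups_fast), dim=1)
    g_slow = F.normalize(torch.stack(groups_slow), dim=1)
    logits = g_fast @ g_slow.t() / temperature  # (G, G) group similarities
    targets = torch.arange(g_fast.size(0), device=g_fast.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```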

Datasets. We evaluate our approach using four datasets, namely Mini-Something-V2 [9], Jester [36], Kinetics-400 [31] and Charades-Ego [43]. Mini-Something-V2 is a subset of the Something-Something V2 dataset [23] containing 81K training videos and 12K testing videos across 87 action classes.

Large-scale Experiments and Comparisons. Tables 1–3 show the performance of different methods on all four datasets, in terms of average top-1 clip accuracy and standard deviation over 3 random trials.

We employ a contrastive loss between different video instances, including groups of videos with similar actions, to explore high-level action semantics within the neighborhood of different videos depicting different instances of the same action. We demonstrate the effectiveness of our approach on four standard benchmark datasets, significantly outperforming several competing methods.

References
  • [1] Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness. Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning. In International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2020. 2
  • [2] Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with Pseudo-Ensembles. In Neural Information Processing Systems, pages 3365–3373, 2014. 2
  • [3] Sagie Benaim, Ariel Ephrat, Oran Lang, Inbar Mosseri, William T Freeman, Michael Rubinstein, Michal Irani, and Tali Dekel. SpeedNet: Learning the Speediness in Videos. In IEEE Conference on Computer Vision and Pattern Recognition, pages 9922–9931, 2020. 3
  • [4] David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. In International Conference on Learning Representations, 2019. 2
  • [5] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. MixMatch: A Holistic Approach to Semi-Supervised Learning. In Neural Information Processing Systems, pages 5050–5060, 2019. 2, 5, 6, 12
  • [6] Leon Bottou. Large-Scale Machine Learning with Stochastic Gradient Descent. In COMPSTAT, pages 177–186. Springer, 2010. 6
  • [7] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017. 1, 2, 4
  • [8] Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-Supervised Learning (Chapelle, O. et al., Eds.; 2006) [Book Reviews]. IEEE Transactions on Neural Networks, 20(3):542–542, 2009. 2
  • [9] Chun-Fu Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, and Quanfu Fan. Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition. arXiv preprint arXiv:2010.11757, 2020. 2, 5, 11
  • [10] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A Simple Framework for Contrastive Learning of Visual Representations. arXiv preprint arXiv:2002.05709, 2020. 1, 2, 3, 6
  • [11] Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big Self-Supervised Models are Strong Semi-Supervised Learners. arXiv preprint arXiv:2006.10029, 2020. 3
  • [12] Jinwoo Choi, Gaurav Sharma, Manmohan Chandraker, and Jia-Bin Huang. Unsupervised and Semi-Supervised Domain Adaptation for Action Recognition from Drones. In IEEE Winter Conference on Applications of Computer Vision, pages 1717–1726, 2020. 7
  • [13] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702–703, 2020. 13
  • [14] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative Unsupervised Feature Learning with Convolutional Neural Networks. In Neural Information Processing Systems, pages 766–774, 2014.
  • [15] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, and Pascal Vincent. Why Does Unsupervised Pre-training Help Deep Learning? Journal of Machine Learning Research, 11:625–660, 2010. 5
  • [16] Quanfu Fan, Chun-Fu Richard Chen, Hilde Kuehne, Marco Pistoia, and David Cox. More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation. In Neural Information Processing Systems, pages 2261–2270, 2019. 1, 2
  • [17] Christoph Feichtenhofer. X3D: Expanding Architectures for Efficient Video Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 203–213, 2020. 1
  • [18] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast Networks for Video Recognition. In IEEE International Conference on Computer Vision, pages 6202–6211, 2019. 1, 2
  • [19] Gaurav Fotedar, Nima Tajbakhsh, Shilpa Ananth, and Xiaowei Ding. Extreme Consistency: Overcoming Annotation Scarcity and Domain Shifts. arXiv preprint arXiv:2004.11966, 2020. 3
  • [20] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised Representation Learning by Predicting Image Rotations. arXiv preprint arXiv:1803.07728, 2018. 2
  • [21] Daniel Gordon, Kiana Ehsani, Dieter Fox, and Ali Farhadi. Watching The World Go By: Representation Learning from Unlabeled Videos. arXiv preprint arXiv:2003.07990, 2020. 3
  • [22] Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and Benchmarking Self-Supervised Visual Representation Learning. In IEEE International Conference on Computer Vision, pages 6391–6400, 2019. 1
  • [23] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The “Something Something” Video Database for Learning and Evaluating Visual Common Sense. In IEEE International Conference on Computer Vision (ICCV), Oct 2017. 1, 2, 5, 11, 12
  • [24] Yves Grandvalet and Yoshua Bengio. Semi-Supervised Learning by Entropy Minimization. In Neural Information Processing Systems, pages 529–536, 2005. 2
  • [25] Tengda Han, Weidi Xie, and Andrew Zisserman. Video Representation Learning by Dense Predictive Coding. In IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019. 3
  • [26] Tengda Han, Weidi Xie, and Andrew Zisserman. Memory-augmented Dense Predictive Coding for Video Representation Learning. In European Conference on Computer Vision. Springer, 2020. 1
  • [27] Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition. In IEEE International Conference on Computer Vision Workshops, pages 3154–3160, 2017. 2, 4
  • [28] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum Contrast for Unsupervised Visual Representation Learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020. 3
  • [29] Olivier J Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-Efficient Image Recognition with Contrastive Predictive Coding. arXiv preprint arXiv:1905.09272, 2019. 1
  • [30] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning Deep Representations by Mutual Information Estimation and Maximization. arXiv preprint arXiv:1808.06670, 2018. 3
  • [31] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics Human Action Video Dataset. arXiv preprint arXiv:1705.06950, 2017. 2, 5
  • [32] Bruno Korbar, Du Tran, and Lorenzo Torresani. Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization. In Neural Information Processing Systems, pages 7763–7774, 2018. 3
  • [33] Samuli Laine and Timo Aila. Temporal Ensembling for Semi-Supervised Learning. arXiv preprint arXiv:1610.02242, 2016. 2
  • [34] Dong-Hyun Lee. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks. In International Conference on Machine Learning Workshop, volume 3, page 2, 2013. 1, 2, 5, 6, 7, 12
  • [35] Ji Lin, Chuang Gan, and Song Han. TSM: Temporal Shift Module for Efficient Video Understanding. In IEEE International Conference on Computer Vision, pages 7083–7093, 2019. 1, 2, 4, 5, 6, 12
  • [36] Joanna Materzynska, Guillaume Berger, Ingo Bax, and Roland Memisevic. The Jester Dataset: A Large-Scale Video Dataset of Human Gestures. In IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019. 1, 5
  • [37] Ishan Misra and Laurens van der Maaten. Self-Supervised Learning of Pretext-Invariant Representations. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6707–6717, 2020. 1, 3
  • [38] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979–1993, 2018. 2
  • [39] Augustus Odena. Semi-Supervised Learning with Generative Adversarial Networks. arXiv preprint arXiv:1606.01583, 2016. 2
  • [40] Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, and Yin Cui. Spatiotemporal Contrastive Video Representation Learning. arXiv preprint arXiv:2008.03800, 2020. 1, 3
  • [41] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-Supervised Learning with Ladder Networks. In Neural Information Processing Systems, pages 3546–3554, 2015. 2
  • [42] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-Contrastive Networks: Self-Supervised Learning from Video. In IEEE International Conference on Robotics and Automation (ICRA), pages 1134–1141. IEEE, 2018. 3
  • [43] Gunnar A Sigurdsson, Abhinav Gupta, Cordelia Schmid, Ali Farhadi, and Karteek Alahari. Charades-Ego: A Large-Scale Dataset of Paired Third and First Person Videos. arXiv preprint arXiv:1804.09626, 2018. 2, 5, 7
  • [44] Gunnar A Sigurdsson, Gul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding. In European Conference on Computer Vision. Springer, 2016. 7
  • [45] Karen Simonyan and Andrew Zisserman. Two-Stream Convolutional Networks for Action Recognition in Videos. In Neural Information Processing Systems, pages 568–576, 2014. 2
  • [46] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. In Neural Information Processing Systems, 2020. 1, 2, 3, 5, 6, 7, 8, 13, 14, 15
  • [47] Antti Tarvainen and Harri Valpola. Mean Teachers are Better Role Models: Weight-Averaged Consistency Targets Improve Semi-Supervised Deep Learning Results. In Neural Information Processing Systems, pages 1195–1204, 2017. 2, 5, 6, 13
  • [48] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning Spatiotemporal Features with 3D Convolutional Networks. In IEEE International Conference on Computer Vision, pages 4489–4497, 2015. 1, 2
  • [49] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv:1807.03748, 2018. 1, 2, 3
  • [50] Jiangliu Wang, Jianbo Jiao, and Yun-Hui Liu. Self-Supervised Video Representation Learning by Pace Prediction. arXiv preprint arXiv:2008.05861, 2020. 3
  • [51] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition. In European Conference on Computer Vision, 2016. 2, 4
  • [52] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised Feature Learning via Non-Parametric Instance Discrimination. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3733–3742, 2018. 3
  • [53] Fanyi Xiao, Yong Jae Lee, Kristen Grauman, Jitendra Malik, and Christoph Feichtenhofer. Audiovisual SlowFast Networks for Video Recognition. arXiv preprint arXiv:2001.08740, 2020. 1
  • [54] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised Data Augmentation for Consistency Training. arXiv preprint arXiv:1904.12848, 2019. 2
  • [55] Qizhe Xie, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Self-Training with Noisy Student Improves ImageNet Classification. In IEEE Conference on Computer Vision and Pattern Recognition, pages 10687–10698, 2020. 5
  • [56] Ceyuan Yang, Yinghao Xu, Bo Dai, and Bolei Zhou. Video Representation Learning with Visual Tempo Consistency. arXiv preprint arXiv:2006.15489, 2020. 3, 6
  • [57] Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, and Qixiang Ye. Video Playback Rate Perception for Self-Supervised Spatio-Temporal Representation Learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6548–6557, 2020. 3
  • [58] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4L: Self-Supervised Semi-Supervised Learning. In IEEE International Conference on Computer Vision, pages 1476–1485, 2019. 3, 5, 6, 7, 8, 12, 14
  • [59] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal Relational Reasoning in Videos. In European Conference on Computer Vision (ECCV), pages 803–818, 2018. 2, 12
  • [60] Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin D. Cubuk, and Quoc V. Le. Rethinking Pre-training and Self-training. In Neural Information Processing Systems, 2020. 5
Authors
Ankit Singh
Omprakash Chakraborty
Ashutosh Varshney
Rogerio Feris
Abir Das