Contrastive Learning with Adversarial Examples

NeurIPS 2020


Abstract

Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations. It uses pairs of augmentations of unlabeled training examples to define a classification task for pretext learning of a deep embedding. Despite extensive works in augmentation procedures, prior works do not address the selection of challenging negative pairs, as images within a sampled batch are treated independently.

Introduction
  • Deep networks have enabled significant advances in many machine learning tasks over the last decade
  • This usually requires supervised learning, based on large and carefully curated datasets.
  • CL is based on a surrogate task that treats instances as classes and aims to learn an invariant instance representation
  • This is implemented by generating a pair of examples per instance and feeding them through an encoder, which is trained with a contrastive loss.
  • This encourages the embeddings of pairs generated from the same instance, known as positive pairs, to be close together, and embeddings originating from different instances, known as negative pairs, to be far apart (a minimal sketch of such a loss is given below)
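For concreteness, here is a minimal sketch of an instance-discrimination contrastive loss of this kind (an NT-Xent / InfoNCE-style objective, as used by methods such as SimCLR [12]). The function name, temperature value, and normalization choices are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss over a batch of positive pairs (z1[i], z2[i]).

    z1, z2: [N, D] embeddings of two augmentations of the same N images.
    All other examples in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D], unit-norm embeddings
    sim = z @ z.t() / temperature                         # [2N, 2N] scaled cosine similarities
    n = z1.size(0)
    # The positive of example i is its other augmentation: i <-> i + n
    pos = torch.cat([torch.arange(n) + n, torch.arange(n)])
    sim.fill_diagonal_(float('-inf'))                     # never treat an example as its own negative
    return F.cross_entropy(sim, pos.to(sim.device))
```

Here z1 and z2 would be the encoder outputs for the two augmentations of each image in a batch; minimizing this loss pulls positive pairs together and pushes negative pairs apart.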
Highlights
  • Deep networks have enabled significant advances in many machine learning tasks over the last decade
  • This encourages the embeddings of pairs generated from the same instance, known as positive pairs, to be close together and embeddings originated from different instances, known as negative pairs, to be far apart
  • We have shown that it is possible to leverage the interpretation of contrastive learning (CL) as instance classification to produce a sensible generalization of classification attacks to the CL problem
  • We have proposed a new algorithm (CLAE) that generates more challenging positive and hard negative pairs, on-the-fly, by leveraging adversarial examples (a rough sketch of this idea is given after this list)
  • This work advances the general use of deep learning technology, especially in the case that dataset annotations are difficult to obtain, and could have many applications
  • We show that adversarial data augmentation can be used to improve the performance of self-supervised learning (SSL)
  • While this work mainly focuses on the study of image recognition, we hope it can be extended to other application domains of SSL in the future
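As a rough illustration of the idea in the highlights, the sketch below perturbs one augmentation with an FGSM-style step that increases the contrastive loss, yielding a harder positive pair (and, implicitly, harder negatives for the rest of the batch). It reuses the contrastive_loss sketch above; the step size, number of attack steps, pixel clamping, and the way CLAE actually combines clean and adversarial losses are assumptions, not the authors' exact algorithm.

```python
import torch

def adversarial_view(encoder, x1, x2, epsilon=0.03):
    """FGSM-style perturbation of augmentation x1 that increases the contrastive loss."""
    x_adv = x1.clone().detach().requires_grad_(True)
    loss = contrastive_loss(encoder(x_adv), encoder(x2))   # sketch defined earlier
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()         # ascend the contrastive loss
    encoder.zero_grad()                                      # discard gradients accumulated during the attack
    return x_adv.detach()

# A training step could then minimize the contrastive loss on the pair
# (adversarial_view(encoder, x1, x2), x2), possibly alongside the clean pair (x1, x2).
```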
Methods
  • The impact of the embedding dimension is evaluated in Fig. 5(b)
  • This dimension does not seem to affect the performance of either the baseline or the adversarially trained model.
  • The improvement is much more dramatic with adversarial training, where it can be as high as 5% (ResNet101 over ResNet18), than for the baseline, where it is at most 1%
  • This is likely because larger networks have higher learning capacity and can benefit from the more challenging examples produced by adversarial augmentation.
  • The fact that the ResNet18 with adversarial training beats the ResNet101 baseline shows that there is always a benefit to more challenging training pairs, even for smaller models
Results
  • The authors show that adversarial data augmentation can be used to improve the performance of SSL.
Conclusion
  • In self-supervised learning (SSL), approaches based on contrastive learning (CL) do not necessarily optimize on hard negative pairs.
  • This work advances the general use of deep learning technology, especially in the case that dataset annotations are difficult to obtain, and could have many applications
  • It advances several state-of-the-art solutions for self-supervised learning (SSL), where no labels are provided.
  • While prior works in SSL suggest training with larger networks, larger batch sizes and longer training schedules, the experiments in this work demonstrate that these factors become less critical when optimizing on effective training pairs.
  • While this work mainly focuses on the study of image recognition, the authors hope it can be extended to other application domains of SSL in the future
Objectives
  • The authors aim to leverage the strength of adversarial examples for SSL, where no labels are available and where the focus on pairs rather than single examples requires an altogether different definition of adversaries.
  • Their aim is to use adversarial training to compensate for the limitations of current CL algorithms, by both generating challenging positive pairs and mining effective hard negative pairs for the optimization of the contrastive loss
Tables
  • Table 1: Downstream classification accuracy for three SSL methods, with and without (ε = 0) adversarial augmentation, on different datasets
  • Table 2: Comparison of transfer learning performance with linear evaluation [56] to other image datasets
Related work
  • Since this work focuses on image classification tasks, our survey of previous work concentrates on contrastive learning (CL) and adversarial examples for image classification.

    2.1 Contrastive learning

    Contrastive learning has been widely used in the metric learning literature [13, 71, 54] and, more recently, for self-supervised learning (SSL) [68, 74, 78, 63, 22, 12, 39, 55, 23], where it is used to learn an encoder in the pretext training stage. Under the SSL setting, where no labels are available, CL algorithms aim to learn an invariant representation of each image in the training set. This is implemented by minimizing a contrastive loss evaluated on pairs of feature vectors extracted from data augmentations of the image. While most CL based SSL approaches share this core idea, multiple augmentation strategies have been proposed [74, 78, 63, 22, 12, 39]. Typically, augmentations are obtained by data transformation (i.e. rotation, cropping, random grey scale and color jittering) [78, 12], but there have also been proposals to use different color channels, depth, or surface normals as the augmentations of an image [63]. Another approach is to use an augmentation dictionary composed of the embedding vectors from the previous epoch [74] or obtained by forwarding an image through a momentum updated encoder [22]. This diversity of approaches to the synthesis of augmentations reflects the critical importance of using semantically similar example pairs in CL [5]. This has also been studied empirically in [12], showing that stronger data augmentations improve CL performance.
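A minimal sketch of the kind of augmentation pipeline described above (cropping, random grey scale, color jittering), written with torchvision transforms; the specific parameter values are illustrative assumptions, not those of any particular cited method.

```python
from torchvision import transforms

# Two independent draws from this pipeline for the same image form one positive pair.
# Parameter values are illustrative only.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),                                              # cropping
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),    # color jittering
    transforms.RandomGrayscale(p=0.2),                                              # random grey scale
    transforms.ToTensor(),
])

def make_pair(image):
    """Return two augmented views of the same image (a positive pair)."""
    return augment(image), augment(image)
```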
Funding
  • This work was partially funded by NSF awards IIS-1637941, IIS-1924937, and NVIDIA GPU donations
Study subjects and analysis
Datasets: 8
  • Transfer to other downstream datasets: transfer performance compares how encoders learned by different SSL approaches generalize to various downstream datasets. Following the linear evaluation protocol of [12], the authors consider the 8 datasets [29, 28, 38, 14, 49, 17, 46] shown in Table 2. Both the encoder of SimCLR [12] and CLAE are trained on ImageNet100 (an ImageNet subset sampled by [63]), using a ResNet18.
  • This indicates that CLAE can scale to large datasets. On the remaining datasets of Table 2, it outperformed SimCLR on 7 of the 8 datasets, suggesting that it generalizes better across downstream datasets (a sketch of the linear evaluation protocol is given below).
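A hedged sketch of the linear evaluation protocol mentioned above: the pretrained encoder is frozen and only a linear classifier is trained on the labeled downstream dataset. The optimizer, schedule, and feature dimension here are illustrative assumptions, not the exact settings of [12] or of this paper.

```python
import torch
import torch.nn as nn

def linear_evaluation(encoder, train_loader, num_classes, feat_dim, epochs=30, lr=0.1):
    """Freeze the SSL encoder and train a linear classifier on top of its features."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)                      # encoder stays fixed

    classifier = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                features = encoder(images)           # frozen features
            loss = criterion(classifier(features), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
```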

References
  • Tiny imagenet visual recognition challenge. https://tiny-imagenet.herokuapp.com/.
  • P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 37–45, Dec 2015.
  • Unaiza Ahsan, Rishi Madhok, and Irfan A. Essa. Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition. CoRR, abs/1808.07507, 2018.
  • Anurag Arnab, Ondrej Miksik, and Philip H. S. Torr. On the robustness of semantic segmentation models to adversarial attacks. In CVPR, 2018.
  • Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. CoRR, abs/1902.09229, 2019.
  • H. Bilen and A. Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. Technical report, 2017.
  • Avishek Joey Bose, Huan Ling, and Yanshuai Cao. Adversarial contrastive estimation. CoRR, abs/1805.03642, 2018.
  • Avishek Joey Bose, Huan Ling, and Yanshuai Cao. Compositional hard negatives for visual semantic embeddings via an adversary. 2018.
  • Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. CoRR, abs/1608.04644, 2016.
  • Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey. CoRR, abs/1810.00069, 2018.
  • Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Chau. Robust physical adversarial attack on faster R-CNN object detector. CoRR, abs/1804.05810, 2018.
  • Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
  • S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 539–546 vol. 1, 2005.
  • M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed,, and A. Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014.
  • Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. 2015 IEEE International Conference on Computer Vision (ICCV), pages 1422–1430, 2015.
  • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramèr, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Physical adversarial examples for object detectors. CoRR, abs/1807.07769, 2018.
  • Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Pattern Recognition Workshop, 2004.
  • Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. CoRR, abs/1803.07728, 2018.
  • Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
  • Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 297–304, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
  • Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In Workshop on Large Scale Holistic Video Understanding, ICCV, 2019.
  • Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019.
  • Aapo Hyvärinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. In R. Garnett, D.D. Lee, U. von Luxburg, I. Guyon, and M. Sugiyama, editors, Advances in Neural Information Processing Systems, number NIPS 2016 in Advances in neural information processing systems, pages 3772–3780, United States, 2016. Neural Information Processing Systems Foundation.
  • Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In NeurIPS, 2019.
  • Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. CoRR, abs/1902.06162, 2019.
  • Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
  • Dahun Kim, Donghyeon Cho, Donggeun Yoo, and In So Kweon. Learning image representations by completing damaged jigsaw puzzles. CoRR, abs/1802.01880, 2018.
  • Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
  • Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
  • B. G. Vijay Kumar, Ben Harwood, Gustavo Carneiro, Ian D. Reid, and Tom Drummond. Smart mining for deep metric learning. CoRR, abs/1704.01285, 2017.
  • Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2016.
  • Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial machine learning at scale. CoRR, abs/1611.01236, 2016.
  • Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. CoRR, abs/1603.06668, 2016.
  • Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. CoRR, abs/1708.01246, 2017.
  • Saehyung Lee, Hyun-Gyu Lee, and Sungroh Yoon. Adversarial vertex mixup: Toward better adversarially robust generalization. ArXiv, abs/2003.02484, 2020.
  • Hong Liu, Mingsheng Long, Jianmin Wang, and Michael Jordan. Transferable adversarial training: A general approach to adapting deep classifiers. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4013–4022, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
  • Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  • S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
  • Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In CVPR, 2020.
  • Ishan Misra, C. Lawrence Zitnick, and Martial Hebert. Unsupervised learning using sequential verification for action recognition. CoRR, abs/1603.08561, 2016.
  • T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979–1993, 2019.
  • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. CoRR, abs/1610.08401, 2016.
  • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. CoRR, abs/1511.04599, 2015.
  • T. Nathan Mundhenk, Daniel Ho, and Barry Y. Chen. Improvements to context based self-supervised learning. CoRR, abs/1711.06379, 2017.
  • Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli. A selfsupervised approach for adversarial robustness. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
  • Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. CoRR, abs/1603.09246, 2016.
  • Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. CoRR, abs/1511.07528, 2015.
  • O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
  • Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 41.1–41.12. BMVA Press, September 2015.
  • Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. CoRR, abs/1604.07379, 2016.
  • Pedro Morgado, Nuno Vasconcelos, and Ishan Misra. Audio-visual instance discrimination with cross-modal agreement. https://arxiv.org/abs/2004.12943, 2020.
  • S-A. Rebuffi, H. Bilen, and A. Vedaldi. Learning multiple visual domains with residual adapters. In Neural Information Processing Systems (NeurIPS), 2017.
  • Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. CoRR, abs/1503.03832, 2015.
  • Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Selfsupervised learning from multi-view observation. CoRR, abs/1704.06888, 2017.
  • Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Selfsupervised learning from multi-view observation. CoRR, abs/1704.06888, 2017.
  • Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! CoRR, abs/1904.12843, 2019.
  • E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 118–126, 2015.
  • Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1857–1865. Curran Associates, Inc., 2016.
  • Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. CoRR, abs/1511.06452, 2015.
  • Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. CoRR, abs/1710.08864, 2017.
  • Yumin Suh, Bohyung Han, Wonsik Kim, and Kyoung Mu Lee. Stochastic class-based hard example mining for deep metric learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
  • Florian Tramèr and Dan Boneh. Adversarial training and robustness for multiple perturbations. CoRR, abs/1904.13000, 2019.
  • Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations, 2018.
  • Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. ArXiv, abs/1907.13625, 2020.
  • Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv: Machine Learning, 2018.
  • Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748, 2018.
  • Riccardo Volpi, Hongseok Namkoong, Ozan Sener, C. John Duchi, Vittorio Murino, and Silvio Savarese. Generalizing to unseen domains via adversarial data augmentation. NeurIPS, pages 5334–5344, 2018.
  • X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 2794–2802, 2015.
  • Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(9):207–244, 2009.
  • Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2020.
  • Chao-Yuan Wu, R. Manmatha, Alexander J. Smola, and Philipp Krähenbühl. Sampling matters in deep embedding learning. CoRR, abs/1706.07567, 2017.
  • Zhirong Wu, Yuanjun Xiong, Stella Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance-level discrimination. CoRR, abs/1805.01978, 2018.
  • Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan L. Yuille, and Quoc V. Le. Adversarial examples improve image recognition. ArXiv, abs/1911.09665, 2019.
  • Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, and Alan L. Yuille. Adversarial examples for semantic segmentation and object detection. CoRR, abs/1703.08603, 2017.
  • Cihang Xie and Alan L. Yuille. Intriguing properties of adversarial training. CoRR, abs/1906.03787, 2019.
  • Mang Ye, Xu Zhang, Pong C. Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. CoRR, abs/1904.03436, 2019.
  • Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. CoRR, abs/1712.07107, 2017.
  • Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. CoRR, abs/1901.08573, 2019.
  • Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. CoRR, abs/1603.08511, 2016.
  • Yue Zhao, Hong Zhu, Qintao Shen, Ruigang Liang, Kai Chen, and Shengzhi Zhang. Practical adversarial attack against object detector. CoRR, abs/1812.10217, 2018.
Author
Chih-Hui Ho
Nuno Vasconcelos