Neural FFTs for Universal Texture Image Synthesis

NeurIPS 2020 (2020)

In this work, inspired by the repetitive nature of texture patterns, we find that texture synthesis can be viewed as upsampling in the Fast Fourier Transform (FFT) domain.

Abstract

Synthesizing larger texture images from a smaller exemplar is an important task in graphics and vision. Conventional CNNs, recently adopted for synthesis, must be trained and tested on the same set of images and fail to generalize to unseen images. This is mainly because those CNNs fully rely on convolutional and upsampling layers that…

Introduction
  • Texture synthesis is the expansion of a small texture example to an arbitrarily larger size while preserving the structural content.
  • It is a challenging task given the wide range of textures a synthesizer should handle.
  • Later works train feed-forward CNNs to learn the end-to-end synthesis map that expands textures in a single pass [37, 66, 52, 36]
Highlights
  • Texture synthesis is the expansion of a small texture example to an arbitrarily larger size while preserving the structural content
  • We compare against several leading benchmarks:
    1) naive tiling: duplicates the input 128×128 patch to form the 256×256 output;
    2) self-tuning [29]: a state-of-the-art optimization-based method;
    3) texture CNN [18]: a style-transfer method that uses the 256×256 ground truth as the style and 256×256 noise as the content;
    4) whiten-and-color transform (WCT) [37]: a style-transfer method that uses the 128×128 patch as the style and 256×256 noise as the content;
    5) texture mixer [62]: a texture-interpolation method with all source patches chosen from the input;
    6) sinGAN [52] and 7) non-stationary [66]: GAN-based schemes that overfit the network to each example;
    8) pix2pix [56]: a representative image-to-image translation method.
  • The results are presented in Table 1, where we find that each proposed module plays an important role in achieving successful texture synthesis.
  • For effective CNN training with non-smooth Fast Fourier Transform (FFT) images, we design a framework that applies FFT upsampling in the feature space using a deconvolution network (a toy illustration of plain FFT-domain upsampling follows this list).
  • Extensive evaluations confirm that our synthesizer achieves state-of-the-art performance based on both quantitative metrics and human evaluations
  • In order to address the shortcomings of the proposed approach, there are still important steps to pursue for future research
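
As a rough intuition for the FFT-domain view of upsampling referenced in the highlights above, the NumPy sketch below enlarges a patch by zero-padding its centered spectrum. This is plain spectral interpolation, not the paper's learned feature-space FFT upsampler; the function name and scaling convention are illustrative assumptions.

```python
import numpy as np

def fft_upsample(patch: np.ndarray, factor: int = 2) -> np.ndarray:
    """Enlarge a 2-D grayscale patch by zero-padding its centered FFT spectrum."""
    h, w = patch.shape
    spec = np.fft.fftshift(np.fft.fft2(patch))           # centered spectrum
    H, W = factor * h, factor * w
    padded = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    padded[top:top + h, left:left + w] = spec             # embed the original spectrum
    # factor**2 compensates for ifft2's 1/(H*W) normalization
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded)) * factor**2)

patch = np.random.rand(128, 128)
print(fft_upsample(patch).shape)  # (256, 256)
```
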
Methods
  • Performance of the FFT-based texture synthesis was assessed on a large and diverse dataset of natural texture images and compared with the state of the art using both quantitative and qualitative metrics.

    Dataset.
  • A large texture dataset with 55,583 images is collected from 15 different sources [8, 53, 9, 6, 7, 47, 1, 15, 45, 32].
  • All images are resized, preserving the aspect ratio, to the standard 256 × 256 target/output size; 128 × 128 input patches are formed by cropping the center of the target images (a preprocessing sketch follows this list).
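
The summary above does not pin down the exact resize policy, so the sketch below assumes a common convention: resize the shorter side to 256 while preserving the aspect ratio, center-crop the 256×256 target, then take its 128×128 center as the input patch. The helper names are hypothetical.

```python
from PIL import Image
import torchvision.transforms as T

to_target = T.Compose([
    T.Resize(256),       # preserve aspect ratio: shorter side -> 256 (assumption)
    T.CenterCrop(256),   # 256×256 ground-truth target
])

def make_pair(path: str):
    img = Image.open(path).convert("RGB")
    target = to_target(img)             # 256×256 output y
    patch = T.CenterCrop(128)(target)   # 128×128 input x (center crop of the target)
    return patch, target
```
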
Results
  • The authors compare against several leading benchmarks:
    1) naive tiling: duplicates the input 128×128 patch to form the 256×256 output;
    2) self-tuning [29]: a state-of-the-art optimization-based method;
    3) texture CNN [18]: a style-transfer method that uses the 256×256 ground truth as the style and 256×256 noise as the content;
    4) WCT [37]: a style-transfer method that uses the 128×128 patch as the style and 256×256 noise as the content;
    5) texture mixer [62]: a texture-interpolation method with all source patches chosen from the input;
    6) sinGAN [52] and 7) non-stationary [66]: GAN-based schemes that overfit the network to each example;
    8) pix2pix [56]: a representative image-to-image translation method.
  • The authors' p-values for all comparisons are small (< 10−6), indicating that the preference for their method is statistically significant.
Conclusion
  • This paper puts forth a novel FFT-based CNN framework for universal texture synthesis.
  • In order to address the shortcomings of the proposed approach, there are still important steps to pursue for future research.
  • One such step pertains to diversifying the generated texture in a controllable manner.
  • Another step is to handle synthesis for non-stationary textures.
  • The Gram-based style loss may not effectively translate local characteristics, and developing a more effective criterion to match the input and output statistics is an important next step (a minimal sketch of the Gram-based style loss follows).
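
For reference, here is a minimal sketch of the Gram-based style loss mentioned above, assuming feature maps (e.g. from a pretrained VGG) have already been extracted. This is the generic Gatys-style formulation [18], not necessarily the paper's exact loss weighting.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W) feature map; returns (B, C, C) normalized Gram matrices.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feat_out: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
    # Match second-order feature statistics between output and reference textures.
    return F.mse_loss(gram_matrix(feat_out), gram_matrix(feat_ref))
```
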
Objectives
  • Given the training data {(yᵢ, xᵢ)} for i = 1, …, K, learn the upsampler hθ(·) that maps X to Y (a minimal supervised training step is sketched below).
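
A minimal sketch of one supervised step for this objective, assuming a PyTorch upsampler h_theta; the plain L1 reconstruction loss is an assumption (the paper also uses, e.g., a Gram-based style loss).

```python
import torch

def training_step(h_theta, optimizer, x, y):
    """Fit h_theta so that h_theta(x) ≈ y for a batch of (128×128 patch, 256×256 target) pairs."""
    optimizer.zero_grad()
    loss = torch.nn.functional.l1_loss(h_theta(x), y)  # reconstruction term (assumption)
    loss.backward()
    optimizer.step()
    return loss.item()
```
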
Tables
  • Table 1: Performance of different synthesis methods and ablation study for critical design elements, averaged over 200 test examples. PS is based on scores received from 15 readers. Texture CNN∗ needs the ground truth as input. All methods are run on a single NVIDIA Tesla V100, except for self-tuning, which runs the default 8 threads in parallel on an Intel Core i7-6800K CPU @ 3.40 GHz. Note that the running time of ours is based on a customized CUDA kernel for deformable convolution (an illustrative deformable-convolution call is sketched below).
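
The timing note above refers to deformable convolution [10]. For orientation, here is an illustrative call using torchvision's stock implementation, not the paper's custom CUDA kernel; the shapes are arbitrary.

```python
import torch
from torchvision.ops import DeformConv2d

x = torch.randn(1, 64, 32, 32)
deform = DeformConv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
# One (dy, dx) offset per kernel tap and output location: 2 * 3 * 3 = 18 channels.
offset = torch.zeros(1, 18, 32, 32)   # zero offsets reduce to a regular 3×3 conv
y = deform(x, offset)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```
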
Related work
  • Texture synthesis has witnessed ample research; a holistic survey is beyond the scope of this paper (see e.g. [4]). Here we list the most relevant works from the two main categories: non-parametric and parametric. Non-parametric examples include pixel-based [14, 58, 13], assembling-based [13, 39, 31, 49], optimization-based [48, 30, 51, 29], appearance-based [34], and image-analogy-based [23] methods. Self-tuning texture optimization [29] is a state-of-the-art non-parametric method. It matches certain global statistics (such as histograms) between the input patch and the output by optimizing a handcrafted objective; thus, it can be prohibitively slow and brittle for complex textures.
Funding
  • Extensive evaluations confirm that our method achieves state-of-the-art performance both quantitatively and qualitatively
  • Our scheme also significantly outperforms WCT, compare e.g., FID=128.1 vs. 71.82, which is mainly due to the structural artifacts present in WCT textures
Study subjects and analysis
pairs: 4
The detailed architecture for 256×256 texture synthesis from 128×128 inputs is as follows. The encoder starts with an RGB image patch of size 3×128×128 and, after four pairs of stride-2 and stride-1 convolutional layers, extracts a 512×8×8 feature map. We take the features at 512×8×8, 256×16×16, and 128×32×32 for the FFT-based upsampling (an encoder sketch follows).
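
A sketch of an encoder matching the shapes quoted above (four stride-2/stride-1 convolution pairs, 3×128×128 in, 512×8×8 out). The intermediate channel widths 64/128/256 and the use of plain ReLU are assumptions chosen to reproduce the 128×32×32 and 256×16×16 feature sizes mentioned in the text.

```python
import torch
import torch.nn as nn

class TextureEncoder(nn.Module):
    """Four (stride-2, stride-1) convolution pairs: 3×128×128 -> 512×8×8."""
    def __init__(self):
        super().__init__()
        chans = [3, 64, 128, 256, 512]   # early widths are assumptions
        self.stages = nn.ModuleList()
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            self.stages.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            ))

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # feats[-3:] are 128×32×32, 256×16×16, 512×8×8 — the levels quoted
        # in the text for FFT-based upsampling.
        return feats

x = torch.randn(1, 3, 128, 128)
for f in TextureEncoder()(x)[-3:]:
    print(tuple(f.shape))  # (1, 128, 32, 32), (1, 256, 16, 16), (1, 512, 8, 8)
```
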

users: 15
We use Amazon Mechanical Turk (AMT) to perform AB testing, where users are asked to choose between the synthesized textures from our method and one of the benchmarks and provide a binary score. For each method pair, the orders are randomized, and 200 examples are each viewed by 15 users. A two-sample t-test is used to identify whether the mean scores of two schemes differ significantly (an illustrative test is sketched below).
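
An illustrative two-sample t-test on made-up AB scores, just to show the mechanics described above; the SciPy call is standard, and the numbers are synthetic, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical binary preferences from 200 examples × 15 users for one method pair.
ours = rng.binomial(1, 0.8, size=200 * 15)   # 1 = user preferred our result
benchmark = 1 - ours                          # complementary choice in AB testing

t_stat, p_value = stats.ttest_ind(ours, benchmark)
print(f"t = {t_stat:.1f}, p = {p_value:.1e}")  # small p -> significant preference
```
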

Reference
  • [1] S. Abdelmounaime and H. Dong-Chen. New Brodatz-based image databases for grayscale color and multiband texture analysis. 2013.
  • [2] M. Aittala, T. Aila, and J. Lehtinen. Reflectance modeling by neural texture synthesis. ACM Transactions on Graphics (ToG), 35(4):1–13, 2016.
  • [3] A. Alanov, M. Kochurov, D. Volkhonskiy, D. Yashkov, E. Burnaev, and D. Vetrov. User-controllable multi-texture synthesis with generative adversarial networks. arXiv preprint arXiv:1904.04751, 2019.
  • [4] J. A. Alexander and M. C. Mozer. Template-based algorithms for connectionist rule extraction. In Advances in Neural Information Processing Systems, pages 609–616, 1995.
  • [5] U. Bergmann, N. Jetchev, and R. Vollgraf. Learning texture manifolds with the periodic spatial GAN. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 469–477. JMLR.org, 2017.
  • [6] G. J. Burghouts and J.-M. Geusebroek. Material-specific adaptation of color invariant features. Pattern Recognition Letters, 30(3):306–313, 2009.
  • [8] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [9] D. Dai, H. Riemenschneider, and L. Van Gool. The synthesizability of texture examples. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [10] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764–773, 2017.
  • [11] A. Dundar, K. Sapra, G. Liu, A. Tao, and B. Catanzaro. Panoptic-based image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8070–8079, 2020.
  • [12] A. Dundar, K. J. Shih, A. Garg, R. Pottorf, A. Tao, and B. Catanzaro. Unsupervised disentanglement of pose, appearance and background from images and videos. arXiv preprint arXiv:2001.09518, 2020.
  • [13] A. A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 341–346, 2001.
  • [14] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1033–1038. IEEE, 1999.
  • [15] M. Fritz, E. Hayman, B. Caputo, and J.-O. Eklundh. The KTH-TIPS database. 2004.
  • [16] A. Frühstück, I. Alhashim, and P. Wonka. TileGAN: Synthesis of large-scale non-homogeneous textures. arXiv preprint arXiv:1904.12795, 2019.
  • [17] B. Galerne, Y. Gousseau, and J.-M. Morel. Random phase textures: Theory and synthesis. IEEE Transactions on Image Processing, 20(1):257–267, 2010.
  • [18] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pages 262–270, 2015.
  • [19] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
  • [20] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
  • [21] R. C. Gonzales and R. E. Woods. Digital Image Processing. 2002.
  • [22] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  • [23] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin. Image analogies. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 327–340, 2001.
  • [24] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, 2017.
  • [25] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [26] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
  • [27] N. Jetchev, U. Bergmann, and R. Vollgraf. Texture synthesis with spatial generative adversarial networks. CoRR, abs/1611.08207, 2016.
  • [28] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711, 2016.
  • [29] A. Kaspar, B. Neubert, D. Lischinski, M. Pauly, and J. Kopf. Self tuning texture optimization. In Computer Graphics Forum, volume 34, pages 349–359. Wiley Online Library, 2015.
  • [30] V. Kwatra, I. Essa, A. Bobick, and N. Kwatra. Texture optimization for example-based synthesis. In ACM SIGGRAPH 2005 Papers, SIGGRAPH '05, pages 795–802, New York, NY, USA, 2005. ACM.
  • [31] V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick. Graphcut textures: Image and video synthesis using graph cuts. ACM Transactions on Graphics (ToG), 22(3):277–286, 2003.
  • [32] R. Kwitt and P. Meerwald. Salzburg texture image database.
  • [33] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang. Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 624–632, 2017.
  • [34] S. Lefebvre and H. Hoppe. Appearance-space texture synthesis. ACM Transactions on Graphics (ToG), 25(3):541–548, 2006.
  • [35] C. Li and M. Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. CoRR, abs/1604.04382, 2016.
  • [36] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3920–3928, 2017.
  • [37] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pages 386–396, 2017.
  • [38] Y. Li, M.-Y. Liu, X. Li, M.-H. Yang, and J. Kautz. A closed-form solution to photorealistic image stylization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 453–468, 2018.
  • [39] L. Liang, C. Liu, Y.-Q. Xu, B. Guo, and H.-Y. Shum. Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics (ToG), 20(3):127–150, 2001.
  • [40] J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang. Visual attribute transfer through deep image analogy. arXiv preprint arXiv:1705.01088, 2017.
  • [41] G. Liu, Y. Gousseau, and G.-S. Xia. Texture synthesis through convolutional neural networks and spectrum constraints. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 3234–3239. IEEE, 2016.
  • [42] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.
  • [43] G. Liu, R. Taori, T.-C. Wang, Z. Yu, S. Liu, F. A. Reda, K. Sapra, A. Tao, and B. Catanzaro. Transposer: Universal texture synthesis using feature maps as transposed convolution filter. arXiv preprint arXiv:2007.07243, 2020.
  • [44] X. Liu, G. Yin, J. Shao, X. Wang, et al. Learning to predict layout-to-image conditional convolutions for semantic image synthesis. In Advances in Neural Information Processing Systems, pages 568–578, 2019.
  • [45] P. Mallikarjuna, A. Targhi, M. Fritz, E. Hayman, B. Caputo, and J.-O. Eklundh. The KTH-TIPS2 database. July 2006.
  • [46] T. Park, M.-Y. Liu, T.-C. Wang, and J.-Y. Zhu. Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2337–2346, 2019.
  • [47] R. Picard, C. Graczyk, S. Mann, J. Wachman, L. Picard, and L. Campbell. VisTex vision texture database. January 2010.
  • [48] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, October 2000.
  • [49] Y. Pritch, E. Kav-Venaki, and S. Peleg. Shift-map image editing. In 2009 IEEE 12th International Conference on Computer Vision, pages 151–158. IEEE.
  • [50] Y. Ren, X. Yu, R. Zhang, T. H. Li, S. Liu, and G. Li. StructureFlow: Image inpainting via structure-aware appearance flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 181–190, 2019.
  • [51] A. Rosenberger, D. Cohen-Or, and D. Lischinski. Layered shape synthesis: Automatic generation of control maps for non-stationary textures. In ACM SIGGRAPH Asia 2009 Papers, SIGGRAPH Asia '09, pages 107:1–107:9, New York, NY, USA, 2009. ACM.
  • [52] T. R. Shaham, T. Dekel, and T. Michaeli. SinGAN: Learning a generative model from a single natural image. In Proceedings of the IEEE International Conference on Computer Vision, pages 4570–4580, 2019.
  • [53] L. Sharan, R. Rosenholtz, and E. Adelson. Material perception: What can you see in a brief glance? Journal of Vision, 9(8):784–784, 2009.
  • [54] G. Tartavel, Y. Gousseau, and G. Peyré. Variational texture synthesis with sparsity and spectrum constraints. Journal of Mathematical Imaging and Vision, 52(1):124–144, 2015.
  • [55] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. URL: http://arxiv.org/abs/1603.03417, 2016.
  • [56] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8798–8807, 2018.
  • [57] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • [58] L.-Y. Wei and M. Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.
  • [59] G.-S. Xia, S. Ferradans, G. Peyré, and J.-F. Aujol. Synthesizing and mixing stationary Gaussian texture models. SIAM Journal on Imaging Sciences, 7(1):476–508, 2014.
  • [60] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang, and H. Li. High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6721–6729, 2017.
  • [61] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505–5514, 2018.
  • [62] N. Yu, C. Barnes, E. Shechtman, S. Amirghodsi, and M. Lukác. Texture mixer: A network for controllable synthesis and interpolation of texture. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12164–12173, 2019.
  • [63] Y. Zeng, J. Fu, H. Chao, and B. Guo. Learning pyramid-context encoder network for high-quality image inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1486–1494, 2019.
  • [64] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
  • [65] C. Zheng, T.-J. Cham, and J. Cai. Pluralistic image completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1438–1447, 2019.
  • [66] Y. Zhou, Z. Zhu, X. Bai, D. Lischinski, D. Cohen-Or, and H. Huang. Non-stationary texture synthesis by adversarial expansion. arXiv preprint arXiv:1805.04487, 2018.
Author
Morteza Mardani
Shiqiu Liu
Andrew Tao