Pairwise Rotation Invariant Co-Occurrence Local Binary Pattern

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 36, Issue 11, 2014, Pages 2199-2213.

Keywords:
Flickr material dataset, material recognition, relative angle, local binary pattern, texture classification (20+ more)

Abstract:

Designing effective features is a fundamental problem in computer vision. However, it is usually difficult to achieve a good tradeoff between discriminative power and robustness. Previous works have shown that spatial co-occurrence can boost the discriminative power of features. However, the existing co-occurrence features take f...

Introduction
  • DESIGNING effective features is a fundamental issue in computer vision
  • It plays a significant role in a wide range of applications, including static and dynamic texture classification [1], [2], [3], [4], [5], object and scene recognition [6], [7], [8], [9], [10], face detection and recognition [11], image retrieval [12], [13], stereo correspondence, 3D reconstruction, and many more.
  • The rationale behind this claim is that the spatial co-occurrence of two features captures a strong correlation between them and provides more information than their individual occurrences
Highlights
  • DESIGNING effective features is a fundamental issue in computer vision
  • Previous work on texture classification [14], object classification [8], [9], [15], [16], [17], and image retrieval [12], [13] has shown that spatial co-occurrence among features can increase their discriminative power
  • We evaluate PRICoLBP comprehensively on nine data sets from five different perspectives, including encoding strategy, rotation invariance, the number of templates, speed, and discriminative power compared with other LBP variants
  • In the spatial PACT, each image was divided into a three-layer pyramid, in which the first layer had one block, the second layer contained five blocks, and the third had 5 × 5 = 25 blocks
  • We addressed the transform invariance issue for co-occurrence features
  • We evaluated the performance of the PRICoLBP comprehensively on nine benchmark data sets from five different perspectives
Results
  • Note that PRICoLBPg reduced the classification error of PACT [58] by over 70 percent in relative terms.
  • The authors' method with PRICoLBPg did not use spatial layout prior information.
  • In the spatial PACT, each image was divided into a three-layer pyramid, in which the first layer had one block, the second layer contained five blocks, and the third had 5 × 5 = 25 blocks.
  • If the leaf images were well-aligned, as in the Swedish leaf data set, the spatial prior information could provide important cues for classification.
  • In practical cases, however, this condition can hardly be satisfied
Conclusion
  • CONCLUSION AND DISCUSSION

    In this paper, the authors addressed the transform invariance issue for co-occurrence features.
  • The authors presented a pairwise transform invariance principle, proposed an effective and efficient co-occurrence encoding scheme, the pairwise rotation invariant co-occurrence LBP (PRICoLBP), and extended it to incorporate multi-scale, multi-orientation, and multi-color-channel information.
  • Unlike other LBP variants, PRICoLBP can effectively capture spatial-context co-occurrence information and possesses rotation invariance.
  • The proposed feature also shows superior performance on other medical applications, such as tissue classification.
  • The authors hope that PRICoLBP can become a de facto standard tool for texture-relevant classification and retrieval applications
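The pairwise encoding summarized above can be illustrated with a small, heavily simplified sketch. This is not the authors' implementation: the selection of the co-occurrence point B (along the orientation of A) and the exact pattern indexing follow the paper, and the bin layout used here, 10 rotation-invariant uniform patterns for A times 59 uniform patterns for B (10 × 59 = 590, matching the per-template base dimensionality reported with Table 7), is an inference. Only the joint-histogram bookkeeping is shown:

```python
def pricolbp_histogram(pairs, n_a=10, n_b=59):
    """Accumulate (A, B) pattern pairs into a joint co-occurrence histogram.

    pairs : iterable of (a, b), with a in [0, n_a) the rotation-invariant
            uniform LBP index of point A, and b in [0, n_b) the uniform
            LBP index of its co-occurring point B (both hypothetical
            indices for this sketch).
    """
    hist = [0.0] * (n_a * n_b)  # 10 * 59 = 590 bins per template
    for a, b in pairs:
        hist[a * n_b + b] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # L1-normalised histogram
```

Concatenating such histograms over T templates (scales/orientations), and optionally over color channels, reproduces the dimensionality growth discussed with Table 7.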
Tables
  • Table1: Summary of Six Applications and Nine Databases Used in Our Experiments. RU-LBP denotes the rotation-invariant uniform LBP of point A, and U(B(3t)) the uniform LBP of point B. Here we refer to a scale or an orientation as a template; e.g., if we use three scales and two orientations, then the number of templates T is 6
  • Table2: Comparison of RUCoLBP, UUCoLBP, and PRICoLBP
  • Table3: Comparison of PRICoLBP0, PRICoLBPg, and CoALBP
  • Table4: Templates Settings
  • Table5: Performance of PRICoLBPg with Different Templates Settings
  • Table6: Comparison of Running Time (Seconds) of PRICoLBP,
  • Table7: Comparison with Other LBP Variants. The dimensionality of PRICoLBPg on Brodatz, CUReT, KTH-TIPS, and Leaf is 590 × 2 = 1,180, and the dimensionality for FMD is 590 × 6 = 3,540. Color information is incorporated for Oxford Flower, PFID, and MIT-indoor; the dimensionality for Oxford Flower and PFID is 3,540 × 3 = 10,620. Since we use two scales (LBP(8, 1) and LBP(8, 2)) on the two scene data sets, the dimensionality of gray PRICoLBPg on Scene-15 and MIT-indoor 67 is 2,360 and 7,080, respectively. When color information is used, the dimensionality triples. When SPM (e.g., 1 × 1, 2 × 2, and 3 × 1 blocks) is applied, e.g., on Scene-15 and MIT-indoor, the dimensionality increases by a factor of eight (1 + 4 + 3 blocks). When PCA is used, cross-validation was used to determine the dimension. On FMD, we found that a 120-dimensional feature after PCA yielded the best performance
  • Table8: Texture Classification Results on Brodatz,
  • Table9: Experimental Results on Data Set FMD
  • Table10: Recognition Results on Oxford Flower 102 Data Set
  • Table11: Recognition Performance on Swedish Leaf
  • Table12: Classification Accuracy on PFID Data Set and Dimensionality of the Representation Used in Each Method
  • Table13: Classification Results on Scene 15 Data Set
  • Table14: Average Accuracy (Percent) for MIT Indoor Scene Data Set
  • Table15: Category-Wise Accuracy (Percent) for PRICoLBPg, RBoW [66], DPM [72], and GIST-Color [63] on MIT Indoor 67
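The dimensionality arithmetic in the Table 7 note reduces to a few products; a quick sketch (the 590-dimensional base of gray PRICoLBPg is taken from the note, while the breakdown of Scene-15's factor of four into two scales times two templates is an assumption):

```python
BASE = 590  # gray PRICoLBPg bins per template (from the Table 7 note)

brodatz_dim = BASE * 2              # two templates -> 1,180
fmd_dim = BASE * 6                  # six templates -> 3,540
flower_color_dim = fmd_dim * 3      # three color channels -> 10,620
scene15_dim = BASE * 2 * 2          # assumed: two scales x two templates -> 2,360
spm_factor = 1 * 1 + 2 * 2 + 3 * 1  # SPM blocks 1x1 + 2x2 + 3x1 -> 8x growth
```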
Related work
  • 2.1 Local Binary Pattern

    The local binary pattern (LBP) operator was first proposed by Ojala et al. [1] as a gray-scale invariant texture descriptor. For a pixel A in an image, its LBP code is computed by thresholding its n circularly symmetric neighbors on a circle of radius r against the value of the central pixel and arranging the results as a binary string. For clarity, we denote the LBP of pixel A as $\mathrm{LBP}_{n,r}(A)$, which is defined as follows:

    $$\mathrm{LBP}_{n,r}(A) = \sum_{i=0}^{n-1} s(g_i - g_c)\,2^i, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0, \end{cases}$$

    $$U(\mathrm{LBP}_{n,r}(A)) = \sum_{i=1}^{n} \left| s(g_i - g_c) - s(g_{i-1} - g_c) \right|,$$

    where $g_n$ is equivalent to $g_0$. The uniformity measure $U$ counts the number of 0/1 transitions in the circular bit pattern; patterns with $U \le 2$ are called uniform. For example, “11110000” and “11000011” are uniform patterns, and “11110100” and “10100100” are non-uniform patterns.

    To enhance robustness to image rotation, the rotation invariant LBP (LBPri) and the rotation invariant uniform LBP (LBPru) have also been introduced. The LBPri is defined as the minimum over all circular bit-wise right rotations of the LBP code:

    $$\mathrm{LBP}^{ri}_{n,r}(A) = \min_{0 \le i < n} \mathrm{ROR}\big(\mathrm{LBP}_{n,r}(A),\, i\big),$$

    where $\mathrm{ROR}(x, i)$ circularly shifts the $n$-bit code $x$ to the right by $i$ positions.
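As a concrete check of these definitions, here is a minimal Python sketch (plain lists, nearest-neighbour sampling for simplicity; bilinear interpolation is the usual choice in practice):

```python
import math

def lbp_code(patch, n=8, r=1):
    """LBP_{n,r} at the centre of a (2r+1)x(2r+1) patch (list of lists).

    Threshold the n circularly sampled neighbours g_i against the centre
    value g_c and pack the sign bits s(g_i - g_c) into an n-bit code.
    Nearest-neighbour sampling is used here for simplicity.
    """
    gc = patch[r][r]
    code = 0
    for i in range(n):
        theta = 2 * math.pi * i / n
        y = round(r - r * math.sin(theta))  # image rows grow downwards
        x = round(r + r * math.cos(theta))
        code |= (1 if patch[y][x] >= gc else 0) << i
    return code

def uniformity(code, n=8):
    """U(LBP): number of 0/1 transitions in the circular bit string."""
    bits = [(code >> i) & 1 for i in range(n)]
    return sum(bits[i] != bits[i - 1] for i in range(n))  # i-1 wraps around

def rotation_invariant(code, n=8):
    """LBP^{ri}: minimum over all circular bit-wise rotations of the code."""
    best = code
    for _ in range(n - 1):
        code = (code >> 1) | ((code & 1) << (n - 1))
        best = min(best, code)
    return best
```

Both “11110000” and “11000011” have exactly two transitions (hence uniform), while “11110100” has four and “10100100” has six, matching the examples above.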
Funding
  • Guo are supported by the Natural Science Foundation of China (NSFC) under Grant nos. 61175011, 61273217, and 61171193, and the 111 Project under Grant no
  • Tang are supported by NSFC (91320101), Shenzhen Basic Research Program (JC201005270350A, JCYJ20120903092050890, JCYJ20120617114614438), 100 Talents Programme of CAS
Reference
  • T. Ojala, M. Pietik€ainen, and T. M€aenp€a€a, “Multiresolution grayscale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
  • G. Zhao and M. Pietikainen, “Dynamic texture recognition using local binary patterns with an application to facial expressions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 915–928, Jun. 2007.
  • S. Lazebnik, C. Schmid, and J. Ponce, “A sparse texture representation using local affine regions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1265–1278, Aug. 2005.
  • M. Varma and A. Zisserman, “A statistical approach to texture classification from single images,” Int. J. Comput. Vis., vol. 62, pp. 61–81, 2005.
  • J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, “Local features and kernels for classification of texture and object categories: A comprehensive study,” in Proc. IEEE Workshop Comput. Vis. Pattern Recognit. Workshop, 2007, pp. 213–238.
  • L. Bo, “Kernel descriptors for visual recognition,” in Proc. Adv. Neural Inf. Process. Syst., 2010, pp. 244–252.
  • O. Boiman, E. Shechtman, and M. Irani, “In defense of nearestneighbor based image classification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
  • P. Chang and J. Krumm, “Object recognition with color co-occurrence histograms,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1999.
  • S. Ito and S. Kubota, “Object classification using heterogeneous co-occurrence features,” in Proc. 11th Eur. Conf. Comput. Vis., 2010, pp. 209–222.
  • C. Kanan and G. Cottrell, “Robust classification of objects, faces, and flowers using natural image statistics,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 2472–2479.
  • T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: Application to face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037– 2041, Dec. 2006.
  • Y. Zhang, Z. Jia, and T. Chen, “Image retrieval with geometry-preserving visual phrases,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 809–816.
  • O. Chum and J. Matas, “Unsupervised discovery of co-occurrence in sparse high dimensional data,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 3416–3423.
  • R. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Systems, Man and Cybern., vol. SMC-3, no. 6, pp. 610–621, Nov. 1973.
  • J. Yuan, M. Yang, and Y. Wu, “Mining discriminative co-occurrence patterns for visual recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 2777–2784.
  • N. Rasiwasia and N. Vasconcelos, “Holistic context modeling using semantic co-occurrences,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 1889–1895.
  • S. Yang, M. Chen, D. Pomerleau, and R. Sukthankar, “Food recognition using statistics of pairwise local features,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 2249–2256.
  • G. A. Orban, “Higher order visual processing in macaque extrastriate cortex,” Physiological Rev., vol. 88, no. 1, pp. 59–89, 2008.
  • Y. Yang and S. Newsam, “Spatial pyramid co-occurrence for image classification,” in Proc. IEEE Int. Conf. Comput. Vis., 2011, pp. 1465–1472.
  • X. Qi, R. Xiao, J. Guo, and L. Zhang, “Pairwise rotation invariant co-occurrence local binary pattern,” in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 158–171.
  • X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” in Proc. 3rd Int. Conf. Anal. Model. Faces Gestures, 2007, pp. 168–182.
  • Z. Guo, L. Zhang, and D. Zhang, “Rotation invariant texture classification using lbp variance (lbpv) with global matching,” Pattern Recognit., vol. 43, pp. 706–719, 2010.
  • N.-S. Vu and A. Caplier, “Face recognition with patterns of oriented edge magnitudes,” in Proc. 11th Eur. Conf. Comput. Vis., 2010, pp. 313–326.
  • N.-S. Vu and A. Caplier, “Mining patterns of orientations and magnitudes for face recognition,” in Proc. IEEE Int. Joint Conf. Biometrics, 2011, pp. 1–8.
  • Z. Guo, L. Zhang, and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Trans. Image Process., vol. 19, no. 6, pp. 1657–1663, Jun. 2010.
  • T. Ahonen, J. Matas, C. He, and M. Pietik€ainen, “Rotation invariant image description with local binary pattern histogram fourier features,” in Proc. 16th Scandinavian Conf. Image Anal, 2009, pp. 61– 70.
  • G. Zhao, T. Ahonen, J. Matas, and M. Pietikainen, “Rotationinvariant image and video description with local binary pattern features,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1465–1477, Apr. 2012.
  • R. Nosaka, Y. Ohkawa, and K. Fukui, “Feature extraction based on co-occurrence of adjacent local binary patterns,” in Proc. 5th Pacific Rim Conf. Adv. Image Video Technol., 2012, pp. 82–91.
  • D. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, pp. 91–110, 2004.
  • M. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes,” in Proc. 6th Indian Conf. Comput. Vis., Graph. Image Process., 2008, pp. 722–729.
  • M. Nilsback, “An automatic visual flora—Segmentation and classification of flowers images,” in Ph.D. thesis, Department of Engineering Science, University of Oxford, Oxford, U.K., 2009.
  • Y. Chai, V. Lempitsky, and A. Zisserman, “Bicos: A bi-level cosegmentation method for image classification,” in Proc. IEEE Int. Conf. Comput. Vis., 2011, pp. 2579–2586.
  • P. Brodatz, Textures: A Photographic Album for Artists and Designers. New York, NY, USA: Dover, 1999.
  • K. Dana, B. Van Ginneken, S. Nayar, and J. Koenderink, “Reflectance and texture of real-world surfaces,” ACM Trans. Graph., vol. 18, pp. 1–34, 1999.
  • E. Hayman, B. Caputo, M. Fritz, and J. Eklundh, “On the significance of real-world conditions for material classification,” in Proc. Eur. Conf. Comput. Vis., 2004, pp. 253–266.
  • L. Sharan, R. Rosenholtz, and E. Adelson, “Material perception: What can you see in a brief glance?” J. Vis., vol. 9, p. 784, 2009.
  • O. So€derkvist, “Computer vision classification of leaves from swedish trees,” Master’s thesis, Department of Electrical Engineering, Linko€ping University, Linko€ping, Sweden, 2001.
  • M. Chen, K. Dhingra, W. Wu, L. Yang, R. Sukthankar, and J. Yang, “PFID: Pittsburgh fast-food image dataset,” in Proc. 16th IEEE Int. Conf. Image Process., 2009, pp. 289–292.
  • S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2006, pp. 2169–2178.
  • A. Vedaldi and A. Zisserman, “Efficient additive kernels via explicit feature maps,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 3, pp. 480–492, 2012.
  • J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 3485–3492.
  • A. Vedaldi and B. Fulkerson, “Vlfeat: An open and portable library of computer vision algorithms,” in Proc. ACM Int. Conf. Multimedia, 2010, pp. 1469–1472.
  • M. Varma and A. Zisserman, “A statistical approach to material classification using image patch exemplars,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, pp. 2032–2047, Nov. 2008.
  • B. Caputo, E. Hayman, M. Fritz, and J. Eklundh, “Classifying materials in the real world,” Image Vis. Comput., vol. 28, pp. 150– 163, 2010.
  • L. Liu and P. Fieguth, “Texture classification from random features,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 3, pp. 574–586, Mar. 2012.
  • H. Nguyen, R. Fablet, and J. Boucher, “Visual textures as realizations of multivariate log-Gaussian Cox processes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 2945–2952.
  • C. Liu, L. Sharan, E. Adelson, and R. Rosenholtz, “Exploring features in a Bayesian framework for material recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 239– 246.
  • L. Sharan, C. Liu, R. Rosenholtz, and E. H. Adelson, “Recognizing materials using perceptually inspired features,” Int. J. Comput. Vis., vol. 103, pp. 348–371, 2013.
  • D. Hu and L. Bo, “Toward robust material recognition for everyday objects,” in Proc. Brit. Mach. Vis. Conf., 2011, pp. 48.1–48.11.
  • L. Bo and C. Sminchisescu, “Efficient match kernels between sets of features for visual recognition,” in Proc. Adv. Neural Inf. Process. Syst., vol. 1730, 2009, p. 1731.
  • Z. Liao, J. Rock, Y. Wang, and D. Forsyth, “Non-parametric filtering for geometric detail extraction and material representation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2013, pp. 963– 970.
  • W. Li and M. Fritz, “Recognizing materials from virtual examples,” in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 345–358.
  • C. Rother, V. Kolmogorov, and A. Blake, “GrabCut: Interactive foreground extraction using iterated graph cuts,” in ACM Trans. Graph, 2004, vol. 23, pp. 309–314.
  • Y. Chai, “Recognition between a large number of flower species,” Masters’ thesis, Department of Electrical Engineering, Swiss Federal Inst. Technol. Zurich, Switzerland, 2011.
  • X. Yuan and S. Yan, “Visual classification with multi-task joint sparse representation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 3493–3500.
  • Z. Wang, J. Feng, S. Yan, and H. Xi, “Linear distance coding for image classification,” IEEE Trans. Image Process., vol. 22, no. 2, pp. 537–548, Feb. 2013.
  • Y. Chai, E. Rahtu, V. Lempitsky, L. Van Gool, and A. Zisserman, “Tricos: A tri-level class-discriminative co-segmentation method for image classification,” in Proc. Eur. Conf. Comput. Vis., 2012, pp. 794–807.
  • J. Wu and J. Rehg, “Centrist: A visual descriptor for scene categorization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1489–1501, Aug. 2011.
  • H. Ling and D. Jacobs, “Shape classification using the innerdistance,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 2, pp. 286–299, Feb. 2007.
  • P. Felzenszwalb and J. Schwartz, “Hierarchical matching of deformable shapes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1–8.
  • S. Zhang, Y. Lei, T. Dong, and X.-P. Zhang, “Label propagation based supervised locality projection analysis for plant leaf classification,” Pattern Recognit., vol. 46, pp. 1891–1897, 2013.
  • B. Ni, M. Xu, J. Tang, S. Yan, and P. Moulin, “Omni-range spatial contexts for visual classification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 3514–3521.
  • A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic representation of the spatial envelope,” Int. J. Comput. Vis., vol. 42, pp. 145–175, 2001.
  • L. Fei-Fei and P. Perona, “A Bayesian hierarchical model for learning natural scene categories,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2005, pp. 524–531.
  • J. C. van Gemert, C. J. Veenman, A. W. M. Smeulders, and J. M. Geusebroek, “Visual word ambiguity,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 7, pp. 1271–1283, Jul. 2009.
  • S. N. Parizi, J. G. Oberlin, and P. F. Felzenszwalb, “Reconfigurable models for scene recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 2775–2782.
  • L.-J. Li, H. Su, E. P. Xing, and L. Fei-Fei, “Object bank: A high-level image representation for scene classification and semantic feature sparsification,” in Proc. Adv. Neural Inf. Process. Syst., 2010, pp. 1378–1386.
  • L. Wang, Y. Li, J. Jia, J. Sun, D. Wipf, and J. M. Rehg, “Learning sparse covariance patterns for natural scenes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 2767–2774.
  • R. Kwitt, N. Vasconcelos, and N. Rasiwasia, “Scene recognition on the semantic manifold,” in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 359–372.
  • F. Sadeghi and M. F. Tappen, “Latent pyramidal regions for recognizing scenes,” in Proc. 12th Eur. Conf. Comput. Vis., 2012, pp. 228– 241.
  • A. Quattoni and A. Torralba, “Recognizing indoor scenes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 413–420.
  • M. Pandey and S. Lazebnik, “Scene recognition and weakly supervised object localization with deformable part-based models,” in Proc. Int. Conf. Comput. Vis., 2011, pp. 1307–1314.
  • J. Zhu, L.-J. Li, L. Fei-Fei, and E. P. Xing, “Large margin learning of upstream scene understanding models,” in Proc. Adv. Neural Inf. Process. Syst., 2010, pp. 2586–2594.
  • P. Hobson, G. Percannella, M. Vento, and A. Wiliem, “Competition on cells classification by fluorescent image analysis,” in Proc. IEEE 20th Int. Conf. Image Process., 2013. [Online]. Available: http://nerone.diiie.unisa.it/contest-icip-2013/index.shtml

    Xianbiao Qi received the BE degree in information engineering from the Beijing University of Posts and Telecommunications (BUPT) in 2008. He is currently working toward the PhD degree at BUPT. He visited the Web Search and Mining Group in Microsoft Research Asia (MSRA) as a visiting student from January 2011 to May 2012. His research interests include texture-relevant computer vision applications, including discriminative texture feature design, and texture and material recognition.
  • Rong Xiao received the PhD degree from Nanjing University, China, in 2001. He joined Microsoft Research China as an associate researcher in July 2001. He is currently a senior research software engineer at Microsoft Bing, Redmond. He has published approximately 30 papers at leading conferences such as ICCV, CVPR, ECCV, and ACM Multimedia. His research interests include statistical machine learning, face detection and recognition, object detection and tracking, and local feature design.
  • Chun-Guang Li received the BE degree from Jilin University in 2002 and the PhD degree from the Beijing University of Posts and Telecommunications (BUPT) in 2007.
  • Currently, he is a lecturer with the School of Information and Communication Engineering, BUPT. From July 2011 to April 2012, he visited the Visual Computing Group, Microsoft Research Asia. From December 2012 to November 2013, he visited the Vision, Dynamics and Learning lab, Johns Hopkins University. His research interests include statistical machine learning, compressive sensing, and pattern recognition. He is a member of the IEEE.
  • Yu Qiao received the PhD degree from the University of Electro-Communications in Japan, in 2006. He was a JSPS fellow, and then a project assistant professor with the University of Tokyo from 2007 to 2010. He is currently a professor with the Shenzhen Institutes of Advanced Technology at the Chinese Academy of Science. His research interests include pattern recognition, computer vision, multimedia, image processing, and machine learning. He has published more than 90 papers in these fields. He received the Lu Jiaxi Young Researcher Award from the Chinese Academy of Science in 2012. He is a senior member of the IEEE.
  • Jun Guo received the BE and ME degrees from the Beijing University of Posts and Telecommunications (BUPT), China, in 1982 and 1985, respectively, and the PhD degree from the Tohuku-Gakuin University, Japan, in 1993. At present, he is a professor and vice president of BUPT. His publications cover CVPR, ICCV, ECCV, and the IEEE Transactions on Pattern Analysis and Machine Intelligence. His research interests include pattern recognition theory and application, information retrieval, content-based information security, and network management.
  • Xiaoou Tang received the BS degree from the University of Science and Technology of China, Hefei, in 1990, and the MS degree from the University of Rochester, New York, in 1991, and the PhD degree from the Massachusetts Institute of Technology, in 1996. He is a professor in the Department of Information Engineering at the Chinese University of Hong Kong. He worked as the group manager of the Visual Computing Group at Microsoft Research Asia from 2005 to 2008. He was a program chair of the IEEE International Conference on Computer Vision (ICCV) 2009 and is an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the International Journal of Computer Vision. His research interests include computer vision, pattern recognition, and video processing. He received the Best Paper Award at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2009. He is a fellow of the IEEE.