
Invertible motion blur in video

ACM Trans. Graph. 28, no. 3 (2009): 1-8

Citations 120 | Views 45
EI

Abstract

We show that motion blur in successive video frames is invertible even if the point-spread function (PSF) due to motion smear in a single photo is non-invertible. Blurred photos exhibit nulls (zeros) in the frequency transform of the PSF, leading to an ill-posed deconvolution. Hardware solutions to avoid this require specialized devices…
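The ill-posedness can be seen directly in the frequency domain. A minimal sketch (my illustration, not the paper's code): a box PSF, which models constant-velocity smear, has near-zero values in its discrete Fourier transform, so dividing the blurred spectrum by the PSF spectrum blows up at those nulls.

```python
import numpy as np

# Box PSF modeling a constant-velocity smear over `length` pixels.
# Its DFT magnitude dips to (numerically) zero at certain frequencies,
# which is what makes single-frame deconvolution ill-posed.
def psf_spectrum(length, n=64):
    psf = np.zeros(n)
    psf[:length] = 1.0 / length   # normalized box blur
    return np.abs(np.fft.fft(psf))

spec = psf_spectrum(8)            # 8-pixel motion smear
print(spec.min())                 # essentially zero: spectral nulls
```

Any frequency component of the sharp image that falls on such a null is destroyed and cannot be recovered from that single frame.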

代码

数据

0
Introduction
  • Motion blur is a common problem in photographing fast moving objects.
  • Consider deblurring a fast moving object in front of a static background.
  • Automatic deblurring involves three critical components: (a) maintaining invertible PSF, (b) estimating the motion of the moving parts, and (c) segmenting the moving objects from the static background.
  • The authors propose a unique approach based on ordinary cameras and show joint-invertibility of blurs in video frames via the concept of frequency-domain null-filling.
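The null-filling idea can be sketched numerically (an illustration with hypothetical smear lengths, not the authors' code): two box PSFs with different lengths are each individually non-invertible, but if their spectral zeros do not coincide, then at every frequency at least one frame retains signal, so the pair is jointly invertible.

```python
import numpy as np

# Two box PSFs with different (here hypothetical) smear lengths.
# Each spectrum has nulls, but the nulls fall at different frequencies,
# so the per-frequency best magnitude stays bounded away from zero.
def box_spectrum(length, n=60):
    psf = np.zeros(n)
    psf[:length] = 1.0 / length
    return np.abs(np.fft.fft(psf))

s1, s2 = box_spectrum(4), box_spectrum(5)
joint = np.maximum(s1, s2)        # best per-frequency magnitude
print(s1.min(), s2.min())         # each essentially zero
print(joint.min())                # strictly positive: jointly invertible
```

In the paper's setting, varying the exposure time per frame changes the smear length per frame, which is what moves the nulls apart.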
Highlights
  • Motion blur is a common problem in photographing fast moving objects.
  • We propose a unique approach based on ordinary cameras and show joint-invertibility of blurs in video frames via the concept of frequency-domain null-filling.
  • We show that by varying the exposure of each frame within a video, point-spread function null-filling can be achieved for object motion.
  • We showed that motion blur in video can be made invertible by combining non-invertible point-spread functions that do not have common zeros.
  • For a complete deblurring solution, segmentation and point-spread function estimation are as important as point-spread function invertibility.
  • We demonstrated that a repeated sequence of varying-exposure frames can be used for automatic point-spread function estimation and segmentation in challenging object-motion-blur scenarios.
Results
  • Using the SDK provided with the camera, the exposure time for each frame could be changed.
  • The authors placed the object on a variable speed toy train to capture datasets.
  • In order to find optimal exposures, the authors bound each exposure within Tmin and Tmax to avoid saturation and unusable photos.
  • For N = 3, Tmin = 30 ms and Tmax = 50 ms, the optimized exposures were 30, 35, and 42 ms.
  • The authors capture at least 2N images to allow PSF estimation.
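The exposure-selection step above can be sketched as a brute-force search (an illustration only; the fixed object speed, the scoring function, and the integer-millisecond grid are my assumptions, not the authors' exact optimizer): choose N exposures within [Tmin, Tmax] that maximize the worst-case per-frequency magnitude of the jointly best PSF spectrum.

```python
import itertools
import numpy as np

# Illustrative brute-force exposure search. A (hypothetical) constant
# object speed maps each exposure time to a box-PSF smear length; the
# score rewards exposure sets whose spectra jointly avoid deep nulls.
def box_spectrum(length, n=256):
    psf = np.zeros(n)
    psf[:max(length, 1)] = 1.0 / max(length, 1)
    return np.abs(np.fft.fft(psf))

def joint_score(exposures, speed_px_per_ms=0.5):
    specs = [box_spectrum(int(round(t * speed_px_per_ms)))
             for t in exposures]
    # worst frequency of the per-frequency best frame
    return np.maximum.reduce(specs).min()

candidates = range(30, 51)        # Tmin = 30 ms, Tmax = 50 ms (from the paper)
best = max(itertools.combinations(candidates, 3), key=joint_score)
print(best)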
Conclusion
  • The authors showed that motion blur in video can be made invertible by combining non-invertible PSFs that do not have common zeros.
  • PSF null-filling can be achieved on machine vision cameras as well as off-the-shelf digital SLRs using exposure bracketing, without requiring additional hardware or camera motion.
  • For a complete deblurring solution, segmentation and PSF estimation are as important as PSF invertibility.
  • The authors demonstrated that a repeated sequence of varying-exposure frames can be used for automatic PSF estimation and segmentation in challenging object-motion-blur scenarios.
Related Work
  • PSF Manipulation: Specialized capture devices employ two important classes of techniques for engineering the PSF to make it (a) invertible and/or (b) invariant. For the defocus PSF, wavefront coding [Dowski and Cathey 1995] uses a cubic phase plate in front of the lens to make the PSF invariant to scene depth. This can also be achieved by lateral sensor motion [Nagahara et al. 2008]. However, these approaches introduce defocus blur on the scene parts originally in focus. Coded exposure [Raskar et al. 2006] flutters the shutter with a broadband binary code to make the PSF invertible. Accelerating camera motion [Levin et al. 2008b] makes the motion PSF invariant to the speed of the object (requiring a priori knowledge of the motion direction), at the cost of blurring static parts. Our approach does not modify the camera but indirectly engineers the joint PSF across frames by carefully choosing the exposure times.
References
  • AGRAWAL, A., AND RASKAR, R. 2007. Resolving Objects at Higher Resolution from a Single Motion-blurred Image. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • BASCLE, B., BLAKE, A., AND ZISSERMAN, A. 1996. Motion Deblurring and Super-resolution from an Image Sequence. In Proc. European Conf. Computer Vision, vol. 2, 573–582.
  • BEN-EZRA, M., AND NAYAR, S. 2004. Motion-based Motion Deblurring. IEEE Trans. Pattern Anal. Machine Intell. 26, 6 (Jun.), 689–698.
  • CHEN, W.-G., NANDHAKUMAR, N., AND MARTIN, W. N. 1996. Image Motion Estimation from Motion Smear-A New Computational Model. IEEE Trans. Pattern Anal. Mach. Intell. 18, 4 (Apr.), 412–425.
  • CHEN, J., YUAN, L., TANG, C.-K., AND QUAN, L. 2008. Robust Dual Motion Deblurring. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • CHO, S., MATSUSHITA, Y., AND LEE, S. 2007. Removing Non-Uniform Motion Blur from Images. In Proc. Int'l Conf. Computer Vision, 1–8.
  • DAI, S., AND WU, Y. 2008. Motion from Blur. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • DEBEVEC, P. E., AND MALIK, J. 1997. Recovering High Dynamic Range Radiance Maps from Photographs. In Proc. SIGGRAPH 97, 369–378.
  • DOWSKI, E. R., AND CATHEY, W. 1995. Extended Depth of Field through Wavefront Coding. Appl. Optics 34, 11 (Apr.), 1859–1866.
  • FERGUS, R., SINGH, B., HERTZMANN, A., ROWEIS, S. T., AND FREEMAN, W. T. 2006. Removing Camera Shake from a Single Photograph. ACM Trans. Graph. 25, 3 (Jul.), 787–794.
  • GROSSBERG, M., AND NAYAR, S. 2003. High Dynamic Range from Multiple Images: Which Exposures to Combine? In ICCV Workshop on Color and Photometric Methods in Computer Vision (CPMCV).
  • JANSSON, P. 1997. Deconvolution of Images and Spectra, 2nd ed. Academic Press.
  • JIA, J. 2007. Single Image Motion Deblurring using Transparency. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • JOSHI, N., SZELISKI, R., AND KRIEGMAN, D. 2008. PSF Estimation using Sharp Edge Prediction. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • LEVIN, A., FERGUS, R., DURAND, F., AND FREEMAN, W. T. 2007. Image and Depth from a Conventional Camera with a Coded Aperture. ACM Trans. Graph. 26, 3 (Jul.), 70.
  • LEVIN, A., LISCHINSKI, D., AND WEISS, Y. 2008. A Closed-Form Solution to Natural Image Matting. IEEE Trans. Pattern Anal. Mach. Intell. 30, 2, 228–242.
  • LEVIN, A., SAND, P., CHO, T. S., DURAND, F., AND FREEMAN, W. T. 2008. Motion-Invariant Photography. ACM Trans. Graph. 27, 3 (Aug.), 71.
  • LEVOY, M. 2005. High Performance Imaging using Large Camera Arrays. ACM Trans. Graph. 24, 3 (Jul.), 765–776.
  • LUCY, L. 1974. An Iterative Technique for the Rectification of Observed Distributions. Astronomical Journal 79, 745–754.
  • MANN, S., AND PICARD, R. W. 1995. Being Undigital with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures. In Proc. IS&T 48th Annual Conference, 422–428.
  • NAGAHARA, H., KUTHIRUMMAL, S., ZHOU, C., AND NAYAR, S. 2008. Flexible Depth of Field Photography. In Proc. European Conf. Computer Vision, 60–73.
  • PICCARDI, M. 2004. Background Subtraction Techniques: A Review. In Proc. IEEE Intl. Conf. Systems, Man and Cybernetics.
  • RASKAR, R., AGRAWAL, A., AND TUMBLIN, J. 2006. Coded Exposure Photography: Motion Deblurring using Fluttered Shutter. ACM Trans. Graph. 25, 3 (Jul.), 795–804.
  • RAV-ACHA, A., AND PELEG, S. 2005. Two Motion-blurred Images are Better than One. Pattern Recognition Letters 26, 3, 311–317.
  • RICHARDSON, W. 1972. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. of America 62, 1, 55–59.
  • SCHULTZ, R. R., AND STEVENSON, R. L. 1996. Extraction of High-Resolution Frames from Video Sequences. IEEE Trans. Image Processing 5 (Jun.), 996–1011.
  • SELLENT, A., EISEMANN, M., AND MAGNOR, M. 2008. Calculating Motion Fields from Images with Two Different Exposure Times. Tech. rep., Computer Graphics Lab, Technical University of Braunschweig.
  • SHAN, Q., JIA, J., AND AGARWALA, A. 2008. High-Quality Motion Deblurring from a Single Image. ACM Trans. Graph. 27, 3 (Aug.), 73.
  • SHECHTMAN, E., CASPI, Y., AND IRANI, M. 2002. Increasing Space-Time Resolution in Video. In Proc. European Conf. Computer Vision, 753–768.
  • TAI, Y.-W., DU, H., BROWN, M. S., AND LIN, S. 2008. Image/Video Deblurring using a Hybrid Camera. In Proc. Conf. Comp. Vision and Pattern Recognition, 1–8.
  • TELLEEN, J., SULLIVAN, A., YEE, J., GUNAWARDANE, P., WANG, O., COLLINS, I., AND DAVIS, J. 2007. Synthetic Shutter Speed Imaging. In Proc. Eurographics, 591–598.
  • YUAN, L., SUN, J., QUAN, L., AND SHUM, H.-Y. 2007. Image Deblurring with Blurred/Noisy Image Pairs. ACM Trans. Graph. 26, 3 (Jul.), 1.
  • YUAN, L., SUN, J., QUAN, L., AND SHUM, H.-Y. 2008. Progressive Inter-Scale and Intra-Scale Non-Blind Image Deconvolution. ACM Trans. Graph. 27, 3 (Aug.), 74.