
Video Inpainting Patch Propagation of Frames

A. Santha Rubia, S. Soundari

Abstract


Video inpainting is a technique that repairs damaged regions or removes unwanted objects in a video. To deal with such problems, not only should a robust video inpainting algorithm be used, but a structure-generation technique also needs to be adopted in the inpainting procedure. In this paper, the exemplar-based video inpainting method is extended by incorporating the sparsity of natural image patches. Patch priority and patch representation are the two crucial steps for patch propagation in the extended exemplar-based inpainting approach. First, patch structure sparsity is designed to measure the confidence that a patch lies on image structure (e.g., an edge or corner) by the sparseness of its nonzero similarities to neighboring patches. A patch with larger structure sparsity is assigned higher priority for further inpainting. Second, it is assumed that the patch to be filled can be represented by a sparse linear combination of candidate patches under a local patch-consistency constraint, in a framework of sparse representation. The proposed exemplar-based inpainting algorithm is based on patch propagation, inwardly propagating image patches from the source region into the interior of the target region, patch by patch. In each iteration of patch propagation, the algorithm is decomposed into two procedures: patch selection and patch inpainting.
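The following is a minimal sketch, not the authors' implementation, of one patch-propagation iteration as described above: a structure-sparsity score used for patch selection, followed by filling the selected patch with a sparse linear combination of candidate source patches. The patch size, sparsity level, Gaussian similarity weighting, and least-squares solver are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

PATCH = 9        # assumed patch size (9x9, flattened to length-81 vectors)
N_ATOMS = 5      # assumed sparsity level for the linear combination


def similarities(target, candidates, sigma=5.0):
    """Normalized similarity of a partly known target patch to each candidate source patch."""
    known = ~np.isnan(target)
    sq_err = np.array([np.mean((c[known] - target[known]) ** 2) for c in candidates])
    w = np.exp(-sq_err / (2.0 * sigma ** 2))
    return w / (w.sum() + 1e-12)


def structure_sparsity(w):
    """Confidence that the patch lies on structure: sparser similarity vectors score higher."""
    # w is nonnegative and sums to 1, so this lies in [1/sqrt(len(w)), 1];
    # it approaches 1 when only a few candidates are similar (edge/corner).
    return np.linalg.norm(w)


def sparse_fill(target, candidates):
    """Replace missing pixels by a sparse linear combination of the best candidates."""
    known = ~np.isnan(target)
    w = similarities(target, candidates)
    support = np.argsort(w)[-N_ATOMS:]                    # keep only a few atoms (sparsity)
    A = np.stack([candidates[i][known] for i in support], axis=1)
    coef, *_ = np.linalg.lstsq(A, target[known], rcond=None)
    estimate = np.stack([candidates[i] for i in support], axis=1) @ coef
    filled = target.copy()
    filled[~known] = estimate[~known]
    return filled


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = [rng.random(PATCH * PATCH) for _ in range(50)]   # flattened source patches
    target = candidates[3].copy()
    target[:30] = np.nan                                          # simulate the missing region
    w = similarities(target, candidates)
    print("structure sparsity (priority term):", structure_sparsity(w))
    print("filled patch shape:", sparse_fill(target, candidates).shape)
```

In a full exemplar-based pipeline, a score of this kind would be computed for every patch on the fill front, the highest-priority patch would be inpainted first, and the front would then be updated before the next iteration.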

Keywords


Image Inpainting, Patch Propagation, Patch Sparse Representation, Video Inpainting.






This work is licensed under a Creative Commons Attribution 3.0 License.