
Review on Content Based Video Retrieval

Rujul Mankad, Priyanka Buch

Abstract


In recent years, advances in technology have made huge amounts of multimedia content available in video repositories. As users shift from text-based retrieval systems to content-based retrieval systems, there is a growing demand for content-based video retrieval. Retrieving videos that match a user's interest from a large video warehouse is difficult. The efficiency of a retrieval system depends on the search method used, and an unsuitable search method may degrade retrieval performance. Feature selection plays an important role in content-based video retrieval: features are selected, indexed and ranked according to the user's interest. Multimedia data therefore need to be made accessible and searchable with great ease and flexibility. This paper offers an outline of the existing techniques in content-based video retrieval and analyzes future research directions.


Keywords


Abrupt Transition, CBVR, Feature Extraction, Key Frame Extraction, Shot Boundary Detection, Video Retrieval
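
The keywords above name the usual first stage of a CBVR pipeline, shot boundary detection. As a minimal sketch of one common approach (not the authors' method), the example below flags abrupt transitions by thresholding a colour-histogram distance between consecutive frames; the OpenCV calls are standard, but the file name, bin counts and threshold are illustrative assumptions.

```python
# Minimal sketch of abrupt-transition (hard cut) detection via
# colour-histogram differencing between consecutive frames.
# Bin counts and the threshold are illustrative assumptions.
import cv2

def detect_cuts(video_path, threshold=0.4, bins=(8, 8, 8)):
    """Return frame indices where a hard cut is suspected."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 3-D colour histogram of the frame, normalised for comparison
        hist = cv2.calcHist([frame], [0, 1, 2], None, list(bins),
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 at a cut
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                cuts.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return cuts

# Example usage (hypothetical file name):
# print(detect_cuts("news_clip.mp4"))
```

Gradual transitions (fades, dissolves) typically need more than a single-frame threshold; the surveyed literature covers information-theoretic and supervised approaches for those cases.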





This work is licensed under a Creative Commons Attribution 3.0 License.