
Human Activity Recognition using Motion Feature and Two Stage Classification

M. M. Sardeshmukh, Dr. M. T. Kolte

Abstract


Automatic analysis of ongoing video to understand what is happening in a monitored area is of great practical use in many applications. Understanding human activities from video is useful in applications such as video surveillance, patient monitoring, and content-based retrieval. Proper selection of features plays an important role in the performance of an activity recognition system. The goal of this paper is to investigate features that describe human activities and to use them to improve the recognition rate. A novel motion feature and a two-stage classification scheme are proposed. Four classifiers, namely k-nearest neighbours (KNN), support vector machine (SVM), neural network (NN), and naive Bayes (NB), are used for classification. The extracted features are represented in two subspaces obtained with PCA and LDA. Observations show that use of the motion feature improves the recognition rate for all the classifiers.
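The sketch below is an illustrative reading of the pipeline described in the abstract, not the authors' implementation: motion features (stood in for here by a simple frame-difference descriptor) are projected into PCA and LDA subspaces in a first stage and then classified with KNN, SVM, NN, and NB in a second stage. The descriptor, the scikit-learn setup, and all parameter choices are assumptions made for illustration only.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB


def motion_feature(frames):
    """Illustrative motion descriptor (an assumption, not the paper's feature):
    mean absolute frame difference pooled over an 8x8 spatial grid."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    energy = diffs.mean(axis=0)                                 # (H, W)
    h, w = energy.shape
    grid = energy[: h - h % 8, : w - w % 8].reshape(8, h // 8, 8, w // 8)
    return grid.mean(axis=(1, 3)).ravel()                       # 64-dim vector


# Placeholder data; in practice each row of X would be motion_feature(clip)
# for one RGB-D video clip, and y would hold the activity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: represent the features in a PCA or LDA subspace.
for sub_name, proj in [("PCA", PCA(n_components=20)),
                       ("LDA", LinearDiscriminantAnalysis())]:
    Z_tr = proj.fit_transform(X_tr, y_tr)   # y is ignored by PCA, used by LDA
    Z_te = proj.transform(X_te)

    # Stage 2: classify with the four classifiers compared in the paper.
    for clf_name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                          ("SVM", SVC(kernel="rbf")),
                          ("NN", MLPClassifier(max_iter=500, random_state=0)),
                          ("NB", GaussianNB())]:
        clf.fit(Z_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(Z_te))
        print(f"{sub_name} + {clf_name}: accuracy = {acc:.2f}")

With real data, comparing the per-classifier accuracies with and without the motion feature would mirror the comparison reported in the paper.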


Keywords


Human Activity Recognition, Motion Feature, PCA, LDA, RGB-D Data.





Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.