CiiT International Journal of Digital Image Processing
Print: ISSN 0974-9691 & Online: ISSN 0974-9586


Issue : October 2011
DOI: DIP102011001
Title: An Automated Algorithm for Classification and Quantitative Characterization of Breast Cancer by Thermal Imaging
Authors: N. Selvarasu, Alamelu Nachiappan and N.M. Nandhitha
Keywords: Breast Cancer, Fibroadenoma, Region Growing Thermographs, Thresholding, Wavelet
Abstract:
      Clinical infrared thermography, a non-contact, non-invasive, non-hazardous technique, is accepted as a reliable diagnostic tool for detecting cancer even at early stages of formation. Here, temperature variations are mapped into thermographs. The temperature distribution is uniform and symmetric under normal conditions; an abnormality is indicated by non-uniform, non-symmetric thermal patterns in a thermograph. An abnormality may be due to pain, swelling, tuberculosis, fibroadenoma or cancer. This paper proposes an automated technique for distinguishing cancer regions from fibroadenoma. Cancer regions are also extracted using thresholding and region growing techniques, and the significance of wavelet based smoothing and cascaded wavelet based smoothing in removing undesirable regions is studied.
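As an illustration (not part of the paper), the region growing step mentioned in the abstract can be sketched as follows; the thermograph values, seed and tolerance are hypothetical:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`: add 4-connected neighbours whose
    value is within `tol` of the seed value."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= tol:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# A hot spot (values ~200) inside cooler tissue (values ~100).
thermo = [
    [100, 100, 100, 100],
    [100, 200, 210, 100],
    [100, 205, 100, 100],
    [100, 100, 100, 100],
]
hot = region_grow(thermo, (1, 1), tol=20)
```

In practice the seed would come from the thresholding stage and the tolerance from the temperature statistics of the thermograph.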

Full PDF


Issue : October 2011
DOI: DIP102011002
Title: Comparison of Skew Detection and Correction Techniques by Applying on Gurmukhi Script
Authors: Loveleen Kaur and Simpel Jindal

Keywords: Document Processing, Gurmukhi Script, Skew Angle, Skew Correction, Skew Detection.
Abstract:
     This paper describes techniques for detecting skew introduced during the scanning of documents, and discusses the tool used to implement them. The techniques are compared on the basis of the measured angle, with the algorithms applied to Gurmukhi script. The methods provide a very efficient way to calculate skew. Correcting a skewed scanned document image is important because skew directly affects the reliability and efficiency of the segmentation and feature extraction stages. The method yields an accurate measure of skew and of within-line and between-line spacings, and locates text lines and text blocks. Both detection and correction of the images are performed.
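One common baseline for skew estimation (not necessarily the techniques compared in the paper) is a least-squares fit through text baseline pixels; the coordinates below are hypothetical:

```python
import math

def skew_angle(points):
    """Estimate skew (in degrees) from (x, y) pixel coordinates lying on a
    text baseline, via a least-squares straight-line fit."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return math.degrees(math.atan2(num, den))

# Baseline pixels rising 1 row per 10 columns, i.e. slope 0.1.
pts = [(0, 0), (10, 1), (20, 2), (30, 3)]
angle = skew_angle(pts)
```

The corrected image would then be obtained by rotating through the negative of the estimated angle.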

Full PDF


Issue : October 2011
DOI: DIP102011003
Title: Recognition of Degraded Images by Legendre Moment Invariants
Authors: T. Sudheer Kumar and K. Ashok Babu
Keywords: Blurred Image, Centrally Symmetric, Legendre Moments, Pattern Recognition, and Symmetric Blur.
Abstract:
      Analysis and interpretation of an image acquired by a non-ideal imaging system is a key problem in many application areas. Existing methods for obtaining blur invariants that are invariant with respect to centrally symmetric blur are based on geometric or complex moments. In this paper, we propose an alternative approach based on orthogonal Legendre moments. We derive features for image representation that are invariant with respect to blur regardless of the degradation point spread function (PSF), provided that it is centrally symmetric. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises, and the different approaches are compared with previous methods in terms of pattern recognition accuracy. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments.

Full PDF


Issue : October 2011
DOI: DIP102011004
Title: News Video Indexing System Using Inserted-Caption Detection and its Retrieval
Authors: Sanjoy Ghatak, Sonu Kumar, Soham Banerjee and Akshay Kumar Singh
Keywords: Inserted Caption Detection, Optical Character Recognition, Edge/Field Detection, Shot Boundaries Detection, Text Extraction from Image.
Abstract:
     Data compression, coupled with the availability of high-bandwidth networks and large storage capacity, has led to an overwhelming production of multimedia content. This paper briefly describes techniques for content-based analysis, retrieval and filtering of news videos, and focuses on basic methods for extracting features and information that enable indexing and search of any news video based on its content and semantics. The major themes covered by the study include shot segmentation, key frame extraction, feature extraction, and relevance feedback. A new caption text extraction algorithm that takes full advantage of the temporal information in a video sequence is developed.

Full PDF


Issue : October 2011
DOI: DIP102011005
Title: Inter Color Local Ternary Patterns for Image Indexing and Retrieval
Authors: P.V.N. Reddy and K. Satya Prasad
Keywords: CBIR, Feature Extraction, Local Binary Patterns, and Inter Color Local Ternary Patterns.
Abstract:
      A Content Based Image Retrieval (CBIR) system using Inter Color Local Ternary Pattern (ICLTP) features, with a high retrieval rate and low computational complexity, is proposed in this paper. The LTP extracts information based on the distribution of edges in an image, which makes it a powerful tool for feature extraction from the images in the database. First the image is separated into red (R), green (G), and blue (B) color planes, from which the inter color local ternary patterns (ICLTP) are evaluated by considering the local difference between the center pixel and its neighbors, exchanging the center pixel of one color plane with those of the other color planes. Improved results in terms of computational complexity and retrieval efficiency are observed over recent work based on Local Binary Pattern (LBP) CBIR systems. The d1 distance is used as the similarity measure in the proposed CBIR system.
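A minimal single-window sketch of the local ternary pattern (the single-plane building block, not the full inter-color scheme of the paper) might look like this; the window values and threshold are hypothetical:

```python
def ltp_codes(window, t):
    """Local ternary pattern of the 8 neighbours of the centre pixel:
    +1 if neighbour > centre + t, -1 if neighbour < centre - t, else 0."""
    c = window[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = []
    for r, col in order:
        v = window[r][col]
        if v > c + t:
            code.append(1)
        elif v < c - t:
            code.append(-1)
        else:
            code.append(0)
    return code

def split_upper_lower(code):
    """Split the ternary code into the two binary LBP-style codes that are
    usually histogrammed separately."""
    upper = [1 if s == 1 else 0 for s in code]
    lower = [1 if s == -1 else 0 for s in code]
    return upper, lower

w = [[90, 100, 115],
     [95, 100, 100],
     [80, 102, 130]]
code = ltp_codes(w, t=5)
upper, lower = split_upper_lower(code)
```

In the inter-color variant described by the abstract, the centre pixel would come from one color plane and the neighbours from another.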

Full PDF


Issue : October 2011
DOI: DIP102011006
Title: Contrast Enhancement of Natural Images Using Histogram Equalization Technique
Authors: Bindu Goyal and Vipan Bansal
Keywords: BBHE, DSIHE, Histogram Equalization, MMBEBHE.
Abstract:
     Image enhancement improves the appearance of an image by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. A number of contrast enhancement techniques exist to improve the visual appearance of an image. Many images, such as medical images, remote sensing images, electron microscopy images and even real-life photographs, suffer from poor contrast, so it is necessary to enhance their contrast. Histogram equalization (HE) is widely used for contrast enhancement in a variety of applications due to its simplicity and effectiveness. However, it tends to change the brightness of an image and hence is not suitable for consumer electronic products, where preserving the original brightness is essential to avoid annoying artifacts. In addition, the HE method tends to introduce unnecessary visual deterioration, including the saturation effect. Preserving the input brightness of the image and keeping the PSNR in the desired range are required to avoid generating non-existing artifacts in the output image. A number of techniques have been proposed to overcome these annoying effects, each with its own advantages and application areas. This paper presents a new contrast enhancement method based on histogram equalization which aims to better preserve image quality and contrast while enriching image detail.
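The classic histogram equalization that this family of methods (BBHE, DSIHE, MMBEBHE) refines can be sketched as follows; the pixel list is a hypothetical low-contrast patch:

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: map each grey level through the
    normalised cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard mapping: round((cdf - cdf_min) / (n - cdf_min) * (levels - 1)).
    table = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
             if n > cdf_min else 0 for c in cdf]
    return [table[p] for p in pixels]

# A low-contrast patch squeezed into [100, 103] spreads over [0, 255].
flat = [100, 100, 101, 101, 102, 102, 103, 103]
eq = equalize(flat)
```

The brightness-preserving variants in the keywords differ mainly in splitting the histogram (e.g. about the mean) and equalizing the parts separately.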

Full PDF


Issue : October 2011
DOI: DIP102011007
Title: A Segmented Morphological Approach to Detect Tumor in Lung Images
Authors: Poonam Bhayan and Gagandeep Jindal
Keywords: Contrast Stretching, Gabor Filter, Histogram Modeling, Morphological operations, Watershed Segmentation.
Abstract:
    Image processing is one of the fastest-growing research areas and is now deeply integrated with the medical and biotechnology fields. Image processing can be used to analyze medical and MRI images to find abnormalities, which may take the form of a tumor, patch or scar on the human body. We present such an approach to detect tumors in lung images. In the proposed approach we apply a series of operations, first to enhance the image and then to detect the tumor: image enhancement and noise reduction techniques are used to improve image quality, after which watershed segmentation and morphological operations are applied to obtain the desired result. The algorithm has been tried on a number of different images from different angles and has consistently given the correct output.
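As a small illustration of the morphological operations mentioned (the full watershed pipeline is paper-specific and not reproduced here), binary dilation with a cross-shaped structuring element can be sketched on a hypothetical mask:

```python
def dilate(mask, se=((0, 0), (0, 1), (1, 0), (-1, 0), (0, -1))):
    """Binary dilation: every foreground pixel stamps the structuring
    element `se` (offsets relative to the pixel) onto the output."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in se:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        out[nr][nc] = 1
    return out

seed = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
grown = dilate(seed)
```

Erosion is the dual operation; a dilation followed by an erosion (closing) is typically what fills small holes in a segmented tumor mask.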

Full PDF


Issue : October 2011
DOI: DIP102011008
Title: Interactive Image Retrieval using Genetic Algorithm and Orthogonal Moments
Authors: J.P. Ananth and Dr.V. Subbiah Bharathi
Keywords: Moment Features, Image Retrieval, Genetic Algorithm
Abstract:
   Image retrieval is a field of study concerned with searching and retrieving images from a large database. User participation in image retrieval systems has gained attention in recent research as a way to reduce dependence on the discrimination power of image features alone. In the proposed work, an interactive genetic algorithm is employed in which the user selects one of the retrieved images for the next stage of mutation. Moreover, dual Hahn moments, which are orthogonal and rotation invariant, are employed as effective image descriptors. Experiments were carried out on COREL images, and an average retrieval rate of 88% reveals the efficacy of the proposed work.

Full PDF


Issue : October 2011
DOI: DIP102011009
Title: Effective Multiple Object Motion Detection Using Iterated Training Algorithm
Authors: J. Ferdin Joe
Keywords: Motion Detection, Surveillance
Abstract:
  Motion detection in videos has been addressed with various methodologies. Existing systems are based on edge detection and detect motion as a single object by taking the movement of edges into account. But in sensitive applications such as satellite imaging, cancer cell or medical imaging systems, the movement of sub-objects must also be taken into account for efficient decision making. A new methodology has therefore been developed in this project for detecting the movement of multiple objects and sub-objects in sensitive video applications. A methodology for multiple object detection in static images was developed by Felzenszwalb et al.; the Iterated Training Algorithm (ITA) used for static images is adapted here to the case of videos. In this paper the modified ITA is applied to videos, and the movements of sub-objects in the video are detected. Webcam video is fed as input, and the performance measures of sensitivity and number of frames detected with motion are visualized. The performance measures show that the proposed ITA performs better than existing methods. The Multiple Instance method had better performance than ITA, but it requires more training than the proposed method. On the whole, this paper validates the advantages of the proposed methodology.

Full PDF


Issue : October 2011
DOI: DIP102011010
Title: A Robust, Low-Cost Approach to Face Detection and Face Recognition
Authors: Divya Jyoti, Aman Chadha, Pallavi Vaidya and M. Mani Roja
Keywords: Discrete Wavelet Transform, Face Detection, Face Recognition, Person Identification.
Abstract:
    In the domain of biometrics, recognition systems based on iris, fingerprint or palm print scans are often considered more dependable due to the extremely low variance of these traits over time. However, over the last decade the data processing capability of computers has increased manifold, making real-time video content analysis possible. The need of the hour is thus a robust and highly automated face detection and recognition algorithm with a credible accuracy rate. The proposed face detection and recognition system using the Discrete Wavelet Transform (DWT) accepts face frames as input from a database containing images from low-cost devices such as VGA cameras, webcams or even CCTVs, where image quality is inferior. The face region is detected using properties of the L*a*b* color space, and only the frontal face is extracted so that all additional background is eliminated. This extracted image is converted to grayscale and resized to 128 x 128 pixels. DWT is then applied to the entire image to obtain the coefficients. Recognition is carried out by comparing the DWT coefficients of the test image with those of the registered reference image, using a Euclidean distance classifier to validate the test image against the database. Accuracy for various levels of DWT decomposition is obtained and compared.
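The two computational steps of this pipeline (a one-level Haar DWT and Euclidean matching of coefficients) can be sketched as follows; the 4x4 "face" matrix is a toy stand-in for a 128x128 image, and the averaging-based Haar form is one common unnormalised convention:

```python
import math

def haar1d(v):
    """Single-level 1-D Haar transform: pairwise averages then differences."""
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    dif = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg + dif

def haar2d(img):
    """One 2-D decomposition level: transform rows, then columns."""
    rows = [haar1d(r) for r in img]
    cols = [haar1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def euclid(a, b):
    """Euclidean distance between two coefficient matrices."""
    fa = [x for row in a for x in row]
    fb = [x for row in b for x in row]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

face = [[8, 8, 2, 2],
        [8, 8, 2, 2],
        [4, 4, 6, 6],
        [4, 4, 6, 6]]
coeffs = haar2d(face)  # top-left 2x2 block holds the LL approximation
```

Recognition would compare `euclid` distances between the test image's coefficients and each registered reference, accepting the smallest.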

Full PDF


Issue : October 2011
DOI: DIP102011011
Title: An Automated System for Real Time Generation of Bispectral Hybrid Imageries
Authors: Satish R. Kulkarni, C.G. Patil and A.M. Khan
Keywords: Automated System, Bispectral Hybrid Imagery, Image Representation, Weather Nowcasting and Look-Up-Table.
Abstract:
      Satellite imageries are an indispensable source of information for weather prediction. The image data from weather satellites is acquired in real time, but preprocessing, information retrieval, and image product generation are generally offline activities carried out by human experts. This work aims at generating ready-to-use image products in real time. A Look-Up-Table (LUT) based method has been developed for encoding the processed information into the image. This paper describes the methodology employed to generate Conventional Image Products in general and Bispectral Hybrid Image Products in particular. On a trial basis this method is being used in the automated system operational at the Master Control Facility, Hassan, India to generate real-time, ready-to-use image products embedded with processed information. The resulting image is compared with the corresponding EUMETSAT (European Meteorological Satellite) image. Images taken by the two spacecraft on the same day and at the same time were compared, and good agreement was found in the temperature ranges of the features represented in both images.

Full PDF


Issue : October 2011
DOI: DIP102011012
Title: Application of Fuzzy Filter for Image Deblurring
Authors: Dr.S. Lakshmi Prabha
Keywords: Gaussian Noise, Fuzzy Logic, Image Processing, Membership Function
Abstract:
     Nonlinear techniques have recently assumed significance because they can suppress Gaussian noise (also called additive white noise) while preserving important signal elements such as edges and fine details, and can eliminate degradations occurring during signal formation or transmission through nonlinear channels. Among nonlinear techniques, fuzzy logic based approaches are important because they can reason with vague and uncertain information. This paper presents a new fuzzy filter for suppressing noise in the Lena image and in a satellite image, demonstrates the feasibility of the proposed fuzzy noise reduction approach, and compares it with the existing mean filter, median filter and non-local means algorithm. The proposed filtering method is more efficient at removing noise at low noise levels.
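The fuzzy filter itself is specific to the paper, but the median filter it is benchmarked against can be sketched on a hypothetical noisy patch:

```python
def median3x3(img):
    """3x3 median filter; border pixels are left unchanged."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = sorted(img[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]  # median of the 9 sorted values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],  # impulse at the centre
         [10, 10, 10]]
clean = median3x3(noisy)
```

Because the median discards outliers rather than averaging them in, the impulse is removed while a mean filter would only smear it.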

Full PDF


Issue : October 2011
DOI: DIP102011013
Title: A Survey on Texture Analysis of Mammogram for the Detection of Breast Cancer
Authors: D. Narain Ponraj, Sweety Kunjachan, Dr.P. Poongodi and J. Samuel Manoharan
Keywords: Breast Cancer, Classification, Malignant, Mammogram.
Abstract:
     Breast cancer is a leading cause of cancer death among women in the United States. Modern mammography is the only technique that has demonstrated the ability to detect breast cancer at an early stage with high sensitivity and specificity. The search for features in this kind of image is complicated by higher-frequency textural variations in image intensity, and the interpretation of mammograms is a skilled and difficult task; the high rate of false positives in mammography causes a large number of unnecessary biopsies. A characteristic feature of mammograms is their textured appearance, and texture extraction can reduce the number of false positives. The aim of this paper is to review existing approaches to texture extraction in the detection of breast cancer. Existing texture analysis algorithms are carefully studied and classified into three categories: texture analysis in the detection of masses, of microcalcifications, and of the tissue surrounding the region; different methods of texture extraction can be applied in each category. The identification of glandular tissue in breast X-rays is another important task in assessing left and right breast images. The appearance of glandular tissue in mammograms is highly variable, ranging from sparse streaks to dense blobs, while fatty regions are generally smooth and dark. Texture analysis provides a flexible approach to discriminating between glandular and fatty regions, so the importance of texture analysis is presented first in this paper. Each approach is reviewed according to its classification, and its merits and drawbacks are outlined. The reviewed results show that many approaches greatly improve the false positive and false negative reduction rates.

Full PDF


Issue : October 2011
DOI: DIP102011014
Title: A Quantitative Analysis of Frequency Domain Filters for Sector Scan SONAR Image Processing
Authors: Nagamani Modalavalasa, G. Sasi Bhushana Rao and K. Satya Prasad
Keywords: Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), SONAR Images, Speckle Noise.
Abstract:
     SONAR (Sound Navigation and Ranging) images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. Removing noise from SONAR images is still a challenging problem for researchers: there is no unique image enhancement technique for noise reduction, and the several approaches that have been introduced each have their own assumptions, advantages and disadvantages. This paper presents a performance comparison of frequency domain filtering techniques, namely low pass, high pass and band pass filters based on the fast Fourier transform, for the removal of underwater speckle noise from real sector scan SONAR images. The three filters are compared by computing the error metrics Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). The low pass filter is found to be the most suitable frequency domain filter, as it tends to reduce the speckle while preserving the structural features and textural information of the scene.
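The two error metrics used for the comparison are standard and can be sketched directly; the 2x2 images are hypothetical:

```python
import math

def mse(a, b):
    """Mean square error between two equal-sized grey images."""
    diffs = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(diffs) / len(diffs)

def psnr(a, b, peak=255):
    """Peak Signal to Noise Ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * math.log10(peak ** 2 / m)

ref = [[0, 0], [0, 0]]
deg = [[0, 0], [0, 255]]
err = mse(ref, deg)
quality = psnr(ref, deg)
```

A higher PSNR (lower MSE) against the reference indicates the filter that best preserved the scene, which is how the low pass filter is ranked first in the paper.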

Full PDF


Issue : October 2011
DOI: DIP102011015
Title: Earthquake Damage Assessment using Multi Temporal Satellite Images
Authors: Dr. Sanjay K. Jain
Keywords: Image Fusion, Change Detection, Remote Sensing, Damage Assessment.
Abstract:
     The work presented here is concerned with the problem of earthquake damage assessment using multi-temporal satellite images. Earthquakes are unavoidable natural hazards that cause extensive damage to the economy, the environment and people's lives. After an earthquake, rapid, accurate and reliable damage information is needed in the critical post-event hours to guide response activities. Disaster damage assessment using remotely sensed data can be carried out with a multi-temporal approach, which compares pre-damage and post-damage images of the affected area to identify changes. In the present work, we perform image fusion and image change detection for precise earthquake damage assessment. We propose an IHS and wavelet transform based integrated image fusion technique for fusing pre- and post-event panchromatic and multispectral satellite images. The resulting pre- and post-event fused images are the input to the change detection method, for which we propose a novel Minimum Description Length (MDL) based approach. The results generated by the image fusion and change detection methods are quite helpful for earthquake damage assessment.

Full PDF


Issue : October 2011
DOI: DIP102011016
Title: A Novel Approach to Palmprint Classification using Orthogonal Moments
Authors: M.A. Leo Vijilious and V. Subbiah Bharathi
Keywords: Biometrics, Palmprint, Feature Extraction, Zernike Moments, Adaboost
Abstract:
     Biometrics plays an important role in personal identification and is becoming increasingly popular. Palmprint matching is considered in this paper for effective identification of persons. Matching is performed using Zernike moment feature descriptors, with classification by modified Adaboost classifiers. Since Zernike moments have the property of geometric invariance, they are superior in image representation capability. From the experimental results, it has been observed that Zernike moments achieve superior performance to the other well-known moments.

Full PDF


Issue : October 2011
DOI: DIP102011017
Title: Fast Color Image Segmentation Using Wavelets-Based Clustering Techniques
Authors: S. Manimala and K. Hemachandran
Keywords: Image Segmentation, K-Means, Fuzzy C-Means, Wavelet Transform, Lab Color Space.
Abstract:
     This paper introduces efficient and fast algorithms for unsupervised image segmentation using low-level features such as color and texture. The proposed approach is based on clustering, using (1) the Lab color space and (2) the wavelet transform. The input image is decomposed with two-dimensional Haar wavelets, and a feature vector containing color and texture information is extracted for each pixel. These vectors are used as inputs to the k-means or fuzzy c-means clustering methods to produce a segmented image whose regions are distinct from each other according to their color and texture characteristics. Experimental results show that the proposed method is efficient and achieves high computational speed.
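The k-means step of this pipeline can be sketched as follows; the feature vectors and initial centres are hypothetical stand-ins for the per-pixel color/texture vectors the paper describes:

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: assign each point to its nearest centre, then move
    each centre to the mean of its cluster; repeat for `iters` rounds."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [
            tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        labels.append(d.index(min(d)))
    return centers, labels

# Two well-separated colour/texture feature clusters.
feats = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
centers, labels = kmeans(feats, centers=[(0.0, 0.0), (1.0, 1.0)])
```

Fuzzy c-means replaces the hard assignment with a membership weight per cluster; the segmented image is obtained by mapping each pixel's label back to its position.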

Full PDF


Issue : October 2011
DOI: DIP102011018
Title: Evaluation of Similarity Measures for Recognition of Handwritten Kannada Numerals
Authors: H.R. Mamatha, K. Srikanta Murthy, Priya Vishwanath, T.S. Savitha, A.S. Sahana and S. Suma Shankari
Keywords: Similarity Measures, OCR, Handwritten Kannada Numerals, Image Fusion, Zonal Based Feature Extraction, Nearest Neighbour Classifier.
Abstract:
     The automatic classification of patterns is a broad area of research in machine learning. The aim of pattern classification is the allocation of a given input to a specific class in a predefined set of classes. Examples of pattern classification tasks are automatic identification of diseases based on a set of symptoms, optical character recognition, automatic document classification and speech recognition. In classification problems, classification rates depend significantly on the similarity measure: the neighbors of a pattern differ depending on the measure used, so it is important to choose a suitable one. In this paper, four similarity measures (Euclidean, Chebyshev, Manhattan and cosine) are evaluated for the recognition of handwritten Kannada numerals. An image fusion technique is used in which the extracted features of several images corresponding to each handwritten numeral are fused to generate patterns, stored in 8x8 matrices irrespective of the image size. A zone-based feature extraction algorithm is used to extract the features of the handwritten Kannada numerals. The numerals to be recognized are matched against each pattern using a nearest neighbor classifier with the different similarity measures, and the best matching pattern is taken as the recognized numeral. Results show that the Euclidean distance measure outperforms the other similarity measures in terms of recognition accuracy.
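The four compared measures, plugged into a 1-NN classifier, can be sketched as below; the stored patterns and labels are hypothetical miniatures of the 8x8 fused feature matrices:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def cosine_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def nearest(test, patterns, measure):
    """1-NN: label of the stored pattern closest to `test` under `measure`."""
    return min(patterns, key=lambda kv: measure(test, kv[1]))[0]

patterns = [('3', (1.0, 0.0, 0.2)), ('7', (0.0, 1.0, 0.8))]
label = nearest((0.9, 0.1, 0.1), patterns, euclidean)
```

Swapping `euclidean` for any of the other three functions is all that changes between the four evaluated configurations.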

Full PDF


Issue : October 2011
DOI: DIP102011019
Title: An Empirical Comparison of Three Object Recognition Methods
Authors: V. Subbaroyan and Dr.S. Karthik
Keywords: Correlation, Gradient, Histogram, Texture
Abstract:
     In this paper an attempt has been made to compare three different approaches to object recognition: gradient based, histogram based and texture based methods. For a realistic setting, common household articles with uniform colour properties have been used for this study instead of standard images. The comparative study has been evaluated and the results tabulated. We believe this study will be useful in choosing the appropriate object recognition approach for service robots.
     We evaluate an object recognition system built on these three types of method, which are suitable for objects of uniform colour properties such as cups, cutlery and fruits. The system has significant potential both for service robots and for programming-by-demonstration tasks. This paper outlines the three object recognition methods, compares them, and shows the results of experimental object recognition using each.

Full PDF


Issue : October 2011
DOI: DIP102011020
Title: Adaptive Lifting Schemes Combining Semi Norms for Image Compression
Authors: R. Pandian and Dr.T. Vigneswaren
Keywords: Wavelet Transforms, Imagecoding, Lifting Structures
Abstract:
     This paper presents a new class of adaptive wavelet decompositions that can capture the directional nature of picture information. Our method exploits the properties of semi-norms to build lifting structures able to choose between different update filters, the choice being triggered by a local gradient of the input. In order to discriminate between different geometrical information, the system makes use of multiple criteria, giving rise to multiple choices of update filter. It establishes the conditions under which these decisions can be recovered at synthesis, without the need to transmit overhead information.
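The fixed (non-adaptive) Haar lifting scheme that such structures generalise can be sketched as follows; the signal is hypothetical, and the paper's contribution, choosing among several update filters by a semi-norm of the local gradient, is not reproduced:

```python
def lift_forward(signal):
    """One Haar lifting step: split into even/odd samples, predict the
    odd samples from the even ones, then update the even samples."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def lift_inverse(approx, detail):
    """Undo the update, then the predict, then interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

sig = [4, 6, 10, 12]
a, d = lift_forward(sig)
assert lift_inverse(a, d) == sig  # perfect reconstruction
```

An adaptive scheme would pick the update filter per sample; the synthesis side can recover that choice without side information only under conditions like those the paper establishes.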

Full PDF


Issue : October 2011
DOI: DIP102011021
Title: Latent Fingerprint Enhancement in Preprocessing Stage
Authors: Raju Rajkumar and K. Hemachandran
Keywords: Latent Fingerprint, Primary Enhancement, Secondary Enhancement.
Abstract:
      Most latent fingerprint images are incomplete and of poor quality, so developing automatic feature extraction for latent fingerprints is a very challenging problem. A combination of different preprocessing stages produces a better enhanced latent fingerprint image. In the present study, an algorithm combining the FFT and a Gaussian filter has been proposed and implemented for image enhancement. The experimental results show that the enhanced images are acceptable for post-processing analysis.

Full PDF


Issue : October 2011
DOI: DIP102011022
Title: Frequency Domain Enhancement Filters for Fingerprint Images: A Performance Evaluation
Authors: Dr.E. Chandra and K. Kanagalakshmi
Keywords: Band-Pass Filter, Butterworth Filter, Domain, Log-Gabor, Low-Pass.
Abstract:
     Filtering and image enhancement are primary needs of automatic identification and authentication systems. This paper reviews and evaluates the frequency domain enhancement techniques Ideal Low Pass Filtering (ILPF), Butterworth Low Pass Filtering (BLPF), Band Pass Filtering (BPF), and Log-Gabor filtering. Experimental results report performance measures based on Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), and the standard deviation between the original and enhanced images.
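For reference (not from the paper), the Butterworth low pass mask applied in the frequency domain follows the standard transfer function H(u, v) = 1 / (1 + (D(u, v) / D0)^(2n)); the spectrum size, cutoff and order below are hypothetical:

```python
import math

def butterworth_lowpass(rows, cols, d0, order):
    """Frequency-domain Butterworth low pass mask, with the distance
    D(u, v) measured from the centre of the (shifted) spectrum."""
    cr, cc = rows / 2, cols / 2
    mask = []
    for u in range(rows):
        row = []
        for v in range(cols):
            d = math.hypot(u - cr, v - cc)
            row.append(1 / (1 + (d / d0) ** (2 * order)))
        mask.append(row)
    return mask

h = butterworth_lowpass(8, 8, d0=2.0, order=2)
# Centre of the spectrum passes almost unchanged; corners are attenuated.
```

The enhanced image is obtained by multiplying the shifted FFT of the fingerprint by this mask and inverse-transforming; the ideal low pass filter replaces the smooth roll-off with a hard 0/1 cutoff at D0.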

Full PDF


Issue : October 2011
DOI: DIP102011023
Title: Image Retrieval using Discrete Orthogonal Moments in a Non Uniform Lattice
Authors: J.P. Ananth and Dr.V. Subbiah Bharathi
Keywords: Feature Selection, Image Retrieval, Moment Feature, Racah Moment.

Abstract:
     Retrieval of images from large databases is an emerging application, particularly in medical and forensic departments. Face image retrieval is still a challenging task, since face images can vary considerably in facial expression, lighting conditions and so on. In feature based image retrieval methods, the accuracy depends on the discrimination power of the features. In this work, orthogonal moments are employed as features for the retrieval task. Due to their orthogonality, these moments are inherently non-redundant and exhibit good image representation capability. Racah moments, defined on a non-uniform lattice, have been shown to outperform other orthogonal moments in terms of reconstruction error. Face image retrieval using Racah moment features has been extensively tested on the YALE and FERET face databases, and the results reveal the efficacy of orthogonal moment descriptors.

Full PDF


Issue : October 2011
DOI: DIP102011024
Title: Identification of Region of Interest using Local Binary Pattern with Ternary Encoding
Authors: Abraham Varghese, Reji Rajan Varghese, Balakrishnan Kannan and J.S. Paul
Keywords: Neighborhood Calculation, Local Binary Pattern, Ternary Encoding, Region of Interest
Abstract:
     The Local Binary Pattern has been used as a texture descriptor in various medical image applications due to its invariance to monotonic gray level changes and its ease of computation. LBP may fail on noisy images or on flat image areas of constant gray level because of the operator's thresholding scheme. A Local Binary Pattern with ternary encoding is therefore proposed to identify the region of interest in brain MR images, with the threshold computed locally from the range of pixel values in each window. This reduces the complexity of the image retrieval problem, especially brain slice retrieval.

Full PDF


Issue : October 2011
DOI: DIP102011025
Title: A New Hybrid Approach for Medical Image Classification
Authors: A. Vaideghy and K. Vembandasamy
Keywords: Edge Detection, ID3 Decision Tree, Image Mining, Sequence Database, Transaction Database, UDDAG Association Rule.
Abstract:
     This paper discusses the application of data mining to the classification of medical images. A hybrid technique for brain tumor detection using the UpDown Directed Acyclic Graph (UDDAG) association rule with an ID3 decision tree classifier is applied. This hybrid approach classifies CT scan brain images into three categories: normal, benign and malignant. The major steps involved are preprocessing, feature extraction, association rule mining and classification. Preprocessing is done using 2D median filtering, and edge features are extracted from the image using the Canny edge detection technique. Sequential patterns are generated by the UDDAG algorithm, which mines the association rules, and the ID3 decision tree method is used to classify the medical images for diagnosis based on the generated rules. This hybrid approach (HARC) enhances the efficiency and accuracy of brain tumor detection from CT scan brain images.

Full PDF


Issue : October 2011
DOI: DIP102011026
Title: Kannada Characters Recognition - A Novel Approach Using Image Zoning and Run Length Count
Authors: S. Karthik, H.R. Mamatha and K. Srikanta Murthy
Keywords: Optical Character Recognition, Naive Bayes Classifier, K-Nearest Neighbor Classifier, Zoning, Run Length Count.
Abstract:
     Optical Character Recognition (OCR) is one of the important fields in the image processing and pattern recognition domain. Many practical applications use OCR with high accuracy. The accuracy of an OCR system depends on the quality of the features extracted and the effectiveness of the classifier. Here we propose a novel method to recognize printed Kannada vowels. The Kannada script has a large number of characters with similar shapes, and its complexity is font dependent: the same characters in a class may vary in structure across fonts. Hence, a method that uses image zoning and Run Length Count techniques to extract the features is proposed. The methodology uses the Naive Bayes classifier and the K-Nearest Neighbor classifier for classification. The method was evaluated on a dataset consisting of samples from 69 different fonts, and a maximum recognition accuracy of 97.44% is achieved.
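
A simplified reading of the zoning plus run-length-count features can be sketched as follows. This is pure Python for illustration; the zone grid size and the choice of counting horizontal foreground runs are assumptions, not the paper's exact feature definition:

```python
def zone_run_length_features(img, zones=2):
    """Split a binary character image into zones x zones blocks and count,
    per zone, the number of horizontal runs of foreground (1) pixels.
    Returns one run count per zone, row-major over zones."""
    h, w = len(img), len(img[0])
    zh, zw = h // zones, w // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            runs = 0
            for r in range(zr * zh, (zr + 1) * zh):
                prev = 0
                for c in range(zc * zw, (zc + 1) * zw):
                    if img[r][c] == 1 and prev == 0:
                        runs += 1          # a 0 -> 1 transition starts a run
                    prev = img[r][c]
            feats.append(runs)
    return feats
```

The resulting fixed-length vector can then be fed to a Naive Bayes or K-Nearest Neighbor classifier, matching the pipeline the abstract describes.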

Full PDF


Issue : October 2011
DOI: DIP102011027
Title: Effect of Multi-Algorithmic Approaches on Automatic Face Recognition Systems
Authors: S.M. Zakariya, Manzoor A. Lone and Rashid Ali
Keywords: A Face Recognition System, PCA, DCT, Template Matching using Correlation, PIFS, Multi-Algorithmic Techniques of Six Systems, ORL Face Database and Face Recognition Rate.
Abstract:
     For human authentication, a face recognition system is used as a biometric modality. Face recognition is a technique for recognizing similar faces from face databases: it is the problem of searching a reference database to find matches for a given face, with the aim of finding the face that has the highest similarity to the query. Face recognition involves extracting discriminating features of the human face from the face image. Many face recognition algorithms have been developed and applied to access control and surveillance. To enhance the performance and accuracy of a biometric face recognition system, we use a multi-algorithmic approach, wherein a combination of three different individual face recognition techniques is used. Recently, we developed six face recognition systems based on the six pairwise combinations of four individual techniques, fusing the scores of two approaches in a single system. In this paper, we develop four different face recognition systems based on combinations of four individual techniques, namely Principal Component Analysis (PCA), Discrete Cosine Transform (DCT), Template Matching using Correlation and the Partitioned Iterative Function System (PIFS). We fuse the scores of three of these four techniques in a single face recognition system. We perform a comparative study of the face recognition rate of these systems at two precision levels, namely top-5 and top-10, experimenting on the standard ORL face database. Experimentally, we find that each of these four systems performs well in comparison to the corresponding pairwise (six) combinations of the four individual techniques. Overall, the system based on the combination of PCA, DCT and Template Matching using Correlation gives the best performance among the four systems.
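
Score-level fusion of the kind described above can be sketched as below. Min-max normalization and equal weights are assumptions for illustration; the abstract does not specify the fusion rule the authors use:

```python
def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so scores from different
    techniques (PCA, DCT, ...) become comparable before fusion."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 0.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def fuse_scores(score_lists, weights=None):
    """Weighted-sum fusion of per-technique similarity scores.
    score_lists: list of {face_id: score} dicts, one per technique.
    Returns face ids ranked best-first; slice [:5] or [:10] for the
    top-5 / top-10 precision levels mentioned in the abstract."""
    n = len(score_lists)
    weights = weights or [1.0 / n] * n
    fused = {}
    for w, scores in zip(weights, score_lists):
        for face_id, s in min_max_normalize(scores).items():
            fused[face_id] = fused.get(face_id, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)
```

With three technique score dicts as input this realizes the "fuse the scores of three of the four techniques" setup in miniature.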

Full PDF


Issue : October 2011
DOI: DIP102011028
Title: Use of Image Processing in Hand Written Character Recognition
Authors: Keshav Krishna Agarwal and Dr. Utkarsh Seetha
Abstract:
     Handwritten character recognition may be done offline or online. Online character recognition is more accurate and faster than its offline counterpart. In everyday life, however, people use offline handwriting: filling admission forms, signing documents, writing addresses on letters, writing answers in examinations, filling in details on bank cheques, and many more situations. This paper is therefore mainly intended to recognize offline characters using ANN techniques.

Full PDF


Issue : October 2011
DOI: DIP102011029
Title: Enhancing Breast Ultrasound Images using Hough Transform
Authors: N. Alamelumangai and Dr. J. Devishree
Keywords: Ultrasound Image, Memetic Algorithm, Hough Transform, Modified Fuzzy Possibilistic C-Means, Repulsion.
Abstract:
     Problem Statement: Breast cancer is a highly widespread and foremost cause of cancer death among women. It has become a major health concern worldwide over the past 50 years, and its incidence has risen in recent years. Early detection is an effective way to diagnose and manage breast cancer. Computer-aided detection or diagnosis (CAD) systems can play a major role in the early detection of breast cancer and can decrease the death rate among women with the disease. Approach: The purpose of this paper is to provide a better CAD system that detects cancer at early stages. The proposed system involves three phases: speckle noise reduction, image enhancement and segmentation. For removing speckle noise, this paper uses a Memetic algorithm. Image enhancement is performed using the Hough transform. Finally, the enhanced image is segmented using a clustering technique, the Modified Fuzzy Possibilistic C-Means technique with a repulsion factor, to identify the cancer-affected region. Results: The proposed enhancement technique for breast ultrasound images is evaluated on real-time ultrasound images. The comparison between the existing and proposed techniques is performed by means of Mean Square Error; the Mean Square Error of the proposed approach is lower than that of the existing approach. Conclusion: The experimental results suggest that the proposed system yields better enhancement of ultrasound images than the conventional technique.
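
The Hough transform at the core of the enhancement phase accumulates votes from edge points in (rho, theta) parameter space. A minimal pure-Python sketch of that voting step follows; the abstract does not detail how the authors apply it to enhancement, so this only illustrates the transform itself:

```python
import math

def hough_lines(points, n_theta=180):
    """Accumulate votes in (rho, theta-index) space for a set of edge
    points (x, y). Each point votes once per sampled angle; collinear
    points pile their votes into the same (rho, theta) cell."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc
```

For a horizontal line y = 3, every point votes into the cell rho = 3 at theta = 90 degrees, so that cell receives one vote per point.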

Full PDF


Issue : October 2011
DOI: DIP102011030
Title: Segmentation of Brain Tumor on MRI Images Using Modified GVF Snake Model
Authors: A. Rajendran and Dr. R. Dhanasekaran
Keywords: GVF Snake, Segmentation, Brain Tumor, Deformable Model
Abstract:
     Medical image segmentation is the most important process and research focus in the medical image processing field. Snakes, or active contours, are used extensively in computer vision and image processing, particularly to locate object boundaries. In this paper, a gradient vector flow (GVF) snake model modified with thinning Canny edge detection is used for brain tumor segmentation. The thinning Canny operator is used to compute the edge-map gradient for the GVF snake model; the GVF field then deforms the initial contour. Simulation results show that the GVF model with the thinning Canny operator can extract the boundary of a brain tumor accurately. This method overcomes the problem that the traditional snake cannot converge efficiently to weak boundaries.
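
The GVF field that drives the snake is obtained by diffusing the edge-map gradient (fx, fy) with the standard iteration u ← u + μ∇²u − (u − fx)(fx² + fy²), and similarly for v. A minimal pure-Python sketch of that diffusion (parameter values μ and the iteration count are illustrative, not the paper's settings):

```python
def gradient(f):
    """Central-difference gradient of a 2D edge map, borders clamped."""
    h, w = len(f), len(f[0])
    fx = [[(f[r][min(c + 1, w - 1)] - f[r][max(c - 1, 0)]) / 2.0
           for c in range(w)] for r in range(h)]
    fy = [[(f[min(r + 1, h - 1)][c] - f[max(r - 1, 0)][c]) / 2.0
           for c in range(w)] for r in range(h)]
    return fx, fy

def gvf(f, mu=0.2, iters=50):
    """Diffuse the edge-map gradient into a smooth external force field
    (u, v) that extends the pull of edges into homogeneous regions."""
    fx, fy = gradient(f)
    h, w = len(f), len(f[0])
    u = [row[:] for row in fx]
    v = [row[:] for row in fy]
    mag = [[fx[r][c] ** 2 + fy[r][c] ** 2 for c in range(w)] for r in range(h)]
    for _ in range(iters):
        for field, g in ((u, fx), (v, fy)):
            nxt = [[0.0] * w for _ in range(h)]
            for r in range(h):
                for c in range(w):
                    # 4-neighbor Laplacian with clamped borders
                    lap = (field[max(r - 1, 0)][c] + field[min(r + 1, h - 1)][c]
                           + field[r][max(c - 1, 0)] + field[r][min(c + 1, w - 1)]
                           - 4 * field[r][c])
                    nxt[r][c] = (field[r][c] + mu * lap
                                 - (field[r][c] - g[r][c]) * mag[r][c])
            for r in range(h):
                field[r] = nxt[r]
    return u, v
```

Away from edges the data term vanishes (mag = 0), so the field there is filled purely by diffusion, which is what lets the snake feel distant and weak boundaries.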

Full PDF


Issue : October 2011
DOI: DIP102011031
Title: Multimedia Content Protection by Biometrics-Based Scalable Encryption and Watermarking
Authors: K.P. Mohammed Basheer, K. Meena and Dr.K.R. Subramanian
Keywords: Multimedia, Security, Biometrics, Watermarking, Scalable Encryption
Abstract:
     With the rapid development of broadband networks, distributing multimedia over the Internet has become a simple method of communication and data exchange. Intellectual Property (IP) protection is a vital component of a multimedia broadcast system. Traditional IP protection methods fall into two major categories: encryption and watermarking. Content protection has turned out to be one of the most significant and demanding problems in this field. This paper proposes a multimedia content protection framework based on the biometric data of the users, a layered encryption/decryption scheme and watermarking. Scalable encryption algorithms result from a trade-off between implementation cost and resulting performance; in addition, the approach is intended to run efficiently on a large range of platforms. The computational requirements and applicability of the proposed method are addressed. By exploiting the nature of cryptographic schemes and digital watermarking, the copyright of multimedia content can be protected. In this paper, the scalable transmission technique is applied over the broadcasting environment for encryption, and the embedded watermark can thus be extracted with high confidence.
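
The layered encryption and watermark embedding can be illustrated generically. The per-layer SHA-256 key derivation and the plain LSB watermark below are stand-ins chosen for the sketch; the paper's actual biometric-driven scheme is not detailed in the abstract:

```python
import hashlib

def layer_key(master_key, layer):
    """Derive a per-layer key: subscribers to higher quality receive
    keys for more layers, which is the scalable-encryption idea."""
    return hashlib.sha256(master_key + bytes([layer])).digest()

def xor_encrypt(data, key):
    """Toy symmetric cipher for the sketch (XOR keystream); applying it
    twice with the same key recovers the plaintext."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def embed_watermark_lsb(pixels, bits):
    """Embed watermark bits into the least-significant bit of pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_watermark_lsb(pixels, n):
    """Read back the first n embedded watermark bits."""
    return [p & 1 for p in pixels[:n]]
```

Because each layer has its own derived key, revoking or withholding a key degrades only the corresponding quality layer rather than the whole stream.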

Full PDF


Issue : October 2011
DOI: DIP102011032
Title: Combining Biometric Features of Iris and Retina for Better Security Cryptography
Authors: P. Balakumar and Dr. R. Venkatesan
Keywords: Biometrics, Cryptography Key Generation, Minutiae Points Extraction, Security Analysis
Abstract:
     The requirement for dependable user authentication methods has increased in the wake of growing concerns about security and fast developments in networking, communication and mobility. Most present authentication systems control access to computer systems or secured locations with passwords, but these are not highly resistant to attack because they can easily be broken or stolen. Biometrics has therefore become a feasible replacement for conventional identification techniques in several application fields. Biometrics, the science of recognizing an individual according to physiological or behavioral traits, is gaining acceptance as a legitimate technique for establishing an individual's identity. Biometric techniques have proved important in a range of security, access control and monitoring applications. The technologies are still new and rapidly evolving, and even biometric systems can be cracked; new techniques must therefore be developed to overcome these difficulties. Multimodal biometrics can satisfy these requirements, because it is very difficult for an attacker to spoof more than one biometric. In this technique, features are extracted from different biometrics and then combined using fusion. From these fused features, a cryptographic key is generated, which is used as the key for authenticating the system. This paper combines the features of the iris and retina. The experimental results suggest that the combination of iris and retina yields better security than other combinations of biometrics.
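
The key-generation step, fusing two biometric feature vectors and deriving a key from the result, can be sketched as below. Concatenation fusion and coarse quantization are simplifying assumptions for the sketch; practical biometric key systems use fuzzy extractors or error-correcting codes to tolerate acquisition noise:

```python
import hashlib

def quantize(features, step=0.25):
    """Quantize real-valued features so that small acquisition noise
    within one bucket still maps to the same code word."""
    return [int(f // step) for f in features]

def generate_key(iris_features, retina_features):
    """Fuse iris and retina feature vectors by concatenation and hash
    the quantized result into a 256-bit key (hex-encoded)."""
    fused = quantize(iris_features) + quantize(retina_features)
    encoded = ",".join(str(q) for q in fused).encode()
    return hashlib.sha256(encoded).hexdigest()
```

Two captures whose features land in the same quantization buckets yield the identical key, while a different user's features produce an unrelated hash, which is what makes the derived key usable for authentication.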

Full PDF