King Saud University Repository >

Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/15423

Title: Block-Based Motion Estimation Analysis for Lip Reading User Authentication Systems
Authors: Khaled Alghathbar
Keywords: Block-Based Motion, Authentication Systems
Issue Date: 2009
Publisher: ACM
Abstract: This paper proposes a lip reading technique for speech recognition using motion estimation analysis. The method described here is a sub-system of the Silent Pass project, a lip-reading password-entry system for security applications; it provides user authentication based on lip reading of a password. Motion estimation is performed on image sequences of lip movement representing speech, and is computed without extracting the speaker's lip contours or location. This yields robust visual features for the lip movements of an utterance. The methodology comprises two phases, a training phase and a recognition phase. In both phases, each n × n video frame of the image sequence for an utterance (an alphanumeric character, a word, or, in more complicated analyses, a sentence) is divided into m × m blocks. The method calculates and fits eight curves for each frame, each curve representing the motion estimation of that frame in a specific direction. These eight curves form the feature set of the frame and are extracted in an unsupervised manner; the feature set consists of the integral values of the motion estimation. These features are expected to be highly effective in the training phase, and they characterize specific utterances without any additional acoustic features. A corpus of utterances and their motion-estimation features is built in the training phase. In the recognition phase, the feature set is extracted from a new image sequence of lip movement for an utterance and compared to the corpus using the mean square error metric.
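The pipeline in the abstract can be illustrated with a minimal sketch: divide each frame into blocks, score eight unit displacements per block against the previous frame, accumulate the scores into an eight-value feature vector per frame, and match feature vectors against a stored corpus by mean square error. This is an independent toy reconstruction, not the paper's implementation; the block-matching cost (sum of absolute differences), the unit search step, and all names are assumptions.

```python
# Toy sketch of block-based motion-estimation features, assuming SAD block
# matching and a one-pixel search step (not the paper's actual code).

# Eight unit displacements into the previous frame: E, NE, N, NW, W, SW, S, SE.
# Note: content that moved east matches best under a westward displacement.
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def block_sad(prev, curr, r0, c0, bs, dr, dc):
    """Sum of absolute differences between a bs x bs block of `curr` and the
    same block of `prev` displaced by (dr, dc)."""
    n = len(prev)
    total = 0
    for r in range(r0, r0 + bs):
        for c in range(c0, c0 + bs):
            pr, pc = r + dr, c + dc
            if 0 <= pr < n and 0 <= pc < n:
                total += abs(curr[r][c] - prev[pr][pc])
            else:
                total += abs(curr[r][c])  # penalise out-of-frame displacements
    return total

def frame_features(prev, curr, bs):
    """Eight-value feature vector for one frame transition: for each direction,
    sum over all blocks of how much better that displacement matches than
    no displacement at all."""
    n = len(prev)
    feats = [0.0] * 8
    for r0 in range(0, n, bs):
        for c0 in range(0, n, bs):
            base = block_sad(prev, curr, r0, c0, bs, 0, 0)
            for k, (dr, dc) in enumerate(DIRECTIONS):
                sad = block_sad(prev, curr, r0, c0, bs, dr, dc)
                feats[k] += max(0, base - sad)  # motion evidence, direction k
    return feats

def mse(a, b):
    """Mean square error between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def recognize(query, corpus):
    """Return the corpus label whose stored feature vector minimises the MSE
    against the query feature vector."""
    return min(corpus, key=lambda label: mse(query, corpus[label]))
```

For example, with two 4 × 4 frames in which a bright vertical stripe moves one pixel to the east, the westward displacement (index 4 in `DIRECTIONS`) accumulates the most motion evidence. The paper's actual features are integrals over fitted curves per frame; this sketch collapses that to a single sum per direction for brevity.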
URI: http://hdl.handle.net/123456789/15423
Appears in Collections:College of Computer and Information Sciences

Files in This Item:

File: Alghathbar_paper_21.docx
Size: 12.42 kB
Format: Microsoft Word XML

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.


DSpace Software Copyright © 2002-2009 MIT and Hewlett-Packard