LAP Lambert Academic Publishing (14.09.2010)
€ 49,00
The focus of this work is on classifying the most common non-manual (facial) gestures in sign language. This goal is achieved in two consecutive steps: first, automatic facial landmarking is performed using Multi-resolution Active Shape Models (MRASMs); second, the tracked landmarks are normalized and expression classification is performed using multivariate Continuous Hidden Markov Models (CHMMs). We collected a video database of expressions from Turkish Sign Language (TSL) to test the proposed approach. The expressions used are universal, so the results are applicable to other sign languages. Single-view vs. multi-view and person-specific vs. generic MRASM trackers are compared for both tracking and expression recognition. The multi-view, person-specific tracker performs best and tracks the landmarks robustly. For expression classification, the proposed CHMM classifier is tested on different combinations of training and test sets and the results are reported. We observe that classification performance for the distinct classes is very high.
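To illustrate the classification step described above, the following is a minimal, hypothetical sketch of per-class continuous HMM training and maximum-likelihood classification over normalized landmark sequences. It assumes the Python hmmlearn library; the function names, the diagonal-covariance Gaussian emissions, and the choice of five hidden states are illustrative assumptions, not the book's actual implementation.

# Minimal sketch (not the book's code): one Gaussian-emission HMM per
# expression class, trained on normalized landmark trajectories and used
# for classification by maximum log-likelihood. Assumes hmmlearn.
import numpy as np
from hmmlearn.hmm import GaussianHMM


def train_class_models(sequences_by_class, n_states=5):
    """Fit one continuous HMM per expression class.

    sequences_by_class: dict mapping class label -> list of (T_i, D)
    arrays of normalized landmark features for that class.
    """
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.concatenate(seqs)          # stack all frames of the class
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        hmm = GaussianHMM(n_components=n_states,
                          covariance_type="diag",
                          n_iter=100)
        hmm.fit(X, lengths)
        models[label] = hmm
    return models


def classify(models, sequence):
    """Return the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

Scoring an unseen sequence against every per-class model and picking the best log-likelihood is the standard generative HMM classification setup; it stands in here only as a sketch of the kind of classifier the abstract refers to.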
Book details:
ISBN-13: 978-3-8383-2713-6
ISBN-10: 3838327136
EAN: 9783838327136
Book language: English
By (author): İsmail Arı
Number of pages: 96
Published: 14.09.2010
Category: Technology