The present method combines audio and visual cues from human gesticulation for automatic gesture recognition. It articulates a framework for co-analyzing gestures and the prosodic elements of a person's speech, and this framework can be applied to a wide range of algorithms that analyze gesticulating individuals. Interactive applications range from information kiosks to personal computers, and video analysis of human activity provides a basis for automated surveillance technologies in public places such as airports, shopping malls, and sporting events.
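To illustrate the general idea of audio/visual co-analysis, the following is a minimal sketch (not the patented method itself): it cross-correlates a synthetic prosodic feature (a pitch-contour prominence) with a synthetic gesture feature (hand velocity) to estimate their temporal alignment. All signal names, rates, and values here are hypothetical assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch: estimate the temporal offset between a prosodic
# event and a gesture stroke via normalized cross-correlation.
# Both signals are synthetic; nothing here is taken from the patent.

fs = 100                      # assumed common feature rate, frames/second
t = np.arange(0, 3, 1 / fs)   # 3 seconds of frames

# Synthetic pitch contour: a prominence (pitch accent) around t = 1.0 s
pitch = np.exp(-((t - 1.0) ** 2) / 0.02)

# Synthetic hand-velocity trace: gesture stroke peaking at t = 1.2 s
velocity = np.exp(-((t - 1.2) ** 2) / 0.02)

def lag_of_max_xcorr(a, b, fs):
    """Return the lag (seconds) at which b best aligns with a.

    Positive lag means the event in b occurs after the event in a.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))
    return lags[np.argmax(corr)] / fs

lag = lag_of_max_xcorr(pitch, velocity, fs)
print(f"gesture stroke offset from pitch accent: {lag:.2f} s")
```

In this toy setup the estimated lag recovers the 0.2 s offset between the two synthetic peaks; a real co-analysis would replace the Gaussians with measured pitch and tracked hand-motion features.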

Title
Prosody based audio/visual co-analysis for co-verbal gesture recognition
Application Number
10/666460
Publication Number
20040056907
Application Date
September 19, 2003
Publication Date
March 25, 2004
Inventor
Sanshzar Kettebekov
State College
PA, US
Mohammed Yeasin
Utica
NY, US
Rajeev Sharma
State College
PA, US
Agent
Synnestvedt & Lechner
PA, US
Assignee
The Penn State Research Foundation
PA, US
IPC
G06G 07/48
G09G 05/00