The present method combines audio and visual cues from human gesticulation for automatic gesture recognition. It provides a framework for co-analyzing a person's gestures together with the prosodic elements of that person's speech, and it can be applied to a wide range of algorithms that analyze gesticulating individuals. Interactive-technology applications range from information kiosks to personal computers, while video analysis of human activity provides a basis for automated surveillance in public places such as airports, shopping malls, and sporting events.
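The abstract does not disclose the patented algorithm itself, but the underlying idea, relating prosodic cues in the audio to the kinematics of the accompanying gesture, can be illustrated with a minimal sketch. All function names, the choice of short-time energy as the prosodic feature, and the Pearson correlation as the co-occurrence measure are illustrative assumptions, not the claimed method.

```python
import math

def short_time_energy(samples, frame_len):
    """Frame-level energy of an audio signal (a simple prosodic cue)."""
    return [sum(x * x for x in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def hand_speed(positions):
    """Per-frame speed magnitude from a sequence of (x, y) hand positions."""
    return [math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]

def pearson(a, b):
    """Pearson correlation, used here as a crude audio/visual co-analysis score."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

if __name__ == "__main__":
    # Synthetic example: hand speed rises and falls with speech loudness.
    amps = [1 + 0.9 * math.sin(2 * math.pi * k / 10) for k in range(10)]
    audio = [a * math.sin(2 * math.pi * i / 16) for a in amps for i in range(160)]
    xs = [0.0]
    for a in amps:
        xs.append(xs[-1] + a)          # hand moves farther when speech is louder
    positions = [(x, 0.0) for x in xs]
    r = pearson(short_time_energy(audio, 160), hand_speed(positions))
    print(f"audio/visual correlation: {r:.2f}")
```

In a real system the prosodic features would be richer (pitch contour, stress) and the alignment would be handled by a sequence model rather than a single correlation score; the sketch only shows that the two modalities can be framed on a common time axis and compared.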

Title
Prosody based audio/visual co-analysis for co-verbal gesture recognition
Application Number
PCT/US2003/029863
Publication Number
2004/027685
Application Date
September 19, 2003
Publication Date
April 1, 2004
Inventor
Kettebekov Sanshzar
Yeasin Mohammed
Sharma Rajeev
Agent
SIMPSON Mark D
Assignee
Kettebekov Sanshzar
Yeasin Mohammed
Sharma Rajeev
The Penn State Research Foundation
IPC
G09B 19/04
G05B 19/00