Publication Date

5-2017

Date of Final Oral Examination (Defense)

5-15-2017

Type of Culminating Activity

Thesis

Degree Title

Master of Science in Computer Science

Department

Computer Science

Supervisory Committee Chair

Steven M. Cutchin, Ph.D.

Supervisory Committee Co-Chair

Jerry Alan Fails, Ph.D.

Supervisory Committee Member

Maria Soledad Pera, Ph.D.

Supervisory Committee Member

Casey Kennington, Ph.D.

Abstract

Sign Language allows people who cannot speak to communicate with others, whether or not those others can speak. These benefits, however, disappear when a conversation in Sign Language includes someone who does not know the language. In this document, I present a system that uses Convolutional Neural Networks to recognize hand gestures for American Sign Language letters and numbers from depth images captured by the Kinect camera. In addition, as a byproduct of these research efforts, I collected a new dataset of depth images of American Sign Language letters and numbers, and I evaluated the presented recognition method on a similar dataset for Vietnamese Sign Language. Finally, I describe how this work supports my ideas for future work on a complete system for Sign Language transcription.
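The abstract does not specify the network architecture, input resolution, or number of gesture classes, so the following is only a minimal sketch of the general approach it describes: a small convolutional network classifying single-channel Kinect depth crops of a hand into letter/number classes. The class count, layer sizes, and 64x64 input are illustrative assumptions, not the thesis's exact model.

# Hypothetical sketch: a small CNN that classifies single-channel depth
# images (e.g., cropped hand regions from a Kinect sensor) into ASL
# letter/number classes. Architecture, input size, and class count are
# assumptions made for illustration only.
import torch
import torch.nn as nn

class DepthGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 34):  # e.g., 24 static letters + 10 digits (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # depth input has a single channel
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of four 64x64 depth crops yields one score per class.
model = DepthGestureCNN()
scores = model(torch.randn(4, 1, 64, 64))
print(scores.shape)  # torch.Size([4, 34])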

DOI

https://doi.org/10.18122/B2B136
