
Gesture Recognition using Electromyographic, Spatial and Temporal Input Data


dc.contributor.advisor Dougherty, John P.
dc.contributor.author Gabriel, Dorvil
dc.date.accessioned 2016-07-19T18:11:28Z
dc.date.available 2016-07-19T18:11:28Z
dc.date.issued 2016
dc.identifier.uri http://hdl.handle.net/10066/18676
dc.description.abstract With the advancement of technology, electromyographic (EMG) and spatial data recognition is having a growing impact on accessible computing. I designed a customizable script that combines EMG, spatial, and temporal data, so that each user of the script can select a custom profile of gestures to use for typing. The gestures chosen for each custom profile are determined by the user's level of ability/disability. Based on the custom profile each user selects and the speed at which each user types, we determine whether EMG, spatial, and temporal data can serve as a viable form of text input. While this research placed a strong emphasis on text input, it also supports the ideal of universal design in other contexts. The Myo armband was used to read and interact with the EMG, spatial, and temporal data. The interface of this script also revealed multiple techniques for scripting with the Myo that allow people with disabilities to use spatial data to type.
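The custom-profile idea described in the abstract can be sketched as a simple mapping from recognized gestures to characters. This is a minimal illustration, not code from the thesis: the gesture names below follow the Myo's standard pose vocabulary (fist, wave in/out, fingers spread), but the profile contents and function names are hypothetical.

```python
def make_typer(profile):
    """Return a function that translates a stream of recognized
    gestures into typed text, using a user-selected profile."""
    def type_gestures(gestures):
        # Gestures not present in the user's profile are ignored.
        return "".join(profile.get(g, "") for g in gestures)
    return type_gestures

# Hypothetical profile a user might select based on ability level.
profile_a = {
    "fist": "a",
    "wave_in": "b",
    "wave_out": "c",
    "fingers_spread": " ",
}

typer = make_typer(profile_a)
print(typer(["fist", "wave_out", "fist"]))  # -> "aca"
```

Because the profile is just data, swapping in a different mapping per user requires no change to the recognition logic, which is the essence of the customization the abstract describes.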
dc.description.sponsorship Haverford College. Department of Computer Science
dc.language.iso eng
dc.rights.uri http://creativecommons.org/licenses/by-nc/4.0/
dc.title Gesture Recognition using Electromyographic, Spatial and Temporal Input Data
dc.type Thesis
dc.rights.access Open Access

