TouchPoint: A Wrist-Worn, On-Body Touch Interaction Device
Designing a smartwatch-embedded optical sensor and processing algorithms to create a back-of-the-hand trackpad
For my thesis, I developed TouchPoint, a wrist-worn sensor device and accompanying processing algorithms that return a positional measurement of a finger touching the back of the hand, for use as smartwatch input. The paper was awarded the SEAS Dean's Award for the best thesis in the EE department.
In about five years, smartwatches successfully made the transition out of the pages of science fiction and onto the wrists of the tech-savvy. Hailed as the next frontier of the mobile tech boom, the wrist-worn wearable market witnessed double-digit growth from 2012 to 2015, and is predicted to experience 35% CAGR through 2019. The user interface of the majority of smartwatches is a touchscreen display. While the touchscreen has been the winning interface of the mobile revolution, achieving ubiquity across smartphone and tablet devices, it is less suited to the ultra-small screens of wrist-worn devices. The finger is large relative to the screen, so interactive icons must be enlarged to be accurately selected by touch, occupying valuable display real estate and leading to a less comprehensive UI.
The ideal input mechanism would maintain the same dimensionality as the output (display), in this case, the 2-D screen, but would be positioned beyond the smartwatch perimeter. By expanding the interactive footprint beyond the physical confines of the device and maintaining high dimensionality, more natural, expressive, and complex interactions can be enabled without increasing device size or obtrusiveness.
In addition to maintaining sub-millimeter precision and sub-centimeter positional accuracy, the developed solution requires no prior calibration for use. Designed to be deployable in a consumer smartwatch form factor, the system takes steps towards being appropriately miniature, low power, unobtrusive, and comfortable. The device design can be easily extended to other tracking applications with similar geometric constraints, such as tracking a finger or pen-like tool on a table surface next to a phone or laptop, or serving as an always-accessible interactive surface to control ubiquitous Internet-of-Things devices.
The hardware solution consists of binocular line cameras and budget plastic optics. Classic stereoscopic computer vision implementations (rectification and disparity-map generation) failed because of optical asymmetries and imperfections, in addition to being relatively computationally intensive. Instead, a number of specifically selected features are extracted from the differences between the two camera images to maintain tracking robustness across environmental variations. A neural network model was trained offline, across different environments, to generate raw x-z values (in camera coordinate notation) from these features. The processing pipeline, including the trained model, was deployed onto an onboard jellybean 8-bit microcontroller target.
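To make the feature-plus-network idea concrete, here is a minimal Python sketch of that style of pipeline. The specific features (sub-pixel centroid, half-max width, peak intensity, and centroid disparity between the two cameras) are illustrative choices, not the thesis's exact feature set, and the random weights stand in for the offline-trained model.

```python
import numpy as np

def extract_features(left, right):
    """Hand-picked features from a pair of 1-D line-camera frames.
    (Feature choices here are illustrative, not the thesis's exact set.)"""
    feats = []
    for frame in (left, right):
        frame = frame.astype(float)
        peak = int(np.argmax(frame))                     # brightest pixel: coarse finger position
        total = frame.sum() + 1e-9
        centroid = float((np.arange(len(frame)) * frame).sum() / total)  # sub-pixel position
        width = float((frame > 0.5 * frame[peak]).sum()) # half-max blob width: a distance cue
        feats += [centroid, width, float(frame[peak])]
    feats.append(feats[0] - feats[3])                    # centroid disparity between cameras
    return np.array(feats)

def mlp_predict(feats, W1, b1, W2, b2):
    """One-hidden-layer network mapping features to (x, z) in camera coordinates."""
    h = np.tanh(W1 @ feats + b1)
    return W2 @ h + b2

# Toy frames: a Gaussian blob at a different pixel offset in each camera.
px = np.arange(128)
left = np.exp(-0.5 * ((px - 60) / 4.0) ** 2)
right = np.exp(-0.5 * ((px - 52) / 4.0) ** 2)
feats = extract_features(left, right)

rng = np.random.default_rng(0)          # random weights stand in for the trained model
W1, b1 = rng.normal(size=(8, 7)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
x, z = mlp_predict(feats, W1, b1, W2, b2)
```

A pipeline like this is cheap enough (a handful of multiply-accumulates per frame) that it can plausibly fit on an 8-bit microcontroller once the floating-point math is replaced with fixed-point arithmetic.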
At some point in the future, I hope to further miniaturize the solution mainly by leveraging more compact lenses. I’d also like to explore how a smartwatch UI might change if the back of the hand were used for input.
Paper and citation here: dash.harvard.edu/handle/1/37817800.
© Ishan Chatterjee 2020