2017
Talky
Maximilian Ruppert
Talky presents two novel approaches to selecting and clicking discrete targets such as hyperlinks. The core mechanism is based on Actigaze, a gaze-based input method that lets the user select and click a discrete target simply by dwelling over it and its corresponding button, even when the target is small or closely surrounded by other targets. This works by color-coding the targets while the user dwells over them; Actigaze therefore relies on gaze-tracking techniques. Talky adapts this approach into a speech-only variant and a simulated gaze + speech variant. In the speech-only variant, the content is overlaid with a grid and the user speaks the coordinates of the grid cell containing the target, which color-codes all discrete targets inside that cell. The user then says the color in which the target is now highlighted to trigger the click. In the simulated gaze + speech variant, the grid is no longer needed: the user dwells over a target group with the mouse cursor standing in for the gaze point (this is the simulation part) to color-code the discrete targets, and again speaks the highlight color to click. Compared to Actigaze, both Talky variants are slower but still offer interesting use cases.
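The speech-only flow described above can be sketched in a few lines. This is a minimal, hypothetical Python illustration, not the actual Talky implementation: the data structures, color list, and function names are all assumptions made for clarity.

```python
# Illustrative sketch of Talky's speech-only selection flow (hypothetical code).
from dataclasses import dataclass

# Assumed palette of highlight colors the user can speak.
COLORS = ["red", "green", "blue", "yellow", "orange", "purple"]

@dataclass
class Target:
    label: str
    x: float  # page coordinates of the target
    y: float

def targets_in_cell(targets, col, row, cell_w, cell_h):
    """Step 1: the user speaks a grid cell; collect the targets inside it."""
    return [t for t in targets
            if col * cell_w <= t.x < (col + 1) * cell_w
            and row * cell_h <= t.y < (row + 1) * cell_h]

def color_code(candidates):
    """Step 2: assign each candidate target a distinct highlight color."""
    return {COLORS[i]: t for i, t in enumerate(candidates[:len(COLORS)])}

def resolve_click(coded, spoken_color):
    """Step 3: the user speaks a color; map it back to the target to click."""
    return coded.get(spoken_color)

# Example: the user says the cell in column 1, row 0, then says "green".
targets = [Target("Home", 110, 40), Target("About", 130, 60), Target("Blog", 400, 40)]
candidates = targets_in_cell(targets, col=1, row=0, cell_w=100, cell_h=100)
coded = color_code(candidates)            # {"red": Home, "green": About}
clicked = resolve_click(coded, "green")   # → the "About" target
```

The simulated gaze + speech variant would replace step 1 with a dwell test around the cursor position instead of a spoken grid cell; steps 2 and 3 stay the same.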
- Interaction
- gaze, speech
- Technology
- camera