Attentive Sequence-to-Sequence Modeling of Stroke Gestures Articulation Performance

Figure: Prediction examples (red lines) on several datasets. From left to right: $1-GDS (unistroke gestures), MatchUp (multitouch gestures), and Nicicon (multistroke gestures).

Abstract

Production time of stroke gestures is a fundamental measure of user performance with graphical user interfaces. However, production time is an aggregate quantification of the user's gesture articulation process and therefore provides an incomplete picture of that process. Moreover, previous approaches modeled stroke gestures as synchronous point sequences, whereas most gesture-driven applications must deal with asynchronous point sequences. Furthermore, deep generative models of human handwriting ignore temporal information, thereby missing a key component of the user's gesture articulation process. To address these issues, we introduce Ditto, a sequence-to-sequence deep learning model that estimates the velocity profile of any stroke gesture from spatial information alone, thus providing a fine-grained, moment-by-moment estimate of the user's articulation performance. We show that this unique capability makes Ditto remarkably accurate while handling gestures of any type: unistrokes, multistrokes, and multitouch gestures. Our model, code, and associated web application are available as open source software.
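To make the idea concrete, below is a minimal sketch (in PyTorch; not the authors' implementation) of an attentive sequence-to-sequence regressor in the spirit described above: it encodes a gesture as a sequence of (x, y) coordinates and predicts one velocity value per point. The BiLSTM encoder, multi-head self-attention, hidden sizes, and the AttentiveSeq2Seq name are all illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch, not the authors' code: an attentive
    # sequence-to-sequence model that maps a gesture's (x, y) points
    # to a per-point velocity estimate.
    import torch
    import torch.nn as nn

    class AttentiveSeq2Seq(nn.Module):
        def __init__(self, d_in=2, d_hid=64):
            super().__init__()
            # Bidirectional LSTM encoder over the spatial point sequence.
            self.encoder = nn.LSTM(d_in, d_hid, batch_first=True,
                                   bidirectional=True)
            # Self-attention lets each point attend to the whole stroke.
            self.attn = nn.MultiheadAttention(2 * d_hid, num_heads=4,
                                              batch_first=True)
            # Regression head: one velocity value per point.
            self.head = nn.Linear(2 * d_hid, 1)

        def forward(self, xy):                 # xy: (batch, seq_len, 2)
            h, _ = self.encoder(xy)            # (batch, seq_len, 2*d_hid)
            ctx, _ = self.attn(h, h, h)        # attended stroke context
            return self.head(ctx).squeeze(-1)  # (batch, seq_len) velocities

    model = AttentiveSeq2Seq()
    gesture = torch.randn(1, 128, 2)           # a resampled 128-point gesture
    velocity_profile = model(gesture)          # shape: (1, 128)

Under this sketch, a multistroke or multitouch gesture could be fed to the same model by concatenating its strokes into one point sequence (for instance, with an extra pen-state channel); this is a common convention rather than necessarily the paper's.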

Research highlights

Resources

Citation

LaTeX users can cite this work with the following BibTeX entry:

@Article{ditto,
  author  = {Lokesh Kumar T and Luis A. Leiva},
  title   = {Attentive Sequence-to-Sequence Modeling of Stroke Gestures Articulation Performance},
  journal = {IEEE Transactions on Human-Machine Systems},
  volume  = {51},
  number  = {6},
  year    = {2021},
}

Disclaimer

Our software is free for scientific use (dual licensed under the MIT and GPL licenses). The software must not be redistributed without prior permission from the authors. Please contact us if you plan to use the software for commercial purposes. The authors are not responsible for any consequences derived from the use of this software.