Bayesian identification of fixations, saccades, and smooth pursuits

T Santini, W Fuhl, T Kübler, E Kasneci - Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 2016 - dl.acm.org
Smooth pursuit eye movements provide meaningful insights into a subject's behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these eye movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel algorithm for ternary classification of eye movements that is able to reliably separate fixations, saccades, and smooth pursuits in an online fashion, even for low-resolution eye trackers. The proposed algorithm is evaluated on four datasets with distinct mixtures of eye movements, including fixations, saccades, and both straight and circular smooth pursuits; data were collected at a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets. The algorithm exhibits high and consistent performance across all datasets and movements relative to manual annotation by a domain expert (recall: μ = 91.42%, σ = 9.52%; precision: μ = 95.60%, σ = 5.29%; specificity: μ = 95.41%, σ = 7.02%) and displays a significant improvement over I-VDT, a state-of-the-art algorithm (recall: μ = 87.67%, σ = 14.73%; precision: μ = 89.57%, σ = 8.05%; specificity: μ = 92.10%, σ = 11.21%). Algorithm implementation and annotated datasets are openly available at www.ti.uni-tuebingen.de/perception.
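The core idea suggested by the abstract is a Bayesian decision rule: each gaze sample is assigned to the eye-movement class with the highest posterior probability, computed from class-conditional likelihoods of the sample's velocity and priors that can be maintained online. The Python sketch below illustrates that maximum-a-posteriori decision only; the Gaussian velocity models, their parameters, and the uniform priors are illustrative assumptions for a 30 Hz tracker, not the authors' I-BDT implementation (which is available at the URL above).

import numpy as np
from scipy.stats import norm

# Hypothetical sketch of a Bayesian ternary eye-movement classifier.
# This is NOT the authors' I-BDT implementation: the Gaussian velocity
# models, their parameters, and the uniform priors are assumptions.

SAMPLE_RATE_HZ = 30  # low sample rate, as in the datasets described above

def classify_sample(velocity_deg_s, priors):
    """Assign one gaze-velocity sample (deg/s) to fixation, smooth pursuit,
    or saccade via a maximum-a-posteriori decision over velocity likelihoods."""
    # Illustrative class-conditional velocity models (mean, std in deg/s).
    likelihood_models = {
        "fixation": (1.0, 1.0),     # near-zero velocity
        "pursuit": (15.0, 10.0),    # moderate, sustained velocity
        "saccade": (150.0, 60.0),   # brief, very high velocity
    }
    posteriors = {}
    for movement, (mu, sigma) in likelihood_models.items():
        likelihood = norm.pdf(velocity_deg_s, loc=mu, scale=sigma)
        posteriors[movement] = likelihood * priors[movement]
    # Bayesian decision rule under 0/1 loss: pick the class with max posterior.
    return max(posteriors, key=posteriors.get)

# Usage: a stream of gaze velocities classified sample by sample (online).
velocities = [0.5, 2.0, 180.0, 12.0, 20.0]
priors = {"fixation": 1 / 3, "pursuit": 1 / 3, "saccade": 1 / 3}
for v in velocities:
    print(v, classify_sample(v, priors))

In practice, an online method of this kind would estimate the likelihood parameters from an initial training window and update the priors as samples arrive, rather than hard-coding them as done here for brevity.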