Master's Thesis: Affect and Activity Recognition with Earables

Earable devices (e.g., headphones with additional sensors and processors) facilitate long-term, continuous monitoring of multiple modalities while maintaining high user acceptance. Among other applications, they could be used for Activity and Affect Recognition to enable context-aware conversational agents [2]. Previous studies on Earable Affect Recognition relied on unnatural, i.e., acted, emotions [3] or single modalities (e.g., EEG [4, 5]).

The goal of this Thesis is to compare multimodal Earable data (e.g., audio, IMU, EEG, temperature) against data from non-Earable sources. Emotions should be induced using various paradigms, for example situational procedures (e.g., a call-center scenario) or autobiographical recall [1]. Major challenges involve study design and hardware considerations, processing longitudinal physiological, physical, and psychological data, and applying Machine Learning to predict affective states and/or behavioral patterns. Data will be collected in the KD2Lab (www.kd2lab.kit.edu), which provides extensive equipment and a large pool of participants. This work will contribute towards using everyday data and Mobile Computing for clinical and non-clinical behavioral interventions in the wild. The student will have the chance to prepare for possible research and career tracks involving Human-Computer Interaction, Smart Devices, Machine Learning, and Multimodal Time-Series Data.

Keywords: Human-Computer Interaction, Machine Learning, Earables, Time-Series Data

Tasks (Scope depends on the type of Thesis)

Conducting a literature review;
Designing the user study;
Collecting and analyzing user data;
Developing ML models for Affect and/or Activity Recognition (see the sketch after this list);
Discussing the implications, advantages, and limitations of Earable Computing;
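
To give a concrete impression of the modeling task, below is a minimal sketch in Python of one possible pipeline: windowed statistical features are extracted from multimodal earable streams, fused at the feature level, and fed to a classifier. All data, sensor configurations, sampling rates, and labels in this sketch are synthetic placeholders and assumptions, not specifications of the thesis.

```python
# Minimal sketch: window multimodal earable streams and classify affect.
# All data here is synthetic; sensor set, rates, and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

FS = 50          # assumed common sampling rate (Hz) after resampling
WIN = 5 * FS     # 5-second windows
N_WIN = 200      # number of windows in this toy dataset

# Hypothetical streams: 3-axis IMU, 1-channel in-ear EEG, temperature.
imu = rng.normal(size=(N_WIN, WIN, 3))
eeg = rng.normal(size=(N_WIN, WIN, 1))
temp = rng.normal(loc=36.5, scale=0.2, size=(N_WIN, WIN, 1))
labels = rng.integers(0, 3, size=N_WIN)  # e.g., neutral/positive/negative

def window_features(x):
    """Simple per-window statistics per channel: mean, std, min, max."""
    return np.concatenate(
        [x.mean(axis=1), x.std(axis=1), x.min(axis=1), x.max(axis=1)],
        axis=1,
    )

# Early (feature-level) fusion: concatenate per-modality feature vectors.
X = np.concatenate([window_features(m) for m in (imu, eeg, temp)], axis=1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

For real longitudinal recordings, subject-wise (e.g., leave-one-participant-out) cross-validation would be preferable to the random folds shown here, since windows from the same participant are not independent.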

What we offer

Access to a large pool of participants;
Professional advice on Data Science and Hardware;
A pleasant working atmosphere and constructive cooperation;
Chances to publish your work at top conferences;
Research at the intersection of Psychology and Technology;

Qualification

Proactive and communicative work style;
Good English reading and writing skills;
Solid foundations in Machine Learning;
Interest in Earable devices and interdisciplinary work;

Interested? Please contact: Tim Schneegans (schneegans@teco.edu)

References

[1] Ewa Siedlecka and Thomas F Denson. Experimental methods for inducing basic emotions: A qualitative review. Emotion Review, 11(1):87–97, 2019.
[2] Shin Katayama, Akhil Mathur, Marc Van den Broeck, Tadashi Okoshi, Jin Nakazawa, and Fahim Kawsar. Situation-aware emotion regulation of conversational agents with kinetic earables. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 725–731. IEEE, 2019.
[3] Sabrina AL Frohn, Jeevan S Matharu, and Jamie A Ward. Towards a characterisation of emotional intent during scripted scenes using in-ear movement sensors. In Proceedings of the 2020 International Symposium on Wearable Computers, pages 37–39, 2020.
[4] Chanavit Athavipach, Setha Pan-Ngum, and Pasin Israsena. A wearable in-ear EEG device for emotion monitoring. Sensors, 19(18):4014, 2019.
[5] Gang Li, Zhe Zhang, and Guoxing Wang. Emotion recognition based on low-cost in-ear EEG. In 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), pages 1–4. IEEE, 2017.