CALM: Cognitive Assessment using Light-insensitive Model

Additional Material

Akhil Pilakkatt Meethal, Anita Paas, Nerea Urrestilla and David St-Onge, 2024

In support of our study, CALM: Cognitive Assessment using Light-insensitive Model, this page presents a separate experiment gathering 10 participants, using the same modalities as presented in the paper but testing different sensors.

There were two levels of mental workload (rest and high workload) and two levels of lighting (light [210 lux] and dark [1 lux]) in this experimental setting, so identifying the mental workload level is a binary classification problem. The experiment focused on the impact of ambient light and on how multimodal data can mitigate this sensitivity. In the rest condition, participants looked at a point on the wall and relaxed while their pupil and heart rate data were recorded. In the high workload condition, participants performed a 2-back task. During this experiment, pupil data was recorded with the Pupil Labs glasses and heart rate data with the BIOPAC MP35. These devices are shown below.

Figure: the Pupil Labs glasses and the BIOPAC MP35 acquisition unit.
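For readers unfamiliar with the 2-back task, the sketch below shows how a 2-back letter sequence can be generated and how responses can be scored. The letter set, sequence length, and target rate are illustrative placeholders, not the exact stimulus parameters used in our protocol.

    import random

    LETTERS = "BCDFGHJKLMNPQRSTVWXZ"  # consonants only, to avoid easy word cues

    def generate_2back_sequence(length=30, target_rate=0.3, n=2):
        """Generate a letter sequence where roughly target_rate of trials repeat
        the letter shown n positions earlier (the targets participants must detect)."""
        seq = [random.choice(LETTERS) for _ in range(n)]
        for i in range(n, length):
            if random.random() < target_rate:
                seq.append(seq[i - n])  # target trial: same letter as n trials back
            else:
                seq.append(random.choice([c for c in LETTERS if c != seq[i - n]]))
        return seq

    def score_responses(seq, pressed, n=2):
        """Compare per-trial button presses (list of bool) with the ground truth."""
        hits = misses = false_alarms = 0
        for i, p in enumerate(pressed):
            is_target = i >= n and seq[i] == seq[i - n]
            if is_target and p:
                hits += 1
            elif is_target:
                misses += 1
            elif p:
                false_alarms += 1
        return {"hits": hits, "misses": misses, "false_alarms": false_alarms}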

Cognitive load classification results

The results from the binary cognitive load classification task are reported in the table below.

Sensors     Train  Test   Accuracy (%)
Pupil       Light  Light  71.87 ± 0.27
Pupil       Light  Dark   59.38 ± 0.32
Pupil       All    Light  75.10 ± 0.26
Pupil       All    Dark   62.53 ± 0.30
Pupil       All    All    71.09 ± 0.22
HRV+Pupil   Light  Light  81.25 ± 0.17
HRV+Pupil   Light  Dark   78.87 ± 0.23
HRV+Pupil   All    Light  94.74 ± 0.21
HRV+Pupil   All    Dark   88.98 ± 0.23
HRV+Pupil   All    All    92.20 ± 0.18

The performance of pupillometry-only models drops significantly (by more than 12 percentage points) when they are trained on the light condition and tested on the dark condition. Even with features such as the IPA (Index of Pupillary Activity), which are designed to reduce sensitivity to light, the impact of lighting remains. As expected, training on a mix of light and dark conditions yields better performance because it mitigates the distribution shift at test time. Using multimodal inputs clearly improves the classifier's performance: in the All/All setting, accuracy increases by about 21 percentage points (from 71.09% to 92.20%). When testing on the dark condition, the multimodal models improve over the pupillometry-only ones, yet they still perform worse when trained on the light condition alone than when the training set contains data from all light conditions. This implies that the distribution shift due to lighting conditions still impacts the task, even with multimodal features.
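To make the train/test protocol behind the table explicit, here is a minimal sketch of a cross-lighting-condition evaluation with pupil-only and multimodal (HRV+Pupil) feature sets. The file name, column names, and the random-forest classifier are assumptions made for illustration and do not reproduce the exact pipeline used in the study.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical feature table: one row per analysis window, with pupil and
    # HRV features, a binary workload label and the lighting condition.
    df = pd.read_csv("calm_features.csv")  # placeholder path
    pupil_cols = ["mean_pupil_diameter", "ipa", "pdroc"]
    hrv_cols = ["rmssd", "sdnn", "mean_rr"]

    def evaluate(train_cond, test_cond, feature_cols, n_runs=10):
        """Train on one lighting condition ('light', 'dark' or 'all') and test on
        another, reporting mean and std of accuracy over repeated runs.
        Note: a real evaluation must keep train and test windows disjoint
        (e.g. split by participant); that split is omitted here for brevity."""
        accs = []
        for seed in range(n_runs):
            train = df if train_cond == "all" else df[df["lighting"] == train_cond]
            test = df if test_cond == "all" else df[df["lighting"] == test_cond]
            clf = RandomForestClassifier(n_estimators=200, random_state=seed)
            clf.fit(train[feature_cols], train["workload"])
            accs.append(accuracy_score(test["workload"], clf.predict(test[feature_cols])))
        return np.mean(accs), np.std(accs)

    # Pupil-only vs multimodal, trained on light and tested on dark
    print(evaluate("light", "dark", pupil_cols))
    print(evaluate("light", "dark", pupil_cols + hrv_cols))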

Feature-level changes due to light conditions

We selected the most common features from both pupillometry and HRV and compared their distributions under light and dark conditions using violin plots. In the figure below, each violin plot shows the distribution under the light condition on its left half and under the dark condition on its right half.

Figure: violin plots of the pupillometry features (Mean Pupil Diameter, IPA, PDRoC) and the HRV features (RMSSD, SDNN, Mean RR Interval) under light and dark conditions.
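For reference, the time-domain HRV features shown above have standard definitions, sketched below from a list of RR intervals. The pupil summaries are simplified: the IPA requires a wavelet-based pipeline and is omitted, and PDRoC is computed here as the mean absolute rate of change of pupil diameter, which is an assumption about its exact definition.

    import numpy as np

    def hrv_features(rr_ms):
        """Time-domain HRV features from RR intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "mean_rr": rr.mean(),                  # Mean RR Interval
            "sdnn": rr.std(ddof=1),                # SD of all RR intervals
            "rmssd": np.sqrt(np.mean(diff ** 2)),  # RMS of successive differences
        }

    def pupil_features(diameter_mm, fs_hz):
        """Simple pupil summaries from a diameter trace sampled at fs_hz."""
        d = np.asarray(diameter_mm, dtype=float)
        return {
            "mean_pupil_diameter": d.mean(),
            # PDRoC approximated as mean absolute change per second (assumption)
            "pdroc": np.mean(np.abs(np.diff(d))) * fs_hz,
        }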