Computational analysis of deep visual data for quantifying facial expression production
-
Leo, Marco
Institute of Applied Sciences and Intelligent Systems, National Research Council, 73100 Lecce, Italy
-
Carcagnì, Pierluigi
Institute of Applied Sciences and Intelligent Systems, National Research Council, 73100 Lecce, Italy
-
Distante, Cosimo
Institute of Applied Sciences and Intelligent Systems, National Research Council, 73100 Lecce, Italy
-
Mazzeo, Pier Luigi
Institute of Applied Sciences and Intelligent Systems, National Research Council, 73100 Lecce, Italy
-
Spagnolo, Paolo
Institute of Applied Sciences and Intelligent Systems, National Research Council, 73100 Lecce, Italy
-
Levante, Annalisa
Department of History, Society and Human Studies, Università del Salento, 73100 Lecce, Italy - Lab of Applied Psychology and Intervention, Università del Salento, 73100 Lecce, Italy
-
Petrocchi, Serena
Lab of Applied Psychology and Intervention, Università del Salento, 73100 Lecce, Italy - Institute of Communication and Health (ICH), Faculty of Communication Sciences, Università della Svizzera italiana, Switzerland
-
Lecciso, Flavia
Department of History, Society and Human Studies, Università del Salento, 73100 Lecce, Italy - Lab of Applied Psychology and Intervention, Università del Salento, 73100 Lecce, Italy
Published in:
- Applied Sciences, 2019, vol. 9, no. 21, p. 4542
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorder, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature addresses the easier task of recognizing whether a facial expression is present or not. The few existing attempts at this more challenging task do not provide a comprehensive study comparing human and automatic outcomes in quantifying children's ability to produce basic emotions, nor do they exploit the latest solutions in computer vision and machine learning. Finally, they generally focus on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, this paper integrates advanced computer vision and machine learning strategies into a framework for computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a set of facial landmarks (acting as virtual electromyography sensors) in order to monitor the facial muscle movements involved in expression production. The outputs of these virtual sensors are then fused to model each individual's ability to produce facial expressions. The computational outcomes were correlated with evaluations provided by psychologists, and the evidence shows that the proposed framework can be effectively exploited to analyze in depth the emotional competence of children with ASD in producing facial expressions.
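The landmark-based pipeline summarized in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes dlib's publicly available 68-point shape predictor (the model file name is an assumption) and treats per-frame landmark displacement as a crude stand-in for the "virtual EMG sensor" signals, omitting the fusion and modeling stages described in the paper.

```python
# Minimal sketch (not the paper's method): track facial landmarks across a
# video clip and accumulate their frame-to-frame displacements as a rough
# proxy for facial muscle movement at each landmark position.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed model file: dlib's standard 68-point landmark predictor.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(frame):
    """Return the 68 landmark coordinates of the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)

def movement_scores(video_path):
    """Mean per-landmark displacement across frames: a crude activation
    score for each 'virtual sensor' while an expression is produced."""
    cap = cv2.VideoCapture(video_path)
    prev, displacements = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pts = landmarks(frame)
        if pts is not None:
            if prev is not None:
                displacements.append(np.linalg.norm(pts - prev, axis=1))
            prev = pts
    cap.release()
    return np.mean(displacements, axis=0) if displacements else None
```

A real system along the lines of the paper would normalize for head pose and face scale before comparing displacements across children; here the raw pixel displacements only illustrate where per-landmark movement signals could come from before a fusion step.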
-
Language
-
English
Classification
-
Medicine
-
License
-
Undefined
-
Persistent URL
-
https://n2t.net/ark:/12658/srd1318972