
Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models



Sensors, Vol. 24, Pages 126. doi: 10.3390/s24010126
Authors: Yifan Bian, Dennis Küster, Hui Liu, Eva G. Krumhuber

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes that incorporate state-of-the-art deep learning models, and elaborates on their neural architectures, training datasets, and performance across domains. These toolboxes can robustly handle many of the challenges encountered in the wild, such as variations in illumination and head pose, which would otherwise impair recognition accuracy. The second section discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables, supporting context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches to studying contextual influences on emotion, leading to the development of contextualized emotion models.
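To illustrate how such a FER toolbox is typically applied, below is a minimal sketch that scores facial expressions in a single naturalistic image using the open-source Py-Feat toolbox, one representative of the class of tools the review surveys. The Detector defaults, the detect_image method, the placeholder file name, and the emotions accessor are assumptions based on Py-Feat's public documentation and may differ across versions.

from feat import Detector

# Load the default deep learning pipeline: face detector, landmark detector,
# action unit model, and emotion classifier (model choices are Py-Feat defaults).
detector = Detector()

# Score a single image; the file name here is only a placeholder.
predictions = detector.detect_image("naturalistic_photo.jpg")

# The result is a Fex data frame with one row per detected face; the
# emotions accessor returns probabilities for the basic emotion categories.
print(predictions.emotions)

Outputs of this kind could then be combined with scene-level context quantified by an MLLM, moving toward the context-aware emotion inferences described above.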



