Avatar Emotion Recognition using Non-verbal Communication

Date: 2023
Publisher: The Eurographics Association
Abstract
Among the sources of information about emotions, body movements, known as "kinesics" in non-verbal communication, have received limited attention. This research gap suggests the need to investigate body-movement-based approaches for making communication in virtual environments more realistic. This study therefore proposes an automated emotion recognition approach suitable for virtual environments, consisting of two pipelines. In the first pipeline, upper-body keypoint-based recognition, the HEROES video dataset was used to train a bidirectional long short-term memory (BiLSTM) model on upper-body keypoints; the model predicts four discrete emotions (boredom, disgust, happiness, and interest) with an accuracy of 84%. In the second pipeline, wrist-movement-based recognition, a random forest model was trained on 17 features computed from the acceleration of wrist movements along each axis; it achieved an accuracy of 63% in distinguishing three discrete emotions (sadness, neutrality, and happiness). The findings suggest that the proposed approach is a notable step toward automated emotion recognition that requires no sensors beyond the head-mounted display (HMD).
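The abstract describes the wrist-movement pipeline only at a high level, so the sketch below is an illustration, not the authors' implementation: it trains a scikit-learn random forest on per-axis statistics extracted from synthetic acceleration windows. The specific 17 features, the window length, the sampling rate, and the class-dependent motion scales are all assumptions introduced here for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def wrist_features(acc):
    """Per-axis statistics from a (T, 3) acceleration window.

    The paper uses 17 features; these five per axis are illustrative
    stand-ins (mean, std, min, max, mean absolute difference)."""
    feats = []
    for axis in range(acc.shape[1]):
        a = acc[:, axis]
        feats += [a.mean(), a.std(), a.min(), a.max(), np.abs(np.diff(a)).mean()]
    return np.array(feats)

rng = np.random.default_rng(0)
# Synthetic wrist-acceleration windows for the three classes named in the
# abstract: 0 = sadness, 1 = neutrality, 2 = happiness. The motion scales
# per class are invented for this sketch.
X, y = [], []
for label, scale in [(0, 0.2), (1, 0.5), (2, 1.0)]:
    for _ in range(100):
        acc = rng.normal(0.0, scale, size=(120, 3))  # ~2 s at an assumed 60 Hz
        X.append(wrist_features(acc))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

On this toy data the classes differ mainly in motion magnitude, so the per-axis standard deviation alone separates them well; real wrist data would require the fuller feature set the paper computes.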
CCS Concepts: Human-centered computing → Human-computer interaction (HCI)

        
@inproceedings{10.2312:pg.20231277,
  booktitle = {Pacific Graphics Short Papers and Posters},
  editor    = {Chaine, Raphaëlle and Deng, Zhigang and Kim, Min H.},
  title     = {{Avatar Emotion Recognition using Non-verbal Communication}},
  author    = {Bazargani, Jalal Safari and Sadeghi-Niaraki, Abolghasem and Choi, Soo-Mi},
  year      = {2023},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-234-9},
  DOI       = {10.2312/pg.20231277}
}