|
--- |
|
license: gpl-3.0 |
|
tags: |
|
- emotion |
|
- emotion-recognition |
|
- sentiment-analysis |
|
- roberta |
|
language: |
|
- en |
|
pipeline_tag: text-classification |
|
--- |
|
|
|
## FacialMMT |
|
|
|
This repo contains the data and pretrained models for FacialMMT, a framework that uses the facial sequences of the real speaker to aid multimodal emotion recognition. |
|
|
|
The model's performance on the MELD test set is: |
|
|
|
| Release | Weighted F1 (%) | |
|
|:-------------:|:--------------:| |
|
| 23-07-10 | 66.73 | |
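
For reference, the weighted F1 (W-F1) metric averages the per-class F1 scores, weighting each class by its support. A minimal sketch using scikit-learn's `f1_score` is shown below; the label arrays are purely illustrative and are not MELD predictions.

```python
# Illustrative computation of weighted F1 (W-F1) with scikit-learn.
# The label lists below are made up for demonstration; they are not MELD outputs.
from sklearn.metrics import f1_score

y_true = ["joy", "anger", "neutral", "neutral", "sadness", "joy"]
y_pred = ["joy", "neutral", "neutral", "neutral", "sadness", "anger"]

# average="weighted" weights each class's F1 by its support (number of true instances).
w_f1 = f1_score(y_true, y_pred, average="weighted")
print(f"W-F1: {w_f1 * 100:.2f}%")
```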
|
|
|
It is currently ranked third on [paperswithcode](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=a-facial-expression-aware-multimodal-multi). |
|
|
|
If you're interested, please check out this [repo](https://github.com/NUSTM/FacialMMT) for a more detailed explanation of how to use our model. |
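
As a starting point, the pretrained checkpoints hosted here can be fetched programmatically with `huggingface_hub`. This is only a sketch: the `repo_id` below is assumed from this model card, and the actual training and inference scripts that consume these files live in the GitHub repo linked above.

```python
# Minimal sketch: download the FacialMMT checkpoints from the Hugging Face Hub.
# The repo_id is an assumption based on this model card; see the GitHub repo for
# the scripts that actually load and run the model.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="NUSTM/FacialMMT")
print(f"Checkpoints downloaded to: {local_dir}")
```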
|
|
|
|
|
### Citation |
|
|
|
Please consider citing the following paper if this repo is helpful to your research. |
|
``` |
|
@inproceedings{zheng2023facial, |
|
title={A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations}, |
|
author={Zheng, Wenjie and Yu, Jianfei and Xia, Rui and Wang, Shijin}, |
|
booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, |
|
pages={15445--15459}, |
|
year={2023} |
|
} |
|
``` |