Liusuthu committed on
Commit 3cc6d2b · verified · 1 Parent(s): 3da9d67

Upload folder using huggingface_hub

CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,80 @@
+ # Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to make participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, sex characteristics, gender identity and expression,
+ level of experience, education, socio-economic status, nationality, personal
+ appearance, race, religion, or sexual identity and orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+ address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies within all project spaces, and it also applies when
+ an individual is representing the project or its community in public spaces.
+ Examples of representing a project or community include using an official
+ project e-mail address, posting via an official social media account, or acting
+ as an appointed representative at an online or offline event. Representation of
+ a project may be further defined and clarified by project maintainers.
+
+ This Code of Conduct also applies outside the project spaces when there is a
+ reasonable belief that an individual's behavior may have a negative impact on
+ the project or its community.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at <[email protected]>. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>
+
+ [homepage]: https://www.contributor-covenant.org
+
+ For answers to common questions about this code of conduct, see
+ <https://www.contributor-covenant.org/faq>
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 Elena Ryumina
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,12 +1,13 @@
  ---
- title: Copy Facial Expression Recognition
- emoji: 🌍
- colorFrom: red
- colorTo: gray
+ title: Copy-Facial-Expression-Recognition
+ emoji: 😀😲😐😥🥴😱😡
+ colorFrom: blue
+ colorTo: pink
  sdk: gradio
- sdk_version: 4.19.1
+ sdk_version: 4.15.0
  app_file: app.py
  pinned: false
+ license: mit
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.css ADDED
@@ -0,0 +1,100 @@
+ div.app-flex-container {
+     display: flex;
+     align-items: left;
+ }
+
+ div.app-flex-container > a {
+     margin-left: 6px;
+ }
+
+ div.dl1 div.upload-container {
+     height: 350px;
+     max-height: 350px;
+ }
+
+ div.dl2 {
+     max-height: 200px;
+ }
+
+ div.dl2 img {
+     max-height: 200px;
+ }
+
+ div.dl5 {
+     max-height: 200px;
+ }
+
+ div.dl5 img {
+     max-height: 200px;
+ }
+
+ div.video1 div.video-container {
+     height: 500px;
+ }
+
+ div.video2 {
+     height: 200px;
+ }
+
+ div.video3 {
+     height: 200px;
+ }
+
+ div.video4 {
+     height: 200px;
+ }
+
+ div.stat {
+     height: 286px;
+ }
+
+ div.settings-wrapper {
+     display: none;
+ }
+
+ .submit {
+     display: inline-block;
+     padding: 10px 20px;
+     font-size: 16px;
+     font-weight: bold;
+     text-align: center;
+     text-decoration: none;
+     cursor: pointer;
+     border: var(--button-border-width) solid var(--button-primary-border-color);
+     background: var(--button-primary-background-fill);
+     color: var(--button-primary-text-color);
+     border-radius: 8px;
+     transition: all 0.3s ease;
+ }
+
+ .submit[disabled] {
+     cursor: not-allowed;
+     opacity: 0.6;
+ }
+
+ .submit:hover:not([disabled]) {
+     border-color: var(--button-primary-border-color-hover);
+     background: var(--button-primary-background-fill-hover);
+     color: var(--button-primary-text-color-hover);
+ }
+
+ .clear {
+     display: inline-block;
+     padding: 10px 20px;
+     font-size: 16px;
+     font-weight: bold;
+     text-align: center;
+     text-decoration: none;
+     cursor: pointer;
+     border-radius: 8px;
+     transition: all 0.3s ease;
+ }
+
+ .clear[disabled] {
+     cursor: not-allowed;
+     opacity: 0.6;
+ }
+
+ .submit:active:not([disabled]), .clear:active:not([disabled]) {
+     transform: scale(0.98);
+ }
app.py ADDED
@@ -0,0 +1,144 @@
+ """
+ File: app.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: Main application file for Facial_Expression_Recognition.
+              The file defines the Gradio interface, sets up the main blocks,
+              and includes event handlers for various components.
+ License: MIT License
+ """
+
+ import os
+
+ import gradio as gr
+
+ from app.app_utils import preprocess_image_and_predict, preprocess_video_and_predict
+ from app.authors import AUTHORS
+
+ # Importing necessary components for the Gradio app
+ from app.description import DESCRIPTION_DYNAMIC, DESCRIPTION_STATIC
+
+ os.environ["no_proxy"] = "localhost,127.0.0.1,::1"
+
+
+ def clear_static_info():
+     return (
+         gr.Image(value=None, type="pil"),
+         gr.Image(value=None, scale=1, elem_classes="dl5"),
+         gr.Image(value=None, scale=1, elem_classes="dl2"),
+         gr.Label(value=None, num_top_classes=3, scale=1, elem_classes="dl3"),
+     )
+
+
+ def clear_dynamic_info():
+     return (
+         gr.Video(value=None),
+         gr.Video(value=None),
+         gr.Video(value=None),
+         gr.Video(value=None),
+         gr.Plot(value=None),
+     )
+
+
+ with gr.Blocks(css="app.css") as demo:
+     with gr.Tab("Dynamic App"):
+         gr.Markdown(value=DESCRIPTION_DYNAMIC)
+         with gr.Row():
+             with gr.Column(scale=2):
+                 input_video = gr.Video(elem_classes="video1")
+                 with gr.Row():
+                     clear_btn_dynamic = gr.Button(
+                         value="Clear", interactive=True, scale=1
+                     )
+                     submit_dynamic = gr.Button(
+                         value="Submit", interactive=True, scale=1, elem_classes="submit"
+                     )
+             with gr.Column(scale=2, elem_classes="dl4"):
+                 with gr.Row():
+                     output_video = gr.Video(
+                         label="Original video", scale=1, elem_classes="video2"
+                     )
+                     output_face = gr.Video(
+                         label="Pre-processed video", scale=1, elem_classes="video3"
+                     )
+                     output_heatmaps = gr.Video(
+                         label="Heatmaps", scale=1, elem_classes="video4"
+                     )
+                 output_statistics = gr.Plot(
+                     label="Statistics of emotions", elem_classes="stat"
+                 )
+         gr.Examples(
+             [
+                 "videos/video1.mp4",
+                 "videos/video2.mp4",
+             ],
+             [input_video],
+         )
+
+     with gr.Tab("Static App"):
+         gr.Markdown(value=DESCRIPTION_STATIC)
+         with gr.Row():
+             with gr.Column(scale=2, elem_classes="dl1"):
+                 input_image = gr.Image(label="Original image", type="pil")
+                 with gr.Row():
+                     clear_btn = gr.Button(
+                         value="Clear", interactive=True, scale=1, elem_classes="clear"
+                     )
+                     submit = gr.Button(
+                         value="Submit", interactive=True, scale=1, elem_classes="submit"
+                     )
+             with gr.Column(scale=1, elem_classes="dl4"):
+                 with gr.Row():
+                     output_image = gr.Image(label="Face", scale=1, elem_classes="dl5")
+                     output_heatmap = gr.Image(
+                         label="Heatmap", scale=1, elem_classes="dl2"
+                     )
+                 output_label = gr.Label(num_top_classes=3, scale=1, elem_classes="dl3")
+         gr.Examples(
+             [
+                 "images/fig7.jpg",
+                 "images/fig1.jpg",
+                 "images/fig2.jpg",
+                 "images/fig3.jpg",
+                 "images/fig4.jpg",
+                 "images/fig5.jpg",
+                 "images/fig6.jpg",
+             ],
+             [input_image],
+         )
+     with gr.Tab("Authors"):
+         gr.Markdown(value=AUTHORS)
+
+     submit.click(
+         fn=preprocess_image_and_predict,
+         inputs=[input_image],
+         outputs=[output_image, output_heatmap, output_label],
+         queue=True,
+     )
+     clear_btn.click(
+         fn=clear_static_info,
+         inputs=[],
+         outputs=[input_image, output_image, output_heatmap, output_label],
+         queue=True,
+     )
+
+     submit_dynamic.click(
+         fn=preprocess_video_and_predict,
+         inputs=input_video,
+         outputs=[output_video, output_face, output_heatmaps, output_statistics],
+         queue=True,
+     )
+     clear_btn_dynamic.click(
+         fn=clear_dynamic_info,
+         inputs=[],
+         outputs=[
+             input_video,
+             output_video,
+             output_face,
+             output_heatmaps,
+             output_statistics,
+         ],
+         queue=True,
+     )
+
+ if __name__ == "__main__":
+     demo.queue(api_open=False).launch(share=False)
app/app___init__.py ADDED
File without changes
app/app_app_utils.py ADDED
@@ -0,0 +1,172 @@
+ """
+ File: app_utils.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: This module contains utility functions for facial expression recognition application.
+ License: MIT License
+ """
+
+ import torch
+ import numpy as np
+ import mediapipe as mp
+ from PIL import Image
+ import cv2
+ from pytorch_grad_cam.utils.image import show_cam_on_image
+
+ # Importing necessary components for the Gradio app
+ from app.model import pth_model_static, pth_model_dynamic, cam, pth_processing
+ from app.face_utils import get_box, display_info
+ from app.config import DICT_EMO, config_data
+ from app.plot import statistics_plot
+
+ mp_face_mesh = mp.solutions.face_mesh
+
+
+ def preprocess_image_and_predict(inp):
+     inp = np.array(inp)
+
+     if inp is None:
+         return None, None
+
+     try:
+         h, w = inp.shape[:2]
+     except Exception:
+         return None, None
+
+     with mp_face_mesh.FaceMesh(
+         max_num_faces=1,
+         refine_landmarks=False,
+         min_detection_confidence=0.5,
+         min_tracking_confidence=0.5,
+     ) as face_mesh:
+         results = face_mesh.process(inp)
+         if results.multi_face_landmarks:
+             for fl in results.multi_face_landmarks:
+                 startX, startY, endX, endY = get_box(fl, w, h)
+                 cur_face = inp[startY:endY, startX:endX]
+                 cur_face_n = pth_processing(Image.fromarray(cur_face))
+                 with torch.no_grad():
+                     prediction = (
+                         torch.nn.functional.softmax(pth_model_static(cur_face_n), dim=1)
+                         .detach()
+                         .numpy()[0]
+                     )
+                 confidences = {DICT_EMO[i]: float(prediction[i]) for i in range(7)}
+                 grayscale_cam = cam(input_tensor=cur_face_n)
+                 grayscale_cam = grayscale_cam[0, :]
+                 cur_face_hm = cv2.resize(cur_face, (224, 224))
+                 cur_face_hm = np.float32(cur_face_hm) / 255
+                 heatmap = show_cam_on_image(cur_face_hm, grayscale_cam, use_rgb=True)
+
+     return cur_face, heatmap, confidences
+
+
+ def preprocess_video_and_predict(video):
+
+     cap = cv2.VideoCapture(video)
+     w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
+     h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
+     fps = np.round(cap.get(cv2.CAP_PROP_FPS))
+
+     path_save_video_face = 'result_face.mp4'
+     vid_writer_face = cv2.VideoWriter(path_save_video_face, cv2.VideoWriter_fourcc(*'mp4v'), fps, (224, 224))
+
+     path_save_video_hm = 'result_hm.mp4'
+     vid_writer_hm = cv2.VideoWriter(path_save_video_hm, cv2.VideoWriter_fourcc(*'mp4v'), fps, (224, 224))
+
+     lstm_features = []
+     count_frame = 1
+     count_face = 0
+     probs = []
+     frames = []
+     last_output = None
+     last_heatmap = None
+     cur_face = None
+
+     with mp_face_mesh.FaceMesh(
+         max_num_faces=1,
+         refine_landmarks=False,
+         min_detection_confidence=0.5,
+         min_tracking_confidence=0.5) as face_mesh:
+
+         while cap.isOpened():
+             _, frame = cap.read()
+             if frame is None: break
+
+             frame_copy = frame.copy()
+             frame_copy.flags.writeable = False
+             frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB)
+             results = face_mesh.process(frame_copy)
+             frame_copy.flags.writeable = True
+
+             if results.multi_face_landmarks:
+                 for fl in results.multi_face_landmarks:
+                     startX, startY, endX, endY = get_box(fl, w, h)
+                     cur_face = frame_copy[startY:endY, startX: endX]
+
+                     if count_face % config_data.FRAME_DOWNSAMPLING == 0:
+                         cur_face_copy = pth_processing(Image.fromarray(cur_face))
+                         with torch.no_grad():
+                             features = torch.nn.functional.relu(pth_model_static.extract_features(cur_face_copy)).detach().numpy()
+
+                         grayscale_cam = cam(input_tensor=cur_face_copy)
+                         grayscale_cam = grayscale_cam[0, :]
+                         cur_face_hm = cv2.resize(cur_face, (224, 224), interpolation=cv2.INTER_AREA)
+                         cur_face_hm = np.float32(cur_face_hm) / 255
+                         heatmap = show_cam_on_image(cur_face_hm, grayscale_cam, use_rgb=False)
+                         last_heatmap = heatmap
+
+                         if len(lstm_features) == 0:
+                             lstm_features = [features] * 10
+                         else:
+                             lstm_features = lstm_features[1:] + [features]
+
+                         lstm_f = torch.from_numpy(np.vstack(lstm_features))
+                         lstm_f = torch.unsqueeze(lstm_f, 0)
+                         with torch.no_grad():
+                             output = pth_model_dynamic(lstm_f).detach().numpy()
+                         last_output = output
+
+                         if count_face == 0:
+                             count_face += 1
+
+                     else:
+                         if last_output is not None:
+                             output = last_output
+                             heatmap = last_heatmap
+
+                         elif last_output is None:
+                             output = np.empty((1, 7))
+                             output[:] = np.nan
+
+                     probs.append(output[0])
+                     frames.append(count_frame)
+             else:
+                 if last_output is not None:
+                     lstm_features = []
+                     empty = np.empty((7))
+                     empty[:] = np.nan
+                     probs.append(empty)
+                     frames.append(count_frame)
+
+             if cur_face is not None:
+                 heatmap_f = display_info(heatmap, 'Frame: {}'.format(count_frame), box_scale=.3)
+
+                 cur_face = cv2.cvtColor(cur_face, cv2.COLOR_RGB2BGR)
+                 cur_face = cv2.resize(cur_face, (224, 224), interpolation=cv2.INTER_AREA)
+                 cur_face = display_info(cur_face, 'Frame: {}'.format(count_frame), box_scale=.3)
+                 vid_writer_face.write(cur_face)
+                 vid_writer_hm.write(heatmap_f)
+
+             count_frame += 1
+             if count_face != 0:
+                 count_face += 1
+
+     vid_writer_face.release()
+     vid_writer_hm.release()
+
+     stat = statistics_plot(frames, probs)
+
+     if not stat:
+         return None, None, None, None
+
+     return video, path_save_video_face, path_save_video_hm, stat
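The dynamic pipeline above embeds every `FRAME_DOWNSAMPLING`-th detected face with the static ResNet and feeds a sliding window of the last ten feature vectors to the LSTM. A minimal sketch of that window update follows; the random vectors are hypothetical placeholders for `pth_model_static.extract_features` outputs, not part of the module.

```python
# Sketch of the 10-step sliding window used in preprocess_video_and_predict.
import numpy as np

def push(window, features, size=10):
    if not window:                      # first face: replicate it to fill the window
        return [features] * size
    return window[1:] + [features]      # afterwards: drop the oldest, append the newest

window = []
for _ in range(3):
    feats = np.random.rand(1, 512).astype(np.float32)  # placeholder feature vector
    window = push(window, feats)

lstm_input = np.vstack(window)[np.newaxis, ...]  # shape (1, 10, 512), as consumed by LSTMPyTorch
print(lstm_input.shape)
```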
app/app_authors.py ADDED
@@ -0,0 +1,34 @@
+ """
+ File: authors.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: About the authors.
+ License: MIT License
+ """
+
+
+ AUTHORS = """
+ Authors: [Elena Ryumina](https://github.com/ElenaRyumina), [Dmitry Ryumin](https://github.com/DmitryRyumin), [Denis Dresvyanskiy](https://www.uni-ulm.de/en/nt/staff/research-assistants/dresvyanskiy/), [Maxim Markitantov](https://hci.nw.ru/en/employees/10) and [Alexey Karpov](https://hci.nw.ru/en/employees/1)
+
+ Authorship contribution:
+
+ App developers: ``Elena Ryumina`` and ``Dmitry Ryumin``
+
+ Methodology developers: ``Elena Ryumina``, ``Denis Dresvyanskiy`` and ``Alexey Karpov``
+
+ Model developer: ``Elena Ryumina``
+
+ TensorFlow to PyTorch model converters: ``Maxim Markitantov`` and ``Elena Ryumina``
+
+ Citation
+
+ If you are using EMO-AffectNetModel in your research, please consider citing the research [paper](https://www.sciencedirect.com/science/article/pii/S0925231222012656). Here is an example of a BibTeX entry:
+
+ <div class="highlight highlight-text-bibtex notranslate position-relative overflow-auto" dir="auto"><pre><span class="pl-k">@article</span>{<span class="pl-en">RYUMINA2022</span>,
+   <span class="pl-s">title</span> = <span class="pl-s"><span class="pl-pds">{</span>In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study<span class="pl-pds">}</span></span>,
+   <span class="pl-s">author</span> = <span class="pl-s"><span class="pl-pds">{</span>Elena Ryumina and Denis Dresvyanskiy and Alexey Karpov<span class="pl-pds">}</span></span>,
+   <span class="pl-s">journal</span> = <span class="pl-s"><span class="pl-pds">{</span>Neurocomputing<span class="pl-pds">}</span></span>,
+   <span class="pl-s">year</span> = <span class="pl-s"><span class="pl-pds">{</span>2022<span class="pl-pds">}</span></span>,
+   <span class="pl-s">doi</span> = <span class="pl-s"><span class="pl-pds">{</span>10.1016/j.neucom.2022.10.013<span class="pl-pds">}</span></span>,
+   <span class="pl-s">url</span> = <span class="pl-s"><span class="pl-pds">{</span>https://www.sciencedirect.com/science/article/pii/S0925231222012656<span class="pl-pds">}</span></span>,
+ }</div>
+ """
app/app_config.py ADDED
@@ -0,0 +1,49 @@
+ """
+ File: config.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: Configuration file.
+ License: MIT License
+ """
+
+ import toml
+ from typing import Dict
+ from types import SimpleNamespace
+
+
+ def flatten_dict(prefix: str, d: Dict) -> Dict:
+     result = {}
+
+     for k, v in d.items():
+         if isinstance(v, dict):
+             result.update(flatten_dict(f"{prefix}{k}_", v))
+         else:
+             result[f"{prefix}{k}"] = v
+
+     return result
+
+
+ config = toml.load("config.toml")
+
+ config_data = flatten_dict("", config)
+
+ config_data = SimpleNamespace(**config_data)
+
+ DICT_EMO = {
+     0: "Neutral",
+     1: "Happiness",
+     2: "Sadness",
+     3: "Surprise",
+     4: "Fear",
+     5: "Disgust",
+     6: "Anger",
+ }
+
+ COLORS = {
+     0: 'blue',
+     1: 'orange',
+     2: 'green',
+     3: 'red',
+     4: 'purple',
+     5: 'brown',
+     6: 'pink'
+ }
app/app_description.py ADDED
@@ -0,0 +1,27 @@
+ """
+ File: description.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: Project description for the Gradio app.
+ License: MIT License
+ """
+
+ # Importing necessary components for the Gradio app
+ from app.config import config_data
+
+ DESCRIPTION_STATIC = f"""\
+ # Static Facial Expression Recognition
+ <div class="app-flex-container">
+     <img src="https://img.shields.io/badge/version-v{config_data.APP_VERSION}-rc0" alt="Version">
+     <a href="https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FElenaRyumina%2FFacial_Expression_Recognition"><img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FElenaRyumina%2FFacial_Expression_Recognition&countColor=%23263759&style=flat" /></a>
+     <a href="https://paperswithcode.com/paper/in-search-of-a-robust-facial-expressions"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in-search-of-a-robust-facial-expressions/facial-expression-recognition-on-affectnet" /></a>
+ </div>
+ """
+
+ DESCRIPTION_DYNAMIC = f"""\
+ # Dynamic Facial Expression Recognition
+ <div class="app-flex-container">
+     <img src="https://img.shields.io/badge/version-v{config_data.APP_VERSION}-rc0" alt="Version">
+     <a href="https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FElenaRyumina%2FFacial_Expression_Recognition"><img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2FElenaRyumina%2FFacial_Expression_Recognition&countColor=%23263759&style=flat" /></a>
+     <a href="https://paperswithcode.com/paper/in-search-of-a-robust-facial-expressions"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in-search-of-a-robust-facial-expressions/facial-expression-recognition-on-affectnet" /></a>
+ </div>
+ """
app/app_face_utils.py ADDED
@@ -0,0 +1,68 @@
+ """
+ File: face_utils.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: This module contains utility functions related to facial landmarks and image processing.
+ License: MIT License
+ """
+
+ import numpy as np
+ import math
+ import cv2
+
+
+ def norm_coordinates(normalized_x, normalized_y, image_width, image_height):
+     x_px = min(math.floor(normalized_x * image_width), image_width - 1)
+     y_px = min(math.floor(normalized_y * image_height), image_height - 1)
+     return x_px, y_px
+
+
+ def get_box(fl, w, h):
+     idx_to_coors = {}
+     for idx, landmark in enumerate(fl.landmark):
+         landmark_px = norm_coordinates(landmark.x, landmark.y, w, h)
+         if landmark_px:
+             idx_to_coors[idx] = landmark_px
+
+     x_min = np.min(np.asarray(list(idx_to_coors.values()))[:, 0])
+     y_min = np.min(np.asarray(list(idx_to_coors.values()))[:, 1])
+     endX = np.max(np.asarray(list(idx_to_coors.values()))[:, 0])
+     endY = np.max(np.asarray(list(idx_to_coors.values()))[:, 1])
+
+     (startX, startY) = (max(0, x_min), max(0, y_min))
+     (endX, endY) = (min(w - 1, endX), min(h - 1, endY))
+
+     return startX, startY, endX, endY
+
+
+ def display_info(img, text, margin=1.0, box_scale=1.0):
+     img_copy = img.copy()
+     img_h, img_w, _ = img_copy.shape
+     line_width = int(min(img_h, img_w) * 0.001)
+     thickness = max(int(line_width / 3), 1)
+
+     font_face = cv2.FONT_HERSHEY_SIMPLEX
+     font_color = (0, 0, 0)
+     font_scale = thickness / 1.5
+
+     t_w, t_h = cv2.getTextSize(text, font_face, font_scale, None)[0]
+
+     margin_n = int(t_h * margin)
+     sub_img = img_copy[0 + margin_n: 0 + margin_n + t_h + int(2 * t_h * box_scale),
+                        img_w - t_w - margin_n - int(2 * t_h * box_scale): img_w - margin_n]
+
+     white_rect = np.ones(sub_img.shape, dtype=np.uint8) * 255
+
+     img_copy[0 + margin_n: 0 + margin_n + t_h + int(2 * t_h * box_scale),
+              img_w - t_w - margin_n - int(2 * t_h * box_scale):img_w - margin_n] = cv2.addWeighted(sub_img, 0.5, white_rect, .5, 1.0)
+
+     cv2.putText(img=img_copy,
+                 text=text,
+                 org=(img_w - t_w - margin_n - int(2 * t_h * box_scale) // 2,
+                      0 + margin_n + t_h + int(2 * t_h * box_scale) // 2),
+                 fontFace=font_face,
+                 fontScale=font_scale,
+                 color=font_color,
+                 thickness=thickness,
+                 lineType=cv2.LINE_AA,
+                 bottomLeftOrigin=False)
+
+     return img_copy
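MediaPipe returns landmarks normalized to [0, 1], so `norm_coordinates` scales them by the frame size and `get_box` clamps the resulting bounding box to the image bounds. A tiny worked example with synthetic landmarks follows; the `SimpleNamespace` stand-ins are hypothetical and only mimic the attributes the functions read.

```python
# Worked example for norm_coordinates/get_box on a 640x480 frame with fake landmarks.
from types import SimpleNamespace
from app.face_utils import norm_coordinates, get_box

w, h = 640, 480
print(norm_coordinates(0.5, 0.25, w, h))    # -> (320, 120)

fl = SimpleNamespace(landmark=[SimpleNamespace(x=0.30, y=0.20),
                               SimpleNamespace(x=0.55, y=0.45),
                               SimpleNamespace(x=0.70, y=0.90)])
print(get_box(fl, w, h))                    # -> (192, 96, 448, 432)
```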
app/app_model.py ADDED
@@ -0,0 +1,64 @@
+ """
+ File: model.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: This module provides functions for loading and processing a pre-trained deep learning model
+              for facial expression recognition.
+ License: MIT License
+ """
+
+ import torch
+ import requests
+ from PIL import Image
+ from torchvision import transforms
+ from pytorch_grad_cam import GradCAM
+
+ # Importing necessary components for the Gradio app
+ from app.config import config_data
+ from app.model_architectures import ResNet50, LSTMPyTorch
+
+
+ def load_model(model_url, model_path):
+     try:
+         with requests.get(model_url, stream=True) as response:
+             with open(model_path, "wb") as file:
+                 for chunk in response.iter_content(chunk_size=8192):
+                     file.write(chunk)
+         return model_path
+     except Exception as e:
+         print(f"Error loading model: {e}")
+         return None
+
+ path_static = load_model(config_data.model_static_url, config_data.model_static_path)
+ pth_model_static = ResNet50(7, channels=3)
+ pth_model_static.load_state_dict(torch.load(path_static))
+ pth_model_static.eval()
+
+ path_dynamic = load_model(config_data.model_dynamic_url, config_data.model_dynamic_path)
+ pth_model_dynamic = LSTMPyTorch()
+ pth_model_dynamic.load_state_dict(torch.load(path_dynamic))
+ pth_model_dynamic.eval()
+
+ target_layers = [pth_model_static.layer4]
+ cam = GradCAM(model=pth_model_static, target_layers=target_layers)
+
+ def pth_processing(fp):
+     class PreprocessInput(torch.nn.Module):
+         def __init__(self):
+             super(PreprocessInput, self).__init__()
+
+         def forward(self, x):
+             x = x.to(torch.float32)
+             x = torch.flip(x, dims=(0,))
+             x[0, :, :] -= 91.4953
+             x[1, :, :] -= 103.8827
+             x[2, :, :] -= 131.0912
+             return x
+
+     def get_img_torch(img, target_size=(224, 224)):
+         transform = transforms.Compose([transforms.PILToTensor(), PreprocessInput()])
+         img = img.resize(target_size, Image.Resampling.NEAREST)
+         img = transform(img)
+         img = torch.unsqueeze(img, 0)
+         return img
+
+     return get_img_torch(fp)
app/app_model_architectures.py ADDED
@@ -0,0 +1,150 @@
+ """
+ File: model_architectures.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: This module provides model architectures.
+ License: MIT License
+ """
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import math
+
+
+ class Bottleneck(nn.Module):
+     expansion = 4
+
+     def __init__(self, in_channels, out_channels, i_downsample=None, stride=1):
+         super(Bottleneck, self).__init__()
+
+         self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, padding=0, bias=False)
+         self.batch_norm1 = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.99)
+
+         self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding='same', bias=False)
+         self.batch_norm2 = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.99)
+
+         self.conv3 = nn.Conv2d(out_channels, out_channels*self.expansion, kernel_size=1, stride=1, padding=0, bias=False)
+         self.batch_norm3 = nn.BatchNorm2d(out_channels*self.expansion, eps=0.001, momentum=0.99)
+
+         self.i_downsample = i_downsample
+         self.stride = stride
+         self.relu = nn.ReLU()
+
+     def forward(self, x):
+         identity = x.clone()
+         x = self.relu(self.batch_norm1(self.conv1(x)))
+
+         x = self.relu(self.batch_norm2(self.conv2(x)))
+
+         x = self.conv3(x)
+         x = self.batch_norm3(x)
+
+         # downsample if needed
+         if self.i_downsample is not None:
+             identity = self.i_downsample(identity)
+         # add identity
+         x += identity
+         x = self.relu(x)
+
+         return x
+
+
+ class Conv2dSame(torch.nn.Conv2d):
+
+     def calc_same_pad(self, i: int, k: int, s: int, d: int) -> int:
+         return max((math.ceil(i / s) - 1) * s + (k - 1) * d + 1 - i, 0)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         ih, iw = x.size()[-2:]
+
+         pad_h = self.calc_same_pad(i=ih, k=self.kernel_size[0], s=self.stride[0], d=self.dilation[0])
+         pad_w = self.calc_same_pad(i=iw, k=self.kernel_size[1], s=self.stride[1], d=self.dilation[1])
+
+         if pad_h > 0 or pad_w > 0:
+             x = F.pad(
+                 x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2]
+             )
+         return F.conv2d(
+             x,
+             self.weight,
+             self.bias,
+             self.stride,
+             self.padding,
+             self.dilation,
+             self.groups,
+         )
+
+
+ class ResNet(nn.Module):
+     def __init__(self, ResBlock, layer_list, num_classes, num_channels=3):
+         super(ResNet, self).__init__()
+         self.in_channels = 64
+
+         self.conv_layer_s2_same = Conv2dSame(num_channels, 64, 7, stride=2, groups=1, bias=False)
+         self.batch_norm1 = nn.BatchNorm2d(64, eps=0.001, momentum=0.99)
+         self.relu = nn.ReLU()
+         self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2)
+
+         self.layer1 = self._make_layer(ResBlock, layer_list[0], planes=64, stride=1)
+         self.layer2 = self._make_layer(ResBlock, layer_list[1], planes=128, stride=2)
+         self.layer3 = self._make_layer(ResBlock, layer_list[2], planes=256, stride=2)
+         self.layer4 = self._make_layer(ResBlock, layer_list[3], planes=512, stride=2)
+
+         self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
+         self.fc1 = nn.Linear(512*ResBlock.expansion, 512)
+         self.relu1 = nn.ReLU()
+         self.fc2 = nn.Linear(512, num_classes)
+
+     def extract_features(self, x):
+         x = self.relu(self.batch_norm1(self.conv_layer_s2_same(x)))
+         x = self.max_pool(x)
+         # print(x.shape)
+         x = self.layer1(x)
+         x = self.layer2(x)
+         x = self.layer3(x)
+         x = self.layer4(x)
+
+         x = self.avgpool(x)
+         x = x.reshape(x.shape[0], -1)
+         x = self.fc1(x)
+         return x
+
+     def forward(self, x):
+         x = self.extract_features(x)
+         x = self.relu1(x)
+         x = self.fc2(x)
+         return x
+
+     def _make_layer(self, ResBlock, blocks, planes, stride=1):
+         ii_downsample = None
+         layers = []
+
+         if stride != 1 or self.in_channels != planes*ResBlock.expansion:
+             ii_downsample = nn.Sequential(
+                 nn.Conv2d(self.in_channels, planes*ResBlock.expansion, kernel_size=1, stride=stride, bias=False, padding=0),
+                 nn.BatchNorm2d(planes*ResBlock.expansion, eps=0.001, momentum=0.99)
+             )
+
+         layers.append(ResBlock(self.in_channels, planes, i_downsample=ii_downsample, stride=stride))
+         self.in_channels = planes*ResBlock.expansion
+
+         for i in range(blocks-1):
+             layers.append(ResBlock(self.in_channels, planes))
+
+         return nn.Sequential(*layers)
+
+
+ def ResNet50(num_classes, channels=3):
+     return ResNet(Bottleneck, [3, 4, 6, 3], num_classes, channels)
+
+
+ class LSTMPyTorch(nn.Module):
+     def __init__(self):
+         super(LSTMPyTorch, self).__init__()
+
+         self.lstm1 = nn.LSTM(input_size=512, hidden_size=512, batch_first=True, bidirectional=False)
+         self.lstm2 = nn.LSTM(input_size=512, hidden_size=256, batch_first=True, bidirectional=False)
+         self.fc = nn.Linear(256, 7)
+         self.softmax = nn.Softmax(dim=1)
+
+     def forward(self, x):
+         x, _ = self.lstm1(x)
+         x, _ = self.lstm2(x)
+         x = self.fc(x[:, -1, :])
+         x = self.softmax(x)
+         return x
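As a quick sanity check of the two architectures, the sketch below (assuming random, untrained weights; the real weights are downloaded in app/model.py) confirms that `extract_features` yields a 512-dimensional vector per 224×224 crop and that `LSTMPyTorch` maps a `(batch, 10, 512)` sequence to 7 emotion probabilities.

```python
# Shape check only; no pretrained weights are loaded here.
import torch
from app.model_architectures import ResNet50, LSTMPyTorch

static = ResNet50(7, channels=3)
dynamic = LSTMPyTorch()

img = torch.randn(1, 3, 224, 224)
feats = static.extract_features(img)       # -> torch.Size([1, 512])
seq = feats.unsqueeze(1).repeat(1, 10, 1)  # -> torch.Size([1, 10, 512])
probs = dynamic(seq)                       # -> torch.Size([1, 7]), rows sum to 1
print(feats.shape, probs.shape)
```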
app/app_plot.py ADDED
@@ -0,0 +1,29 @@
+ """
+ File: plot.py
+ Author: Elena Ryumina and Dmitry Ryumin
+ Description: Plotting statistical information.
+ License: MIT License
+ """
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ # Importing necessary components for the Gradio app
+ from app.config import DICT_EMO, COLORS
+
+
+ def statistics_plot(frames, probs):
+     fig, ax = plt.subplots(figsize=(10, 4))
+     fig.subplots_adjust(left=0.07, bottom=0.14, right=0.98, top=0.8, wspace=0, hspace=0)
+     # Set the left, bottom, right, and top margins to leave room for the legend and axis titles
+     probs = np.array(probs)
+     for i in range(7):
+         try:
+             ax.plot(frames, probs[:, i], label=DICT_EMO[i], c=COLORS[i])
+         except Exception:
+             return None
+
+     ax.legend(loc='upper center', bbox_to_anchor=(0.47, 1.2), ncol=7, fontsize=12)
+     ax.set_xlabel('Frames', fontsize=12)  # Label the X axis
+     ax.set_ylabel('Probability', fontsize=12)  # Label the Y axis
+     ax.grid(True)
+     return plt
config.toml ADDED
@@ -0,0 +1,10 @@
+ APP_VERSION = "0.2.0"
+ FRAME_DOWNSAMPLING = 5
+
+ [model_static]
+ url = "https://huggingface.co/ElenaRyumina/face_emotion_recognition/resolve/main/FER_static_ResNet50_AffectNet.pt"
+ path = "FER_static_ResNet50_AffectNet.pt"
+
+ [model_dynamic]
+ url = "https://huggingface.co/ElenaRyumina/face_emotion_recognition/resolve/main/FER_dinamic_LSTM_IEMOCAP.pt"
+ path = "FER_dinamic_LSTM_IEMOCAP.pt"
flake8 ADDED
@@ -0,0 +1,5 @@
+ ; https://www.flake8rules.com/
+
+ [flake8]
+ max-line-length = 120
+ ignore = E203, E402, E741, W503
gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
gitignore ADDED
@@ -0,0 +1,172 @@
+ # Compiled source #
+ ###################
+ *.com
+ *.class
+ *.dll
+ *.exe
+ *.o
+ *.so
+ *.pyc
+
+ # Packages #
+ ############
+ # it's better to unpack these files and commit the raw source
+ # git has its own built in compression methods
+ *.7z
+ *.dmg
+ *.gz
+ *.iso
+ *.rar
+ #*.tar
+ *.zip
+
+ # Logs and databases #
+ ######################
+ *.log
+ *.sqlite
+
+ # OS generated files #
+ ######################
+ .DS_Store
+ ehthumbs.db
+ Icon
+ Thumbs.db
+ .tmtags
+ .idea
+ .vscode
+ tags
+ vendor.tags
+ tmtagsHistory
+ *.sublime-project
+ *.sublime-workspace
+ .bundle
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+ node_modules/
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # celery beat schedule file
+ celerybeat-schedule
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # Custom
+ *.pth
+ *.pt
images/images_fig1.jpg ADDED
images/images_fig2.jpg ADDED
images/images_fig3.jpg ADDED
images/images_fig4.jpg ADDED
images/images_fig5.jpg ADDED
images/images_fig6.jpg ADDED
images/images_fig7.jpg ADDED
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ gradio==4.15.0
+ requests==2.31.0
+ torch==2.1.2
+ torchaudio==2.1.2
+ torchvision==0.16.2
+ mediapipe==0.10.9
+ pillow==10.2.0
+ toml==0.10.
+ grad-cam==1.5.0
videos/videos_video1.mp4 ADDED
Binary file (680 kB).
 
videos/videos_video2.mp4 ADDED
Binary file (182 kB).