linbojunzi committed on
Commit 55ebd3d · verified · 1 Parent(s): a6875a9

Upload gpt4o_50.json

Files changed (1)
  1. gpt4o_50.json +302 -0
gpt4o_50.json ADDED
@@ -0,0 +1,302 @@
+ [
+ [
+ "2410.00001v1_figure_1.png",
+ "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/AugementedRealityFlow.png",
+ "Figure 1: iSurgARy general workflow. (From left to right): (1) The ‘acquire’ button sets a reference landmark and displays a red sphere to visualize its placement in AR. Above, we display all landmarks from a head top point of view. (2) The system displays the 3D model of the ventricles in AR with the registration’s RMSE. The process of selecting landmarks more accurately can be repeated until the user is satisfied with the RMSE. (3) The entry point placement feature enables the user to choose the catheter entry point on the surface of the head. We display the top view of Kocher’s point, a common entry point for ventriculostomy. (4) The catheter tracking feature tracks a QR code image and renders the tip of the catheter in AR. The insertion of the catheter through the entry point from a head top point of view is shown.",
+ "Chart"
+ ],
+ [
+ "2410.00001v1_figure_2.png",
+ "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/clinician.png",
+ "Figure 2: Left: Neurosurgeon performing registration by selecting the left tragus landmark. Right: Visualization of internal ventricular structure in AR and simulation of catheter insertion.",
+ "Chart"
+ ],
+ [
+ "2410.00001v1_figure_3.png",
+ "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/nurse.jpg",
+ "Figure 3: Participant uses the aim cursor positioned at the center of the screen to select landmarks.",
+ "Chart"
+ ],
+ [
+ "2410.00001v1_figure_4.png",
+ "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/nasa.png",
+ "Figure 4: NASA TLX scores for each dimension of workload.",
+ "Chart"
+ ],
+ [
+ "2410.00001v1_figure_5.png",
+ "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/IPhone_holder.png",
+ "Figure 5: After feedback from clinicians, we began using a mobile phone holder to address ergonomic issues.",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_1.png",
+ "http://arxiv.org/html/2410.00003v2/x1.png",
+ "Figure 1. Semantic interpretations for HAR",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_2.png",
+ "http://arxiv.org/html/2410.00003v2/x2.png",
+ "Figure 2. Example of semantic interpretations of sensor readings and activity labels",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_3(a).png",
+ "http://arxiv.org/html/2410.00003v2/x3.png",
+ "(a) Raw data\nFigure 3. Data distribution under three settings across two datasets",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_3(b).png",
+ "http://arxiv.org/html/2410.00003v2/x4.png",
+ "(b) Self-supervised learning\nFigure 3. Data distribution under three settings across two datasets",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_3(c).png",
+ "http://arxiv.org/html/2410.00003v2/x5.png",
+ "(c) Semantic interpretations\nFigure 3. Data distribution under three settings across two datasets",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_4.png",
+ "http://arxiv.org/html/2410.00003v2/x6.png",
+ "Figure 4. Semantic interpretations for new activities",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_5.png",
+ "http://arxiv.org/html/2410.00003v2/x7.png",
+ "Figure 5. Overview of LanHAR. (1) Utilize LLMs to generate semantic interpretations of sensor readings and activity labels. (2) Train a text encoder to encode the two types of semantic interpretations and achieve their alignment. H_i and Z_i denote embeddings of the semantic interpretations of activity labels and sensor readings. (3) Train a sensor encoder to align sensor readings and semantic interpretations. E_i denotes embeddings of sensor readings. (4) For inference, only use the sensor encoder to generate embeddings E_i for the sensor readings and then compute their similarity with the pre-stored embeddings H_i of the activity labels to obtain the human activity recognition results.",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_6.png",
+ "http://arxiv.org/html/2410.00003v2/x8.png",
+ "Figure 6. An example prompt for obtaining semantic interpretations of sensor readings",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_7.png",
+ "http://arxiv.org/html/2410.00003v2/x9.png",
+ "Figure 7. An example prompt for obtaining semantic interpretations of activity labels",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_8.png",
+ "http://arxiv.org/html/2410.00003v2/x10.png",
+ "Figure 8. The iterative re-generation method for ensuring the quality of LLM responses. (1) Filter inaccurate semantic interpretations. (2) Regenerate new semantic interpretations with LLMs. (3) Incorporate the new semantic interpretation based on whether it reduces the overall KL divergence.",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_9.png",
+ "http://arxiv.org/html/2410.00003v2/x11.png",
+ "Figure 9. Category-level HAR performance",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_10.png",
+ "http://arxiv.org/html/2410.00003v2/x12.png",
+ "Figure 10. The effect of the text encoder",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_11.png",
+ "http://arxiv.org/html/2410.00003v2/x13.png",
+ "Figure 11. The impact of different pre-trained LMs",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_12.png",
+ "http://arxiv.org/html/2410.00003v2/x14.png",
+ "Figure 12. The impact of different LLMs",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_13.png",
+ "http://arxiv.org/html/2410.00003v2/x15.png",
+ "Figure 13. KL divergence with and without the iterative re-generation method",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_14.png",
+ "http://arxiv.org/html/2410.00003v2/x16.png",
+ "Figure 14. The impact of different quality of LLMs’ responses",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_15.png",
+ "http://arxiv.org/html/2410.00003v2/x17.png",
+ "Figure 15. The effect of different numbers of attention heads",
+ "Chart"
+ ],
+ [
+ "2410.00003v2_figure_16.png",
+ "http://arxiv.org/html/2410.00003v2/x18.png",
+ "Figure 16. The effect of different numbers of encoding layers",
+ "Chart"
+ ],
+ [
+ "2410.00005v1_figure_1.png",
+ "http://arxiv.org/html/2410.00005v1/x1.png",
+ "Figure 1. Illustration of Task 1 framework.",
+ "Chart"
+ ],
+ [
+ "2410.00005v1_figure_2.png",
+ "http://arxiv.org/html/2410.00005v1/x2.png",
+ "Figure 2. Illustration of Task #2 and #3 framework.",
+ "Chart"
+ ],
+ [
+ "2410.00005v1_figure_3.png",
+ "http://arxiv.org/html/2410.00005v1/x3.png",
+ "Figure 3. Detailed evaluation of Team db3 solution",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_1.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image1.png",
+ "Figure 1. Typical architecture of a conversational interface, adapted from (McTear, 2018; Rough and Cowan, 2020)",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_2.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image2.png",
+ "Figure 2. Architecture of a conversational user interface based on Rasa",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_3.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image3.png",
+ "Figure 3. Sample dialog, using the chatroom of (scalableminds, 2020)",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_4.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image4.png",
+ "Figure 4. Node-RED flow editor with an example flow implementing an action server, i.e., fulfillment component, for a Rasa conversational user interface. To the left, the network section of the Node-RED palette is partially visible.",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_5.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image5.png",
+ "Figure 5. Example configuration of a switch node in Node-RED",
+ "Chart"
+ ],
+ [
+ "2410.00006v1_figure_6.png",
+ "http://arxiv.org/html/2410.00006v1/extracted/5851436/image6.png",
+ "Figure 6. Example configuration of a sendbuttons node in Node-RED",
+ "Chart"
+ ],
+ [
+ "2410.00010v1_figure_1.png",
+ "http://arxiv.org/html/2410.00010v1/x1.png",
+ "Figure 1: PHemoNet. Fully hypercomplex architecture with encoders operating in different hypercomplex domains according to n, and a refined hypercomplex fusion module with n_fusion = 4. Finally, a classification layer yields the final output for valence/arousal prediction.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_1.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/BlockDiagram.png",
+ "Figure 1: Block diagram of the actor-critic based EEG diffusion system.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_2.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Forward.png",
+ "Figure 2: The forward diffusion process of the EEG diffusion model.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_3.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Reverse.png",
+ "Figure 3: The backward diffusion process of the EEG diffusion model.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_4.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/U_Net.png",
+ "Figure 4: The architecture of EEG-U-Net.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_5.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/StepBlock.png",
+ "Figure 5: The step blocks of EEG-U-Net.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_6(a).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ConBlock.png",
+ "(a)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_6(b).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/TransCon.png",
+ "(b)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_6(c).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ResBlock.png",
+ "(c)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_7(a).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/CWTNet.png",
+ "(a)\nFigure 7: The architecture of continuous wavelet transform based convolutional networks and classification based convolutional networks. (a) Wavelet network. (b) Classification network.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_7(b).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ClassNet.png",
+ "(b)\nFigure 7: The architecture of continuous wavelet transform based convolutional networks and classification based convolutional networks. (a) Wavelet network. (b) Classification network.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_8(a).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ActorNet.png",
+ "(a)\nFigure 8: The architecture of the two networks used in the weight-guided agent. (a) Actor network. (b) Critic network.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_8(b).png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/CriticNet.png",
+ "(b)\nFigure 8: The architecture of the two networks used in the weight-guided agent. (a) Actor network. (b) Critic network.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_9.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/reward.png",
+ "Figure 9: The design process of a reward mechanism for the EEG generation system.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_10.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Channel.png",
+ "Figure 10: Electrode montage for the in-house EEG dataset.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_11.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Device.png",
+ "Figure 11: Equipment utilized for data collection.",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_12.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Experiment.png",
+ "Figure 12: Timing scheme of one session (top); timing scheme of the paradigm (bottom).",
+ "Chart"
+ ],
+ [
+ "2410.00013v1_figure_13.png",
+ "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/RealEEG.png",
+ "Figure 13: Real EEG signals drawn from the BCI competition 4-2a dataset.",
+ "Chart"
+ ]
+ ]
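
The diff suggests gpt4o_50.json is a flat JSON array in which each record is a 4-element list: image filename, source URL, figure caption, and a figure-type label. A minimal sketch of how such a file could be loaded and grouped by arXiv paper ID (the field names, grouping key, and output are illustrative assumptions, not part of the dataset itself):

```python
import json
from collections import defaultdict

# Load the uploaded annotation file. Schema assumed from the diff above:
# each record is [image_filename, source_url, caption, figure_type].
with open("gpt4o_50.json", encoding="utf-8") as f:
    records = json.load(f)

# Group figures by arXiv paper ID, taken from the image filename prefix,
# e.g. "2410.00001v1_figure_1.png" -> "2410.00001v1".
figures_by_paper = defaultdict(list)
for image_filename, source_url, caption, figure_type in records:
    paper_id = image_filename.split("_figure_")[0]
    figures_by_paper[paper_id].append(
        {"file": image_filename, "url": source_url,
         "caption": caption, "type": figure_type}
    )

# Print a per-paper figure count as a quick sanity check.
for paper_id, figures in sorted(figures_by_paper.items()):
    print(f"{paper_id}: {len(figures)} figure(s)")
```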