unnir committed on
Commit f888b81
1 Parent(s): 83c2fa6

Update README.md

Files changed (1)
  1. README.md (+86, −0)
README.md CHANGED
@@ -114,6 +114,92 @@ The model demonstrates strong performance across various sentiment categories. H

## JS example

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Tabularis Sentiment Analysis</title>
</head>
<body>
<div id="output"></div>

<script type="module">
// Pin a specific version of @xenova/transformers in production.
import { AutoTokenizer, AutoModelForSequenceClassification, env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

// Fetch model files from the Hugging Face Hub rather than a local path.
env.allowLocalModels = false;

const MODEL_NAME = 'tabularisai/bert-base-uncased-sentiment-five-classes';

// Numerically stable softmax: subtract the max logit before exponentiating.
function softmax(arr) {
    const max = Math.max(...arr);
    const exp = arr.map(x => Math.exp(x - max));
    const sum = exp.reduce((acc, val) => acc + val);
    return exp.map(x => x / sum);
}

async function analyzeSentiment() {
    try {
        const tokenizer = await AutoTokenizer.from_pretrained(MODEL_NAME);
        const model = await AutoModelForSequenceClassification.from_pretrained(MODEL_NAME);

        const texts = [
            "I absolutely loved this movie! The acting was superb and the plot was engaging.",
            "The service at this restaurant was terrible. I'll never go back.",
            "The product works as expected. Nothing special, but it gets the job done.",
            "I'm somewhat disappointed with my purchase. It's not as good as I hoped.",
            "This book changed my life! I couldn't put it down and learned so much."
        ];

        const output = document.getElementById('output');

        for (const text of texts) {
            // The tokenizer returns tensors directly; no `return_tensors` option is needed in JS.
            const inputs = await tokenizer(text);
            const result = await model(inputs);

            console.log('Model output:', result);

            if (result.logits && result.logits.data) {
                const logitsArray = Array.from(result.logits.data);
                console.log('Logits array:', logitsArray);

                const probabilities = softmax(logitsArray);
                const predicted_class = probabilities.indexOf(Math.max(...probabilities));

                // Class indices follow the model's five sentiment labels.
                const sentimentMap = {
                    0: "Very Negative",
                    1: "Negative",
                    2: "Neutral",
                    3: "Positive",
                    4: "Very Positive"
                };

                const sentiment = sentimentMap[predicted_class];
                const score = probabilities[predicted_class];

                output.innerHTML += `Text: "${text}"<br>`;
                output.innerHTML += `Sentiment: ${sentiment}, Score: ${score.toFixed(4)}<br><br>`;
            } else {
                console.error('Unexpected model output structure:', result);
                output.innerHTML += `Unable to process: "${text}"<br><br>`;
            }
        }
    } catch (error) {
        console.error('Error:', error);
        document.getElementById('output').innerHTML = 'An error occurred. Please check the console for details.';
    }
}

analyzeSentiment();
</script>
</body>
</html>
```
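
For quick experiments, the same inference can be done with the higher-level `pipeline` API from transformers.js, which bundles tokenization, the forward pass, softmax, and label lookup into a single call. The snippet below is a minimal sketch, assuming the repository ships ONNX weights consumable by transformers.js and that its config maps class indices to the five sentiment labels:

```js
// Minimal sketch: high-level pipeline API (same model, fewer moving parts).
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

const classifier = await pipeline(
    'text-classification',
    'tabularisai/bert-base-uncased-sentiment-five-classes'
);

const result = await classifier("I absolutely loved this movie!");
console.log(result); // e.g. [{ label: 'Very Positive', score: 0.98 }] (illustrative shape, not a real run)
```

Because both examples load the model over the network from inside a module script, serve the page over HTTP(S); opening the HTML file directly from disk may be blocked by the browser.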

## Training Procedure

The model was fine-tuned on synthetic data using the `bert-base-uncased` architecture. The training process involved: