---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- generated_from_trainer
model-index:
- name: seq2seq-finetuned-slang-en
  results: []
---

# seq2seq-finetuned-slang-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co./Helsinki-NLP/opus-mt-en-ROMANCE) on a dataset of 1,772 messages written in slang English and manually translated into standard English.
Since this was a first approach, the dataset contains some typos, and insults and acronyms are not taken into account. You may also notice mistakes, such as 'ya' being translated almost always as 'yes' even though it can also mean 'you'.
These are nuances I am still working on.
I am also working on this task for other languages; if you like the project, please contact me.

It achieves the following results on the evaluation set:
- Loss: 0.0286

## Model description

More information needed

## Intended uses & limitations

This is a prototype, and I am working on a better model. The goal is to fine-tune a model that can translate slang English into standard English.
I am also working on applying it to other languages.
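
As a rough sketch, inference could look like the following with the `transformers` library; the repository id below is a placeholder assumption, so replace it with the actual Hub path of this model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "seq2seq-finetuned-slang-en"  # placeholder; use the actual Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate a slang English message into standard English.
inputs = tokenizer("ya gonna be there?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```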

## Training and evaluation data

The data is partitioned as follows: 85% for training and validation, and 15% for testing.
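
A minimal sketch of how such a split could be produced with the `datasets` library (the file name and format are assumptions for illustration):

```python
from datasets import load_dataset

# Hypothetical CSV of slang/standard English message pairs.
dataset = load_dataset("csv", data_files="slang_pairs.csv")["train"]

# Hold out 15% for testing; the remaining 85% covers training and validation.
split = dataset.train_test_split(test_size=0.15, seed=42)
train_val, test = split["train"], split["test"]
```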

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
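
As a sketch, these settings map onto `Seq2SeqTrainingArguments` roughly as follows; the output directory is a placeholder, and the optimizer settings listed above match the Trainer's defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="seq2seq-finetuned-slang-en",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # The default optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
)
```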

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0844        | 1.33  | 500  | 0.0714          |
| 0.0297        | 2.65  | 1000 | 0.0286          |


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1