---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
model-index:
- name: gemma-7b-spanishbillionwords
  results: []
---

# gemma-7b-spanishbillionwords

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co./google/gemma-7b) on the [Spanish Billion Words](https://huggingface.co./datasets/jhonparra18/spanish_billion_words_clean) corpus. It is the base Gemma model fine-tuned to perform better on the Spanish language. It achieves the following results on the evaluation set:
- Loss: 12.1686

## Model description

This repository contains a PEFT adapter for google/gemma-7b, trained with TRL's supervised fine-tuning (SFT) setup on the Spanish Billion Words corpus. The base Gemma weights are not included here; they must be loaded separately and combined with this adapter.

## Intended uses & limitations

The adapter is intended for Spanish text generation. Note that training ran for only 30 steps and the final evaluation loss (12.1686) remains high, so this checkpoint is best treated as a demonstration of the fine-tuning setup rather than a production-ready Spanish model.
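
A hedged usage sketch follows. The adapter Hub id below is a placeholder (the owner prefix is not stated in this card), and loading assumes you have access to the gated google/gemma-7b weights:

```python
# Minimal inference sketch. The adapter id is a placeholder; replace it with
# the actual Hub id of this repository.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b"
adapter_id = "<user>/gemma-7b-spanishbillionwords"  # hypothetical Hub id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("Madrid es la capital de", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```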

## Training and evaluation data

Training used the [Spanish Billion Words](https://huggingface.co./datasets/jhonparra18/spanish_billion_words_clean) corpus (the cleaned variant), a large collection of unannotated Spanish text. The exact train/evaluation split used for the run above was not recorded.
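
As a short sketch, the corpus can be loaded with the `datasets` library; the `text` column name is an assumption based on typical plain-text corpora, so check the dataset card:

```python
from datasets import load_dataset

# Load the cleaned Spanish Billion Words corpus; "text" is an assumed column name.
dataset = load_dataset("jhonparra18/spanish_billion_words_clean", split="train")
print(dataset[0]["text"][:200])  # preview the first document
```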

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a hedged script sketch reproducing them follows the list:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 30
- mixed_precision_training: Native AMP
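
Below is a hedged sketch of a training script matching these hyperparameters, using TRL's `SFTTrainer` (per the `trl`/`sft` tags). The LoRA configuration, `max_seq_length`, and dataset column name are assumptions not recorded in this card; the optimizer settings listed above are the `TrainingArguments` defaults:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("jhonparra18/spanish_billion_words_clean", split="train")

args = TrainingArguments(
    output_dir="gemma-7b-spanishbillionwords",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=30,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)  # assumed values

trainer = SFTTrainer(
    model="google/gemma-7b",    # SFTTrainer accepts a Hub id and loads the model
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name
    max_seq_length=512,         # assumption; not recorded in the card
)
trainer.train()
# Evaluation settings (the per-step validation losses in the table below)
# are omitted here, since the eval split was not recorded.
```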

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7924        | 0.0   | 1    | 12.2361         |
| 4.4499        | 0.0   | 2    | 12.1524         |
| 3.9217        | 0.0   | 3    | 12.0710         |
| 4.3292        | 0.0   | 4    | 12.1710         |
| 6.6619        | 0.0   | 5    | 12.1710         |
| 4.4917        | 0.0   | 6    | 12.2628         |
| 4.8346        | 0.0   | 7    | 12.3997         |
| 3.6987        | 0.0   | 8    | 12.4212         |
| 6.0457        | 0.0   | 9    | 12.4049         |
| 3.7882        | 0.0   | 10   | 12.4228         |
| 3.9878        | 0.0   | 11   | 12.4168         |
| 5.1707        | 0.0   | 12   | 12.3961         |
| 3.7024        | 0.0   | 13   | 12.3430         |
| 5.8496        | 0.0   | 14   | 12.3009         |
| 5.1708        | 0.0   | 15   | 12.2863         |
| 4.9796        | 0.0   | 16   | 12.2789         |
| 4.3754        | 0.0   | 17   | 12.2600         |
| 4.8339        | 0.0   | 18   | 12.2371         |
| 4.0352        | 0.0   | 19   | 12.2284         |
| 3.9643        | 0.0   | 20   | 12.2266         |
| 3.6923        | 0.0   | 21   | 12.2103         |
| 4.8213        | 0.0   | 22   | 12.2015         |
| 3.8048        | 0.0   | 23   | 12.1901         |
| 4.3145        | 0.0   | 24   | 12.1837         |
| 3.6633        | 0.0   | 25   | 12.1811         |
| 4.2401        | 0.0   | 26   | 12.1775         |
| 3.3954        | 0.0   | 27   | 12.1757         |
| 5.631         | 0.0   | 28   | 12.1720         |
| 3.8886        | 0.0   | 29   | 12.1714         |
| 4.3891        | 0.0   | 30   | 12.1686         |


### Framework versions

- PEFT 0.8.2
- Transformers 4.38.0
- PyTorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2