---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- falcon
base_model: tiiuae/falcon-7b
model-index:
- name: falcon7b-linear-equations
  results: []
datasets:
- Menouar/LinearEquations
language:
- en
pipeline_tag: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# falcon7b-linear-equations

This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co./tiiuae/falcon-7b) on a simple dataset of [linear equations](https://huggingface.co./datasets/Menouar/LinearEquations). 

## Model description

The objective of this model is to test Falcon7B's ability to solve mathematical linear equations after fine-tuning. The linear equations are in the form:

```
Ay + ay + b + B = Dy + dy + c + C
```
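
For instance, taking A=2, a=3, b=1, B=4, D=1, d=1, c=2, C=0 (an illustrative choice, not a sample taken from the dataset) gives:

```
2y + 3y + 1 + 4 = 1y + 1y + 2 + 0
5y + 5 = 2y + 2
3y = -3
y = -1
```
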
This model was trained using TRL, LoRA with quantization (QLoRA), and Flash Attention.

Due to limited GPU resources, I only considered 20,000 samples for training.

For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).


## Intended uses & limitations
The model can solve any equation of the form ```Ay + ay + b + B = Dy + dy + c + C``` with integer coefficients ranging from -10 to 10. It cannot solve linear equations that involve constants other than A, a, b, B, D, d, c, and C, nor equations whose coefficients fall outside the range -10 to 10. These limitations stem from the nature of the samples in the dataset and from the limited ability of Large Language Models (LLMs) to perform even simple arithmetic. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results on a narrow task, as is the case with this new model compared to the original one.
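
To make the supported input space concrete, here is a small illustrative sketch that samples a random equation of this form with coefficients in [-10, 10]; it is a hypothetical helper for illustration only, not the script used to generate the dataset:

```python
# Illustrative only: samples an equation of the supported form with integer
# coefficients in [-10, 10]. This is not the actual dataset-generation script.
import random

def sample_equation() -> str:
    while True:
        A, a, b, B, D, d, c, C = (random.randint(-10, 10) for _ in range(8))
        if A + a != D + d:  # ensure the equation has a unique solution for y
            return f"{A}y + {a}y + {b} + {B} = {D}y + {d}y + {c} + {C}"

print(sample_equation())
```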

## Training and evaluation data

I will complete the evaluation data later, but for now, here is an example of a linear equation where this model finds the correct solution, unlike other models such as ChatGPT-3.5, Bard, Llama 70B, and Mixtral:
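
The original comparison example is not reproduced here. Below is a minimal inference sketch, assuming the LoRA adapter in this repository is loaded with PEFT on top of the base Falcon-7B model; the prompt wording and the example equation are illustrative, not taken from the notebook:

```python
# Hedged inference sketch: load the PEFT adapter on top of tiiuae/falcon-7b and
# ask it to solve an equation of the trained form. The prompt and the equation
# below are illustrative assumptions; adapt them to the format used in training.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b"
adapter_id = "Menouar/falcon7b-linear-equations"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative equation with integer coefficients in [-10, 10]; its solution is y = 1.
prompt = "Solve for y: 3y + 5y + 2 + 7 = 2y + 1y + 4 + 10"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```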


## Training procedure

For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
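
Below is a minimal sketch of how these hyperparameters could map onto a TRL `SFTTrainer` run. The LoRA settings (`r`, `lora_alpha`, `lora_dropout`, `target_modules`), the `dataset_text_field` name, `max_seq_length`, and the precision flag are assumptions for illustration and are not reported in this card; the quantization (QLoRA) and Flash Attention setup are omitted for brevity. See the notebook linked above for the actual configuration.

```python
# Sketch only: wires the hyperparameters listed above into a TRL SFTTrainer run.
# LoRA values, the text column name, and bf16 are assumptions, not reported here;
# QLoRA quantization and Flash Attention (mentioned above) are omitted for brevity.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# The card mentions that only 20,000 samples were used for training.
train_dataset = load_dataset("Menouar/LinearEquations", split="train").select(range(20_000))

peft_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed
    lora_dropout=0.05,                    # assumed
    target_modules=["query_key_value"],   # assumed for the Falcon architecture
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="falcon7b-linear-equations",
    learning_rate=2e-4,
    per_device_train_batch_size=42,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # effective batch size 84
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
    bf16=True,                            # assumption; depends on the available GPU
)

trainer = SFTTrainer(
    model="tiiuae/falcon-7b",             # SFTTrainer loads the base model from the Hub
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",            # assumed column name in the dataset
    max_seq_length=512,                   # assumed
    peft_config=peft_config,
)
trainer.train()
```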

### Training results

The training results can be found [here](https://huggingface.co./Menouar/falcon7b-linear-equations/tensorboard).

### Framework versions

- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1