---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- falcon
base_model: tiiuae/falcon-7b
model-index:
- name: falcon7b-linear-equations
results: []
datasets:
- Menouar/LinearEquations
language:
- en
pipeline_tag: text-generation
widget:
- text: "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
example_title: "Solve Linear Equations"
---
# falcon7b-linear-equations
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co./tiiuae/falcon-7b) on a simple dataset of [linear equations](https://huggingface.co./datasets/Menouar/LinearEquations).
## Model description
The objective of this model is to test Falcon7B's ability to solve mathematical linear equations after fine-tuning. The linear equations are in the form:
```
Ay + ay + b + B = Dy + dy + c + C
```
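To make the template concrete, here is a minimal, illustrative sketch (not taken from the training notebook; the function names are hypothetical) that generates equations of this form and solves them in closed form by collecting the y-terms on one side: (A + a - D - d)y = (c + C) - (b + B).
```python
import random
from fractions import Fraction

def random_equation(low: int = -10, high: int = 10):
    """Draw integer coefficients for Ay + ay + b + B = Dy + dy + c + C,
    re-drawing when the y-terms cancel and no unique solution exists."""
    while True:
        A, a, D, d = (random.randint(low, high) for _ in range(4))
        if A + a != D + d:
            break
    b, B, c, C = (random.randint(low, high) for _ in range(4))
    return A, a, b, B, D, d, c, C

def solve_for_y(A, a, b, B, D, d, c, C) -> Fraction:
    # Move y-terms to the left and constants to the right:
    # (A + a - D - d) * y = (c + C) - (b + B)
    return Fraction((c + C) - (b + B), (A + a) - (D + d))

A, a, b, B, D, d, c, C = random_equation()
print(f"{A}y + {a}y + {b} + {B} = {D}y + {d}y + {c} + {C}")
print("y =", solve_for_y(A, a, b, B, D, d, c, C))
```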
This model was trained using TRL, QLoRA (LoRA fine-tuning on a quantized base model), and Flash Attention.
Due to limited GPU resources, I used only 20,000 samples for training.
For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
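The sketch below is an illustrative reconstruction of such a setup with TRL's `SFTTrainer`, not the exact notebook code: the LoRA rank and alpha, the dataset column name, the sequence length, and the output directory are assumptions; the linked notebook is authoritative.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

# 4-bit NF4 quantization of the frozen base model (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # Flash Attention
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
tokenizer.pad_token = tokenizer.eos_token

# Illustrative LoRA settings; the actual values are in the notebook.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

dataset = load_dataset("Menouar/LinearEquations", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumed column name
    max_seq_length=512,         # assumed
    args=TrainingArguments(output_dir="falcon7b-linear-equations"),
)
trainer.train()
```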
## Intended uses & limitations
The model can solve any equation of the form `Ay + ay + b + B = Dy + dy + c + C` with integer coefficients ranging from -10 to 10. It cannot solve linear equations with more terms than the template's constants A, a, b, B, D, d, c, C, nor equations whose coefficients lie outside the range [-10, 10]. These limitations stem from the nature of the samples in the dataset and from the limited ability of Large Language Models (LLMs) to perform even simple arithmetic. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results on a narrow task, as this model shows compared to the original one.
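A minimal inference sketch with PEFT follows; the prompt format is assumed to match the widget example above, and loading details may differ from the notebook.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Menouar/falcon7b-linear-equations",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# If the adapter repo ships no tokenizer files, load "tiiuae/falcon-7b" instead.
tokenizer = AutoTokenizer.from_pretrained("Menouar/falcon7b-linear-equations")

prompt = "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```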
## Training and evaluation data
Evaluation data will be added later. For now, here is an example of a linear equation that this model solves correctly, unlike models such as ChatGPT-3.5, Bard, Llama 70B, and Mixtral:
<p align="center">
  <img src="https://huggingface.co./Menouar/falcon7b-linear-equations/resolve/main/Chatgpt_bard.jpg" alt="Comparison of this model's correct solution with incorrect answers from ChatGPT and Bard" />
</p>
## Training procedure
For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
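As a convenience, these settings correspond roughly to the following `TrainingArguments` sketch; the `output_dir` and optimizer implementation name are illustrative, while the rest mirrors the list above.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon7b-linear-equations",  # illustrative
    learning_rate=2e-4,
    per_device_train_batch_size=42,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 42 * 2 = 84
    optim="adamw_torch",            # Adam, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
)
```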
### Training results
The training results can be found [here](https://huggingface.co./Menouar/falcon7b-linear-equations/tensorboard).
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1