---
model_size: 494034560
required_memory: 1.84
accuracy: 0.5080275229357798
metrics:
- GLUE_SST-2
license: apache-2.0
datasets:
- jtatman/python-code-dataset-500k
- Vezora/Tested-143k-Python-Alpaca
language:
- en
- es
base_model: Qwen/Qwen2-0.5B
library_name: adapter-transformers
tags:
- llama
- minillama
- tinyllama
- tiny
- mini
- nemo
- minitron
- tinytron
- ollama
---
|
|
|
# Uploaded model
|
|
|
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1) |
|
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Fine-tuned from model:** Agnuxo/Tinytron-Qwen2-0.5B
|
|
|
This model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
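Below is a minimal usage sketch with the Transformers library. The repository id is a placeholder taken from the base-model name on this card; substitute the id of this upload.

```python
# Minimal inference sketch. REPO_ID is a placeholder; replace it with the
# actual Hugging Face id of this upload.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "Agnuxo/Tinytron-Qwen2-0.5B"  # placeholder id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```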
|
|
|
## Benchmark Results
|
|
|
This model has been fine-tuned for various tasks and evaluated on the following benchmark:
|
|
|
### GLUE_SST-2

**Accuracy:** 0.5080
|
|
|
![GLUE_SST-2 Metrics](./GLUE_SST-2_metrics.png)
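The SST-2 accuracy above comes from the author's own evaluation. As a rough, hypothetical way to reproduce a comparable number on the GLUE validation split, one could compare the log-likelihood the model assigns to a "positive" versus "negative" continuation; the prompt format and repository id below are assumptions, not the exact protocol used.

```python
# Hypothetical SST-2 accuracy check for a causal LM: score the log-probability
# of " positive" vs " negative" continuations and pick the higher one.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "Agnuxo/Tinytron-Qwen2-0.5B"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID).eval()

def option_score(prompt: str, option: str) -> float:
    """Sum of log-probs the model assigns to the option tokens after the prompt."""
    full = tokenizer(prompt + option, return_tensors="pt")
    n_prompt = len(tokenizer(prompt)["input_ids"])
    with torch.no_grad():
        log_probs = model(**full).logits.log_softmax(-1)
    ids = full["input_ids"][0]
    # Each option token at position i is predicted from position i - 1.
    return sum(log_probs[0, i - 1, ids[i]].item() for i in range(n_prompt, len(ids)))

sst2 = load_dataset("glue", "sst2", split="validation")
correct = 0
for ex in sst2:
    prompt = f"Review: {ex['sentence']}\nSentiment:"
    pred = 1 if option_score(prompt, " positive") > option_score(prompt, " negative") else 0
    correct += int(pred == ex["label"])  # SST-2: 0 = negative, 1 = positive
print("accuracy:", correct / len(sst2))
```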
|
|
|
|
|
Model Size: 494,034,560 parameters

Required Memory: 1.84 GB
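The memory figure is consistent with storing every parameter as a 32-bit float; a quick sanity check (assuming the 1.84 value above is GiB of fp32 weights):

```python
# Back-of-the-envelope check: 494,034,560 fp32 parameters at 4 bytes each.
params = 494_034_560
bytes_fp32 = params * 4
print(bytes_fp32 / 1024**3)  # ≈ 1.84 GiB
```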
|
|
|
For more details, visit my [GitHub](https://github.com/Agnuxo1).
|
|
|
Thanks for your interest in this model!