---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- cleverboi
- theprint
datasets:
- iamtarun/python_code_instructions_18k_alpaca
---
<img src="https://huggingface.co./theprint/CleverBoi-Gemma-2-9B/resolve/main/cleverboi.png"/>
# CleverBoi
This is an experimental fine-tune of [theprint/CleverBoi-Llama-3.1-8B-v2](https://huggingface.co./theprint/CleverBoi-Llama-3.1-8B-v2), further trained for 1 epoch on the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co./datasets/iamtarun/python_code_instructions_18k_alpaca) dataset.
The CleverBoi series is built on models fine-tuned on a collection of datasets that emphasize logic, inference, math, and coding, collectively known as the CleverBoi dataset.
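# Usage
The model can be loaded with the standard `transformers` text-generation API. The snippet below is a minimal sketch, not an official usage example: the repo id is a placeholder (substitute the id of this upload), and the Alpaca-style prompt is an assumption based on the format of the fine-tuning data rather than a documented template for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the id of this upload.
model_id = "theprint/CleverBoi-Llama-3.1-8B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float16 on GPUs without bfloat16 support
    device_map="auto",
)

# Alpaca-style prompt, mirroring the instruction format of the fine-tuning data (assumed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```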
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
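# Training Sketch
For reference, an Unsloth + TRL fine-tuning run of this kind typically follows the pattern below: Unsloth loads the base model in 4-bit, attaches LoRA adapters, and TRL's `SFTTrainer` runs one epoch over the dataset. This is an illustrative sketch only; the hyperparameters, the assumed `prompt` column in the dataset, and the older `SFTTrainer` keyword arguments (as used in Unsloth notebooks of this era) are assumptions, not the exact configuration used to train this model.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048

# Load the 4-bit base model and attach LoRA adapters via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)

dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

def format_example(example):
    # Assumes the dataset's "prompt" column holds the full Alpaca-formatted example;
    # append the EOS token so generation learns to stop.
    return {"text": example["prompt"] + tokenizer.eos_token}

dataset = dataset.map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the card states 1 epoch on this dataset
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```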