---
base_model: artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond
datasets:
- migtissera/Synthia-v1.5-II
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: apache-2.0
quantized_by: artificialguybr
tags:
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- facebook
- meta
- pytorch
- llama
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Thanks to [Redmond.AI](https://redmond.ai/) for sponsoring the GPUs!

GGUF quantization of: https://huggingface.co./artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond
## How to use
If you are unsure how to use GGUF files, see the [TheBloke
READMEs](https://huggingface.co./TheBloke/CodeLlama-70B-Python-GGUF) for
details, including how to concatenate multi-part files.
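
For parts produced with a plain byte splitter (the scheme used in the older TheBloke-style uploads), simple concatenation in order restores the original file. A minimal sketch with dummy data and hypothetical filenames (actual part names depend on the upload):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Simulate a model file split into sequential byte-range parts,
# as a generic splitter like `split -b` would produce.
printf 'GGUF-part-one-' > model.gguf.part1
printf 'part-two'       > model.gguf.part2

# Concatenating the parts in order restores the single file.
cat model.gguf.part1 model.gguf.part2 > model.gguf
```

Note that files split with llama.cpp's own `gguf-split` tool carry per-part headers and must be merged with that tool instead of `cat`.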