---
license: apache-2.0
datasets:
  - Open-Orca/SlimOrca
  - ajibawa-2023/SlimOrca-ShareGPT
language:
  - en
---

# SlimOrca-Llama-3-8B: A General Purpose Intelligent Model

This model is trained on a refined version of SlimOrca made available by the Open-Orca team. It is very good at various kinds of general-purpose content generation, such as Q&A (including multiple choice), writing articles from a summary, sentiment analysis, context & hypothesis, reviews, erotic story generation, etc. To a certain extent it can also generate uncensored content. Please be careful while generating uncensored content, as you are responsible for what you generate.

It is trained on 517,981 conversation sets, each set containing 2 conversations. I have shared this data.

I have used the ChatML prompt format.

All credit goes to the Open-Orca team for releasing the SlimOrca dataset.

Check the examples given below.

## Training

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 2 epochs took almost 114 hours. The Axolotl and DeepSpeed codebases were used for training. The model is trained on top of Meta's Llama-3.
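For readers curious what an Axolotl run like this looks like, the YAML sketch below is purely illustrative: every value (base model path, sequence length, batch sizes, learning rate, DeepSpeed config file) is my own assumption, not the author's actual training configuration.

```yaml
# Illustrative Axolotl config for a full fine-tune of Llama-3-8B on SlimOrca.
# All values are assumptions for demonstration, not the author's real settings.
base_model: meta-llama/Meta-Llama-3-8B
datasets:
  - path: ajibawa-2023/SlimOrca-ShareGPT
    type: sharegpt
    conversation: chatml
sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 2
learning_rate: 2e-5
deepspeed: deepspeed_configs/zero3_bf16.json
output_dir: ./slimorca-llama-3-8b
```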

This is a fully fine-tuned model. Links to quantized models are given below.

## GGUF & Exllama

GGUF: TBA

Exllama: TBA

## Example Prompt

This model uses the ChatML prompt format.

```
<|im_start|>system
You are a helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
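The prompt above can also be assembled programmatically before being passed to the tokenizer. A minimal sketch follows; the helper name `build_chatml_prompt` is my own invention for illustration, not part of any library.

```python
def build_chatml_prompt(user_prompt: str,
                        system_prompt: str = "You are a helpful Assistant.") -> str:
    """Assemble a ChatML prompt string, leaving the assistant turn open
    so the model generates the reply from there. (Illustrative helper.)"""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Summarize the plot of Hamlet in two sentences.")
print(prompt)
```

Swap in your own system prompt to steer the model's persona, and keep the trailing `<|im_start|>assistant\n` so generation starts inside the assistant turn.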

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

## Examples

*(Examples 1–4 are provided as screenshots in the model card.)*