---
|
|
|
datasets: |
|
- cerebras/SlimPajama-627B |
|
- HuggingFaceH4/ultrachat_200k |
|
- bigcode/starcoderdata |
|
- HuggingFaceH4/ultrafeedback_binarized |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
- speed |
|
library_name: transformers |
|
tags: |
|
- coder |
|
- Text-Generation |
|
- Transformers |
|
- HelpingAI |
|
license: mit |
|
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain to me how AI works.</s>
    <|assistant|>
|
model-index:
- name: HelpingAI-Lite
  results:
  - task:
      type: text-generation
    metrics:
    - name: Epoch
      type: Training Epoch
      value: 3
    - name: Eval Logits/Chosen
      type: Evaluation Logits for Chosen Samples
      value: -2.707406759262085
    - name: Eval Logits/Rejected
      type: Evaluation Logits for Rejected Samples
      value: -2.65652441978546
    - name: Eval Logps/Chosen
      type: Evaluation Log-probabilities for Chosen Samples
      value: -370.129670421875
    - name: Eval Logps/Rejected
      type: Evaluation Log-probabilities for Rejected Samples
      value: -296.073825390625
    - name: Eval Loss
      type: Evaluation Loss
      value: 0.513750433921814
    - name: Eval Rewards/Accuracies
      type: Evaluation Rewards and Accuracies
      value: 0.738095223903656
    - name: Eval Rewards/Chosen
      type: Evaluation Rewards for Chosen Samples
      value: -0.0274422804903984
    - name: Eval Rewards/Margins
      type: Evaluation Rewards Margins
      value: 1.008722543614307
    - name: Eval Rewards/Rejected
      type: Evaluation Rewards for Rejected Samples
      value: -1.03616464138031
    - name: Eval Runtime
      type: Evaluation Runtime (seconds)
      value: 93.5908
    - name: Eval Samples
      type: Number of Evaluation Samples
      value: 2000
    - name: Eval Samples per Second
      type: Evaluation Samples per Second
      value: 21.37
    - name: Eval Steps per Second
      type: Evaluation Steps per Second
      value: 0.673
|
|
|
--- |
|
|
|
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) |
|
|
|
# QuantFactory/HelpingAI-Lite-GGUF |
|
This is a quantized version of [OEvortex/HelpingAI-Lite](https://huggingface.co./OEvortex/HelpingAI-Lite), created using llama.cpp.
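
Since this repo ships GGUF files, the model can also be run without `transformers`. Below is a minimal sketch using the `llama-cpp-python` bindings; the quant file name is a placeholder, so substitute whichever quantization level you downloaded.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The model_path is a placeholder; point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="HelpingAI-Lite.Q4_K_M.gguf", n_ctx=2048)

# The widget examples above use the <|system|>/<|user|>/<|assistant|> format,
# so the raw prompt is built the same way here.
prompt = (
    "<|system|>\nYou are a chatbot who can code!</s>\n"
    "<|user|>\nWrite hello world in Python.</s>\n"
    "<|assistant|>\n"
)

output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```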
|
|
|
# Original Model Card |
|
|
|
|
|
# HelpingAI-Lite |
|
# Subscribe to my YouTube channel |
|
[Subscribe](https://youtube.com/@OEvortex) |
|
|
|
A GGUF version is available [here](https://huggingface.co./OEvortex/HelpingAI-Lite-GGUF).
|
|
|
HelpingAI-Lite is a lite version of the HelpingAI model that can assist with coding tasks. It's trained on a diverse range of datasets and fine-tuned to provide accurate and helpful responses. |
|
|
|
## License |
|
|
|
This model is licensed under the MIT License.
|
|
|
## Datasets |
|
|
|
The model was trained on the following datasets: |
|
- cerebras/SlimPajama-627B |
|
- bigcode/starcoderdata |
|
- HuggingFaceH4/ultrachat_200k |
|
- HuggingFaceH4/ultrafeedback_binarized |
|
|
|
## Language |
|
|
|
The model supports the English language.
|
|
|
## Usage |
|
|
|
### CPU and GPU code
|
|
|
```python |
|
from transformers import pipeline |
|
from accelerate import Accelerator |
|
|
|
# Initialize the accelerator |
|
accelerator = Accelerator() |
|
|
|
# Initialize the pipeline |
|
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device) |
|
|
|
# Define the messages |
|
messages = [ |
|
{ |
|
"role": "system", |
|
"content": "You are a chatbot who can help code!", |
|
}, |
|
{ |
|
"role": "user", |
|
"content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.", |
|
}, |
|
] |
|
|
|
# Prepare the prompt |
|
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) |
|
|
|
# Generate predictions |
|
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) |
|
|
|
# Print the generated text |
|
print(outputs[0]["generated_text"]) |
|
``` |
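
If you prefer explicit model and tokenizer handles over the pipeline, the sketch below is one way to do it; the `torch_dtype` and `device_map` choices are assumptions for a typical single-GPU setup, not part of the original card.

```python
# Minimal sketch with explicit model/tokenizer handles instead of a pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-Lite")
model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/HelpingAI-Lite",
    # Assumption: fp16 on GPU, fp32 on CPU; adjust to your hardware.
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```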
|
|