
This model is an optimized (GGUF) version of `alibidaran/llama-2-7b-virtual_doctor` for running on CPU and GPU, so it can easily be used on personal computers.

## Uses

To use this model on a CPU, you first need to install a few libraries: `ctransformers` for the GGUF weights, plus `transformers` and `torch` for tokenization and generation.

```bash
pip install ctransformers transformers torch
```

You can then load the model and generate a response with the code below:

```python
from ctransformers import AutoModelForCausalLM
from transformers import AutoTokenizer
import torch

# Load the GGUF weights with ctransformers; hf=True wraps the model so it
# is compatible with the Hugging Face generate() API.
model = AutoModelForCausalLM.from_pretrained(
    "alibidaran/llama-2-7b-virtual_doctor-gguf", hf=True
)
# The tokenizer comes from the original (non-GGUF) repository.
tokenizer = AutoTokenizer.from_pretrained("alibidaran/llama-2-7b-virtual_doctor")

prompt = "Hi doctor, I have a runny nose and a fever, and I often feel tired. What should I do?"
# Wrap the question in the prompt template the model expects.
text = f"<s> ###Human: {prompt} ###Asistant: "
inputs = tokenizer(text, return_tensors="pt").to("cpu")
with torch.no_grad():
    outputs = model.generate(
        **inputs, max_new_tokens=200, do_sample=True,
        top_p=0.92, top_k=10, temperature=0.7,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
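
Since the GGUF weights also target GPU execution, ctransformers can offload transformer layers to a CUDA GPU through its `gpu_layers` argument. Below is a minimal sketch, assuming ctransformers was installed with CUDA support (`pip install ctransformers[cuda]`); `gpu_layers=50` is an example value to tune against your available VRAM, not a recommendation from this card:

```python
from ctransformers import AutoModelForCausalLM

# Offload layers to the GPU; gpu_layers=50 is an example value.
# Requires a ctransformers build with CUDA support.
model = AutoModelForCausalLM.from_pretrained(
    "alibidaran/llama-2-7b-virtual_doctor-gguf",
    model_type="llama",
    gpu_layers=50,
    hf=True,
)
```

The tokenization and generation code from the CPU example above can then be used unchanged.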