---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
model-index:
- name: Llama-3-portuguese-Tom-cat-8b-instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 70.4
name: accuracy
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 58
name: accuracy
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 51.07
name: accuracy
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 90.91
name: f1-macro
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 75.4
name: pearson
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 76.05
name: f1-macro
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 86.99
name: f1-macro
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 60.39
name: f1-macro
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 65.92
name: f1-macro
source:
url: >-
https://huggingface.co./spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
name: Open Portuguese LLM Leaderboard
---

# noxinc/Llama-3-portuguese-Tom-cat-8b-instruct-Q5_K_M-GGUF
This model was converted to GGUF format from rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
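If you only need the quantized file itself (for use with any GGUF-compatible runtime), a minimal sketch for downloading it with the `huggingface_hub` Python client is shown below; the repo and file names match the llama.cpp commands further down.

```python
# Minimal sketch: download the Q5_K_M GGUF file from this repo.
# Requires `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="noxinc/Llama-3-portuguese-Tom-cat-8b-instruct-Q5_K_M-GGUF",
    filename="llama-3-portuguese-tom-cat-8b-instruct.Q5_K_M.gguf",
)
print(gguf_path)  # local path to the downloaded model file
```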
## Use with llama.cpp
Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo noxinc/Llama-3-portuguese-Tom-cat-8b-instruct-Q5_K_M-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:

```bash
llama-server --hf-repo noxinc/Llama-3-portuguese-Tom-cat-8b-instruct-Q5_K_M-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q5_K_M.gguf -c 2048
```
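With the server running, recent llama.cpp builds expose an OpenAI-compatible HTTP API (by default on 127.0.0.1:8080). A minimal Python sketch for querying it, assuming that default host, port, and endpoint:

```python
# Hedged sketch: query a locally running llama-server via its
# OpenAI-compatible chat endpoint (assumes default 127.0.0.1:8080).
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explique em uma frase o que é o teorema de Pitágoras."}
        ],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```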
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-portuguese-tom-cat-8b-instruct.Q5_K_M.gguf -n 128
```
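The GGUF file can also be loaded directly from Python via the third-party llama-cpp-python bindings; a minimal sketch, assuming `pip install llama-cpp-python` and the model file in the current directory:

```python
# Hedged sketch: load the local GGUF with llama-cpp-python and run one chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-portuguese-tom-cat-8b-instruct.Q5_K_M.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Quem escreveu Dom Casmurro?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```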