waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF
This model was converted to GGUF format from rhaymison/Mistral-portuguese-luana-7b using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
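Outside of Samantha, the GGUF file can also be loaded directly from Python. Below is a minimal sketch using llama-cpp-python and huggingface_hub; the exact .gguf filename is an assumption, so confirm it in the repository's Files and versions tab.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized model file from the Hub (cached locally after the first run).
# NOTE: the filename below is assumed; check the repo's "Files and versions" tab.
model_path = hf_hub_download(
    repo_id="waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF",
    filename="mistral-portuguese-luana-7b-q4_k_m.gguf",
)

# Load the model on CPU; raise n_gpu_layers if llama-cpp-python was built with GPU support.
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=0)

# Simple deterministic completion (temperature 0), similar to Samantha's default settings.
output = llm(
    "Explique em poucas frases o que é aprendizado de máquina.",
    max_tokens=200,
    temperature=0.0,
)
print(output["choices"][0]["text"])
```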
✨ Use with Samantha Interface Assistant
GitHub project: https://github.com/controlecidadao/samantha_ia/blob/main/README.md
📺 Video: Intelligence Challenge - Microsoft Phi 3.5 vs Google Gemma 2: https://www.youtube.com/watch?v=KgicCGMSygU
👟 Testing a Model in 5 Steps with Samantha
Samantha needs just a .gguf model file to generate text. Follow these steps to perform a simple model test:
1) Open Windows Task Manager by pressing CTRL + SHIFT + ESC and check the available memory. Close some programs if necessary to free memory.
2) Visit the Hugging Face repository and click on the model card to open the corresponding page. Locate the Files and versions tab and choose a .gguf model that fits in your available memory (a Python sketch after these steps shows how to do this check programmatically).
3) Right-click the model's download link icon and copy its URL.
4) Paste the model URL into Samantha's Download models for testing field.
5) Insert a prompt into the User prompt field and press Enter. Keep the $$$ sign at the end of your prompt. The model will be downloaded and the response will be generated using the default deterministic settings. You can track this process via Windows Task Manager.
Every new model downloaded via this copy-and-paste procedure replaces the previous one to save hard drive space. The downloaded model is saved as MODEL_FOR_TESTING.gguf in your Downloads folder.
You can also download the model and save it permanently to your computer. For more details, visit Samantha's project on GitHub.
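If you prefer to script the memory check from steps 1 and 2, the sketch below lists the repository's .gguf files and compares their sizes against the RAM currently available. It assumes the huggingface_hub and psutil packages and is only a rough guide, since actual memory use also depends on context size and runtime overhead.

```python
# pip install huggingface_hub psutil
import psutil
from huggingface_hub import HfApi

REPO_ID = "waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF"

# List the .gguf files in the repo with their sizes
# (the programmatic equivalent of browsing the "Files and versions" tab).
api = HfApi()
info = api.model_info(REPO_ID, files_metadata=True)
gguf_files = [(f.rfilename, f.size) for f in info.siblings if f.rfilename.endswith(".gguf")]

# Compare each file against the currently available RAM
# (the same check as opening Windows Task Manager in step 1).
available = psutil.virtual_memory().available
print(f"Available RAM: {available / 2**30:.1f} GiB")
for name, size in gguf_files:
    if size is None:
        print(f"{name}: size unknown")
        continue
    verdict = "should fit" if size < available else "may not fit"
    print(f"{name}: {size / 2**30:.1f} GiB -> {verdict}")
```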
Model tree for waltervix/Mistral-portuguese-luana-7b-Q4_K_M-GGUF
Base model: mistralai/Mistral-7B-Instruct-v0.2
Evaluation results (Open Portuguese LLM Leaderboard)
- ENEM Challenge (No Images): accuracy 58.64
- BLUEX (No Images): accuracy 47.98
- OAB Exams: accuracy 38.82
- Assin2 RTE (test set): f1-macro 90.63
- Assin2 STS (test set): Pearson 75.81
- FaQuAD NLI (test set): f1-macro 57.79
- HateBR Binary (test set): f1-macro 77.24
- PT Hate Speech Binary (test set): f1-macro 68.50
- tweetSentBR (test set): f1-macro 63.00