
French-Alpaca-Phi-3-mini-4k-instruct-v1.0-GGUF (q4_k_m)

May 2024: currently the fastest and most efficient version of French-Alpaca, the general-purpose French SLM.
4k-token context window.

French-Alpaca is a 3.82B-parameter Small Language Model (SLM) based on microsoft/Phi-3-mini-4k-instruct,
fine-tuned on the original French-Alpaca dataset, entirely generated with OpenAI GPT-3.5-turbo.
The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html

This quantized q4_k_m GGUF version can run on a CPU-only device and is compatible with llama.cpp.
The architecture is now supported by LM Studio.
Ready for a Raspberry Pi 5 (8 GB).
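
A minimal sketch (not from the original card) of running this GGUF on CPU with the llama-cpp-python bindings; the file name is taken from the Modelfile below, and the chat roles assume Phi-3's template is read from the GGUF metadata:

from llama_cpp import Llama

# Load the quantized q4_k_m GGUF for CPU-only inference.
llm = Llama(
    model_path="./french-alpaca-phi-3-mini-4k-instruct-Q4-v1.gguf",
    n_ctx=4096,   # 4k context window
    n_threads=4,  # tune for your CPU, e.g. a Raspberry Pi 5
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Tu es un assistant francophone serviable."},
        {"role": "user", "content": "Explique-moi ce qu'est un modèle de langage."},
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])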

Usage

ollama run jpacifico/french-alpaca-3b

The best Ollama Modelfile I have tested so far:

FROM ./french-alpaca-phi-3-mini-4k-instruct-Q4-v1.gguf

TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>
"""
PARAMETER num_keep 4
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "<|system|>"
PARAMETER stop "<|end|>"
PARAMETER stop "<|endoftext|>"
PARAMETER stop "###"
PARAMETER stop "<|fin|>"

Limitations:
The French-Alpaca model family is a quick demonstration that a small LM (< 8B params)
can be easily fine-tuned to specialize in a particular language. It does not have any moderation mechanisms.

Developed by: Jonathan Pacifico, 2024
Model type: LLM
Language(s) (NLP): French
License: MIT
Finetuned from model: microsoft/Phi-3-mini-4k-instruct

