---
license: llama3.2
language:
- en
- ja
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
base_model: AELLM/Llama-3.2-Chibi-3B
datasets:
- ryota39/izumi-lab-dpo-45k
- Aratako/Magpie-Tanuki-8B-97k
- kunishou/databricks-dolly-15k-ja
- kunishou/oasst1-89k-ja
tags:
- llama3.2
- mlx
---

# edwardlee4948/Llama-3.2-Chibi-3B

The model [edwardlee4948/Llama-3.2-Chibi-3B](https://huggingface.co./edwardlee4948/Llama-3.2-Chibi-3B) was converted to MLX format from [AELLM/Llama-3.2-Chibi-3B](https://huggingface.co./AELLM/Llama-3.2-Chibi-3B) using mlx-lm version **0.21.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("edwardlee4948/Llama-3.2-Chibi-3B")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt in a
# chat-formatted user message before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
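For quick, one-off generations, mlx-lm also installs a command-line entry point, so no Python script is needed. A minimal sketch (the flags below assume an mlx-lm 0.21.x install; the model is downloaded from the Hub on first use):

```bash
# One-off text generation from the command line.
# --model accepts a Hub repo id or a local path; --max-tokens caps the
# length of the generated continuation.
mlx_lm.generate --model edwardlee4948/Llama-3.2-Chibi-3B \
    --prompt "hello" \
    --max-tokens 256
```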