
# Model Card for FugacityM/mkrasner-uplimit-week2-hw

A fine-tune of meta-llama/Llama-3.2-1B-Instruct trained on a small sample (3,000 examples) of https://huggingface.co./datasets/mlabonne/orpo-dpo-mix-40k.
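The card does not document how the 3,000-example subset was drawn. As a hedged illustration only (the seed and sampling method here are assumptions, not the actual procedure), one reproducible way to pick a fixed subset of indices from the ~40k-example dataset:

```python
import random

def sample_indices(n_total: int, n_sample: int, seed: int = 42) -> list[int]:
    """Draw a reproducible random subset of dataset indices.

    Illustrative sketch: the seed and sampling method used for the
    actual fine-tune are not documented in this card.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_total), n_sample))

subset = sample_indices(40_000, 3_000)
print(len(subset))  # 3000 unique, sorted indices
```

Fixing the seed makes the subset deterministic, so the fine-tuning data selection can be reproduced exactly.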

Evaluation configuration (lm-evaluation-harness output):

```
hf (pretrained=meta-llama/Llama-3.2-1B-Instruct,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto:4 (64,64,64,64,64)
```

| Tasks    | Version | Filter | n-shot | Metric            |   Value |   | Stderr |
|----------|---------|--------|--------|-------------------|--------:|---|-------:|
| eq_bench | 2.1     | none   | 0      | eqbench           | 22.8199 | ± | 3.3087 |
|          |         | none   | 0      | percent_parseable | 97.6608 | ± | 1.1592 |
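The results above match the output format of EleutherAI's lm-evaluation-harness. A command of roughly this shape (a reconstruction from the settings line above, not the actual invocation from the card) would reproduce the run; it is a command fragment that downloads the model on first use:

```shell
# Reconstructed lm-eval invocation (assumed, not taken from the card).
# To evaluate the fine-tune instead of the base model, point
# `pretrained=` at the fine-tuned checkpoint.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-3.2-1B-Instruct,dtype=float \
  --tasks eq_bench \
  --batch_size auto:4
```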
## Model Description

- **Developed by:** FugacityM (this fine-tune); the base model was developed by Meta AI
- **Model type:** Instruction-tuned transformer language model
- **Language(s) (NLP):** Primarily English
- **License:** The base model is distributed under the Llama 3.2 Community License; this fine-tune inherits its terms
- **Finetuned from model:** meta-llama/Llama-3.2-1B-Instruct

## Model Sources [optional]

- **Repository:** [Hugging Face model repository link if available]
- **Dataset:** https://huggingface.co./datasets/mlabonne/orpo-dpo-mix-40k
- **Paper [optional]:** [Link to any relevant papers or documentation about the model or its base architecture]
- **Demo [optional]:** [Link to any demos if available, e.g., Hugging Face Spaces]

## Uses

### Direct Use

Intended for natural language understanding and generation tasks, including question answering, text summarization, and conversational AI applications.

### Downstream Use [optional]

Can be applied in chatbots, virtual assistants, educational tools, content generation, and other NLP applications.

### Out-of-Scope Use

Not suitable for high-stakes decision-making, sensitive-data processing, or other contexts where errors carry serious ethical consequences.

## Bias, Risks, and Limitations

### Bias

As with many language models, this model may inherit biases present in its training data, including cultural, social, or gender biases.

### Risks

The model may generate inappropriate or harmful content if left unmonitored; implement safety checks before deploying it in user-facing applications.

### Limitations

Performance varies with input complexity and domain, and the model may struggle with nuanced or context-heavy queries. Weigh the evaluation metrics reported in this card when assessing its utility.

## Performance Metrics

### Task: eq_bench

- **Version:** 2.1
- **n-shot:** 0
- **eqbench:** 22.8199 (± 3.3087)
- **percent_parseable:** 97.6608 (± 1.1592)

## Additional Notes

Validate all remaining placeholders with accurate, specific details about the model, its developers, and its applications.
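The reported value and Stderr pairs can be combined into an approximate confidence interval to gauge how much the scores might move on a rerun. A small sketch (assuming a normal sampling distribution, which these standard-error estimates generally target):

```python
def approx_ci(value: float, stderr: float, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval from a point estimate and its
    standard error, assuming a normal sampling distribution."""
    return (value - z * stderr, value + z * stderr)

# eqbench score from the table above: 22.8199 +/- 3.3087
low, high = approx_ci(22.8199, 3.3087)
print(f"eqbench 95% CI: ({low:.2f}, {high:.2f})")  # prints (16.33, 29.30)
```

The wide interval (roughly 16.3 to 29.3) reflects the small evaluation sample; comparisons against other models should keep this uncertainty in mind.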

**Model tree:** FugacityM/mkrasner-uplimit-week2-hw (an adapter of meta-llama/Llama-3.2-1B-Instruct)