Llama-3.1-8B-Open-SFT

The Llama-3.1-8B-Open-SFT model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct, designed for advanced text generation tasks, including conversational interactions, question answering, and chain-of-thought reasoning. This model leverages Supervised Fine-Tuning (SFT) using the O1-OPEN/OpenO1-SFT dataset to provide enhanced performance in context-sensitive and instruction-following tasks.

| File Name | Size | Description | Upload Status |
|-----------|------|-------------|---------------|
| .gitattributes | 1.57 kB | Git LFS configuration for tracking large files. | Uploaded |
| README.md | 324 Bytes | Updated README with minimal information. | Uploaded |
| config.json | 1.03 kB | Model configuration and metadata. | Uploaded |
| generation_config.json | 248 Bytes | Configuration for text generation specifics. | Uploaded |
| pytorch_model-00001-of-00004.bin | 4.98 GB | First shard of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model-00002-of-00004.bin | 5.00 GB | Second shard of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model-00003-of-00004.bin | 4.92 GB | Third shard of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model-00004-of-00004.bin | 1.17 GB | Fourth and final shard of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model.bin.index.json | 24.2 kB | Index file mapping weight tensors to model shards. | Uploaded |
| special_tokens_map.json | 357 Bytes | Map of special tokens used by the tokenizer. | Uploaded |
| tokenizer.json | 17.2 MB | Full tokenizer JSON file. | Uploaded (LFS) |
| tokenizer_config.json | 57.4 kB | Configuration for the tokenizer. | Uploaded |
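
The shards above are fetched automatically by transformers, but they can also be downloaded ahead of time. A minimal sketch using huggingface_hub (the local directory name is only an illustration):

# Optional: pre-download all repository files listed above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="prithivMLmods/Llama-3.1-8B-Open-SFT",
    local_dir="Llama-3.1-8B-Open-SFT",  # hypothetical target directory
)
print("Files downloaded to:", local_dir)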

Sample Long CoT: (example image of a long chain-of-thought generation omitted)

Key Features

  1. Text Generation with CoT Reasoning: implements Chain-of-Thought (CoT) prompting for logical, step-by-step reasoning tasks (see the prompt sketch after this list).
  2. Conversational AI: generates context-aware, coherent responses in multi-turn conversations.
  3. Supervised Fine-Tuning (SFT): optimized for open-domain tasks using the O1-OPEN/OpenO1-SFT dataset.
  4. Multi-Purpose Functionality: supports a wide range of NLP tasks, including summarization, question answering, and text completion.
  5. Scalable Sharded Architecture: model weights are distributed across four shards so the full model can be loaded efficiently.
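
A minimal CoT prompting sketch using the chat template shipped with the tokenizer; the system and user messages below are illustrative, not a required format:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-3.1-8B-Open-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")  # requires accelerate

# Ask for explicit step-by-step reasoning before the final answer.
messages = [
    {"role": "system", "content": "You are a helpful assistant. Think through the problem step by step before giving the final answer."},
    {"role": "user", "content": "A train travels 120 km in 2 hours. What is its average speed?"},
]
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(prompt_ids, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=True))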

Training Details

  • Base Model: meta-llama/Llama-3.1-8B-Instruct
  • Fine-Tuning Dataset: O1-OPEN/OpenO1-SFT
    • The dataset contains 77.7k samples curated for instruction-following and open-domain tasks (a loading sketch follows this list).
  • Model Size:
    • 8 billion parameters, distributed over 4 shards for efficient deployment.
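
A minimal sketch for inspecting the fine-tuning data with the datasets library; the column names depend on the dataset card and are not guaranteed here:

# Hypothetical inspection of the SFT dataset; this is not the training script used for this model.
from datasets import load_dataset

openo1 = load_dataset("O1-OPEN/OpenO1-SFT", split="train")
print(openo1.num_rows)      # roughly 77.7k samples, per the details above
print(openo1.column_names)  # check field names before building prompts
print(openo1[0])            # one instruction/response example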

Applications

  1. Chain-of-Thought (CoT) Reasoning: solve complex problems step by step with logical reasoning.
  2. Conversational Agents: ideal for chatbots, virtual assistants, and other dialogue systems (a multi-turn sketch appears in the Usage section below).
  3. Question Answering: answer open-domain or context-specific questions accurately.
  4. Text Completion: generate coherent continuations for incomplete inputs.
  5. Creative Writing: generate stories, articles, or brainstorming ideas.

Usage

Loading the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "prithivMLmods/Llama-3.1-8B-Open-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" needs the accelerate package; FP16 matches the stored weights.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

Inference Example

prompt = """
Explain the concept of gravity in a simple way suitable for a 10-year-old:
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=150, temperature=0.7)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Model Output:", response)

Expected Output

"Gravity is a force that pulls things toward each other. It's the reason why things fall to the ground when you drop them. On Earth, gravity keeps us on the ground and makes sure everything stays in place, like your toys, the water in the ocean, and even the air we breathe."


Performance Requirements

  • Hardware:

    • High-performance GPUs are recommended for efficient inference.
    • Approximate memory: ~16 GB of VRAM in FP16/BF16; roughly 8 GB or less with 8-bit or 4-bit quantization.
  • Optimization Options:

    • Use Safetensors for secure and efficient weight loading.
    • Apply quantization or model parallelism in resource-constrained environments (a 4-bit loading sketch follows this list).
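
A minimal 4-bit loading sketch using bitsandbytes through transformers' BitsAndBytesConfig; the settings below are illustrative defaults, not a configuration validated for this model:

# Load the model in 4-bit to fit on smaller GPUs; requires the bitsandbytes package.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # weights stay 4-bit, compute runs in FP16
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Llama-3.1-8B-Open-SFT",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Llama-3.1-8B-Open-SFT")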
