Quantized GGUF version of News Reporter 3B LLM


Model Description

News Reporter 3B LLM is based on Phi-3 Mini-4K Instruct, a dense decoder-only Transformer model designed to generate high-quality text from user prompts. With 3.8 billion parameters, the model is fine-tuned with Supervised Fine-Tuning (SFT) on question-answer pairs to align it with human preferences.

Key Features:

  • Parameter Count: 3.8 billion.
  • Architecture: Dense decoder-only Transformer.
  • Context Length: Supports up to 4K (4,096) tokens.
  • Training Data: 43.5K+ question-and-answer pairs curated from various news channels.
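Because the weights are distributed in GGUF format, they can be run locally with llama.cpp or bindings such as llama-cpp-python. The sketch below is illustrative only: the `.gguf` filename and quantization suffix are assumptions (use the actual file from this repository), and the prompt is formatted with Phi-3's chat template markers, which the base model expects.

```python
def build_phi3_prompt(question: str) -> str:
    """Wrap a user question in Phi-3's chat template markers."""
    return f"<|user|>\n{question}<|end|>\n<|assistant|>\n"


def ask(question: str, model_path: str = "news-reporter-3b-q4_k_m.gguf") -> str:
    """Run one completion against the local GGUF file.

    The model path above is a placeholder; point it at the file you
    downloaded from the repository.
    """
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=model_path, n_ctx=4096)  # matches the 4K context
    out = llm(
        build_phi3_prompt(question),
        max_tokens=256,
        stop=["<|end|>"],  # stop at the end-of-turn marker
    )
    return out["choices"][0]["text"].strip()


# Usage (requires the GGUF file on disk):
#   answer = ask("Summarize today's top story in two sentences.")
```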

Model Benchmarking

GGUF Details

  • Model size: 3.82B params
  • Architecture: phi3

Dataset used to train RedHenLabs/news-reporter-gguf