
Llama-3-11.5B-v2

Thank you to Meta for the Meta-Llama-3-8B weights.


This is an upscaling of Meta-Llama-3-8B using the techniques created for chargoddard/mistral-11b-slimorca. The model has been upscaled from 8B to 11.5B parameters without any continued pretraining or fine-tuning.
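
For illustration, here is a minimal sketch of this kind of depth upscaling: repeating a slice of the base model's decoder layers, in the style of a mergekit passthrough merge. The 0-24 / 8-32 layer split below is borrowed from the mistral-11b-slimorca recipe and is an assumption, not necessarily the exact configuration used for this model:

```python
# Illustrative depth-upscaling sketch: build a 48-layer model by repeating
# a slice of Meta-Llama-3-8B's 32 decoder layers. The 0-24 / 8-32 split is
# assumed from mistral-11b-slimorca, not confirmed for this model.
import copy
import torch
from torch import nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

layers = base.model.layers                      # 32 decoder layers in the base
stacked = list(layers[:24]) + list(layers[8:])  # 24 + 24 = 48 layers

# Deep-copy so the repeated layers do not share weights, then fix the
# cache index each attention module tracks in recent transformers versions.
new_layers = [copy.deepcopy(layer) for layer in stacked]
for i, layer in enumerate(new_layers):
    layer.self_attn.layer_idx = i

base.model.layers = nn.ModuleList(new_layers)
base.config.num_hidden_layers = len(new_layers)  # 48 layers ≈ 11.5B params

base.save_pretrained("Llama-3-11.5B-upscaled")
```

Because no weights are retrained, the extra layers simply re-apply transformations the base model already learned, which is why no continued pretraining was required.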

Unlike version 1, this model has no issues at fp16 or in any quantization.

The model that was used to create this one is linked below:

https://huggingface.co./meta-llama/Meta-Llama-3-8B
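
Since the model runs cleanly at half precision (per the note above), here is a minimal loading sketch with Hugging Face transformers; the generation settings are illustrative assumptions, not an official recipe:

```python
# Minimal sketch: load the upscaled model at bf16 and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Replete-AI/Llama-3-11.5B-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "The key idea behind depth upscaling is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```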

