
NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (LoRA)

GPTQ quantized weights can be found here: https://huggingface.co./bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ

fp16 weights can be found here: https://huggingface.co./bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16

Analogous model using the RoPE Position Interpolation (PI) technique: https://huggingface.co./bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA

Overview

This is Jon Durbin's Airoboros 33B GPT4 1.4 (LoRA) with several key modifications:

  • Context length extended to 16384 with NTK-aware scaled RoPE embeddings, NOT via the SuperHOT LoRA. I started from base Llama-33b.
  • Training sequences beyond 2048 have the target truncated to equal 2048.
  • Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.

Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA). It was trained on 1x RTX 6000 Ada for ~43 hours.

NTK Patch

To use with HF transformers, AutoGPTQ, etc., see the NTK monkey patch.
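
The sketch below illustrates the general idea of such a monkey patch; it is not the patch used for this model. It assumes an older transformers release (~4.30) where `LlamaRotaryEmbedding.__init__` takes `(dim, max_position_embeddings, base, device)`, and the `alpha` value and function name are illustrative. Refer to the linked NTK monkey patch for the exact implementation.

```python
# Illustrative NTK-aware scaled RoPE patch -- the linked monkey patch is authoritative.
from transformers.models.llama import modeling_llama


def apply_ntk_scaled_rope(alpha: float = 8.0):
    """Rescale the RoPE frequency base by alpha**(dim / (dim - 2)) so the
    rotary embeddings span a longer context (NTK-aware scaling)."""
    orig_init = modeling_llama.LlamaRotaryEmbedding.__init__

    def ntk_init(self, dim, max_position_embeddings=2048, base=10000, device=None):
        # Raise the base frequency instead of interpolating positions.
        base = base * alpha ** (dim / (dim - 2))
        orig_init(self, dim,
                  max_position_embeddings=max_position_embeddings,
                  base=base, device=device)

    modeling_llama.LlamaRotaryEmbedding.__init__ = ntk_init


# Apply the patch *before* constructing the model, e.g.:
# apply_ntk_scaled_rope(alpha=8.0)
# model = AutoModelForCausalLM.from_pretrained(
#     "bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16")
```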

