OctoCoder

Table of Contents

  1. Model Summary
  2. Use
  3. Training
  4. Citation

Model Summary

OctoCoder is an instruction-tuned model with 15.5B parameters, created by fine-tuning StarCoder on CommitPackFT & OASST as described in the OctoPack paper.

Use

Intended use

The model follows instructions provided in the input. You should always preface your input with "Question: " and finish it with "Answer:", for example: "Question: Please write a function in Python that performs bubble sort.\n\nAnswer:"
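
As a minimal sketch, the required format can also be applied programmatically; the format_prompt helper below is illustrative and not part of the model or the transformers library:

def format_prompt(instruction: str) -> str:
    # Wrap an arbitrary instruction in the "Question: ... Answer:" format the model expects
    return f"Question: {instruction}\n\nAnswer:"

prompt = format_prompt("Please write a function in Python that performs bubble sort.")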

Feel free to share your generations in the Community tab!

Generation

# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/octocoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("Question: Please write a function in Python that performs bubble sort.\n\nAnswer:", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
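
By default, generate only produces a short continuation. For longer answers, decoding parameters can be passed explicitly; the values below are illustrative, not settings recommended by the model card:

outputs = model.generate(
    inputs,
    max_new_tokens=256,                   # allow a longer completion than the default
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.2,                      # illustrative value, not an official recommendation
    pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))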

Training

Model

  • Architecture: GPT-2 model with multi-query attention and Fill-in-the-Middle objective
  • Steps: 250k pretraining & 30 instruction tuning
  • Tokens: 1 trillion pretraining & 2M instruction tuning
  • Precision: bfloat16 (see the loading sketch below)
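
Since the weights were trained in bfloat16, the checkpoint can also be loaded in that precision to roughly halve memory use compared to float32. A minimal sketch, assuming a GPU with bfloat16 support:

import torch
from transformers import AutoModelForCausalLM

# Load the weights in bfloat16 instead of the default float32
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder",
    torch_dtype=torch.bfloat16,
).to("cuda")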

Hardware

  • Pretraining:
    • GPUs: 512 Tesla A100
    • Training time: 24 days
  • Instruction tuning:
    • GPUs: 8 Tesla A100
    • Training time: 4 hours

Software

Citation

@article{muennighoff2023octopack,
      title={OctoPack: Instruction Tuning Code Large Language Models}, 
      author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
      journal={arXiv preprint arXiv:2308.07124},
      year={2023}
}