---
license: bigscience-openrail-m
library_name: transformers
tags:
  - code
  - gpt_bigcode
datasets:
  - nuprl/MultiPL-T
metrics:
  - code_eval
model-index:
  - name: MultiPLCoder-15b-OCaml
    results:
      - task:
          type: text-generation
        dataset:
          name: MultiPL-HumanEval (Lua)
          type: nuprl/MultiPL-E
        metrics:
          - type: pass@1
            value: 0.31
            name: pass@1
            verified: true
          - type: pass@1
            value: 0.21
            name: pass@1
            verified: true
          - type: pass@1
            value: 0.199
            name: pass@1
            verified: true
---

# MultiPLCoder-15b

MultiPLCoder-15b is the 15-billion-parameter version of MultiPLCoder, a set of StarCoder-based models fine-tuned on the MultiPL-T dataset. These models are state-of-the-art for low-resource programming languages such as Lua, Racket, and OCaml.

This 15-billion-parameter model is the most capable of the MultiPLCoder family, but it requires a dedicated GPU for inference. For a smaller model that can run on a CPU, check out MultiPLCoder-1b.

## Language Revision Index

This is the revision index for the best-performing model checkpoint for each language.

| Language | Revision ID                                | Epoch |
|----------|--------------------------------------------|-------|
| Lua      | `6069aa54dd554404dd18fccdf5dedd56b8088e74` | 4     |
| Racket   | `f0c77c06482f436f469007f20d731cb9dd73d609` | 8     |
| OCaml    | `e7babda985786810707200ff885df6105de7dc56` | 4     |
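
If you script against several checkpoints, the table can be captured as a small lookup. A minimal sketch (the revision IDs are copied from the table above; the dictionary name is illustrative, not part of the repository):

```python
# Hypothetical convenience mapping from language to best-performing revision,
# with IDs taken from the table above.
BEST_REVISION = {
    "lua": "6069aa54dd554404dd18fccdf5dedd56b8088e74",
    "racket": "f0c77c06482f436f469007f20d731cb9dd73d609",
    "ocaml": "e7babda985786810707200ff885df6105de7dc56",
}
```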

## Usage

To use one of the models in this repository, first select the commit revision for the target language from the table above. For example, to load the Lua model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nuprl/MultiPLCoder-15b")
# Revision ID for the Lua checkpoint (see the table above).
lua_revision = "6069aa54dd554404dd18fccdf5dedd56b8088e74"
model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPLCoder-15b", revision=lua_revision).cuda()
```
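
At full precision the 15b checkpoint needs a large amount of GPU memory. If memory is tight, loading in half precision is a common workaround; a minimal sketch, assuming a CUDA GPU with bfloat16 support and the `accelerate` package installed:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: bfloat16 roughly halves memory use; quality impact is typically small.
model = AutoModelForCausalLM.from_pretrained(
    "nuprl/MultiPLCoder-15b",
    revision=lua_revision,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)
```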

Note that the model's default configuration does not enable caching, so you must request the cache explicitly when generating:

```python
toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt").cuda()
out = model.generate(toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95, max_length=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Sampled output:

```lua
-- Fibonacci iterative.
local function fib_iterative(n)
    if n == 0 or n == 1 then
        return n
    end
    local previous, current = 0, 1
    for _ = 2, n do
        previous, current = current, current + previous
    end
    return current
end
```
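
The same pattern works for the other checkpoints. A sketch for the OCaml revision (the revision ID comes from the table above; the prompt is only illustrative, and the tokenizer is reused since these checkpoints are fine-tunes of the same StarCoder base):

```python
# Revision ID for the OCaml checkpoint (see the table above).
ocaml_revision = "e7babda985786810707200ff885df6105de7dc56"
model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPLCoder-15b", revision=ocaml_revision).cuda()

# Prompt with an OCaml comment so the model continues in OCaml.
toks = tokenizer.encode("(* Fibonacci iterative *)", return_tensors="pt").cuda()
out = model.generate(toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95, max_length=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```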