---
library_name: transformers
tags:
  - code
  - hpc
  - parallel
  - axonn
datasets:
  - hpcgroup/hpc-instruct
  - ise-uiuc/Magicoder-OSS-Instruct-75K
  - nickrosh/Evol-Instruct-Code-80k-v1
language:
  - en
pipeline_tag: text-generation
---

# HPC-Coder-v2

HPC-Coder-v2-1.3b is an HPC code LLM instruction-tuned on a dataset covering common HPC topics such as parallelism, optimization, and accelerator porting. It is a fine-tune of the Deepseek Coder 1.3b base model on the hpc-instruct, oss-instruct, and evol-instruct datasets, trained in parallel across many GPUs with the distributed training library AxoNN.

The HPC-Coder-v2 models are among the most capable open-source LLMs for parallel and HPC code generation. HPC-Coder-v2-6.7b is the best performing LLM under 30b parameters on the ParEval parallel code generation benchmark in terms of correctness and performance. It scores similarly to 34B models such as Phind-V2 and to commercial models such as GPT-4 on parallel code generation.

## Using HPC-Coder-v2

The model is provided as a standard Hugging Face model with safetensors weights. It can be used with transformers pipelines, vLLM, or any other standard model inference framework. HPC-Coder-v2 is an instruct model, and prompts should be formatted as instructions for best results. It was trained with the following instruct template:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
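
For example, a prompt can be built from this template and passed to a transformers text generation pipeline. A minimal sketch, assuming the repo id `hpcgroup/hpc-coder-v2-1.3b` (taken from this model card's location) and an illustrative instruction:

```python
# Minimal sketch: prompting HPC-Coder-v2 via a transformers pipeline.
# The repo id "hpcgroup/hpc-coder-v2-1.3b" is assumed from this model card;
# the instruction text is only an illustrative example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hpcgroup/hpc-coder-v2-1.3b",
    torch_dtype="auto",  # pick an appropriate dtype automatically
)

instruction = "Write an OpenMP C function that computes the dot product of two float arrays."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

outputs = generator(prompt, max_new_tokens=256, do_sample=False)
# Strip the prompt to keep only the model's completion.
print(outputs[0]["generated_text"][len(prompt):])
```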

## Quantized Models

4 and 8 bit quantized weights are available in the GGUF format for use with llama.cpp. The 4 bit model requires ~3.8 GB of memory and can be found here. The 8 bit model requires ~7.1 GB of memory and can be found here. Further information on how to use them with llama.cpp can be found in its documentation.
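
As one option among several, the GGUF weights can also be loaded through the llama-cpp-python bindings. A minimal sketch, assuming a locally downloaded 4 bit file; the filename below is hypothetical:

```python
# Minimal sketch: running the 4 bit GGUF weights with llama-cpp-python.
# The filename "hpc-coder-v2-1.3b.Q4.gguf" is hypothetical; use the actual
# file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(model_path="hpc-coder-v2-1.3b.Q4.gguf", n_ctx=4096)

# Format the prompt with the same instruct template the model was trained on.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nParallelize this loop with OpenMP:\n"
    "for (int i = 0; i < n; i++) y[i] += a * x[i];\n\n"
    "### Response:\n"
)

result = llm(prompt, max_tokens=256, temperature=0.0)
print(result["choices"][0]["text"])
```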