Quantization made by Richard Erkhov.


BigCodeLlama-92b - GGUF

Original model description:

base_model: [codellama/CodeLlama-70b-Instruct-hf]
tags:
  - mergekit
  - merge
  - code
license: mit
pipeline_tag: conversational

BigCodeLLama 92b LFG 🚀

An experimental 92B CodeLlama Frankenstein merge, built to see how it benchmarks.

Models Merged

This model is a merge based on codellama/CodeLlama-70b-Instruct-hf. The following models were included in the merge:

  • ../CodeLlama-70b-Python-hf
  • ../CodeLlama-70b-Instruct-hf

Configuration

The following YAML configuration was used to produce this model:

dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 80]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
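As a quick sanity check of the slice arithmetic (a sketch, not part of the original card): the passthrough method simply stacks the listed layer ranges, which mergekit treats as half-open intervals, so the merged depth can be computed directly from the config above.

```python
# Sanity-check the passthrough slice arithmetic from the config above.
# Layer ranges are half-open [start, end), matching mergekit's convention.
slices = [
    ("CodeLlama-70b-Instruct-hf", 0, 69),
    ("CodeLlama-70b-Python-hf", 42, 80),
]

total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 107 layers in the merged stack (vs 80 in the 70B base)

# Layer indices 42-68 appear twice, once from each source model.
overlap = range(max(s[1] for s in slices), min(s[2] for s in slices))
print(len(overlap))  # 27 duplicated layer indices
```

The 107-layer stack versus the base model's 80 layers is what pushes the parameter count from 70B to roughly 92B.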

GGUF files are also available here: https://huggingface.co./nisten/BigCodeLlama-92b-GGUF

Model size: 92.1B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
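For a rough sense of download sizes at each quantization level, here is a back-of-the-envelope estimate assuming a flat bits-per-parameter cost; real GGUF files also store quantization scales and metadata, so actual files run somewhat larger than these figures.

```python
# Rough file-size estimates per quantization level, assuming a flat
# bits-per-parameter cost (real GGUF files are somewhat larger).
params = 92.1e9  # parameter count reported for this merge

est_gb = {bits: params * bits / 8 / 1e9 for bits in (2, 3, 4, 5, 6, 8)}
for bits, gb in est_gb.items():
    print(f"{bits}-bit: ~{gb:.0f} GB")
```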
