|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- togethercomputer/RedPajama-Data-1T-Sample |
|
language: |
|
- en |
|
--- |
|
|
|
# Landmark Attention LLaMA 33B |
|
|
|
This model has been trained using the PEFT LoRA technique with the [Landmark Attention](https://arxiv.org/abs/2305.16300) method over 200 steps. The model will likely be trained further and updated later on.
|
|
|
## Usage |
|
|
|
Loading this model requires `trust_remote_code` to be set to `True`. In [oobabooga](https://github.com/oobabooga/text-generation-webui), you can simply add the `--trust_remote_code` flag.
|
|
|
You will also need to disable the `Add the bos_token to the beginning of prompts` option in the settings. |
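If you are loading the model directly with `transformers` instead of through the webui, a minimal sketch might look like the following. The repo id is a placeholder, so substitute the actual path to this repo or a local checkout; passing `add_special_tokens=False` at tokenization time mirrors disabling the bos_token option above.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/landmark-attention-llama-33b"  # placeholder: this repo or a local copy

# trust_remote_code=True is required so the custom Landmark Attention
# modeling code bundled with the checkpoint can be executed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Landmark Attention extends the usable context length by"
# add_special_tokens=False prevents a BOS token from being prepended,
# matching the "disable bos_token" setting described above.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```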
|
|
|
## PEFT Checkpoint |
|
|
|
You can probably merge the checkpoint with any other LLaMA-based model (provided it's 33B, of course). This repo contains the merged weights, but you can grab the adapter [here](https://anonfiles.com/F3Pb20wbz7).
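As a rough sketch of how such a merge might be done with the `peft` library, assuming placeholder paths for the base model and the downloaded adapter:

```python
# Minimal sketch: merging the LoRA adapter into another 33B LLaMA base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-33b-base",   # placeholder: any compatible 33B LLaMA-based model
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
# Attach the downloaded adapter, then fold the LoRA deltas into the base weights.
model = PeftModel.from_pretrained(base_model, "path/to/landmark-lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("landmark-llama-33b-merged")
```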
|
|
|
## Training Code |
|
|
|
You can find the training code [here](https://github.com/eugenepentland/landmark-attention-qlora). |