---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM-Mistral, the first open and commercially available German Foundation Language Model built on Mistral 7B.
Our models extend their base models' capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release three foundation models trained with 8k context length.
[`LeoLM/leo-mistral-hessianai-7b`](https://huggingface.co./LeoLM/leo-mistral-hessianai-7b) under Apache 2.0 and
[`LeoLM/leo-hessianai-7b`](https://huggingface.co./LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co./LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co./meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post](https://laion.ai/blog/leo-lm/) or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## Model Details
- **Finetuned from:** [mistralai/Mistral-7B-v0.1](https://huggingface.co./mistralai/Mistral-7B-v0.1)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```bash
pip install transformers torch accelerate
```
If you want faster inference using FlashAttention-2, you also need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "LeoLM/leo-mistral-hessianai-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,  # model id goes in as the first positional argument
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # optional, requires flash-attn
)
```
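As a quick sanity check, here is a minimal generation sketch assuming the `model` and `tokenizer` loaded above; the German prompt and sampling parameters are illustrative, not recommended defaults.
```python
prompt = "Der Sinn des Lebens ist"  # illustrative German prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```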
## Training parameters
Note that for Mistral training, we lowered the learning rate to `1e-5`, decaying to `1e-6` over the course of training. We also used ZeRO stage 3 and the bfloat16 dtype.
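To illustrate the learning-rate schedule only, here is a hypothetical PyTorch sketch of a decay from `1e-5` to `1e-6`; the cosine shape, step count, and optimizer choice are assumptions, and ZeRO stage 3 / bfloat16 belong to the distributed training setup, which is not shown.
```python
import torch

# Hypothetical sketch of the reported learning-rate schedule: decay from 1e-5
# down to 1e-6 over training. Cosine shape and step count are placeholders.
dummy_params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
optimizer = torch.optim.AdamW(dummy_params, lr=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=10_000, eta_min=1e-6  # placeholder step count, final lr 1e-6
)

for step in range(10_000):
    optimizer.step()   # forward/backward pass of an actual training step omitted
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # ~1e-6 at the end of the schedule
```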

## Benchmarks