This project is sponsored by PrimeLine

Model Card

This model is a fine-tuned version for German instructions and conversations, using the Open Assistant prompt tokens: "<|prompter|>", "<|endoftext|>", "<|assistant|>".
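A minimal sketch of how a single turn could be serialized with these tokens; the exact turn layout is an assumption based on the general Open Assistant format, not something this card specifies:

```python
# Sketch: building a prompt in the Open Assistant token style.
# The exact turn layout is assumed, not confirmed by this model card.
def build_prompt(user_message: str) -> str:
    return f"<|prompter|>{user_message}<|endoftext|><|assistant|>"

prompt = build_prompt("Erkläre den Unterschied zwischen RAM und Festplatte.")
print(prompt)
```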

The dataset used is deduplicated and cleaned, and contains no code. The focus is on instruction following and conversational tasks.

The model architecture is based on Llama-v2 with 7B parameters, trained on hardware powered by 100% renewable energy.
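A hedged example of loading the model and generating a reply with the Hugging Face transformers library; the repository id flozi00/Llama-2-7b-german-assistant-v2 comes from this page, while the dtype, device placement, and generation settings are assumptions:

```python
# Sketch: loading the model and generating a reply with transformers.
# Assumes a GPU with enough memory for a 7B model in float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flozi00/Llama-2-7b-german-assistant-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "<|prompter|>Was ist die Hauptstadt von Deutschland?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```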

This work is contributed by the private research of flozi00.
