- Christoph911/German-legal-SQuAD
- philschmid/test_german_squad
---

# Model Details

This repository contains the model [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) in GGUF format, for fast and easy inference with llama.cpp and similar LLM inference tools.

This model was created and trained by [jphme](https://huggingface.co/jphme). It is a variant of Meta's [Llama2 13b Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat), fine-tuned on a compilation of multiple German-language instruction datasets.

## Profile (.aiml)

**What is AIML?**
AIML, or AI Markdown Language (.aiml), is a novel description format designed to streamline AI model deployments. It makes the process of running AI models platform-agnostic and considerably more straightforward.

**Why the *config.aiml* file?**
The presence of a *config.aiml* file within an AI model repository equips users with all the essential configuration, metadata, and rules required to deploy and execute the model securely and efficiently within their computing environment.

**What's the purpose of AIML?**
AIML's primary goal is to unify and simplify the steps involved in deploying, operating, and managing AI models.

### config.aiml

### Overview

|Attribute|Details|
|----------------------------|--------------------------------------------------------------------------------------------------------------|
| **ID** | 1 |
| **Creator** | [jphme](https://huggingface.co/jphme) |
| **Source URL** | https://huggingface.co/ |

### Specifications

|Attribute|Details|
|----------------------------|--------------------------------------------------------------------------------------------------------------|
| **Type** | Large Language Model |
| **Datasets** | {"[Proprietary German Conversation Dataset](https://placeholder.local/dataset)", "[German & German legal SQuAD](https://placeholder.local/dataset)"} |
| **Notes** | The datasets were augmented with rows containing "wrong" contexts, in order to improve factual RAG performance. |
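The "wrong context" augmentation described in the Notes row can be illustrated with a short sketch. This is not the author's actual pipeline; the function name, the field names, and the German refusal answer are illustrative assumptions:

```python
import random

def add_wrong_context_rows(rows, refusal="Mit dem gegebenen Kontext nicht beantwortbar.", seed=0):
    """Append a copy of each QA row whose context is taken from a *different*
    row, with the answer replaced by a refusal, so the fine-tuned model learns
    to decline instead of hallucinating when the retrieved context is wrong."""
    rng = random.Random(seed)
    augmented = list(rows)
    for i, row in enumerate(rows):
        j = rng.randrange(len(rows) - 1)
        if j >= i:
            j += 1  # skip index i, guaranteeing a mismatched context
        augmented.append({"question": row["question"],
                          "context": rows[j]["context"],
                          "answer": refusal})
    return augmented
```

Applied to a SQuAD-style dataset, this doubles the row count: the original rows teach grounded answering, the mismatched rows teach refusal.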

### Run Instructions

|Attribute|Details|
|----------------------------|--------------------------------------------------------------------------------------------------------------|
| **Start Model** | #!/bin/sh<br/>chmod +x run.sh && ./run.sh<br/># This is an example. A functioning run.sh script will be published soon. |
| **Stop Model** | # Coming soon, todo |
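Since the .aiml schema itself is not published in this repository, the following is a purely hypothetical sketch of how the attributes above might be laid out in a *config.aiml* file; every key name here is an assumption, not part of any released specification:

```
# config.aiml -- hypothetical sketch, not the published schema
id: 1
creator: jphme
source_url: https://huggingface.co/
type: Large Language Model
run:
  start: |
    #!/bin/sh
    chmod +x run.sh && ./run.sh
  stop: null  # Coming soon, todo
```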

## Deploy from source

1. Clone and install llama.cpp *(Commit: 9e20231)*.
```
# Install llama.cpp by cloning the repo from GitHub and pinning the tested commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && git checkout 9e20231
```