Update README.md
README.md
CHANGED
@@ -4,6 +4,8 @@ tags:
 - text-generation-inference
 - text-generation
 - peft
+- int4
+- BPLLM
 library_name: transformers
 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
 widget:
@@ -13,6 +15,14 @@ widget:
 license: other
 ---
 
+# Fine-tuned Llama 3.1 8B PEFT int4 for Food Delivery and Reimbursement
+
+This model was trained for the experiments carried out in the research paper "Conversing with business process-aware Large Language Models: the BPLLM framework".
+
+It is a version of the Llama 3.1 8B model fine-tuned (PEFT with int4 quantization) to operate within the context of the Food Delivery and Reimbursement process models (which differ in their activities and events) introduced in the article.
+
+Further insights can be found in our paper "[Conversing with business process-aware Large Language Models: the BPLLM framework](https://doi.org/10.21203/rs.3.rs-4125790/v1)".
+
 # Model Trained Using AutoTrain
 
 This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
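
Since the card describes a PEFT adapter trained on top of `meta-llama/Meta-Llama-3.1-8B-Instruct` with int4 quantization, a minimal loading sketch may help. This is an illustration under stated assumptions, not the card's official usage snippet: the adapter repo id below is hypothetical (replace it with this repository's actual id), and the prompt is only an example.

```python
# Minimal sketch: load the base model in 4-bit and attach the PEFT adapter.
# Assumptions: bitsandbytes is installed and the adapter repo id is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "your-username/bpllm-food-delivery-adapter"  # hypothetical; use this repo's id

# int4 quantization, matching the "peft" + "int4" tags on the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Example prompt in the spirit of the process-aware setting described above
prompt = "In the Food Delivery process, which activity follows order placement?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```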