---
base_model: Meta-Llama-3.1-8B-bnb-4bit
model_name: Llama-3.1-PersianQA
language:
- en
- fa
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- llama3
- trl
- question-answering
- Persian
- QA
pipeline_tag: document-question-answering
---

# Model Card for `Llama-3.1-PersianQA`

## Model Description

The `Llama-3.1-PersianQA` model is a fine-tuned version of Llama 3.1 for Persian question-answering tasks. It is designed to provide accurate answers to questions posed in Persian, based on a provided context, and was fine-tuned on a Persian-language QA dataset to improve its ability to understand and generate responses in Persian.

## Intended Use

This model is intended for use in applications requiring Persian language question answering. It can be integrated into chatbots, virtual assistants, and other systems where users interact in Persian and need accurate responses to their questions based on a given context.

### Use Cases

- **Customer Support:** Automate responses to customer queries in Persian.
- **Educational Tools:** Provide assistance and answer questions on Persian-language educational platforms.
- **Content Retrieval:** Extract relevant information from Persian texts based on user queries.

## Training Data

The model was fine-tuned on a Persian question-answering dataset, which includes various domains and topics to ensure generalization across different types of questions. The dataset used for training contains question-context pairs and corresponding answers in Persian.

## Model Architecture

- **Base Model:** Llama 3.1 8B (`Meta-Llama-3.1-8B-bnb-4bit`)
- **Task:** Question Answering
- **Language:** Persian

## Performance

The model has been evaluated on a set of Persian QA benchmarks and performs well across various metrics. Performance may vary depending on the specific domain and nature of the questions.

## How to Use

You can use the `Llama-3.1-PersianQA` model with the Hugging Face `transformers` library. Here is sample code to get started:

```python
from transformers import pipeline

# Load the model
qa_pipeline = pipeline("question-answering", model="zpm/Llama-3.1-PersianQA")

# Example usage
context = "شرکت فولاد مبارکۀ اصفهان، بزرگ‌ترین واحد صنعتی خصوصی در ایران و بزرگ‌ترین مجتمع تولید فولاد در خاورمیانه است."  # "Mobarakeh Steel Company of Isfahan is the largest private industrial unit in Iran and the largest steel production complex in the Middle East."
question = "شرکت فولاد مبارکه در کجا واقع شده است؟"  # "Where is Mobarakeh Steel Company located?"

result = qa_pipeline(question=question, context=context)
print(result)
```
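Note that the base checkpoint is a decoder-only Llama model, so the extractive `question-answering` pipeline head may not be available for every environment. A common alternative is a text-generation prompt that combines the context and question. The sketch below is illustrative only: the prompt template is an assumption, not the format used during fine-tuning, and the generation call is shown commented out because it requires downloading the model.

```python
def build_qa_prompt(context: str, question: str) -> str:
    """Combine a Persian context and question into one generation prompt.

    The template below is illustrative only; the exact prompt format used
    during fine-tuning is not documented in this card.
    """
    # "متن" = "context", "پرسش" = "question", "پاسخ" = "answer"
    return f"متن: {context}\nپرسش: {question}\nپاسخ:"


prompt = build_qa_prompt(
    "شرکت فولاد مبارکۀ اصفهان، بزرگ‌ترین واحد صنعتی خصوصی در ایران است.",
    "شرکت فولاد مبارکه در کجا واقع شده است؟",
)

# The prompt can then be passed to a text-generation pipeline:
# from transformers import pipeline
# generator = pipeline("text-generation", model="zpm/Llama-3.1-PersianQA")
# print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

Because generation is free-form, you may want to post-process the output to keep only the text after the final "پاسخ:" marker.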