---
language:
- en
pipeline_tag: text-generation
---
# Model Card for qa-expert-7B-V1.0
This model handles **multi-hop question answering** by splitting a multi-hop question into a sequence of single questions, answering each single question, and then summarizing the retrieved information to produce the final answer.
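For illustration (this example is made up, not drawn from the training data), a multi-hop question is decomposed roughly as follows:
```
Question: Which city is the capital of Vietnam, and which river runs through it?
  Sub-question 1: What is the capital of Vietnam?   -> retrieved: Hanoi
  Sub-question 2: Which river runs through Hanoi?   -> retrieved: the Red River
  Final answer: summarize the retrieved facts into a single response.
```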
## Model Details
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co./mistralai/Mistral-7B-v0.1) on the [khaimaitien/qa-expert-multi-hop-qa-V1.0](https://huggingface.co./datasets/khaimaitien/qa-expert-multi-hop-qa-V1.0) dataset.
More information on how to **use and train** the model is available in this repository: https://github.com/khaimt/qa_expert
### Model Sources
- **Repository:** https://github.com/khaimt/qa_expert
## How to Get Started with the Model
First, clone the repo and install the requirements:
```shell
git clone https://github.com/khaimt/qa_expert
cd qa_expert
pip install -r requirements.txt
```
Here is the example code:
```python
from qa_expert import get_inference_model, InferenceType

def retrieve(query: str) -> str:
    # You need to implement this retrieval function: given a query, it
    # returns a context string. It plays the same role as a function to
    # call in OpenAI function calling.
    context = ""  # e.g. look the query up in your search index
    return context

model_inference = get_inference_model(InferenceType.hf, "khaimaitien/qa-expert-7B-V1.0")
question = "your multi-hop question here"
answer, messages = model_inference.generate_answer(question, retrieve)
```
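The `retrieve` function is left for you to implement. As a minimal sketch (assuming the corpus is a small in-memory list of strings; a real system would query a search index or vector store instead), a simple keyword-overlap retriever could look like this:
```python
# Minimal illustrative retriever: scores each passage by the number of
# words it shares with the query and returns the best match.
# The passages below are placeholders, not part of the qa_expert repo.
PASSAGES = [
    "Hanoi is the capital city of Vietnam.",
    "The Red River flows through Hanoi.",
]

def retrieve(query: str) -> str:
    query_words = set(query.lower().split())

    def overlap(passage: str) -> int:
        # Count how many query words appear in the passage.
        return len(query_words & set(passage.lower().split()))

    return max(PASSAGES, key=overlap)
```
Passing this `retrieve` to `generate_answer` lets the model ask one sub-question at a time and ground each step in the returned context.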