- healthcare
---
# Galen

Galen is fine-tuned from [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the [medical_meadow_medqa](https://huggingface.co/datasets/medalpaca/medical_meadow_medqa) medical question answering dataset.
# Get Started

Install `accelerate` to run the model on a CUDA GPU:

```bash
pip install accelerate
```
```py
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('ahmed/galen')
model_pipeline = pipeline(
    task="text-generation",
    model='ahmed/galen',
    tokenizer=tokenizer,
    max_length=256,
    temperature=0.5,
    top_p=0.6,
)
```
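Mistral-7B-Instruct models are trained with an `[INST] … [/INST]` chat template, so wrapping questions in the same template may yield better completions from a model fine-tuned on that base. A minimal sketch; the `build_prompt` helper is hypothetical, and whether Galen's fine-tuning preserved this template is an assumption:

```py
def build_prompt(question: str) -> str:
    # Wrap a user question in the Mistral-Instruct [INST] template.
    # The tokenizer adds the <s> BOS token itself, so it is omitted here.
    return f"[INST] {question} [/INST]"

print(build_prompt("What is squamous carcinoma"))
# [INST] What is squamous carcinoma [/INST]
```

The resulting string can be passed to `model_pipeline(...)` in place of the raw question.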
```py
prompt = 'What is squamous carcinoma'
result = model_pipeline(prompt)
# Print the generated text, slicing off the echoed prompt
print(result[0]['generated_text'][len(prompt):])
```
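The `len(prompt)` slice works because, by default, the text-generation pipeline returns the prompt followed by the completion in `generated_text`. A toy illustration with a hypothetical output string, no model required:

```py
prompt = "What is squamous carcinoma"
# Hypothetical pipeline output: the prompt echoed back, then the completion.
generated_text = prompt + " -- a malignant tumour arising from squamous epithelium."
# Slice off the echoed prompt to keep only the model's completion.
completion = generated_text[len(prompt):].strip()
print(completion)
# -- a malignant tumour arising from squamous epithelium.
```

Alternatively, recent `transformers` versions accept `return_full_text=False` in the pipeline call, which returns only the completion and makes the slicing unnecessary.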