---
title: web-md-llama2-7b-3000 
tags:
  - healthcare
  - NLP
  - dialogues
  - LLM
  - fine-tuned
license: unknown
datasets:
  - Kabatubare/medical-guanaco-3000
---

# Medical3000 (web-md-llama2-7b-3000) Model Card

This is the model card for web-md-llama2-7b-3000, a fine-tuned version of Llama-2-7B specialized for medical dialogues.

Covered areas:

- **General Medicine**: basic medical advice, symptoms, general treatments
- **Cardiology**: heart diseases, blood circulation
- **Neurology**: brain health, neurological disorders
- **Gastroenterology**: digestive system issues
- **Oncology**: different types of cancers and their treatments
- **Endocrinology**: hormones, diabetes, thyroid
- **Orthopedics**: bone health, joint issues
- **Pediatrics**: child health, vaccinations, growth and development
- **Mental Health**: depression, anxiety, stress, and other mental health issues
- **Women's Health**: pregnancy, menstrual health, menopause

## Model Details

### Base Model

- **Name**: Llama-2-7B
  
### Fine-tuned Model

- **Name**: web-md-llama2-7b-3000 
- **Fine-tuned on**: Kabatubare/medical-guanaco-3000
- **Description**: This model is fine-tuned to specialize in medical dialogues and healthcare applications.

### Architecture and Training Parameters

#### Architecture

- **LoRA Attention Dimension**: 64
- **LoRA Alpha Parameter**: 16
- **LoRA Dropout**: 0.1
- **Precision**: 4-bit (bitsandbytes)
- **Quantization Type**: nf4
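
These settings correspond to a QLoRA-style setup: the base model is loaded in 4-bit nf4 precision via bitsandbytes and a LoRA adapter is trained on top. A minimal sketch of the matching configs (the compute dtype is an assumption; the card only specifies 4-bit nf4):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit nf4 quantization via bitsandbytes
# (bnb_4bit_compute_dtype is an assumption, not stated in the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter matching the listed hyperparameters
lora_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # LoRA alpha parameter
    lora_dropout=0.1,  # LoRA dropout
    bias="none",
    task_type="CAUSAL_LM",
)
```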

#### Training Parameters

- **Epochs**: 3
- **Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Max Gradient Norm**: 0.3
- **Learning Rate**: 3e-4
- **Weight Decay**: 0.001
- **Optimizer**: paged_adamw_32bit
- **LR Scheduler**: cosine
- **Warmup Ratio**: 0.03
- **Logging Steps**: 25
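
In `transformers`, these hyperparameters map onto a `TrainingArguments` object roughly as follows (a sketch; `output_dir` is a placeholder, not stated in the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",  # placeholder, not stated in the card
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=3e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=25,
)
```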

## Datasets

### Fine-tuning Dataset

- **Name**: Kabatubare/medical-guanaco-3000
- **Description**: A reduced and balanced dataset curated from a larger medical dialogue corpus derived from 24,000 WebMD question-and-answer dialogue sessions. It covers a broad range of medical topics and is suitable for training healthcare chatbots and for medical NLP research.
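
For a quick look at the data, the dataset can be pulled from the Hub with the `datasets` library (a minimal sketch; the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
# (the "train" split name is an assumption; adjust if the repo differs)
dataset = load_dataset("Kabatubare/medical-guanaco-3000", split="train")

print(len(dataset))  # number of dialogue examples
print(dataset[0])    # inspect the fields of one record
```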

## Usage

A minimal inference sketch (the id in the original card, `Yo!Medical3000`, is not a valid Hub repository path, so a placeholder is used below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the model's actual Hub repository id
model_id = "web-md-llama2-7b-3000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the model for inference
prompt = "What are common symptoms of hypothyroidism?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```