---
title: Medical3000
tags:
  - healthcare
  - NLP
  - dialogues
  - LLM
  - fine-tuned
license: unknown
datasets:
  - Kabatubare/medical-guanaco-3000
---

# Medical3000 Model Card

This is a model card for Medical3000, a fine-tuned version of Llama-2-7B specialized for medical dialogues.

## Model Details

### Base Model

- **Name**: Llama-2-7B
  
### Fine-tuned Model

- **Name**: Yo!Medical3000
- **Fine-tuned on**: Kabatubare/medical-guanaco-3000
- **Description**: This model is fine-tuned to specialize in medical dialogues and healthcare applications.

### Architecture and Training Parameters

#### Architecture

- **LoRA Attention Dimension**: 64
- **LoRA Alpha Parameter**: 16
- **LoRA Dropout**: 0.1
- **Precision**: 4-bit (bitsandbytes)
- **Quantization Type**: nf4
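
A minimal sketch of how the settings above might be expressed with the `peft` and `bitsandbytes` integrations for `transformers`; the compute dtype and the `bias`/`task_type` values are assumptions, since this card does not list them:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization via bitsandbytes, matching the precision settings above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumption: compute dtype is not stated in this card
)

# LoRA adapter configuration matching the dimensions above
lora_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # LoRA alpha parameter
    lora_dropout=0.1,  # LoRA dropout
    bias="none",       # assumption: not stated in this card
    task_type="CAUSAL_LM",
)
```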

#### Training Parameters

- **Epochs**: 3
- **Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Max Gradient Norm**: 0.3
- **Learning Rate**: 3e-4
- **Weight Decay**: 0.001
- **Optimizer**: paged_adamw_32bit
- **LR Scheduler**: cosine
- **Warmup Ratio**: 0.03
- **Logging Steps**: 25
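
A minimal sketch of how these hyperparameters might map onto the `transformers` `TrainingArguments` API; the output directory is a placeholder:

```python
from transformers import TrainingArguments

# Training hyperparameters as listed above
training_args = TrainingArguments(
    output_dir="./results",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=3e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=25,
)
```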

## Datasets

### Base Model Dataset

- **Name**: Llama 2 pretraining corpus (not publicly released)
- **Description**: Llama-2-7B was pretrained by Meta on roughly 2 trillion tokens of publicly available online data; the exact composition of the corpus has not been disclosed.

### Fine-tuning Dataset

- **Name**: Kabatubare/medical-guanaco-3000
- **Description**: This is a reduced and balanced dataset curated from a larger medical dialogue dataset. It aims to cover a broad range of medical topics and is suitable for training healthcare chatbots and conducting medical NLP research.
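
The dataset can be loaded directly from the Hugging Face Hub with the `datasets` library; the `"train"` split name is an assumption:

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
dataset = load_dataset("Kabatubare/medical-guanaco-3000")
print(dataset["train"][0])  # assumption: the default split is named "train"
```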

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id as listed in this card; prepend the owner namespace (owner/name) if required
tokenizer = AutoTokenizer.from_pretrained("Yo!Medical3000")
model = AutoModelForCausalLM.from_pretrained("Yo!Medical3000")

# Use the model for inference on a sample medical question
prompt = "What are the common symptoms of type 2 diabetes?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```