MELT-Mixtral-8x7B-Instruct-v0.1 is 68.2% accurate across 3 USMLE benchmarks, surpassing the pass mark (>60%) on U.S. Medical Licensing Examination (USMLE)-style questions.
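The pass-mark claim above is a simple average-versus-threshold check. A minimal sketch, where the per-benchmark names and scores are purely illustrative placeholders (only the 68.2% average and the 60% pass mark come from this README):

```python
# Hypothetical per-benchmark accuracies chosen so the mean matches the
# reported 68.2%; the individual values are NOT from this README.
scores = {"benchmark_1": 0.665, "benchmark_2": 0.685, "benchmark_3": 0.696}

average = sum(scores.values()) / len(scores)
PASS_MARK = 0.60  # USMLE-style pass mark cited in this README

print(f"average accuracy: {average:.1%}")   # 68.2%
print(f"passes pass mark: {average > PASS_MARK}")
```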
To the best of our understanding, our model is 4% less accurate than Google's 540 billion parameter [Med-PaLM](https://sites.research.google/med-palm/), which is 10X larger.
## Model Details
The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.