Preferred-MedLLM-Qwen-72B

Model Description

Preferred-MedLLM-Qwen-72B is a fine-tuned model based on Qwen/Qwen2.5-72B, created through continued pretraining on an original corpus of medical-related text.

The model is released under the Qwen LICENSE.

Model Performance

The table below shows performance on the Japanese National Medical Licensing Examination from 2018 to 2022 (the IgakuQA benchmark).

| Model ID | Average | 2018 | 2019 | 2020 | 2021 | 2022 |
|---|---|---|---|---|---|---|
| Preferred-MedLLM-Qwen-72B | 431.2 | 434 | 420 | 439 | 430 | 433 |
| GPT-4o | 430.4 | 427 | 431 | 433 | 427 | 434 |
| Qwen2.5-72B | 398.4 | 412 | 394 | 394 | 393 | 399 |
| Llama3-Preferred-MedSwallow-70B | 395.2 | 407 | 390 | 391 | 393 | 395 |
| GPT-4 | 388.8 | 382 | 385 | 387 | 398 | 392 |
| Mistral-Large-Instruct-2407 | 376.0 | 370 | 371 | 390 | 373 | 376 |
| Llama-3.1-Swallow-70B-v0.1 | 368.4 | 379 | 378 | 379 | 351 | 355 |
| Meta-Llama-3-70B | 334.6 | 353 | 340 | 348 | 314 | 318 |
| GPT-3.5 | 273.2 | 266 | 250 | 266 | 297 | 287 |
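The Average column is the mean of the five yearly exam scores. A minimal sketch that recomputes it for the top three rows (model names and per-year scores taken directly from the table above):

```python
# Recompute each model's average IgakuQA score (2018-2022)
# from the per-year results in the table above.
yearly_scores = {
    "Preferred-MedLLM-Qwen-72B": [434, 420, 439, 430, 433],
    "GPT-4o": [427, 431, 433, 427, 434],
    "Qwen2.5-72B": [412, 394, 394, 393, 399],
}

# Mean over the five exam years for each model.
averages = {name: sum(s) / len(s) for name, s in yearly_scores.items()}

for name, avg in averages.items():
    print(f"{name}: {avg}")
```

Running this reproduces the Average column, e.g. 431.2 for Preferred-MedLLM-Qwen-72B.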

Limitations

The model was developed for research purposes and is not intended for clinical diagnosis. It is the users' responsibility to ensure compliance with applicable rules and regulations.

Contributors

Preferred Networks, Inc.

  • Junichiro Iwasawa
  • Wataru Kawakami
  • Keita Suzuki

Publications

A related blog post and research paper are in preparation and will provide further details on the development and capabilities of this model.

License

Qwen LICENSE
