
HuatuoGPT-o1-72B

Introduction

HuatuoGPT-o1 is a medical LLM designed for advanced medical reasoning. Before giving a final response, it generates a complex chain of thought in which it reflects on and refines its reasoning.

For more information, visit our GitHub repository: https://github.com/FreedomIntelligence/HuatuoGPT-o1.

Model Info

| Model | Backbone | Supported Languages | Link |
| --- | --- | --- | --- |
| HuatuoGPT-o1-8B | LLaMA-3.1-8B | English | HF Link |
| HuatuoGPT-o1-70B | LLaMA-3.1-70B | English | HF Link |
| HuatuoGPT-o1-7B | Qwen2.5-7B | English & Chinese | HF Link |
| HuatuoGPT-o1-72B | Qwen2.5-72B | English & Chinese | HF Link |

Usage

You can use HuatuoGPT-o1-72B in the same way as Qwen2.5-72B-Instruct. You can deploy it with inference frameworks such as vLLM or SGLang (a vLLM sketch follows the example below), or perform direct inference:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; device_map="auto" shards the 72B weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/HuatuoGPT-o1-72B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-72B")

input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]

# Render the chat template to a prompt string, then tokenize it for generation.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
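
For serving, the vLLM offline API can load the model directly. A minimal sketch, assuming a recent vLLM release with the `chat` API; the `tensor_parallel_size` value is illustrative and should match your GPU count:

```python
from vllm import LLM, SamplingParams

# Illustrative setting: shard the 72B weights across 4 GPUs; adjust to your hardware.
llm = LLM(model="FreedomIntelligence/HuatuoGPT-o1-72B", tensor_parallel_size=4)
sampling_params = SamplingParams(temperature=0.7, max_tokens=2048)

# llm.chat applies the model's chat template before generation.
messages = [{"role": "user", "content": "How to stop a cough?"}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```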

HuatuoGPT-o1 adopts a thinks-before-it-answers approach, with outputs formatted as:

```
## Thinking
[Reasoning process]

## Final Response
[Output]
```
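
Because the reasoning and the answer arrive in a single completion, downstream code may want to separate them. A minimal sketch, keyed on the section markers above; the `split_response` helper is ours, not part of the model's tooling:

```python
def split_response(text: str) -> tuple[str, str]:
    """Split a HuatuoGPT-o1 completion into (thinking, final response)."""
    thinking, sep, final = text.partition("## Final Response")
    if not sep:  # Marker absent: treat the whole completion as the final answer.
        return "", text.strip()
    return thinking.replace("## Thinking", "", 1).strip(), final.strip()

demo = "## Thinking\n[Reasoning process]\n\n## Final Response\n[Output]"
thinking, answer = split_response(demo)
print(answer)  # -> "[Output]"
```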

📖 Citation

```
@misc{chen2024huatuogpto1medicalcomplexreasoning,
      title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
      author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
      year={2024},
      eprint={2412.18925},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.18925},
}
```
