Baichuan-M1-14B-Base

🤗 Baichuan-M1-14B-Base • 🤗 Baichuan-M1-14B-Instruct • 💬 WeChat


🏁 Model Introduction

Baichuan-M1-14B is the industry's first open-source large language model developed from scratch by Baichuan Intelligence and optimized specifically for medical scenarios. While retaining strong general capabilities, it delivers powerful performance in the medical field: it matches models of similar size on most general benchmarks and outperforms models five times larger in medical scenarios. Its core features are:

  • Trained from scratch on 20 trillion tokens of high-quality medical and general data.
  • Specialized modeling for 20+ medical departments with fine-grained medical expertise.
  • Introduces innovative model architecture, significantly improving context understanding and long-sequence task performance.
  • Provides 🤗 Base Model and 🤗 Instruct Model.

🔬 Data Collection and Processing

Medical Data Collection

We conducted meticulous data collection and synthesis for the medical field, including:

  • Tens of millions of professional medical data: Chinese/English professional papers, medical cases, medical textbooks, knowledge bases, etc.
  • Hundreds of millions of medical Q&A and clinical data: Covering complex medical reasoning and real-world clinical cases.
  • Comprehensive data classification and evaluation: Categorized by medical departments, content, and value to ensure balanced data distribution and filter out truly valuable medical data.

Data Synthesis and Optimization

  • Synthetic data design: Combining knowledge graphs, cases, and textbooks to generate diverse, high-quality medical reasoning data.
  • Self-reflection mechanism and reward model: Continuously improving the quality of synthetic data, ultimately generating nearly a trillion tokens of reasoning data, covering long-tail knowledge and complex scenarios.

General Data Collection

  • 20T multilingual general dataset: Including 14T English data, 4T Chinese data, and 2T data covering 30 mainstream languages.
  • Deduplication and upsampling strategy: Upsampling high-quality data to significantly enhance model performance.
  • 27 global knowledge categories: Optimizing data ratios based on small model experiments to balance general and domain-specific capabilities.

🧠 New Model Architecture

Short Convolution Attention Mechanism

  • Lightweight short convolution operations are introduced when computing Key and Value, significantly reducing the reliance of standard Transformer models on induction heads. Traditional Transformers depend on induction heads to capture repetitive patterns and contextual dependencies in a sequence, which requires a certain model width and depth. Short convolution decouples the Key and Value sequences along the time dimension, enhancing in-context learning. Extensive experiments, from toy models to models with over ten billion parameters, show that the short convolution attention mechanism excels at language modeling, especially on tasks that depend heavily on contextual information (a minimal sketch of the idea follows).
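
As a rough illustration of the idea, the sketch below applies a small causal depthwise convolution to the Key and Value projections along the time axis before standard attention is computed. The kernel size and module layout are assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortConvKV(nn.Module):
    """Mix each Key/Value vector with its immediate predecessors (causal)."""
    def __init__(self, dim: int, kernel_size: int = 2):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise 1D convolutions over the time dimension
        self.conv_k = nn.Conv1d(dim, dim, kernel_size, groups=dim, bias=False)
        self.conv_v = nn.Conv1d(dim, dim, kernel_size, groups=dim, bias=False)

    def forward(self, k: torch.Tensor, v: torch.Tensor):
        # k, v: (batch, seq_len, dim) -> (batch, dim, seq_len) for Conv1d
        k, v = k.transpose(1, 2), v.transpose(1, 2)
        pad = (self.kernel_size - 1, 0)  # left padding keeps the conv causal
        k = self.conv_k(F.pad(k, pad)).transpose(1, 2)
        v = self.conv_v(F.pad(v, pad)).transpose(1, 2)
        return k, v  # fed into the usual scaled dot-product attention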

Sliding Window Attention Mechanism

  • Adopting a sliding window attention mechanism in some layers to reduce KV Cache memory usage.
  • Balancing computational efficiency and performance, especially suitable for long-sequence tasks.
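
For illustration only (the window size here is arbitrary, not the model's actual configuration), a boolean sliding-window causal mask restricts each query position to the most recent window of key positions, which is what bounds the per-layer KV cache:

import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where attention is allowed: causal and within the last `window` tokens
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).int())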

Optimizing Position Encoding Oscillation

  • By increasing the dimensions of some attention heads, RoPE curve oscillation is reduced.
  • More stable performance in long-sequence tasks while maintaining the model's ability to capture diverse features.
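
A rough way to see the effect (parameter values below are illustrative, not the model's configuration): RoPE assigns each pair of head dimensions a rotation frequency base^(-2i/d), so a larger head dimension d yields more, and more densely spaced, frequency bands, smoothing the oscillation of the attention-score-versus-distance curve.

import torch

def rope_frequencies(head_dim: int, base: float = 10000.0) -> torch.Tensor:
    # One rotation frequency per pair of dimensions: theta_i = base ** (-2i / d)
    i = torch.arange(0, head_dim, 2, dtype=torch.float32)
    return base ** (-i / head_dim)

print(rope_frequencies(64).shape)   # 32 frequency bands
print(rope_frequencies(128).shape)  # 64 bands covering the same range more densely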

High Peak Learning Rate Strategy

  • Using WSD learning rate scheduling strategy with high peak learning rates to promote model generalization.
  • Significant improvement in benchmark task performance.
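
A minimal sketch of a Warmup-Stable-Decay (WSD) schedule follows; all step counts and learning rates are chosen purely for illustration and are not taken from the actual training recipe.

def wsd_lr(step: int, peak_lr: float = 1e-3, final_lr: float = 1e-5,
           warmup: int = 2000, stable: int = 80000, decay: int = 18000) -> float:
    if step < warmup:                       # linear warmup to a high peak
        return peak_lr * step / warmup
    if step < warmup + stable:              # long constant phase at the peak
        return peak_lr
    t = min(step - warmup - stable, decay) / decay
    return peak_lr + (final_lr - peak_lr) * t   # linear decay to the final rate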

Adaptive Gradient Update

  • Dynamic gradient clipping: Skipping updates when gradients are too large to reduce instability caused by special samples or steep loss spaces.
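
A minimal sketch of the skip-on-spike idea, assuming a fixed clipping norm and a simple threshold rule (the exact criterion used during training is not specified here):

import torch

def clip_or_skip(model, optimizer, max_norm: float = 1.0, skip_factor: float = 4.0) -> bool:
    # clip_grad_norm_ returns the total gradient norm measured before clipping
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    if total_norm > skip_factor * max_norm:
        optimizer.zero_grad(set_to_none=True)  # drop this unstable update
        return False
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True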

⚙️ Training Methodology

We innovatively adopted a multi-stage curriculum learning and alignment optimization approach, systematically enhancing model capabilities through the following two parts:

1. Multi-Stage Curriculum Learning

Training is divided into three stages, progressively optimizing the model's general and medical domain capabilities:

  1. General Knowledge Enhancement Stage: Focused on general language modeling to strengthen basic language ability and common-sense knowledge.
  2. Medical Basic Knowledge Enhancement Stage: Introducing high-quality medical data to enhance reasoning, mathematical, and medical knowledge.
  3. Medical Advanced Knowledge Enhancement Stage: Further optimizing data quality, focusing on complex medical reasoning, disease diagnosis, and long-tail knowledge.

2. Alignment Optimization

Enhancing model generation quality, logical reasoning, and user preference alignment through reinforcement learning and pairwise data optimization:

  1. Pairwise Data: Covering multi-turn dialogues, instruction following, math and code, and reasoning tasks, sourced from human annotations and multi-model generation.
  2. Optimization Process:
    • ELO: Optimizing diverse, high-quality chain-of-thought generation based on maximum likelihood.
    • TDPO: Using pairwise data to optimize the generation model for better user preference alignment.
    • PPO: Further enhancing generation logic and task performance through policy optimization.

This combined approach of multi-stage and alignment optimization enables the model to achieve exceptional performance in both general and medical domain capabilities.
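
As a rough illustration of how pairwise preference data drives alignment, the sketch below shows a simplified sequence-level loss in the DPO family; it is not the exact TDPO or PPO objective used for Baichuan-M1.

import torch
import torch.nn.functional as F

def pairwise_preference_loss(logp_chosen, logp_rejected,
                             ref_logp_chosen, ref_logp_rejected, beta: float = 0.1):
    # Log-probabilities are summed over the tokens of each response; the
    # reference model keeps the policy from drifting too far during alignment.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()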


📊 Benchmark Results

Our evaluation covers all mainstream benchmarks. The model achieves excellent results against both open-source and closed-source models, demonstrating outstanding medical-scenario capabilities while maintaining strong general performance.

| Category | Benchmark | Baichuan-M1-14B-Instruct | Qwen2.5-14B-Instruct | Qwen2.5-72B-Instruct | claude-3.5-sonnet-20241022 | gpt-4o |
|---|---|---|---|---|---|---|
| | Average Score | 72.23 | 65.39 | 70.51 | 74.85 | 75.00 |
| Clinical Practice | cmbclin | 77.40 | 71.51 | 75.36 | 78.37 | 75.36 |
| | clinicalbench_diag | 70.90 | 68.85 | 72.23 | 75.00 | 73.05 |
| | clinicalbench_hos | 70.05 | 68.83 | 70.53 | 65.58 | 69.38 |
| | clinicalbench_treat | 56.38 | 55.03 | 57.30 | 64.03 | 59.35 |
| | rarearena_rdc | 81.80 | 66.40 | 76.20 | 89.60 | 88.40 |
| | rarearena_rds | 54.00 | 42.60 | 49.80 | 59.80 | 57.20 |
| | rarebench | 59.60 | 52.80 | 60.60 | 65.30 | 62.80 |
| Exams | cmexam | 80.10 | 77.70 | 82.70 | 77.50 | 78.00 |
| | Pediatric Qualification Exam | 78.48 | 74.68 | 84.81 | 76.58 | 78.48 |
| | Internal Medicine Qualification Exam | 83.42 | 86.10 | 87.17 | 87.70 | 83.42 |
| | General Practice Qualification Exam | 87.07 | 88.44 | 88.44 | 81.63 | 84.35 |
| | USMLE | 78.00 | 67.20 | 76.70 | 85.90 | 87.10 |
| | medbullets | 66.88 | 54.22 | 64.29 | 72.40 | 75.97 |
| | mediq | 83.40 | 66.80 | 79.90 | 88.80 | 90.20 |
| | nejmqa | 49.75 | 45.69 | 50.76 | 69.54 | 54.31 |
| | pubmedqa | 75.20 | 76.40 | 75.60 | 77.00 | 77.60 |
| | redisqa | 74.50 | 69.70 | 75.00 | 83.20 | 82.80 |
| Basic Capabilities | mednli_dis | 80.40 | 68.90 | 74.90 | 58.30 | 79.80 |
| | medcalc | 56.00 | 31.40 | 37.90 | 52.60 | 49.00 |
| | MMLU-anatomy | 80.00 | 67.41 | 71.11 | 86.67 | 91.11 |
| | MMLU-virology | 54.82 | 56.02 | 53.01 | 54.22 | 57.23 |
| | MMLU-genetics | 91.00 | 82.00 | 87.00 | 97.00 | 95.00 |

🚀 Quick Start

🤗 Hugging Face Transformers

We recommend using the latest version of the Transformers library (at least 4.47.0). The following code snippet demonstrates how to load the Baichuan-M1-14B-Base model and generate text:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# 1. Load the pretrained model and tokenizer
model_name = "baichuan-inc/Baichuan-M1-14B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()

# 2. Tokenize the input prompt and move it to the model's device
input_text = "I have recently recovered from my cold."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# 3. Generate a continuation
outputs = model.generate(
    inputs["input_ids"],
    max_length=100,
)

# 4. Decode and print the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Text:")
print(generated_text)

📜 License and Statement

The use of the model must comply with the Baichuan-M1-14B Model Community License Agreement (《Baichuan-M1-14B模型社区许可协议》).

The Baichuan development team has not developed any commercial applications based on this model. All users must comply with applicable laws and regulations and must not use the model for purposes that endanger national security or are otherwise illegal.
