user (string) | created_at (timestamp[us]) | body (string) | issue_number (int64) | __index_level_0__ (int64)
---|---|---|---|---|
kashif | 2025-02-24T09:13:40 | i think so too... we will need to pin the transformers version, but yes, it should be a better solution | 2,940 | 12 |
qgallouedec | 2025-02-24T11:41:07 | Thanks @edbeeching!
> i think so too... we will need to pin the transformers version, but yes, it should be a better solution
After checking, it seems like `use_liger_kernel` exists for at least 4.46, which is the min version in TRL. So we shouldn't need to bump transformers
https://github.com/huggingface/transformers/blob/052e652d6d53c2b26ffde87e039b723949a53493/src/transformers/training_args.py#L1521 | 2,940 | 13 |
qgallouedec | 2025-02-24T11:43:06 | it was introduced in 4.45 | 2,940 | 14 |
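For anyone handling this locally, a minimal sketch of gating the flag on the installed transformers version; the `packaging` check is an illustration, not something from the thread, while `use_liger_kernel` is the `TrainingArguments` field linked above (introduced around 4.45):

```python
# A minimal sketch: only pass `use_liger_kernel` when the installed transformers
# version is recent enough to know the flag.
from packaging import version
import transformers
from transformers import TrainingArguments

kwargs = {}
if version.parse(transformers.__version__) >= version.parse("4.45.0"):
    kwargs["use_liger_kernel"] = True  # flag exists since transformers 4.45
args = TrainingArguments(output_dir="out", **kwargs)
```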
kashif | 2025-02-23T18:42:51 | @DanFosing which version of TRL are you using? | 2,939 | 15 |
DanFosing | 2025-02-23T19:02:31 | I experienced this issue with both v0.15.1 and with the alpha version downloaded using:
`pip install git+https://github.com/huggingface/trl.git` | 2,939 | 16 |
DanFosing | 2025-02-23T19:05:41 | Oh, and I forgot to mention: max_seq_length didn't seem to work for me for some reason. The warning says it will be deprecated in v0.20.0, but are you sure it wasn't deprecated already? (That's why I added a comment there in the code, but it's not related to the main fix.) | 2,939 | 17 |
HuggingFaceDocBuilderDev | 2025-02-23T16:37:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,938 | 18 |
qgallouedec | 2025-02-23T16:14:16 | Thanks for reporting, please provide reference code and your system info | 2,937 | 19 |
yzhdut | 2025-02-24T02:02:38 | > Thanks for reporting, please provide reference code and your system info
OK, thanks for your reply.
system info:
gpu:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.40.07 Driver Version: 550.40.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 Off | Off |
| 44% 79C P2 434W / 450W | 22731MiB / 24564MiB | 100% Default |
```
Package Version
----------------------------- ------------
accelerate 1.4.0
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
altair 5.2.0
annotated-types 0.6.0
antlr4-python3-runtime 4.9.3
anyio 4.2.0
async-timeout 4.0.3
attrs 23.1.0
auto_gptq 0.7.1
bitsandbytes 0.45.2
blinker 1.7.0
cachetools 5.3.2
certifi 2023.11.17
charset-normalizer 3.3.2
click 8.1.7
cmake 3.27.9
colorama 0.4.6
contourpy 1.2.0
cpm-kernels 1.0.11
cycler 0.12.1
datasets 3.3.1
deepspeed 0.12.4
dill 0.3.7
docstring-parser 0.15
et-xmlfile 1.1.0
exceptiongroup 1.2.0
fastapi 0.108.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.47.0
frozenlist 1.4.1
fsspec 2023.10.0
gekko 1.2.1
gitdb 4.0.11
GitPython 3.1.40
gradio 3.38.0
gradio_client 0.8.0
h11 0.14.0
hjson 3.1.0
httpcore 1.0.2
httpx 0.26.0
huggingface-hub 0.29.0
idna 3.6
importlib-metadata 6.11.0
importlib-resources 6.1.1
jieba 0.42.1
Jinja2 3.1.2
joblib 1.3.2
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
kiwisolver 1.4.5
linkify-it-py 2.0.3
lit 17.0.6
markdown-it-py 2.2.0
MarkupSafe 2.1.3
matplotlib 3.8.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
ninja 1.11.1.1
nltk 3.8.1
numpy 1.26.2
omegaconf 2.3.0
openpyxl 3.1.2
optimum 1.24.0
orjson 3.9.10
packaging 23.2
pandas 2.1.4
peft 0.10.0
Pillow 10.1.0
pip 25.0.1
protobuf 4.25.1
psutil 5.9.6
py-cpuinfo 9.0.0
pyarrow 19.0.1
pyarrow-hotfix 0.6
pydantic 2.5.2
pydantic_core 2.14.5
pydeck 0.8.1b0
pydub 0.25.1
Pygments 2.17.2
pynvml 11.5.0
pyparsing 3.1.1
pyproject 1.3.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.32.0
regex 2023.10.3
requests 2.32.3
rich 13.7.0
rouge 1.0.1
rouge-chinese 1.0.3
rpds-py 0.13.2
safetensors 0.5.2
scipy 1.11.4
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 75.8.0
shellingham 1.5.4
shtab 1.6.5
six 1.16.0
smmap 5.0.1
sniffio 1.3.0
sse-starlette 1.8.2
starlette 0.32.0.post1
streamlit 1.29.0
sympy 1.12
tenacity 8.2.3
tiktoken 0.5.2
tokenizers 0.20.3
toml 0.10.2
tomlkit 0.12.0
toolz 0.12.0
torch 2.0.0+cu118
torchvision 0.15.0+cu118
tornado 6.4
tqdm 4.67.1
transformers 4.46.0
transformers-stream-generator 0.0.4
triton 3.2.0
trl 0.15.1
typer 0.12.3
typing_extensions 4.9.0
tyro 0.6.3
tzdata 2024.2
tzlocal 5.2
uc-micro-py 1.0.3
urllib3 2.1.0
uvicorn 0.25.0
validators 0.22.0
watchdog 3.0.0
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yarl 1.9.4
zipp 3.17.0
zstandard 0.23.0
```
reference code:
```python
with open('data/RLFTqa_delete_some_answer_is_null.json', 'r') as f:
data = json.load(f)
random.shuffle(data)
data = data[0:200]
formatted_data = []
for item in data:
formatted_data.append({
"prompt": item["question"],
"reference": "||".join(item.get("answer", []))
})
n_total = len(formatted_data)
n_train = int(0.7 * n_total)
n_eval = int(0.15 * n_total)
n_test = n_total - n_train - n_eval
train_data = formatted_data[:1]
eval_data = formatted_data[n_train:n_train+n_eval]
test_data = formatted_data[n_train+n_eval:]
dataset_features = Features({
"prompt": Value("string"),
"reference": Value("string")
})
train_dataset = Dataset.from_list(train_data, features=dataset_features)
eval_dataset = Dataset.from_list(eval_data, features=dataset_features)
test_dataset = Dataset.from_list(test_data, features=dataset_features)
local_model_path = "./Qwen2.5-7B-Instruct-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(local_model_path)
tokenizer.pad_token = tokenizer.eos_token
base_model = AutoModelForCausalLM.from_pretrained(
local_model_path,
# quantization_config=quant_config,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.float16
)
base_model.config.gradient_checkpointing = False
lora_config = LoraConfig(
r=16, # moderately increase the rank dimension
lora_alpha=32,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"], # expand the target modules
lora_dropout=0.1,
bias="none",
task_type="CAUSAL_LM",
)
model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(model, lora_config)
grpo_config = GRPOConfig(
num_generations=8, # number of generations per group
max_completion_length=512, # maximum completion length
gradient_accumulation_steps=2,
learning_rate=3e-5,
logging_steps=5,
logging_first_step = True,
save_steps=50,
output_dir="./grpo_checkpoints",
logging_dir="./grpo_checkpoints/log",
log_level="info",
log_completions=True,
eval_strategy="steps",
eval_steps=40,
do_predict=True,
temperature=0.7,
gradient_checkpointing=False,
use_vllm=False
)
model.bfloat16()
trainer = GRPOTrainer(
model=model,
args=grpo_config,
reward_funcs=[reward_func, format_reward_func], # pass the reward functions directly
train_dataset=train_dataset,
eval_dataset=eval_dataset,
processing_class=tokenizer,
)
train_output = trainer.predict(train_dataset)
trainer.train()
model.save_pretrained("grpo_lora_adapter")
```
As mentioned above, the length of train_dataset is 1. Before training, I first call trainer.predict(train_dataset). Then I printed the model's completions inside the library function; the location is as follows:
xxx/lib/python3.10/site-packages/trl/trainer/grpo_trainer.py:
```python
def _prepare_inputs(self, inputs: dict[str, Union[torch.Tensor, Any]]) -> dict[str, Union[torch.Tensor, Any]]:
device = self.accelerator.device
prompts = [x["prompt"] for x in inputs]
prompts_text = [maybe_apply_chat_template(example, self.processing_class)["prompt"] for example in inputs]
prompt_inputs = self.processing_class(
prompts_text, return_tensors="pt", padding=True, padding_side="left", add_special_tokens=False
)
prompt_inputs = super()._prepare_inputs(prompt_inputs)
prompt_ids, prompt_mask = prompt_inputs["input_ids"], prompt_inputs["attention_mask"]
if self.max_prompt_length is not None:
prompt_ids = prompt_ids[:, -self.max_prompt_length :]
prompt_mask = prompt_mask[:, -self.max_prompt_length :]
# Generate completions using either vLLM or regular generation
if self.args.use_vllm:
print("Using VLM model")
# First, have main process load weights if needed
if self.state.global_step != self._last_loaded_step:
self._move_model_to_vllm()
self._last_loaded_step = self.state.global_step
# Generate completions using vLLM: gather all prompts and use them in a single call in the main process
all_prompts_text = gather_object(prompts_text)
if self.accelerator.is_main_process:
outputs = self.llm.generate(all_prompts_text, sampling_params=self.sampling_params, use_tqdm=False)
completion_ids = [out.token_ids for completions in outputs for out in completions.outputs]
else:
completion_ids = [None] * len(all_prompts_text)
# Broadcast the completions from the main process to all processes, ensuring each process receives its
# corresponding slice.
completion_ids = broadcast_object_list(completion_ids, from_process=0)
process_slice = slice(
self.accelerator.process_index * len(prompts),
(self.accelerator.process_index + 1) * len(prompts),
)
completion_ids = completion_ids[process_slice]
# Pad the completions, and concatenate them with the prompts
completion_ids = [torch.tensor(ids, device=device) for ids in completion_ids]
completion_ids = pad(completion_ids, padding_value=self.processing_class.pad_token_id)
prompt_completion_ids = torch.cat([prompt_ids, completion_ids], dim=1)
else:
# Regular generation path
with unwrap_model_for_generation(self.model, self.accelerator) as unwrapped_model:
prompt_completion_ids = unwrapped_model.generate(
prompt_ids, attention_mask=prompt_mask, generation_config=self.generation_config
)
# Compute prompt length and extract completion ids
prompt_length = prompt_ids.size(1)
prompt_ids = prompt_completion_ids[:, :prompt_length]
completion_ids = prompt_completion_ids[:, prompt_length:]
# Mask everything after the first EOS token
is_eos = completion_ids == self.processing_class.eos_token_id
eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long, device=device)
eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)]
sequence_indices = torch.arange(is_eos.size(1), device=device).expand(is_eos.size(0), -1)
completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int()
# Concatenate prompt_mask with completion_mask for logit computation
attention_mask = torch.cat([prompt_mask, completion_mask], dim=1) # (B*G, P+C)
logits_to_keep = completion_ids.size(1) # we only need to compute the logits for the completion tokens
with torch.inference_mode():
if self.ref_model is not None:
ref_per_token_logps = self._get_per_token_logps(
self.ref_model, prompt_completion_ids, attention_mask, logits_to_keep
)
else:
with self.accelerator.unwrap_model(self.model).disable_adapter():
ref_per_token_logps = self._get_per_token_logps(
self.model, prompt_completion_ids, attention_mask, logits_to_keep
)
# Decode the generated completions
completions_text = self.processing_class.batch_decode(completion_ids, skip_special_tokens=True)
print("Completions text:", completions_text)`
As I mentioned, when predicting, the model shows basic problem-solving ability. However, for the same input during training, the completions show significantly reduced model performance, with irrelevant statements, off-task content, and repetition.
Also, I found that the log doesn't seem to be saved together with the checkpoint. Is it possible that it is only saved after training is completed?

| 2,937 | 20 |
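As a side note for readers of the quoted `_prepare_inputs`, the EOS-masking block is easier to follow on toy data; here is a tiny standalone reproduction of just that logic, with made-up token ids and EOS id:

```python
import torch

eos_token_id = 2  # arbitrary toy value
completion_ids = torch.tensor([[5, 7, 2, 9],    # EOS at position 2 -> position 3 masked out
                               [4, 4, 4, 4]])   # no EOS -> everything kept

is_eos = completion_ids == eos_token_id
eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long)
eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)]
sequence_indices = torch.arange(is_eos.size(1)).expand(is_eos.size(0), -1)
completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int()
print(completion_mask)
# tensor([[1, 1, 1, 0],
#         [1, 1, 1, 1]])
```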
HuggingFaceDocBuilderDev | 2025-02-23T13:17:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2936). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,936 | 21 |
qgallouedec | 2025-02-23T16:55:14 | Super cool PR @August-murr! I'll check in detail ASAP.
> I faced out-of-memory (OOM) issues and couldn't train an agent for more complex tasks.
Can you share your code?
> In the end, we could write a blog post or report to showcase its effectiveness.
Definitely!
> While the results look good, they don’t represent a practical use case
Do you have another use case that is simple but still practical?
| 2,936 | 22 |
qgallouedec | 2025-02-23T17:17:44 | Can you add some tests and docs as well? | 2,936 | 23 |
August-murr | 2025-02-23T19:50:09 | > Can you share your code?
[Kaggle Notebook](https://www.kaggle.com/code/augustmurr/training-agent-to-generate-code)
The biggest issue was really Kaggle's 2xT4 having little VRAM.
I did try PEFT, but I couldn't use it properly with vLLM, so I decided to train the full model instead.
>
> Do you have another simple while practical use case?
No, not simpler than that.
| 2,936 | 24 |
qgallouedec | 2025-02-23T21:22:24 | Not sure when you tested it but peft + vllm should be fixed now | 2,936 | 25 |
qgallouedec | 2025-02-23T12:11:35 | @bot /style | 2,935 | 26 |
HuggingFaceDocBuilderDev | 2025-02-23T12:15:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2935). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,935 | 27 |
HuggingFaceDocBuilderDev | 2025-02-23T12:04:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,934 | 28 |
qgallouedec | 2025-02-23T13:29:50 | @bot /style | 2,934 | 29 |
qgallouedec | 2025-02-23T13:32:58 | @bot /style | 2,934 | 30 |
qgallouedec | 2025-02-23T13:35:00 | @bot /style | 2,934 | 31 |
qgallouedec | 2025-02-23T16:35:17 | @bot /style | 2,934 | 32 |
qgallouedec | 2025-02-23T16:36:09 | @bot /style | 2,934 | 33 |
github-actions[bot] | 2025-02-23T16:36:27 | Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13484908517). | 2,934 | 34 |
qgallouedec | 2025-02-23T17:21:27 | Thanks, just a minor suggestion :)
| 2,932 | 35 |
qgallouedec | 2025-02-23T17:21:35 | @bot /style | 2,932 | 36 |
cuiyuhao1996 | 2025-02-24T07:39:04 | How was this solved? I'm also encountering the same problem. | 2,931 | 37 |
qgallouedec | 2025-02-24T08:00:12 | Try to upgrade trl, it should solve the issue | 2,931 | 38 |
qgallouedec | 2025-02-24T08:00:13 | Try to upgrade trl, it should solve the issue | 2,931 | 39 |
HuggingFaceDocBuilderDev | 2025-02-22T12:13:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2929). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,929 | 40 |
mehdiataei | 2025-02-22T05:57:35 | Fixed by downgrading to transformers==4.49.0 from dev. | 2,928 | 41 |
dignfei | 2025-02-22T12:22:32 | I can train Qwen2.5-3B on a 4090 (24 GB) | 2,927 | 42 |
dignfei | 2025-02-22T12:22:58 | Qwen2.5-7B only needs 2x H20 (80 GB) | 2,927 | 43 |
Tuziking | 2025-02-22T12:39:57 | > Qwen2.5-7B only needs 2x H20 (80 GB)
I'm sorry to bother you, but could you share your code with me or help me find the bug in my code below? I referenced
willccbb's code for training.
``` python
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import GRPOConfig, GRPOTrainer
from peft import LoraConfig, get_peft_model, TaskType
import wandb
import logging
from scripts.utils.replace_grpo_trainer import trigger
#
logging.basicConfig(
filename="GRPO-Qwen2.5-7B.log", # log file name
level=logging.INFO, # logging level
format="%(asctime)s - %(message)s", # log format
datefmt="%Y-%m-%d %H:%M:%S"
)
logger = logging.getLogger("logger")
# Load and prep dataset
SYSTEM_PROMPT = """
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the think process in the mind and then provides the user with the answer. The think process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> think process here </think> <answer> answer here </answer>
"""
XML_COT_FORMAT = """
<think>
{think}
</think>
<answer>
{answer}
</answer>
"""
def extract_xml_answer(text: str) -> str:
answer = text.split("<answer>")[-1]
answer = answer.split("</answer>")[0]
return answer.strip()
def extract_hash_answer(text: str) -> str | None:
if "####" not in text:
return None
return text.split("####")[1].strip()
# uncomment middle messages for 1-shot prompting
def get_gsm8k_questions(split = "train") -> Dataset:
data = load_dataset('dataset/gsm8k', 'main')[split] # type: ignore
data = data.map(lambda x: { # type: ignore
'prompt': [
{'role': 'system', 'content': SYSTEM_PROMPT},
# {'role': 'user', 'content': 'What is the largest single-digit prime number?'},
# {'role': 'assistant', 'content': XML_COT_FORMAT.format(
# think="9 is divisble by 3 and 8 is divisible by 2, but 7 is prime.",
# answer="7"
# )},
{'role': 'user', 'content': x['question']}
],
'answer': extract_hash_answer(x['answer'])
}) # type: ignore
return data # type: ignore
dataset = get_gsm8k_questions()
# print(dataset[0])
# Reward functions
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
responses = [completion[0]['content'] for completion in completions]
q = prompts[0][-1]['content']
extracted_responses = [extract_xml_answer(r) for r in responses]
print(len(responses), len(extracted_responses), len(answer))
# for response, extracted_response, _answer in zip(responses, extracted_responses, answer):
logger.info('-'*20)
logger.info(f"Question:\n{q}")
logger.info(f"Answer:\n{answer[0]}")
logger.info(f"Response:\n{responses[0]}")
logger.info(f"Extracted:\n{extracted_responses[0]}")
logger.info(f"Correctness: {1.0 if extracted_responses[0] == answer[0] else 0.0}")
# wandb.log({"Correctness": 1.0 if extracted_responses[0] == answer[0] else 0.0})
# print('-'*20, f"Question:\n{q}", f"\nAnswer:\n{answer[0]}", f"\nResponse:\n{responses[0]}", f"\nExtracted:\n{extracted_responses[0]}")
return [2.0 if r == a else 0.0 for r, a in zip(extracted_responses, answer)]
def int_reward_func(completions, **kwargs) -> list[float]:
# print("int_reward_func")
responses = [completion[0]['content'] for completion in completions]
extracted_responses = [extract_xml_answer(r) for r in responses]
return [0.5 if r.isdigit() else 0.0 for r in extracted_responses]
# def strict_format_reward_func(completions, **kwargs) -> list[float]:
# """Reward function that checks if the completion has a specific format."""
# # print("strict_format_reward_func")
# pattern = r"^<think>\n.*?\n</think>\n<answer>\n.*?\n</answer>\n$"
# responses = [completion[0]["content"] for completion in completions]
# matches = [re.match(pattern, r) for r in responses]
# return [0.5 if match else 0.0 for match in matches]
def soft_format_reward_func(completions, **kwargs) -> list[float]:
# print("soft_format_reward_func")
"""Reward function that checks if the completion has a specific format."""
pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
responses = [completion[0]["content"] for completion in completions]
matches = [re.match(pattern, r) for r in responses]
return [0.5 if match else 0.0 for match in matches]
def count_xml(text) -> float:
count = 0.0
if text.count("<think>\n") == 1:
count += 0.125
if text.count("\n</think>\n") == 1:
count += 0.125
if text.count("\n<answer>\n") == 1:
count += 0.125
count -= len(text.split("\n</answer>\n")[-1])*0.001
if text.count("\n</answer>") == 1:
count += 0.125
count -= (len(text.split("\n</answer>")[-1]) - 1)*0.001
return count
def xmlcount_reward_func(completions, **kwargs) -> list[float]:
# print("xmlcount_reward_func")
contents = [completion[0]["content"] for completion in completions]
return [count_xml(c) for c in contents]
output_dir = "outputs/Qwen2.5-7B-GRPO"
model_name = "models/Qwen2.5-7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
training_args = GRPOConfig(
output_dir=output_dir,
learning_rate=5e-6,
adam_beta1 = 0.9,
adam_beta2 = 0.99,
weight_decay = 0.1,
warmup_ratio = 0.1,
lr_scheduler_type='cosine',
logging_steps=1,
bf16=True,
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
num_generations=2,
max_prompt_length=256,
max_completion_length=512,
num_train_epochs=1,
save_steps=100,
max_grad_norm=0.1,
log_on_each_node=False,
use_vllm=False,
report_to="wandb"
)
trainer = GRPOTrainer(
model = model,
# reward_funcs = xmlcount_reward_func,
reward_funcs = [
xmlcount_reward_func,
soft_format_reward_func,
# strict_format_reward_func,
int_reward_func,
correctness_reward_func,
],
args = training_args,
train_dataset = dataset,
)
trainer.train()
trainer.save_model(output_dir)
```
| 2,927 | 44 |
Tuziking | 2025-02-23T09:25:21 | I tested with different per_device_train_batch_size and num_generations values. With one H20 and `per_device_train_batch_size=4, num_generations=4`, training runs for some steps before OOM. But with 3x H20 and `per_device_train_batch_size=2, num_generations=6`, OOM occurs earlier. I don't understand why training with more H20s makes OOM occur earlier. The error log is as follows:
```
[rank0]:[W223 17:38:57.742650271 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank1]:[W223 17:38:57.742968833 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank2]:[W223 17:38:57.745997855 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
Traceback (most recent call last):
File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module>
trainer.train()
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop
self.optimizer.step()
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step
self.optimizer.step(closure)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
return func.__get__(opt, opt.__class__)(*args, **kwargs)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
out = func(*args, **kwargs)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
ret = func(self, *args, **kwargs)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
adamw(
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
return func(*args, **kwargs)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
func(
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw
exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 0 has a total capacity of 94.99 GiB of which 87.19 MiB is free. Including non-PyTorch memory, this process has 94.90 GiB memory in use. Of the allocated memory 90.98 GiB is allocated by PyTorch, and 2.01 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank0]: Traceback (most recent call last):
[rank0]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module>
[rank0]: trainer.train()
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
[rank0]: return inner_training_loop(
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop
[rank0]: self.optimizer.step()
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step
[rank0]: self.optimizer.step(closure)
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
[rank0]: return func.__get__(opt, opt.__class__)(*args, **kwargs)
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
[rank0]: out = func(*args, **kwargs)
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank0]: ret = func(self, *args, **kwargs)
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
[rank0]: adamw(
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
[rank0]: func(
[rank0]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw
[rank0]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 0 has a total capacity of 94.99 GiB of which 87.19 MiB is free. Including non-PyTorch memory, this process has 94.90 GiB memory in use. Of the allocated memory 90.98 GiB is allocated by PyTorch, and 2.01 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank1]: Traceback (most recent call last):
[rank1]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module>
[rank1]: trainer.train()
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
[rank1]: return inner_training_loop(
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop
[rank1]: self.optimizer.step()
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step
[rank1]: self.optimizer.step(closure)
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
[rank1]: return func.__get__(opt, opt.__class__)(*args, **kwargs)
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
[rank1]: out = func(*args, **kwargs)
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank1]: ret = func(self, *args, **kwargs)
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
[rank1]: adamw(
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
[rank1]: func(
[rank1]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw
[rank1]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 1 has a total capacity of 94.99 GiB of which 127.19 MiB is free. Including non-PyTorch memory, this process has 94.86 GiB memory in use. Of the allocated memory 92.27 GiB is allocated by PyTorch, and 682.32 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank2]: Traceback (most recent call last):
[rank2]: File "/online1/sc100010/sc100010/qb_project/MARL/trl_GRPO_train.py", line 176, in <module>
[rank2]: trainer.train()
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
[rank2]: return inner_training_loop(
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/transformers/trainer.py", line 2599, in _inner_training_loop
[rank2]: self.optimizer.step()
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/optimizer.py", line 178, in step
[rank2]: self.optimizer.step(closure)
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
[rank2]: return func.__get__(opt, opt.__class__)(*args, **kwargs)
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
[rank2]: out = func(*args, **kwargs)
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank2]: ret = func(self, *args, **kwargs)
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
[rank2]: adamw(
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
[rank2]: return func(*args, **kwargs)
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
[rank2]: func(
[rank2]: File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/optim/adamw.py", line 606, in _multi_tensor_adamw
[rank2]: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB. GPU 2 has a total capacity of 94.99 GiB of which 1.19 MiB is free. Including non-PyTorch memory, this process has 94.98 GiB memory in use. Of the allocated memory 92.43 GiB is allocated by PyTorch, and 703.27 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
wandb:
wandb: 🚀 View run outputs/Qwen2.5-7B-GRPO at: https://wandb.ai/bobo1398861921-nus/huggingface/runs/eorh7fyx
wandb: Find logs at: ../../../../../../../../online1/sc100010/sc100010/qb_project/MARL/wandb/run-20250223_173847-eorh7fyx/logs
W0223 17:39:00.122000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 40439 closing signal SIGTERM
W0223 17:39:00.125000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 40440 closing signal SIGTERM
E0223 17:39:00.741000 40366 /online1/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 2 (pid: 40441) of binary: /home/export/base/sc100010/sc100010/.conda/envs/torch/bin/python
Traceback (most recent call last):
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1163, in launch_command
multi_gpu_launcher(args)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/accelerate/commands/launch.py", line 792, in multi_gpu_launcher
distrib_run.run(args)
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/export/base/sc100010/sc100010/.conda/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
trl_GRPO_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-23_17:39:00
host : gpu018
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 40441)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
``` | 2,927 | 45 |
kashif | 2025-02-21T16:47:42 | i might need to update the loss on the liger side with respect to the multi-turn PR in TRL's GRPOTrainer | 2,926 | 46 |
SalmanMohammadi | 2025-02-21T17:10:03 | > i might need to update the loss on the liger side with respect to the multi-turn PR in TRL's GRPOTrainer
Let me know if I can help test! | 2,926 | 47 |
Christoph-XJ | 2025-02-21T16:47:52 | I ran into the exact same problem when training a 1.5B model with 8 A100s. Can't do anything about it now. | 2,923 | 48 |
edwardzjl | 2025-02-24T03:16:45 | I think it is not yet supported for now: <https://github.com/huggingface/trl/issues/2887#issuecomment-2669842873>
| 2,922 | 49 |
qgallouedec | 2025-02-21T08:34:19 | Hi please fill the issue template fully, otherwise it will be very hard for us to know what's going on 🙏 | 2,920 | 50 |
Syazvinski | 2025-02-22T05:25:13 | +1 | 2,917 | 51 |
qgallouedec | 2025-02-20T15:55:18 | Sorry, but your point isn't clear to me. Can you try to clarify?
Here is a simplified version of the preprocessing part. What is your question about this code?
```python
# If the dataset is already preprocessed (tokenized), skip the processing steps.
column_names = list(next(iter(dataset)).keys())
is_processed = "input_ids" in column_names
if formatting_func is not None and not is_processed:
def _func(example):
return {"text": formatting_func(example)}
dataset = dataset.map(_func, **map_kwargs)
# Convert the dataset to ChatML if needed
dataset = dataset.map(
maybe_convert_to_chatml,
remove_columns="conversations" if "conversations" in dataset.column_names else None,
**map_kwargs,
)
# Apply the chat template if needed
dataset = dataset.map(
maybe_apply_chat_template,
fn_kwargs={"tokenizer": processing_class},
remove_columns="messages" if "messages" in dataset.column_names else None, # renamed to "text"
**map_kwargs,
)
# Tokenize the dataset if needed
if not is_processed:
def tokenize(ex):
tokenized = processing_class(ex[args.dataset_text_field])
return {"input_ids": tokenized["input_ids"], "attention_mask": tokenized["attention_mask"]}
dataset = dataset.map(tokenize, **map_kwargs)
``` | 2,915 | 52 |
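For reference, a toy illustration of the chat-template step in the snippet above, assuming `maybe_apply_chat_template` renders a `"messages"` example into a `"text"` field as the comment in the snippet indicates (the checkpoint name is just an example):

```python
from transformers import AutoTokenizer
from trl import maybe_apply_chat_template

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
example = {"messages": [{"role": "user", "content": "What color is the sky?"},
                        {"role": "assistant", "content": "It is blue."}]}

# A conversational example is rendered with the tokenizer's chat template;
# a plain {"text": ...} example would be returned unchanged.
print(maybe_apply_chat_template(example, tokenizer))  # expected: {"text": "<rendered chat string>"}
```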
qgallouedec | 2025-02-20T14:11:30 | Thanks for the question :) The answer is in the paper:
<img width="1114" alt="Image" src="https://github.com/user-attachments/assets/c8278f9c-3d65-4e05-97c8-8a9bcd35073b" />
The ref is http://joschu.net/blog/kl-approx.html | 2,914 | 53 |
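For readers following along, a minimal sketch of that approximation (the k3 estimator from the linked post) computed from per-token log-probs; the tensor values below are illustrative:

```python
import torch

def approx_kl(per_token_logps: torch.Tensor, ref_per_token_logps: torch.Tensor) -> torch.Tensor:
    # k3 estimator: exp(r) - r - 1 with r = log(ref) - log(policy), evaluated per token.
    # Always non-negative, and an unbiased estimate of KL(policy || ref) when the tokens
    # are sampled from the policy.
    log_ratio = ref_per_token_logps - per_token_logps
    return torch.exp(log_ratio) - log_ratio - 1

policy_logps = torch.log(torch.tensor([0.5, 0.3, 0.2]))  # illustrative per-token log-probs
ref_logps = torch.log(torch.tensor([0.4, 0.4, 0.2]))
print(approx_kl(policy_logps, ref_logps))  # small non-negative values
```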
aburkov | 2025-02-20T22:20:25 | In the paper, Algorithm 1 looks like this:

It refers to Equation 21, which they call the gradient coefficient:

Equation 21 (and Equation 20, which it comes from) is already the first derivative of the loss; for example, the KL term is already in the form of its first derivative. In your implementation, by contrast, you use the KL's original definition and let PyTorch compute its derivative at loss.backward().
Is there a reason why it wasn't implemented as Algorithm 1 defines it? | 2,914 | 54 |
qgallouedec | 2025-02-20T22:45:53 | answered here https://github.com/huggingface/trl/issues/2752 | 2,914 | 55 |
aburkov | 2025-02-20T22:51:25 | > answered here [#2752](https://github.com/huggingface/trl/issues/2752)
Are you referring to this answer:
> Equation (21) gives the gradient of the objective, but what we're trying to optimize is the objective, not the gradient. This is why we use equation (3) in practice, and autograd takes care of the derivation and optimization step.
? If yes, I understand that. My question was why you didn't use Algorithm 1 and instead decided to implement a more complex loss that PyTorch must then differentiate with autograd, when you already have a simplified expression for the gradient in Equation 21 and Algorithm 1 shows how to use it? | 2,914 | 56 |
qgallouedec | 2025-02-21T09:00:16 | How would you do this? | 2,914 | 57 |
aburkov | 2025-02-21T10:01:09 | > How would you do this?
That's why I'm asking. I implemented it as Algorithm 1 with Equation 20 as the loss, the way the paper describes it, but it doesn't learn. So, I thought that maybe I misunderstood something in Algorithm 1 and tried to find a different implementation, but all implementations I found are using the same objective you used, which is not what Algorithm 1 says should be used. So I asked you about the reasons for you not to implement it as Algorithm 1. Is it an error in the paper? How did you come to the conclusion that the loss shouldn't be the equations 20/21? | 2,914 | 58 |
qgallouedec | 2025-02-21T10:15:10 | Equation 21 is the derivative of the loss, not the loss, see https://github.com/huggingface/trl/issues/2752#issuecomment-2635204212. So either you use the loss as expressed in equation (3) and let autograd find the gradients, which is what we usually do, or you directly use its derivative by _disabling_ autograd (not sure why or how you would do that). | 2,914 | 59 |
aburkov | 2025-02-21T17:23:33 | Algorithm 1 says "by maximizing the GRPO objective, Equation 21." If you try to maximize Equation 21, the model doesn't learn. I'm not sure I'm asking the question clearly: how is a reader of the paper who wants to implement GRPO supposed to figure out that Equation 21, referred to as the objective in Algorithm 1, is actually Equation 3? | 2,914 | 60 |
qgallouedec | 2025-02-21T17:40:27 | I suppose it could have been expressed more clearly, but we never maximize the “gradient coefficient”. It doesn't really make sense. We maximize the objective using the gradient. That's how I understood it. If you want more information on why it is phrased this way you'd have to contact the author directly. | 2,914 | 61 |
aburkov | 2025-02-21T20:44:23 | No, thanks. I'm sure the way you understood it is the right one. I tested with both equations 20 and 21 as objectives, and it doesn't learn. So, testing equation 3 is the next logical choice :-) They just didn't reference the right equation in Algorithm 1. | 2,914 | 62 |
XZ-X | 2025-02-20T02:56:15 | Correct me if I'm wrong, but I think it is achieved by the repeat sampler. Each prompt is repeated `num_generations` times, so that each generation only needs to produce one sequence.
https://github.com/huggingface/trl/blob/9b3c5bf64fd88526481ec32de85539e2bbdda92b/trl/trainer/grpo_trainer.py#L488 | 2,910 | 63 |
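To illustrate the idea (this is not TRL's actual `RepeatRandomSampler`, just a minimal sketch of the behavior described above):

```python
import random

def repeat_sampler(num_prompts: int, num_generations: int):
    # Shuffle the prompt indices once, then yield each index `num_generations` times
    # so that every batch entry only has to produce a single completion.
    indices = list(range(num_prompts))
    random.shuffle(indices)
    for idx in indices:
        for _ in range(num_generations):
            yield idx

print(list(repeat_sampler(num_prompts=3, num_generations=2)))  # e.g. [2, 2, 0, 0, 1, 1]
```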
ko-redtruck | 2025-02-19T21:09:05 | I guess this is already provided by the option:
```
sync_ref_model (`bool`, *optional*, defaults to `False`):
Whether to synchronize the reference model with the active model every `ref_model_sync_steps` steps, using
the `ref_model_mixup_alpha` parameter. This synchronization originites from the
[TR-DPO](https://huggingface.co./papers/2404.09656) paper.
``` | 2,908 | 64 |
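For illustration, a minimal sketch of enabling it; the three parameter names come from the docstring quoted above, while the config class choice and the numeric values here are assumptions to adjust for your setup:

```python
from trl import GRPOConfig  # assumption: the trainer config you use exposes these fields

training_args = GRPOConfig(
    output_dir="out",
    sync_ref_model=True,        # periodically refresh the reference model (TR-DPO style)
    ref_model_sync_steps=64,    # illustrative: sync every 64 steps
    ref_model_mixup_alpha=0.6,  # illustrative: mixing weight used during the sync
)
```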
Tuziking | 2025-02-19T15:42:11 | I used the parameters from the documentation example and the code magically returned to normal. I don't know why my configuration was wrong. The documentation example is as follows:
`training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10)` | 2,906 | 65 |
qgallouedec | 2025-02-19T09:08:14 | Not yet. See this for ref https://github.com/huggingface/trl/issues/2608#issuecomment-2609844003 and I'm working on it #2899 | 2,903 | 66 |
jackfsuia | 2025-02-19T09:13:47 | > Not yet. See this for ref [#2608 (comment)](https://github.com/huggingface/trl/issues/2608#issuecomment-2609844003) and I'm working on it [#2899](https://github.com/huggingface/trl/pull/2899)
Interesting. I implemented one based on the PPOTrainer of TRL at https://github.com/jackfsuia/nanoRLHF/blob/main/GRPO/grpo_trainer.py. I don't know if that's similar to what you need. | 2,903 | 67 |
qgallouedec | 2025-02-21T11:26:47 | Closed by #2899 | 2,903 | 68 |
qgallouedec | 2025-02-19T09:14:46 | I don't actually understand the profiling. Where do you see that this line is the bottleneck?
Thanks for contributing! | 2,902 | 69 |
cyyever | 2025-02-19T14:44:56 | @qgallouedec It is the sixth line in the first picture. It's not a main bottleneck; however, GPU utilization rose a bit after fixing it. | 2,902 | 70 |
qgallouedec | 2025-02-19T21:57:00 | This?
```
dequantize_4bit (bitsandbytes/functional.py:1380)
``` | 2,902 | 71 |
qgallouedec | 2025-02-19T21:58:18 | The comparison is not very clear to me, TBH. Do you have clearer results, like two training runs (one with main, one with your branch) where we can see the speedup in terms of steps/sec? | 2,902 | 72 |
cyyever | 2025-02-20T06:16:47 | @qgallouedec Of course, I will provide a comparison ASAP. | 2,902 | 73 |
ZYM66 | 2025-02-19T05:06:36 | Additionally, you can set these keywords to use this **PR**

| 2,901 | 74 |
qgallouedec | 2025-02-19T08:21:12 | Cc @edbeeching | 2,901 | 75 |
ZYM66 | 2025-02-20T03:04:38 | > Cc @edbeeching
I would like to clarify this PR, as I am not a native English speaker and there might be some errors in my original description.
The default code loads the vLLM generative model on a single GPU. When training the model, other GPUs must wait for the single GPU to complete its task, causing delays. In this PR, I have added a new optional feature that allows using an external API for completion, instead of relying solely on the local vLLM implementation.
Thanks!
| 2,901 | 76 |
XZ-X | 2025-02-20T03:09:21 | I might not fully understand it, but I don't see how the external OpenAI-compatible model is updated during training.
The original slow implementation loads the latest weights into vLLM at each step before generating responses. | 2,901 | 77 |
ZYM66 | 2025-02-20T03:23:13 | > I might not fully understand it, but I don't see how is the external openai compatible model updated during training?
>
> The original slow implementation loads the most updated weights to vLLM at each step before generating responses.
Hmm, you're right. This code doesn't update the vLLM server model weights in real time. I'm currently looking for ways to address this issue.
I've now changed this PR to a draft.
| 2,901 | 78 |
ji-huazhong | 2025-02-19T11:27:53 | @qgallouedec Made the suggested change. :) | 2,900 | 79 |
HuggingFaceDocBuilderDev | 2025-02-21T19:15:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2900). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,900 | 80 |
HuggingFaceDocBuilderDev | 2025-02-20T12:41:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2899). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,899 | 81 |
kashif | 2025-02-20T13:55:22 | clearer when one realizes that the integers are the indices of the prompts! | 2,899 | 82 |
qgallouedec | 2025-02-20T14:00:57 | > clearer when one realizes that the integers are the indices of the prompts!
I'll add it, thanks for the feedback | 2,899 | 83 |
NJordan72 | 2025-02-19T20:00:57 | I think vLLM skips special tokens by default. `skip_special_tokens=False` in SamplingParams would probably do the trick. | 2,897 | 84 |
0x404 | 2025-02-21T09:28:54 | https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L769
setting skip_special_tokens to False here should give you what you want. | 2,897 | 85 |
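For illustration, the relevant vLLM setting looks like this; values other than `skip_special_tokens` are placeholders:

```python
from vllm import SamplingParams

# Sketch: keep EOS / chat-template special tokens in vLLM's decoded output.
sampling_params = SamplingParams(
    temperature=0.7,
    max_tokens=256,
    skip_special_tokens=False,  # defaults to True, which strips special tokens
)
```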
MohamedAliRashad | 2025-02-18T17:08:21 | Sorry, this was mistake from my side | 2,896 | 86 |
HuggingFaceDocBuilderDev | 2025-02-18T15:13:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,895 | 87 |
HuggingFaceDocBuilderDev | 2025-02-18T15:04:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2894). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,894 | 88 |
qgallouedec | 2025-02-18T15:15:11 | Actually, I think you can access all the steps via a different wandb workspace config | 2,893 | 89 |
qgallouedec | 2025-02-18T15:16:19 | <img width="1326" alt="Screenshot 2025-02-18 at 16 16 12" src="https://github.com/user-attachments/assets/0bf071c0-1fb2-4e5a-af57-59f13f07f7f3" />
| 2,893 | 90 |
lidiya-co | 2025-02-18T16:30:34 | @qgallouedec Oh great, is the wandb workspace config somewhere in the docs already or should I add that instead? | 2,893 | 91 |
qgallouedec | 2025-02-18T16:51:14 | TBH I really don't know how to configure the wandb workspace to get this visualization. My wandb workspace doesn't show this visualization, so I switch to one of my colleagues' workspaces. If you find the solution, please share it, I'm interested. | 2,893 | 92 |
semitable | 2025-02-19T15:54:41 | If you change `runs.summary["completions"]` to `runs.history.concat["completions"]` (that is, directly on the wandb website shown in your screenshot, then click the small "run" button that appears), you can already see the past completions.
Nothing is overwritten in wandb, so there's no need to resend all the information at each timestep. | 2,893 | 93 |
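A small sketch of pulling those past completions programmatically through the public API (the run path is a placeholder):

```python
import wandb

api = wandb.Api()
run = api.run("entity/project/run_id")  # placeholder run path

# The history keeps every logged step, so earlier `completions` tables are still there
# even though the summary only shows the latest one.
for row in run.scan_history(keys=["completions"]):
    print(row["completions"])
```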
HuggingFaceDocBuilderDev | 2025-02-18T11:24:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2890). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,890 | 94 |
coding-famer | 2025-02-21T06:00:41 | I guess the reason is that the sizes of the gathered logits (sequence lengths) do not match, so gathering only the number of correct tokens and the total number of tokens would be a good fix. | 2,890 | 95 |
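A rough sketch of that suggestion; `accelerator`, `num_correct_tokens`, and `num_total_tokens` are assumed stand-ins for what each process already has locally:

```python
import torch

def gathered_token_accuracy(accelerator, num_correct_tokens: int, num_total_tokens: int) -> float:
    # Gather fixed-size scalars instead of variable-length logits, so differing sequence
    # lengths across processes can no longer break the gather.
    correct = torch.tensor([num_correct_tokens], device=accelerator.device)
    total = torch.tensor([num_total_tokens], device=accelerator.device)
    return (accelerator.gather(correct).sum() / accelerator.gather(total).sum()).item()
```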
HuggingFaceDocBuilderDev | 2025-02-18T10:57:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,889 | 96 |
qgallouedec | 2025-02-18T11:18:06 | Cool!! | 2,889 | 97 |
cuiyuhao1996 | 2025-02-18T10:21:40 | same issue! | 2,887 | 98 |
GuodongFan | 2025-02-19T07:53:20 | same! | 2,887 | 99 |
weizhepei | 2025-02-19T21:45:02 | Is it possible to specify the tensor_parallel_size for vllm in GRPO? (as suggested in this [issue](https://github.com/huggingface/open-r1/issues/260#top)) | 2,887 | 100 |
qgallouedec | 2025-02-19T21:51:53 | > The default code only deploys the generation model on a single GPU
> I temporarily solved the problem in https://github.com/huggingface/trl/pull/2901
These are different things right?
So currently it's done this way because vLLM is quite hard to hack to make it work on selected devices. Just to have it on a single device, we need two very hacky patches.
We're very open to contributions on this point. | 2,887 | 101 |
zygi | 2025-02-22T22:25:09 | For a custom training pipeline I implemented a hack that allows spawning vLLM instances through python/torch multiprocessing , have them bound to select devices, and support performant weight copying from the training process to vllm. Just wanted to share it here in case someone wants to adapt it to trl: https://gist.github.com/zygi/2155c6c3fefdea69237baee16efdd7e5 | 2,887 | 102 |
vagitablebirdcode | 2025-02-18T09:42:52 | I have the same trouble. Also, with vllm < 0.6.5, GRPOTrainer raises an error because `vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling` is missing.
With vllm >= 0.6.5 and vllm < 0.7, there is a conflict between vLLM's multi-device usage and other utils. | 2,883 | 103 |
XZ-X | 2025-02-19T01:10:33 | I encountered the same issue. Using vllm >= 0.7 solves the problem for me. | 2,883 | 104 |
daniele-sartiano | 2025-02-18T13:56:16 | I'm experiencing the same issue using the official training script example https://github.com/huggingface/trl/blob/main/examples/scripts/orpo.py with four A100 80GB GPUs.
I've tried multiple models, but the issue persists.
Here is the log:
```
accelerate launch --debug orpo.py \
--dataset_name chatpgt_dataset_dev/dataset_dev_train_gpt4o_truths_postproc.json \
--model_name_or_path=deepseek-ai/deepseek-coder-6.7b-instruct \
--per_device_train_batch_size 2 \
--max_steps 10 \
--learning_rate 8e-5 \
--gradient_accumulation_steps 8 \
--logging_steps 1 \
--eval_steps 500 \
--output_dir="deepseek-ai/deepseek-coder-6.7b-instruct-max_steps10-dataset_dataset_dev_train_gpt4o_truths_postproc-16-lora-aligned-orpo" \
--optim rmsprop \
--warmup_steps 150 \
--bf16 \
--logging_first_step \
--no_remove_unused_columns \
--use_peft \
--lora_r=16 \
--lora_alpha=16 \
--max_prompt_length=320 \
--log_level detail
[2025-02-18 14:42:09,851] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
W0218 14:42:10.776000 142432 torch/distributed/run.py:792]
W0218 14:42:10.776000 142432 torch/distributed/run.py:792] *****************************************
W0218 14:42:10.776000 142432 torch/distributed/run.py:792] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0218 14:42:10.776000 142432 torch/distributed/run.py:792] *****************************************
[2025-02-18 14:42:14,384] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 14:42:14,446] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 14:42:14,463] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 14:42:14,511] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 14:42:15,277] [INFO] [comm.py:652:init_distributed] cdb=None
[2025-02-18 14:42:15,338] [INFO] [comm.py:652:init_distributed] cdb=None
[2025-02-18 14:42:15,357] [INFO] [comm.py:652:init_distributed] cdb=None
[2025-02-18 14:42:15,357] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-02-18 14:42:15,491] [INFO] [comm.py:652:init_distributed] cdb=None
[2025-02-18 14:42:26,046] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 4
[2025-02-18 14:42:26,077] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 4
[2025-02-18 14:42:26,169] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 4
[2025-02-18 14:42:26,391] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 4
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
[2025-02-18 14:42:32,911] [INFO] [utils.py:30:print_object] AsyncPartitionedParameterSwapper:
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] aio_handle ................... <class 'async_io.aio_handle'>
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] aligned_bytes ................ 1024
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] aligned_elements_per_buffer .. 600000000
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] available_buffer_ids ......... [0, 1, 2, 3, 4]
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] available_numel .............. 0
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] available_params ............. set()
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] dtype ........................ torch.bfloat16
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] elements_per_buffer .......... 600000000
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] id_to_path ................... {}
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] inflight_numel ............... 0
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] inflight_params .............. []
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] inflight_swap_in_buffers ..... []
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] invalid_buffer ............... 1.0
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] min_aio_bytes ................ 1048576
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] numel_alignment .............. 512
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] param_buffer_count ........... 5
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] param_id_to_buffer_id ........ {}
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] param_id_to_numel ............ {}
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] param_id_to_swap_buffer ...... {}
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] partitioned_swap_buffer ...... None
[2025-02-18 14:42:32,912] [INFO] [utils.py:34:print_object] partitioned_swap_pool ........ None
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] pending_reads ................ 0
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] pending_writes ............... 0
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] reserved_buffer_ids .......... []
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] swap_config .................. device='nvme' nvme_path=PosixPath('/root/semeval_2025_task_8/fine-tuning/nvme') buffer_count=5 buffer_size=600000000 max_in_cpu=1000000000 pin_memory=False
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] swap_element_size ............ 2
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] swap_folder .................. /root/semeval_2025_task_8/fine-tuning/nvme/zero_stage_3/bfloat16params/rank0
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] swap_out_params .............. []
[2025-02-18 14:42:32,913] [INFO] [utils.py:34:print_object] use_gds ...................... False
ec206899:142606:142606 [0] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:142606:142606 [0] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:142606:142606 [0] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:142606:142606 [0] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:142606:142606 [0] NCCL INFO cudaDriverVersion 12080
NCCL version 2.21.5+cuda12.4
ec206899:142606:142606 [0] NCCL INFO Comm config Blocking set to 1
ec206899:142609:142609 [3] NCCL INFO cudaDriverVersion 12080
ec206899:142609:142609 [3] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:142609:142609 [3] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:142609:142609 [3] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:142609:142609 [3] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:142609:142609 [3] NCCL INFO Comm config Blocking set to 1
ec206899:142607:142607 [1] NCCL INFO cudaDriverVersion 12080
ec206899:142607:142607 [1] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:142607:142607 [1] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:142607:142607 [1] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:142607:142607 [1] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:142607:142607 [1] NCCL INFO Comm config Blocking set to 1
ec206899:142606:142883 [0] NCCL INFO NET/IB : No device found.
ec206899:142606:142883 [0] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:142606:142883 [0] NCCL INFO Using non-device net plugin version 0
ec206899:142606:142883 [0] NCCL INFO Using network Socket
ec206899:142609:142884 [3] NCCL INFO NET/IB : No device found.
ec206899:142609:142884 [3] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:142609:142884 [3] NCCL INFO Using non-device net plugin version 0
ec206899:142609:142884 [3] NCCL INFO Using network Socket
ec206899:142608:142608 [2] NCCL INFO cudaDriverVersion 12080
ec206899:142608:142608 [2] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:142608:142608 [2] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:142608:142608 [2] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:142608:142608 [2] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:142608:142608 [2] NCCL INFO Comm config Blocking set to 1
ec206899:142607:142885 [1] NCCL INFO NET/IB : No device found.
ec206899:142607:142885 [1] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:142607:142885 [1] NCCL INFO Using non-device net plugin version 0
ec206899:142607:142885 [1] NCCL INFO Using network Socket
ec206899:142608:142886 [2] NCCL INFO NET/IB : No device found.
ec206899:142608:142886 [2] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:142608:142886 [2] NCCL INFO Using non-device net plugin version 0
ec206899:142608:142886 [2] NCCL INFO Using network Socket
ec206899:142608:142886 [2] NCCL INFO ncclCommInitRank comm 0x5633813a74a0 rank 2 nranks 4 cudaDev 2 nvmlDev 2 busId 7000 commId 0x2765b45a4d929e5c - Init START
ec206899:142609:142884 [3] NCCL INFO ncclCommInitRank comm 0x563f3ebda820 rank 3 nranks 4 cudaDev 3 nvmlDev 3 busId 8000 commId 0x2765b45a4d929e5c - Init START
ec206899:142607:142885 [1] NCCL INFO ncclCommInitRank comm 0x562234686450 rank 1 nranks 4 cudaDev 1 nvmlDev 1 busId 6000 commId 0x2765b45a4d929e5c - Init START
ec206899:142606:142883 [0] NCCL INFO ncclCommInitRank comm 0x56089fe0c520 rank 0 nranks 4 cudaDev 0 nvmlDev 0 busId 5000 commId 0x2765b45a4d929e5c - Init START
ec206899:142608:142886 [2] NCCL INFO NVLS multicast support is not available on dev 2
ec206899:142609:142884 [3] NCCL INFO NVLS multicast support is not available on dev 3
ec206899:142607:142885 [1] NCCL INFO NVLS multicast support is not available on dev 1
ec206899:142606:142883 [0] NCCL INFO NVLS multicast support is not available on dev 0
ec206899:142609:142884 [3] NCCL INFO comm 0x563f3ebda820 rank 3 nRanks 4 nNodes 1 localRanks 4 localRank 3 MNNVL 0
ec206899:142606:142883 [0] NCCL INFO comm 0x56089fe0c520 rank 0 nRanks 4 nNodes 1 localRanks 4 localRank 0 MNNVL 0
ec206899:142607:142885 [1] NCCL INFO comm 0x562234686450 rank 1 nRanks 4 nNodes 1 localRanks 4 localRank 1 MNNVL 0
ec206899:142608:142886 [2] NCCL INFO comm 0x5633813a74a0 rank 2 nRanks 4 nNodes 1 localRanks 4 localRank 2 MNNVL 0
ec206899:142609:142884 [3] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2
ec206899:142606:142883 [0] NCCL INFO Channel 00/02 : 0 1 2 3
ec206899:142608:142886 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1
ec206899:142607:142885 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0
ec206899:142609:142884 [3] NCCL INFO P2P Chunksize set to 131072
ec206899:142606:142883 [0] NCCL INFO Channel 01/02 : 0 1 2 3
ec206899:142608:142886 [2] NCCL INFO P2P Chunksize set to 131072
ec206899:142607:142885 [1] NCCL INFO P2P Chunksize set to 131072
ec206899:142606:142883 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
ec206899:142606:142883 [0] NCCL INFO P2P Chunksize set to 131072
ec206899:142608:142886 [2] NCCL INFO Channel 00 : 2[2] -> 3[3] via SHM/direct/direct
ec206899:142608:142886 [2] NCCL INFO Channel 01 : 2[2] -> 3[3] via SHM/direct/direct
ec206899:142609:142884 [3] NCCL INFO Channel 00 : 3[3] -> 0[0] via SHM/direct/direct
ec206899:142609:142884 [3] NCCL INFO Channel 01 : 3[3] -> 0[0] via SHM/direct/direct
ec206899:142606:142883 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
ec206899:142607:142885 [1] NCCL INFO Channel 00 : 1[1] -> 2[2] via SHM/direct/direct
ec206899:142606:142883 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
ec206899:142607:142885 [1] NCCL INFO Channel 01 : 1[1] -> 2[2] via SHM/direct/direct
ec206899:142608:142886 [2] NCCL INFO Connected all rings
ec206899:142607:142885 [1] NCCL INFO Connected all rings
ec206899:142609:142884 [3] NCCL INFO Connected all rings
ec206899:142606:142883 [0] NCCL INFO Connected all rings
ec206899:142609:142884 [3] NCCL INFO Channel 00 : 3[3] -> 2[2] via SHM/direct/direct
ec206899:142609:142884 [3] NCCL INFO Channel 01 : 3[3] -> 2[2] via SHM/direct/direct
ec206899:142608:142886 [2] NCCL INFO Channel 00 : 2[2] -> 1[1] via SHM/direct/direct
ec206899:142608:142886 [2] NCCL INFO Channel 01 : 2[2] -> 1[1] via SHM/direct/direct
ec206899:142607:142885 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
ec206899:142607:142885 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
ec206899:142606:142883 [0] NCCL INFO Connected all trees
ec206899:142609:142884 [3] NCCL INFO Connected all trees
ec206899:142609:142884 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:142609:142884 [3] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:142608:142886 [2] NCCL INFO Connected all trees
ec206899:142606:142883 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:142607:142885 [1] NCCL INFO Connected all trees
ec206899:142606:142883 [0] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:142608:142886 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:142608:142886 [2] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:142607:142885 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:142607:142885 [1] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:142608:142886 [2] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:142609:142884 [3] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:142607:142885 [1] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:142608:142886 [2] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:142609:142884 [3] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:142608:142886 [2] NCCL INFO ncclCommInitRank comm 0x5633813a74a0 rank 2 nranks 4 cudaDev 2 nvmlDev 2 busId 7000 commId 0x2765b45a4d929e5c - Init COMPLETE
ec206899:142606:142883 [0] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:142607:142885 [1] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:142609:142884 [3] NCCL INFO ncclCommInitRank comm 0x563f3ebda820 rank 3 nranks 4 cudaDev 3 nvmlDev 3 busId 8000 commId 0x2765b45a4d929e5c - Init COMPLETE
ec206899:142606:142883 [0] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:142607:142885 [1] NCCL INFO ncclCommInitRank comm 0x562234686450 rank 1 nranks 4 cudaDev 1 nvmlDev 1 busId 6000 commId 0x2765b45a4d929e5c - Init COMPLETE
ec206899:142606:142883 [0] NCCL INFO ncclCommInitRank comm 0x56089fe0c520 rank 0 nranks 4 cudaDev 0 nvmlDev 0 busId 5000 commId 0x2765b45a4d929e5c - Init COMPLETE
[2025-02-18 14:42:45,044] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 291, num_elems = 6.74B
Loading checkpoint shards: 100%|██████████| 2/2 [00:19<00:00, 9.94s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:19<00:00, 9.95s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:19<00:00, 9.95s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:19<00:00, 9.95s/it]
Map: 100%|██████████| 1079/1079 [00:00<00:00, 21769.15 examples/s]
Map: 100%|██████████| 1079/1079 [00:28<00:00, 37.48 examples/s]
max_steps is given, it will override any value given in num_train_epochs
Using auto half precision backend
Currently training with a batch size of: 2
Detected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Using /root/.cache/torch_extensions/py310_cu124 as PyTorch extensions root...
Emitting ninja build file /root/.cache/torch_extensions/py310_cu124/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.2107419967651367 seconds
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000080, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=0
[2025-02-18 14:43:40,048] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed info: version=0.16.3, git-hash=unknown, git-branch=unknown
[2025-02-18 14:43:40,048] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 4
[2025-02-18 14:43:40,063] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2025-02-18 14:43:40,065] [INFO] [logging.py:128:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2025-02-18 14:43:40,065] [INFO] [logging.py:128:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2025-02-18 14:43:40,071] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2025-02-18 14:43:40,071] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2025-02-18 14:43:40,071] [INFO] [logging.py:128:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
[2025-02-18 14:43:40,071] [INFO] [logging.py:128:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Using /root/.cache/torch_extensions/py310_cu124 as PyTorch extensions root...
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Using /root/.cache/torch_extensions/py310_cu124 as PyTorch extensions root...
Emitting ninja build file /root/.cache/torch_extensions/py310_cu124/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.213812828063965 seconds
Installed CUDA version 12.3 does not match the version torch was compiled with 12.4 but since the APIs are compatible, accepting this combination
Using /root/.cache/torch_extensions/py310_cu124 as PyTorch extensions root...
Emitting ninja build file /root/.cache/torch_extensions/py310_cu124/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.2096915245056152 seconds
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.309788465499878 seconds
[2025-02-18 14:43:40,229] [INFO] [utils.py:781:see_memory_usage] Stage 3 initialize beginning
[2025-02-18 14:43:40,229] [INFO] [utils.py:782:see_memory_usage] MA 0.02 GB Max_MA 0.74 GB CA 0.03 GB Max_CA 1 GB
[2025-02-18 14:43:40,230] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.7 GB, percent = 9.3%
[2025-02-18 14:43:40,234] [INFO] [stage3.py:169:__init__] Reduce bucket size 500000000
[2025-02-18 14:43:40,234] [INFO] [stage3.py:170:__init__] Prefetch bucket size 50000000
[2025-02-18 14:43:40,381] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2025-02-18 14:43:40,382] [INFO] [utils.py:782:see_memory_usage] MA 0.02 GB Max_MA 0.02 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:40,382] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.7 GB, percent = 9.3%
Parameter Offload: Total persistent parameters: 8654848 in 193 params
[2025-02-18 14:43:40,606] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2025-02-18 14:43:40,607] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.02 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:40,607] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.7 GB, percent = 9.3%
[2025-02-18 14:43:40,760] [INFO] [utils.py:781:see_memory_usage] Before creating fp16 partitions
[2025-02-18 14:43:40,760] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:40,761] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.7 GB, percent = 9.3%
[2025-02-18 14:43:40,929] [INFO] [utils.py:781:see_memory_usage] After creating fp16 partitions: 1
[2025-02-18 14:43:40,930] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:40,930] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.92 GB, percent = 9.3%
[2025-02-18 14:43:40,931] [INFO] [stage3.py:624:_configure_tensor_swapping] Tensor Swapping: Adding optimizer tensors
[2025-02-18 14:43:40,962] [INFO] [utils.py:30:print_object] SwapBufferManager:
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] count ........................ 5
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] free_buffer_index ............ [0, 1, 2, 3, 4]
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] gigabytes .................... 0.0390625
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] num_elems .................... 2097152
[2025-02-18 14:43:40,962] [INFO] [utils.py:34:print_object] used_buffer_index ............ {}
[2025-02-18 14:43:40,963] [INFO] [utils.py:30:print_object] PartitionedOptimizerSwapper:
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] aligned_bytes ................ 1024
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] largest_numel ................ 2097152
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] min_aio_bytes ................ 1048576
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] numel_alignment .............. 256
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] swap_config .................. device='nvme' nvme_path=PosixPath('/root/semeval_2025_task_8/fine-tuning/nvme') buffer_count=5 pin_memory=False pipeline_read=False pipeline_write=False fast_init=False ratio=1.0
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] swap_element_size ............ 4
[2025-02-18 14:43:40,963] [INFO] [utils.py:34:print_object] swap_folder .................. /root/semeval_2025_task_8/fine-tuning/nvme/zero_stage_3/optimizer/rank0
[2025-02-18 14:43:41,123] [INFO] [utils.py:781:see_memory_usage] Before creating fp32 partitions
[2025-02-18 14:43:41,124] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:41,124] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.98 GB, percent = 9.3%
[2025-02-18 14:43:41,282] [INFO] [utils.py:781:see_memory_usage] After creating fp32 partitions
[2025-02-18 14:43:41,283] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:41,283] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 41.98 GB, percent = 9.3%
[2025-02-18 14:43:41,440] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-02-18 14:43:41,440] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:41,441] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 42.01 GB, percent = 9.3%
[2025-02-18 14:43:41,601] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-02-18 14:43:41,602] [INFO] [utils.py:782:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.03 GB Max_CA 0 GB
[2025-02-18 14:43:41,602] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 42.02 GB, percent = 9.3%
[2025-02-18 14:43:41,602] [INFO] [stage3.py:529:_setup_for_real_optimizer] optimizer state initialized
[2025-02-18 14:43:41,821] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
[2025-02-18 14:43:41,821] [INFO] [utils.py:782:see_memory_usage] MA 0.93 GB Max_MA 0.93 GB CA 0.96 GB Max_CA 1 GB
[2025-02-18 14:43:41,822] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 42.16 GB, percent = 9.4%
[2025-02-18 14:43:41,822] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer_Stage3
[2025-02-18 14:43:41,822] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
[2025-02-18 14:43:41,822] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2025-02-18 14:43:41,822] [INFO] [logging.py:128:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0], mom=[(0.9, 0.999)]
[2025-02-18 14:43:41,825] [INFO] [config.py:999:print] DeepSpeedEngine configuration:
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] amp_enabled .................. False
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] amp_params ................... False
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2025-02-18 14:43:41,825] [INFO] [config.py:1003:print] bfloat16_enabled ............. True
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] bfloat16_immediate_grad_update False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] checkpoint_parallel_write_pipeline False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] checkpoint_tag_validation_enabled True
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] checkpoint_tag_validation_fail False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f2a4f827100>
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] communication_data_type ...... None
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] curriculum_enabled_legacy .... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] curriculum_params_legacy ..... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] data_efficiency_enabled ...... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] dataloader_drop_last ......... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] disable_allgather ............ False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] dump_state ................... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] dynamic_loss_scale_args ...... None
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] eigenvalue_enabled ........... False
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] eigenvalue_gas_boundary_resolution 1
[2025-02-18 14:43:41,826] [INFO] [config.py:1003:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] eigenvalue_layer_num ......... 0
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] eigenvalue_max_iter .......... 100
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] eigenvalue_stability ......... 1e-06
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] eigenvalue_tol ............... 0.01
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] eigenvalue_verbose ........... False
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] elasticity_enabled ........... False
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] fp16_auto_cast ............... None
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] fp16_enabled ................. False
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] fp16_master_weights_and_gradients False
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] global_rank .................. 0
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] grad_accum_dtype ............. None
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] gradient_accumulation_steps .. 8
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] gradient_clipping ............ 0.0
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] gradient_predivide_factor .... 1.0
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] graph_harvesting ............. False
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-02-18 14:43:41,827] [INFO] [config.py:1003:print] initial_dynamic_scale ........ 1
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] load_universal_checkpoint .... False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] loss_scale ................... 1.0
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] memory_breakdown ............. False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] mics_hierarchial_params_gather False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] mics_shard_size .............. -1
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] optimizer_legacy_fusion ...... False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] optimizer_name ............... None
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] optimizer_params ............. None
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] pld_enabled .................. False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] pld_params ................... False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] prescale_gradients ........... False
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] scheduler_name ............... None
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] scheduler_params ............. None
[2025-02-18 14:43:41,828] [INFO] [config.py:1003:print] seq_parallel_communication_data_type torch.float32
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] sparse_attention ............. None
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] sparse_gradients_enabled ..... False
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] steps_per_print .............. inf
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] timers_config ................ enabled=True synchronized=True
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] train_batch_size ............. 64
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] train_micro_batch_size_per_gpu 2
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] use_data_before_expert_parallel_ False
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] use_node_local_storage ....... False
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] wall_clock_breakdown ......... False
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] weight_quantization_config ... None
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] world_size ................... 4
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] zero_allow_untested_optimizer True
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='nvme', nvme_path=PosixPath('/root/semeval_2025_task_8/fine-tuning/nvme'), buffer_count=5, buffer_size=600000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='nvme', nvme_path=PosixPath('/root/semeval_2025_task_8/fine-tuning/nvme'), buffer_count=5, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] zero_enabled ................. True
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] zero_force_ds_cpu_optimizer .. True
[2025-02-18 14:43:41,829] [INFO] [config.py:1003:print] zero_optimization_stage ...... 3
[2025-02-18 14:43:41,829] [INFO] [config.py:989:print_user_config] json = {
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/root/semeval_2025_task_8/fine-tuning/nvme",
"pin_memory": false,
"buffer_count": 5
},
"offload_param": {
"device": "nvme",
"nvme_path": "/root/semeval_2025_task_8/fine-tuning/nvme",
"pin_memory": false,
"buffer_count": 5,
"buffer_size": 6.000000e+08,
"max_in_cpu": 1.000000e+09
}
},
"bf16": {
"enabled": true
},
"aio": {
"enabled": true
},
"train_micro_batch_size_per_gpu": 2,
"gradient_accumulation_steps": 8,
"steps_per_print": inf,
"fp16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
}
***** Running training *****
Num examples = 1,079
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 64
Gradient Accumulation steps = 8
Total optimization steps = 10
Number of trainable parameters = 8,388,608
0%| | 0/10 [00:00<?, ?it/s][rank1]: Traceback (most recent call last):
[rank1]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank1]: trainer.train()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank1]: return inner_training_loop(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank1]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank1]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank1]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank1]: data = self.gather(input_data)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank1]: return gather(tensor)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank1]: raise DistributedOperationException(
[rank1]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank1]: Operation: `accelerate.utils.operations.gather`
[rank1]: Input shapes:
[rank1]: - Process 0: [2, 350, 32256]
[rank1]: - Process 1: [2, 658, 32256]
[rank1]: - Process 2: [2, 617, 32256]
[rank1]: - Process 3: [2, 594, 32256]
[rank3]: Traceback (most recent call last):
[rank3]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank3]: trainer.train()
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank3]: return inner_training_loop(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank3]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank3]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank3]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank3]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank3]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank3]: data = self.gather(input_data)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank3]: return gather(tensor)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank3]: raise DistributedOperationException(
[rank3]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank3]: Operation: `accelerate.utils.operations.gather`
[rank3]: Input shapes:
[rank3]: - Process 0: [2, 350, 32256]
[rank3]: - Process 1: [2, 658, 32256]
[rank3]: - Process 2: [2, 617, 32256]
[rank3]: - Process 3: [2, 594, 32256]
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank0]: trainer.train()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank0]: return inner_training_loop(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank0]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank0]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank0]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank0]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank0]: data = self.gather(input_data)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank0]: return gather(tensor)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank0]: raise DistributedOperationException(
[rank0]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank0]: Operation: `accelerate.utils.operations.gather`
[rank0]: Input shapes:
[rank0]: - Process 0: [2, 350, 32256]
[rank0]: - Process 1: [2, 658, 32256]
[rank0]: - Process 2: [2, 617, 32256]
[rank0]: - Process 3: [2, 594, 32256]
[rank2]: Traceback (most recent call last):
[rank2]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank2]: trainer.train()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank2]: return inner_training_loop(
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank2]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank2]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank2]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank2]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank2]: data = self.gather(input_data)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank2]: return gather(tensor)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank2]: raise DistributedOperationException(
[rank2]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank2]: Operation: `accelerate.utils.operations.gather`
[rank2]: Input shapes:
[rank2]: - Process 0: [2, 350, 32256]
[rank2]: - Process 1: [2, 658, 32256]
[rank2]: - Process 2: [2, 617, 32256]
[rank2]: - Process 3: [2, 594, 32256]
0%| | 0/10 [00:02<?, ?it/s]
W0218 14:43:46.697000 142432 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 142606 closing signal SIGTERM
W0218 14:43:46.697000 142432 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 142607 closing signal SIGTERM
W0218 14:43:46.697000 142432 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 142608 closing signal SIGTERM
E0218 14:43:48.541000 142432 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 3 (pid: 142609) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1182, in launch_command
deepspeed_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 861, in deepspeed_launcher
distrib_run.run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
orpo.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-18_14:43:46
host : ec206899.seewebcloud.it
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 142609)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
``` | 2,882 | 105 |
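Looking at the traceback, the crash comes from `get_batch_loss_metrics` calling `gather_for_metrics(policy_rejected_logits)`: each rank's logits keep their own padded sequence length (350 / 658 / 617 / 594 above), so the shapes differ across processes and accelerate's shape check raises. Below is a minimal sketch of the same pattern outside TRL, plus two ways the gather can be made shape-consistent. The script name, tensor sizes and variable names are made up for illustration, and this is not necessarily how TRL itself should fix it:

```
# Minimal sketch (not the actual TRL fix): reproduce the mismatched gather and
# two ways to make it shape-consistent. Run with e.g.
# `accelerate launch --num_processes 4 repro_gather.py` (the script name is mine).
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each rank ends up with a different padded sequence length, like the
# [2, 350, 32256] vs [2, 658, 32256] shapes in the error above
# (the vocab dimension is shrunk here to keep the example small).
seq_len = 350 + 100 * accelerator.process_index
logits = torch.randn(2, seq_len, 8, device=accelerator.device)

# This mirrors the failing call: with shapes differing along dim=1,
# accelerate's debug mode raises DistributedOperationException.
# accelerator.gather_for_metrics(logits)

# Option A: pad every rank to a common sequence length before gathering.
padded = accelerator.pad_across_processes(logits, dim=1, pad_index=0.0)
gathered = accelerator.gather_for_metrics(padded)

# Option B: reduce locally and gather a scalar, which always has the same
# shape on every rank (the logged metric is a mean anyway).
mean_logits = accelerator.gather_for_metrics(logits.mean()).mean()

accelerator.print("gathered:", tuple(gathered.shape), "mean:", mean_logits.item())
```

With padding (option A) the gather succeeds but costs memory proportional to the longest sequence on any rank; gathering the scalar (option B) is what I would try first, since only the mean ends up in the logged metrics.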
daniele-sartiano | 2025-02-18T14:14:00 | I would like to add that I am experiencing the same issue using FSDP.
Here is the configuration:
```
compute_environment: LOCAL_MACHINE
debug: true
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: true
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
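One note before the log (just my reading, I may be off): the per-process shape dump in both reports seems to come from accelerate's operational debug mode (`debug: true` above, or `ACCELERATE_DEBUG_MODE=1`), so the distributed backend does not look like the variable here; FSDP and DeepSpeed both fail on the same `gather_for_metrics(policy_rejected_logits)` call. A pre-flight check one could drop into `orpo.py` before `trainer.train()` is sketched below; `trainer` is the `ORPOTrainer` from my script and the batch key names are assumptions from my TRL version, so adjust them if they differ:

```
# Sketch of a per-rank sanity check before trainer.train(): print the padded
# shapes each rank receives. If the sequence lengths differ across ranks, any
# gather of full logits will trip accelerate's shape verification, regardless
# of FSDP vs DeepSpeed. The keys "chosen_input_ids"/"rejected_input_ids" are
# assumptions about the ORPO collator output.
batch = next(iter(trainer.get_train_dataloader()))
rank = trainer.accelerator.process_index
print(
    f"rank {rank}: chosen {tuple(batch['chosen_input_ids'].shape)}, "
    f"rejected {tuple(batch['rejected_input_ids'].shape)}"
)
```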
And here is the log:
```
accelerate launch --config_file fsdp_config.yaml orpo.py \
--dataset_name chatpgt_dataset_dev/dataset_dev_train_gpt4o_truths_postproc.json \
--model_name_or_path=deepseek-ai/deepseek-coder-6.7b-instruct \
--per_device_train_batch_size 2 \
--max_steps 10 \
--learning_rate 8e-5 \
--gradient_accumulation_steps 8 \
--logging_steps 1 \
--eval_steps 500 \
--output_dir="deepseek-ai/deepseek-coder-6.7b-instruct-max_steps10-dataset_dataset_dev_train_gpt4o_truths_postproc-16-lora-aligned-orpo" \
--optim rmsprop \
--warmup_steps 150 \
--bf16 \
--logging_first_step \
--no_remove_unused_columns \
--use_peft \
--lora_r=16 \
--lora_alpha=16 \
--max_prompt_length=320 \
--log_level detail
[2025-02-18 15:09:56,504] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 15:09:56,542] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 15:09:56,556] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-02-18 15:09:56,597] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 6.03it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 6.00it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 4.14it/s]
[rank1]:[W218 15:09:59.243816376 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank2]:[W218 15:09:59.430070293 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank3]:[W218 15:09:59.621769461 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
Loading checkpoint shards: 100%|██████████| 2/2 [00:11<00:00, 5.64s/it]
[rank0]:[W218 15:10:10.723505783 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
ec206899:144470:144470 [0] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:144470:144470 [0] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:144470:144470 [0] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:144470:144470 [0] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:144470:144470 [0] NCCL INFO cudaDriverVersion 12080
NCCL version 2.21.5+cuda12.4
ec206899:144470:144470 [0] NCCL INFO Comm config Blocking set to 1
ec206899:144471:144471 [1] NCCL INFO cudaDriverVersion 12080
ec206899:144471:144471 [1] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:144471:144471 [1] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:144471:144471 [1] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:144471:144471 [1] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:144471:144471 [1] NCCL INFO Comm config Blocking set to 1
ec206899:144472:144472 [2] NCCL INFO cudaDriverVersion 12080
ec206899:144472:144472 [2] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:144473:144473 [3] NCCL INFO cudaDriverVersion 12080
ec206899:144472:144472 [2] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:144472:144472 [2] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:144472:144472 [2] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:144473:144473 [3] NCCL INFO Bootstrap : Using eth0:213.171.186.165<0>
ec206899:144473:144473 [3] NCCL INFO NET/Plugin: No plugin found (libnccl-net.so)
ec206899:144473:144473 [3] NCCL INFO NET/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-net.so
ec206899:144473:144473 [3] NCCL INFO NET/Plugin: Using internal network plugin.
ec206899:144472:144472 [2] NCCL INFO Comm config Blocking set to 1
ec206899:144473:144473 [3] NCCL INFO Comm config Blocking set to 1
ec206899:144470:144646 [0] NCCL INFO NET/IB : No device found.
ec206899:144470:144646 [0] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:144470:144646 [0] NCCL INFO Using non-device net plugin version 0
ec206899:144470:144646 [0] NCCL INFO Using network Socket
ec206899:144471:144647 [1] NCCL INFO NET/IB : No device found.
ec206899:144471:144647 [1] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:144471:144647 [1] NCCL INFO Using non-device net plugin version 0
ec206899:144471:144647 [1] NCCL INFO Using network Socket
ec206899:144473:144649 [3] NCCL INFO NET/IB : No device found.
ec206899:144473:144649 [3] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:144472:144648 [2] NCCL INFO NET/IB : No device found.
ec206899:144473:144649 [3] NCCL INFO Using non-device net plugin version 0
ec206899:144473:144649 [3] NCCL INFO Using network Socket
ec206899:144472:144648 [2] NCCL INFO NET/Socket : Using [0]eth0:213.171.186.165<0>
ec206899:144472:144648 [2] NCCL INFO Using non-device net plugin version 0
ec206899:144472:144648 [2] NCCL INFO Using network Socket
ec206899:144472:144648 [2] NCCL INFO ncclCommInitRank comm 0x55b80961e620 rank 2 nranks 4 cudaDev 2 nvmlDev 2 busId 7000 commId 0x1cefe4816d6d2b1f - Init START
ec206899:144473:144649 [3] NCCL INFO ncclCommInitRank comm 0x5627a3f8a1c0 rank 3 nranks 4 cudaDev 3 nvmlDev 3 busId 8000 commId 0x1cefe4816d6d2b1f - Init START
ec206899:144470:144646 [0] NCCL INFO ncclCommInitRank comm 0x557323c35d60 rank 0 nranks 4 cudaDev 0 nvmlDev 0 busId 5000 commId 0x1cefe4816d6d2b1f - Init START
ec206899:144471:144647 [1] NCCL INFO ncclCommInitRank comm 0x55f486708000 rank 1 nranks 4 cudaDev 1 nvmlDev 1 busId 6000 commId 0x1cefe4816d6d2b1f - Init START
ec206899:144472:144648 [2] NCCL INFO NVLS multicast support is not available on dev 2
ec206899:144471:144647 [1] NCCL INFO NVLS multicast support is not available on dev 1
ec206899:144470:144646 [0] NCCL INFO NVLS multicast support is not available on dev 0
ec206899:144473:144649 [3] NCCL INFO NVLS multicast support is not available on dev 3
ec206899:144472:144648 [2] NCCL INFO comm 0x55b80961e620 rank 2 nRanks 4 nNodes 1 localRanks 4 localRank 2 MNNVL 0
ec206899:144473:144649 [3] NCCL INFO comm 0x5627a3f8a1c0 rank 3 nRanks 4 nNodes 1 localRanks 4 localRank 3 MNNVL 0
ec206899:144470:144646 [0] NCCL INFO comm 0x557323c35d60 rank 0 nRanks 4 nNodes 1 localRanks 4 localRank 0 MNNVL 0
ec206899:144471:144647 [1] NCCL INFO comm 0x55f486708000 rank 1 nRanks 4 nNodes 1 localRanks 4 localRank 1 MNNVL 0
ec206899:144472:144648 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1
ec206899:144473:144649 [3] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2
ec206899:144470:144646 [0] NCCL INFO Channel 00/02 : 0 1 2 3
ec206899:144471:144647 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0
ec206899:144472:144648 [2] NCCL INFO P2P Chunksize set to 131072
ec206899:144473:144649 [3] NCCL INFO P2P Chunksize set to 131072
ec206899:144470:144646 [0] NCCL INFO Channel 01/02 : 0 1 2 3
ec206899:144471:144647 [1] NCCL INFO P2P Chunksize set to 131072
ec206899:144470:144646 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
ec206899:144470:144646 [0] NCCL INFO P2P Chunksize set to 131072
ec206899:144472:144648 [2] NCCL INFO Channel 00 : 2[2] -> 3[3] via SHM/direct/direct
ec206899:144473:144649 [3] NCCL INFO Channel 00 : 3[3] -> 0[0] via SHM/direct/direct
ec206899:144472:144648 [2] NCCL INFO Channel 01 : 2[2] -> 3[3] via SHM/direct/direct
ec206899:144471:144647 [1] NCCL INFO Channel 00 : 1[1] -> 2[2] via SHM/direct/direct
ec206899:144473:144649 [3] NCCL INFO Channel 01 : 3[3] -> 0[0] via SHM/direct/direct
ec206899:144470:144646 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
ec206899:144471:144647 [1] NCCL INFO Channel 01 : 1[1] -> 2[2] via SHM/direct/direct
ec206899:144470:144646 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
ec206899:144470:144646 [0] NCCL INFO Connected all rings
ec206899:144473:144649 [3] NCCL INFO Connected all rings
ec206899:144471:144647 [1] NCCL INFO Connected all rings
ec206899:144472:144648 [2] NCCL INFO Connected all rings
ec206899:144473:144649 [3] NCCL INFO Channel 00 : 3[3] -> 2[2] via SHM/direct/direct
ec206899:144473:144649 [3] NCCL INFO Channel 01 : 3[3] -> 2[2] via SHM/direct/direct
ec206899:144471:144647 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
ec206899:144472:144648 [2] NCCL INFO Channel 00 : 2[2] -> 1[1] via SHM/direct/direct
ec206899:144471:144647 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
ec206899:144472:144648 [2] NCCL INFO Channel 01 : 2[2] -> 1[1] via SHM/direct/direct
ec206899:144470:144646 [0] NCCL INFO Connected all trees
ec206899:144473:144649 [3] NCCL INFO Connected all trees
ec206899:144473:144649 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:144473:144649 [3] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:144472:144648 [2] NCCL INFO Connected all trees
ec206899:144472:144648 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:144471:144647 [1] NCCL INFO Connected all trees
ec206899:144472:144648 [2] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:144471:144647 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:144471:144647 [1] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:144470:144646 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 512 | 512
ec206899:144470:144646 [0] NCCL INFO 2 coll channels, 2 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
ec206899:144472:144648 [2] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:144470:144646 [0] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:144473:144649 [3] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:144471:144647 [1] NCCL INFO TUNER/Plugin: Plugin load returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory : when loading libnccl-tuner.so
ec206899:144472:144648 [2] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:144470:144646 [0] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:144473:144649 [3] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:144471:144647 [1] NCCL INFO TUNER/Plugin: Using internal tuner plugin.
ec206899:144472:144648 [2] NCCL INFO ncclCommInitRank comm 0x55b80961e620 rank 2 nranks 4 cudaDev 2 nvmlDev 2 busId 7000 commId 0x1cefe4816d6d2b1f - Init COMPLETE
ec206899:144470:144646 [0] NCCL INFO ncclCommInitRank comm 0x557323c35d60 rank 0 nranks 4 cudaDev 0 nvmlDev 0 busId 5000 commId 0x1cefe4816d6d2b1f - Init COMPLETE
ec206899:144473:144649 [3] NCCL INFO ncclCommInitRank comm 0x5627a3f8a1c0 rank 3 nranks 4 cudaDev 3 nvmlDev 3 busId 8000 commId 0x1cefe4816d6d2b1f - Init COMPLETE
ec206899:144471:144647 [1] NCCL INFO ncclCommInitRank comm 0x55f486708000 rank 1 nranks 4 cudaDev 1 nvmlDev 1 busId 6000 commId 0x1cefe4816d6d2b1f - Init COMPLETE
max_steps is given, it will override any value given in num_train_epochs
Using auto half precision backend
Currently training with a batch size of: 2
***** Running training *****
Num examples = 1,079
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 64
Gradient Accumulation steps = 8
Total optimization steps = 10
Number of trainable parameters = 2,097,152
0%| | 0/10 [00:00<?, ?it/s][rank1]: Traceback (most recent call last):
[rank1]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank1]: trainer.train()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank1]: return inner_training_loop(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank1]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank1]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank1]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank1]: data = self.gather(input_data)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank1]: return gather(tensor)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank1]: raise DistributedOperationException(
[rank1]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank1]: Operation: `accelerate.utils.operations.gather`
[rank1]: Input shapes:
[rank1]: - Process 0: [2, 350, 32256]
[rank1]: - Process 1: [2, 658, 32256]
[rank1]: - Process 2: [2, 617, 32256]
[rank1]: - Process 3: [2, 594, 32256]
[rank3]: Traceback (most recent call last):
[rank3]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank3]: trainer.train()
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank3]: return inner_training_loop(
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank3]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank3]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank3]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank3]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank3]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank3]: data = self.gather(input_data)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank3]: return gather(tensor)
[rank3]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank3]: raise DistributedOperationException(
[rank3]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank3]: Operation: `accelerate.utils.operations.gather`
[rank3]: Input shapes:
[rank3]: - Process 0: [2, 350, 32256]
[rank3]: - Process 1: [2, 658, 32256]
[rank3]: - Process 2: [2, 617, 32256]
[rank3]: - Process 3: [2, 594, 32256]
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank0]: trainer.train()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank0]: return inner_training_loop(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank0]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank0]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank0]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank0]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank0]: data = self.gather(input_data)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank0]: return gather(tensor)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank0]: raise DistributedOperationException(
[rank0]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank0]: Operation: `accelerate.utils.operations.gather`
[rank0]: Input shapes:
[rank0]: - Process 0: [2, 350, 32256]
[rank0]: - Process 1: [2, 658, 32256]
[rank0]: - Process 2: [2, 617, 32256]
[rank0]: - Process 3: [2, 594, 32256]
[rank2]: Traceback (most recent call last):
[rank2]: File "/root/semeval_2025_task_8/fine-tuning/orpo.py", line 104, in <module>
[rank2]: trainer.train()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank2]: return inner_training_loop(
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3675, in training_step
[rank2]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 873, in compute_loss
[rank2]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank2]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py", line 848, in get_batch_loss_metrics
[rank2]: self.accelerator.gather_for_metrics(policy_rejected_logits).detach().mean()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2583, in gather_for_metrics
[rank2]: data = self.gather(input_data)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2539, in gather
[rank2]: return gather(tensor)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 389, in wrapper
[rank2]: raise DistributedOperationException(
[rank2]: accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
[rank2]: Operation: `accelerate.utils.operations.gather`
[rank2]: Input shapes:
[rank2]: - Process 0: [2, 350, 32256]
[rank2]: - Process 1: [2, 658, 32256]
[rank2]: - Process 2: [2, 617, 32256]
[rank2]: - Process 3: [2, 594, 32256]
0%| | 0/10 [00:15<?, ?it/s]
W0218 15:10:42.789000 144337 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 144470 closing signal SIGTERM
W0218 15:10:42.790000 144337 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 144472 closing signal SIGTERM
W0218 15:10:42.790000 144337 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 144473 closing signal SIGTERM
E0218 15:10:46.323000 144337 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 1 (pid: 144471) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1184, in launch_command
multi_gpu_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 808, in multi_gpu_launcher
distrib_run.run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
orpo.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-18_15:10:42
host : ec206899.seewebcloud.it
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 144471)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
``` | 2,882 | 106 |
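The shape mismatch above comes from each rank producing logits with a different sequence length ([2, 350, 32256] vs. [2, 658, 32256], etc.), which `accelerator.gather` cannot concatenate. Below is a minimal sketch of one possible workaround — padding the sequence dimension across processes before gathering. It assumes the standard `accelerate` API (`pad_across_processes`, `gather_for_metrics`); the tensor names and sizes mirror the traceback but are illustrative, not the actual ORPO trainer code.

```python
# Illustrative sketch only: reproduces the per-rank shapes from the traceback with
# random tensors and pads them across ranks so the gather no longer fails.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each rank ends up with [batch, seq_len, vocab] logits where seq_len differs per
# rank (e.g. 350 on rank 0, 658 on rank 1 above) -- exactly what breaks the gather.
seq_len = 350 + 100 * accelerator.process_index  # hypothetical per-rank length
policy_rejected_logits = torch.randn(2, seq_len, 32256, device=accelerator.device)

# Pad the sequence dimension to the longest length across ranks before gathering.
padded = accelerator.pad_across_processes(policy_rejected_logits, dim=1, pad_index=0)
gathered = accelerator.gather_for_metrics(padded)
print(gathered.shape)  # (2 * num_processes, max_seq_len_across_ranks, 32256)
```

Run with something like `accelerate launch --num_processes 4 sketch.py`; every rank then prints the same gathered shape. Note that a padded gather changes the subsequent `.mean()` unless the padded positions are masked out, so this only illustrates the shape fix, not a drop-in patch.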
daniele-sartiano | 2025-02-18T14:26:54 | I’ll conclude by noting that the issue persists in TRL versions 0.15.0 and 0.14.0.
Version 0.13.0 works. | 2,882 | 107 |
Saturnoul | 2025-02-19T09:05:14 | > I’ll conclude by noting that the issue persists in TRL versions 0.15.0 and 0.14.0. Version 0.13.0 works.
Thanks! I'll try v0.13.0; maybe comparing it with v0.15 will help figure out why this error occurs. | 2,882 | 108 |
HuggingFaceDocBuilderDev | 2025-02-17T11:11:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2881). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,881 | 109 |
HuggingFaceDocBuilderDev | 2025-02-17T09:53:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2880). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,880 | 110 |
ChenDRAG | 2025-02-17T08:16:59 | A similar bug does not appear in 0.14.0. | 2,879 | 111 |