Columns: user (string, 3-28 chars) | created_at (timestamp[us]) | body (string, 1-173k chars) | issue_number (int64, 1-2.95k) | __index_level_0__ (int64, 0-8.59k)
zaddy6
2025-02-17T19:10:28
@Maghoumi not the case for me:

<img width="1024" alt="Image" src="https://github.com/user-attachments/assets/6e08e4ce-17e6-44f8-910b-05d4dc125a6d" />

Purple is with PEFT and vLLM enabled.
2,856
212
HuggingFaceDocBuilderDev
2025-02-13T17:26:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,855
213
qgallouedec
2025-02-13T17:28:23
It should always be the case indeed. The built-in `set` isn't ordered right? Is vllm faster when you pass 1 prompt with n=N outputs, than with N times the same prompt for n=1?
2,855
214
edbeeching
2025-02-13T17:58:23
> It should always be the case indeed. The built-in `set` isn't ordered right? Is vllm faster when you pass 1 prompt with n=N outputs, than with N times the same prompt for n=1?

Unfortunately, `set` is not ordered. Yes, vllm can share the prefill for the n generations, so it is faster; I profiled around 1.5x faster with the changes in this PR at 2k `max_completion_length`.
2,855
215
qgallouedec
2025-02-13T18:24:10
Nice!! I am surprised; I expected a smaller speedup given that the prefix should already be reused since https://github.com/huggingface/trl/pull/2757. We should probably do the same with transformers generation in a future PR, if it makes sense. Anyway, can you just add a comment somewhere to explain why we do this?
2,855
216
winglian
2025-02-13T22:54:59
My guess is that it's an easier optimization for vllm to understand that a single prompt has multiple generations than to be sent the same prompt multiple times, as happens since the https://github.com/huggingface/trl/pull/2776 refactor.
2,855
217
edbeeching
2025-02-14T08:45:46
@qgallouedec, without diving into the codebase of vllm, I would assume that the prefix cache is only used to compare a new batch of prompts with previously processed prompts. The system prompt is shared across all prompts, so it is cached and reused for all batches, whereas a new batch of prompts would first need to have all of its prefills calculated and entered into the cache before vllm could identify that `num_generations` of the prompts are exactly the same. Hence you get some improvement when you move from `num_generations` duplicated prompts to `n` generations for each unique prompt. Let me know if you would like me to clarify.
2,855
218
qgallouedec
2025-02-14T08:57:18
Thanks Ed! Actually I meant adding a comment in the code to concisely explain why we merge the prompts. Something like `Since 'prompts' contains 'num_generations' duplicates, we first take unique prompts, and generate num_generations outputs for each one. This is faster than generating outputs for each duplicate prompt individually.`.
2,855
219
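A minimal sketch of the deduplicate-then-`n=num_generations` idea discussed in this thread, using vLLM's offline `LLM` API; the model id, prompts, and sampling settings are placeholders of my own, not taken from the PR:

```python
from vllm import LLM, SamplingParams

# Instead of sending num_generations duplicates of each prompt with n=1,
# deduplicate the prompts while preserving order (a plain set would lose it)
# and ask vLLM for n=num_generations outputs per unique prompt, so the prefill
# is shared across the group.
num_generations = 4
prompts = ["What is 2+2?"] * num_generations + ["Name a prime number."] * num_generations

unique_prompts = list(dict.fromkeys(prompts))  # order-preserving deduplication

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model id
sampling_params = SamplingParams(n=num_generations, max_tokens=64, temperature=1.0)
outputs = llm.generate(unique_prompts, sampling_params)

# Flatten back to one completion per original (duplicated) prompt.
completions = [c.text for out in outputs for c in out.outputs]
```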
skandermoalla
2025-02-13T15:34:49
It's the estimator used by GRPO (ref eq 2 https://arxiv.org/pdf/2501.12948). For more details, you can check the `k3` estimator in this blogpost (http://joschu.net/blog/kl-approx.html).
2,854
220
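For reference, a small sketch of the `k3` estimator mentioned above, computed per token from log-probabilities under the current and reference policies (an illustration, not the exact TRL code):

```python
import torch

def k3_kl(per_token_logps: torch.Tensor, ref_per_token_logps: torch.Tensor) -> torch.Tensor:
    # k3 = exp(log q - log p) - (log q - log p) - 1, an unbiased, low-variance,
    # always non-negative estimator of KL(p || q) evaluated at samples from p,
    # with p the current policy and q the reference policy.
    diff = ref_per_token_logps - per_token_logps
    return torch.exp(diff) - diff - 1.0

# Tiny usage example with made-up log-probs.
cur = torch.tensor([-1.0, -2.0, -0.5])
ref = torch.tensor([-1.1, -1.8, -0.7])
print(k3_kl(cur, ref))
```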
qiaojiim
2025-02-14T01:40:15
> It's the estimator used by GRPO (ref eq 2 https://arxiv.org/pdf/2501.12948). For more details, you can check the `k3` estimator in this blogpost (http://joschu.net/blog/kl-approx.html). great
2,854
221
KareemMusleh
2025-02-14T08:10:30
It seems that it was moved to DPOConfig
2,853
222
llj1113
2025-02-14T08:37:19
> It seems that it was moved to DPOConfig

Thanks! I have solved this problem.
2,853
223
edbeeching
2025-02-13T13:29:02
Can you test with `vllm==0.7.2`? I had a similar issue which I believe was fixed when I bumped the vllm version.
2,851
224
YunGe0414
2025-02-14T03:09:36
> Can you test with `vllm==0.7.2`, I had a similar issue which I believe was fixed when I bumped vllm version. It worked, thank you so much bro.
2,851
225
edbeeching
2025-02-14T10:00:16
No probs, closing.
2,851
226
HuggingFaceDocBuilderDev
2025-02-13T10:46:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2850). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,850
227
HuggingFaceDocBuilderDev
2025-02-13T10:03:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2848). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,848
228
qgallouedec
2025-02-12T19:47:38
Do you have any dataset example that would contain such data? Your concern is that it could corrupt the training in some sense, right? Is it a documented/recurrent issue?
2,844
229
shirinyamani
2025-02-12T20:06:17
Exactly! It can cause some issues in the training phase. Technically, this applies to any RLHF schema that has an SFT step where you fine-tune on specific examples: if those examples contain any knowledge-cutoff phrasing, it might cause issues! So far I did a very simple `re`-based search for cutoff patterns such as

```python
r"as of my last update",
r"as of my last knowledge update",
r"as of \d{4}",  # Matches "As of 2024", "As of 2023", etc.
r"i do not have access to real-time information",
```

on the [trl-lib/tldr](https://huggingface.co./datasets/trl-lib/tldr) dataset, and could not find anything. I did find ~1000 examples containing terms like "As of the year" or "as it dates back", but when I took a closer look these examples were referring to some date or something in the context, since this dataset is from Reddit (mostly the relationship subreddit), so I could not find any example matching my concern of the model generating a knowledge-cutoff completion. However, note that this specific dataset is by nature not really a good match for the purpose I mentioned earlier, but I believe this can happen in other SFT data. Speaking of this, I also saw a similar thing raised in allenai/open-instruct. Therefore, I thought it might be nice if we add such support, WDYT?
2,844
230
shirinyamani
2025-02-12T20:11:47
Imagine I'm a user who only wants to use a publicly available base model, but I want to do the SFT training with my local examples using TRL's SFTTrainer. Is there any way to flag, before fully training on the data, that the data the user is using for SFT (or one of the datasets that TRL presents under trl-lib) contains some knowledge cutoff? Is my question valid?
2,844
231
qgallouedec
2025-02-12T20:39:12
This information is typically included in the system prompt. So, even if the model has been trained on this "corrupting" data, it shouldn’t pose an issue during generation—but that’s just my intuition. In any case, this sounds more like a data preparation concern. Unless it’s a severe and recurring issue (i.e., well-documented and frequently reported), I’d consider it slightly beyond the scope of TRL. That said, now that this issue has been raised, if you have code that can detect or filter such data in a dataset, this would be the right place to share it.
2,844
232
shirinyamani
2025-02-12T21:40:37
This was the quick code I used for the [trl-lib/tldr](https://huggingface.co./datasets/trl-lib/tldr) dataset.

```python
import re

from datasets import load_dataset

dataset = load_dataset("trl-lib/tldr", split="train")

# knowledge cutoff-related phrases
cutoff_patterns = [
    r"as of my last update",
    r"as of my last knowledge update",
    r"as of \d{4}",  # Matches "As of 2024", etc.
    r"i do not have access to real-time information",
    r"i was last updated in \d{4}",
]

def check_knowledge_cutoff(text):
    text = text.lower()  # Normalize to lowercase
    return any(re.search(pattern, text) for pattern in cutoff_patterns)

cutoff_mentions = [
    (row["prompt"], row["completion"])
    for row in dataset
    if check_knowledge_cutoff(row["prompt"]) or check_knowledge_cutoff(row["completion"])
]

# Optionally, print a few examples
print("Sample Matches:")
for i, (prompt, completion) in enumerate(cutoff_mentions):
    print(f"{i+1}. Prompt: {prompt}\n   Completion: {completion}\n")
```

But I also found this [PR](https://github.com/allenai/open-instruct/pull/555) on allenai/open-instruct relevant to the topic!
2,844
233
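As a possible follow-up (my own sketch, not something proposed in the thread): the same check can be expressed with `datasets.Dataset.filter`, which directly gives back both the flagged and the clean subsets. It assumes the same `prompt`/`completion` columns as above.

```python
import re

from datasets import load_dataset

cutoff_patterns = [
    r"as of my last update",
    r"as of my last knowledge update",
    r"as of \d{4}",
    r"i do not have access to real-time information",
    r"i was last updated in \d{4}",
]
cutoff_re = re.compile("|".join(cutoff_patterns))

def has_cutoff(example):
    # Check prompt and completion together, case-insensitively.
    text = f"{example['prompt']} {example['completion']}".lower()
    return cutoff_re.search(text) is not None

dataset = load_dataset("trl-lib/tldr", split="train")
flagged = dataset.filter(has_cutoff)                   # examples that mention a knowledge cutoff
clean = dataset.filter(lambda ex: not has_cutoff(ex))  # examples to actually train on
print(len(flagged), len(clean))
```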
HuggingFaceDocBuilderDev
2025-02-13T09:05:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2843). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,843
234
JoelSeniorLiang
2025-02-12T13:57:04
Try setting `per_device_train_batch_size=1` in the config.
2,842
235
qgallouedec
2025-02-12T15:34:16
What trl version do you use?
2,842
236
qgallouedec
2025-02-12T15:37:30
What is the per-device batch size? Where do you get this "output per device"?
2,842
237
MAOJIASONG
2025-02-13T08:50:57
> Try setting `per_device_train_batch_size=1` in the config.

Hi, I already set `per_device_train_batch_size=1`, so I assume the problem does not come from the training config.
2,842
238
MAOJIASONG
2025-02-13T08:51:08
> What trl version do you use? 0.14.0
2,842
239
MAOJIASONG
2025-02-13T08:51:59
> What is the per-device batch size? Where do you get this "output per device"?

`per_device_train_batch_size=1`, as it is. The "output per device" comes from my own debugging output.
2,842
240
qgallouedec
2025-02-13T08:57:52
OK, but there is no such variable as "output_per_device", so please elaborate on what line you are referring to. It's not very clear to me at this point.
2,842
241
MAOJIASONG
2025-02-14T04:16:11
> OK, but there is no such variable as "output_per_device", so please elaborate on what line you are referring to. It's not very clear to me at this point.

https://github.com/huggingface/trl/blob/49711efab9e0cc3762d3228c9fd5a8064d489503/trl/trainer/grpo_trainer.py#L469
https://github.com/huggingface/trl/blob/49711efab9e0cc3762d3228c9fd5a8064d489503/trl/trainer/grpo_trainer.py#L464

Sorry, `output_per_device` is a variable I defined myself. In fact, I printed the number of prompts and completions at the two lines above and found that the number does not match what I expected: with `num_processes=1` there should be only 1 prompt per device, but I get `avail_num_gpus*per_device_train_batch_size=4*1=4` prompts, as indicated in the example.
2,842
242
zzn1999
2025-02-12T10:18:52
That's Good. I have the same issue.
2,841
243
yuting-shi
2025-02-12T10:33:17
Solved my problem!
2,841
244
kashif
2025-02-12T13:40:58
@yuting-shi what problem does this solve? I believe the `generate()` method runs in inference/eval mode.
2,841
245
yuting-shi
2025-02-13T01:37:41
@kashif the content generated by the inference model is a mess, even though the model has been through SFT before.
2,841
246
casper-hansen
2025-02-12T12:04:25
Offending PR might be https://github.com/huggingface/trl/pull/2817
2,840
247
AndreiCComan
2025-02-13T19:43:31
Same issue here. In my case this happened immediately after the checkpoint has been saved.
2,840
248
qgallouedec
2025-02-13T20:15:02
Can you try to provide the steps to reproduce? Maybe taking only a small part of your dataset could help reproduce without having to wait 24 hours.
2,840
249
Superskyyy
2025-02-14T00:31:14
https://github.com/huggingface/open-r1/issues/299 seems to be the same issue referenced in open-r1
2,840
250
casper-hansen
2025-02-14T08:46:51
> Can you try to provide the steps to reproduce? Maybe take only a small part of your dataset could help reproduce without having to wait 24 hours This was with the following dataset https://huggingface.co./datasets/allenai/RLVR-IFeval
2,840
251
hezhefly
2025-02-17T04:58:52
> Same issue here. In my case this happened immediately after the checkpoint has been saved. Same situation
2,840
252
hezhefly
2025-02-17T10:53:48
Following the logs, I went through the source code of both trl and deepspeed and found that the error is raised by an assertion on the parameters in `deepspeed.zero.GatheredParameters`. Looking further into the assertion logic, the `free_param(param)` method expects `ds_active_sub_modules` to have been cleared before it runs. I am not sure what exactly in trl causes `ds_active_sub_modules` not to be cleared.

So I boldly tried clearing `ds_active_sub_modules` manually, adding the following logic at [grpo_trainer.py#L490](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L490):

```python
for param in self.model.parameters():
    param.ds_active_sub_modules.clear()
```

After testing, it works; I have now completed the GRPO training run.
2,840
253
wuyifan18
2025-02-18T04:20:21
Same issue
2,840
254
Superskyyy
2025-02-18T04:51:50
Just a cross-reference from an OpenRLHF issue; it seems related to batch size. https://github.com/OpenRLHF/OpenRLHF/issues/630
2,840
255
tsrigo
2025-02-19T02:59:32
> Same issue here. In my case this happened immediately after the checkpoint has been saved.

@qgallouedec Me too! Have you fixed this problem?
2,840
256
tsrigo
2025-02-20T06:58:48
> > Same issue here. In my case this happened immediately after the checkpoint has been saved.
>
> [@qgallouedec](https://github.com/qgallouedec) Me too! Have you fixed this problem?

I fixed it by satisfying `save_interval % grad_accum == 0`.
2,840
257
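A minimal config sketch of that workaround, under the assumption that `save_interval` corresponds to `save_steps` and `grad_accum` to `gradient_accumulation_steps` (both inherited from `transformers.TrainingArguments`); the concrete values are placeholders:

```python
from trl import GRPOConfig

gradient_accumulation_steps = 4

training_args = GRPOConfig(
    output_dir="grpo-output",                           # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=gradient_accumulation_steps,
    save_steps=25 * gradient_accumulation_steps,        # keep save_steps % grad_accum == 0
)
```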
loxs123
2025-02-21T16:20:46
> Following the logs, I went through the source code of both trl and deepspeed and found that the error is raised by an assertion on the parameters in `deepspeed.zero.GatheredParameters`. Looking further into the assertion logic, the `free_param(param)` method expects `ds_active_sub_modules` to have been cleared before it runs. I am not sure what exactly in trl causes `ds_active_sub_modules` not to be cleared.
>
> So I boldly tried clearing `ds_active_sub_modules` manually, adding the following logic at [grpo_trainer.py#L490](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L490):
>
> for param in self.model.parameters():
>     param.ds_active_sub_modules.clear()
>
> After testing, it works; I have now completed the GRPO training run.

When I run this code I get `AttributeError: 'Parameter' object has no attribute 'ds_active_sub_modules'`. Do you know how to solve this? Maybe some library version mismatch?
2,840
258
nikhilchandak
2025-02-22T12:24:44
+1, I am also facing the same issue. @tsrigo in your fix, does `save_interval` correspond to `save_steps`, which should be set as a multiple of `gradient_accumulation_steps`? I tried that and my runs still crash. @qgallouedec Any known fix for this?
2,840
259
hezhefly
2025-02-24T02:09:09
> > Following the logs, I went through the source code of both trl and deepspeed and found that the error is raised by an assertion on the parameters in `deepspeed.zero.GatheredParameters`. Looking further into the assertion logic, the `free_param(param)` method expects `ds_active_sub_modules` to have been cleared before it runs. I am not sure what exactly in trl causes `ds_active_sub_modules` not to be cleared.
> >
> > So I boldly tried clearing `ds_active_sub_modules` manually, adding the following logic at [grpo_trainer.py#L490](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L490):
> >
> > for param in self.model.parameters():
> >     param.ds_active_sub_modules.clear()
> >
> > After testing, it works; I have now completed the GRPO training run.
>
> When I run this code I get `AttributeError: 'Parameter' object has no attribute 'ds_active_sub_modules'`. Do you know how to solve this? Maybe some library version mismatch?

@loxs123 The version I am using is: Name: deepspeed, Version: 0.15.3
2,840
260
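For anyone hitting the `AttributeError` above, a hedged variant of the same workaround that only touches parameters where DeepSpeed ZeRO-3 actually attached the attribute (the guard is my own addition, not something the original posters verified):

```python
# Assumes `self.model` is the DeepSpeed-ZeRO-3-wrapped policy model, as in the
# workaround quoted above; outside ZeRO stage 3 the attribute simply does not exist.
for param in self.model.parameters():
    if hasattr(param, "ds_active_sub_modules"):
        param.ds_active_sub_modules.clear()
```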
qgallouedec
2025-02-24T07:48:02
No, because I'm still waiting for someone to provide sufficient info to reproduce https://github.com/huggingface/trl/issues/2840#issuecomment-2657619732. Maybe you can help with this?
2,840
261
nomadlx
2025-02-24T10:04:12
> No, because I'm still waiting for someone to provide sufficient info to reproduce [#2840 (comment)](https://github.com/huggingface/trl/issues/2840#issuecomment-2657619732). Maybe you can help with this?

I think this information might be helpful for you to reproduce the issue: https://github.com/huggingface/open-r1/issues/299#issuecomment-2667375592. I've verified that with the same training data, when `trainset_size % batch_size == 0`, this error no longer occurs after the first save.
2,840
262
qgallouedec
2025-02-12T07:47:12
Thanks for the suggestion but it doesn't align with the paper actually.
2,837
263
pointerhacker
2025-02-12T09:58:15
> Thanks for the suggestion but it doesn't align with the paper actually. In my understanding, isn't per_token_logps - per_token_logps.detach() always equal to 0? Could you please explain why this is feasible? Thank you!
2,837
264
qgallouedec
2025-02-12T10:46:07
That's right, answer here: https://github.com/huggingface/trl/pull/2565#issuecomment-2595837761 :)
2,837
265
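To make the linked answer concrete, here is a tiny self-contained demonstration (with made-up numbers) of why `x - x.detach()` is zero in value yet still carries a gradient:

```python
import torch

per_token_logps = torch.tensor([-1.2, -0.7, -2.3], requires_grad=True)
advantages = torch.tensor([0.5, -0.5, 1.0])

# exp(x - x.detach()) evaluates to 1 everywhere, so the loss value only depends
# on the advantages, but d/dx exp(x - x.detach()) = 1, so the policy-gradient
# term advantages * d(log prob) survives in the backward pass.
ratio = torch.exp(per_token_logps - per_token_logps.detach())
loss = -(ratio * advantages).mean()
loss.backward()

print(ratio)                 # tensor([1., 1., 1.], grad_fn=...)
print(per_token_logps.grad)  # -advantages / 3
```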
pointerhacker
2025-02-12T13:16:29
Thank you for your reply.
2,837
266
pointerhacker
2025-02-12T13:21:25
> That's right, answer here: [#2565 (comment)](https://github.com/huggingface/trl/pull/2565#issuecomment-2595837761) :) Based on what you said that `As a result, the GRPO objective just minimizes the KL divergence between the policy model and the reference policy,` I have another question: How can this approach achieve preference alignment effects?
2,837
267
Superskyyy
2025-02-12T13:58:59
Since the vllm device patch is growing larger, it might be wise to move it into a utility module instead. WDYT?
2,836
268
baymax591
2025-02-14T10:54:03
This PR helps a lot, I hope it can speed up the integration
2,836
269
ji-huazhong
2025-02-14T13:30:03
I think this PR is ready to be merged 🤗 @qgallouedec
2,836
270
HuggingFaceDocBuilderDev
2025-02-14T13:52:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2836). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,836
271
qgallouedec
2025-02-14T14:23:07
Can you make sure to run `make precommit` to apply the style 🙏
2,836
272
ji-huazhong
2025-02-18T14:56:20
`make precommit` executes successfully locally.
2,836
273
lynnzhiyun
2025-02-18T15:28:50
Hi @ji-huazhong, Thank you for your excellent work! This PR has been incredibly helpful in enabling me to train models using GRPO on the NPU smoothly. I want to ask if this PR is ready to be merged and I'd be extremely grateful if it could be done promptly. cc @qgallouedec
2,836
274
ji-huazhong
2025-02-19T07:45:00
[![asciicast](https://asciinema.org/a/704242.svg)](https://asciinema.org/a/704242)

I did a test on Ascend NPU using the grpo script provided by open-r1, and it works 🤗

> Since training grpo for one step takes a long time, only the output of the first 4 steps is shown here; then I just pressed ctrl-c to exit.
2,836
275
ji-huazhong
2025-02-19T08:40:49
Hi @kashif, the failing test case seems unrelated to this PR. Could you take a look? Thanks!
2,836
276
symoon11
2025-02-16T06:21:44
To the best of my knowledge, the "padding free" option works correctly only when FlashAttention is activated. It seems that FlashAttention is not currently activated. I recommend first creating the model and then passing it to the SFTTrainer.
2,834
277
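A minimal sketch of that recommendation: build the model with FlashAttention-2 enabled and pass the instance (rather than a string) to `SFTTrainer`. The model id, output dir, and dataset are placeholders, and the `padding_free` field name is an assumption about the TRL version in use:

```python
import torch
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

# Instantiate the model with FlashAttention-2 explicitly enabled, then hand the
# instance to SFTTrainer so the padding-free path has the backend it requires.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",                     # placeholder model id
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="sft-output", padding_free=True),  # field name assumed
    train_dataset=train_dataset,             # assumed to be defined elsewhere
)
trainer.train()
```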
YooSungHyun
2025-02-17T00:22:26
@symoon11 Thanks for the reply! I pass `--attn_implementation=flash_attention_2`, but this goes into model_config and not training_args... I will test soon and post some results.
2,834
278
YooSungHyun
2025-02-17T00:36:23
I foolishly forgot to include this part in my code:

```python
quantization_config = get_quantization_config(model_args)
model_kwargs = dict(
    revision=model_args.model_revision,
    trust_remote_code=model_args.trust_remote_code,
    attn_implementation=model_args.attn_implementation,
    torch_dtype=model_args.torch_dtype,
    use_cache=False if training_args.gradient_checkpointing else True,
    device_map=get_kbit_device_map() if quantization_config is not None else None,
    quantization_config=quantization_config,
)
training_args.model_init_kwargs = model_kwargs
```

Sorry for causing confusion. My code runs correctly now!
2,834
279
zaporter
2025-02-12T04:26:33
See https://github.com/huggingface/trl/blob/main/docs/source/grpo_trainer.md#computing-the-advantage

It doesn't matter if you have negative or positive rewards -- all that matters is the group-relative advantage. Rewards of `{1, 0}` will result in advantages of `1` and `-1` respectively. That is the same as rewards of `{1, -1}`, which also results in `1, -1`. Or consider rewards of `{1, 1, 2}`: this will result in advantages of `-1/sqrt(2), -1/sqrt(2), sqrt(2)`.
2,832
280
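A small sketch that reproduces the numbers above, normalizing each reward within its group as `(r - mean) / std`; population std is used here to match the comment, and TRL's exact normalization may differ:

```python
import torch

def group_relative_advantages(rewards):
    # Normalize rewards within one group of generations for the same prompt.
    r = torch.tensor(rewards, dtype=torch.float32)
    return (r - r.mean()) / r.std(correction=0)  # population std

print(group_relative_advantages([1.0, 0.0]))       # tensor([ 1., -1.])
print(group_relative_advantages([1.0, -1.0]))      # tensor([ 1., -1.])
print(group_relative_advantages([1.0, 1.0, 2.0]))  # ~[-0.7071, -0.7071, 1.4142]
```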
HuggingFaceDocBuilderDev
2025-02-11T18:12:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,831
281
HuggingFaceDocBuilderDev
2025-02-11T13:34:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2829). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,829
282
HuggingFaceDocBuilderDev
2025-02-11T10:09:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,828
283
HuggingFaceDocBuilderDev
2025-02-11T07:55:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2827). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,827
284
MohamedAliRashad
2025-02-11T06:31:17
After some inspection, I think this error happens because of `vllm_gpu_memory_utilization`: if it's smaller than what vllm needs to host your model, it will give you the error I received.
2,826
285
qgallouedec
2025-02-11T06:46:38
Ah that's right, it's a particular case where the error message is misleading. Actually you should set `vllm_device="cuda:0"`.
2,826
286
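For reference, a hedged config sketch for this single-GPU case; `use_vllm`, `vllm_device`, and `vllm_gpu_memory_utilization` are the GRPOConfig fields mentioned in this thread, and the numeric values are placeholders to tune for your model:

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="grpo-output",         # placeholder
    use_vllm=True,
    vllm_device="cuda:0",             # pin vLLM to the training GPU, as suggested above
    vllm_gpu_memory_utilization=0.3,  # leave enough VRAM for training itself
)
```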
MohamedAliRashad
2025-02-11T10:16:24
@qgallouedec I thought the problem was that vllm couldn't find enough VRAM on my single GPU, so it automatically sought to get another one but failed in the end. Anyhow, I am trying to use an evaluation dataset with `GRPOTrainer` and it is giving me this error:

```
Traceback (most recent call last):
  File "/workspace/train_grpo2.py", line 173, in <module>
    trainer.train(resume_from_checkpoint=False)
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 2171, in train
    return inner_training_loop(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 2598, in _inner_training_loop
    self._maybe_log_save_evaluate(
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 3071, in _maybe_log_save_evaluate
    metrics = self._evaluate(trial, ignore_keys_for_eval)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 3025, in _evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 4073, in evaluate
    output = eval_loop(
             ^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 4267, in evaluation_loop
    losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 4436, in prediction_step
    has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 4436, in <genexpr>
    has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
                                                               ^^^^^^^^^^
AttributeError: 'list' object has no attribute 'get'
  0%|          | 10/57951 [01:38<158:47:29,  9.87s/it]
```

Can you tell me what I am doing wrong?
2,826
287
qgallouedec
2025-02-11T10:18:11
Thanks for reporting, can you share the output of `trl env`?
2,826
288
MohamedAliRashad
2025-02-11T10:23:38
INFO 02-11 10:23:26 __init__.py:190] Automatically detected platform cuda.

Copy-paste the following information when reporting an issue:
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA L40
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.2.0
- HF Hub version: 0.28.1
- TRL version: 0.14.0
- bitsandbytes version: not installed
- DeepSpeed version: not installed
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.61.1
- PEFT version: not installed
2,826
289
qgallouedec
2025-02-11T10:49:42
@MohamedAliRashad can you try after installing from source:

```
pip install git+https://github.com/huggingface/trl.git@main
```
2,826
290
lidh15
2025-02-11T02:13:48
With a very similar environment setup (except for trl 0.15.0.dev0; where is that version? I can only find 0.14.0) I encountered this issue: [No inf checks were recorded for this optimizer](https://discuss.pytorch.org/t/no-inf-checks-were-recorded-for-this-optimizer/140505). When I turn off vllm, the error is not triggered. I wonder if there is any clue for this error.
2,825
291
HuggingFaceDocBuilderDev
2025-02-10T20:28:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2824). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,824
292
mark-mcl
2025-02-11T02:07:13
This seems to also affect sampling; after patching this, I get a different prompt on each device now.
2,824
293
kiddj
2025-02-11T02:46:58
thanks! does using a different seed for each process affect training?
2,824
294
qgallouedec
2025-02-11T07:18:27
True @kiddj!! Fixed in 95fcfeb003d8ba0ab339faeabcb41365786cee58
2,824
295
qgallouedec
2025-02-11T08:58:31
I've checked it locally, and yes it's now working as expected. I'll see in a follow up PR if I can add a test for this
2,824
296
qgallouedec
2025-02-10T18:41:19
Hi and thanks for your contribution! The idea seems quite natural. Do you have any quantitative results? I'd like to keep the codebase simple, so for the moment I'm in favor of leaving this PR open for the community to reference, and if it's a feature in high demand, then we'll merge it.
2,823
297
mandeep511
2025-02-10T19:04:10
> Hi and thanks for your contribution! The idea seems quite natural. Do you have any quantitative results? I'd like to keep the codebase simple, so for the moment I'm in favor of leaving this PR open for the community to reference, and if it's a feature in high demand, then we'll merge it. Hey, I'm doing a couple of runs to test how well this performs compared to the vanilla implementation. Will post the results here once it is done.
2,823
298
willccbb
2025-02-11T01:08:42
I'm not sure it makes sense to directly add features like this which are not part of the canonical algorithm, and which add significant complexity to the codebase, making further maintenance + feature compatibility more difficult. I believe that this approach probably works, and could yield higher performance, but my opinion is that this shouldn't be the primary goal of GRPOTrainer. There are lots of such tricks that could be added, but each one makes the code harder to read and modify, and is no longer "GRPO" in the literal sense. Mentioning my PR https://github.com/huggingface/trl/pull/2810 here because I think it's directly relevant: if people want to try out non-canonical sampling methods like MCTS, reward thresholds, reward score diversity constraints etc., or add things like tool calls or multi-step interactions they should have a way to do this without needing to modify the core Trainer. The `Environment` abstraction would allow users to have full control over the sampling step, and perhaps implementations of these could live in a unified place in TRL (e.g. `RolloutSamplers`) to support easy hot-swapping. We may also want to allow users to return rewards directly in the rollout stage. On the whole, I think it will be easier for more people to use and adapt TRL trainers if the primary code stays simple while supporting modular customization.
2,823
299
winglian
2025-02-11T22:58:10
It could be worthwhile to refactor the GRPOTrainer to make it easier to extend the class without having to duplicate whole swaths of code in a method in order to add retries in a subclass.
2,823
300
Rocketknight1
2025-02-11T15:23:10
Not all models are expected to support tool use! When they do support tool use, we encourage support for that in their chat template, but I'm not sure if models like Deepseek-R1 are trained to use tools. cc @aymeric-roucher for agentic workflows, though!
2,821
301
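For context, a short sketch of how chat-template tool support is usually exercised in transformers; the model id and the `get_weather` function are placeholders of mine, and whether a given checkpoint (e.g. DeepSeek-R1) was actually trained to call tools is a separate question, as noted above:

```python
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """
    Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather],          # only honored by templates that declare tool support
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```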
qgallouedec
2025-02-10T14:52:11
@winglian just to point out a different approach: #2730
2,818
302
winglian
2025-02-10T15:04:15
@qgallouedec The downside there is that you're limited to the LoRA support in vllm, which means no DoRA support. With this approach, almost any PEFT adapter type could be used. While LoRA does converge pretty quickly compared to full-parameter training, DoRA seems to be more performant. <img width="1327" alt="Screenshot 2025-02-10 at 9 25 06 AM" src="https://github.com/user-attachments/assets/b37e7935-a18b-4ed1-8146-8578041b1d5c" />
2,818
303
qgallouedec
2025-02-10T15:06:04
This seems quite reasonable, thank you for the clear explanation.
2,818
304
qgallouedec
2025-02-10T15:09:47
Another pointer that could be useful:

> It is possible to call `model.merge_adapter` (optionally with `adapter_names` argument), then `model.state_dict()`, then `model.unmerge_adapter`.
> The `state_dict` may require some clean up though, depending on what you need to do with it (I couldn't infer that from the PR).
> By clean up, I mean: After `merge_and_unload` the model looks like the base model. But `merge_adapter` keeps the LoRA structure, with the wrapped base model, LoRA weights etc. still being present in the `state_dict`.

From @BenjaminBossan
2,818
305
winglian
2025-02-10T16:05:52
I tried

```python
unwrapped_model.merge_and_unload()
state_dict = unwrapped_model.base_model.model.state_dict()
unwrapped_model.unmerge_adapter()
```

but the resulting state dict still has the `base_model.model.` prefix.
2,818
306
qgallouedec
2025-02-10T18:30:41
I've added the suggested modification to this branch: https://github.com/huggingface/trl/pull/2725 and it seems to work...! EDIT: DoRA included.
2,818
307
BenjaminBossan
2025-02-11T11:25:26
> I've added the suggested modification to this branch: #2725 it seems to work...! EDIT: DORA included Nice, I added a comment there. Hopefully, one of these branches can be merged soon :)
2,818
308
winglian
2025-02-13T00:45:53
I re-did this PR to account for the other changes, and also updated the test to use lora.
2,818
309
qgallouedec
2025-02-13T13:41:38
thanks for the followup @BenjaminBossan !
2,818
310
HuggingFaceDocBuilderDev
2025-02-13T13:52:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2818). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,818
311