user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---|
SeungyounShin | 2025-02-05T04:29:57 | Here is how OpenRLHF handles the ratio in PPO:
https://github.com/OpenRLHF/OpenRLHF/blob/631b0bcb7c14ec2fd4117a43a661903ed60d26cc/openrlhf/trainer/ppo_trainer.py#L366
| 2,769 | 512 |
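For context, here is a minimal sketch of the clipped PPO surrogate that the linked OpenRLHF line feeds the ratio into. This is an illustration in my own notation, not the OpenRLHF code:

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    # ratio = pi_theta(a|s) / pi_theta_old(a|s), computed in log space for stability
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# If old_log_probs are recomputed from the *current* policy in the same step
# (no stored rollout log-probs), the ratio is identically 1 -- the behaviour
# discussed in the linked issue.
loss = ppo_clip_loss(
    log_probs=torch.tensor([-1.2, -0.8]),
    old_log_probs=torch.tensor([-1.2, -0.8]),
    advantages=torch.tensor([0.5, -0.3]),
)
print(loss)
```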
SeungyounShin | 2025-02-05T05:11:24 | This is related to https://github.com/huggingface/trl/issues/2608
I now fully grasp why the ratio is always 1. | 2,769 | 513 |
HuggingFaceDocBuilderDev | 2025-02-04T18:26:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2766). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,766 | 514 |
endic-sam928281 | 2025-02-04T15:50:50 | Hello, we tried to solve the issue.
This is what we did:
We added the missing required parameters `reward_model` and `train_dataset` to the `PPOTrainer` initialization in the quickstart guide. This ensures consistency with the actual implementation.
You can review changes in this commit: https://github.com/endic-sam928281/huggingface-trl-2764/commit/7947e04ee3557bab907b04a6c53f3d47a4c8157f.
> [!CAUTION]
> Disclaimer: The solution concept was generated by AI; never copy-paste this code without checking the correctness of the generated code. The solution might not be complete; use this code as inspiration only.
---
Latta AI seeks to solve problems in open source projects as part of its mission to support developers around the world. Learn more about our mission at https://latta.ai/ourmission . If you no longer want Latta AI to attempt solving issues on your repository, you can block this account. | 2,764 | 515 |
elliot-zzh | 2025-02-04T16:15:09 | Yes, I think this will solve 1., but 2. still remains.
Also, wouldn't it be better to pass `reward_model` and `train_dataset` positionally instead of as keyword arguments, since they're not optional? | 2,764 | 516 |
elliot-zzh | 2025-02-07T09:09:06 | Seems like the whole quick start guide is a mess? It's still using `PPOTrainer.step`, yet that API has been abandoned and we should now use `PPOTrainer.train`?
And the API reference is also a mess. There isn't even a type definition of `PPOTrainer.train`, which I could only look up in the source code. | 2,764 | 517 |
Superskyyy | 2025-02-07T14:05:15 | > Seems like the whole quick start guide is a mess? It's still using `PPOTrainer.step`, yet that API has been abandoned and we should now use `PPOTrainer.train`?
>
> And the API reference is also a mess. There isn't even a type definition of `PPOTrainer.train`, which I could only look up in the source code.
The quickstart was written for PPOTrainer v1, which back then had the `.step` method; now the training loop is wrapped for you. But yes, it's confusing to first-time users. I will help refactor it. For now, you can refer to the examples folder in the codebase. | 2,764 | 518 |
HuggingFaceDocBuilderDev | 2025-02-04T15:43:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2763). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,763 | 519 |
shirinyamani | 2025-02-04T18:55:50 | If the ref_model lives on another machine (node), I don't fully understand how FSDP can take place. Just to improve my knowledge, I'd appreciate an explanation of how the sync can happen without conflict. | 2,763 | 520 |
Superskyyy | 2025-02-04T20:45:08 | This would potentially conflict with PR #2684 though, maybe need a note on doc. | 2,763 | 521 |
edbeeching | 2025-02-05T08:07:59 | @shirinyamani
The ref model is fixed, so there is no need to sync weights to it. It could be wrapped with FSDP, although this implementation does not expose that option; we assume it is small enough to fit on one GPU.
Only the model being optimized is sharded in this setting; the ref model runs on another node in order to free memory on the node being used for optimization.
@Superskyyy , good point. Yes I do not think that iterative GRPO is compatible with this option.
| 2,763 | 522 |
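A self-contained toy sketch of the point above: the ref model is a frozen copy that never receives weight syncs and builds no autograd graph, so only the policy needs sharding. An `nn.Linear` stands in for the causal LM here; the real setup would wrap the policy with FSDP across ranks.

```python
import copy
import torch
import torch.nn as nn

policy = nn.Linear(16, 16)                # stand-in for the sharded, optimized model
ref_model = copy.deepcopy(policy).eval()  # frozen snapshot; never synced or updated
for p in ref_model.parameters():
    p.requires_grad_(False)

x = torch.randn(4, 16)
with torch.no_grad():                     # the ref forward pass needs no graph
    ref_out = ref_model(x)
out = policy(x)                           # only the policy accumulates gradients
penalty = (out - ref_out).pow(2).mean()   # placeholder for a KL-style regularizer
penalty.backward()
```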
Superskyyy | 2025-02-05T13:19:12 | > @shirinyamani
>
> The ref model is fixed, so there is no need to sync weights to it. It could be wrapped with FSDP, although this implementation does not expose that option; we assume it is small enough to fit on one GPU.
>
> Only the model being optimized is sharded in this setting; the ref model runs on another node in order to free memory on the node being used for optimization.
>
> @Superskyyy , good point. Yes I do not think that iterative GRPO is compatible with this option.
When a more distributed backend is built into the lib, this can be solved naturally. | 2,763 | 523 |
edbeeching | 2025-02-05T14:38:44 | @Superskyyy
Just pasting a comment from our internal Slack below; I should be able to include iterative GRPO in this PR using a remote SGLang endpoint:
> I am going to refactor it to work with an SGLang endpoint. The reasoning is that SGLang endpoints have two nice features: [working directly with the token_ids](https://docs.sglang.ai/backend/native_api.html#Skip-Tokenizer-and-Detokenizer) and the capacity to [reload new model weights from disk](https://docs.sglang.ai/backend/native_api.html#Update-Weights-From-Disk).
> The weight reload is useful in two settings: iterative GRPO, which is already available in TRL via this [PR](https://github.com/huggingface/trl/pull/2700), and also async GRPO, which may be required in the future to scale up the training. | 2,763 | 524 |
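A rough sketch of how such an endpoint could be driven over HTTP, based only on the two doc sections linked above. The endpoint paths, payload fields, and port are assumptions drawn from those anchors, not verified against a running server:

```python
import requests

BASE = "http://localhost:30000"  # assumed address of an SGLang server

# Reload updated policy weights (doc anchor: "Update Weights From Disk");
# the "model_path" field is an assumption based on the linked docs.
requests.post(f"{BASE}/update_weights_from_disk",
              json={"model_path": "/checkpoints/latest"})

# Generate directly from token ids, skipping the tokenizer
# (doc anchor: "Skip Tokenizer and Detokenizer"); "input_ids" is likewise assumed.
resp = requests.post(f"{BASE}/generate",
                     json={"input_ids": [1, 2, 3, 4],
                           "sampling_params": {"max_new_tokens": 16}})
print(resp.json())
```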
Superskyyy | 2025-02-06T23:22:21 | > @Superskyyy Just pasting a comment from our internal Slack below; I should be able to include iterative GRPO in this PR using a remote SGLang endpoint:
>
> > I am going to refactor it to work with an SGLang endpoint. The reasoning is that SGLang endpoints have two nice features: [working directly with the token_ids](https://docs.sglang.ai/backend/native_api.html#Skip-Tokenizer-and-Detokenizer) and the capacity to [reload new model weights from disk](https://docs.sglang.ai/backend/native_api.html#Update-Weights-From-Disk).
> > The weight reload is useful in two settings: iterative GRPO, which is already available in TRL via this [PR](https://github.com/huggingface/trl/pull/2700), and also async GRPO, which may be required in the future to scale up the training.
@edbeeching Thanks! I'm planning some further decoupling and efficiency gains; once this is merged, I will try to add something on top of it this weekend. | 2,763 | 525 |
HuggingFaceDocBuilderDev | 2025-02-04T11:54:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,762 | 526 |
shirinyamani | 2025-02-04T18:45:19 | Did you make sure the trainer and the model are on the same device?
Also, the section "Reference model considerations with PEFT" [here](https://huggingface.co./docs/trl/main/en/dpo_trainer#reference-model-considerations-with-peft) might be helpful. | 2,760 | 527 |
Superskyyy | 2025-02-04T14:15:07 | https://github.com/huggingface/trl/issues/2608#issuecomment-2609844003 | 2,759 | 528 |
JohnGiorgi | 2025-02-04T16:09:37 | Huh, it looks like it comes down to what you use as the pad token itself. If I use one of Llama's unused special tokens, `<|reserved_special_token_0|>` (pad token id `128002`), it works! If I use the EOS token as padding, it doesn't...
<img width="386" alt="Image" src="https://github.com/user-attachments/assets/da13c989-e257-49e3-b05c-8b5bf3f7d782" />
Another callout is that in either case, it looks like we somehow end up with two BOS tokens in the chosen/rejected pairs:
<img width="1035" alt="Image" src="https://github.com/user-attachments/assets/cdacf681-164d-4494-bd8f-e121b15a09fa" /> | 2,758 | 529 |
qgallouedec | 2025-02-04T10:14:39 | Super cool! | 2,757 | 530 |
jecummin | 2025-02-04T22:39:15 | Also interested in seeing multi-GPU generation for GRPO! It would be nice to use tensor parallelism during generation with vLLM. | 2,754 | 531 |
Superskyyy | 2025-02-05T04:50:51 | Apparently it's not possible now; vLLM can only work on a single node due to the built-in assumptions of accelerate. The way to use it would be to decouple vLLM orchestration from the main process. We are going to contribute this upstream in the near future. | 2,754 | 532 |
casper-hansen | 2025-02-10T14:13:12 | @qgallouedec @lewtun It would be incredible if you guys could take a look at this! At the moment, a lot of us users want to run multi-node training with vLLM, as it helps speed things up. | 2,754 | 533 |
RicardoDominguez | 2025-02-12T14:18:32 | Splitting examples across multiple GPUs at generation time (i.e., data parallel) would be incredibly useful. Is there some bottleneck for getting this running? Happy to contribute. | 2,754 | 534 |
zhc7 | 2025-02-13T10:32:56 | Is it possible to run vLLM on the local main process instead of the main process to support the multi-node scenario? E.g., use cuda:7 for vLLM on every node. | 2,754 | 535 |
Superskyyy | 2025-02-14T05:51:40 | > Is it possible to run vLLM on the local main process instead of the main process to support the multi-node scenario? E.g., use cuda:7 for vLLM on every node.
We will open a PR early next week to support this. Currently waiting on some other important PRs to merge first. | 2,754 | 536 |
akaanirban | 2025-02-19T18:06:50 | @Superskyyy wondering if there's any progress on the multi-node GRPO training PR you spoke about last week. We are trying some bigger models and single-node training is not possible :( | 2,754 | 537 |
Superskyyy | 2025-02-19T23:30:08 | > [@Superskyyy](https://github.com/Superskyyy?rgh-link-date=2025-02-19T18%3A06%3A50.000Z) wondering if there's any progress on the multi-node GRPO training PR you spoke about last week. We are trying some bigger models and single-node training is not possible :(
Hi, we are actively working on it; please stay tuned :) It's coming a bit late. | 2,754 | 538 |
ticosir | 2025-02-21T07:09:48 | +1, I need multi-node training too. help us~ | 2,754 | 539 |
cuiyuhao1996 | 2025-02-24T03:32:02 | +1, This is a very urgent demand. I've been waiting weeks for this, even thinking about using openrlhf... | 2,754 | 540 |
akaanirban | 2025-02-24T03:52:06 | +1 Agree, this is indeed very urgent. We've also been looking into OpenInstruct and OpenRLHF, but their uneven GPU distribution per node is handled internally using Ray actors. Here, since TRL uses `accelerate`, we are hitting a dead end in working around this limitation and hacking our way through.
The best workaround we found allows multi-node training but with GPU wastage. For example, if training on 4 nodes with 8 GPUs each using DeepSpeed, you can technically run the training with 28 GPUs and assign `cuda:7` of `node 0` to vLLM. However, because accelerate enforces an equal number of GPUs per node, using 1 GPU for vLLM ends up wasting 3 more across other nodes. (I noticed that accelerate has options to accept a DeepSpeed hostfile, but since my system runs on Kubernetes, I have to use the `standard` launcher which I believe does not use the hostfile even if I specify uneven slots).
Firstly, many thanks to @Superskyyy for developing this! It would be extremely helpful to know the expected timeframe for the upcoming PR or when these changes are likely to be implemented.
Out of intellectual curiosity, @Superskyyy, how are you planning to implement this at a high level? | 2,754 | 541 |
zhc7 | 2025-02-24T04:09:52 | Actually, I've already managed to do this. The basic idea is to replace every `is_main_process` with `is_local_main_process` and change the global broadcast and gather to ones within the same node. Also, some of the latest vLLM fixes are required; for me, commit `9cf47594` is roughly enough. Then you just need to fix wherever errors occur. | 2,754 | 542 |
Superskyyy | 2025-02-04T17:29:55 | Your vLLM version is too low; you should clean up the cached version and retry. The version is not constrained at the TRL setup.py level. | 2,753 | 543 |
qgallouedec | 2025-02-04T18:19:03 | What should be the min version to set in setup? | 2,753 | 544 |
Superskyyy | 2025-02-04T20:15:23 | > What should be the min version to set in setup?
The latest PR already pins it to 0.7.1: https://github.com/huggingface/trl/pull/2766 | 2,753 | 545 |
rawathemant246 | 2025-02-14T15:51:49 | @Superskyyy @qgallouedec Thank you for your support. Sorry for replying so late; I had an issue with the Linux kernel (it crashed and went into kernel panic) and also had an interview scheduled in between, so I couldn't reply back to you.
I'll try to set this up today. @Superskyyy, what you're suggesting is that vLLM is not the constraint, so I can bump its version if I hit the error again, right? | 2,753 | 546 |
Superskyyy | 2025-02-14T21:47:37 | > @Superskyyy @qgallouedec Thank you for your support. Sorry for replying so late; I had an issue with the Linux kernel (it crashed and went into kernel panic) and also had an interview scheduled in between, so I couldn't reply back to you.
>
> I'll try to set this up today. @Superskyyy, what you're suggesting is that vLLM is not the constraint, so I can bump its version if I hit the error again, right?
Yes vLLM 0.7.2 is fine. | 2,753 | 547 |
rawathemant246 | 2025-02-15T10:20:52 | @Superskyyy vLLM does not support Python 3.13, so I downgraded to 3.12.
Also, I updated vLLM to 0.7.2 in setup.py and it works.
| 2,753 | 548 |
xuefei1 | 2025-02-03T19:38:04 | I had the same question. I also specifically wonder about this line: why is it `per_token_logps - per_token_logps.detach()` inside `torch.exp`? Shouldn't it be `ref_per_token_logps`?
Original line: https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L521 | 2,752 | 549 |
cfpark00 | 2025-02-04T03:05:31 | @liranringel
I'm unsure about the gradient coefficient form, but the implementation seems consistent (well, in terms of log(p) vs. p) with the following (from the DeepSeekMath paper)?
<img width="856" alt="Image" src="https://github.com/user-attachments/assets/823f72fe-821b-4cd0-a686-c3773e87f835" />
----
Then of course there is no clipping here. And as @xuefei1 says, there currently seems to be no distinction between old_policy and ref_policy?
<img width="828" alt="Image" src="https://github.com/user-attachments/assets/3e4784a2-7b4b-4216-8711-9503271540ac" />
| 2,752 | 550 |
liranringel | 2025-02-04T21:52:11 | @cfpark00 But they explicitly state that the policy update maximizes the GRPO objective as per Equation 21, which is the weighted log probability version. This suggests that the theoretical formulation is indeed meant to use log probabilities, yet the implementation appears to use probabilities instead.
Also, I'm still confused about Equation 20—it's unclear how they derived it. A derivation similar to policy gradients doesn't seem to work here, since the probability inside the expectation does not depend on $\theta$. | 2,752 | 551 |
qgallouedec | 2025-02-04T22:23:13 | I think there's a misunderstanding. In the paper it is written
> "Update the policy model $\pi_\theta$ by maximizing the GRPO objective (Equation 21)".
Equation (21) gives the gradient of the objective, but what we're trying to optimize is the objective, not the gradient. This is why we use equation (3) in practice, and autograd takes care of the differentiation and optimization step. | 2,752 | 552 |
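A toy illustration of what the detach trick in the implementation does (not the TRL code itself): the factor `exp(logp - logp.detach())` evaluates to 1 in the forward pass, but its gradient is `grad(logp)`, so multiplying it by the advantage reproduces the policy-gradient term that the objective yields when the old policy equals the current one.

```python
import torch

logits = torch.randn(5, requires_grad=True)
logp = torch.log_softmax(logits, dim=-1)[2]   # log-prob of one sampled token
advantage = torch.tensor(0.7)

ratio = torch.exp(logp - logp.detach())       # forward value is exactly 1.0
loss = -(ratio * advantage)
loss.backward()

print(ratio.item())   # 1.0
print(logits.grad)    # equals -advantage * d(logp)/d(logits)
```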
liranringel | 2025-02-05T16:40:04 | @qgallouedec Well, the point is that in RL, the loss is not always just the (minus) expression inside the expectation.
If the expectation is taken over a distribution that depends on the model parameters, we have to account for that—typically using the log trick (see [this explanation from ChatGPT](https://chatgpt.com/share/67a39058-a6f8-8013-8987-0f9af8ebef36)).
Thus, I suspect that the DeepSeek researchers actually treated the distribution inside the expectation as parameterized by $\pi_\theta$, which allowed them to apply the log trick. I suspect that this might be the reason the logarithm appears in equation 20, though I am not certain.
To clarify, I believe that the gradient computed using autograd in the current implementation differs from the gradient in Equation 20 of the paper. | 2,752 | 553 |
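For readers following the thread, this is the log-derivative (score-function) identity being invoked, stated in my own notation for a reward-like term $f$ that does not itself depend on $\theta$:

$$
\nabla_\theta \, \mathbb{E}_{o \sim \pi_\theta}\big[f(o)\big]
= \mathbb{E}_{o \sim \pi_\theta}\big[f(o)\, \nabla_\theta \log \pi_\theta(o)\big]
$$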
cfpark00 | 2025-02-05T18:39:38 | I see... I'm too much of a beginner in RL to offer a real opinion here. However, I can point you towards the corresponding lines in:
open-instruct:
https://github.com/allenai/open-instruct/blob/6b3964bcbc819299ca1269b3c306cc498355e488/open_instruct/grpo_vllm_thread_ray_gtrl.py#L1205
verl:
https://github.com/volcengine/verl/blob/6872dbefd5258643db247febf7ca68748d13e09c/verl/trainer/ppo/core_algos.py#L185
openrlhf:
https://github.com/OpenRLHF/OpenRLHF/blob/631b0bcb7c14ec2fd4117a43a661903ed60d26cc/openrlhf/models/loss.py#L72
It seems like they are all implementing the same loss as TRL?
@qgallouedec would it be worth tagging the devs of each library for a discussion or not? | 2,752 | 554 |
liranringel | 2025-02-06T16:08:34 | @cfpark00 @qgallouedec I continued my derivation using the log trick and can confirm that your implementation matches Equation 20, so I am closing this issue.
Thanks for the help!
Here’s the derivation:
 | 2,752 | 555 |
Paulmzr | 2025-02-24T10:59:58 | > [@cfpark00](https://github.com/cfpark00) [@qgallouedec](https://github.com/qgallouedec) I continued my derivation using the log trick and can confirm that your implementation matches Equation 20, so I am closing this issue.
> Thanks for the help!
>
> Here’s the derivation:
>
> 
The derivation is NOT right. When you try to estimate the KL using $q/p - \log(q/p) - 1$, you need to sample from $p$ rather than $q$. So the sum in the KL term should be over samples from $\pi_\theta$, NOT $\pi_{\text{ref}}$! | 2,752 | 556 |
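To make the sampling distribution explicit, the estimator under discussion (Schulman's $k_3$ estimator, written in my notation) is an expectation over samples drawn from $\pi_\theta$:

$$
D_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)
\approx \mathbb{E}_{o \sim \pi_\theta}\!\left[\frac{\pi_{\mathrm{ref}}(o)}{\pi_\theta(o)} - \log\frac{\pi_{\mathrm{ref}}(o)}{\pi_\theta(o)} - 1\right]
$$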
imrankh46 | 2025-02-03T13:06:44 | @kashif @qgallouedec | 2,751 | 557 |
Superskyyy | 2025-02-03T16:41:51 | It makes a lot of sense to decouple vLLM deployment out, using an API endpoint that you can specify, because, in reality, at a larger scale, one would set up a dedicated vLLM cluster with InfiniBand just for doing inference. The current one-card approach is somewhat rigid. | 2,751 | 558 |
qgallouedec | 2025-02-03T18:20:39 | The main obstacle to this approach is the ability to load model weights quickly. This is what we tried with #2658 but currently vLLM doesn't support it (no endpoint, or way to load weights quickly) which makes this architecture prohibitively slow. | 2,751 | 559 |
imrankh46 | 2025-02-04T03:14:04 | @qgallouedec Actually, vLLM loads checkpoints very fast.
But with the new version, vLLM sometimes hangs and is not able to load the model.
- Tip: after installing all these packages, make sure to restart the kernel, so it will work. | 2,751 | 560 |
HuggingFaceDocBuilderDev | 2025-02-03T09:06:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2750). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,750 | 561 |
shirinyamani | 2025-02-04T00:32:17 | That's a very good point, thanks for pointing it out! A quick follow-up for my own knowledge: I could not find `torch.compile()` in the main [grpo_trainer](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py), so could you please help me understand which part of the code specifies the memory-hierarchy-aware behaviour like `torch.compile`, in your view?
Because to the best of my knowledge we could do something like `model = torch.compile(model)`, but of course we have to make sure it is compatible with the rest of the computation.
@winglian @kashif | 2,750 | 562 |
winglian | 2025-02-04T13:50:55 | The base `TrainingArguments` [from transformers](https://github.com/huggingface/transformers/blob/a93b80588b10d567ef7d70143b6581df8e601a8b/src/transformers/training_args.py#L744-L753) includes a `torch_compile` option, so you can simply set that on `GRPOConfig`.
> That's a very good point, thanks for pointing it out! A quick follow-up for my own knowledge: I could not find `torch.compile()` in the main [grpo_trainer](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py), so could you please help me understand which part of the code specifies the memory-hierarchy-aware behaviour like `torch.compile`, in your view?
>
> Because to the best of my knowledge we could do something like `model = torch.compile(model)`, but of course we have to make sure it is compatible with the rest of the computation.
| 2,750 | 563 |
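A minimal sketch of what that looks like in practice; since `GRPOConfig` inherits from the transformers `TrainingArguments`, the flag should pass straight through (the other values here are placeholders):

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="grpo-compiled",   # placeholder
    torch_compile=True,           # inherited from transformers TrainingArguments
    learning_rate=1e-6,           # placeholder
)
```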
winglian | 2025-02-04T13:51:50 | @qgallouedec I rebased this so the merge conflict should be resolved. thanks! | 2,750 | 564 |
winglian | 2025-02-04T16:50:49 | I could also move this into the `unwrap_model_for_generation` function, but I'm not 100% sure about the deepspeed behavior.
<img width="990" alt="Screenshot 2025-02-04 at 11 50 18 AM" src="https://github.com/user-attachments/assets/521fba21-010b-4f6d-924b-efaca7559ba3" />
| 2,750 | 565 |
whitead | 2025-02-03T06:50:32 | wrong upstream! Sorry | 2,749 | 566 |
winglian | 2025-02-03T15:28:46 | this is fixed by removing `messages` column from dataset | 2,748 | 567 |
qgallouedec | 2025-02-04T19:32:24 | Surprising, since the device placement should be handled by
```python
prompt_inputs = super()._prepare_inputs(prompt_inputs)
```
can you provide a code to reproduce? | 2,747 | 568 |
tchang1997 | 2025-02-03T01:03:48 | What version of vLLM are you using? I had a similar issue when using vLLM for inference — for me, the issue was that I was using vLLM v0.6, and upgrading to 0.7.1 resolved this error.
Basically, `vllm/worker/model_runner.py` was using `.cuda()` to change tensor devices instead of setting `device=[correct device name]` until a [bugfix on 1/4/2025](https://github.com/vllm-project/vllm/commit/300acb83472512b14ec7ba8cdf45efe07e8c8f68), which is included in the 0.7 release.
Perhaps a brief line on vLLM version requirements could be added to the docs, if it isn't present already?
| 2,745 | 569 |
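A tiny illustration of the distinction behind that bugfix, in plain PyTorch rather than the vLLM code: `.cuda()` with no argument places the tensor on the current CUDA device, while an explicit `device=` honours whatever device the caller configured.

```python
import torch

if torch.cuda.device_count() >= 2:
    torch.cuda.set_device(0)              # ambient "current" device
    intended = torch.device("cuda:1")     # device the trainer was configured to use

    a = torch.zeros(3).cuda()             # lands on cuda:0, ignoring `intended`
    b = torch.zeros(3, device=intended)   # lands on cuda:1 as configured

    print(a.device, b.device)             # cuda:0 cuda:1
```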
3rdAT | 2025-02-03T01:14:14 |
Hi @tchang1997, I was actually using vLLM==0.6.6.post1. The updated version works! Thanks!
Also, I would like to note that the model needs to be loaded with flash-attention to work flawlessly with vLLM. Adding this to the documentation for GRPO would also be beneficial for people who are new to this.
Thanks for the help!
| 2,745 | 570 |
Superskyyy | 2025-02-03T01:47:49 | The inference is done based on the current version of the policy model. Or are you saying that you want to add the value model back? If so, that's essentially going back to PPO; GRPO is not intended to do this. | 2,744 | 571 |
hallerite | 2025-02-04T01:11:17 | I was suggesting that we could do the inference for the advantage estimation with an older version of the policy model and update it every n steps. We could then quantize the current model and use the quantized version of the policy model for inference. This would allow us to increase the number of sampled generations, which might offset the fact that we are now lagging behind the current policy model (which isn't too different, given KL regularization).
When writing this suggestion I did not think about the fact that it wouldn't be classic GRPO anymore, defeating the purpose of extending the GRPO class, so I think we can delete this issue. | 2,744 | 572 |
JohnGiorgi | 2025-02-02T23:55:27 | Closing because I wasn't able to reproduce this on closer inspection... I can also see `model.score.parameters()` is updated after training even with `target_modules=all-linear`... | 2,743 | 573 |