Columns: user (string), created_at (timestamp[us]), body (string), issue_number (int64), __index_level_0__ (int64)
mehdiataei
2025-02-13T17:13:59
Using the Qwen1.5 instruct model I get the following error:

```
[rank0]: trainer.train()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2171, in train
[rank0]:     return inner_training_loop(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank0]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3669, in training_step
[rank0]:     inputs = self._prepare_inputs(inputs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/grpo_trainer.py", line 535, in _prepare_inputs
[rank0]:     self._move_model_to_vllm()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/grpo_trainer.py", line 515, in _move_model_to_vllm
[rank0]:     llm_model.load_weights(state_dict.items())
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 515, in load_weights
[rank0]:     return loader.load_weights(weights)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 235, in load_weights
[rank0]:     autoloaded_weights = set(self._load_module("", self.module, weights))
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 224, in _load_module
[rank0]:     raise ValueError(msg)
[rank0]: ValueError: There is no module or parameter named 'base_model' in Qwen2ForCausalLM
```

The trainer is set up as:

```python
trainer = GRPOTrainer(
    model=model,
    reward_funcs=[format_reward, judge_reward],
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
)
```

with the following vLLM settings:

```python
use_vllm=True,                    # Whether to use vLLM for faster generation (default: False)
vllm_device="cuda:7",             # Device for vLLM generation (e.g., "cuda:1"); "auto" selects the next available GPU
vllm_gpu_memory_utilization=0.4,  # Fraction of GPU memory to reserve for vLLM (default: 0.9)
vllm_dtype="auto",                # Data type for vLLM generation; "auto" lets vLLM decide based on model config
vllm_max_model_len=512,           # Optional maximum model length for vLLM; if None, uses the model's context size
```

Another weird thing I noticed is that

```
INFO 02-13 17:12:27 model_runner.py:1115] Loading model weights took 0.0000 GB
INFO 02-13 17:12:28 worker.py:267] Memory profiling takes 0.48 seconds
INFO 02-13 17:12:28 worker.py:267] the current vLLM instance can use total_gpu_memory (39.39GiB) x gpu_memory_utilization (0.40) = 15.76GiB
INFO 02-13 17:12:28 worker.py:267] model weights take 0.00GiB; non_torch_memory takes 0.00GiB; PyTorch activation peak memory takes 0.00GiB; the rest of the memory reserved for KV Cache is 15.76GiB
```

Why do the model weights take 0.00GiB?
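For reference, a minimal sketch of one possible workaround (an assumption, not the official fix; `sync_peft_weights_to_vllm` is a hypothetical helper): the error comes from the PEFT wrapper's `base_model.` prefix in the state-dict keys, so merging the adapter and cleaning up the key names before `load_weights` should give vLLM keys it recognizes.

```python
# A rough sketch, not the official fix: temporarily merge the LoRA adapter and
# clean up the PEFT key names so they match what vLLM's Qwen2ForCausalLM
# expects. `model` is assumed to be a PeftModel; `llm_model` is the vLLM model
# object that load_weights() is called on in the traceback above.
def sync_peft_weights_to_vllm(model, llm_model):
    model.merge_adapter()                      # fold the LoRA deltas into the base weights
    state_dict = model.state_dict()
    state_dict = {
        k.removeprefix("base_model.model.").replace(".base_layer", ""): v
        for k, v in state_dict.items()
        if "lora_" not in k                    # skip adapter-only tensors
    }
    llm_model.load_weights(state_dict.items())
    model.unmerge_adapter()                    # restore the adapter for further training
```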
2,818
312
zaddy6
2025-02-14T11:28:21
I noticed that training without LoRA leads to better performance. Here is an example: without LoRA it starts to max out the rewards at 1k steps, while with LoRA it doesn't learn. <img width="968" alt="image" src="https://github.com/user-attachments/assets/0b682b5d-5b10-4ee6-87ea-e8574155b122" />
2,818
313
winglian
2025-02-14T12:01:37
> I noticed that training without LoRA leads to better performance. Here is an example: without LoRA it starts to max out the rewards at 1k steps, while with LoRA it doesn't learn.

What rank and dataset? It learns pretty quickly with rank 64 on the gsm8k dataset.
2,818
314
zaddy6
2025-02-14T14:15:46
> > I noticed that training without LoRA leads to better performance. Here is an example: without LoRA it starts to max out the rewards at 1k steps, while with LoRA it doesn't learn.
>
> What rank and dataset? It learns pretty quickly with rank 64 on the gsm8k dataset.

Current config:

```python
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules="all-linear",
    lora_dropout=0.05,
    use_dora=True,
)
```

What do you use as your alpha?
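For reference, one commonly cited heuristic (an editorial note, not necessarily what was used in the rank-64 run mentioned above) is to set `lora_alpha` to roughly 1–2× the rank:

```python
from peft import LoraConfig

# Illustrative values only: rank 64 as mentioned above, with the common
# alpha = 2 * r heuristic.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules="all-linear",
    lora_dropout=0.05,
)
```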
2,818
315
HuggingFaceDocBuilderDev
2025-02-10T13:53:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,817
316
jymChen
2025-02-10T14:04:55
Run `pip install vllm==0.7.1`, and it works.
2,816
317
Superskyyy
2025-02-10T07:20:48
Multi-node is currently not possible, but we are working on it. Meanwhile, if your training only uses <= 8 cards, you can try to make vLLM work on a single node while reserving two cards, and set the LLM in GRPOTrainer to use tp=2. It should work.
2,814
318
whw199833
2025-02-10T10:19:58
+1 I need multi-node training too. And Qwen72B cannot launch within 1 card.
2,814
319
ticosir
2025-02-21T07:07:58
+1 I need multi-node training too. help~
2,814
320
Superskyyy
2025-02-10T03:26:01
Your issue isn't complete. What do you mean by intensive? The GPU requirement highly depends on the dataset characteristics and the algorithm used. Also, PEFT is for cases where GPU memory is very limited.
2,813
321
lonngxiang
2025-02-10T08:49:57
For example, what is the minimum amount of GPU resources required for the official reinforcement learning examples?

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Dummy reward function: rewards completions that are close to 20 characters
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```
2,813
322
Superskyyy
2025-02-10T09:01:07
> For example, what is the minimum amount of GPU resources required for the official reinforcement learning examples?
>
> ```python
> from datasets import load_dataset
> from trl import GRPOConfig, GRPOTrainer
>
> dataset = load_dataset("trl-lib/tldr", split="train")
>
> # Dummy reward function: rewards completions that are close to 20 characters
> def reward_len(completions, **kwargs):
>     return [-abs(20 - len(completion)) for completion in completions]
>
> training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10)
> trainer = GRPOTrainer(
>     model="Qwen/Qwen2-0.5B-Instruct",
>     reward_funcs=reward_len,
>     args=training_args,
>     train_dataset=dataset,
> )
> trainer.train()
> ```

If you tune down the hyperparameters, namely num_generations, max_completion_length, and the batch size to 1, with gradient accumulation steps set to 8, effectively anything with 16GB of VRAM should be able to train it for fun. For serious RL training, A100s to H100s are common.
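To make the low-memory advice above concrete, here is a sketch with illustrative values (not an official recommendation; depending on the TRL version, num_generations may need to divide the effective batch size):

```python
from trl import GRPOConfig

# Illustrative low-memory settings for a small model like Qwen2-0.5B-Instruct.
training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_generations=2,          # smallest group size that still gives a relative advantage
    max_prompt_length=256,
    max_completion_length=64,
    logging_steps=10,
)
```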
2,813
323
lonngxiang
2025-02-11T08:41:54
Using the free Colab T4, it's not working. Could you please provide some test code?
2,813
324
vaibhavjindal
2025-02-21T00:54:05
https://github.com/huggingface/trl/issues/2495
2,812
325
qgallouedec
2025-02-10T14:15:24
Thanks for contributing @kldzj! For the record, can you briefly explain the motivation behind using GuidedDecodingParams?
2,811
326
HuggingFaceDocBuilderDev
2025-02-10T14:18:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2811). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,811
327
kldzj
2025-02-10T14:20:53
@qgallouedec When using the GRPO trainer, we likely want the model to respond in a specific format, in the example above we enforce the `<reasoning>\n...\n</reasoning>\n<answer>\n...\n</answer>` format right away, without spending many training steps for the model to learn the correct format through our reward functions. Let me know if there's any problem or flaw in my logic with this.
2,811
328
qgallouedec
2025-02-10T16:59:20
It's very interesting. Regarding the implementation, it's a bit annoying because `GuidedDecodingParams` isn't JSON serializable, so it causes an error. A fair alternative is to do it like this instead:

```python
@dataclass
class GRPOConfig(TrainingArguments):
    ...
    vllm_guided_decoding_regex: Optional[str] = None
```

and

```python
if args.vllm_guided_decoding_regex is not None:
    guided_decoding = GuidedDecodingParams(backend="outlines", regex=args.vllm_guided_decoding_regex)
```

It's less flexible, but it explicitly exposes the regex and is probably easier for the user.
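For illustration, usage could then look like this (a sketch assuming the `vllm_guided_decoding_regex` argument lands as proposed; the regex is the format from the earlier comment):

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    use_vllm=True,
    # Enforce the <reasoning>...</reasoning><answer>...</answer> format at decode time
    vllm_guided_decoding_regex=r"<reasoning>\n.*?\n</reasoning>\n<answer>\n.*?\n</answer>",
)
```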
2,811
329
kldzj
2025-02-10T21:14:34
@qgallouedec Made the suggested change. :)
2,811
330
HuggingFaceDocBuilderDev
2025-02-12T11:42:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2810). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,810
331
willccbb
2025-02-12T13:32:43
I'm not quite sure what you mean by wrapping the generate method here. Many parts of the codebase touch the LLM object, and wrapping it in another object would require changing the access in each place. Overriding generate entails a high amount of complexity on the user end, as most applications will want to use the true generate (or chat, which calls generate) method. Adding an if/else Environment route like in this PR is the simplest approach I could think of that allows users to directly use the LLM object as-is in their rollouts, while still allowing the Trainer to reference the LLM normally throughout. Note that this enables many requested features (tool use, sampling strategies, agentic interactions) to be encapsulated within Environments, avoiding further complexity down the road. If you have something specific in mind that you could illustrate with a short snippet, I'm happy to try.
2,810
332
qgallouedec
2025-02-12T16:15:22
Actually it can be pretty straightforward and simple:

```python
def wrapper_decorator(generate_func):
    def generate_wrapper(*args, **kwargs):
        ...  # stuff before
        result = generate_func(*args, **kwargs)
        ...  # stuff after
        return result
    return generate_wrapper

trainer.llm.model.generate = wrapper_decorator(trainer.llm.model.generate)
```
2,810
333
accupham
2025-02-12T17:07:53
Seems like the wrapper idea could be implemented internally. The user interface could remain the same with the environments idea.

Also, I think there might be a bug in the `DoubleCheckEnv.generate` example. I think this is the correct implementation:

```python
# original
completion_ids = [all_ids[states[i]["prompt_tokens"]:] for i, output in enumerate(outputs)]

# fixed
completion_ids = [all_ids[i][states[i]["prompt_tokens"]:] for i in range(len(prompts))]
```
2,810
334
willccbb
2025-02-12T17:25:24
@accupham thanks, you may be right, will double-check + test shortly.

@qgallouedec I would strongly prefer a solution which doesn't require overriding generate for the LLM class directly, as this significantly complicates and limits what can be done with samplers/environments. I think the proposed Environment protocol is about as minimal as possible for accomplishing this. For example, many environments will want to make multiple calls to LLM.chat(...) (which [calls](https://github.com/vllm-project/vllm/blob/09972e716c4a90bfd4385540c9f478e18b4efb2d/vllm/entrypoints/llm.py#L748) generate), and this is not possible if the method has to be overridden--see the DoubleCheck example in the PR. Users would essentially have to reimplement and maintain full copies of the logic inside of vLLM's chat method without being able to call it directly. I do understand the goal of avoiding many changes to TRL, but in my opinion this is a very small one which unlocks a large number of desired use cases.
2,810
335
willccbb
2025-02-12T23:57:29
Ah good catch, yeah I see what you mean... I think this could just be handled at the user/Environment level by setting a different EOS token for the tokenizer + manually appending it to completion_ids, though it would be good to flag somewhere. My personal preference would be to eventually allow for user-level computation of rewards + masks in addition to completion_ids, though this would require more significant code changes.
2,810
336
xiangjjj
2025-02-13T00:02:19
Thanks for the prompt response. Do you think my proposed change can be incorporated into this pull request, as it is backwards compatible with the single-step rollout implementation? I like the idea of allowing the user to have fine-grained control. For now, the environment implementation only returns `completion_ids`, which makes it difficult to organize the data in a more structured way, and some supplementary information could potentially get lost. Do you have thoughts about how to make it more flexible?
2,810
337
willccbb
2025-02-13T12:06:20
Yes, will test + add shortly. For rewards, you could make a data structure which allows storage of rewards based on rollout contents (hash of strings or message dicts), and have the reward_func passed to the trainer be a reference to an object which lives in the environment. This is easy enough that it probably doesn't need to be integrated directly now. For masks, probably best to wait and see what applications/implementations people find most useful.
2,810
338
willccbb
2025-02-13T16:34:19
@xiangjjj One complication is that many base model tokenizers have pad_token_id = eos_token_id, so when padding a batch, the "last EOS token" will be the last token in the pad sequence. Trying out a couple workarounds.
2,810
339
xiangjjj
2025-02-13T16:41:44
Ah, I see! That is tricky. Thanks for this!
2,810
340
willccbb
2025-02-14T01:01:18
The simplest solution, I think, is to move the masking logic into the respective vLLM/transformers generate routes. vLLM now masks based on the completion_ids length rather than the position of the first EOS token.
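For illustration, a small sketch of that masking idea (illustrative tensors, not the PR's code): the completion mask is derived from each completion's length instead of searching for the first EOS token, which breaks when `pad_token_id == eos_token_id`.

```python
import torch

completion_lengths = torch.tensor([5, 3, 7])              # len(completion_ids) per rollout
positions = torch.arange(int(completion_lengths.max()))   # (max_len,)
completion_mask = (positions.unsqueeze(0) < completion_lengths.unsqueeze(1)).int()
# completion_mask[i, t] == 1 for real tokens, 0 for padding
```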
2,810
341
xiangjjj
2025-02-14T02:54:39
> The simplest solution, I think, is to move the masking logic into the respective vLLM/transformers generate routes. vLLM now masks based on the completion_ids length rather than the position of the first EOS token.

Sure, it makes sense and should resolve the masking issue! Thanks for fixing this!
2,810
342
vladrad
2025-02-17T17:30:35
All, I would love to help out with this, as I am working on it myself.
2,810
343
vladrad
2025-02-17T17:52:19
@willccbb I reached out via the email on your profile! Let me know if you want to collab on this, as I am continuing to work on it.
2,810
344
qgallouedec
2025-02-17T17:52:34
I still don't understand why wrapping would limit what you can do. For example, for the double call:

```python
def wrapper_decorator(generate_func):
    def generate_wrapper(*args, **kwargs):
        ...  # stuff before
        result1 = generate_func(*args, **kwargs)
        result2 = generate_func(*args, **kwargs)
        ...  # stuff after
        return result2
    return generate_wrapper

trainer.llm.model.generate = wrapper_decorator(trainer.llm.model.generate)
```

Taking the env paradigm, I think it should work as-is with the main branch with something like:

```python
env = MyEnv(...)

def wrapper_decorator(generate_func):
    def generate_wrapper(*args, **kwargs):
        prompts = args[0]
        return env.generate(prompts, trainer.llm, *args, **kwargs)
    return generate_wrapper

trainer.llm.model.generate = wrapper_decorator(trainer.llm.model.generate)
```

I might be missing something though.
2,810
345
vladrad
2025-02-17T18:50:47
@qgallouedec I tried something similar before without success, but I'll give it another go.

I've been experimenting with combining chat and completion responses during training. The idea is to score each response based on its format and content. If a mistake is detected, another LLM—one that provides the correct answer—is consulted. This secondary LLM offers a brief hint to guide the correction, and then the response is re-generated.

For example, this dataset snippet:

```
<think>
Oh, the user wants me to call a tool.
</think>
<answer>
I am going to call X...
</answer>
<tool>
<function name=read_file>...</function>
</tool>
```

If a mistake is found (say, the tool call should be within the `<answer>` tags), the correction process would work like this:

```
<think>
Oh, I forgot the format requires tool calls to be within the answer
```

Now I go back with the hint and try to get a completion:

```
tags. This will provide the correct format the user requested.
</think>
<answer>
I am going to call X...
<tool>
<function name=read_file>...</function>
</tool>
</answer>
```

My goal is to see if I can get auto-correction via hints, scored on top of it. I have made a really hacky, overfitted solution where I was training in epoch runs like this, creating the dataset and then going back to rounds 2, 3, 4 of GRPO training... slowly guiding it to the right answer. Now I think I need to work on an actual solution; that's how I ended up here. Happy to help/code things up and test. Thanks all!
2,810
346
willccbb
2025-02-17T20:28:53
@qgallouedec biggest problem is that `LLM.chat` relies on `LLM.generate`, and many (most?) multi-step interaction protocols will want to use `LLM.chat`. If we override `generate` as you propose, any call to `chat` inside of our wrapper will result in recursive blowup. We also have to keep all of our logic contained within a single wrapper function, and we also can't easily maintain global state within/across rollouts (for things like precomputing/caching rewards to be retrieved by reward functions, which can be objects with access to the Env state). It also is just much nicer to be able to have access to the `SamplingParams` and `LLM` objects directly, as this is how people typically develop agent applications on top of vLLM. The added complexity to the trainer by allowing an Env object is pretty minor, but it unlocks quite a bit from the user perspective. Other libraries which have already built these kinds of environments (TextArena, reasoning-gym, etc.) are way easier to adapt if we can just "use the model like a normal LLM" rather than having to rewrite all of the chat parsing logic again for every application.
2,810
347
qgallouedec
2025-02-09T14:17:54
> 1. Would this multi-turn format (including multiple tool messages—one for code and one for its output) work with the current GRPO implementation?

I think it _should_ work.

> 2. Is there a recommended or “correct” format for such agentic data that includes tool usage for GRPO?

Not that I am aware of. As long as the format is supported by the chat template of the model.

> 3. Are there any special considerations for the tool responses? For instance, should we exclude those tokens from KL divergence computations (as [@xiangjjj](https://github.com/xiangjjj) mentioned in [Training Agents with GRPO #2723](https://github.com/huggingface/trl/issues/2723)) or handle them differently with attention masks?

My hunch is that, indeed, these tokens should be masked for the loss calculation (not just the KL part, but also the advantage part), although it's probably OK not to mask them. My feeling is that even if these tokens won't be generated by the model, it may still help to train the model to predict them, or at least it won't hurt the learning process.
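To make the masking idea concrete, here is a small sketch (hypothetical tensors and spans, not TRL code) of what excluding tool-response tokens from the loss would look like:

```python
import torch

per_token_loss = torch.randn(2, 6)            # (batch, completion_len), stand-in values
completion_mask = torch.ones(2, 6)            # 1 = token the model generated
tool_spans = [(2, 4), (0, 0)]                 # [start, end) of tool-output tokens per sample

for i, (start, end) in enumerate(tool_spans):
    completion_mask[i, start:end] = 0         # exclude tool tokens from loss (and KL)

loss = (per_token_loss * completion_mask).sum() / completion_mask.sum()
```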
2,809
348
qgallouedec
2025-02-09T15:42:59
Hey @August-murr, I've managed to make an agentic GRPO with just a 2-line change in GRPO; check this out! It's a toy example where the model is trained to find the max value of an unknown function, but it's fun because it works!

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer
import re
import json


def get_value(x: float) -> float:
    """
    Get the value of the function at x.

    Args:
        x: The input value.

    Returns:
        The value of the function at x.
    """
    return max(-((x / 5) ** 2) + x + 1, 0.0)


def agent_reward(completions, **kwargs):
    rewards = []
    for completion in completions:
        content = completion[0]["content"]
        # Regex pattern to find the JSON inside <tool_call>...</tool_call>
        match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", content, re.DOTALL)
        if not match:
            rewards.append(-100)
            continue
        # Try to parse the JSON content
        try:
            json_data = json.loads(match.group(1))
        except json.JSONDecodeError:
            rewards.append(-80)
            continue
        # Check if the function name is "get_value"
        function_name = json_data.get("name", "")
        if function_name != "get_value":
            rewards.append(-60)
            continue
        # Get the value of the "x" argument
        value = json_data.get("arguments", {}).get("x")
        if value is None:
            rewards.append(-40)
            continue
        # Check if the value is a float
        if not isinstance(value, (int, float)):
            rewards.append(-20)
            continue
        rewards.append(get_value(float(value)))
    return rewards


dataset = Dataset.from_list(
    [{"prompt": [{"role": "user", "content": "Call the function get_value with any value."}]}] * 200
)


def main():
    training_args = GRPOConfig(
        output_dir="Qwen2.5-0.5B-GRPO-agent",
        logging_steps=5,
        gradient_accumulation_steps=4,
        max_completion_length=128,
        max_prompt_length=128,
        bf16=True,
        log_completions=True,
    )
    trainer = GRPOTrainer(
        model="Qwen/Qwen2.5-0.5B-Instruct",
        reward_funcs=agent_reward,
        args=training_args,
        train_dataset=dataset,
        tools=[get_value],
    )
    trainer.train()


if __name__ == "__main__":
    main()
```

<img width="1275" alt="Image" src="https://github.com/user-attachments/assets/ef135060-514e-457b-a2d6-97ac24aacde2" />
2,809
349
qgallouedec
2025-02-09T15:56:00
Btw, the changes in GRPO:

```diff
diff --git a/trl/trainer/grpo_trainer.py b/trl/trainer/grpo_trainer.py
index bab86f7b..74838b45 100644
--- a/trl/trainer/grpo_trainer.py
+++ b/trl/trainer/grpo_trainer.py
@@ -193,6 +193,7 @@ class GRPOTrainer(Trainer):
         callbacks: Optional[list[TrainerCallback]] = None,
         optimizers: tuple[Optional[torch.optim.Optimizer], Optional[torch.optim.lr_scheduler.LambdaLR]] = (None, None),
         peft_config: Optional["PeftConfig"] = None,
+        tools: Optional[list[Union[dict, Callable]]] = None,
     ):
         # Args
         if args is None:
@@ -282,6 +283,9 @@ class GRPOTrainer(Trainer):
             def data_collator(features):  # No data collation is needed in GRPO
                 return features
 
+        # Tools
+        self.tools = tools
+
         # Training arguments
         self.max_prompt_length = args.max_prompt_length
         self.max_completion_length = args.max_completion_length  # = |o_i| in the GRPO paper
@@ -447,7 +451,7 @@ class GRPOTrainer(Trainer):
     def _prepare_inputs(self, inputs: dict[str, Union[torch.Tensor, Any]]) -> dict[str, Union[torch.Tensor, Any]]:
         device = self.accelerator.device
         prompts = [x["prompt"] for x in inputs]
-        prompts_text = [maybe_apply_chat_template(example, self.processing_class)["prompt"] for example in inputs]
+        prompts_text = [maybe_apply_chat_template(example, self.processing_class, tools=self.tools)["prompt"] for example in inputs]
         prompt_inputs = self.processing_class(
             prompts_text, return_tensors="pt", padding=True, padding_side="left", add_special_tokens=False
         )
```
2,809
350
NickyDark1
2025-02-09T16:32:25
I don't know if I'm wrong, but in the URL https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py I notice that the `tools` argument of `GRPOTrainer` (`tools=[get_value]`) is not there.
2,809
351
qgallouedec
2025-02-09T16:34:16
Yes:

> with just a 2-line change in GRPO

described here: https://github.com/huggingface/trl/issues/2809#issuecomment-2646369933
2,809
352
NickyDark1
2025-02-09T17:04:26
Thank you very much for the prompt response. But then `get_value` is applied to all rows; how could you give different rows different callable functions, or skip function calling for rows where you don't want it?
2,809
353
willccbb
2025-02-09T17:58:55
The idea with this PR (https://github.com/huggingface/trl/pull/2810) is to offload all of the rollout logic into an `Environment` object which just needs to return `completion_ids` for each prompt. Then it's up to the user to determine how the rollouts are constructed, and we don't need to hardcode any assumptions about tool use into TRL. Would this address most of what people are hoping for from multi-step/agentic use cases?
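For illustration, a minimal sketch of the kind of interface described above (an assumption, not the PR's exact signature): the trainer hands the prompts, the vLLM engine, and the sampling params to the Environment, which returns the completion token ids for each prompt.

```python
from typing import Protocol

from vllm import LLM, SamplingParams


class Environment(Protocol):
    def generate(
        self,
        prompts: list[list[dict[str, str]]],   # chat-format prompts
        llm: LLM,                               # the trainer's vLLM engine
        sampling_params: SamplingParams,
    ) -> list[list[int]]:                       # completion_ids per prompt
        ...
```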
2,809
354
korbinian-hoermann
2025-02-11T17:44:43
Does this mean the current implementation of GRPO supports a multi-turn, trajectory-level format for agent tasks—where a full trajectory (o₁, a₁, …, oₙ, aₙ) is evaluated by an outcome reward model? Or must I split the trajectory into individual observation–action pairs and design a reward function that operates at the step level?
2,809
355
willccbb
2025-02-11T18:53:49
The PR is not merged yet, would appreciate any comments over there (or if someone can be assigned to review). But yes, it would allow full rollout logic to be customized by the user (with outcome rewards over the entire trajectory). Separately, I am working on a [repo](https://github.com/willccbb/verifiers) which would plug into TRL via this interface, and would provide some useful primitives for building these kinds of interaction loops.
2,809
356
Benjoyo
2025-02-10T17:12:31
Multimodal support would be great!
2,807
357
Syazvinski
2025-02-22T05:07:26
+999
2,807
358
davidhughhenrymack
2025-02-22T19:58:34
I am essentially patching this in a local fork so that I can train a vision model. If I made a simple PR to support this just for non-vLLM generation, would that be acceptable? I'm envisioning a sort of minimal viable solution:

- The trainer accepts a collator function argument.
- During the prepare-and-generate step, this is used in preference to the trainer's own chat-template application and processing class.
- All of the columns output by the collator function are passed into the generate call, so that you can have pixel data as well as token ids and attention masks.

I need to think a little bit about how this interacts with the prompt text generation currently; maybe as a simple interim step we require the collator to provide both the tokenized and untokenized prompt.
2,807
359
Benjoyo
2025-02-23T09:27:42
@davidhughhenrymack I'd say if you have something working, make a PR for sure. Even if it doesn’t get merged, people can use it and tinker with it!
2,807
360
qgallouedec
2025-02-23T09:52:41
The way to go is not to support a collator. Instead, one should modify the generation part and the loss computation.
2,807
361
davidhughhenrymack
2025-02-23T16:34:18
@qgallouedec can you point to any of the trainers/example scripts as an example of an acceptable architectural approach? I have some cycles I can put into a PR on this this week.
2,807
362
qgallouedec
2025-02-08T19:12:02
Do you have any reference that suggests that training without the KL term can give good results?
2,806
363
ingambe
2025-02-08T20:01:23
Running the example script from the docs using the PR branch code:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Define the reward function, which rewards completions that are close to 20 characters
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    logging_steps=1,
    max_grad_norm=0.2,
    beta=0,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    report_to="wandb",
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

On 8*4090 for 15 minutes:

![Screenshot 2025-02-08 at 20 57 40](https://github.com/user-attachments/assets/815dd707-8da1-4cdd-a91c-eeb03b5b6f3f)

https://wandb.ai/ingambe/huggingface/runs/6kg1jviv/workspace?nw=nwuseringambe

The model converges. The grad clip value was selected based on what worked best on another dataset.
2,806
364
qgallouedec
2025-02-08T20:28:44
No, I mean, do you have any reason to think the KL term is useless in GRPO? I'm sure the reward increases; this is actually expected. But remember, the KL term in RLHF prevents the fine-tuned policy from diverging too much from the pre-trained model, ensuring stability, safety, and generalization while balancing reward maximization.
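For context, the objective GRPO maximizes has roughly this per-token form (simplified from the DeepSeekMath paper; $\hat{A}_{i,t}$ is the group-relative advantage and $\pi_{\mathrm{ref}}$ is the frozen reference policy):

$$
\mathcal{J}(\theta)=\mathbb{E}\left[\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left(\frac{\pi_\theta(o_{i,t}\mid q,o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q,o_{i,<t})}\,\hat{A}_{i,t}\;-\;\beta\,\mathbb{D}_{\mathrm{KL}}\!\left[\pi_\theta\,\|\,\pi_{\mathrm{ref}}\right]\right)\right]
$$

Setting $\beta=0$ drops the second term, which is what removes the need to keep the reference model in memory.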
2,806
365
ingambe
2025-02-08T20:42:08
I wouldn't say it is useless; it is one of the things that prevent too-abrupt policy changes. But loss clipping and gradient clipping also contribute to that. In classical GRPO or PPO setups, where the model performs multiple updates on the same set of samples, it makes a lot of sense because it directly measures and penalizes divergence at every step. However, in a single-update setting, maintaining an extra reference model in memory and performing inference on it for the KL penalty might not always be worth it; doing more iterations with a smaller gradient clipping value is an interesting trade-off IMHO.
2,806
366
qgallouedec
2025-02-08T21:05:27
Ok. Here the reward is definitely not the metric to monitor. Instead I would monitor the completions, and run a long training with an actual dataset and reward function (not a toy example).
2,806
367
ingambe
2025-02-09T00:26:29
I am limited by the resources at my disposal, so I cannot run for long. But on a [math dataset](https://huggingface.co./datasets/ai2-adapt-dev/gsm8k_math_ground_truth) for Qwen2-0.5B I get the expected results.

https://wandb.ai/ingambe/huggingface/runs/zvdenpo8

![Screenshot 2025-02-09 at 01 13 41](https://github.com/user-attachments/assets/f5a91bbc-6eee-473a-91e6-896772910619)

Code:

```python
import re

import bitsandbytes as bnb
from datasets import load_dataset
from torch.optim.lr_scheduler import LambdaLR
from transformers import AutoModelForCausalLM
from trl import GRPOConfig, GRPOTrainer


def reward_func(completions, ground_truth, **kwargs):
    # Regular expression to capture content inside \boxed{}
    matches = [re.search(r"\\boxed\{(.*?)\}", completion[0]["content"]) for completion in completions]
    contents = [match.group(1) if match else "" for match in matches]
    # Reward 1 if the content is the same as the ground truth, 0 otherwise
    return [1.0 if c == gt else 0.0 for c, gt in zip(contents, ground_truth)]


dataset = load_dataset("ai2-adapt-dev/gsm8k_math_ground_truth", split="train")


# Preprocessing: rename messages to prompt and add a system prompt
def preprocess(example):
    system_prompt = {
        "role": "system",
        "content": "Please reason step by step, and put your final answer within \\boxed{{}}."
    }
    example["prompt"] = [system_prompt] + example["messages"]
    example["completion"] = [{
        "role": "assistant",
        "content": ""
    }]
    example["ground_truth"] = example.get("ground_truth", "")
    return example


dataset = dataset.map(preprocess).remove_columns(["messages"])

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    logging_steps=8,
    max_grad_norm=0.1,
    beta=0,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    warmup_steps=100,
    max_prompt_length=650,
    max_completion_length=350,
    num_generations=8,
    report_to="wandb",
    log_completions=True,
)

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

optimizer = bnb.optim.PagedAdamW8bit(
    model.parameters(),
    lr=1e-6,
    betas=(0.9, 0.95),
    weight_decay=0.1,
)

scheduler = LambdaLR(
    optimizer,
    lr_lambda=lambda epoch: min(1.0, (epoch + 1) / training_args.warmup_steps),
)

trainer = GRPOTrainer(
    model=model,
    reward_funcs=reward_func,
    args=training_args,
    train_dataset=dataset,
    optimizers=(optimizer, scheduler),
)
trainer.train()
```

You can check the model input/output in the artifacts.

Model Input:

```tex
<|im_start|>system
Please reason step by step, and put your final answer within \boxed{{}}.<|im_end|>
<|im_start|>user
Question: Find the domain of the expression $\frac{\sqrt{x-2}}{\sqrt{5-x}}$.}
Answer:The expressions inside each square root must be non-negative. Therefore, $x-2 \ge 0$, so $x\ge2$, and $5 - x \ge 0$, so $x \le 5$. Also, the denominator cannot be equal to zero, so $5-x>0$, which gives $x<5$. Therefore, the domain of the expression is $\boxed{[2,5)}$.
Question: If $\det \mathbf{A} = 2$ and $\det \mathbf{B} = 12,$ then find $\det (\mathbf{A} \mathbf{B}).$
Answer:We have that $\det (\mathbf{A} \mathbf{B}) = (\det \mathbf{A})(\det \mathbf{B}) = (2)(12) = \boxed{24}.$
Question: Terrell usually lifts two 20-pound weights 12 times. If he uses two 15-pound weights instead, how many times must Terrell lift them in order to lift the same total weight?
Answer:If Terrell lifts two 20-pound weights 12 times, he lifts a total of $2\cdot 12\cdot20=480$ pounds of weight. If he lifts two 15-pound weights instead for $n$ times, he will lift a total of $2\cdot15\cdot n=30n$ pounds of weight. Equating this to 480 pounds, we can solve for $n$:
\begin{align*}
30n&=480\\
\Rightarrow\qquad n&=480/30=\boxed{16}
\end{align*}
Question: If the system of equations
\begin{align*}
6x-4y&=a,\\
6y-9x &=b.
\end{align*}has a solution $(x, y)$ where $x$ and $y$ are both nonzero, find $\frac{a}{b},$ assuming $b$ is nonzero.
Answer:If we multiply the first equation by $-\frac{3}{2}$, we obtain $$6y-9x=-\frac{3}{2}a.$$Since we also know that $6y-9x=b$, we have $$-\frac{3}{2}a=b\Rightarrow\frac{a}{b}=\boxed{-\frac{2}{3}}.$$
Question: Square $ABCD$ has side length $1$ unit. Points $E$ and $F$ are on sides $AB$ and $CB$, respectively, with $AE = CF$. When the square is folded along the lines $DE$ and $DF$, sides $AD$ and $CD$ coincide and lie on diagonal $BD$. The length of segment $AE$ can be expressed in the form $\sqrt{k}-m$ units. What is the integer value of $k+m$?<|im_end|>
<|im_start|>assistant
```

Model Output:

```tex
The length of side $AD$ is the diagonal of the square, which is equal to $1$ unit. When the square is folded along the lines $DE$ and $DF$, sides $AD$ and $CD$ coincide and lie on diagonal $BD$, which is the side length of the square itself. Let $AE = CF = t$ be the length of the side of the square. Since $AE = CF$ and $AE + CF = 1$, we have $t + t = 1$, which means $2t = 1$, or $t = \frac{1}{2}$. Therefore, $AE = \sqrt{\frac{1}{2}} = \frac{\sqrt{2}}{2}$. The length of side $AD\sqrt{2}$ is also $\frac{\sqrt{2}}{2}$. So, $k+m = \frac{1}{2} + 2 = \boxed{3}$. The answer is $\boxed{3}$. The answer is $\boxed{3}$
```

Notes:

- I am limited in VRAM; therefore, I could not run it with a long context, which is not great at all, especially for this dataset, which has some inputs that are quite long.
- Qwen2 0.5B sometimes ends up repeating itself; this is a known behavior of this small model. If I had to redo it, I would additionally have added a formatting reward on top of the correctness one.
- I had to use PagedAdamW8bit due to memory pressure, which might have added some instability to the mix, but the results are not too bad for such a small model.
- Of course I could not tune the hyperparameters; I cannot run larger experiments.

To me, it is useful not to have KL penalties and to have more aggressive gradient clipping to vastly improve the training speed and reduce memory, especially with such a low default kl_penalty coefficient, but your mileage may vary.
2,806
368
BaohaoLiao
2025-02-10T13:54:56
I think adding this option makes sense. SimPO (https://arxiv.org/pdf/2405.14734) has a similar finding that a reference model is not always needed.
2,806
369
qgallouedec
2025-02-10T14:19:31
cc @edbeeching, this might interest you
2,806
370
mirceapricop
2025-02-10T19:02:12
+1 would also appreciate having this option. And wouldn't it be a pure optimization for the case where beta == 0, without affecting other runs?
2,806
371
ingambe
2025-02-13T11:25:50
> +1 would also appreciate having this option. And wouldn't it be a pure optimization for the case where beta == 0, without affecting other runs?

No, it would not affect other runs. It would not even affect people already using beta = 0, as the loss would be equivalent.

@qgallouedec any update?
2,806
372
HuggingFaceDocBuilderDev
2025-02-13T13:02:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2806). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,806
373
mdy666
2025-02-10T05:58:12
I did lots of experiments and finally found it's caused by accuracy_reward. If I set

```python
def accuracy_reward(completions, solution, **kwargs):
    return [0.5 for _ in range(len(completions))]
```

then as the steps increase, the memory in my code (which uses Triton to implement the GRPO loss) does not increase. But I don't know why this happens.
2,805
374
nomadlx
2025-02-11T03:43:13
Could you provide your implementation? Currently, on 8*80G GPUs, I can only run a 7B-sized model, and the maximum completion length can only be set to 1024. This length is far from sufficient for a reasoning model. I don't quite understand why it leads to such high VRAM consumption.
2,805
375
mdy666
2025-02-11T06:13:39
> Could you provide your implementation? Currently, on 8*80G GPUs, I can only run a 7B-sized model, and the maximum completion length can only be set to 1024. This length is far from sufficient for a reasoning model. I don't quite understand why it leads to such high VRAM consumption.

Sure! Here is my code; please give it a star, thank you.

https://github.com/mdy666/mdy_triton/tree/main/others/grpo
2,805
376
leonardtang
2025-02-17T03:36:13
> I did lots of experiments and finally found it's caused by accuracy_reward. If I set
>
> ```python
> def accuracy_reward(completions, solution, **kwargs):
>     return [0.5 for _ in range(len(completions))]
> ```
>
> then as the steps increase, the memory in my code (which uses Triton to implement the GRPO loss) does not increase. But I don't know why this happens.

You're saying that `accuracy_reward` is eating up GPU memory?
2,805
377
mdy666
2025-02-17T04:02:35
> > I did lots of experiments and finally found it's caused by accuracy_reward. If I set
> >
> > ```python
> > def accuracy_reward(completions, solution, **kwargs):
> >     return [0.5 for _ in range(len(completions))]
> > ```
> >
> > then as the steps increase, the memory in my code (which uses Triton to implement the GRPO loss) does not increase. But I don't know why this happens.
>
> You're saying that `accuracy_reward` is eating up GPU memory?

This sounds incredible, but my experiments tell me it is caused by the math_verify package. The GPU memory increases from 60G to 120G, and it increases by 20G each time, which equals 2 * bs * num_generations * seq_len * vocab_size. I think the memory for the logits is not being released.
2,805
378
maximevtush
2025-02-08T19:30:24
@kashif ready to merge?
2,804
379
srikanthsrnvs
2025-02-11T13:04:42
Yes, same issue. If you print the `all_prompts_text` variable you'll see that the prompts are not consistent.
2,803
380
srikanthsrnvs
2025-02-11T13:24:02
FYI, fixed it with the following:

```python
class RepeatSampler(Sampler):
    """
    Sampler that repeats each sample a fixed number of times in order.

    Example:
        >>> sampler = RepeatSampler(["a", "b", "c", "d"], repeat_count=2)
        >>> list(sampler)
        [0, 0, 1, 1, 2, 2, 3, 3]
    """

    def __init__(self, data_source: Sized, repeat_count: int):
        self.data_source = data_source
        self.repeat_count = repeat_count
        self.num_samples = len(data_source)

    def __iter__(self):
        # Create repeated indices in order
        indexes = [idx for idx in range(self.num_samples) for _ in range(self.repeat_count)]
        return iter(indexes)

    def __len__(self):
        return self.num_samples * self.repeat_count
```
2,803
381
qgallouedec
2025-02-11T13:48:17
Can you elaborate just a bit? I think I understand what you mean, but I'm not 100% sure.
2,803
382
yiyepiaoling0715
2025-02-08T05:08:18
> The `per_token_loss = torch.exp(per_token_logps - per_token_logps.detach()) * advantages.unsqueeze(1)` is always `advantages`.

=> You are wrong; the code is written this way for the backward gradient.
2,802
383
yiyepiaoling0715
2025-02-08T05:10:38
```python
if self.args.use_vllm:
    # First, have main process load weights if needed
    if self.state.global_step != self._last_loaded_step:
        with unwrap_model_for_generation(model, self.accelerator) as unwrapped_model:
            state_dict = unwrapped_model.state_dict()
        if self.accelerator.is_main_process:
            llm_model = self.llm.llm_engine.model_executor.driver_worker.model_runner.model
            llm_model.load_weights(state_dict.items())
        self._last_loaded_step = self.state.global_step
```

So the weights are up to date. I think you should really read the code carefully before filing the issue.
2,802
384
Superskyyy
2025-02-10T06:52:41
No, vLLM uses the current policy, with the weights continuously updated. The implementation is correct.
2,802
385
linkedlist771
2025-02-17T07:27:11
> The `per_token_loss = torch.exp(per_token_logps - per_token_logps.detach()) * advantages.unsqueeze(1)` is always `advantages`. => You are wrong; the code is written this way for the backward gradient.

I am a little bit confused: why is there a `per_token_logps - per_token_logps.detach()`? It is actually `zero`, and taking the exp makes it `1`.
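A tiny illustration of the detach trick with stand-in numbers: the forward value of `exp(x - x.detach())` is exactly 1, but its gradient with respect to `x` is `exp(0) = 1`, so the term still backpropagates `advantage * d(logp)/d(theta)`.

```python
import torch

logp = torch.tensor(-1.7, requires_grad=True)   # stand-in per-token log-probability
advantage = torch.tensor(2.0)

loss = -(torch.exp(logp - logp.detach()) * advantage)
print(loss.item())       # -2.0: the ratio is 1 in the forward pass
loss.backward()
print(logp.grad.item())  # -2.0: gradient is -advantage, as in the on-policy policy gradient
```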
2,802
386
Superskyyy
2025-02-08T12:07:04
Currently it's not possible, but maybe this could be a worthwhile add-on for advanced usage, because common techniques like curriculum learning might need it.
2,801
387
HuggingFaceDocBuilderDev
2025-02-07T23:32:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2800). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,800
388
Rocketknight1
2025-02-10T14:00:18
cc @qgallouedec for TRL / SFT Trainer. Should this issue be moved to the TRL repo?
2,819
389
qgallouedec
2025-02-10T14:02:03
thanks for the pointer, transferred
2,819
390
ibitec7
2025-02-22T15:26:05
Is anyone working on this issue? Please provide an update; some projects are halted because of this bug. Regards,
2,819
391
tyler-romero
2025-02-07T19:23:35
See benchmarks here: https://github.com/huggingface/trl/pull/2773#issuecomment-2638088452 (thanks @qgallouedec ) Notably, the most efficient approach in these benchmarks is not stable with bfloat16, and so we fall back to the approach that loops over log_softmax for bfloat16 and float16.
2,799
392
tyler-romero
2025-02-07T19:29:10
@qgallouedec
2,799
393
qgallouedec
2025-02-07T21:32:12
That's a super cool improvement! Thanks! Just some minor remarks to address and we're good to merge.
2,799
394
tyler-romero
2025-02-07T22:43:23
Ready for re-review!
2,799
395
HuggingFaceDocBuilderDev
2025-02-07T23:03:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2799). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,799
396
qgallouedec
2025-02-07T23:59:36
Thanks again!
2,799
397
Superskyyy
2025-02-08T12:09:46
Can you confirm whether it is chunked prefill or prefix caching? The issue mentions two different things to disable.
2,798
398
kawamou
2025-02-09T20:44:08
Hi there, I experimented by modifying the TRL source code to set `enable_prefix_caching` to False, and this change resolved the issue on my end. Based on this, it appears that the problem is caused by prefix caching rather than chunked prefill. Thanks.
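For illustration, a hedged sketch of the local modification described above (the model name and memory fraction are placeholders; `enable_prefix_caching` is a real vLLM `LLM` argument):

```python
from vllm import LLM

# Construct the vLLM engine with prefix caching disabled to work around the
# corrupted generations reported in this issue.
llm = LLM(
    model="Qwen/Qwen2-0.5B-Instruct",
    gpu_memory_utilization=0.4,
    enable_prefix_caching=False,
)
```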
2,798
399
edwardzjl
2025-02-19T08:50:48
Same issue here. Can we add a `vllm_kwargs` param in the `GRPOConfig`?
2,798
400
daniyalaliev
2025-02-17T08:50:25
Did you manage to solve the issue? I'm facing the same one with my code.
2,796
401
daniyalaliev
2025-02-17T09:44:43
Setting fsdp_use_orig_params to true may help for you. However, I see some problems with wrapping when setting fsdp_use_orig_params to true:

```
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
```

Interestingly, it happens only when using GRPOTrainer; the same wrapping with SFTTrainer and the same setup works fine for me.
2,796
402
michaelhla
2025-02-18T21:03:59
It seems to be working for me with the latest versions; try checking your versions?
2,796
403
cuong-dyania
2025-02-19T00:12:45
I tried with the latest version 0.15.1 but I got the same error:

```
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```

I tried changing fsdp_use_orig_params from false to true, but then I got the error `RuntimeError: 'weight' must be 2-D`.
2,796
404
yiyepiaoling0715
2025-02-08T05:17:23
Same error, how do I solve it?
2,795
405
JohnConnor123
2025-02-08T13:10:21
@Superskyyy @qgallouedec
2,795
406
wuyifan18
2025-02-09T11:12:20
same error
2,795
407
miaoz0
2025-02-10T01:33:33
I also met the same error with `trl==0.14`, `deepspeed==0.15.4`, `accelerate==1.0.1` when using ZeRO-3. I found an issue where this error was solved by downgrading `deepspeed`: [#2600](https://github.com/huggingface/trl/pull/2600#issuecomment-2621317193)

Have you tried `deepspeed==0.15.4`?

---

Update: downgrading `deepspeed` to `0.15.3` and `vllm==0.7.0` solved my problem. https://github.com/huggingface/trl/issues/2304#issuecomment-2647228794
2,795
408
JohnConnor123
2025-02-12T13:48:58
> I also met the same error with `trl==0.14`, `deepspeed==0.15.4`, `accelerate==1.0.1` when using ZeRO-3. I found an issue where this error was solved by downgrading `deepspeed`: [#2600](https://github.com/huggingface/trl/pull/2600#issuecomment-2621317193)
>
> Have you tried `deepspeed==0.15.4`?
>
> Update: downgrading `deepspeed` to `0.15.3` and `vllm==0.7.0` solved my problem. [#2304 (comment)](https://github.com/huggingface/trl/issues/2304#issuecomment-2647228794)

This doesn't help :(

Can you provide the requirements.txt file if you are using pip, or pyproject.toml if you are using poetry?
2,795
409
miaoz0
2025-02-12T14:00:31
> > I also meet the same error at `trl==0.14`, `deepspeed==0.15.4`, `accelerate==1.0.1`, when using ZERO3. I found the issue solved this error when downgrading the `deepspeed` to `0.15.4`. [#2600 ](https://github.com/huggingface/trl/pull/2600#issuecomment-2621317193) > > Have you tried `deepspeed==0.15.4` ? > > Update: downgrading `deepspeed` to `0.15.3` and `vllm==0.7.0` solved my problem. [#2304 (comment)](https://github.com/huggingface/trl/issues/2304#issuecomment-2647228794) > > This doesn't help:( > > Can you provide the requirements.txt file if you are using pip, or pyproject.toml if you are using poetry? I just copy this from `pip list`. Hope this help you! ``` accelerate 1.3.0 aiohappyeyeballs 2.4.3 aiohttp 3.10.10 aiohttp-cors 0.7.0 aiosignal 1.3.1 airportsdata 20241001 annotated-types 0.7.0 anyio 4.6.2.post1 astor 0.8.1 asttokens 2.4.1 async-timeout 4.0.3 attrs 24.2.0 beautifulsoup4 4.12.3 blake3 1.0.4 bottle 0.13.2 Brotli 1.1.0 bs4 0.0.2 cachetools 5.5.1 certifi 2024.8.30 cffi 1.17.1 charset-normalizer 3.4.0 click 8.1.7 cloudpickle 3.1.0 colorful 0.5.6 comm 0.2.2 compressed-tensors 0.9.0 contourpy 1.3.0 cycler 0.12.1 datasets 3.0.2 debugpy 1.8.7 decorator 5.1.1 deepspeed 0.15.3 Deprecated 1.2.15 depyf 0.18.0 dill 0.3.8 diskcache 5.6.3 distlib 0.3.9 distro 1.9.0 docker-pycreds 0.4.0 einops 0.8.0 et_xmlfile 2.0.0 evaluate 0.4.3 exceptiongroup 1.2.2 executing 2.1.0 filelock 3.17.0 fire 0.7.0 flash_attn 2.7.4.post1 fonttools 4.54.1 frozenlist 1.5.0 fsspec 2024.6.1 func_timeout 4.3.5 gguf 0.10.0 gitdb 4.0.11 GitPython 3.1.43 gmpy2 2.1.5 google-api-core 2.24.1 google-auth 2.38.0 googleapis-common-protos 1.66.0 grpcio 1.68.0 grpcio-tools 1.68.0 h11 0.14.0 h2 4.1.0 hjson 3.1.0 hpack 4.0.0 httpcore 1.0.6 httptools 0.6.4 httpx 0.27.2 huggingface-hub 0.26.1 human-eval 1.0.3 hyperframe 6.0.1 idna 3.10 importlib_metadata 8.5.0 iniconfig 2.0.0 interegular 0.3.3 ipykernel 6.29.5 ipython 8.29.0 ipywidgets 8.1.5 jedi 0.19.1 jieba 0.42.1 Jinja2 3.1.4 jiter 0.6.1 joblib 1.4.2 jsonlines 4.0.0 jsonschema 4.23.0 jsonschema-specifications 2024.10.1 jupyter_client 8.6.3 jupyter_core 5.7.2 jupyterlab_widgets 3.0.13 kiwisolver 1.4.7 lark 1.2.2 linkify-it-py 2.0.3 llvmlite 0.43.0 lm-format-enforcer 0.10.9 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.9.2 matplotlib-inline 0.1.7 mdit-py-plugins 0.4.2 mdurl 0.1.2 memray 1.15.0 mistral_common 1.5.2 modelscope 1.22.3 mpmath 1.3.0 msgpack 1.1.0 msgspec 0.18.6 multidict 6.1.0 multiprocess 0.70.16 nest-asyncio 1.6.0 networkx 3.3 ninja 1.11.1.1 numba 0.60.0 numpy 1.26.4 numpy-financial 1.0.0 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 9.1.0.70 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-ml-py 12.560.30 nvidia-nccl-cu12 2.21.5 nvidia-nvjitlink-cu12 12.1.105 nvidia-nvtx-cu12 12.1.105 openai 1.52.2 opencensus 0.11.4 opencensus-context 0.1.3 opencv-python-headless 4.10.0.84 openpyxl 3.1.5 outlines 0.1.11 outlines_core 0.1.26 packaging 24.1 pandas 2.2.3 parso 0.8.4 partial-json-parser 0.2.1.1.post4 peft 0.13.2 pexpect 4.9.0 pillow 10.4.0 pip 24.2 pip-search 0.0.12 platformdirs 4.3.6 pluggy 1.5.0 prometheus_client 0.21.0 prometheus-fastapi-instrumentator 7.0.0 prompt_toolkit 3.0.48 propcache 0.2.0 proto-plus 1.26.0 protobuf 5.28.3 psutil 6.1.0 ptyprocess 0.7.0 pure_eval 0.2.3 py-cpuinfo 9.0.0 py-spy 0.4.0 pyairports 2.1.1 pyarrow 17.0.0 pyasn1 0.6.1 pyasn1_modules 0.4.1 pybind11 
2.13.6 pycountry 24.6.1 pycparser 2.22 pydantic 2.9.2 pydantic_core 2.23.4 Pygments 2.18.0 pyparsing 3.2.0 PySocks 1.7.1 pytest 8.3.3 python-dateutil 2.9.0 python-dotenv 1.0.1 pytz 2024.1 PyYAML 6.0.2 pyzmq 26.2.0 ray 2.38.0 referencing 0.35.1 regex 2024.9.11 requests 2.32.3 rich 13.9.4 rpds-py 0.20.0 rsa 4.9 safetensors 0.4.5 schedula 1.5.49 scikit-learn 1.5.2 scipy 1.14.1 seaborn 0.13.2 sentencepiece 0.2.0 sentry-sdk 2.18.0 setproctitle 1.3.4 setuptools 75.1.0 simhash 2.1.2 six 1.16.0 smart-open 7.1.0 smmap 5.0.1 sniffio 1.3.1 soupsieve 2.6 stack-data 0.6.3 starlette 0.41.2 sympy 1.13.1 termcolor 2.5.0 textual 1.0.0 threadpoolctl 3.5.0 tiktoken 0.7.0 tokenizers 0.21.0 tomli 2.0.2 torch 2.5.1+cu121 torchaudio 2.5.1+cu121 torchvision 0.20.1+cu121 tornado 6.4.1 tqdm 4.66.5 traitlets 5.14.3 transformers 4.48.2 triton 3.1.0 trl 0.14.0 typing_extensions 4.12.2 tzdata 2024.2 uc-micro-py 1.0.3 urllib3 1.26.20 uvicorn 0.32.0 uvloop 0.21.0 virtualenv 20.29.1 vllm 0.7.0 wandb 0.18.7 watchfiles 0.24.0 wcwidth 0.2.13 websockets 13.1 wheel 0.44.0 widgetsnbextension 4.0.13 wrapt 1.16.0 xformers 0.0.28.post3 xgrammar 0.1.11 xxhash 3.5.0 yarl 1.16.0 zipp 3.20.2 zstandard 0.23.0 ```
2,795
410
JohnConnor123
2025-02-12T14:09:21
@miaoz0 This definitely will help! But can you copy-paste not the `pip list` output but `pip freeze > requirements.txt && nano requirements.txt`?
2,795
411