Parameters

The model was fine-tuned with the following Hugging Face TrainingArguments configuration:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='/kaggle/working/T5_Summarization',
    num_train_epochs=1,
    warmup_steps=500,
    per_device_train_batch_size=1,   # Reduce batch size if OOM persists
    per_device_eval_batch_size=2,
    weight_decay=0.01,
    logging_steps=10,
    evaluation_strategy='steps',
    eval_steps=500,                  # Evaluate every 500 steps
    save_steps=5000,                 # Save a checkpoint every 5000 steps
    gradient_accumulation_steps=16,  # Effective train batch size of 16 (1 x 16)
    fp16=True                        # Enable mixed-precision training
)
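
These arguments are meant to be passed to a Hugging Face Trainer together with the base model and a tokenized summarization dataset. The sketch below is only an assumption about how that wiring might look: the toy dataset, the article/summary column names, the sequence-length limits, and the "summarize:" prefix are illustrative placeholders, not details taken from this card.

from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Trainer,
)

# Hypothetical toy data; the actual 40K-example summarization dataset is not described on this card.
raw = Dataset.from_dict({
    "article": ["A long news article goes here ..."],
    "summary": ["A short summary goes here."],
})

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

def preprocess(batch):
    # T5 is a text-to-text model, so the task is usually signalled with a prefix.
    inputs = tokenizer(["summarize: " + a for a in batch["article"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=training_args,          # the TrainingArguments defined above
    train_dataset=tokenized,
    eval_dataset=tokenized,      # placeholder; use a real validation split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

Note that fp16=True assumes training on a GPU (the output_dir points at a Kaggle working directory).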
Model details: 223M parameters, F32 tensors, stored in Safetensors format. Fine-tuned from google-t5/t5-base.
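
The published checkpoint can be loaded directly from the Hub for inference. This is a minimal sketch; the "summarize:" prefix, beam size, and length limits are assumptions, since the card does not specify generation settings.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("ARSynopsis/T5_Full_FineTune_V0.1_40K")
model = AutoModelForSeq2SeqLM.from_pretrained("ARSynopsis/T5_Full_FineTune_V0.1_40K")

text = "summarize: " + "A long document to be summarized ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))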