NaN loss when finetuning BLIP-2

#28
by agopalkr

I am finetuning Blip2ForConditionalGeneration to perform a specialized VQA task. Currently, I am testing my approach with the following code:

import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained('Salesforce/blip2-opt-2.7b', padding_side='left')
model = Blip2ForConditionalGeneration.from_pretrained('Salesforce/blip2-opt-2.7b', torch_dtype=torch.float16, device_map="auto")

# img is a PIL image loaded elsewhere
text = ['Question: What is the ego state of the vehicle? Answer: The ego-vehicle is moving']
enc = processor(images=img, text=text, padding=True, truncation=True, return_tensors='pt')

outputs = model(**enc, labels=enc['input_ids'])
print(outputs.loss.item())

where img is a loaded image. Before finetuning, the loss on a single batch is not NaN. However, after one batch of finetuning, I keep getting NaN losses, and I am not sure why. Does anyone know whether I am passing the labels to the model correctly, or whether there is something else I need to fix in my code?

Hi @agopalkr
Can you try to load the model in fp32 and use mixed precision training instead?
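
For anyone hitting the same issue, here is a minimal sketch of that suggestion (assuming a plain PyTorch training loop with torch.cuda.amp; the optimizer, learning rate, and dataloader are placeholders, not part of the original report):

import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load in full precision (fp32) instead of torch_dtype=torch.float16
processor = Blip2Processor.from_pretrained('Salesforce/blip2-opt-2.7b', padding_side='left')
model = Blip2ForConditionalGeneration.from_pretrained('Salesforce/blip2-opt-2.7b', device_map="auto")
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scaler = torch.cuda.amp.GradScaler()  # handles loss scaling for mixed precision

for images, texts in dataloader:  # placeholder dataloader yielding (images, texts)
    enc = processor(images=images, text=texts, padding=True, truncation=True,
                    return_tensors='pt').to(model.device)

    optimizer.zero_grad()
    with torch.cuda.amp.autocast(dtype=torch.float16):  # compute in fp16, keep fp32 master weights
        outputs = model(**enc, labels=enc['input_ids'])
        loss = outputs.loss

    scaler.scale(loss).backward()  # scaled backward pass to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()

The key point is that the master weights and optimizer states stay in fp32, which avoids the overflow/underflow that makes a pure fp16 forward/backward pass produce NaN losses after an update step.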

The loss is no longer NaN! Appreciate the help.

Hello!
How can I use my own image-text dataset to fine-tune the BLIP-2 model? The task I need to perform is image captioning. I have found that using the pretrained BLIP-2 model as-is to generate captions for my images does not work well, so I would like to fine-tune it on my dataset before generating captions. How should I implement this, and which pretrained checkpoint can be fine-tuned to achieve better results?
Looking forward to your reply, thank you again!
Good luck to you!

Hi @shams123321
Thanks! As a starting point, I suggest checking out https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2 from @nielsr, as it points to resources on how to fine-tune BLIP-2 on your own dataset.

Thank you for your guidance!
