How do I save the output image?
If you use the sample code, then one way is:
plt.imsave("test.png", pred_seg)
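For context, here is a minimal self-contained sketch of that save step, assuming `pred_seg` is a 2-D map of integer class ids (a dummy NumPy array stands in for the model output here; if yours is a PyTorch tensor on the GPU, move it over with `pred_seg.cpu().numpy()` first):

```python
import numpy as np
import matplotlib.pyplot as plt

# Dummy stand-in for pred_seg: an (H, W) map of integer class ids
pred_seg = np.zeros((8, 8), dtype=np.int64)
pred_seg[2:6, 2:6] = 9  # pretend these pixels belong to one class

# plt.imsave runs the 2-D id map through matplotlib's default
# colormap and writes the result out as a PNG
plt.imsave("test.png", pred_seg)
```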
Okay, it's working perfectly. Can I get the prediction mask as a key-value mapping like {"object": "shoes", "mask-color": "red"}?
Not sure if this is what you mean, but the code below uses pred_seg and turns the shoe labels red and everything else black:
# Change prediction to RGB
rgb_seg = torch.cat([pred_seg.unsqueeze(0)] * 3, dim=0).permute(1, 2, 0)
# Create masks for the shoe classes (ids 9 and 10 in this label map)
mask_9 = (rgb_seg == 9).all(dim=-1)
mask_10 = (rgb_seg == 10).all(dim=-1)
# Combine the masks
mask = mask_9 | mask_10
# Create a tensor with the target replacement value
new_values = torch.tensor([255, 0, 0])
# Expand dimensions of `new_values` to match `rgb_seg`
new_values = new_values[None, None, :].expand_as(rgb_seg)
# Apply the mask to the original tensor
rgb_seg[mask] = new_values[mask]
# Set all other classes to black
rgb_seg[~mask] = torch.tensor([0, 0, 0])
plt.imsave("seg.png", rgb_seg.numpy().astype("uint8"))
There are probably simpler and more efficient ways to do this, though.
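To get closer to the {"object": ..., "mask-color": ...} mapping you described, one option is a small helper driven by a list of specs. This is just a sketch: the class ids (9 and 10 standing for shoes) and the spec shape are assumptions, and it uses NumPy rather than torch for brevity.

```python
import numpy as np

# Hypothetical spec in the shape you described; the ids are assumed
# to be the shoe classes of this model's label map
SPECS = [{"object": "shoes", "ids": [9, 10], "mask-color": (255, 0, 0)}]

def colorize(pred_seg, specs):
    """Paint each spec's classes in its color; everything else stays black."""
    rgb = np.zeros((*pred_seg.shape, 3), dtype=np.uint8)
    for spec in specs:
        mask = np.isin(pred_seg, spec["ids"])  # True where any listed id matches
        rgb[mask] = spec["mask-color"]
    return rgb

# Tiny dummy prediction: ids 9 and 10 should turn red, the rest black
pred = np.array([[0, 9],
                 [10, 3]])
out = colorize(pred, SPECS)
```

You could then save the result with `plt.imsave("seg.png", out)` as before; if `pred_seg` is a PyTorch tensor, pass `pred_seg.cpu().numpy()` into the helper.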
Okay, that's working. Is there any way to train this on my own custom images?
Yes, you can train this like any other model on Hugging Face. Here is a Hugging Face tutorial on fine-tuning/training image segmentation models:
https://huggingface.co./docs/transformers/main/tasks/semantic_segmentation
thank you