Swe-CLIP 2M
Huggingface Model · Huggingface Base Model
Usage
To use this model together with the original CLIP vision encoder, follow the usage instructions on the main page to download the additional linear weights. Once this is done, you can load and use the model with the following code:
from multilingual_clip import multilingual_clip

# Load the Swedish text encoder together with its linear projection weights
model = multilingual_clip.load_model('Swe-CLIP-2M')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
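To pair these text embeddings with images, the accompanying Res50x4 vision encoder can be loaded from the OpenAI clip package, where the corresponding model name is 'RN50x4'. The following is a minimal sketch under that assumption; the clip package and the example image path moose.jpg are not part of this repository.

import torch
import clip  # OpenAI CLIP package (assumed installed separately)
from PIL import Image

# Load the vision side on CPU; 'RN50x4' is assumed to be the clip-package
# name for the Res50x4 encoder mentioned above
clip_model, preprocess = clip.load('RN50x4', device='cpu')

# Encode a placeholder example image
image = preprocess(Image.open('moose.jpg')).unsqueeze(0)
with torch.no_grad():
    image_features = clip_model.encode_image(image)  # shape: [1, 640]

# Swedish text embeddings from the model loaded above
text_features = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])

# Cosine similarity between the image and each Swedish sentence
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print(text_features.float() @ image_features.float().T)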
About
A KB/Bert-Swedish-Cased model tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder.
Training data pairs were generated by sampling 2 million sentences from the combined descriptions of GCC + MSCOCO + VizWiz and translating them into Swedish. All translation was done with the Huggingface Opus model, which seemingly produces higher-quality translations than the AWS Translate service.
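The tuning itself follows a teacher-student recipe: the Swedish BERT plus a linear projection is trained so that its embedding of a translated caption matches the CLIP text encoder's embedding of the original English caption. The sketch below only illustrates that idea; the mean pooling, MSE objective, learning rate, and model identifiers are assumptions for illustration, not the exact training code.

import torch
import clip
from transformers import AutoModel, AutoTokenizer

device = 'cpu'

# Teacher: the frozen CLIP text encoder that accompanies the Res50x4 vision encoder
teacher, _ = clip.load('RN50x4', device=device)
for p in teacher.parameters():
    p.requires_grad = False

# Student: Swedish BERT with a linear projection into CLIP's 640-dim text space
tokenizer = AutoTokenizer.from_pretrained('KB/bert-base-swedish-cased')
bert = AutoModel.from_pretrained('KB/bert-base-swedish-cased')
projection = torch.nn.Linear(bert.config.hidden_size, 640)

optimizer = torch.optim.Adam(
    list(bert.parameters()) + list(projection.parameters()), lr=1e-5)

def student_embed(swedish_sentences):
    batch = tokenizer(swedish_sentences, padding=True, truncation=True, return_tensors='pt')
    hidden = bert(**batch).last_hidden_state       # [batch, seq, hidden]
    mask = batch['attention_mask'].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over tokens (assumed)
    return projection(pooled)                      # [batch, 640]

# One illustrative training step on a (Swedish, English) caption pair
swedish = ['En älg står i skogen']
english = ['A moose is standing in the forest']

with torch.no_grad():
    target = teacher.encode_text(clip.tokenize(english).to(device)).float()

loss = torch.nn.functional.mse_loss(student_embed(swedish), target)
loss.backward()
optimizer.step()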