Swe-CLIP 500k

GitHub Model Card

Usage

To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the Multilingual-CLIP GitHub repository. Once this is done, you can load and use the model with the following code:

from src import multilingual_clip

# Load the Swedish text encoder together with its linear projection head
model = multilingual_clip.load_model('Swe-CLIP-500k')

# Encode a batch of Swedish sentences into the CLIP embedding space
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
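
Because the output embeddings live in the same 640-dimensional space as the Res50x4 image features, they can be compared directly against image embeddings from the original CLIP vision encoder. The following is a minimal sketch of such a comparison; it assumes OpenAI's clip package and Pillow are installed and uses a placeholder image path (cat.jpg) and example captions that are not part of this model card.

import clip
import torch
from PIL import Image
from src import multilingual_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Original CLIP vision encoder that this text encoder was tuned against
clip_model, preprocess = clip.load("RN50x4", device=device)

# Swedish text encoder (this model)
text_model = multilingual_clip.load_model('Swe-CLIP-500k')

# Encode an example image (cat.jpg is a placeholder path)
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = clip_model.encode_image(image).float()
    # Encode Swedish captions with this model
    text_features = text_model(['En katt som sover', 'En älg i skogen'])

# Cosine similarity between the image and each caption
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = text_features.to(device) @ image_features.T
print(similarity)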

About

A KB/Bert-Swedish-Cased model tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder.

The training data pairs were generated by sampling 500k sentences from the combined descriptions of GCC + MSCOCO + VizWiz and translating them into Swedish. All translation was done using the Hugging Face Opus model, which seemingly produces higher-quality translations than relying on the AWS Translate service.
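
For reference, a translation step of the kind described above could look like the sketch below. The exact Opus checkpoint used is not named in this card, so Helsinki-NLP/opus-mt-en-sv is an assumption.

from transformers import MarianMTModel, MarianTokenizer

# Assumed English-to-Swedish Opus-MT checkpoint; the card does not name the exact model used
model_name = "Helsinki-NLP/opus-mt-en-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Example English captions to translate into Swedish
captions = ["A moose is standing in the forest.", "Two polar bears are playing on the ice."]
batch = tokenizer(captions, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))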
