## Usage
To use this model along with the original CLIP vision encoder, follow the [main page usage instructions](https://github.com/FreddeFrallan/Multilingual-CLIP) to download the additional linear weights. Once this is done, you can load and use the model with the following code:

```python
from multilingual_clip import multilingual_clip

model = multilingual_clip.load_model('M-BERT-Base-69')
embeddings = model(['Älgen är skogens konung!',
                    'Wie leben Eisbären in der Antarktis?',
                    'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```

## About
A [bert-base-multilingual](https://huggingface.co./bert-base-multilingual-cased) model tuned to align its embedding space, covering 69 languages, with the embedding space of the CLIP text encoder that accompanies the RN50x4 vision encoder.
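Because the text embeddings share the CLIP image embedding space, cross-modal matching reduces to cosine similarity. The following is a minimal sketch of that step, using random tensors as placeholders for real model outputs (the 640-dimensional size matches the text embeddings shown above; in practice the image embeddings would come from the matching CLIP vision encoder):

```python
import torch

# Placeholder tensors standing in for real model outputs:
# text_embeddings would come from the M-BERT-Base-69 model above,
# image_embeddings from the matching CLIP vision encoder.
text_embeddings = torch.randn(3, 640)   # 3 captions
image_embeddings = torch.randn(5, 640)  # 5 images

# L2-normalise so the dot product equals cosine similarity
text_norm = text_embeddings / text_embeddings.norm(dim=-1, keepdim=True)
image_norm = image_embeddings / image_embeddings.norm(dim=-1, keepdim=True)

# Similarity matrix: one row per caption, one column per image
similarity = text_norm @ image_norm.T
print(similarity.shape)  # torch.Size([3, 5])

# Index of the best-matching image for each caption
best = similarity.argmax(dim=-1)
```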