Update README.md
README.md
CHANGED
@@ -22,6 +22,8 @@ widget:
 pipeline_tag: zero-shot-image-classification
 ---
 
+# NOTE: This is an old checkpoint. Use the [official repo](https://aka.ms/biomedclip) for the latest results.
+
 # BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
 
 [BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model that is pretrained on [PMC-15M](https://aka.ms/biomedclip-paper), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, using contrastive learning.
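For context, below is a minimal sketch of how a checkpoint like this is typically queried for zero-shot image classification through `open_clip`. The hub id is assumed from the model title, and the image path and label prompts are placeholders rather than values from this repo; per the note above, the official repo may point to a newer checkpoint.

```python
import torch
import torch.nn.functional as F
from PIL import Image
import open_clip

# Assumed hub id, taken from the model card title.
MODEL_ID = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"

model, preprocess = open_clip.create_model_from_pretrained(MODEL_ID)
tokenizer = open_clip.get_tokenizer(MODEL_ID)
model.eval()

# Hypothetical input image and illustrative label prompts.
image = preprocess(Image.open("example_image.jpg")).unsqueeze(0)
labels = ["chest X-ray", "brain MRI", "histopathology slide"]
texts = tokenizer([f"this is a photo of a {l}" for l in labels], context_length=256)

with torch.no_grad():
    # Embed image and text into the shared contrastive space, then
    # rank labels by cosine similarity scaled by the learned temperature.
    image_features = F.normalize(model.encode_image(image), dim=-1)
    text_features = F.normalize(model.encode_text(texts), dim=-1)
    probs = (model.logit_scale.exp() * image_features @ text_features.t()).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```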