Update README.md

spelling on "general"
README.md CHANGED

```diff
@@ -12,7 +12,7 @@ _Fork of https://huggingface.co/thenlper/gte-small with ONNX weights to be compa
 
 # gte-small
 
-
+General Text Embeddings (GTE) model.
 
 The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
 
```
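One of the downstream tasks the README names, semantic textual similarity, is typically scored by comparing the cosine similarity of two sentences' embedding vectors. A minimal sketch of that scoring step, with made-up 4-dimensional vectors standing in for real model outputs (gte-small itself produces 384-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration only; in practice these would be the
# embeddings the model returns for two input sentences.
query_vec = [0.1, 0.3, -0.2, 0.4]
doc_vec = [0.1, 0.2, -0.1, 0.5]

# Scores near 1.0 mean the two texts are semantically close.
print(round(cosine_similarity(query_vec, doc_vec), 4))
```

The same score is what ranks candidates in the retrieval and reranking use cases: embed the query once, embed each document, and sort by cosine similarity.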