Update README.md
README.md (changed)
```diff
@@ -19,10 +19,10 @@ library_name: sentence-transformers
 # QulBERT
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
+<!--
 This model originates from the [Camel-Bert_Classical Arabic](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca) model. It was then trained on the Jawami' Kalim dataset,
 specifically a dataset of 440,000 matns and their corresponding taraf labels.
-Taraf labels indicate two hadith are about the same report, and as such, are more semantically similar.
+Taraf labels indicate two hadith are about the same report, and as such, are more semantically similar. -->
 
 
 ## Usage (Sentence-Transformers)
@@ -86,7 +86,7 @@ print(sentence_embeddings)
 
 ## Evaluation Results
 
-The dataset was split into 75% training, 15% eval, 10% test.
+<!-- The dataset was split into 75% training, 15% eval, 10% test.
 
 
 
@@ -154,7 +154,7 @@ Triplet Evaluation:
 | 6 | 20000 | 0.9673 | 0.967 | 0.9665 |
 | 6 | -1 | 0.9666 | 0.9658 | 0.9666 |
 
-
+-->
 
 ## Training
 The model was trained with the parameters:
```
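For context on the triplet-evaluation scores being commented out above: triplet accuracy is the fraction of (anchor, positive, negative) triplets for which the anchor embedding is closer to the positive than to the negative. The exact evaluator used for this model is not shown in the diff; the sketch below mirrors the standard cosine-similarity variant of this metric, with illustrative function names and toy low-dimensional vectors standing in for real 768-dimensional model output.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_accuracy(anchors, positives, negatives) -> float:
    """Fraction of (anchor, positive, negative) triplets where the
    anchor embedding is more similar to the positive than to the negative."""
    correct = sum(
        cosine_sim(a, p) > cosine_sim(a, n)
        for a, p, n in zip(anchors, positives, negatives)
    )
    return correct / len(anchors)

# Toy 4-dimensional embeddings (a real model would produce 768-dimensional ones).
anchors   = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
positives = np.array([[0.9, 0.1, 0.0, 0.0], [0.1, 0.9, 0.0, 0.0]])
negatives = np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
print(triplet_accuracy(anchors, positives, negatives))  # both triplets correct -> 1.0
```

A score of 0.9673 in the table above would therefore mean roughly 97% of evaluation triplets ranked the same-taraf hadith closer than the unrelated one.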