---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:77376
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: He has published several books on nutrition, trace metals but not
biochemistry imbalances.
sentences:
- This in turn can help in effective communication between healthcare providers
and their patients.
- He has written several books on nutrition, trace metals, and biochemistry imbalances.
- One of the most boring movies I have ever seen.
- source_sentence: She was denied the 2011 NSK Neustadt Prize for Children's Literature.
sentences:
- She was the recipient of the 2011 NSK Neustadt Prize for Children's Literature.
- The ancient woodland at Dickshills is also located here.
- An element (such as a tree) that contributes to evapotranspiration can be called
an evapotranspirator.
- source_sentence: Viking, after the resemblance the pitchers bear to the prow of
a Viking ship.
sentences:
- Viking, after the striking difference the pitchers bear to the prow of a Viking
ship.
- Honshu is formed from the island arcs.
- For instance, even alcohol consumption by a pregnant woman is unable to lead to
fetal alcohol syndrome.
- source_sentence: Logging has not been undertake near the headwaters of the creek.
sentences:
- Then I had to continue pairing it periodically since it somehow kept dropping.
- That's fair, Nance.
- Logging has been done near the headwaters of the creek.
- source_sentence: He published a history of Cornwall, New York in 1873.
sentences:
- He failed to publish a history of Cornwall, New York in 1873.
- Salafis assert that reliance on taqlid has led to Islam 's decline.
- 'Lot of holes in the plot: there''s nothing about how he became the emperor; nothing
about where he spend 20 years between his childhood and mature age.'
pipeline_tag: sentence-similarity
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co./models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
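The block above is what `print(model)` reports after loading. As a quick sanity check (a minimal sketch, assuming the model ID from the usage example below), the same numbers can also be read off programmatically:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")
print(model)                                     # prints the module stack shown above
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
```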
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")
# Run inference
sentences = [
'He published a history of Cornwall, New York in 1873.',
'He failed to publish a history of Cornwall, New York in 1873.',
"Salafis assert that reliance on taqlid has led to Islam 's decline.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
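Semantic search is listed among the intended uses above. The sketch below is one illustrative way to run it with the library's `util.semantic_search` helper; the corpus and query sentences are made-up examples reused from the widget, not a real retrieval corpus:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")

# Encode a small corpus once, then search it with a query
corpus = [
    "He published a history of Cornwall, New York in 1873.",
    "Logging has been done near the headwaters of the creek.",
    "Honshu is formed from the island arcs.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "He failed to publish a history of Cornwall, New York in 1873."
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the top-2 most similar corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])
```

Because this model is fine-tuned on negation pairs, the negated query should score lower against its affirmative counterpart than a vanilla all-MiniLM-L6-v2 would typically score it.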
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 77,376 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | int   |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>The situation in Yemen was already much better than it was in Bahrain.</code> | <code>The situation in Yemen was not much better than Bahrain.</code> | <code>0</code> |
  | <code>She was a member of the Gamma Theta Upsilon honour society of geography.</code> | <code>She was denied membership of the Gamma Theta Upsilon honour society of mathematics.</code> | <code>0</code> |
  | <code>Which aren't small and not worth the price.</code> | <code>Which are small and not worth the price.</code> | <code>0</code> |
* Loss: [CosineSimilarityLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
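For context, here is a minimal, illustrative fine-tuning sketch using `CosineSimilarityLoss` together with the non-default hyperparameters reported below. The one-row dataset is a stand-in for the actual unnamed 77,376-sample training set, which is not published on this card:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative stand-in for the (sentence_0, sentence_1, label) training set
train_dataset = Dataset.from_dict({
    "sentence_0": ["Which aren't small and not worth the price."],
    "sentence_1": ["Which are small and not worth the price."],
    "label": [0.0],  # CosineSimilarityLoss expects float similarity labels
})

# Uses torch.nn.MSELoss internally by default, matching the parameters above
loss = CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-negations",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```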
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters