---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Reasoning:

    The provided answer correctly indicates that the percentage in the response status
    column shows "the total amount of successful completion of response actions."
    This is well-supported by the document, which states, "the status of response
    actions for the different steps in the... percentage indicates the total amount
    of successful completion of response actions." Therefore, the answer effectively
    addresses the specific question, maintains relevance, is concise, and uses the
    correct key/value terms from the document.


    Evaluation:'
- text: "Reasoning:\nThe document does not explicitly state the purpose of Endpoint\
    \ controls, but it provides instructions on how to enable and configure them.\
    \ The answer given is technically correct because the document does not directly\
    \ address the purpose of Endpoint controls. However, by reviewing the instructions\
    \ provided, one can infer that the purpose involves managing device control, firewall\
    \ control, and disk encryption visibility, all of which are related to enhancing\
    \ endpoint security. \n\nWhile the provided answer states that the information\
    \ needed isn't covered, this can be considered somewhat true, but it does not\
    \ make any inference from the given details.\n\nFinal result: Methodologically,\
    \ it aligns as'' based on strict criteria.\nEvaluation:"
- text: 'Reasoning:

    The provided document clearly outlines the purpose of the <ORGANIZATION> XDR On-Site
    Collector Agent: it is installed to collect logs from platforms and securely forward
    them to <ORGANIZATION> XDR. The answer given aligns accurately with the document''s
    description, addressing the specific question without deviating into unrelated
    topics. The response is also concise and to the point.


    Evaluation:'
- text: 'Reasoning:

    The document specifies that in the "Email Notifications section," setting the
    "<ORGANIZATION_2> notifications On" will ensure that users with the System Admin
    role receive email notifications about stale or archived sensors. The answer provided
    states that the purpose of the checkbox is to enable or disable email notifications
    for users, which accurately reflects the information given in the document. The
    answer is supported by the document, directly addresses the question, and is concise.


    Evaluation:'
- text: "Reasoning:\nThe provided document contains specific URLs for images corresponding\
    \ to the queries. The URL for the image associated with the second query is given\
    \ as `..\\/..\\/_images\\/hunting_http://miller.co`. However, the provided answer\
    \ `/..\\/..\\/_images\\/hunting_http://www.flores.net/` does not match this information\
    \ and provides an incorrect URL that is not mentioned in the document. Therefore,\
    \ the answer fails to meet the relevant criteria, is not grounded in the context\
    \ of the document, and lacks conciseness by not directly referencing the correct\
    \ URL.\n\nFinal evaluation: \nEvaluation:"
inference: true
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.676056338028169
      name: Accuracy
---

# SetFit with BAAI/bge-base-en-v1.5

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co./BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
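The contrastive step in (1) trains on *pairs* of labeled examples rather than on the labels directly, which is what makes the method data-efficient. A minimal sketch of the pairing idea (a hypothetical helper for illustration, not the `setfit` library's actual sampler, which also implements the oversampling and `num_iterations`-based strategies listed under Training Hyperparameters below):

```python
from itertools import combinations

def generate_contrastive_pairs(texts, labels):
    """Pair every example with every other: same-label pairs become
    positives (target similarity 1.0), cross-label pairs negatives (0.0)."""
    pairs = []
    for (t1, l1), (t2, l2) in combinations(zip(texts, labels), 2):
        pairs.append((t1, t2, 1.0 if l1 == l2 else 0.0))
    return pairs

# Three labeled examples yield three training pairs: one positive, two negatives.
pairs = generate_contrastive_pairs(
    ["answer is grounded", "answer matches the document", "answer contradicts the document"],
    [1, 1, 0],
)
```

With n examples this yields n·(n−1)/2 pairs, so even a few dozen labeled texts produce a sizable contrastive training set for the Sentence Transformer body.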

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co./BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co./datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co./blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 1     | <ul><li>'Reasoning:\nThe given answer is directly aligned with the context provided by the document. It accurately identifies Ennita Manyumwa as a 26-year-old African woman who is HIV-free. It further elaborates on her significance in the fight against AIDS by highlighting her resistance to older men who offer gifts for sex, a behavior that helps prevent the spread of AIDS. This information is consistent with the document and directly answers the question without any unnecessary details.\n\nFinal Evaluation:'</li><li>'Reasoning:\nThe answer directly addresses the question by listing the benefits the author has experienced from their regular yoga practice. These benefits include unapologetic "me" time, improved health, self-growth, increased patience, the ability to be still, acceptance of daily changes, the realization that happiness is their responsibility, a deeper appreciation for their body, the understanding that yoga exists off the mat, and the importance of being open. Each of these points is explicitly mentioned in the provided document, making the answer well-supported and contextually accurate. The answer is concise and relevant, sticking closely to the specifics asked for in the question.\n\nFinal Evaluation:'</li><li>'Reasoning:\nThe answer accurately identifies that the work on germ-free-life conducted at Notre Dame University resulted in the establishment of the Lobund Institute. This directly aligns with the information provided in the document, which details the evolution of the research from its beginnings in 1928 to the establishment of the Lobund Institute. The response is relevant, well-grounded in the context of the document, and concise.\n\nEvaluation:'</li></ul> |
| 0     | <ul><li>'Reasoning:\nThe answer provided accurately addresses the question by explaining how to enable approval for appointment bookings, which subsequently changes the booking process for clients from immediate booking to a "request to book" process. This may be a solution to the issue if clients are currently experiencing difficulties due to the lack of this feature. The steps given are clear, concise, and directly supported by the provided document, aligning well with the instructions mentioned for enabling approval.\n\nHowever, it is important to note that the answer does not directly state why clients might be unable to book appointments online, nor does it explore other potential reasons beyond the approval setting. Directly stating that clients cannot book appointments online due to lack of enabling approval, and covering any other potential issues mentioned in the document, would make it even more thorough.\n\nEvaluation:'</li><li>'Reasoning:\nThe answer does cover the fundamental steps to write a nonfiction book, such as selecting a topic, conducting research, creating an outline, and starting the writing process. However, it includes an incorrect aside, stating that "The Old Man and the Sea" by Ernest Hemingway is nonfiction and based on true events, which detracts from the otherwise accurate guidance. Additionally, the answer could be more detailed and aligned with the extensive steps provided in the document, such as discussing the importance of understanding the genre, reading and analyzing examples, brainstorming, setting up interviews, organizing research, creating a writing schedule, and focusing on writing techniques.\n\nFinal Evaluation: \nEvaluation:'</li><li>'Reasoning:\nThe provided answer directly contradicts the guidelines given in the document on studying English literature. The answer suggests not taking notes, ignoring significant passages, and avoiding making character profiles, which are all contrary to the recommendations in the document. The document emphasizes the importance of thorough reading, taking detailed notes, creating character profiles, and paying attention to important passages and concepts, which are crucial for comprehensive understanding and analysis of English literature.\n\nFinal Evaluation: \nEvaluation:'</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.6761   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remov")
# Run inference
preds = model("""Reasoning:
The provided document clearly outlines the purpose of the <ORGANIZATION> XDR On-Site Collector Agent: it is installed to collect logs from platforms and securely forward them to <ORGANIZATION> XDR. The answer given aligns accurately with the document's description, addressing the specific question without deviating into unrelated topics. The response is also concise and to the point.

Evaluation:""")
```

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 33  | 96.1280 | 289 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 312                   |
| 1     | 321                   |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
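The values above mirror the fields of `setfit.TrainingArguments`, from which this card was generated. A sketch of how they could be passed back in to reproduce the run (assuming the SetFit 1.1.0 API listed under Framework Versions; not executed here):

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    l2_weight=0.01,
    end_to_end=False,
    use_amp=False,
    seed=42,
    load_best_model_at_end=False,
)
```

The paired values reflect SetFit's two training phases: the first element applies while fine-tuning the Sentence Transformer body, the second while fitting the classification head.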

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1    | 0.2154        | -               |
| 0.0316 | 50   | 0.2582        | -               |
| 0.0632 | 100  | 0.2517        | -               |
| 0.0948 | 150  | 0.2562        | -               |
| 0.1263 | 200  | 0.2532        | -               |
| 0.1579 | 250  | 0.2412        | -               |
| 0.1895 | 300  | 0.184         | -               |
| 0.2211 | 350  | 0.1608        | -               |
| 0.2527 | 400  | 0.1487        | -               |
| 0.2843 | 450  | 0.117         | -               |
| 0.3159 | 500  | 0.0685        | -               |
| 0.3474 | 550  | 0.0327        | -               |
| 0.3790 | 600  | 0.0257        | -               |
| 0.4106 | 650  | 0.0139        | -               |
| 0.4422 | 700  | 0.012         | -               |
| 0.4738 | 750  | 0.0047        | -               |
| 0.5054 | 800  | 0.0046        | -               |
| 0.5370 | 850  | 0.0042        | -               |
| 0.5685 | 900  | 0.0058        | -               |
| 0.6001 | 950  | 0.0029        | -               |
| 0.6317 | 1000 | 0.0055        | -               |
| 0.6633 | 1050 | 0.0033        | -               |
| 0.6949 | 1100 | 0.0026        | -               |
| 0.7265 | 1150 | 0.0026        | -               |
| 0.7581 | 1200 | 0.0033        | -               |
| 0.7896 | 1250 | 0.0049        | -               |
| 0.8212 | 1300 | 0.0043        | -               |
| 0.8528 | 1350 | 0.0019        | -               |
| 0.8844 | 1400 | 0.0015        | -               |
| 0.9160 | 1450 | 0.0014        | -               |
| 0.9476 | 1500 | 0.0017        | -               |
| 0.9792 | 1550 | 0.0013        | -               |
| 1.0107 | 1600 | 0.0019        | -               |
| 1.0423 | 1650 | 0.0012        | -               |
| 1.0739 | 1700 | 0.0011        | -               |
| 1.1055 | 1750 | 0.0013        | -               |
| 1.1371 | 1800 | 0.0012        | -               |
| 1.1687 | 1850 | 0.0013        | -               |
| 1.2003 | 1900 | 0.0013        | -               |
| 1.2318 | 1950 | 0.0012        | -               |
| 1.2634 | 2000 | 0.0011        | -               |
| 1.2950 | 2050 | 0.0012        | -               |
| 1.3266 | 2100 | 0.0011        | -               |
| 1.3582 | 2150 | 0.0011        | -               |
| 1.3898 | 2200 | 0.0012        | -               |
| 1.4214 | 2250 | 0.0014        | -               |
| 1.4529 | 2300 | 0.0011        | -               |
| 1.4845 | 2350 | 0.001         | -               |
| 1.5161 | 2400 | 0.0011        | -               |
| 1.5477 | 2450 | 0.001         | -               |
| 1.5793 | 2500 | 0.001         | -               |
| 1.6109 | 2550 | 0.0012        | -               |
| 1.6425 | 2600 | 0.0011        | -               |
| 1.6740 | 2650 | 0.0011        | -               |
| 1.7056 | 2700 | 0.001         | -               |
| 1.7372 | 2750 | 0.001         | -               |
| 1.7688 | 2800 | 0.001         | -               |
| 1.8004 | 2850 | 0.001         | -               |
| 1.8320 | 2900 | 0.001         | -               |
| 1.8636 | 2950 | 0.001         | -               |
| 1.8951 | 3000 | 0.001         | -               |
| 1.9267 | 3050 | 0.0009        | -               |
| 1.9583 | 3100 | 0.0011        | -               |
| 1.9899 | 3150 | 0.001         | -               |

### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->