
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
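To make step 2 concrete, here is a minimal sketch of fitting a LogisticRegression head on sentence embeddings. The embeddings below are random stand-ins (in SetFit they would come from the contrastively fine-tuned BAAI/bge-base-en-v1.5 body, which produces 768-dimensional vectors); the class sizes mirror this model's training set (33 "Good", 32 "Bad").

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for step 1: in SetFit these vectors would come from the
# contrastively fine-tuned Sentence Transformer body (768-dim embeddings).
rng = np.random.default_rng(42)
emb_good = rng.normal(loc=0.5, size=(33, 768))   # label 1 ("Good")
emb_bad = rng.normal(loc=-0.5, size=(32, 768))   # label 0 ("Bad")

X = np.vstack([emb_good, emb_bad])
y = np.array([1] * 33 + [0] * 32)

# Step 2: fit the LogisticRegression head on the embedding features.
head = LogisticRegression(max_iter=1000).fit(X, y)
print(head.score(X, y))
```

Because the fine-tuning in step 1 pulls same-label sentences together in embedding space, even a simple linear head like this is usually enough to separate the classes.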

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label Examples

Label 0
  • 'Reasoning:\nThe provided answer is too general and is not grounded in the specific details or context of the given document, which focuses on study budgets and considerations for wise spending within the organization.\n\nEvaluation: Bad'
  • 'Reasoning:\nThe answer contains specific information from the document and lists relevant pet peeves. However, there are multiple instances of the phrase "Cassandra Rivera Heather Nelson," which seems out of place. This could cause confusion and detract from the overall clarity.\n\nFinal Result: Bad'
  • "Reasoning:\nThe answer does not address the question directly. The provided steps and methods mentioned focus on handling personal documents, expense reimbursements, secure sharing of information, feedback discussion, and requesting a learning budget. The question specifically asks about accessing the company's training resources, and the answer provided does not stay focused on that.\n\nEvaluation: Bad"
Label 1
  • 'Reasoning:\nThe answer accurately addresses the question, providing specific details from the document about how feedback should be given. It includes key points such as timing, focusing on the situation, being clear and direct, showing appreciation, and the intention behind giving feedback, all of which are mentioned in the document.\n\nEvaluation: Good'
  • 'Reasoning:\nThe answer correctly identifies several reasons why it is important to share information from high-level meetings, directly supported by the provided document. The explanation is clear, relevant, and to the point, avoiding unnecessary information.\n\nEvaluation: Good'
  • 'Reasoning:\nThe answer largely reproduces content from the document accurately, specifying the process for keeping track of kilometers and sending an email or excel document. However, there are inaccuracies in the email addresses and naming inconsistency. For instance, "Dustin Chan" appears in place of "finance@Dustin [email protected]" and "ORGANIZATION_2," which do not exist in the document. Moreover, it includes extraneous, non-verifiable information about the parking card.\n\nEvaluation:\nBad'

Evaluation

Metrics

Label  Accuracy
all    0.5970
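The reported accuracy is simply the fraction of test examples whose predicted label matches the gold label. A minimal sketch, using hypothetical predictions and gold labels (0 = "Bad", 1 = "Good"):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    assert len(preds) == len(labels)
    correct = sum(p == g for p, g in zip(preds, labels))
    return correct / len(labels)

# Hypothetical predictions vs. gold labels for illustration only.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]
print(accuracy(preds, labels))  # 0.7
```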

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_newrelic_gpt-4o_improved-cot-instructions_chat_few_shot_only_reasoning_17")
# Run inference (triple quotes keep the multi-line input a valid string literal)
preds = model("""Reasoning:
The answer is accurately grounded in the provided document and directly addresses the question without deviating into unrelated topics. The email address for contacting regarding travel reimbursement questions is correctly cited from the document.

Final evaluation: Good""")

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    17   49.2462  126

Label  Training Sample Count
0      32
1      33

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (5, 5)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
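A back-of-the-envelope sketch of how these settings relate to the step counts in the Training Results table. Assuming SetFit's pair sampling generates num_iterations positive and num_iterations negative pairs per training sentence (an approximation of the library's behavior, not its exact implementation), the 65 labeled examples yield about 2,600 contrastive pairs, or roughly 163 optimizer steps per epoch at batch size 16:

```python
import math

# Label counts from the Training Set Metrics table.
n_samples = 32 + 33

# Approximate pair count: one positive and one negative pair
# per sentence per iteration (num_iterations = 20).
num_iterations = 20
pairs = 2 * num_iterations * n_samples

batch_size = 16
steps_per_epoch = math.ceil(pairs / batch_size)
print(pairs, steps_per_epoch)  # 2600 163
```

This matches the log below, where epoch 1.0 is reached at around step 163 (step 150 corresponds to epoch 0.9202).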

Training Results

Epoch Step Training Loss Validation Loss
0.0061 1 0.237 -
0.3067 50 0.2667 -
0.6135 100 0.2263 -
0.9202 150 0.0511 -
1.2270 200 0.004 -
1.5337 250 0.0024 -
1.8405 300 0.0019 -
2.1472 350 0.0019 -
2.4540 400 0.0017 -
2.7607 450 0.0015 -
3.0675 500 0.0014 -
3.3742 550 0.0014 -
3.6810 600 0.0013 -
3.9877 650 0.0013 -
4.2945 700 0.0013 -
4.6012 750 0.0013 -
4.9080 800 0.0012 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Model size: 109M parameters (F32, stored as Safetensors)