---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 2793
    num_examples: 51
  - name: test
    num_bytes: 7035
    num_examples: 100
  download_size: 9918
  dataset_size: 9828
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- text-classification
tags:
- llms
- nlp
- chatbots
- prompts
pretty_name: TL (Test vs Learn) chatbot prompts
size_categories:
- n<1K
---

This dataset contains manually labeled examples used for training and testing [reddgr/tl-test-learn-prompt-classifier](https://huggingface.co./reddgr/tl-test-learn-prompt-classifier), a fine-tuned DistilBERT model that classifies chatbot prompts as either 'test' or 'learn'.
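
For quick experimentation, the classifier can be called through the Hugging Face `transformers` pipeline. The snippet below is a minimal sketch; the exact label strings it returns depend on the model's id2label mapping, which is not reproduced here, so verify them against the model card.

```python
# Minimal sketch: run the fine-tuned classifier on a single prompt.
# The returned label names depend on the model's id2label mapping
# (an assumption here); check the model card for the exact strings.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="reddgr/tl-test-learn-prompt-classifier",
)

prompt = "Explain the difference between a list and a tuple in Python."
print(clf(prompt))  # e.g. [{'label': '...', 'score': 0.97}]
```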

Prompts labeled as 'test' (1) are those where it can be inferred that the user is intentionally 'challenging' the conversational tool, either with a complicated question the user likely already knows the answer to, or with a subjective question asked to test the tool rather than to learn from it or to obtain specific information the user does not know.

An alternative naming convention for the labels is 'problem' (test) vs 'instruction' (learn). The earliest versions of the reddgr/tl-test-learn-prompt-classifier model used a zero-shot classification pipeline with those two specific terms as candidate labels: instruction (0) vs problem (1).
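
The sketch below illustrates that zero-shot setup with a generic NLI backbone; the model shown (facebook/bart-large-mnli) is only an assumption for the example and is not necessarily what the early classifier versions used.

```python
# Illustrative zero-shot classification of a prompt into the two candidate
# labels 'instruction' vs 'problem'. The backbone model is an assumption.
from transformers import pipeline

zero_shot = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

result = zero_shot(
    "Name three prime numbers larger than one million.",
    candidate_labels=["instruction", "problem"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```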

This dataset and the model are part of a project aimed at identifying metrics to quantitatively measure the conversational quality of text generated by large language models (LLMs) and, by extension, any other type of text extracted from a conversational context (customer service chats, social media posts...).

Relevant Jupyter notebooks and Python scripts that use this dataset and related datasets and models can be found in the following GitHub repository:
[reddgr/chatbot-response-scoring-scbn-rqtl](https://github.com/reddgr/chatbot-response-scoring-scbn-rqtl)

## Labels:
- **0**: Learn (instruction)
- **1**: Test (problem)
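
A minimal sketch of loading the splits with the `datasets` library is shown below; the dataset repository id in the snippet is illustrative (an assumption), so replace it with the id shown at the top of this dataset's page.

```python
# Minimal sketch: load the train/test splits and inspect one example.
# The repository id below is an assumption for illustration; use the
# actual id of this dataset on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("reddgr/tl-test-learn-prompts")  # hypothetical repo id

print(ds["train"][0])                             # {'text': '...', 'label': 0 or 1}
print(ds["train"].num_rows, ds["test"].num_rows)  # 51 and 100 per this card
```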