---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- rasa/command-generation-calm-demo-v1
pipeline_tag: text-generation
---

# Model Card for Command Generator

<!-- Provide a quick summary of what the model is/does. -->

This is a Dialogue Understanding (DU) model developed by Rasa. 
It can be used to power assistants built with the [Conversational AI with Language Models (CALM) approach](https://rasa.com/docs/rasa-pro/calm) developed by Rasa.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
This model takes as input a transcript of an ongoing conversation between an AI assistant and a user,
as well as structured information about the assistant's business logic.
As output, it produces a short sequence of [commands](https://rasa.com/docs/rasa-pro/concepts/dialogue-understanding#command-reference) 
(typically 1-3) from the following list:

* `StartFlow(flow_name)`
* `SetSlot(slot_name, slot_value)`
* `CorrectSlot(slot_name, slot_value)`
* `Clarify(flow_name_1, flow_name_2, ...)`
* `ChitChat`
* `KnowledgeAnswer`
* `HumanHandoff`
* `Error`

Note that this model can only produce commands to be interpreted by Rasa.
It **cannot** be used to generate arbitrary text.

The Command Generator translates user messages into this internal grammar, allowing CALM to progress the conversation.

Examples:

> I want to transfer money

`StartFlow(transfer_money)`

> I want to transfer $55 to John

`StartFlow(transfer_money), SetSlot(recipient, John), SetSlot(amount, 55)`
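
For illustration, a minimal inference sketch with 🤗 Transformers is shown below. The prompt here is only a rough approximation: in practice, Rasa renders the prompt from the conversation transcript and the assistant's flow definitions, and the repository id below is a placeholder to replace with this model's id.

```python
# Minimal inference sketch (illustrative only). The prompt format and repo id
# are placeholders; Rasa's CALM pipeline builds the real prompt from the
# conversation and flow definitions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rasa/<this-model>"  # placeholder: replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Depending on how the weights are stored (4-bit base + adapters vs. merged),
# loading may additionally require bitsandbytes and/or peft.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "Available flows:\n"
    "- transfer_money: send money to a contact (slots: recipient, amount)\n\n"
    "Conversation:\n"
    "USER: I want to transfer $55 to John\n\n"
    "Commands:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
# Expected style of output:
# StartFlow(transfer_money), SetSlot(recipient, John), SetSlot(amount, 55)
```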

- **Developed by:** Rasa Technologies
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit](https://huggingface.co./unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit)


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The Command Generator is used as part of an AI assistant developed with Rasa's CALM paradigm. 
Typical use cases include customer-facing chatbots, voice assistants, IVR systems, and internal chatbots in large organizations.

### Direct Use

This model can be used directly as part of the command generator component if the flows of your CALM assistant are similar to those used in the [rasa-calm-demo assistant](https://github.com/RasaHQ/rasa-calm-demo).
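
As a rough illustration only, wiring a self-hosted copy of the model into the command generator looks roughly like the snippet below in `config.yml`. The exact keys and supported values depend on your Rasa Pro version, and the provider, model name, and URL here are placeholders; consult the Rasa Pro documentation for the authoritative configuration.

```yaml
# Illustrative sketch only — key names vary by Rasa Pro version.
pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: self-hosted                # assumes an OpenAI-compatible server (e.g. vLLM)
      model: command-generator             # placeholder name of the deployed model
      api_base: http://localhost:8000/v1   # placeholder URL of your model server
```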

### Downstream Use [optional]

The model can also be used as a base model for further fine-tuning on your own assistant's data using the [fine-tuning recipe feature](https://rasa.com/docs/rasa-pro/building-assistants/fine-tuning-recipe#step-2-prepare-the-fine-tuning-dataset) available in Rasa Pro.

### Out-of-Scope Use

Since the model has been explicitly fine-tuned to output the command grammar, it should not be used to generate any other free-form content.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The Command Generator interprets conversations and translates user messages into commands.
These commands are processed by Rasa to advance the conversations. 
This model does not generate text to be sent to an end user, and is incapable of generating problematic
or harmful text.

However, as with any pre-trained model, its predictions are susceptible to bias.
For example, the model's accuracy varies with the language used: the authors have tested performance on English but have not evaluated the model in any other language.


## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

Trained on the `train` split of [rasa/command-generation-calm-demo-v1](https://huggingface.co./datasets/rasa/command-generation-calm-demo-v1).

### Training Procedure 

Trained using the notebook available [here](https://github.com/RasaHQ/notebooks/blob/main/cmd_gen_finetuning.ipynb) on a single A100 GPU with 80 GB of VRAM.
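
A condensed sketch of that kind of Unsloth + TRL SFT setup is shown below. The hyperparameters and the dataset text column are illustrative assumptions; the linked notebook is the authoritative reference for how this model was actually trained.

```python
# Condensed fine-tuning sketch (values are illustrative; see the linked notebook
# for the actual setup used to train this model).
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
# Attach LoRA adapters to the quantized base model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("rasa/command-generation-calm-demo-v1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption: column holding the rendered prompt + commands
    max_seq_length=4096,         # note: newer TRL versions take these via SFTConfig instead
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```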

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

Evaluated on the `test` split of [rasa/command-generation-calm-demo-v1](https://huggingface.co./datasets/rasa/command-generation-calm-demo-v1).


#### Metrics

F1 score per command type (StartFlow, SetSlot, etc.) is the main metric used to evaluate the model on the test split. 
This helps us understand which commands the model has learned well and which ones need more training.
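
One straightforward way to compute such per-command scores is sketched below. This is an illustrative script, not the exact evaluation code used for this model: it counts how often each command type appears in the gold vs. predicted command strings and derives precision, recall, and F1 per type.

```python
# Illustrative per-command-type F1 computation (not the exact evaluation script).
import re
from collections import Counter

COMMANDS = ["StartFlow", "SetSlot", "CorrectSlot", "Clarify",
            "ChitChat", "KnowledgeAnswer", "HumanHandoff", "Error"]
CMD_RE = re.compile(r"\b(" + "|".join(COMMANDS) + r")\b")

def per_command_f1(gold_strings, predicted_strings):
    """Compute F1 per command type over paired gold/predicted command strings."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in zip(gold_strings, predicted_strings):
        gold_counts = Counter(CMD_RE.findall(gold))
        pred_counts = Counter(CMD_RE.findall(pred))
        for cmd in COMMANDS:
            overlap = min(gold_counts[cmd], pred_counts[cmd])
            tp[cmd] += overlap
            fp[cmd] += pred_counts[cmd] - overlap
            fn[cmd] += gold_counts[cmd] - overlap
    scores = {}
    for cmd in COMMANDS:
        p = tp[cmd] / (tp[cmd] + fp[cmd]) if tp[cmd] + fp[cmd] else 0.0
        r = tp[cmd] / (tp[cmd] + fn[cmd]) if tp[cmd] + fn[cmd] else 0.0
        scores[cmd] = 2 * p * r / (p + r) if p + r else 0.0
    return scores

# Example usage:
gold = ["StartFlow(transfer_money), SetSlot(amount, 55)"]
pred = ["StartFlow(transfer_money)"]
print(per_command_f1(gold, pred))
```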

### Results

To be added

## Model Card Contact

If you have questions about the model, please reach out to us on the [Rasa forum](https://forum.rasa.com/c/rasa-pro-calm/36).