Spaces: Running
Update app.py
app.py
CHANGED
@@ -2,33 +2,44 @@ import streamlit as st
 2     from transformers import pipeline
 3     st.header("Ways to Improve Your Conversational Agents using 🤗 Hugging Face")
 4
 5   - st.write("There are many ways to improve your conversational agents using language models. In this blog post, I
 6
 7     st.subheader("Data Augmentation with Generative Models ✨")
 8     st.write("There are cases where you will not be allowed to keep data, you will have to start from scratch, or you will have very little data. We'll go over two use cases and see how to tackle them.")
 9     st.write("Imagine you're making a chatbot that will answer very general questions about emergency situations at home.")
10     st.write("If you have very little data, you can augment it with language models. There are regex-based tools you can use, but they tend to create bias due to repetitive patterns, so it's better to use language models for this case. A good model to use is a generative model fine-tuned on the Quora Question Pairs dataset. This dataset consists of question pairs that are paraphrases of one another, and T5 can generate a paraphrased question given a source question.")
11     st.write("Try it yourself here 👇🏻")
12   -
13     default_value = "How can I put out grease fire?"
14     sent = st.text_area("Input", default_value, height = 10)
15     outputs = generator(sent)
16     st.write("Paraphrased Example:")
17     st.write(outputs[0]["generated_text"])
18
19   - st.subheader("Add Personas to Your Conversational Agent using GPT-2")
20   -
21   -
22   -
23     st.subheader("Multilingual Models using Translation Models")
24     st.write("Scaling your chatbot across different languages is expensive and cumbersome. There are a couple of ways to make your chatbot speak another language. You can either translate the intent classification data and responses, train another model, and deploy it, or you can put translation models at the two ends. There are advantages and disadvantages to both approaches. With the first, you can assess the performance of the model and hand your responses to a native speaker for more control over what your bot says, but it requires more resources than the second. For the second, assume you're making a chatbot in English and want it to support another language, say, German. You need two models: one from German to English and one from English to German.")
25   - st.image("./
26     st.write("Your English intent classification model will sit between these two models. The German-to-English model translates the input to English; the output goes through the intent classification model, which classifies the intent and selects an appropriate response (currently in English). The response is then translated back to German, which you can either do in advance and proofread with a native speaker, or delegate directly to an English-to-German model. For this use case, I highly recommend dedicated translation models rather than sequence-to-sequence multilingual models like T5.")
27
28   -
29   -
30     sent = st.text_area("Input", default_value, height = 10)
31     outputs = translator(sent)
32     st.write("Translated Example:")
33     translated_text = translator("How are you?")
34   - st.write(outputs[0]["translation_text"])
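Outside Streamlit, the paraphrase-augmentation idea above can be sketched as a small helper. `augment_intent_examples` and the stub `fake_paraphrase` are illustrative names, not part of the app; a real run would pass the `text2text-generation` pipeline for `mrm8488/t5-small-finetuned-quora-for-paraphrasing` in place of the stub:

```python
def augment_intent_examples(examples, paraphrase, n_variants=2):
    """Expand a small intent dataset with model-generated paraphrases.

    `paraphrase` is any callable with the Hugging Face pipeline call
    shape: text in, list of {"generated_text": ...} dicts out.
    """
    augmented = list(examples)
    for text in examples:
        for out in paraphrase(text, num_return_sequences=n_variants):
            candidate = out["generated_text"].strip()
            if candidate and candidate not in augmented:  # skip exact repeats
                augmented.append(candidate)
    return augmented

# Illustration only: a stub that mimics the pipeline's return shape.
def fake_paraphrase(text, num_return_sequences=1):
    return [{"generated_text": f"{text} (variant {i})"}
            for i in range(num_return_sequences)]

data = augment_intent_examples(["How can I put out a grease fire?"], fake_paraphrase)
```

Deduplicating against the existing examples matters here: as the post notes, repetitive patterns are exactly what biases regex-based augmenters.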
 2     from transformers import pipeline
 3     st.header("Ways to Improve Your Conversational Agents using 🤗 Hugging Face")
 4
 5   + st.write("There are many ways to improve your conversational agents using language models. In this blog post, I will walk you through a couple of tricks that will improve your conversational agent.")
 6
 7     st.subheader("Data Augmentation with Generative Models ✨")
 8     st.write("There are cases where you will not be allowed to keep data, you will have to start from scratch, or you will have very little data. We'll go over two use cases and see how to tackle them.")
 9     st.write("Imagine you're making a chatbot that will answer very general questions about emergency situations at home.")
10     st.write("If you have very little data, you can augment it with language models. There are regex-based tools you can use, but they tend to create bias due to repetitive patterns, so it's better to use language models for this case. A good model to use is a generative model fine-tuned on the Quora Question Pairs dataset. This dataset consists of question pairs that are paraphrases of one another, and T5 can generate a paraphrased question given a source question.")
11     st.write("Try it yourself here 👇🏻")
12   +
13   + @st.cache
14   + def load_qqp():
15   +     model = pipeline("text2text-generation", model="mrm8488/t5-small-finetuned-quora-for-paraphrasing")
16   +     return model
17   + generator = load_qqp()
18     default_value = "How can I put out grease fire?"
19     sent = st.text_area("Input", default_value, height = 10)
20     outputs = generator(sent)
21     st.write("Paraphrased Example:")
22     st.write(outputs[0]["generated_text"])
23
24     st.subheader("Multilingual Models using Translation Models")
25     st.write("Scaling your chatbot across different languages is expensive and cumbersome. There are a couple of ways to make your chatbot speak another language. You can either translate the intent classification data and responses, train another model, and deploy it, or you can put translation models at the two ends. There are advantages and disadvantages to both approaches. With the first, you can assess the performance of the model and hand your responses to a native speaker for more control over what your bot says, but it requires more resources than the second. For the second, assume you're making a chatbot in English and want it to support another language, say, German. You need two models: one from German to English and one from English to German.")
26   + st.image("./Translation.png")
27     st.write("Your English intent classification model will sit between these two models. The German-to-English model translates the input to English; the output goes through the intent classification model, which classifies the intent and selects an appropriate response (currently in English). The response is then translated back to German, which you can either do in advance and proofread with a native speaker, or delegate directly to an English-to-German model. For this use case, I highly recommend dedicated translation models rather than sequence-to-sequence multilingual models like T5.")
28
29   + @st.cache
30   + def load_translation():
31   +     model_checkpoint = "Helsinki-NLP/opus-mt-en-fr"
32   +     model = pipeline("translation", model=model_checkpoint)
33   +     return model
34   +
35   + translator = load_translation()
36     sent = st.text_area("Input", default_value, height = 10)
37     outputs = translator(sent)
38     st.write("Translated Example:")
39     translated_text = translator("How are you?")
40   + st.write(outputs[0]["translation_text"])
41   + st.write("You can check out this [link](https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=helsinki-nlp) for available translation models.")
42   +
43   + st.subheader("Add Personas to Your Conversational Agent using GPT-2")
44   + st.write("When trained, language models like GPT-2 or DialoGPT are capable of talking like any character you want. If you have a friend-like chatbot (instead of a chatbot built for RPA), you can give your users the option to talk to their favorite character. There are a couple of ways of doing this: you can either fine-tune DialoGPT with sequences of conversation turns, such as movie dialogues, or infer with a large model like GPT-J.")
45   + st.write("You can see an [example](https://huggingface.co/docs/transformers/model_doc/dialogpt) of a chatbot that talks like Gandalf, built simply by sending a request to GPT-J through the Inference API.")
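The translation-sandwich flow the post describes can be sketched as one function. The callables here are hypothetical stand-ins: in a real app, `de_to_en` and `en_to_de` would be `pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")` and its `opus-mt-en-de` counterpart, and `classify_and_respond` would be the English intent classifier:

```python
def reply_in_german(user_text_de, de_to_en, classify_and_respond, en_to_de):
    """German input -> English -> intent response (English) -> German output.

    The two translator callables follow the Hugging Face translation
    pipeline shape: text in, [{"translation_text": ...}] out.
    """
    english_input = de_to_en(user_text_de)[0]["translation_text"]
    english_reply = classify_and_respond(english_input)
    return en_to_de(english_reply)[0]["translation_text"]

# Stubs for illustration only; they tag the text instead of translating it.
de_en = lambda t: [{"translation_text": f"EN({t})"}]
en_de = lambda t: [{"translation_text": f"DE({t})"}]
respond = lambda t: "Stay calm and call emergency services."

reply = reply_in_german("Wie lösche ich einen Fettbrand?", de_en, respond, en_de)
```

Keeping the intent model monolingual and swapping only the two translators is what makes adding a third language cheap in this design.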
|