merve committed on
Commit 4f34d4f · 1 Parent(s): 4028614

Update app.py

Files changed (1)
  1. app.py +6 -9
app.py CHANGED
@@ -23,11 +23,10 @@ st.write("Imagine you're making a chatbot that will answer very general question
 st.write("If you have very little amount of data, you could actually augment it through language models. There are regex based tools you can use but they tend to create bias due to repetitive patterns, so it's better to use language models for this case. A good model to use is a generative model fine-tuned on [Quora Question Pairs dataset](https://www.tensorflow.org/datasets/catalog/glue?hl=en#glueqqp). This dataset consists of question pairs that are paraphrase of one another, and T5 can generate a paraphrased question given a source question. There's a similar dataset called [MRPC](https://www.tensorflow.org/datasets/catalog/glue?hl=en#gluemrpc) that assesses if one sentence is a paraphrase of another, you can choose between one of them.")
 st.write("Try it yourself here 👇🏻")
 
-
 generator = pipeline("text2text-generation", model = "mrm8488/t5-small-finetuned-quora-for-paraphrasing")
 
-default_value = "How can I put out grease fire?"
-sent = st.text_area("Input", default_value, height = 10)
+default_value_gen = "How can I put out grease fire?"
+sent = st.text_area("Input", placeholder = default_value_gen, height = 10)
 outputs = generator(sent)
 st.write("Paraphrased Example:")
 st.write(outputs[0]["generated_text"])
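For reference, the paraphrasing call this hunk wires into Streamlit can be sketched outside the app as a plain `pipeline` invocation. The model name comes from the diff above; the exact generated text will vary from run to run:

```python
from transformers import pipeline

# T5 model fine-tuned on Quora Question Pairs, as named in the diff above.
# Weights are downloaded from the Hugging Face Hub on first use.
generator = pipeline("text2text-generation",
                     model="mrm8488/t5-small-finetuned-quora-for-paraphrasing")

outputs = generator("How can I put out grease fire?")
print(outputs[0]["generated_text"])  # one paraphrased question
```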
@@ -39,12 +38,10 @@ st.write("Your English intent classification model will be between these two mod
 
 
 
-
 model_id = "Helsinki-NLP/opus-mt-en-fr"
 
-#translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
 default_value_tr = "How are you?"
-tr_input = st.text_input("Input in English", default_value_tr, key = "translation")
+tr_input = st.text_input("Input in English", placeholder = default_value_tr, key = "translation")
 outputs = query(tr_input, model_id, api_token)
 st.write("Translated Example:")
 st.write(outputs[0]["translation_text"])
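The `query` helper used here is defined elsewhere in app.py and is not shown in this diff. A minimal sketch of such a helper against the hosted Inference API might look like the following; the function name, argument order, and `api_token` handling are assumptions to match the call site above:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/"

def query(payload, model_id, api_token):
    # POST the input text to the hosted model and return the parsed JSON,
    # e.g. [{"translation_text": "..."}] for a translation model.
    headers = {"Authorization": f"Bearer {api_token}"}
    response = requests.post(API_URL + model_id,
                             headers=headers,
                             json={"inputs": payload})
    return response.json()
```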
@@ -55,8 +52,8 @@ st.subheader("Easy Information Retrieval")
 st.write("If you're making a chatbot that needs to provide information to user, you can take user's query and search for the answer in the documents you have, using question answering models. Look at the example and try it yourself here 👇🏻")
 
 qa_model = pipeline("question-answering")
-question = st.text_area("Question", default_value = "What does transformers do?", height = 5)
-context = st.text_area("Context", default_value = "🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.")
+question = st.text_area("Question", placeholder = "What does transformers do?", height = 5)
+context = st.text_area("Context", placeholder = "🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.")
 output_answer = qa_model(question = question, context = context)
 st.write("Answer:")
 st.write(output_answer["answer"])
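Outside Streamlit, the extractive question-answering step in this hunk reduces to the pipeline call below, using the same example question and context as the diff:

```python
from transformers import pipeline

# With no model argument, pipeline("question-answering") falls back to a
# default extractive QA model; it returns a span copied out of the context.
qa_model = pipeline("question-answering")

result = qa_model(
    question="What does transformers do?",
    context=("🤗 Transformers provides thousands of pretrained models to "
             "perform tasks on different modalities such as text, vision, "
             "and audio."),
)
print(result["answer"])  # a span extracted from the context, with a score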
@@ -65,4 +62,4 @@ st.subheader("Add Personas to Your Conversational Agent using GPT-2")
 st.write("When trained, language models like GPT-2 or DialoGPT is capable of talking like any character you want. If you have a friend-like chatbot (instead of a chatbot built for RPA) you can give your users options to talk to their favorite character. There are couple of ways of doing this, you can either fine-tune DialoGPT with sequences of conversation turns, maybe movie dialogues, or infer with a large model like GPT-J. Note that these models might have biases and you will not have any control over output, unless you make an additional effort to filter it.")
 st.write("You can see an [example](https://huggingface.co/docs/transformers/model_doc/dialogpt) of a chatbot that talks like Gandalf, that is done simply by sending a request to GPT-J through Inference API.")
 
-st.write("I've written the inferences in this blog post with only three lines of code, using [pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines). (yes 🤯) Check out the code of the post [here](https://huggingface.co/spaces/merve/chatbot-blog/blob/main/app.py) on how you can do it too! 🤗 ")
+st.write("I've written the inferences in this blog post with only three lines of code (🤯), using [pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) and for translation example, [Inference API](https://huggingface.co/inference-api), which you can use for building your chatbots as well! Check out the code of the post [here](https://huggingface.co/spaces/merve/chatbot-blog/blob/main/app.py) on how you can do it too! 🤗 ")
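The persona idea mentioned in this hunk (DialoGPT as a conversational model) can be sketched with a single turn, following the usage pattern from the DialoGPT model documentation linked above; the user utterance is an arbitrary example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode one user turn, terminated by the EOS token the model expects
# between conversation turns.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token,
                             return_tensors="pt")

# Generate the bot's reply and decode only the newly generated tokens.
reply_ids = model.generate(input_ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0],
                         skip_special_tokens=True)
print(reply)
```

Fine-tuning this model on a character's dialogue (movie scripts, for instance) is what gives the chatbot its persona; as the post notes, the output is uncontrolled unless you filter it.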
 