mrm8488 committed
Commit 8329fab
1 Parent(s): 57d29fc

Update README.md

Files changed (1)
  1. README.md +5 -17
README.md CHANGED
@@ -7,7 +7,7 @@ widget:
---

# Spanish-T5-small fine-tuned on **SQAC** for QA 📖❓
- [Google's mT5-small](https://huggingface.co/flax-community/spanish-t5-small) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) (secondary task) for **Q&A** downstream task.
+ [spanish-T5-small](https://huggingface.co/flax-community/spanish-t5-small) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) (secondary task) for **Q&A** downstream task.

## Details of Spanish T5 (small)

@@ -24,7 +24,7 @@ widget:

| Metric | # Value |
| ------ | --------- |
- | **EM** | **41.65** |
+ | **BLEU** | **41.94** |



@@ -34,8 +34,9 @@ widget:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
- model = AutoModelForCausalLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)
+ ckpt = 'spanish-t5-small-sqac-for-qa'
+ tokenizer = AutoTokenizer.from_pretrained(ckpt)
+ model = AutoModelForCausalLM.from_pretrained(ckpt).to(device)

def get_response(question, context, max_length=32):
  input_text = 'question: %s context: %s' % (question, context)
@@ -47,19 +48,6 @@ def get_response(question, context, max_length=32):

  return tokenizer.decode(output[0], skip_special_tokens=True)

- # Some examples in different languages
-
- context = 'HuggingFace won the best Demo paper at EMNLP2020.'
- question = 'What won HuggingFace?'
- get_response(question, context)
-
- context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.'
- question = 'Qué ganó HuggingFace?'
- get_response(question, context)
-
- context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.'
- question = 'Что победило в HuggingFace?'
- get_response(question, context)
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
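
A note on the committed snippet: as written it is unlikely to run, because T5 checkpoints are encoder-decoder models and `transformers` does not load them through `AutoModelForCausalLM`. Below is a minimal, self-contained sketch of the intended usage. Two details are assumptions on my part, not part of the commit: the fully-qualified Hub id `mrm8488/spanish-t5-small-sqac-for-qa` (the diff sets `ckpt` to the bare checkpoint name), and the `generate` call inside `get_response`, whose body the diff context elides. The Spanish example strings are illustrative only.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Assumed fully-qualified Hub id; the commit stores only the bare name.
ckpt = 'mrm8488/spanish-t5-small-sqac-for-qa'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# T5 is an encoder-decoder model, so it is loaded with the seq2seq auto
# class rather than the AutoModelForCausalLM shown in the diff.
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt).to(device)

def get_response(question, context, max_length=32):
    # T5-style text-to-text prompt, matching the format in the README.
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], return_tensors='pt').to(device)
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=max_length)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Illustrative example (SQAC is a Spanish extractive QA dataset).
# Context: "The Mona Lisa is an oil painting by Leonardo da Vinci."
context = 'La Gioconda es un óleo sobre tabla pintado por Leonardo da Vinci.'
# Question: "Who painted the Mona Lisa?"
question = '¿Quién pintó la Gioconda?'
print(get_response(question, context))
```

If the bare `ckpt` name from the commit is kept instead, the checkpoint files would have to exist locally under that path; loading from the Hub requires the namespaced id.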