avacaondata committed
Commit
f8cbd5a
1 Parent(s): 9e859a3

please work, links, I want to sleep, I don't like the front end...

Files changed (1):
  article_app.py +10 -10
article_app.py CHANGED
@@ -53,11 +53,11 @@ Below you can find all the pieces that form the system. This section is minimali
  <img src="https://drive.google.com/uc?export=view&id=1_iUdUMPR5u1p9767YVRbCZkobt_fOozD">
 
  <ol>
- <li><a href="https://hf.co/IIC/wav2vec2-spanish-multilibrispeech">Speech2Text</a>: For this we fine-tuned a multilingual Wav2Vec2, as explained in the linked model card. We use this model to transcribe the audio questions (a usage sketch appears below the diff). More info: https://hf.co/IIC/wav2vec2-spanish-multilibrispeech</li>
- <li><a href="https://hf.co/IIC/dpr-spanish-passage_encoder-allqa-base">Dense Passage Retrieval (DPR) for Context</a>: Dense Passage Retrieval is a methodology <a href="https://arxiv.org/abs/2004.04906">developed by Facebook</a>, currently the SoTA for passage retrieval, that is, the task of retrieving the most relevant passages for answering a given question. You can find details about how it was trained here: https://hf.co/IIC/dpr-spanish-passage_encoder-allqa-base .</li>
- <li><a href="https://hf.co/IIC/dpr-spanish-question_encoder-allqa-base">Dense Passage Retrieval (DPR) for Question</a>: The question-encoder counterpart of the passage encoder above; the two are trained jointly (a retrieval sketch appears below the diff). For more details, go to https://hf.co/IIC/dpr-spanish-question_encoder-allqa-base .</li>
- <li><a href="https://hf.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: Reranks the candidate contexts retrieved by DPR and selects the top 5 passages for the generative model to read; it is the final filter before generation. We human-checked the answers under 3 different configurations (that's us seriously playing with our toy), as the generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own cross-encoder, was a <a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">multilingual sentence transformer</a> trained on multilingual MS MARCO. This worked reasonably well, although it was noticeably not specialized in Spanish. We then tried our own CrossEncoder, trained on our Spanish translation of MS MARCO (https://huggingface.co/datasets/IIC/msmarco_es), and it worked better than the sentence transformer. Then, looking at their rank distributions for the same passages, it occurred to us that multiplying their similarity scores element-wise might yield a less biased ranking, in which only documents both rankers agree are important appear at the top. We tried this and it gave much better results, so we kept both systems followed by the multiplication of their similarities (a sketch of this fusion appears below the diff).</li>
- <li><a href="https://hf.co/IIC/mt5-base-lfqa-es">Generative Long-Form Question Answering Model</a>: For this we used either mT5 (the one linked) or <a href="https://hf.co/IIC/mbart-large-lfqa-es">mBART</a>. This generative model receives the most relevant passages and uses them to generate an answer to the question (a generation sketch appears below the diff). More details about how we trained them are at https://hf.co/IIC/mt5-base-lfqa-es and https://hf.co/IIC/mbart-large-lfqa-es .</li>
+ <li><a href="https://huggingface.co/IIC/wav2vec2-spanish-multilibrispeech">Speech2Text</a>: For this we fine-tuned a multilingual Wav2Vec2, as explained in the linked model card. We use this model to transcribe the audio questions (a usage sketch appears below the diff). More info: https://huggingface.co/IIC/wav2vec2-spanish-multilibrispeech</li>
+ <li><a href="https://huggingface.co/IIC/dpr-spanish-passage_encoder-allqa-base">Dense Passage Retrieval (DPR) for Context</a>: Dense Passage Retrieval is a methodology <a href="https://arxiv.org/abs/2004.04906">developed by Facebook</a>, currently the SoTA for passage retrieval, that is, the task of retrieving the most relevant passages for answering a given question. You can find details about how it was trained here: https://huggingface.co/IIC/dpr-spanish-passage_encoder-allqa-base .</li>
+ <li><a href="https://huggingface.co/IIC/dpr-spanish-question_encoder-allqa-base">Dense Passage Retrieval (DPR) for Question</a>: The question-encoder counterpart of the passage encoder above; the two are trained jointly (a retrieval sketch appears below the diff). For more details, go to https://huggingface.co/IIC/dpr-spanish-question_encoder-allqa-base .</li>
+ <li><a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: Reranks the candidate contexts retrieved by DPR and selects the top 5 passages for the generative model to read; it is the final filter before generation. We human-checked the answers under 3 different configurations (that's us seriously playing with our toy), as the generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own cross-encoder, was a <a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">multilingual sentence transformer</a> trained on multilingual MS MARCO. This worked reasonably well, although it was noticeably not specialized in Spanish. We then tried our own CrossEncoder, trained on our Spanish translation of MS MARCO (https://huggingface.co/datasets/IIC/msmarco_es), and it worked better than the sentence transformer. Then, looking at their rank distributions for the same passages, it occurred to us that multiplying their similarity scores element-wise might yield a less biased ranking, in which only documents both rankers agree are important appear at the top. We tried this and it gave much better results, so we kept both systems followed by the multiplication of their similarities (a sketch of this fusion appears below the diff).</li>
+ <li><a href="https://huggingface.co/IIC/mt5-base-lfqa-es">Generative Long-Form Question Answering Model</a>: For this we used either mT5 (the one linked) or <a href="https://huggingface.co/IIC/mbart-large-lfqa-es">mBART</a>. This generative model receives the most relevant passages and uses them to generate an answer to the question (a generation sketch appears below the diff). More details about how we trained them are at https://huggingface.co/IIC/mt5-base-lfqa-es and https://huggingface.co/IIC/mbart-large-lfqa-es .</li>
  <li><a href="https://huggingface.co/facebook/tts_transformer-es-css10">Text2Speech</a>: For this we used Meta's text-to-speech model on Hugging Face, as text-to-speech classes are not yet implemented in the main branch of Transformers. This piece was a must to provide a voice-to-voice service that is almost fully accessible (a synthesis sketch appears below the diff). As future work, as soon as text-to-speech classes are implemented in Transformers, we will train our own models to replace this piece.</li>
  </ol>
 
@@ -75,11 +75,11 @@ Datasets used and created
  We uploaded, and in some cases created, datasets in Spanish to be able to build such a system (a loading sketch appears below the diff).
 
  <ol>
- <li><a href="https://hf.co/datasets/IIC/spanish_biomedical_crawled_corpus">Spanish Biomedical Crawled Corpus</a>. Used for finding answers to questions about biomedicine. (More info at https://hf.co/datasets/IIC/spanish_biomedical_crawled_corpus .)</li>
- <li><a href="https://hf.co/datasets/IIC/lfqa_spanish">LFQA_Spanish</a>. Used for training the generative model. (More info at https://hf.co/datasets/IIC/lfqa_spanish .)</li>
- <li><a href="https://hf.co/datasets/squad_es">SQUADES</a>. Used to train the DPR models. (More info at https://hf.co/datasets/squad_es .)</li>
- <li><a href="https://hf.co/datasets/IIC/bioasq22_es">BioAsq22-Spanish</a>. Used to train the DPR models. (More info at https://hf.co/datasets/IIC/bioasq22_es .)</li>
- <li><a href="https://hf.co/datasets/PlanTL-GOB-ES/SQAC">SQAC (Spanish Question Answering Corpus)</a>. Used to train the DPR models. (More info at https://hf.co/datasets/PlanTL-GOB-ES/SQAC .)</li>
+ <li><a href="https://huggingface.co/datasets/IIC/spanish_biomedical_crawled_corpus">Spanish Biomedical Crawled Corpus</a>. Used for finding answers to questions about biomedicine. (More info at https://huggingface.co/datasets/IIC/spanish_biomedical_crawled_corpus .)</li>
+ <li><a href="https://huggingface.co/datasets/IIC/lfqa_spanish">LFQA_Spanish</a>. Used for training the generative model. (More info at https://huggingface.co/datasets/IIC/lfqa_spanish .)</li>
+ <li><a href="https://huggingface.co/datasets/squad_es">SQUADES</a>. Used to train the DPR models. (More info at https://huggingface.co/datasets/squad_es .)</li>
+ <li><a href="https://huggingface.co/datasets/IIC/bioasq22_es">BioAsq22-Spanish</a>. Used to train the DPR models. (More info at https://huggingface.co/datasets/IIC/bioasq22_es .)</li>
+ <li><a href="https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC">SQAC (Spanish Question Answering Corpus)</a>. Used to train the DPR models. (More info at https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC .)</li>
  <li><a href="https://huggingface.co/datasets/IIC/msmarco_es">MSMARCO-ES</a>. Used to train the CrossEncoder in Spanish for the ranker. (More info at https://huggingface.co/datasets/IIC/msmarco_es .)</li>
  <li><a href="https://huggingface.co/datasets/multilingual_librispeech">MultiLibrispeech</a>. Used to train the Speech2Text model in Spanish. (More info at https://huggingface.co/datasets/multilingual_librispeech .)</li>
  </ol>
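Finally, the datasets listed above are all on the Hub, so loading any of them is one call with the datasets library. The split name "train" below is our assumption and may differ per dataset; check each dataset card.

```python
from datasets import load_dataset

# Two of the corpora listed above; the rest load the same way.
lfqa = load_dataset("IIC/lfqa_spanish")
bioasq = load_dataset("IIC/bioasq22_es")
print(lfqa["train"][0])
```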