Translation quality

#3
by Bachstelze - opened

The translation quality is really bad in some cases.
Three thoughts on improving it:

  1. Use language-specific NMT models. NLLB is great for low-resource languages, yet better models exist for most languages.
  2. Back-translation and noisy channel models
  3. Can the sentence splitter be improved? Which one is currently used?
Cohere For AI org
edited Mar 20

Hi @Bachstelze

Thanks for the valuable feedback. We understand the translation quality is not good in some cases.

The reason we went with NLLB is that, for training the Aya model, we were looking to cover the 101 languages that the mT5 model was pretrained on, so we wanted the Aya Collection to have data coverage for all of these languages.

The NLLB model provided translation coverage for most of these languages, so it made sense to go ahead with NLLB as the translation model.
Other considerations, such as the inference cost of translating into 101 languages and the license of the chosen model, were also taken into account before finalising on NLLB.

For sentence splitting, we used the sentence-splitter Python package, if I'm not mistaken.
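The choice of splitter does matter for translation quality: a naive regex split breaks sentences at abbreviations, and each bogus fragment then gets translated out of context. The stdlib-only sketch below illustrates that failure mode; `naive_split` is a made-up illustrative function, not anything from the actual Aya pipeline, which is why a dedicated package such as sentence-splitter (with its abbreviation lists) is preferable.

```python
import re

def naive_split(text):
    # Split after '.', '!' or '?' followed by whitespace -- no awareness
    # of abbreviations, so "Dr." is treated as a sentence end.
    return re.split(r'(?<=[.!?])\s+', text.strip())

text = "Dr. Smith arrived. He was late."
print(naive_split(text))
# Produces three fragments, incorrectly splitting after "Dr." --
# a dedicated splitter with an abbreviation list keeps the first
# sentence intact as "Dr. Smith arrived."
```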

I'm tagging @weiyinko who specifically worked on the translation bit for the Aya project to help provide more context and address your query.

One could consider increasing the effort and cost of the translations, since only a part of the collection is used for training anyway; that way a smaller, higher-quality corpus could be created. It would also be possible to filter out bad translations via back-translation.
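The back-translation filter mentioned above can be sketched as a round-trip check: translate the source, translate it back, and keep only pairs whose back-translation is similar enough to the original. The snippet below is a minimal stdlib illustration; `translate` and `back_translate` are hypothetical stubs standing in for real forward/reverse NMT models (e.g. NLLB), and the `difflib` similarity is a crude stand-in for a proper metric such as chrF or COMET.

```python
from difflib import SequenceMatcher

# Hypothetical stubs: in practice these would be NMT model calls.
def translate(src):       # source language -> target language
    return {"hello world": "hallo welt", "good morning": "xyz"}[src]

def back_translate(tgt):  # target language -> source language
    return {"hallo welt": "hello world", "xyz": "entirely different"}[tgt]

def round_trip_score(src):
    """Similarity between the source and its round-trip back-translation."""
    return SequenceMatcher(None, src, back_translate(translate(src))).ratio()

def filter_sources(sources, threshold=0.8):
    """Keep only sentences whose round-trip similarity clears the threshold."""
    return [s for s in sources if round_trip_score(s) >= threshold]

print(filter_sources(["hello world", "good morning"]))
# The badly translated pair ("good morning" -> "xyz") is filtered out.
```

The threshold is a tuning knob: too strict and paraphrases are discarded, too loose and garbage survives.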

Cohere For AI org

Hello @Bachstelze, yes, I think the translation can be improved. The focus of the dataset is on breadth of language coverage and extending the volume of data available across low-resource languages. The cost of translation is actually a big prohibiting factor for quality control; for example, the CNN_dailymail dataset alone took about a week on 8 A100 GPUs to translate. I agree that additional filtering and using specific NMT models would be good now that we have a baseline translation to work with and compare against.
