---
task_categories:
- text-retrieval
- text-generation
language:
- en
- de
- it
- pt
- fa
- fr
- ja
- es
- ru
- zh
pretty_name: Preprocessed Multilingual Wikipedia
size_categories:
- 100M<n<1B
configs:
- config_name: '20240401'
data_files:
- 20240401/en/collection.jsonl
- 20240401/de/collection.jsonl
- 20240401/es/collection.jsonl
- 20240401/fa/collection.jsonl
- 20240401/fr/collection.jsonl
- 20240401/it/collection.jsonl
- 20240401/zh/collection.jsonl
- 20240401/ru/collection.jsonl
- 20240401/ja/collection.jsonl
- 20240401/pt/collection.jsonl
- config_name: '20240801'
data_files:
- 20240801/en/collection.jsonl
- 20240801/de/collection.jsonl
- 20240801/es/collection.jsonl
- 20240801/fa/collection.jsonl
- 20240801/fr/collection.jsonl
- 20240801/it/collection.jsonl
- 20240801/zh/collection.jsonl
- 20240801/ru/collection.jsonl
- 20240801/ja/collection.jsonl
- 20240801/pt/collection.jsonl
- config_name: '20241001'
data_files:
- 20241001/en/collection.jsonl
- 20241001/de/collection.jsonl
- 20241001/es/collection.jsonl
- 20241001/fa/collection.jsonl
- 20241001/fr/collection.jsonl
- 20241001/it/collection.jsonl
- 20241001/zh/collection.jsonl
- 20241001/ru/collection.jsonl
- 20241001/ja/collection.jsonl
- 20241001/pt/collection.jsonl
---

This dataset contains preprocessed and chunked Wikipedia HTML dumps in 10 languages.
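A minimal sketch of loading the data with the `datasets` library, using the config names and file layout from the YAML header above. `<repo_id>` is a placeholder for this dataset's Hub path, not a value from the card:

```python
from datasets import load_dataset

# Load one dump configuration (all 10 languages combined).
# Valid config names per the YAML above: '20240401', '20240801', '20241001'.
# "<repo_id>" is a placeholder; substitute this dataset's actual Hub path.
collection = load_dataset("<repo_id>", name="20241001", split="train")

# Alternatively, load a single language's chunks directly from its JSONL file,
# following the <dump>/<lang>/collection.jsonl layout listed in the config.
english_only = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/<repo_id>/resolve/main/20241001/en/collection.jsonl",
    split="train",
)

print(collection[0])
```

Loading a single language file avoids downloading all ten `collection.jsonl` files when only one language edition is needed.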
Refer to the following for more information:

- GitHub repository: https://github.com/stanford-oval/WikiChat
- Papers:
  - WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
  - SPAGHETTI: Open-Domain Question Answering from Heterogeneous Data Sources with Retrieval and Semantic Parsing
- Online demo: https://wikichat.genie.stanford.edu