feature: defining initial fields in the README
README.md
CHANGED
@@ -1,24 +1,64 @@
----
-license: cc-by-nc-4.0
-dataset_info:
-  features:
-  - name: text
-    dtype: string
-  - name: url
-    dtype: string
-  - name: access_date
-    dtype: timestamp[s]
-  - name: index
-    dtype: int64
-  splits:
-  - name: train
-    num_bytes: 94923250249
-    num_examples: 36100000
-  download_size: 57607451860
-  dataset_size: 94923250249
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
----
+---
+license: cc-by-nc-4.0
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  - name: url
+    dtype: string
+  - name: access_date
+    dtype: timestamp[s]
+  - name: index
+    dtype: int64
+  splits:
+  - name: train
+    num_bytes: 94923250249
+    num_examples: 36100000
+  download_size: 57607451860
+  dataset_size: 94923250249
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+---
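
The `dataset_info` block above pins down the schema this commit introduces: four columns (`text`, `url`, `access_date`, `index`) and a single `train` split of 36,100,000 examples totaling 94,923,250,249 bytes, i.e. roughly 2.6 KB of text per example on average. As a minimal sketch, the same schema expressed with the `datasets` library's typed features (the library usage is an assumption, not something stated in the card):

```python
from datasets import Features, Value

# Mirror of the `dataset_info.features` block declared in the YAML header above.
features = Features({
    "text": Value("string"),               # document body
    "url": Value("string"),                # source URL
    "access_date": Value("timestamp[s]"),  # access time, second resolution
    "index": Value("int64"),               # running example index
})

print(features)
```

`timestamp[s]` is an Arrow timestamp type, so `access_date` values decode to native Python datetimes rather than strings.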
+
+**Dataset Information**
+
+The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
+
+**Dataset developer:** Instituto de Ciência e Tecnologia Itaú (ICTi)
+
+**Dataset Creation Architecture:**
+
+**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
+
+**Dataset Release Date:** July 23, 2024.
+
+**Status:**
+
+**License:** CC BY-NC 4.0 (matching the `license: cc-by-nc-4.0` field in the YAML header).
+
+**Intended Use**
+
+**Intended Use Cases:** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including through synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
+
+**Out-of-scope**
+
+Use in any manner that violates applicable laws or regulations (including trade compliance laws); use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3.1 Community License; use in languages beyond those explicitly referenced as supported in this model card.
+
+**How to use**
+
+**Use with transformers**
+
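
The **How to use** and **Use with transformers** sections are still empty in this revision; what follows is a minimal loading sketch in the usual `datasets` pattern. The repo id `icti/<dataset-name>` is hypothetical (the card does not state the actual Hub id), and streaming is suggested only because the declared download size is ~57.6 GB:

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the dataset's actual Hub id.
REPO_ID = "icti/<dataset-name>"

# Stream the train split rather than downloading all ~57.6 GB up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Peek at the first few examples.
for example in ds.take(3):
    print(example["url"], example["access_date"])
    print(example["text"][:200])
```

Dropping `streaming=True` materializes the split locally; the `configs` block in the header already maps the default config's `train` split to files matching `data/train-*`, so no extra arguments are needed.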
+**Training Data**
+
+**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
+
+**Data Freshness:** The pretraining data has a cutoff of December 2023.
+
+**Benchmark with other Portuguese datasets**
+
+**Responsibility & Safety**
+
+**Ethical Considerations and Limitations**