New model from https://wandb.ai/wandb/huggingtweets/runs/wi63vy8x
- README.md +6 -6
- config.json +1 -1
- generation_config.json +1 -1
- pytorch_model.bin +1 -1
- tokenizer_config.json +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -42,20 +42,20 @@ The model was trained on tweets from Elon Musk.

 | Data | Elon Musk |
 | --- | --- |
-| Tweets downloaded | … |
+| Tweets downloaded | 3178 |
 | Retweets | 168 |
-| Short tweets | … |
-| Tweets kept | … |
+| Short tweets | 1187 |
+| Tweets kept | 1823 |

-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/…
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/d6moc5n8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

 ## Training procedure

 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk's tweets.

-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/…
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wi63vy8x) for full transparency and reproducibility.

-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/…
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wi63vy8x/artifacts) is logged and versioned.

 ## How to use
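The README links the data and the training run to W&B. A minimal sketch of pulling the run's hyperparameters, metrics, and logged artifacts through the public wandb API, using the run path from the commit title:

```python
import wandb

# Run path taken from the commit title; requires `pip install wandb`.
api = wandb.Api()
run = api.run("wandb/huggingtweets/wi63vy8x")
print(run.config)   # hyperparameters recorded during training
print(run.summary)  # final metrics
for artifact in run.logged_artifacts():  # versioned data + model artifacts
    print(artifact.name, artifact.type)
```

The "How to use" section is cut off above; huggingtweets model cards typically demonstrate generation with the transformers pipeline. A sketch, assuming the repo ID `huggingtweets/elonmusk` (not confirmed by this diff):

```python
from transformers import pipeline

# Repo ID is an assumption; substitute this checkpoint's actual Hub ID.
generator = pipeline("text-generation", model="huggingtweets/elonmusk")
for out in generator("My dream is", num_return_sequences=5, do_sample=True):
    print(out["generated_text"])
```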
config.json
CHANGED
@@ -37,7 +37,7 @@
     }
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.…
+  "transformers_version": "4.28.1",
   "use_cache": true,
   "vocab_size": 50257
 }
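The only change here is the `transformers_version` stamp, which records the library version that exported the config. A sketch of reading it back after download, assuming the same repo ID as above:

```python
from transformers import AutoConfig

# Repo ID is an assumption; the attribute mirrors the JSON key in config.json.
config = AutoConfig.from_pretrained("huggingtweets/elonmusk")
print(config.transformers_version)  # "4.28.1" as of this commit
```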
generation_config.json
CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.…
+  "transformers_version": "4.28.1"
 }
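`generation_config.json` carries the model's default decoding settings. A sketch of loading it directly (`GenerationConfig` has shipped since transformers 4.25; repo ID assumed):

```python
from transformers import GenerationConfig

# bos and eos both map to GPT-2's <|endoftext|> token.
gen_config = GenerationConfig.from_pretrained("huggingtweets/elonmusk")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```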
pytorch_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:67e286378047d2ebae98c9995b6ee265ab0329f01b9333001d2404274e336e5c
 size 510398013
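This file is a Git LFS pointer: the `oid` is the SHA-256 of the actual ~510 MB weights blob, which `git lfs pull` fetches in place of the pointer. A sketch of verifying a downloaded copy against the new oid:

```python
import hashlib
from pathlib import Path

# Hash the weights in 1 MiB chunks to avoid loading ~510 MB at once.
h = hashlib.sha256()
with Path("pytorch_model.bin").open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == "67e286378047d2ebae98c9995b6ee265ab0329f01b9333001d2404274e336e5c"
```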
tokenizer_config.json
CHANGED
@@ -1,9 +1,9 @@
 {
   "add_prefix_space": false,
   "bos_token": "<|endoftext|>",
+  "clean_up_tokenization_spaces": true,
   "eos_token": "<|endoftext|>",
   "model_max_length": 1024,
-  "special_tokens_map_file": null,
   "tokenizer_class": "GPT2Tokenizer",
   "unk_token": "<|endoftext|>"
 }
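The new `clean_up_tokenization_spaces` flag makes `decode()` tidy spaces around punctuation by default (newer transformers versions no longer write the `special_tokens_map_file` entry). A sketch, assuming the same repo ID:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/elonmusk")  # assumed ID
ids = tokenizer.encode("Hello , world !")
print(tokenizer.decode(ids))                                      # "Hello, world!"
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False))  # "Hello , world !"
```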
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:bec22cc965d25e71ab3d5398dc73729857f6c35327785038e732e10712a5ce28
 size 3579
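`training_args.bin` is a pickled `transformers.TrainingArguments` object written by the Trainer; only its hash changed here. A sketch of inspecting it locally:

```python
import torch

# weights_only=False opts out of torch's safe-unpickling default on newer
# releases; only load training_args.bin from sources you trust.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```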