---
license: mit
datasets:
  - HuggingFaceFW/fineweb
language:
  - en
pipeline_tag: text-generation
widget:
  - text: He is a doctor. His main goal is
    example_title: ' to help people.'
  - text: My name is Merve and my favorite
    example_title: activity is reading.
---

# GPT3

Welcome to the GPT3 repository! This project is an attempt to recreate the architecture and approach of the original OpenAI GPT-3 paper. The repository includes scripts for training, fine-tuning, and running inference with a GPT-3-like model using PyTorch and the Hugging Face Transformers library. This repo hosts the weights of my models' dev checkpoints. You can always download a checkpoint folder, paste its path into inference.py, and chat with the model, or load it directly as sketched below.
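
For example, here is a minimal loading-and-generation sketch with Transformers. It assumes the checkpoint is in a Transformers-compatible format (the hosted inference widget suggests it is) and that the Hub repo id is `k050506koch/GPT3-dev-125m-0612`; a downloaded checkpoint folder path works in its place:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; replace with a local checkpoint folder path if you
# downloaded one, as inference.py does.
checkpoint = "k050506koch/GPT3-dev-125m-0612"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

# One of the widget prompts from the metadata above.
prompt = "He is a doctor. His main goal is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=32,
        do_sample=True,
        top_k=50,
        temperature=0.8,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```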

You can find all of the code on GitHub.

Note: This model has 125 million parameters and was trained on 3.6B tokens. (It is, of course, heavily undertrained; it is meant as a technology demonstrator.)

Note 2: This checkpoint was released on 06/12/2024 and was trained for longer (batch size 12, gradient accumulation 4, 512-token context, 600,000 steps; see the sketch below for how these numbers fit together). It scores 27.65% on MMLU, which is slightly above the 25% random-guess baseline.
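
As a rough sanity check, here is a hypothetical sketch relating those hyperparameters to the token count above. The dict keys are illustrative, not the repository's actual config, and it assumes the 600,000 steps count micro-batches:

```python
# Hypothetical names; not the repository's actual configuration.
train_config = {
    "batch_size": 12,              # sequences per micro-batch
    "grad_accumulation_steps": 4,  # optimizer update every 4 micro-batches
    "context_length": 512,         # tokens per sequence
    "steps": 600_000,              # assumed to count micro-batches
}

# 12 * 512 * 600,000 = 3,686,400,000 tokens, consistent with the
# ~3.6B tokens quoted above.
tokens_seen = (train_config["batch_size"]
               * train_config["context_length"]
               * train_config["steps"])
print(f"{tokens_seen:,} tokens seen")  # 3,686,400,000 tokens seen
```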

## Contributing

Contributions are welcome! I'm just a student interested in AI, so my code may be incorrect or have logical issues. Please open an issue or submit a pull request with any improvements or bug fixes; I'll be happy to review them.

## License

This project is licensed under the MIT License. See the LICENSE file for details. Anyone may use and modify this code at their discretion.

## Acknowledgements

Thanks to OpenAI, Hugging Face, and PyTorch for making this project possible!