Cantonese LLM using Gemma-2 2B Architecture
Welcome to the preview of the Cantonese Language Model (LLM) built on the Gemma-2 2B architecture. This model is designed to understand and generate text in Cantonese, including slang, colloquialisms, and Internet terms.
License
This project is available under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). For more details, please visit the license page.
Preview Warning
Please be advised that this version of the Cantonese LLM is a preview. Its outputs may at times be inaccurate, contain hallucinations, or be offensive to some individuals. We are continuously working to improve the model's accuracy and reduce such occurrences.
Training Infrastructure
The Cantonese LLM was trained on a Cloud TPU v3-8 VM on Google Cloud Platform, using the KerasNLP library with LoRA at a learning rate of 5e-5.
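For reference, the sketch below shows what a KerasNLP LoRA fine-tuning setup along these lines might look like. It is a minimal illustration, not the project's exact training script: the preset name, LoRA rank, sequence length, and the `train_texts` dataset are assumptions; only the learning rate of 5e-5 comes from the description above.

```python
import keras
import keras_nlp

# Data-parallel training across the 8 TPU v3 cores (Keras 3 distribution API;
# assumes the JAX backend on the TPU VM).
keras.distribution.set_distribution(
    keras.distribution.DataParallel(devices=keras.distribution.list_devices())
)

# Load the Gemma 2 2B base model; the preset name is an assumption -- check the
# keras_nlp / keras_hub preset registry for the exact identifier.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_2b_en")

# Enable LoRA on the backbone; the rank here is illustrative.
gemma_lm.backbone.enable_lora(rank=8)

# Keep sequences short to fit TPU memory (sequence length is an assumption).
gemma_lm.preprocessor.sequence_length = 512

# Compile with the learning rate quoted above (5e-5).
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# `train_texts` is a hypothetical list of Cantonese training strings.
# gemma_lm.fit(train_texts, epochs=1, batch_size=1)
```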
Training Credits
Google Cloud credits are provided for this project. #AISprint
This model was trained by Thomas Chong and Jacky Chan from Votee AI Limited; we also contribute to hon9kon9ize, the Hong Kong AI research community.
Usage Guidelines
- Ensure that you are aware of the potential for unexpected or offensive content.
- Always review and assess the model's output before using it in any application (see the inference sketch after this list).
- Provide feedback on any issues you encounter to help us improve the model.
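As one way to review outputs before deployment, the sketch below loads the adapter and generates a sample completion for manual inspection. It assumes the adapter is published in a PEFT-compatible format on top of google/gemma-2-2b; if the weights are only available in KerasNLP format, use the KerasNLP loading path instead. The prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b"
adapter_id = "hon9kon9ize/cantonese-gemma-2-2b-lora-preview20240929"

# Load the base model and attach the LoRA adapter
# (assumes the adapter is stored in PEFT format).
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Generate a sample completion and print it for manual review.
prompt = "用廣東話介紹一下香港嘅飲茶文化。"  # "Introduce Hong Kong's yum cha culture in Cantonese."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```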
Contributions
We welcome contributions from the community. If you have suggestions or improvements, please submit a pull request or open an issue in the project repository.
Disclaimer
The developers of the Cantonese LLM are not responsible for any harm or offense caused by the model's outputs. Users are advised to exercise discretion and judgment when using the model.
Thank you for exploring the Cantonese LLM. We are excited to see the innovative ways in which it will be used!
Model: hon9kon9ize/cantonese-gemma-2-2b-lora-preview20240929
Base model: google/gemma-2-2b