---
datasets:
- Minami-su/toxic-sft-zh
- llm-wizard/alpaca-gpt4-data-zh
language:
- zh
- en
license: llama3
pipeline_tag: text-generation
tags:
- text-generation-inference
- code
- unsloth
task_categories:
- conversational
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
widget:
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
  example_title: "Sentiment analysis"
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
  example_title: "Coreference resolution"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
  example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
  example_title: "Reading comprehension"
---

## Model Details

### Model Description

This is my first fine-tuning example. It uses `cognitivecomputations/dolphin-2.9-llama3-8b` as the base model, fine-tuned on `Minami-su/toxic-sft-zh` and `llm-wizard/alpaca-gpt4-data-zh` to add Chinese language support.

## Training Procedure

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bTLjWTVKgXJfdc1T-roMwa3k1NIERYyC?usp=sharing)

### Training Data

**Base Model**

🐬[cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co./cognitivecomputations/dolphin-2.9-llama3-8b)

**Dataset**

- [Minami-su/toxic-sft-zh](https://huggingface.co./datasets/Minami-su/toxic-sft-zh)
- [llm-wizard/alpaca-gpt4-data-zh](https://huggingface.co./datasets/llm-wizard/alpaca-gpt4-data-zh)
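
### Prompt Format

A minimal sketch of how a prompt for this model might be assembled. This assumes the fine-tune inherits the ChatML template used by the dolphin-2.9 base model; the helper function below is illustrative, not part of any library.

```python
# Assumption: the fine-tune keeps the ChatML prompt format of the
# dolphin-2.9-llama3-8b base model. build_chatml_prompt is a
# hypothetical helper, shown for illustration only.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format one system turn and one user turn as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "请用中文介绍一下你自己。",  # "Please introduce yourself in Chinese."
)
print(prompt)
```

In practice you would pass this string (or, more robustly, use the tokenizer's own chat template if one is bundled with the checkpoint) to your inference stack rather than hand-building it.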