prithivMLmods
committed on
Update README.md
README.md CHANGED
@@ -8,4 +8,7 @@ pipeline_tag: text-generation
 library_name: transformers
 tags:
 - text-generation-inference
----
+---
+# **QwQ-R1-Distill-7B-CoT**
+
+QwQ-R1-Distill-7B-CoT is based on the LLaMA model and distilled from DeepSeek-R1-Distill-Qwen-7B. It has been fine-tuned on long chain-of-thought reasoning data and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to applications such as instruction-following, text generation, and complex reasoning.
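
A minimal usage sketch with the Transformers text-generation API; the repo ID `prithivMLmods/QwQ-R1-Distill-7B-CoT`, the dtype, and the chat-style prompt below are assumptions for illustration, not details stated in this commit:

```python
# Sketch: load the model and run a chain-of-thought style prompt.
# The repo ID is assumed from the card name and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/QwQ-R1-Distill-7B-CoT"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumed precision
    device_map="auto",
)

# Chat template is assumed to be inherited from the distilled base model.
messages = [
    {"role": "user",
     "content": "A train travels 120 km in 2 hours. What is its average speed? Think step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```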