tags:
- text-generation-inference
- Qwen
---

![opus.gif](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/-vC9B4g2ccchvbS00HffZ.gif)

# **Calcium-Opus-20B-v1**

Calcium-Opus-20B-v1 is built on the Qwen 2.5 architecture and is designed to enrich the reasoning capabilities of 20B-parameter models. It has proven highly effective at context understanding, reasoning, and mathematical problem-solving.

Key improvements include:
1. **Enhanced Knowledge and Expertise**: The model demonstrates significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
2. **Improved Instruction Following**: It shows significant advancements in following instructions, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and producing structured outputs, especially in JSON format.
3. **Better Adaptability**: The model is more resilient to diverse system prompts, enabling enhanced role-playing implementations and condition-setting for chatbots.
4. **Long-Context Support**: It offers long-context support of up to 128K tokens and can generate up to 8K tokens in a single output.
5. **Multilingual Proficiency**: The model supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
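
# **Quickstart with Transformers**

The snippet below is a minimal inference sketch showing how a Qwen 2.5-style instruction model such as this one can be loaded and queried with Hugging Face Transformers. The repository id `prithivMLmods/Calcium-Opus-20B-v1`, the chat-template call, and the generation settings are illustrative assumptions rather than values taken from this card.

```python
# Minimal sketch, not an official recipe: the repo id below is assumed from the
# model name and author, and the generation settings are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Calcium-Opus-20B-v1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # select bf16/fp16 automatically when the hardware supports it
    device_map="auto",    # shard the 20B weights across the available devices
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Solve: if 3x + 7 = 22, what is x? Show your steps."},
]

# Qwen-style chat models expect the conversation rendered through the chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# The card cites up to 8K generated tokens; a smaller budget keeps this test quick.
output_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```

Prompts approaching the 128K-token context window require substantially more memory, so multi-GPU or quantized deployments are typically used for long-context workloads.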