---
license: apache-2.0
language:
- en
tags:
- rk3588
- rkllm
- Rockchip
---

# TinyLlama-1.1B-Chat_rkLLM

- [Chinese Introduction](#tinyllama-11b-chinese-introduction)
- [English](#tinyllama-11b)

## TinyLlama-1.1B (Chinese Introduction)

### Introduction

TinyLlama-1.1B-Chat_rkLLM is an RKLLM model converted from [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0) and optimized for Rockchip devices. The model runs on the NPU of the RK3588.

- **Model Name**: TinyLlama-1.1B-Chat_rkLLM
- **Architecture**: Identical to TinyLlama-1.1B-Chat-v1.0
- **Publisher**: FydeOS
- **Release Date**: 2024-06-03

### Model Details

TinyLlama-1.1B-Chat-v1.0 uses exactly the same architecture and tokenizer as Llama 2. TinyLlama is compact, with only 1.1B parameters, which allows it to serve a wide range of applications that require a limited compute and memory footprint.

### User Guide

> This model is only supported on devices with a Rockchip RK3588 or RK3588S chip. Please verify your device information and make sure the NPU is available.

#### openFyde System

> Make sure you have upgraded the system to the latest version.

1. Download the model file `XXX.rkllm`.
2. Create a folder named `model/` and place the model file inside it.
3. Launch FydeOS AI and complete the relevant configuration on the settings page.

#### Other Systems

> Make sure the NPU kernel update required by RKLLM has been completed.

1. Download the model file `XXX.rkllm`.
2. Configure it by following the [official documentation](https://github.com/airockchip/rknn-llm).

### FAQ

If you run into problems, please check the issue section first; if the problem remains unresolved, open a new issue.

### Limitations and Considerations

- The model may have performance limitations in certain scenarios.
- Comply with applicable laws and regulations when using the model.
- Parameter tuning may be required to achieve the best results.

### Licence

This model is released under the same licence as [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0).

### Contact Information

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/zh/)



## TinyLlama-1.1B

### Introduction

TinyLlama-1.1B-Chat_rkLLM is an RKLLM model derived from [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0), specifically optimized for Rockchip devices. This model runs on the NPU of the RK3588 chip.

- **Model Name**: TinyLlama-1.1B-Chat_rkLLM
- **Architecture**: Identical to TinyLlama-1.1B-Chat-v1.0
- **Publisher**: FydeOS
- **Release Date**: 2024-06-03

### Model Details

TinyLlama-1.1B-Chat-v1.0, sharing the same architecture and tokenizer as Llama 2, is a compact language model with only 1.1 billion parameters. This compactness enables it to meet the needs of various applications that require limited computation and memory usage.
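
As a rough illustration of the "limited computation and memory usage" point, the sketch below estimates the weight footprint of a 1.1B-parameter model at a few common precisions. The figures are back-of-the-envelope assumptions rather than measurements of this specific `.rkllm` conversion.

```python
# Back-of-the-envelope estimate of the weight memory needed by a
# 1.1B-parameter model at a few common precisions. These are rough
# assumptions, not measurements of this particular .rkllm build.
PARAMS = 1.1e9  # approximate parameter count of TinyLlama-1.1B

for label, bytes_per_param in [("fp16", 2.0), ("int8 (w8)", 1.0), ("int4 (w4)", 0.5)]:
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{label:>9}: ~{gib:.2f} GiB of weights")
```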

### User Guide

> This model is only supported on devices with a Rockchip RK3588 or RK3588S chip. Please verify your device's chip information and ensure the NPU is operational.

#### openFyde System

> Ensure you have upgraded to the latest version of openFyde.

1. Download the model file `XXX.rkllm`.
2. Create a folder named `model/` and place the model file inside this folder (a scripted sketch of steps 1 and 2 follows this list).
3. Launch FydeOS AI and configure the settings on the settings page.
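
If you would rather script steps 1 and 2, the sketch below uses `huggingface_hub` to download the model file into a `model/` folder. The repository ID and filename shown are placeholders (assumptions); substitute the actual values for this model.

```python
# Minimal sketch of steps 1-2: fetch the .rkllm file and place it in model/.
# REPO_ID and FILENAME are placeholders; replace them with the actual
# repository ID and .rkllm filename for this model.
from pathlib import Path

from huggingface_hub import hf_hub_download

REPO_ID = "<publisher>/TinyLlama-1.1B-Chat_rkLLM"  # placeholder repository ID
FILENAME = "XXX.rkllm"                             # placeholder model filename

target_dir = Path("model")
target_dir.mkdir(exist_ok=True)

# hf_hub_download returns the local path of the downloaded file.
local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=target_dir)
print(f"Model placed at: {local_path}")
```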

#### Other Systems

> Ensure you have updated the NPU kernel related to RKLLM.

1. Download the model file `XXX.rkllm`.
2. Follow the configuration guidelines provided in the [official documentation](https://github.com/airockchip/rknn-llm); a quick NPU driver check is sketched after this list.
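
Before diving into the official documentation, it can help to confirm that the RKNPU kernel driver is present. The sketch below reads the debugfs version node commonly used on Rockchip boards; the exact path, and the minimum driver version your RKLLM runtime needs, are assumptions to verify against the rknn-llm documentation.

```python
# Quick pre-flight check: report the RKNPU kernel driver version if available.
# The debugfs node below is the one commonly referenced for Rockchip NPUs
# (reading it usually requires root); treat the path as an assumption and
# consult the official rknn-llm documentation if it is missing on your kernel.
from pathlib import Path

VERSION_NODE = Path("/sys/kernel/debug/rknpu/version")

try:
    print(f"RKNPU driver reports: {VERSION_NODE.read_text().strip()}")
except OSError:
    print("Could not read the RKNPU driver version node; make sure the NPU "
          "kernel driver required by RKLLM is installed and up to date.")
```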

### FAQ

If you encounter issues, please refer to the issue section first. If your problem remains unresolved, submit a new issue.

### Limitations and Considerations

- The model may have performance limitations in certain scenarios.
- Ensure compliance with relevant laws and regulations during usage.
- Parameter tuning might be necessary to achieve optimal performance.

### Licence

This model is licensed under the same terms as [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0).

### Contact Information

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/en/)