
TinyKAI 1B


TinyKAI 1B is a fine-tuned large language model (LLM) based on Falcon-rw-1B.
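
As a quick start, the model should load with the standard transformers text-generation API. The snippet below is a minimal sketch, not an official example: it assumes the repo id Keynote-Technology/TinyKAI-1B-v0.1, and the prompt and generation settings are illustrative.

```python
# Minimal usage sketch (assumptions: standard transformers API,
# repo id Keynote-Technology/TinyKAI-1B-v0.1, illustrative settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Keynote-Technology/TinyKAI-1B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
)

prompt = "Large language models trained on web data"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```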

Direct Use

TinyKAI 1B is best suited for research on large language models, specifically the influence of web data on their properties (fairness, safety, limitations, capabilities, etc.).

Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Limitations

TinyKAI 1B was trained on English data only and will not generate appropriately in other languages. Because it was trained on data representative of the web, it carries the stereotypes and biases commonly encountered online. In addition, TinyKAI 1B has a very low output limit (under 2,000 characters) and struggles when asked to quote online sources.
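
Given the output-length limitation, one illustrative mitigation is to bound generation length up front. The sketch below continues the quick-start example above (reusing model, tokenizer, and inputs) and assumes a rough rule of thumb of about 4 characters per English token, which is a heuristic, not a guarantee.

```python
# Illustrative continuation of the quick-start sketch above.
# Cap new tokens so the decoded output stays under ~2,000 characters,
# assuming a rough average of 4 characters per token for English text.
MAX_OUTPUT_CHARS = 2000
APPROX_CHARS_PER_TOKEN = 4

outputs = model.generate(
    **inputs,
    max_new_tokens=MAX_OUTPUT_CHARS // APPROX_CHARS_PER_TOKEN,  # ~500 tokens
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text[:MAX_OUTPUT_CHARS])  # hard cap as a final safeguard
```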

Recommendations

We recommend that users of TinyKAI 1B consider fine-tuning it for their personal use case, and that precautions be taken for any commercial use.

License and Prohibited Uses

TinyKAI-1B is governed by the Apache 2.0 license, so any use the license deems unacceptable is not permitted. In addition, for legal and ethical reasons, we specifically prohibit the use of any and all KAI models for hate speech directed at any person, group, or other target.

Model Details

Model size: 1.31B params
Tensor type: BF16 (safetensors)
