---
license: other
license_link: https://huggingface.co./01-ai/Yi-6B/blob/main/LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 6B
model_type: yi
---
# Model Card for dragon-yi-6b-0.1
<!-- Provide a quick summary of what the model is/does. -->
dragon-yi-6b-0.1 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Yi-6B base model.
DRAGON models are fine-tuned with high-quality custom instruct datasets, designed for production-quality use in RAG scenarios.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct answer or blank / NF (not found), 0.0 points for an incorrect answer, and -1 point for a hallucination.
--**Accuracy Score**: **99.5** correct out of 100
--Not Found Classification: 90.0%
--Boolean: 87.5%
--Math/Logic: 77.5%
--Complex Questions (1-5): 4 (Low-Medium)
--Summarization Quality (1-5): 4 (Coherent, extractive)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
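As a rough illustration of how the rubric above rolls up into the accuracy score, the sketch below sums per-question points and averages across runs. This is not the official benchmark harness, and the per-question grades shown are hypothetical.

```python
# Hypothetical illustration of the scoring rubric above -- not the official harness.
# Grades per question: "correct", "partial" (or blank / NF), "incorrect", "hallucination".
POINTS = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def run_score(grades):
    """Sum rubric points for one test run of 100 questions."""
    return sum(POINTS[g] for g in grades)

# Two hypothetical runs of 100 graded questions each.
run_1 = ["correct"] * 99 + ["partial"]
run_2 = ["correct"] * 100

accuracy_score = (run_score(run_1) + run_score(run_2)) / 2
print(accuracy_score)  # 99.75 for these made-up grades
```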
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Yi
- **Language(s) (NLP):** English
- **License:** [Yi License](https://huggingface.co./01-ai/Yi-6B/blob/main/LICENSE)
- **Finetuned from model:** Yi-6B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of DRAGON models is three-fold:
1. Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow.
2. DRAGON models are fine-tuned on top of leading base foundation models, generally in the 6-7B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.
3. DRAGON models were trained on the same principles as the BLING models, so generally, it should be easy to "upgrade" from a BLING model in testing to a DRAGON model in production.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as
financial services and legal and regulatory industries.
DRAGON models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types,
without the need for a lot of complex instruction verbiage: provide a text passage as context, ask questions, and get clear, fact-based responses.
This model is licensed according to the terms of the license of the base model, Yi-6B, which can be found in the files of this repository, as well as
at this [link](https://huggingface.co./01-ai/Yi-6B/blob/main/LICENSE).
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with DRAGON is through direct import in transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-0.1")
model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-0.1")
```
The DRAGON model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:
```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The DRAGON model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
```text
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
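Putting the pieces together, here is a minimal end-to-end sketch. The passage, question, and generation parameters are illustrative assumptions, not values from this model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "dragon-yi-6b-0.1"  # use the full repo id on the Hugging Face Hub if loading remotely
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Illustrative closed-context prompt: a text passage followed by a question.
text_passage = ("The services agreement was signed on March 1, 2023 and has an "
                "initial term of 24 months.")
question = "What is the initial term of the agreement?"

my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    # Greedy decoding keeps the answer grounded in the passage; max_new_tokens is an assumption.
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens (the answer after "<bot>:").
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```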
## Model Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project!