---
license: apache-2.0  
inference: false  
---

# bling-phi-3.5-gguf

<!-- Provide a quick summary of what the model is/does. -->

bling-phi-3.5-gguf is part of the BLING ("Best Little Instruct No-GPU") model series, RAG-instruct trained on top of a Microsoft Phi-3.5 base model, and 4_K_M quantized with GGUF for fast local inference.


### Benchmark Tests  

Evaluated against the benchmark test:   [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
1 test run (temperature=0.0, sample=False), scored with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0 points for an incorrect answer, and -1 point for a hallucination.  

--**Accuracy Score**:  **100** correct out of 100  
--Not Found Classification:  85.0%  
--Boolean:  95.0%  
--Math/Logic:  90.0%  
--Complex Questions (1-5):  4 (Above Average - multiple-choice, causal)  
--Summarization Quality (1-5):  4 (Above Average)  
--Hallucinations:  No hallucinations observed in test runs.  
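
As an illustrative reference only (not the official scoring harness), the rubric above aggregates per-question grades into the accuracy score roughly as follows:

    # illustrative sketch of the scoring rubric - not the official benchmark harness
    POINTS = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

    def accuracy_score(grades):
        # grades: one label per question in the 100-question test set
        return sum(POINTS[g] for g in grades)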

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.   

Please note that this quantized GGUF version is the one used to generate the test results, in order to replicate the most common inference environment (rather than the original PyTorch version).  

Note: compare results with [bling-phi-3-gguf](https://www.huggingface.co/llmware/bling-phi-3-gguf) and [bling-phi-2](https://www.huggingface.co/llmware/bling-phi-2-v0).

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** bling
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Microsoft Phi-3.5

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The intended use of BLING models is two-fold:

1.  Provide high-quality RAG-Instruct models designed for fact-based, no-"hallucination" question-answering in connection with an enterprise RAG workflow.

2.  Provide a family of models fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.


### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and the legal and regulatory sectors.  

BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types,
without the need for a lot of complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response. Illustrative instruction patterns are sketched below.
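
A minimal sketch of the three core instruction patterns (the passage and questions are hypothetical examples, not taken from the training or test data):

    # a short source passage provides the closed context
    context = "This Services Agreement is entered into between Acme Corp and Widget Inc, effective March 1, 2024."

    qa_instruction = "Who are the parties to the agreement?"           # question-answering
    extraction_instruction = "What is the effective date?"             # key-value extraction
    summary_instruction = "Summarize the passage in one sentence."     # basic summarization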


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.


## How to Get Started with the Model

To pull the model via API:  

    from huggingface_hub import snapshot_download           

    # download all files in the model repo to a local folder of your choice
    snapshot_download("llmware/bling-phi-3.5-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)  
    
Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog  
    
    # to load the model and make a basic inference
    model = ModelCatalog().load_model("llmware/bling-phi-3.5-gguf", temperature=0.0, sample=False)

    text_sample = "..."   # the source passage that provides the context
    query = "..."         # the question or instruction about the passage

    response = model.inference(query, add_context=text_sample)  

Details on the prompt wrapper and other configurations are in the config.json file in this repo.  
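
As an illustrative alternative path (not llmware-specific), the downloaded GGUF file can also be run directly with the llama-cpp-python bindings; the model file name below is an assumption, so check the downloaded folder for the actual .gguf file, and note that the prompt wrapper described in config.json would need to be applied manually:

    from llama_cpp import Llama

    # hypothetical file name - check the downloaded folder for the actual .gguf file
    llm = Llama(model_path="/path/on/your/machine/bling-phi-3.5.gguf", n_ctx=4096)

    # package the prompt as context + "\n" + question (see Prompt Format below)
    output = llm(text_sample + "\n" + query, max_tokens=256, temperature=0.0)
    print(output["choices"][0]["text"])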


## Prompt Format


The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
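
For example (a hypothetical passage and question, for illustration only):

    text_passage = "The total amount of the invoice is $4,500, due by July 1, 2024."
    question = "What is the due date of the invoice?"

    my_prompt = text_passage + "\n" + question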


## Model Card Contact

Darren Oberst & llmware team