doberst committed
Commit
89ed260
1 Parent(s): 8133af8

Upload README.md

Files changed (1)
  1. README.md +18 -12
README.md CHANGED
@@ -1,20 +1,14 @@
  ---
  license: apache-2.0
  inference: false
- tags: [green, p1, llmware-fx,ov]
  ---

- # slim-sentiment-ov

- <!-- Provide a quick summary of what the model is/does. -->

- **slim-sentiment-ov** is an OpenVino int4 quantized version of slim sentiment 1B, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
-
- [**slim-sentiment**](https://huggingface.co/llmware/slim-sentiment) is a function-calling specialized model finetuned to evaluate sentiment and return a python dictionary with a sentiment key and the classification value, e.g., "positive", "negative", or "neutral".
-
- Get started right away with [OpenVino](https://github.com/openvinotoolkit/openvino)
-
- Looking for AI PC solutions and demos, contact us at [llmware](https://www.llmware.ai)


  ### Model Description
@@ -22,16 +16,28 @@
  - **Developed by:** llmware
  - **Model type:** tinyllama
  - **Parameters:** 1.1 billion
- - **Model Parent:** llmware/slim-sentiment
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Uses:** Sentiment classification
  - **RAG Benchmark Accuracy Score:** NA
  - **Quantization:** int4


  ## Model Card Contact

  [llmware on hf](https://www.huggingface.co/llmware)

  [llmware website](https://www.llmware.ai)
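The removed card text above describes [**slim-sentiment**](https://huggingface.co/llmware/slim-sentiment) as a function-calling model that returns a python dictionary with a sentiment key and a classification value. A minimal sketch of that call pattern follows; it assumes the OpenVino variant is loaded through the llmware ModelCatalog in the same way as the extract model in the updated README below, and the "classify" function name and "sentiment" parameter are illustrative assumptions, not confirmed by this commit.

    from llmware.models import ModelCatalog

    # Hypothetical sketch - the catalog name "slim-sentiment-ov", the function name
    # and the params below mirror the extract example and are assumptions.
    text_passage = "The team was thrilled with the strong quarterly results."

    sentiment_model = ModelCatalog().load_model("slim-sentiment-ov")
    llm_response = sentiment_model.function_call(text_passage, function="classify", params=["sentiment"])

    # Expected shape per the removed description: {"sentiment": ["positive"]}
    print(llm_response)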
 
  ---
  license: apache-2.0
  inference: false
+ tags: [green, p1, llmware-fx, ov, emerald]
  ---

+ # slim-extract-tiny-ov

+ **slim-extract-tiny-ov** is a specialized function-calling model with a single mission: to look for values in a text, based on an "extract" key passed as a parameter. No other instructions are required beyond the context passage and the target key; the model generates a python dictionary consisting of the extract key and a list of the values found in the text, returning an empty list if the text does not provide a value for the selected key.

+ This is an OpenVino int4 quantized version of slim-extract-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.


  ### Model Description

  - **Developed by:** llmware
  - **Model type:** tinyllama
  - **Parameters:** 1.1 billion
+ - **Model Parent:** llmware/slim-extract-tiny
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
+ - **Uses:** Extraction of values from complex business documents
  - **RAG Benchmark Accuracy Score:** NA
  - **Quantization:** int4

+ ### Example Usage
+
+     from llmware.models import ModelCatalog
+
+     text_passage = "The company announced that for the current quarter the total revenue increased by 9% to $125 million."
+     model = ModelCatalog().load_model("slim-extract-tiny-ov")
+     llm_response = model.function_call(text_passage, function="extract", params=["revenue"])
+
+ Output: `llm_response = {"revenue": ["$125 million"]}`
+

  ## Model Card Contact

+ [llmware on github](https://www.github.com/llmware-ai/llmware)
+
  [llmware on hf](https://www.huggingface.co/llmware)

  [llmware website](https://www.llmware.ai)
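The model description notes that the extract key comes back with an empty list when the passage does not contain a value for it. Below is a minimal sketch of that case, reusing the ModelCatalog calls from the Example Usage section above; the "profit" key is an illustrative assumption chosen because the sample passage does not mention profit.

    from llmware.models import ModelCatalog

    # Same passage and API as the Example Usage above; "profit" is an illustrative
    # key that the passage does not answer.
    text_passage = "The company announced that for the current quarter the total revenue increased by 9% to $125 million."

    model = ModelCatalog().load_model("slim-extract-tiny-ov")
    llm_response = model.function_call(text_passage, function="extract", params=["profit"])

    # Per the card's description, a key with no supporting value in the text should
    # come back as an empty list, e.g. {"profit": []}
    print(llm_response)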