---
license: cc-by-sa-4.0
inference: false
---
# SLIM-SA-NER-3B
**slim-sa-ner-3b** combines two of the most popular traditional classifier functions (**Sentiment Analysis** and **Named Entity Recognition**) and reimagines them as function calls on a specialized decoder-based LLM, generating output consisting of a Python dictionary with keys corresponding to sentiment and to NER identifiers such as people, organization, and place, e.g.:
&nbsp;&nbsp;&nbsp;&nbsp;`{'sentiment': ['positive'], 'people': ['..'], 'organization': ['..'], 'place': ['..']}`
This 'combo' model is designed to illustrate the potential power of using function calls on small, specialized models, enabling a single decoder-based architecture to combine capabilities that traditionally required two separate encoder-based model architectures.
The intent of SLIMs is to forge a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs: an intuitive, flexible natural language response, without complex prompting, and with improved generalization and the ability to fine-tune to a specific domain use case.
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co./llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.
Each slim model has a 'quantized tool' version, e.g., [**'slim-sa-ner-3b-tool'**](https://huggingface.co./llmware/slim-sa-ner-3b-tool).
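As a minimal sketch (the catalog name below is assumed from the link above; verify the exact registered name in the llmware model catalog), the quantized tool version loads through llmware just like the full model, and exposes the same `function_call` interface shown in the example further below:

```python
from llmware.models import ModelCatalog

# catalog name assumed from the model card link above - verify against the llmware model catalog
slim_tool = ModelCatalog().load_model("slim-sa-ner-3b-tool")
```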
## Prompt format:

```python
function = "classify"
params = "sentiment, person, organization, place"
prompt = "<human>: " + {text} + "\n" + f"<{function}> {params} </{function}>" + "\n<bot>:"
```
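For concreteness, here is how the template expands for a sample passage (the passage text below is illustrative):

```python
function = "classify"
params = "sentiment, person, organization, place"
text = "Apple shares rose after Tim Cook's upbeat presentation in Cupertino."  # illustrative

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
print(prompt)
# <human>: Apple shares rose after Tim Cook's upbeat presentation in Cupertino.
# <classify> sentiment, person, organization, place </classify>
# <bot>:
```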
<details>
<summary>Transformers Script </summary>

```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sa-ner-3b")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sa-ner-3b")

function = "classify"
params = "sentiment, person, organization, place"

text = ("Tesla stock declined 8% in premarket trading after a poorly-received event "
        "in San Francisco yesterday, in which the company indicated a likely shortfall "
        "in revenue.")

# assemble the prompt in the expected template
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

# decode only the newly generated tokens, skipping the prompt
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)

# here's the fun part - convert the llm output string into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
```
</details>
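Since the expected output is a structured dictionary rather than free-form prose, sampling can occasionally break the structure. One variant worth trying (a suggestion, not part of the original script, reusing the `model`, `inputs`, and `tokenizer` names from the script above) is to decode greedily:

```python
# greedy decoding variant - deterministic, often safer for structured dictionary output
outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=False,
    max_new_tokens=100
)
```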
<details>
<summary>Using as Function Call in LLMWare</summary>

```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-sa-ner-3b")

text = ("Tesla stock declined 8% in premarket trading after a poorly-received event "
        "in San Francisco yesterday, in which the company indicated a likely shortfall "
        "in revenue.")

response = slim_model.function_call(text, params=["sentiment", "people", "organization", "place"], function="classify")

print("llmware - llm_response: ", response)
```
</details>
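When loaded through the llmware `ModelCatalog`, the `function_call` handler wraps the text in the prompt template shown above and attempts the dictionary conversion automatically, so the manual `ast.literal_eval` step from the Transformers script should not be needed; `response` should contain the parsed output along with the usual llmware response metadata.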
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)