license: apache-2.0
library_name: transformers

Mistral-7B-Instruct-SQL-ian

About the Model

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3, trained on the https://huggingface.co./datasets/gretelai/synthetic_text_to_sql dataset.

  • Model Name: Mistral-7B-Instruct-SQL-ian

  • Developed by: kubwa

  • Base Model Name: mistralai/Mistral-7B-Instruct-v0.3

  • Base Model URL: Mistral-7B-Instruct-v0.3

  • Base Model Description: The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3, which has the following changes compared to Mistral-7B-v0.2:

    • Extended vocabulary to 32768
    • Supports v3 Tokenizer
    • Supports function calling
  • Dataset Name: gretelai/synthetic_text_to_sql

  • Dataset URL: synthetic_text_to_sql

  • Dataset Description: gretelai/synthetic_text_to_sql is a rich dataset of high quality synthetic Text-to-SQL samples, designed and generated using Gretel Navigator, and released under Apache 2.0.
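The training data can be inspected directly with the datasets library. The snippet below is a minimal sketch; the "train" split name is an assumption based on the public dataset card and is not part of this model card.

from datasets import load_dataset

# Load one sample from the synthetic text-to-SQL data
# (split name "train" is an assumption; adjust if the dataset card differs).
ds = load_dataset("gretelai/synthetic_text_to_sql", split="train")
print(ds[0])  # a single sample: natural-language question, schema context, and target SQL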

Prompt Template

<s>
### Instruction:
{question}

### Context:
{schema}

### Response:
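To keep prompts consistent with this template, you can assemble them with a small helper such as the hypothetical build_prompt below (not part of the released code); it simply fills the Instruction and Context slots.

# Hypothetical helper (not part of this repository) that fills the template above.
def build_prompt(question: str, schema: str) -> str:
    return (
        "<s>\n"
        "### Instruction:\n"
        f"{question}\n"
        "\n"
        "### Context:\n"
        f"{schema}\n"
        "\n"
        "### Response:\n"
    )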

How to Use It

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("kubwa/Mistral-7B-Instruct-SQL-ian")
tokenizer = AutoTokenizer.from_pretrained("kubwa/Mistral-7B-Instruct-SQL-ian", use_fast=False)

text = """<s>
### Instruction:
What is the total volume of timber sold by each salesperson, sorted by salesperson?

### Context:
CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT); INSERT INTO salesperson (salesperson_id, name, region) VALUES (1, 'John Doe', 'North'), (2, 'Jane Smith', 'South'); CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE); INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES (1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');

### Response:
"""

# Move the model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Tokenize the prompt and move the tensors to the same device as the model
inputs = tokenizer(text, return_tensors="pt")
inputs = {key: value.to(device) for key, value in inputs.items()}

# Generate the SQL completion
outputs = model.generate(**inputs, max_new_tokens=300, pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
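The decoded text echoes the full prompt before the answer. If you only want the generated SQL, a simple post-processing step (an assumption, not part of the original example) is to split on the "### Response:" marker.

# Keep only the text generated after the "### Response:" marker.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
sql = decoded.split("### Response:")[-1].strip()
print(sql)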

Example Output

### Instruction:
What is the total volume of timber sold by each salesperson, sorted by salesperson?

### Context:
CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT); INSERT INTO salesperson (salesperson_id, name, region) VALUES (1, 'John Doe', 'North'), (2, 'Jane Smith', 'South'); CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE); INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES (1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');

### Response:
SELECT salesperson.name, SUM(timber_sales.volume) as total_volume FROM salesperson JOIN timber_sales ON salesperson.salesperson_id = timber_sales.salesperson_id GROUP BY salesperson.name ORDER BY total_volume DESC;
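Because the Context block is plain DDL/DML, the generated query can be sanity-checked against an in-memory SQLite database. The snippet below is an illustrative check using Python's standard sqlite3 module, not part of the model card; with the example data it returns John Doe with a total volume of 270.0 and Jane Smith with 180.0.

import sqlite3

# Build the example schema and data from the ### Context block, then run the generated SQL.
schema = (
    "CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT); "
    "INSERT INTO salesperson (salesperson_id, name, region) VALUES "
    "(1, 'John Doe', 'North'), (2, 'Jane Smith', 'South'); "
    "CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE); "
    "INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES "
    "(1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');"
)
query = (
    "SELECT salesperson.name, SUM(timber_sales.volume) as total_volume "
    "FROM salesperson JOIN timber_sales "
    "ON salesperson.salesperson_id = timber_sales.salesperson_id "
    "GROUP BY salesperson.name ORDER BY total_volume DESC;"
)

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
print(conn.execute(query).fetchall())  # [('John Doe', 270.0), ('Jane Smith', 180.0)]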

Hardware and Software

  • Training Hardware: 4 Tesla V100-PCIE-32GB GPUs

License

  • Apache-2.0