
Training procedure

The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig sketch is shown after the list):

  • quant_method: QuantizationMethod.BITS_AND_BYTES
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: bfloat16
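
For reference, this corresponds roughly to the following transformers BitsAndBytesConfig. This is a minimal sketch reconstructed from the list above, not code from the original training script:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Loading the base model in 4-bit with this config (the base model name is
# taken from the inference code below)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             quantization_config=bnb_config,
                                             device_map="auto")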

Framework versions

  • PEFT 0.4.0

Inference code

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

adapter_id = "SalehAhmad/Mistral-7B-Instruct-v0.1-JSON-Test_Generation-2-Epoch"

# The adapter config and tokenizer take no dtype/device arguments
config = PeftConfig.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Load the base model, then attach the fine-tuned LoRA adapter
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Pad with the EOS token so padded generation works
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
tokenizer.pad_token_id = tokenizer.eos_token_id

# The text-generation pipeline has no `stop` keyword; stopping at "###Human:"
# is handled below. device_map is omitted because the model is already
# dispatched across devices.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=2046, return_full_text=False)
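
# Optional stop handling (an illustrative addition, not from the original
# card): generate() accepts a StoppingCriteriaList, and the text-generation
# pipeline forwards extra keyword arguments to generate(). This criterion
# stops once a second "###Human:" appears, the first being part of the
# prompt itself.
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSecondHuman(StoppingCriteria):
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return text.count("###Human:") > 1

stop_criteria = StoppingCriteriaList([StopOnSecondHuman(tokenizer)])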

Sys_OBJECTIVE = '''You are a chatbot who is helping to curate datasets. When given an input context paragraph, you have to generate only one MCQ question,
its options, and its actual answer. You have to follow the given JSON format for generating the question, options, and answer.
Do not use words like "in this paragraph", "from the context", etc. The questions should be independent of any other question.'''

Sys_SUBJECTIVE = '''You are a chatbot who is helping to curate datasets. When given an input context paragraph, you have to generate only one subjective question
and its actual answer. You have to follow the given JSON format for generating the question and answer.
Do not use words like "in this paragraph", "from the context", etc. The questions should be independent of any other question.'''

Prompt = '''And in the leadership styles it will be that is the is the there will be the changing into the leadership styles and in the leadership styles it will be that is the the approach will be for doing this type of the research which has been adopted in this paper is that is the degree of the correlation and its statistical significance between the self-assess leadership behavior and the 360 degree assessment of performance, evidence is presented showing that results vary in different context.'''
 
Formatted_Prompt_OBJECTIVE = f"###Human: {Sys_OBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "

Formatted_Prompt_SUBJECTIVE = f"###Human: {Sys_SUBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "
print(Formatted_Prompt_OBJECTIVE)
print(Formatted_Prompt_SUBJECTIVE)

response = pipe(Formatted_Prompt_OBJECTIVE, stopping_criteria=stop_criteria)
print(response[0]["generated_text"])
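
The model is trained to return JSON, so the generated text can be parsed directly. A minimal sketch, assuming the output is a single valid JSON object (the exact schema depends on the training data):

import json

# Drop anything the model emitted after starting a new "###Human:" turn,
# then parse the remaining JSON body.
generated = response[0]["generated_text"].split("###Human:")[0].strip()
qa = json.loads(generated)
print(qa)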