mehmetkeremturkcan committed 446dd65 (verified · parent 51f02a4): Update README.md

Files changed: README.md (+4 −4)
README.md CHANGED
@@ -25,11 +25,11 @@ A merged LoRA for gemma-2-9b-it, trained using DPO datasets for creative writing
 ### How to Use
 
 ```python
-from unsloth import FastLanguageModel
+from unsloth import FastLanguageModel # we use unsloth for faster inference
 import torch
-max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
-dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
-load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
+max_seq_length = 4096
+dtype = None
+load_in_4bit = False
 
 model, tokenizer = FastLanguageModel.from_pretrained(
     model_name = "mehmetkeremturkcan/oblivionsend",
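The snippet in the diff loads the model but stops before building a prompt. Since the base model is gemma-2-9b-it, prompts passed through `tokenizer.apply_chat_template` are wrapped in gemma-2's turn markers. A minimal sketch of that format follows; `gemma2_prompt` is an illustrative helper of ours, not an API from the model card, and the tokenizer additionally prepends a `<bos>` token when encoding:

```python
# Sketch of the gemma-2 chat turn format that tokenizer.apply_chat_template
# produces for this model. gemma2_prompt is an illustrative helper, not an
# API from the model card; the tokenizer also prepends <bos> on encoding.
def gemma2_prompt(user_message: str) -> str:
    """Format one user turn and open the model's turn, gemma-2 style."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma2_prompt("Write a short scene set in an abandoned lighthouse.")
print(prompt)
```

The string returned here can be tokenized and passed to `model.generate` directly, though using the tokenizer's built-in chat template is the safer route since it stays in sync with the model's configuration.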