Rename OLMo model from OLMo-7B to OLMo-1B
#2 by taufiqdp - opened
README.md CHANGED
@@ -93,8 +93,8 @@ Now, proceed as usual with HuggingFace:
 import hf_olmo
 
 from transformers import AutoModelForCausalLM, AutoTokenizer
-olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")
-tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
+olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B")
+tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")
 message = ["Language modeling is "]
 inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
 # optional verifying cuda
@@ -109,12 +109,12 @@ Alternatively, with the pipeline abstraction:
 import hf_olmo
 
 from transformers import pipeline
-olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B")
+olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B")
 print(olmo_pipe("Language modeling is "))
 >> 'Language modeling is a branch of natural language processing that aims to...'
 ```
 
-Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
+Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
 The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
 
 Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.
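For reference, the updated snippets stitch together into the following end-to-end example. This is a minimal sketch assuming the renamed `allenai/OLMo-1B` repo id from this PR; the `generate` settings are illustrative (they are not part of the diff), and the 8-bit path additionally requires `bitsandbytes`, `accelerate`, and a CUDA device, as the README notes.

```python
# Minimal sketch combining the loading, quantization, and CUDA notes above.
# Assumes the renamed repo id "allenai/OLMo-1B"; generation settings are illustrative.
import hf_olmo  # registers the OLMo architecture with transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized load (the "slightly faster" option from the README);
# drop torch_dtype/load_in_8bit for a plain full-precision load.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")

message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors="pt", return_token_type_ids=False)

# The quantized model is sensitive to input typing/device, so pass only the
# token ids, moved to CUDA, as recommended in the README.
input_ids = inputs.input_ids.to("cuda")

response = olmo.generate(input_ids, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```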