
Saving and loading the bigcode/santacoder model on a Mac results in "ModuleNotFoundError: No module named 'transformers_modules'"

#29
by kanandk - opened

Code:
from transformers import AutoTokenizer
import torch

# Fails here with: ModuleNotFoundError: No module named 'transformers_modules'
model = torch.load("./tmp/gptj.pt")
model.eval()

tokenizer = AutoTokenizer.from_pretrained("./tmp")
device = "cpu"
inputs = tokenizer.encode("function helloWorld():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

Transformers version : 4.28.1
Torch version : 2.0.0
Python : 3.9.15

Mac M1 with OSX
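A likely cause (hedged, since the full stack trace is not shown): torch.save on the whole model object pickles a reference to the dynamically generated transformers_modules package that Transformers creates for checkpoints with custom code, and that package does not exist in a fresh Python process. A sketch of the save_pretrained/from_pretrained pattern that avoids this, using a tiny locally constructed GPT-2 as a stand-in so it runs offline; for santacoder you would load with AutoModelForCausalLM.from_pretrained("bigcode/santacoder", trust_remote_code=True) and pass trust_remote_code=True when reloading too:

```python
import tempfile

from transformers import AutoModelForCausalLM, GPT2Config, GPT2LMHeadModel

# Tiny stand-in model so the sketch runs without downloading a checkpoint.
model = GPT2LMHeadModel(GPT2Config(n_layer=1, n_head=2, n_embd=8, vocab_size=50))

with tempfile.TemporaryDirectory() as tmp:
    # Serializes config.json + weights instead of pickling the model object,
    # so no reference to 'transformers_modules' ends up on disk.
    model.save_pretrained(tmp)

    # Later (or on another machine), reload from the directory.
    # For checkpoints that ship custom modeling code (like santacoder),
    # add trust_remote_code=True here.
    reloaded = AutoModelForCausalLM.from_pretrained(tmp)
    print(type(reloaded).__name__)
```

The key design point is that save_pretrained stores weights and configuration, and from_pretrained re-instantiates the class (fetching the custom modeling code again when trust_remote_code=True), rather than relying on a pickled class reference.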

Could you please share the full stack trace?
