Got an AssertionError while loading the CodeT5+ models (16B-Instruct, 6B, and 2B): "AssertionError: Config has to be initialized with encoder and decoder config". How can this be solved? #178
Comments
Hi @Tarak200, I am also getting the same error while running inference with the CodeT5+ 2B model. Were you able to find a solution? Thanks.
@angelocurti Hello, I haven't been able to find a solution. Any suggestions from your end will be highly appreciated. Thank you.
Creating the generation config solved my problem:

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

checkpoint = "Salesforce/codet5p-2b"
# Select a device (CPU if no GPU is available, otherwise the first GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
                                              torch_dtype=torch.float16,
                                              trust_remote_code=True).to(device)

python_function = "write code for matrix multiplication"
encoding = tokenizer(python_function, return_tensors="pt").to(device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()

# Create a GenerationConfig with custom settings
# (settings here are illustrative; adjust as needed)
generation_config = GenerationConfig(
    decoder_start_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    max_length=128,
)

outputs = model.generate(**encoding, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
@ardhiwiratamaby Thank you for providing the solution; I am now able to run inference with the 2B model.
Hello, @ardhiwiratamaby,
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "Salesforce/codet5p-2b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
torch_dtype=torch.float16,
trust_remote_code=True).to(device)
encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
I used the same code from the Hugging Face model card and encountered the following error:
AssertionError: Config has to be initialized with encoder and decoder config
How can I resolve this issue?
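The workaround reported in this thread is to pass an explicit GenerationConfig to model.generate() instead of relying on the checkpoint's own generation settings. A minimal sketch of that step is below; the token IDs used here are placeholder assumptions, and in practice they should come from the loaded tokenizer (e.g. tokenizer.bos_token_id):

```python
from transformers import GenerationConfig

# Build an explicit generation config. The concrete values below are
# illustrative assumptions, not the checkpoint's real special-token IDs;
# take the real ones from the tokenizer of the loaded model.
generation_config = GenerationConfig(
    decoder_start_token_id=1,  # assumption: use tokenizer.bos_token_id
    eos_token_id=2,            # assumption: use tokenizer.eos_token_id
    pad_token_id=0,            # assumption: use tokenizer.pad_token_id
    max_length=128,
)

# Then generate with it, e.g.:
# outputs = model.generate(**encoding, generation_config=generation_config)
print(generation_config.max_length)
```

Constructing the GenerationConfig is cheap and needs no model download, so it can be prepared before the model is loaded.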