Why hasn't the inference time changed? #14


Open
mmjwxbc opened this issue Mar 22, 2025 · 1 comment

@mmjwxbc

mmjwxbc commented Mar 22, 2025

```python
import torch
from transformers import EsmConfig, EsmModel
from faesm.esm import FAEsmForMaskedLM

# the latest version
self.prot_model = FAEsmForMaskedLM.from_pretrained("./esm_weights/esm2_35M").to(torch.float16) if pretrained else EsmModel(EsmConfig.from_pretrained("./esm_weights/esm2_35M"))

# the old version
self.prot_model = EsmModel.from_pretrained("./esm_weights/esm2_35M").to(torch.float16) if pretrained else EsmModel(EsmConfig.from_pretrained("./esm_weights/esm2_35M"))
```

The memory usage is also the same, and the inference time is unchanged.

@pengzhangzhi
Owner

Hi,
I'm not sure, but I wonder if it's because your 35M model is too small, so the difference is minimal. I suggest you run the test we provided and see the difference.
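A minimal sketch of such a timing comparison might look like the following. This is not the test shipped with the repo: the checkpoint name, sequence lengths, and iteration counts are illustrative, and it assumes a CUDA GPU with fp16 support and that `FAEsmForMaskedLM` accepts HF-style keyword inputs.

```python
import time

import torch
from transformers import AutoTokenizer, EsmForMaskedLM
from faesm.esm import FAEsmForMaskedLM  # import path per the faesm README

device = "cuda"
checkpoint = "facebook/esm2_t12_35M_UR50D"  # illustrative ESM-2 35M checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
fa_model = FAEsmForMaskedLM.from_pretrained(checkpoint).half().to(device).eval()
hf_model = EsmForMaskedLM.from_pretrained(checkpoint).half().to(device).eval()

@torch.no_grad()
def avg_latency(model, seq_len, n_warmup=3, n_iters=20):
    # Synthetic sequence; FlashAttention's advantage grows with sequence
    # length, so a small model on short sequences may show little difference.
    inputs = tokenizer("A" * seq_len, return_tensors="pt").to(device)
    for _ in range(n_warmup):
        model(**inputs)  # warmup: exclude CUDA init from the timing
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        model(**inputs)
    torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    return (time.perf_counter() - start) / n_iters

for seq_len in (128, 512, 1024):
    print(f"len={seq_len}: faesm {avg_latency(fa_model, seq_len):.4f}s "
          f"vs hf {avg_latency(hf_model, seq_len):.4f}s")
```

Note that without warmup iterations and `torch.cuda.synchronize()` around the timed region, CUDA's asynchronous execution can make the two models look identical regardless of the actual kernel speed.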
