How to quantize img_stage_lt_d.onnx and bev_stage_lt_d.onnx to INT8? #19

Open
avBuffer opened this issue Sep 9, 2023 · 2 comments

Comments


avBuffer commented Sep 9, 2023

Hey,
How can I quantize the img_stage_lt_d.onnx and bev_stage_lt_d.onnx models to INT8?
Thanks!

LCH1238 (Owner) commented Sep 13, 2023

I will later publish a unified engine model built with explicit PTQ (post-training quantization); I am currently cleaning up the code.


taoxunqiang commented Mar 29, 2024

Is there a plan to release the code?

Thanks!
