Yolov5 v6.0 inference with FP16 #1540
Labels: wontfix (This will not be worked on)
Comments
For the v6.0 tag, FP16 inference is enabled by default. Here FP16 means the model weights and the inference itself run in FP16, while the input tensor is still FP32, which is the regular setup. See Line 11 in b91b15d.
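A minimal sketch of the "regular setup" described above, using NumPy rather than the YOLOv5 code itself (the weight and input arrays here are hypothetical stand-ins): weights stored in half precision, input kept in FP32, with the computation promoting to FP32.

```python
import numpy as np

# Hypothetical stand-in for a layer's weights, stored in half precision (FP16).
w = np.ones((4, 4), dtype=np.float16)

# Input tensor kept in regular single precision (FP32).
x = np.ones((1, 4), dtype=np.float32)

# NumPy's type promotion upcasts the FP16 operand, so the result is FP32.
y = x @ w
assert y.dtype == np.float32
```

The point of the sketch is only the dtype behavior: storing weights in FP16 halves their memory footprint, while mixed-precision arithmetic still produces an FP32 result.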
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Env
About this repo
Yolov5 v6.0
Your problem
For image preprocessing before inference, how can I convert my input image to FP16 format for faster inference?
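As a minimal sketch of the preprocessing step the question asks about (not the YOLOv5 pipeline itself; the image here is a random placeholder), one way to turn an HWC uint8 image into a normalized FP16 CHW batch with NumPy:

```python
import numpy as np

# Placeholder for a loaded 640x640 RGB image in HWC uint8 layout.
img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)

# Cast to half precision and normalize pixel values to [0, 1].
x = img.astype(np.float16) / np.float16(255.0)

# Reorder to CHW and add a batch dimension: (1, 3, 640, 640).
x = x.transpose(2, 0, 1)[None]

assert x.dtype == np.float16
assert x.shape == (1, 3, 640, 640)
```

Note that the FP16 input only helps if the model itself runs in half precision on hardware with FP16 support; otherwise the framework will upcast it back to FP32.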