
Problem loading YOLOv11 on Android Studio using the Interpreter API and GPU delegate #997

Open
AndrewQianNorthernVue opened this issue Mar 12, 2025 · 0 comments


I converted a YOLOv11 model to the .tflite format using Ultralytics' export function. I then attempted to load it with Interpreter(model, options), where the options had a GPU delegate attached:

    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.gpu.GpuDelegate

    // Attach the GPU delegate to the interpreter options.
    val delegate = GpuDelegate()
    val options = Interpreter.Options().addDelegate(delegate)
    interpreter = Interpreter(model, options)

This DID NOT work for YOLOv11 but DID work for YOLOv8, which I converted the exact same way. I also tried converting to ONNX and then to TFLite, but Interpreter(model, options) still stalls (it simply never returns) for YOLOv11. I suspect this is due to unsupported operations; I have inspected the model in Netron, but that is quite tedious.
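For now, the only workaround I have is to build the GPU-backed interpreter on a background coroutine and give up after a timeout, since the constructor hangs instead of throwing. This is a rough sketch of my own workaround, not an official TFLite API; the 5-second timeout and the 4-thread CPU fallback are arbitrary choices:

    import kotlinx.coroutines.*
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.gpu.GpuDelegate
    import java.nio.ByteBuffer

    // Attempt the GPU-delegate interpreter on a detached background coroutine and
    // fall back to CPU if the constructor has not returned after a timeout
    // (it never returns for my YOLOv11 export).
    suspend fun loadInterpreterWithFallback(model: ByteBuffer): Interpreter {
        val gpuAttempt = CoroutineScope(Dispatchers.IO).async {
            Interpreter(model, Interpreter.Options().addDelegate(GpuDelegate()))
        }
        // await() is cancellable, so the timeout fires even though the native
        // constructor keeps hanging on its own thread (which then simply leaks).
        return withTimeoutOrNull(5_000) { gpuAttempt.await() }
            ?: Interpreter(model, Interpreter.Options().setNumThreads(4))
    }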

I would like to know if this is a known issue and if there is a fix. Here is some additional context:

  1. I attempted to run my YOLOv11 model on the CPU only, using options.setNumThreads(4) instead of the delegate code above. It worked just fine.

  2. I attempted to use the GPU for YOLOv8, which was converted the exact same way; it worked without issue.

  3. compatList.isDelegateSupportedOnThisDevice somehow returns false no matter which model I load, but this doesn't seem to stop my V8 model from loading and running on the GPU delegate (see the sketch after this list).

  4. My device runs a MediaTek Dimensity chip.
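
For reference, this is roughly how I use the compatibility check from point 3, assuming the standard org.tensorflow.lite.gpu.CompatibilityList API (whether bestOptionsForThisDevice is available depends on the TFLite version):

    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.gpu.CompatibilityList
    import org.tensorflow.lite.gpu.GpuDelegate

    // Build interpreter options, preferring the GPU delegate when the
    // compatibility list says the device supports it.
    fun buildOptions(): Interpreter.Options {
        val compatList = CompatibilityList()
        return if (compatList.isDelegateSupportedOnThisDevice) {
            // Use the delegate options recommended for this device.
            Interpreter.Options().addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
        } else {
            // CPU fallback, matching point 1 above.
            Interpreter.Options().setNumThreads(4)
        }
    }

Oddly, this check reports "unsupported" on my device even though the YOLOv8 model runs fine when I add the GPU delegate unconditionally.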
