Perf comparison with OpenAI whisper model vs cpp model (Windows) #1527
vivekuppal started this conversation in Show and tell
3 comments
- You are using CPU-only.
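Whether the GPU is actually in play differs per backend: for whisper.cpp on Windows it depends on which release zip was downloaded (e.g. the cuBLAS build), while on the OpenAI-whisper side it comes down to PyTorch seeing CUDA. Below is a minimal sketch of the PyTorch check, assuming torch and openai-whisper are installed; the model name and device handling are illustrative, not taken from the original setup.

```python
# Quick check for the OpenAI-whisper (PyTorch) side: will transcription
# actually run on the GPU? Assumes torch and openai-whisper are installed.
import torch
import whisper

print("CUDA available:", torch.cuda.is_available())

# load_model places the model on CUDA when available; pin it explicitly here
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base.en", device=device)
print("Model device:", next(model.parameters()).device)
```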
- Thank you! I was not using the GPU binaries.
Whisper.cpp results
OpenAI results
- Any recommendations or guidance for utilizing whisper.cpp from Python?
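One straightforward route (a minimal sketch, not an official binding) is to drive the release CLI from Python via subprocess; community bindings such as pywhispercpp are another option. The executable and model paths below are placeholders, not values from this thread.

```python
# Sketch: call the whisper.cpp CLI from Python and capture its output.
# Assumes the Windows release binaries and a ggml model are already
# downloaded; both paths below are placeholders.
import subprocess

WHISPER_CPP_EXE = r"whisper.cpp\main.exe"             # release binary (placeholder path)
MODEL_PATH = r"whisper.cpp\models\ggml-base.en.bin"   # ggml model file (placeholder path)

def transcribe(wav_path: str) -> str:
    """Run whisper.cpp on a 16 kHz mono WAV file and return its stdout."""
    result = subprocess.run(
        [WHISPER_CPP_EXE, "-m", MODEL_PATH, "-f", wav_path],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(transcribe(r"samples\jfk.wav"))
```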
- Transcription using the OpenAI whisper model Python bindings and transcription using whisper.cpp showed very similar performance characteristics; my expectation was that whisper.cpp would be better.
I took the binaries from Release 1.5 and transcribed the sample file jfk.wav with my OSS project Transcribe.
The results in the two cases were comparable, while my initial expectation had been that this project would outperform the base models published by OpenAI.
I am sharing the results of only one run, though I did 10+ runs with consistent results. The results are also consistent with WAV files other than the sample file in the repo.
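For reference, the OpenAI-whisper side of a run like this can be timed with just a few lines. This is a sketch assuming the openai-whisper package and the base.en model; the sample path is a placeholder.

```python
# Minimal sketch of the OpenAI-whisper side of the comparison: load the
# base English model and time a transcription of the sample file.
# Assumes the openai-whisper package is installed; path is a placeholder.
import time
import whisper

model = whisper.load_model("base.en")

start = time.perf_counter()
result = model.transcribe("samples/jfk.wav")
elapsed = time.perf_counter() - start

print(f"Transcription: {result['text'].strip()}")
print(f"Elapsed: {elapsed:.2f} s")
```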
whisper.cpp Results
Transcribe Results
Output of bench for my machine is below
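The bench numbers come from the bench tool that builds alongside whisper.cpp. A sketch of invoking it from Python, assuming bench.exe is available next to the other release binaries; the paths and thread count are placeholders, not values from the original post.

```python
# Sketch: run the whisper.cpp bench tool and let it print its timings.
# Paths and thread count are placeholders, not values from the original post.
import subprocess

subprocess.run(
    [r"whisper.cpp\bench.exe", "-m", r"whisper.cpp\models\ggml-base.en.bin", "-t", "8"],
    check=True,
)
```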