OpenAI Whisper Accuracy (TFLite, Whisper.cpp and Large-V2)
The Whisper popularity wave continues. Many projects are appearing for
Whisper-based web services, Whisper on mobile, and so on. Some projects
modify the Whisper models and algorithms to improve speed, which raises
questions about their accuracy. Here we tested a couple of different
projects to demonstrate the effect those algorithmic modifications have
on accuracy. There is some accuracy drop, but accuracy is still extremely
impressive.
Note that the TFLite model, just 40 MB in size, demonstrates extremely
good accuracy even on complicated datasets.
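Accuracy in such tests is usually measured as word error rate (WER). As a reminder of how that metric works, here is a minimal sketch (the function name and example strings are our own illustration, not part of the test setup):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> 0.25
print(wer("turn on the light", "turn off the light"))
```

Lower is better; a WER of 0.25 means one word in four was recognized incorrectly.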
The TFLite decoding is a bit slow though, about 0.5xRT on an Intel CPU.
With modern hardware accelerators (like the M1 chip) the decoder runs
quickly enough even on mobile devices.
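To make the 0.5xRT figure concrete, here is a sketch of the real-time factor as we read it (speed relative to real time, so values below 1 are slower than real time; the numbers are illustrative, not measurements):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Decoding speed relative to real time: > 1 means faster than real time."""
    return audio_seconds / processing_seconds

# Illustrative numbers: one minute of audio decoded in two minutes is 0.5xRT.
print(real_time_factor(processing_seconds=120.0, audio_seconds=60.0))
```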
The Whisper.cpp project is still in a very early state. There are a lot
of changes in the algorithms (like the 2x speedup for audio decoding, a
questionable practice) and issues with the Python bindings (the decoder
object is not reusable, memory leaks).
We also tested the recently released Large-V2 model
(https://huggingface.co/openai/whisper-large-v2), which is claimed to be
more accurate. Surprisingly, it is visibly worse than the original V1
version, especially on short commands. Other users report this too:
||Whisper Large V1
||Whisper Large V2
The big thing in Whisper is that it uses a huge context. But the extent
to which accuracy degrades without that context is not very clear. Here
we can provide some numbers:
||Whisper Large Segmented (3-5 seconds)
||Whisper Large Whole Files
As you can see, the advantage of Whisper on long files is visible,
although probably not critical. As you also see above, recognition
accuracy on short commands is pretty bad, so it is not clear how Whisper
will perform in realtime applications.
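The segmented condition above can be sketched as follows: the waveform is cut into consecutive fixed-length chunks and each chunk is decoded independently, so the model loses the long context. The chunk length and 16 kHz sample rate here are our assumptions for illustration:

```python
def segment_samples(samples, sample_rate=16000, chunk_seconds=4.0):
    """Split a 1-D list of audio samples into consecutive fixed-length chunks."""
    step = int(sample_rate * chunk_seconds)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

# 10 seconds of (dummy) 16 kHz audio -> three chunks: 4 s, 4 s and 2 s.
audio = [0.0] * (16000 * 10)
chunks = segment_samples(audio)
print([len(c) / 16000 for c in chunks])  # [4.0, 4.0, 2.0]
```

Each chunk would then be passed to the recognizer on its own, which is what makes the whole-file numbers above an upper bound on what segmented decoding can achieve.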
Let's continue with Whisperology. There are still many, many questions
about how such a small model works so well.