edit to clarify a misconception in the comments: this is an Instagram post, so "caption" refers to the description under the image or video
as an example, this text I am typing now is also a "caption"
just saying because someone started a debate after misunderstanding this to be about subtitles (aka "closed captions"), and that's just not the case 👍
Nope, they're still not good. I use YouTube's auto-generated subs and they 100% need an LLM to fix the mistakes.
Large language models are designed to generate text based on previous text. Transcribing audio to text can be done with a neural net, but that net isn't a large language model.
Now, you could combine the two to, say, reduce errors on mumbled words by having a generative model predict which words fit best in the unclear sentence. However, you could likely get away with a much smaller and faster net than an LLM; in fact, you might be able to get away with plain-Jane Markov chains, no machine learning necessary.
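To make the Markov-chain idea concrete, here's a minimal sketch in Python of picking between two candidate words a recognizer might have heard. The tiny corpus, the function names, and the candidate words are all made up for illustration; a real system would train on far more text and weigh the recognizer's own confidence too:

```python
from collections import Counter, defaultdict

# Word-level bigram "Markov chain": count which word tends to follow which.
def train_bigrams(corpus):
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def best_fill(follows, prev_word, candidates):
    """Pick the candidate word most likely to follow prev_word."""
    counts = follows[prev_word]
    return max(candidates, key=lambda w: counts[w])

# Hypothetical training text, just to show the mechanics.
corpus = "turn the volume up turn the lights off turn the page"
model = train_bigrams(corpus)

# Suppose the recognizer heard a mumbled word as "lights" or "likes":
print(best_fill(model, "the", ["lights", "likes"]))  # -> "lights"
```

This is exactly the kind of cheap, fast lookup the comment is getting at: no LLM, just counting.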
Point is that there is a difference between LLMs and other neural nets that produce text.
In the case of audio-to-text transcription, using an LLM would be very inefficient and slow (possibly to the point that it can't keep up with the audio at all), while a very basic text-generation net, or even just a probabilistic algorithm, would likely do the job just fine.
How would an LLM fix a mistake equivalent to something being misheard? I feel like you're misunderstanding something and could probably also use some help with your English.
Be nice (Rule 2).
Yeah, fair enough. I really did a bad job pointing that out politely.
In hindsight, I think I was trying to connect two thoughts I had about the other comment in a way that wasn't discernible to anyone but me.
what the actual fluff is up with lemmy.world accounts in this thread acting like jerks?
many such cases