edit to clarify a misconception in the comments, this is an instagram post so “caption” refers to the description under the image or video
as an example, this text i am typing now is also a “caption”
just saying because someone started a debate misunderstanding this to be about subtitles (aka “closed captions”) and that’s just not the case 👍
No, what you are thinking of is speech-to-text software; it is much older than LLMs and works in a very different way.
Yeah speech to text models have nothing to do with LLMs and their use for captioning is perfectly fine imo
Nope, they're still not good. I use YouTube's auto-generated subs and they 100% need an LLM to fix the mistakes.
Large language models are designed to generate text based on previous text. Translation from audio to text can be done via a neural net but it isn’t a Large Language Model.
Now, you could combine the two to, say, reduce errors on words that were mumbled, by having a generative model predict the words that would fit better in that unclear sentence. However, you could likely get away with a much smaller and faster net than an LLM; in fact you might be able to get away with plain-Jane Markov chains, no machine learning necessary.
Point is that there is a difference between LLMs and other neural nets that produce text.
In the case of audio-to-text transcription, using an LLM would be very inefficient and slow (possibly to the point that it can't keep up with the audio at all), and a very basic text generation net or even just a probabilistic algorithm would likely do the job just fine.
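To illustrate the "plain-Jane Markov chains" idea: a toy sketch of how bigram counts alone could pick between candidate words for a mumbled spot, no neural net involved. The tiny corpus and the `guess_unclear_word` helper here are made up for the example; a real pipeline would train on large transcript corpora.

```python
from collections import defaultdict

# Toy training corpus (purely illustrative; a real system would use
# large transcript corpora).
corpus = (
    "turn the volume up please turn the lights off "
    "please turn the volume down"
).split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def guess_unclear_word(prev_word, candidates):
    """Pick the candidate most likely to follow prev_word, based on
    bigram counts -- a stand-in for resolving a mumbled word."""
    counts = bigrams[prev_word]
    return max(candidates, key=lambda w: counts[w])

# Suppose the speech-to-text model heard either "volume" or "column"
# right after "the":
print(guess_unclear_word("the", ["volume", "column"]))  # -> volume
```

"volume" wins because it follows "the" twice in the corpus while "column" never does; that's the whole trick, and it runs in microseconds.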
How would an LLM fix a mistake equivalent to something being misheard? I feel like you're misunderstanding something and could probably also use some help with your English.
Be nice (Rule 2).
Yeah, fair enough. I really did a bad job pointing that out politely.
In hindsight, trying to fix it, I think I was connecting two thoughts I had about the other comment in a way that was not discernible by anyone other than me.
what the actual fluff is up with lemmy.world accounts in this thread acting like jerks?
many such cases
While speech-to-text software indeed predates LLMs, LLMs do it as well. I've only tried a few basic (aka free) options so no idea how well they do en masse, but the generated results were at least on par with, if not better than, YouTube's auto captions.
It might not technically be LLMs though. It could be a different type of "ai". I just can't stand the "ai" marketing when nothing they are making is actually AI, so until they pull their heads out of their asses, all "ai" models are LLMs to me.
Understandable, AI marketing now is a shitshow, but they are not even AI, I think. People just forget that tech used to do magic before "AI" existed.
It’s kind of the other way around, we’ve always had AI, it used to just basically mean a computer making some decision based on data. Like a thermostat changing the heating in response to a temperature change.
Then we got LLMs, and because they are good at pretending to have complex reasoning ability, AI as a term started to always mean "computer with near-human-level intelligence", which of course they absolutely are not.
There was a book I can't remember; its whole thesis was exactly that. "AI is whatever automates the decision-making process", not any particular group of algos.
This is a big part of it. Back when ai was first becoming big, my manager said they needed to run all my kb articles through an ai to generate link clouds or some such.
I was like umm… that’s a service this platform has always offered…? Like just because you don’t know what the kb tools do, or what our rock bottom subscription gets us, doesn’t mean I haven’t looked into it… but that also isn’t worth doing because now we only have a handful of articles in any given category because I’m good at my job…