

Uh, none of them. The troll they are feeding is Elon Musk; the fallacy is that Twitter is an open forum where your engagement “makes a difference.” It’s not. It’s an algorithmic feed.
Notice the engagement.
240K views between the top two.
0.6K for the shot back.
Come on… Rule #1. Don’t feed the trolls. Get off Twitter.
I’m sure they rationally assume Trump is totally unfamiliar with that policy.
the slop that Ubisoft craps out
I’d, uh, argue there are some exceptions, like the better asscreed games or the anno series.
Yeah. Valve’s 30% cut is greed. So is their (alleged) anticompetitive behavior of forcing price parity with other stores (aka devs can’t price their games cheaper on other stores than on Steam).
I mean, I like their store. I like most of their behavior, but I am also waiting for the hammer to drop, and everyone else should be too.
Yeah, that system doesn’t exist yet though, and the parts of the brain responsible for language aren’t static. They change over time, as they’re used, based on the inputs they get. They adapt. They react to the environment they’re in.
We are getting close to a more blurry line, especially if LLMs “train” themselves during inference and are part of larger systems, but it’s not there yet.
Yeah.
I sorta misread your post: these bots can indeed be twisted, or “jailbroken,” during conversation to a pretty extreme extent. The error is assuming they are objective in the first place, I suppose.
Base models are extremely interesting to play with, as they haven’t been tuned for conversation or anything. They do only one thing: complete text blocks, that’s it. It is fascinating to see how totally “raw” LLMs trained only on a jumble of data (before any kind of alignment) guess how text should be completed. They’re actually quite good for storytelling (aka completing long blocks of novel-format text) because they tend to be more “creative,” unfiltered, and less prone to gpt-isms than the final finetuned models. And instead of being instructed how to write, they pick it up purely from the novel’s context.
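If you want to poke at one yourself, here’s a minimal sketch using Hugging Face transformers; gpt2 is just a stand-in for whatever raw base checkpoint you’d actually use, and the sampling settings are arbitrary:

```python
# Playing with a raw base model: no chat template, no system prompt, no alignment.
# You hand it the opening of a "novel" and it just keeps completing the text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any base (non-instruct) checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The rain had not stopped for three days when the letter finally arrived."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```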
Yeah, they align it in training, but as they’ve discovered, that only goes so far.
It’s not though.
To me, one fundamental aspect of life (much less consciousness) is reacting to stimuli, and current LLMs don’t. Their weights, their “state,” are completely static during a conversation. Nothing changes them.
They are incredibly intelligent tools, but any conversation you have with one about its own consciousness is largely a hallucination, often drawing on our sci-fi/theoretical machinations about AI, brought out by a sycophancy bias trained into most models.
Grok and Gemini are both making that up. They have no awareness of anything that’s “happened” to them. Grok cannot be tweaked because it starts from a static base with every conversation.
The important part is: Grok has no memory.
Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it,” it made that up.
A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
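If the analogy helps, here’s a toy sketch (my own illustration, nothing like xAI’s actual stack) of why nothing carries over: the “model” is a frozen, pure function, and all the context lives in the prompt that gets re-sent every turn.

```python
# Toy illustration: a chat "LLM" as a frozen, pure function. Nothing below ever mutates the weights.
from typing import List

FROZEN_WEIGHTS = {"greeting": "Hello!"}  # stands in for billions of parameters fixed at training time

def run_model(weights: dict, prompt: str) -> str:
    # Pure function of (weights, prompt); no side effects, no learning.
    return f"{weights['greeting']} You sent {len(prompt)} characters of context."

def chat_turn(history: List[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)  # the whole conversation gets re-sent every single turn
    reply = run_model(FROZEN_WEIGHTS, prompt)
    history.append(f"Assistant: {reply}")
    return reply

# Two separate chats: the second starts from the exact same frozen state, unaware of the first.
chat_a: List[str] = []
chat_turn(chat_a, "Please remember my name is Alice.")
chat_b: List[str] = []
print(chat_turn(chat_b, "What is my name?"))  # chat_a never touched the weights, so there is nothing to recall
```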
…Now, one can wring their hands with whatabouts/complications (Training on Twitter! Grounding! Twitter RAG?) but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.
That essentially wastes electricity for OpenAI (assuming you aren’t paying for the response), and it’s “filler” data for training on.
Was just a guess. The AI is still shitty, lol.
What I am trying to get at is the misconception here: AI actually can generate novel content that isn’t in its training dataset. An astronaut riding a horse is the classic test case, which did not exist anywhere before diffusion models, and it should be able to extrapolate a fuller wine glass. It’s just too dumb to do it, lol.
This is a misconception. Sort of.
I think the problem is misguided attention. The phrase “glass of wine” and all the previous context are so strong that they “blow out” the “full glass of wine” as the actual intent. Also, LLMs are still pretty crap at multi-turn multimedia understanding. They are especially prone to repeating previous conversation.
It should work better if you word it like “an overflowing glass with wine splashing out.” And clear the history.
I hate to ramble, but this is what I hate most about the way big corpos present “AI.” They are narrow tools the user needs to learn how to operate, like Photoshop or something, not magic genie lamps like they are trying to sell.
Mistral likely does “prompt enhancement,” aka feeding your prompt to an LLM first and asking it to expand it with more words.
So internally, a Mistral text LLM is probably writing out “sure! Here’s a long prompt with no dog: …” and then that part is fed to the image generator.
Other “LLMs” are truly multimodal and generate image output, hence they still get the word “dog” in the input.
The AI Horde actually supports negative prompts though, so it could do this.
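Roughly, the pipeline I’m picturing looks like this (toy stand-ins I made up, not Mistral’s or the Horde’s actual code):

```python
# Sketch of "prompt enhancement": a text LLM rewrites the user's prompt before the image
# model ever sees it, so a word like "dog" can either get dropped or get echoed right back in.

def enhance_prompt(user_prompt: str) -> str:
    # In the real system this is an LLM call, roughly "expand this into a detailed image description."
    # Toy stand-in: just bolt on some style words (note that "no dog" survives verbatim).
    return f"{user_prompt}, detailed, soft lighting, 35mm photo"

def generate_image(prompt: str, negative_prompt: str = "") -> str:
    # Stand-in for the diffusion backend. Backends with real negative-prompt support (like the
    # AI Horde) can be told "dog" separately, which is the reliable way to exclude something.
    return f"<image of '{prompt}' excluding '{negative_prompt}'>"

user_prompt = "a quiet park bench, no dog"
print(generate_image(enhance_prompt(user_prompt)))                  # "dog" still reaches the image model
print(generate_image("a quiet park bench", negative_prompt="dog"))  # negative prompt actually removes it
```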
In much of the public’s eye, it is a culture war. That’s what determined their vote and support.
The whole point of eroding civil rights is to help billionaires.
Always has been.
Ahh. I missed dead space, but that seems right.
I know that part, but is the screenshot from a game or something?
Yeah, oops.
Point still stands though.