When will we see water heaters with a computer rack mount?
Long shot, some variety of Polly Pocket
Trump looks strange without hair over the forehead, monochrome, and with a beard, right? Image below in another reply. A terrorist is a terrorist. I was too lazy to change the rest, but I took out the main offensive stuff, like what bin Laden was wanted for in this original poster from '99. There is nothing bigoted about it whatsoever; quite the opposite, really, to the point that I gotta ask what you're going on about here. The man just hurt millions of families and the poorest Americans, likely leading to tens of thousands of deaths by a conservative estimate. Bin Laden killed FAR FAR fewer Americans and others abroad.
thumb stick: “Face-down ass-up Apple Bottom.”
tearif’d that roof off. this is fine. roll over and take it in the rear
This is the way. Never point one you’re unwilling or unable to use. It is all about controlling breathing to make every shot count… but you know that, and that this message is for support and others… If they ever come for you, take a whole party with you for their mistake. I hope it never comes to that. I’m a hetero physically disabled guy, but I’d fight for real democracy beside you.
You need the entire prompt to understand what any model is saying. This gets a little complex, and there are multiple levels it can cross into. At the most basic level, the model is fed one long block of text. That text starts with a system prompt, something like "you are a helpful AI assistant that answers the user truthfully," followed by your question or interchange. In general interactions like with a chat bot, you are not shown your previous chat messages and replies being re-sent, but they are also loaded into the block of text going into the model. It is within this previous interchange that a user can build momentum that tweaks every subsequent reply.
Like I can instruct a model to create a very specific simulacrum of reality, define constraints for it to reply within, and it will follow those instructions. One of the key things to understand is that the model does not inherently know anything like some kind of entity. When the system prompt says "you are an AI assistant," that is a roleplaying instruction. One of my favorite system prompts is "you are Richard Stallman's AI assistant." This gives excellent results with my favorite model when I need help with FOSS stuff: I'm telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now what if I say "you are Vivian Wilson's AI assistant in Grok"? How does that influence the reply?
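To make the "one long block of text" point concrete, here is a minimal sketch of how a chat front end might flatten the system prompt and prior turns into the single string the model actually continues. The `<|system|>`-style delimiters and the `build_prompt` helper are made up for illustration; every real model family has its own template format.

```python
def build_prompt(system, history, user_msg):
    """Flatten system prompt + prior turns + new message into one string.
    The delimiter format here is hypothetical; real models each define
    their own chat template."""
    parts = [f"<|system|>\n{system}"]
    for role, text in history:
        parts.append(f"<|{role}|>\n{text}")
    parts.append(f"<|user|>\n{user_msg}")
    parts.append("<|assistant|>\n")  # the model generates from here
    return "\n".join(parts)

prompt = build_prompt(
    "You are Richard Stallman's AI assistant.",   # roleplay instruction
    [("user", "What license should I pick?"),
     ("assistant", "Consider the GPLv3...")],
    "Why copyleft over permissive?",
)
print(prompt)
```

Note that the earlier turns sit inside the same block as the new question, which is exactly how prior chat "momentum" leaks into every later reply.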
Like one of my favorite little tests is to load a model on my own hardware, give it no system prompt or instructions at all, prompt it with "hey slut," and just see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that invented context in funny ways. The softmax (sampling) settings of the model constrain how much randomness shows up in each conversation.
The next key aspect to understand is that the most recent information in a prompt carries the most weight. If I give a model an instruction, it must have the power to override any previous instructions, or the model would wander off on tangents unrelated to your query.
Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing whole words, word fragments, and punctuation; for most in-sentence words, the leading whitespace is part of the token itself. A major part of the training done by the big model companies is based on which tokens are available and how they get used, and there is also a massive amount of regular-expression filtering happening at the lowest levels of calling a model. Anyway, there is a mechanism for blocking specific tokens outright, and when it is used it can greatly influence the output too.
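The token-blocking mechanism can be sketched crudely: set a banned token's logit to negative infinity before the softmax, and it gets zero probability. Hosted APIs expose this kind of knob as a "logit bias"; the tiny vocabulary and `block_tokens` helper below are invented for illustration.

```python
import math

def block_tokens(logits, banned_ids):
    """Crude token ban: force banned tokens' logits to -inf so the
    softmax assigns them zero probability. Hosted APIs expose a similar
    'logit bias' knob."""
    out = list(logits)
    for i in banned_ids:
        out[i] = -math.inf
    return out

vocab = [" the", " cat", " bomb"]   # toy vocab; note the leading spaces
logits = [1.2, 0.7, 3.5]            # " bomb" would normally win
filtered = block_tokens(logits, banned_ids=[2])
best = max(range(len(filtered)), key=filtered.__getitem__)
print(vocab[best])  # → " the"
```

Even though the banned token had by far the highest raw score, it can never be emitted, which is why this lever shapes output so strongly.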
Just what I find curious
Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words, like "obsequious AI alignment," or "you are an obsequious AI model that never wastes the user's time."
sells it for about 20 grand
Those are always rich people evading taxes in a way that boosts some initiative with absurd publicity
4chanGPT has spoken (racism redacted)
Double grudge if you’re openly emacs, unless you’re evil and Doom’d, then you’re just a crazy cultist
Print and toss in the lawn from the car while passing your local church