It’s… Not? What are you implying here? That the lab leak theory is racist simply because it implicates Chinese scientists?
I’m just a simple man, trying to make his way in the universe.
Honestly, I’m not surprised. I obviously didn’t phrase my argument in a compelling way.
I disagree that we don’t have evidence for consciousness in LLMs. They have been showing behavior previously attributed only to highly intelligent, sentient creatures, i.e. us. To me it seems very plausible that when you have a large network of neurons, be they artificial or biological, with specialized circuits for processing specific stimuli, some sort of sentience could emerge.
If you want academic research on this you just have to take a look. Researchers have been discussing this topic for decades. There isn’t a working theory of machine sentience simply because we don’t have one that works for natural systems. But that obviously doesn’t rule it out. After all, why should sentience be constrained to squishy matter? In any case, I think we can all agree something very interesting is going on with LLMs.
Sure. But if they can’t afford the loans they can’t afford the car, either. No one really needs a $40k new car, anyone could get by with a $2000 used beater.
I know I’m the smartest man on earth. And I’m correct.
See how crazy that sounds? Just because someone is confident about something doesn’t make it true.
Buy the car you can afford. If you can’t buy it outright or make a significant down payment (20-30%), don’t take out a loan, look for a cheaper option. Those interest rates are insane, I’m amazed anyone accepts them.
I’m not saying I believe they’re conscious, all I said was that I don’t know and neither do you.
Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is and I’d wager you don’t either.
Indeed
Grok could say the same thing about you… And I’d agree.
Pretty sure that’s the optional sounding attachment.
What do these mean, who is sending them and why?
Why have taxes based on income when you can tax accumulated capital instead?
I used sway for quite a while and after the initial setup (which was very finicky) it was alright to use. But then you start to notice little things that annoy you, and by that time you’ve forgotten where that setting was in the config. For Linux noobs like me it’s not great long-term. If you like having all your DE settings in a config file, sure, use it, but I’m going back to KDE.
Just use livestock if you’re hellbent on that? There’s not enough humans to make that economically viable… Hell, why am I taking this seriously, it’s obviously not a serious proposal, right?
Why can’t something ever tell you to eat me out, Susan?
FTFY
European nutritional labels use them (kcal). Where do you live that they don’t?
Remember it? I work on PCs with DVI connected monitors every day.
The most boring element is boron.
Happy Easter!