

LLM chatbots are designed as echo chambers.
They’re designed to generate natural-sounding language. It’s a tool. What you put in is what you get out.
Freedom is the right to tell people what they do not want to hear.
One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.
I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.
I doubt it. They just think others do.
Sure - it’s just missing every single one of my friends.
I wish I had Elon Musk money so I could buy this platform and turn it back to pictures only, with the main focus on professional and hobbyist photographers - not pictures of food and selfies. It used to be one of the few social media platforms I actually liked.
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute-force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
That would by definition mean it’s not superintelligent.
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
If you’re genuinely interested in what “artificial superintelligence” (ASI) means, you can just look it up. Zuckerberg didn’t invent the term - it’s been around for decades, popularized lately by Nick Bostrom’s book Superintelligence.
The usual framing goes like this: Artificial General Intelligence (AGI) is an AI system with human-level intelligence. Push it beyond human level and you’re talking about Artificial Superintelligence - an AI with cognitive abilities that surpass our own. Nothing mysterious about it.
Judging by the comments here, I’m getting the impression that people would rather provide a selfie or ID.
No reason other than that it’s geographically closer to my actual location, so I thought it would be faster.
The EU is about to do the exact same thing. Norway is the place to be. That’s where I went - at least according to my IP address.
FUD has nothing to do with what this is about.
And nothing of value was lost.
Sure, if privacy is worth nothing to you, but I wouldn’t speak for the rest of the UK and EU.
My feed right now.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
Just pull yourself up by your bootstraps, right?