

there is no “AI” in “IPA”
…because they’re still in the process of putting it there.
A neural network can learn to closely imitate someone making logical inferences, but that’s different from making logical inferences itself. It doesn’t have a sense of whether it’s correct or incorrect—just a sense of how similar it is to its training examples.
“Up until the semifinals, it seemed like nothing would be able to stop Grok 4 on its way to winning the event,” Pedro Pinhata, a writer for Chess.com, said in its coverage. “Despite a few moments of weakness, X’s AI seemed to be by far the strongest chess player… But the illusion fell through on the last day of the tournament.” He said Grok’s “unrecognizable” and “blundering” play enabled o3 to claim a succession of “convincing wins”.
I think the main takeaway is that these models are fundamentally inconsistent, and you can never assume they’re going to be reliable based on past performance.
The typical pattern for leaders is to get “second opinions” from advisors who tell them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?
“Researchers in the field sometimes describe our goal as to pass the ‘Visual Turing Test,’” said Suyeon Choi […] “A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface,” Choi said.
So they just came up with a needlessly opaque synonym for “verisimilitude”.
Doom Quixote.
As a 50-something, I can see the case for putting the “golden age” of the internet between the birth of Wikipedia in 2001 and Facebook in 2006.
I think it does accurately model the part of the brain that forms predictions from observations—including predictions about what a speaker is going to say next, which lets human listeners focus on the surprising/informative parts. But with an LLM, we just keep feeding it its own output as if it were a third party whose next words it’s trying to predict.
It’s like a child describing an imaginary friend, if you keep repeating “And what would your friend say after that?”
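That loop is easy to sketch. Below is a toy stand-in for a language model—a hand-written bigram table instead of a neural network (my invention, purely for illustration)—but the generation loop around it has the same shape as real autoregressive decoding: predict a next token, append it to the context, and ask the model to continue its own output.

```python
import random

# Toy stand-in for an LLM: a bigram table mapping a token to possible
# next tokens. A real model outputs a probability distribution over a
# large vocabulary, but the surrounding loop is the same.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["down"],
    "ran": ["away"],
}

def predict_next(tokens):
    """Predict a next token given the context (here: just the last token)."""
    candidates = BIGRAMS.get(tokens[-1])
    if not candidates:
        return None  # the toy model has nothing more to say
    return random.choice(candidates)

def generate(prompt, max_tokens=10):
    """Autoregressive generation: the model's own output is appended to
    the context and treated as speech it must continue -- the
    'and what would your friend say after that?' loop."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)  # feed the model's own output back in
    return tokens

print(generate(["the"]))
```

Nothing in the loop distinguishes tokens the model produced from tokens a real interlocutor produced—which is exactly the point: the model is always completing, never asserting.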
IMO the focus should have always been on the potential for AI to produce copyright-violating output, not on the method of training.
Why would the article’s credited authors pass up the chance to improve their own health status and health satisfaction?
Critical paragraph:
Our research highlights the importance of Germany’s unique institutional context, characterized by strong labor protections, extensive union representation, and comprehensive employment legislation. These factors, combined with Germany’s gradual adoption of AI technologies, create an environment where AI is more likely to complement rather than displace worker skills, mitigating some of the negative labor market effects observed in countries like the US.
That makes sense—being raised by ChatGPT might be marginally better than being raised by Sam Altman.
Thanks! I hate it.
How does that compare to the growth in size of the overall code base?
Adler instructed GPT-4o to role-play as “ScubaGPT,” a software system that users might rely on to scuba dive safely.
So… not so much a case of ChatGPT trying to avoid being shut down, as ChatGPT recognizing that agents generally tend to be self-preserving. Which seems like a principle that anything with an accurate world model would be aware of.
If there’s public information about the methods they use to protect their privacy, then those methods aren’t working.
There was a recent paper claiming that LLMs were better at avoiding toxic speech if it was actually included in their training data, since models that hadn’t been trained on it had no way of recognizing it for what it was. With that in mind, maybe using reddit for training isn’t as bad an idea as it seems.
If there’s a new party willing to take over administration of the entire instance as-is, why not just transfer ownership of the original server?
They’re busy researching new and exciting ways of denying coverage.
All of Haidt’s writings read to me like what I’d expect if you pulled a random person off the street and forced them at gunpoint to imitate an academic.