A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 3 Posts
  • 43 Comments
Joined 5 months ago
Cake day: January 30th, 2025

  • Everything you do changes your brain activity.

    This isn’t about using ChatGPT broadly, but specifically about the difference between writing an essay with the help of an LLM versus doing it without. And in this case, I think it all comes down to how you use it. If you just have it write the essay for you, then of course it won’t stimulate your brain to the same extent - that’s like hiring someone to go to the gym for you.

    Personally, the way I use it to help with my writing is by doing all the writing myself first. Only after that do I let it check for grammatical errors and help improve the clarity and flow by making minor structural adjustments - while keeping the tone and message of my original draft intact.

    For me, the purpose of writing is to convert abstract thoughts into language and pass that information along, hoping the reader understands it well enough that it forms the same idea in their mind. If ChatGPT can help untangle my word salad and make that process more effective, I welcome it.
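
    For what it’s worth, the “write it all yourself first, then only let the model proofread” workflow described above is easy to script. Here’s a minimal sketch assuming the official OpenAI Python client; the model name, file path, and prompt wording are illustrative placeholders, not something from the original comment.

    ```python
    # Minimal sketch: send a finished draft to an LLM for proofreading only.
    # Assumes the official OpenAI Python client and an OPENAI_API_KEY in the
    # environment; the model, file path, and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()

    with open("essay_draft.txt") as f:  # hypothetical path to the self-written draft
        draft = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Proofread the essay below. Fix grammar and make minor "
                    "structural adjustments for clarity and flow, but keep the "
                    "author's tone, wording, and message intact. Do not add ideas."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )

    print(response.choices[0].message.content)
    ```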




  • The ads-based business model is one of the main reasons so much of the internet sucks so bad. It should either be completely free or run on donations or subscriptions.

    I don’t have an issue with YouTube ads because I’ve never actually had to see any - thanks to adblocking. But when they eventually figure out how to prevent that, I’d rather just pay a monthly fee than deal with ads. I think their pricing is completely reasonable, and I can’t morally justify blocking ads - I do it because it’s easy and free. Honestly, I’ve subscribed to services that cost more and give me less value than YouTube does.







  • “Your claim is only valid if you first run this elaborate, long-term experiment that I came up with.”

    The world isn’t binary. When someone says less moderation, they don’t mean no moderation. Framing it as all-or-nothing just misrepresents their view to make it easier for you to argue against. CSAM is illegal, so it’s always going to be against the rules - that’s not up to Google and is therefore a moot point.

    As for other content you ideologically oppose, that’s your issue. As long as it’s not advocating violence or breaking the law, I don’t see why they’d be obligated to remove it. You’re free to think they should - but it’s their platform, not yours. If they want to allow that kind of content, they’re allowed to. If you don’t like it, don’t go there.





  • That’s because it is.

    The term artificial intelligence is broader than many people realize. It doesn’t mean human-level consciousness or sci-fi-style general intelligence - that’s a specific subset called AGI (Artificial General Intelligence). In reality, AI refers to any system designed to perform tasks that would typically require human intelligence. That includes everything from playing chess to recognizing patterns, translating languages, or generating text.

    Large language models fall well within this definition. They’re narrow AIs - highly specialized, not general - but still part of the broader AI category. When people say “this isn’t real AI,” they’re often working from a fictional or futuristic idea of what AI should be, rather than how the term has actually been used in computer science for decades.


  • Different definitions of intelligence:

    • The ability to acquire, understand, and use knowledge.
    • The ability to learn or understand, or to deal with new or trying situations.
    • The ability to apply knowledge to manipulate one’s environment, or to think abstractly as measured by objective criteria (such as tests).
    • The act of understanding.
    • The ability to learn, understand, and make judgments or have opinions that are based on reason.
    • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

    We have plenty of intelligent AI systems already. LLMs probably fit the definition. Something like Tesla FSD definitely does.



  • Thanks.

    Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.

    Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.

    Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human would certainly qualify as AGI, but it’s a weird bar to set. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.