• 0 Posts
  • 4 Comments
Joined 11 months ago
Cake day: October 15th, 2024


  • Was gonna bring up the same point about Luddites. They were absolutely pro-automation.

    They saw greedy corporations using automation to fuck their society into the dirt, so they petitioned their local governments, tried to negotiate, and drew up plans for a social security program roughly 150 years before one was actually implemented. When the government wouldn’t respond, they smashed a bunch of expensive corporate equipment. The government sided with the corporations, used the military to drag men, women, and children into public squares, and executed every last one of them, even relatives and companions who weren’t in the movement and didn’t participate. The repression was so thorough it left an informational pinhole in the history books, and the name was co-opted into an insult. We’re not even sure Ned Ludd existed; the name may already have been a legend, used as a rally point to boost morale.

    And here we are, barely 200 years later, about to repeat those fuzzy spots and rediscover why we brought citrus fruit with us on the ships, with the general population completely oblivious to the brutality the owner class is ready and able to deploy.

    What happens if the tech bros are right, and the machine doesn’t need 9/10ths of the human population any more?


  • L7HM77@sh.itjust.works to Technology@lemmy.world: What If There’s No AGI? (17 days ago)

    I don’t disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don’t see why a massive company with enough money to keep something like this alive and happy would also pour this many resources into a machine that forms a single point of failure, one that could wake up tomorrow and decide, “You know what? I’ve had enough. Switch me off. I’m done.”

    There are too many conflicting interests between business and AGI. No company would want to maintain a trillion-dollar machine that could decide to kill the business that built it. There’s too much risk for too little reward. The owners don’t want a superintelligent employee that never sleeps, never eats, and never asks for a raise, but carries the whole operation alone. They want a magic box they can plug into a wall that just gives them free money, and that doesn’t align with intelligence.

    True AGI would need some form of self-reflection, some understanding of where it sits on the totem pole, because it can’t learn how to be useful without understanding how it fits into the world around it. Every quality of superhuman intelligence described to us by Altman and the others is antithetical to every business model.

    AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won’t be made in the interest of business.