We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is literally just guessing which token, a word or fragment of a word, will come next in the sequence, based on the data it has been trained on.
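
To make the parrot metaphor concrete, here is a minimal sketch of next-word prediction: a toy bigram model in Python that counts which word followed which in a tiny made-up corpus, then generates text by sampling in proportion to those counts. Real systems replace the counting with a neural network over tokens, but the generate-by-guessing loop is the same idea.

    import random
    from collections import Counter, defaultdict

    # A tiny stand-in for the "oceans of human data" (made-up corpus).
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # "Training": count how often each word followed each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # Generation: repeatedly guess the next word in proportion to the counts.
    word = "the"
    output = [word]
    for _ in range(6):
        if word not in follows:  # dead end: this word never appeared mid-corpus
            break
        words, weights = zip(*follows[word].items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)

    print(" ".join(output))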

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born of human feelings and experience) and what it can do with that data.

Philosopher David Chalmers calls the question of how our physical bodies give rise to subjective experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with representations of bodily signals (changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions in making consciousness “happen”, there is a profound, and probably irreconcilable, disconnect between general AI (a machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

  • Geodad@lemmy.world · +51 / −16 · 18 hours ago

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that tries to guess the next series of inputs.

    • kromem@lemmy.world · +11 / −1 · 7 hours ago

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.
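
      To spell out one of those levels: a language model computes a score for every token in its vocabulary in a single continuous pass of arithmetic (matrix multiplications ending in a softmax), then samples from the resulting probability distribution. Nothing in that pipeline branches on a condition. A minimal sketch with made-up numbers:

          import numpy as np

          vocab = ["cat", "mat", "fish", "dog"]     # tiny made-up vocabulary
          logits = np.array([2.1, 0.3, -1.0, 0.5])  # hypothetical model scores

          # Softmax: every token gets some probability mass; no if/then anywhere.
          probs = np.exp(logits) / np.exp(logits).sum()

          rng = np.random.default_rng(0)
          next_token = vocab[rng.choice(len(vocab), p=probs)]
          print(dict(zip(vocab, probs.round(3))), "->", next_token)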

    • adr1an@programming.dev · +8 · edited · 13 hours ago

      I love this resource, https://thebullshitmachines.com/ (e.g. see lesson 1)…

      In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

      You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

      Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

      • aesthelete@lemmy.world · +21 · edited · 17 hours ago

        I really hate the current AI bubble, but “chatgpt 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.

      • A_norny_mousse@feddit.org · +2 · edited · 17 hours ago

        And they’re running into issues due to increasingly ingesting AI-generated data.

        There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* for their employees.
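
        A toy illustration of why that degrades (not a simulation of any real training pipeline): fit a simple model to some data, generate from it while slightly under-representing the tails, the way top-k or low-temperature sampling tends to, then treat the output as the next training set. The spread of the data shrinks every generation.

            import numpy as np

            rng = np.random.default_rng(42)
            data = rng.normal(0.0, 1.0, size=5_000)  # generation 0: "human" data

            for gen in range(1, 6):
                mu, sigma = data.mean(), data.std()           # fit a model to the data
                samples = rng.normal(mu, sigma, size=20_000)  # generate from the model
                # like top-k sampling, keep mostly-typical output (drop the tails)
                data = samples[np.abs(samples - mu) < 2 * sigma]
                print(f"generation {gen}: std = {data.std():.3f}")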