• ℍ𝕂-𝟞𝟝@sopuli.xyz · 8 points · 9 hours ago

    AI is an adequate solution to a problem (the classification of complex information) that has no other comparably adequate solutions.

    Sentiment analysis systems and the like have been around since before LLMs and eat far less electricity.
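
    A minimal sketch of what I mean by pre-LLM sentiment analysis, using NLTK’s rule-based VADER analyzer (the example sentences are made up); it runs in milliseconds on a laptop CPU:

    ```python
    # Rule-based sentiment scoring with NLTK's VADER -- no GPU, no LLM.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
    sia = SentimentIntensityAnalyzer()

    for text in ["This update is fantastic!", "The battery life is disappointing."]:
        scores = sia.polarity_scores(text)  # dict with neg / neu / pos / compound
        print(f"{text} -> {scores['compound']:+.2f}")
    ```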

    LLMs have taken over the “AI” label so thoroughly that any success from machine learning gets attributed to them, while in practice funding is being drained away from the rest of ML research and funnelled entirely into LLMs.

    • Jesus_666@lemmy.world · 5 points · 8 hours ago

      It’s true that LLMs (and GANs) are taking over a term that covers a lot of other stuff, from fuzzy logic to a fair chunk of computational linguistics.

      If you look at what AI does, however, it’s mostly classification. Whether it’s fitting imprecise measurements into categories or analyzing a waveform to figure out which word it represents regardless of diction and dialect, a lot of AI is just the attempt to classify hard-to-classify stuff.
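
      To make that concrete, here’s a toy sketch (purely synthetic data, just standing in for “imprecise measurements”) of sorting noisy readings into categories with an ordinary scikit-learn classifier:

      ```python
      # Fit imprecise (noisy) 2-D measurements into two categories with a
      # plain nearest-neighbour classifier. Data and parameters are made up.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))  # category 0
      b = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))  # category 1
      X = np.vstack([a, b])
      y = np.array([0] * 50 + [1] * 50)

      clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
      print(clf.predict([[0.2, -0.1], [1.9, 2.3]]))  # -> [0 1]
      ```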

      And then someone figured out how to hook that up to a Markov chain generator (LLMs) or to run it repeatedly to “recognize” an image in pure noise (GANs). Those are cool little tricks, but not ones that solve a problem that needed solving. Okay, I’ll grant that GANs make a few things in image retouching more convenient, but they’re also subject to a distressingly large number of failure modes and consume a monstrous amount of resources.
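
      For reference, a word-level Markov chain text generator in its most basic form is only a few lines; this is a hedged sketch with a made-up toy corpus, just to show the underlying trick:

      ```python
      # Word-level Markov chain text generator: record which word follows
      # which, then walk the table randomly. Toy corpus, for illustration only.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the dog slept on the mat".split()

      transitions = defaultdict(list)
      for current, following in zip(corpus, corpus[1:]):
          transitions[current].append(following)

      word = random.choice(corpus)
      output = [word]
      for _ in range(12):
          choices = transitions.get(word)
          if not choices:
              break
          word = random.choice(choices)
          output.append(word)

      print(" ".join(output))
      ```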

      Plus the whole thing where they’re destroying the concept of photographic and videographic evidence. I dislike that as well.

      I really like AI when it’s used for what it’s good at: taking messy input data and classifying it. We’re getting some really cool things done that way, and some of them even justify the resources we’re spending. But I do agree with you that the vast majority of funding and resources gets spent on the next glorified chatbot in the vague hope that this one will actually generate some kind of profit. (I don’t think any of the companies invested in AI still believe their products will deliver a real benefit to the end user.)