It’s true that LLMs (and GANs) are taking over a term that covers a lot of other stuff, from fuzzy logic to a fair chunk of computational linguistics.
If you look at what AI does, however, it’s mostly classification. Whether it’s fitting imprecise measurements into categories or analyzing a waveform to figure out which word it represents regardless of diction and dialect, a lot of AI is just the attempt to classify hard-to-classify stuff.
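To make “messy input → category” concrete, here’s about the smallest classifier I can think of: nearest centroid. Every category name and number in it is invented for the illustration, not taken from any real system:

```python
# Toy "messy measurement -> category" example: assign a noisy reading
# to whichever known centroid it's closest to.
import math

# Hypothetical reference centroids: (weight in g, diameter in cm).
CENTROIDS = {
    "apple": (150.0, 8.0),
    "melon": (1200.0, 20.0),
    "grape": (5.0, 1.5),
}

def classify(measurement: tuple[float, float]) -> str:
    """Pick the category whose centroid is nearest the measurement."""
    def dist(name: str) -> float:
        return math.dist(measurement, CENTROIDS[name])
    return min(CENTROIDS, key=dist)

print(classify((140.0, 7.4)))  # -> "apple", despite the imprecise reading
```

Real systems replace the hand-written centroids with learned decision boundaries, but the job is the same: tolerate imprecision and still land in the right bucket.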
And then someone figured out how to hook that up to a Markov-chain-style generator (LLMs) or run it repeatedly to “recognize” an image in pure noise (GANs). Those are cool little tricks, but not really ones that solve a problem that needed solving. Okay, I’ll grant that GANs make a few things in image retouching more convenient, but they’re also subject to a distressingly large number of failure modes and consume a monstrous amount of resources.
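For the Markov chain analogy, here’s the smallest possible version of that loop: pick a next word based on the current one, append, repeat. (A real LLM conditions on a long context window rather than a single word, but the “predict the next token, then do it again” shape is the same.) The corpus is obviously made up:

```python
# A bare-bones Markov chain text generator.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which words follow which: word -> list of observed successors.
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Walk the chain: repeatedly sample a successor of the last word."""
    words = [start]
    for _ in range(length):
        options = chain.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat the cat ate"
```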
Plus the whole thing where they’re destroying the concept of photographic and videographic evidence. I dislike that as well.
I really like AI when used for what it’s good at: taking messy input data and classifying it. We’re getting some really cool things done that way, and some even justify the resources we’re spending. But I do agree with you that the vast majority of funding and resources goes to the next glorified chatbot in the vague hope that this one will actually generate some kind of profit. (I don’t think any of the companies invested in AI still believe their products will generate a real benefit for the end user.)
> If you look at what AI does, however, it’s mostly classification.
Not necessarily. A huge use case is regulation and control, in the engineering sense, not the political one: driverless cars, autonomous drones and such. And yeah, they need classification subsystems under the hood to work, but their ultimate outputs are complex control signals, not simple class labels.
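As a sketch of that split, here’s a toy lane-keeping loop: a stubbed-out perception step stands in for the classification machinery, and a proportional controller turns its estimate into a continuous steering angle. Everything here (the perceive() stub, the gain, the clamp range) is invented for illustration:

```python
# Toy "classifier under the hood, control signal out" loop.

def perceive(camera_frame: dict) -> float:
    """Stand-in for the perception subsystem: estimated lateral
    offset from lane center in meters (positive = drifted right)."""
    return camera_frame["offset_m"]  # pretend a model produced this

def steering_command(offset_m: float, gain: float = 0.4) -> float:
    """Proportional controller: a continuous steering angle in
    radians, clamped to the actuator's range. Not a class label."""
    angle = -gain * offset_m
    return max(-0.5, min(0.5, angle))

frame = {"offset_m": 0.3}  # drifted 30 cm right of center
print(steering_command(perceive(frame)))  # roughly -0.12 rad: steer left
```

The perception half may well be a classifier, but the number the car actually acts on is a continuous command.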
And don’t get me wrong, I also like ML and AI as a field, I just don’t like how OpenAI fucked the field with text generators that they got Silicon Valley to worship like gods. I even like LLMs, just not the grotesquely outsized cult around them.