

Based on the attempts at censoring AI output we’ve seen so far, there doesn’t seem to be a way to actually do it without building a new model on pre-censored training data.
Sure, they can tune models, but even “MechaHitler” Grok was still giving the occasional “woke” answer. I don’t see how this doesn’t either destroy AI’s “usefulness” (not that there’s any to begin with) or cost so much to implement that investors pull out: none of the AI companies are profitable as it is, and throwing billions more at sifting through and filtering the training data pushes profitability even further away (if censoring all the training data is even possible).
I’m still holding out hope that Valve becomes a worker-owned co-op when Gaben goes. Internally they’ve been structured that way for years: no traditional “management,” desks on wheels so everyone can move to work on whatever they feel motivated by and most useful at, and so on.