

Which he can afford because of ChatGPT. Checkmate.
Maybe Elmo could, now that they broke up. He could argue that TikTok existing makes Twitter’s videos less popular.
Note: this is only the best argument they have right now because politicians will never be persuaded by people bringing up Palantir’s war crimes and will openly defy any constituents who are persuaded.
If you use LLMs as they should be used, i.e. as autocomplete, they’re helpful. Classic autocomplete can’t watch me type “import” and correctly guess that I want to import a file I just created, but Copilot can. You shouldn’t expect it to understand code, but it can type more quickly than you and plug in the right things more often than not.
Copilot does a good job of typing out things like imports that should really be loops but can’t be. Sure, I could easily write a Python or Bash script to generate them, but that would take 5-10 minutes, and just pressing Tab 20 times takes a lot less. I just have to read each line to make sure it didn’t hallucinate any files I don’t actually have.
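For concreteness, the throwaway script alluded to above might look something like this sketch: it emits one import line per file in a package directory. The directory layout and naming convention are assumptions for illustration, not anything from the original comment.

```python
# Sketch of a throwaway "generate my imports" script: one
# 'from <package> import <module>' line per .py file in a directory.
# Package/module names here are hypothetical examples.
from pathlib import Path

def import_lines(directory: str) -> list[str]:
    """Emit an import line for every .py file except __init__.py."""
    pkg = Path(directory).name
    return [
        f"from {pkg} import {path.stem}"
        for path in sorted(Path(directory).glob("*.py"))
        if path.stem != "__init__"
    ]
```

Whether this beats pressing Tab 20 times depends on how often the file list changes; the point of the original comment is that for a one-off, the autocomplete wins.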
Or were using a product with management that wanted in. A handful of video games were negatively impacted, so their players were at least inconvenienced.
Wrong documentation is still a pretty big problem, even without the gaslighting. Incomplete documentation is better than none; incorrect documentation is not.
I guarantee it’s already happened. The question is when a company large enough that you can’t avoid it follows suit.
Bazzite is generally the go-to for gaming.
And that’s what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it’s imitating, but with zero understanding of why the original looked that way.
So China has a smaller percentage of its population below 10k USD net worth despite having a 45% lower cost of living? That’s a pretty big difference in the number of people who can live comfortably. I’m willing to sacrifice people’s ability to be ludicrously rich for that.
But it doesn’t know that it exists. It just says that it does because it’s seen others saying that they exist. It’s a trillion-dollar autocomplete program.
For example, if you take a common logic puzzle and change the parameters a little, LLMs will often recite the memorized solution to the wrong puzzle because they aren’t parameterizing the query correctly: they map lion to predator and cabbage to vegetable, then ignore the instruction that those two cannot be put together, in favor of the classic framing where the predator can safely be left with the vegetable.
I can’t find the link right now, but a different redditor tried the problem with three inanimate objects that could obviously be left alone together and LLMs were still suggesting making return trips with items. They had no examples of a non-puzzle in their training data, so they just recited the solution to a puzzle because they can’t think.
Note that I’ve been careful to say LLMs. I’m open to the idea that AGI/ASI may someday exist, but I’m quite confident that LLMs will not get there. At best, they might be used to offload conversation, the way e.g. DALL-E is used to offload image generation from ChatGPT today.
Which other guys? The couple thousand people who voted third party because they didn’t view Harris as an acceptable compromise between fascism and humanity?
The PRC is also quickly catching up in semiconductors, and they actually build infrastructure, unlike the US. Sure, they have a massive wealth disparity and very obviously aren’t communist, but they’re doing a few things right.
Well, yeah, obviously it can be done. What’s the latency, though? A hub’s muxing alternates between packets from different devices, but even USB 1.1 has 64B packets; reporting all eight controllers in one packet leaves 64b each. That’s 15 digital buttons, six axes at 6b apiece, and 13b left over for routing.
However, I can’t think of a way to get the computer to decode one 64B packet into eight separate HID polls without a custom driver. If you use a hub, you’re limited to 8kHz total by the spec, but many EHCI controllers limit that to 1kHz. 125Hz per player is not great.
I can’t confirm that this is the reason or that there isn’t a different way around the restriction, but it seems likely from what I know of USB hubs.
TL;DR: with a custom driver, you can report all controllers on all USB polls rather than each taking up a whole interval, giving you 8x the polling rate compared to an emulated hub with 8 standard HIDs.
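The bit budget above can be sketched as code. This is purely the back-of-the-envelope split from the comment (15 button bits, six 6-bit axes, 13 routing bits, eight 8-byte reports per 64-byte packet), not any real HID report descriptor or wire format:

```python
# Illustrative packing of the hypothetical layout discussed above:
# 15 button bits + six 6-bit axes + 13 routing bits = 64 bits per
# controller, eight controllers per 64-byte packet. Not a real protocol.
def pack_report(buttons: int, axes: list[int], routing: int) -> bytes:
    """Pack one controller's state into 8 bytes (64 bits)."""
    assert buttons < (1 << 15) and routing < (1 << 13)
    assert len(axes) == 6 and all(a < (1 << 6) for a in axes)
    word = buttons                 # bits 0-14: digital buttons
    shift = 15
    for a in axes:                 # bits 15-50: six 6-bit axes
        word |= a << shift
        shift += 6
    word |= routing << shift       # bits 51-63: routing/metadata
    return word.to_bytes(8, "little")

def pack_packet(reports: list[bytes]) -> bytes:
    """Eight 8-byte reports -> one 64-byte packet."""
    assert len(reports) == 8 and all(len(r) == 8 for r in reports)
    return b"".join(reports)
```

The fields fit exactly (15 + 36 + 13 = 64 bits), which is the comment's point: the payload is feasible, and the bottleneck is the polling schedule, not the packet size.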
The 8BitDo version is easier to implement because it’s one dongle per controller. The Xbox dongle supports eight controllers per dongle. This complicates things; I assume they didn’t want to emulate an eight-port USB hub on the dongle.
You can use BT, but there’s a reason 8BitDo has a dongle as well: BT has worse latency, I assume due to protocol overhead.
And at least Xbox controllers are cross-compatible. You can’t use a DS4 on a PS5, even if you’re playing a PS4 game.
I was getting terrible and inconsistent input lag with an XSX controller. Something about it not correctly reporting its polling rate over BT. Switching to the dongle fixed it.
Glad yours works, but not everyone is so lucky.
Or Debian. It still supports MIPS64 officially and 68K unofficially. x86 isn’t going anywhere for a long time.