

M series chips on macbooks are likely helping more.
What is kinda stupid is not understanding how LLMs work, not understanding the inherent limitations of LLMs, not understanding what intelligence is, not understanding the difference between an algorithm and intelligence, not understanding the difference between imitating something and being something, claiming to “perfectly” understand all sorts of issues surrounding LLMs while choosing to just ignore them, and then still thinking you have enough of a point to call other people in the discussion “kind of stupid”.
I think your argument is a bit beside the point.
The first issue we have is that intelligence isn’t well-defined at all. Without a clear definition of intelligence we can’t say whether something is intelligent, and even though we as a species have tried to come up with such a definition for centuries, there still isn’t one.
But the actual question here isn’t “Can AI serve information?” but whether AI is an intelligence. And LLMs are not. They are not beings, they don’t evolve, they don’t experience.
For example, LLMs don’t have a memory. If you use something like ChatGPT, its state doesn’t change when you talk to it. It doesn’t remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It’s like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.
The LLM itself can’t change as a result of the conversation you are having with it. It can’t learn, it can’t experience, it can’t change.
All that is done in a separate training step, where essentially a new LLM is generated.
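To make the chat-history trick above concrete, here’s a minimal sketch. generate_reply() is just a hypothetical stand-in for a single model call, not any real API; the point is that the “memory” lives entirely in the surrounding code, not in the model.
# Minimal sketch of a stateless chat loop. generate_reply() stands in for
# a single LLM call; the model keeps no state between calls, so the whole
# transcript is re-sent as input on every turn.
def generate_reply(transcript: str) -> str:
    # Placeholder: a real system would run one forward pass / API call here.
    return "(reply generated from " + str(len(transcript)) + " characters of context)"

def chat():
    history = []                      # the "memory" lives out here, not in the model
    while True:
        user = input("You: ")
        history.append("User: " + user)
        prompt = "\n".join(history)   # the entire conversation, every single time
        reply = generate_reply(prompt)
        history.append("Assistant: " + reply)
        print("Assistant:", reply)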
Sure, lossy compression is lossy, but that wasn’t my point. My point was that data corruption in information-dense formats is more critical than in low-density formats.
To take your example of the vacation photos: If you have a 100 megapixel HDR photo and you lose 100 bytes of data, you will lose a few pixels and you won’t even notice the change unless you zoom in quite far.
Compress these pictures down to fit on the floppy from your example (that would be ~73 kB per photo), and losing 100 bytes of data now becomes very noticeable in the picture, since you just lost ~0.1% of the whole data. Not taking the specifics of compression algorithms into account, you just lost 1 in every 1000 pixels, which is a lot.
High-resolution, low-information-density formats allow for quite a lot of damage before it becomes critical.
High-information-density formats, on the other hand, are quite vulnerable to critical data loss.
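To put rough numbers on this (the uncompressed size below assumes 100 MP at 16-bit RGB, i.e. 6 bytes per pixel, which is just an assumption for the sake of illustration):
# Fraction of the file that 100 corrupted bytes represent in each case.
corrupted_bytes = 100

uncompressed = 100_000_000 * 6   # ~600 MB: 100 MP at 6 bytes per pixel (assumed)
compressed = 73_000              # ~73 kB per photo, as above

print("uncompressed: " + format(corrupted_bytes / uncompressed, ".7%") + " of the file")  # ~0.0000167%
print("compressed:   " + format(corrupted_bytes / compressed, ".3%") + " of the file")    # ~0.137%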
To show what I mean, take this image:
I saved it as BMP and then ran a script over it that replaces 1% of all bytes with a random byte. This is the result:
(I had to convert the result back to jpg to be able to upload it here.)
So even with a total of 99865 bytes replaced with random values, the image of an apple is clearly visible. There are a few small noise spots here and there, but the overall picture is still fine and if you print it as a photo, it’s likely that these spots won’t even be visible.
As a comparison, I now saved the original image as JPEG and also corrupted 1% of all bytes the same way. The result: Gimp and many other file viewers can’t open the file at all any more. Chrome can open it, and it looks like this:
The same happens with audio CDs. Audio CDs use uncompressed “direct” data, just like BMP. Data corruption only affects the data at the point of the corruption. That means, if one bit is unreadable, you probably won’t be able to notice at all, and even if 1% of all data on the CD is corrupt, you will likely only notice a slightly elevated noise level, even though 1% data loss is an enormous amount.
If you instead use compressed formats (even FLAC) or if it’s actual data and not media, a single illegible bit might destroy the whole file, because each bit of data depends on the information earlier in the file, so if one bit is corrupted, everything after that bit might become unreadable.
That’s why your audio CD is still playable far beyond its expiry date, but a CD-R containing your backup data might not be.
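You can see this for yourself with a few lines of Python; zlib stands in here for “any compressed format”, and the exact error will vary:
# Flip a single byte in the middle of a compressed stream and decompression
# typically fails outright, while the same damage to raw data would only
# touch that one byte.
import zlib

raw = bytes(range(256)) * 4000            # ~1 MB of "uncompressed" data
packed = bytearray(zlib.compress(raw))

packed[len(packed) // 2] ^= 0xFF          # corrupt one byte mid-stream

try:
    zlib.decompress(bytes(packed))
except zlib.error as e:
    print("decompression failed:", e)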
Again, these data retention time spans don’t mean that after that time all data on the device disappears at once, but that until that time every single bit of data on your device is preserved. After that you might start to experience data loss, usually in the form of single bits or bytes failing.
Edit: Just for fun, this is what the BMP looks like with 95% corruption:
Even with this massive amount of damage, the image is still recognizable.
It’s always a game of statistics.
You might have some 20yo discs that play fine, but there are enough 10yo discs that don’t. Also, especially with audio discs, some data loss won’t be noticeable. You could probably have up to 10% data loss on the CD without hearing much of a difference.
Things are very different for data storage though. Here, losing a single bit (e.g. of an encrypted/compressed file) might make the whole file unreadable. And if it’s a critical file, that might make the whole disk useless.
Audio CD is a very low-data-density format. There’s a ton of data on there that doesn’t matter (as exemplified by the fact that MP3 CDs can easily hold 6 times as much audio as a regular, uncompressed Audio CD). This low data density creates redundancy.
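As a rough sanity check of that “6 times” figure (the MP3 bitrate below is just an assumed value; lower bitrates push the ratio even higher):
# PCM audio on a CD vs. a typical high-bitrate MP3.
cd_bitrate = 44_100 * 16 * 2      # 44.1 kHz, 16 bit, stereo ≈ 1411 kbit/s
mp3_bitrate = 224_000             # assumed 224 kbit/s MP3

print("capacity ratio: " + format(cd_bitrate / mp3_bitrate, ".1f") + "x")   # ≈ 6.3x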
The data retention values above aren’t about “After X years all of the data disappears” but about “This is how long the data will be fully retained without a single bit of data loss”.
I also have HDDs from ~2000 that still work fine. Probably the oldest piece of tech I own is a Gameboy, which has its BIOS in a ROM, and it still works fine even though it’s more than 30 years old now. But for one, I don’t own enough Gameboys to know whether mine is an outlier, and for another, I don’t have the means to check whether every single bit in that ROM is still identical to the original.
According to Google, burned CDs and DVDs retain data for 5-10 years.
SSDs are between a few years and a few decades, depending on the age, type and quality of the SSD. Same goes for USB sticks.
HDDs are between 10 and 20 years.
Tape drives are at 30+ years.
I wouldn’t go quite that far. This is just breadcrumbs falling off the corporate table.
I’m running Fedora, and since kernel 6.11 my laptop can’t wake from sleep, so I’m holding the kernel back at 6.10, where everything works.
But at the same time I have quite a lot of trouble with Wine/Proton. Probably 80% of the games I tried either don’t run at all or only run at <3 FPS. And I’m talking about 10+ year old games on an Nvidia 4070 Mobile.
Could it be that the issues come from Wine/Proton expecting ntsync and not having that available?
Who on earth still burns disks (other than pizzas) in 2025?
Sure, but the main issue here is that JS doesn’t only auto-cast towards the more generic type, but in both directions.
Maybe a better example is this:
"a" + 1 -> "a1"
"a" - 1 -> NaN
With + it casts to the more generic string type and then executes the overloaded + as a string concatenation.
But with - it doesn’t throw an exception (e.g. something like “Method not implemented”), but instead casts to the more specific number type, so “a” becomes NaN, and NaN - 1 becomes NaN as well.
There’s no situation where "a" - "b" makes any sense or could be regarded as intentional, so it should just throw an error. String minus number also only makes sense in very specific cases (specifically, the string being a number), so here too I’d expect an error.
If the programmer really wants to subtract one number from another and one or both of them are of type string, then the programmer should convert to number manually, e.g. using parseInt("1") - parseInt("2").
"a"+"b" -> "ab"
"a"-"b" -> NaN
Yeah, might be.
Dailymotion should be a non-issue to get into though. It’s got free user access, it’s the 9th biggest content site by active users worldwide and I don’t think they have selective creator recruiting practices.
But there too you have imperial football fields and metric ones, with an imperial one being 0.745 metric ones.
There are a few platforms. Nebula, Floatplane, Peertube. I think Patreon allows you to host videos directly on their platform too.
But none of these platforms offer free access plus a built-in ad function.
“work” is doing a lot of heavy lifting here when talking about a Fairphone. Worst phone I ever owned, by quite some margin.
Generic degoogling/google alternatives video from Pewdiepie, but he won’t get rid of Youtube.
theoretically software support
This. And it’s not only due to drivers; it’s much more due to them not having in-house software development and their outsourced developers not using Fairphones as their daily drivers.
In their 2022 report (page 40) they said that their living wage bonus for that year was $305,000 paid to 1,926 factory workers, which comes to $158 per worker, and they said that this covers 6.5% of the living wage gap. That means the whole living wage gap is about $2,436 per year per worker.
Now what they are paying is down to $60 per year, so even if the living wage gap hasn’t changed over the last two years, that $60 would only cover about 2.5% of the living wage gap.
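Just re-running those numbers (all figures taken from the report as cited above):
# Living wage bonus figures from the 2022 report.
bonus_total = 305_000      # USD paid out in total
workers = 1_926
gap_covered = 0.065        # the 6.5% of the living wage gap they state

per_worker = bonus_total / workers        # ≈ $158
full_gap = per_worker / gap_covered       # ≈ $2,436 per worker per year

print("per worker: $" + format(per_worker, ".0f") + ", full living wage gap: $" + format(full_gap, ".0f"))
print("$60 per year covers " + format(60 / full_gap, ".2%") + " of that gap")   # ≈ 2.46%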
That’s apparently what makes this phone “fair”. It’s like tipping your delivery driver $0.20 and asking them to be grateful for that.
Well, that’s exactly what they are doing. That’s literally what these “fairness compensation credits” are that Fairphone is using.
They can’t (or don’t want to) source their materials from sources that actually employ people fairly. So they buy regular stuff made on the backs of disenfranchised people and donate some money to some random third-party organizations, which use the money to make sure some other people somewhere else are employed more fairly.
Guess what: you can cut out the middleman and do the same thing yourself.
And they aren’t even doing that for their whole supply chain. They are only doing that for the mining of some very specific minerals, specifically cobalt, gold and silver. They don’t do that for all the other materials in their phones. They don’t do that for any of the work that goes into processing these materials. They don’t do that for the people who transform these minerals into components. And at the end of that chain they do pay a very small amount to the people who do the final assembly.
In the end you are annoyed at the brand name plus the higher price evoking larger expectations in some of your friends. Join the club. But that’s a far cry from your original statement. Glad we could clear that up.
Yes, I am annoyed that Fairphone engages in incredibly false advertising. Take away the “Fair” part and how many sales do you think they’d lose? Look at Shiftphone if you want to see a Fairphone competitor that doesn’t have the “Fair” branding, and guess how many devices they have sold.
People need to know that the higher price stems from Fairphone being a boutique manufacturer, not from Fairphone actually spending a lot of money on Fair/eco things. That’s really important for a phone like this.
It’s pretty much equivalent to hypothetically finding out that the Fairtrade seal doesn’t actually mean that the banana farmers are paid fairly, but that the price markup actually stems from the ink used in the Fairtrade seal being incredibly expensive to make.
import sys
import random

inFilePath = sys.argv[1]
outFilePath = sys.argv[2]
damagePercentage = float(sys.argv[3])

with open(inFilePath, "rb") as f:
    d = f.read()

damageTotalCount = int(len(d) * damagePercentage / 100)
print("Damaging " + str(damageTotalCount) + " bytes.")

dList = [x.to_bytes(1) for x in d]
for i in range(damageTotalCount):
    pos = random.randint(2000, len(d) - 2)
    dList[pos] = random.randint(0, 255).to_bytes(1)
    if i % 1000 == 0:
        print(str(i) + "/" + str(damageTotalCount))

d = b"".join(dList)
with open(outFilePath, "wb") as f:
    f.write(d)
If you run it, the first argument is the input file, the second one is the output file and the third is the percentage of corrupted bytes to inject.
I did spare the first 2000 bytes in the file to get clear of the file header (corruption on a BMP file header can still cause the whole image to be illegible, and this demonstration was about uncompressed vs compressed data, not about resilience of file headers).
I also just noticed when pasting the script that I don’t check for corrupting the same byte twice. At lower damage rates that’s not an issue, but it means the 95% example only ends up with about 61.3% of bytes actually corrupted.
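For reference, the expected fraction of distinct bytes hit when positions are drawn with replacement works out like this:
# With N bytes and 0.95*N random draws (with replacement), the chance a given
# byte is never hit is (1 - 1/N)**(0.95*N) ≈ e**-0.95, so the expected
# fraction of bytes actually corrupted is about:
import math

print(format(1 - math.exp(-0.95), ".1%"))   # ≈ 61.3%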