

The cargo balance bike is EPIC
feddit account of @lgsp@urbanists.social
The title doesn’t mention accidents either, but I added “for compliance” to be clearer.
Just so I understand: what is the difference between a quote post and a post that contains a link to another post?
I wish I was that clever, but no. It’s from a researcher named Ian Walker, who coined it in a very interesting article. It has its own Wikipedia page: https://en.wikipedia.org/wiki/Motonormativity
Also mainstream articles: https://www.theverge.com/2023/1/31/23579510/car-brain-motornormativity-study-ian-walker
And a video by GCN! https://www.youtube.com/watch?v=-_4GZnGl55c
I’m surprised that, being in the Fuck Cars community, you’ve never heard the term!
I’m lost… Can you ELI5?
If The Onion isn’t out of business yet, it will be soon. Reality is more ridiculous than anything they can come up with…
What are your interests?
In the meantime you could follow a couple of communities where people share interesting videos:
(thanks @asudox@lemmy.asudox.dev for the correction)
The letter is from 30 associations, among them the one behind the website I linked. Try contacting them.
What are you talking about? That could never happen!
Very cool!
I’m wondering how long until Bluesky locks down its users
So cars jump over the garden?
That’s the community I cross-posted from, actually 😁
Even if LLM “neurons” and their interconnections are modeled after biological ones, LLMs aren’t modeled on the human brain, about which a lot is still not understood.
First, the way the neurons are organized is completely different: think of the cortex versus the transformer.
Second, the learning process: nowhere close.
The point the article makes about how we do math, through logical steps, while LLMs go by resemblance, is a small but meaningful example. It also shows that you can see how LLMs work; it’s just very difficult.
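To make that contrast concrete, here is a toy sketch (my own illustration, not from the article): exact arithmetic applies an explicit rule, while a resemblance-based answerer just returns the result of the most similar memorized example, which often looks right and quietly fails off-distribution.

```python
# Toy illustration (not from the article): two ways to "do" addition.

def add_by_rule(a: int, b: int) -> int:
    """Exact, step-by-step arithmetic: correct for every input."""
    return a + b

# A "resemblance" approach: memorize a few (a, b) -> sum examples,
# then answer new queries with the result of the closest memorized one.
MEMORIZED = {(2, 3): 5, (10, 10): 20, (7, 8): 15, (100, 1): 101}

def add_by_resemblance(a: int, b: int) -> int:
    """Pattern matching: reuse the answer of the most similar example."""
    nearest = min(MEMORIZED, key=lambda q: abs(q[0] - a) + abs(q[1] - b))
    return MEMORIZED[nearest]

print(add_by_rule(6, 9))          # 15: the rule generalizes everywhere
print(add_by_resemblance(6, 9))   # 15: nearest example (7, 8) happens to match
print(add_by_resemblance(40, 2))  # 20: nearest example (10, 10), should be 42
```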
Yes, that’s it. I added the link in the OP.
Very arrogant answer. Good that you have intuition, but the article is serious, especially given how LLMs are used today. The link to it is in the OP now, but I guess you already know everything…
Thank you. I found the article, link in the OP.
Oh wow thank you! That’s it!
I didn’t even remember how good this article was and how many experiments it collected.
I’m aware of this and agree, but:

- I see that asking an LLM how it got to its answers, as “proof” of sound reasoning, has become common.
- This new trend of “reasoning” models, where an internal conversation is shown in all its steps, seems to rest on the same assumption of a trustworthy train of thought. Given the simple experiment I mentioned, that is extremely dangerous and misleading.
- Take a look at this video: https://youtube.com/watch?v=Xx4Tpsk_fnM. Everything there is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust it?

So having a well-written article at hand is a good idea imho.
What are your interests?
Anyway, you could try following this community: https://lemmy.world/c/peertube
This “new” study is from 2018?
https://usa.streetsblog.org/2018/01/03/study-cyclists-dont-break-traffic-laws-any-more-than-drivers-do