Hemingways_Shotgun

  • 1 Post
  • 48 Comments
Joined 2 years ago
Cake day: June 7th, 2023





  • Atmosphere and “looks” wise, it does a great job of evoking the original, sure.

    And it’s far from a FF7 remake issue alone. FF12 kind of flirted with it, but starting basically at Crisis Core, combat went from something strategic to simply “mash a button until your ATB meter fills up and then perform a quick combo (usually the same one every time because you don’t have time to think about it so you rely mostly on muscle memory)”. There’s no thought involved in the combat. There’s no “what is going to work best against which enemy”. You might as well be playing Street Fighter.

    When I go up against a tough enemy in the original, I don’t immediately start slashing. I’ll have one person immediately focused on casting Barrier on everyone. The second person will throw out a summon. And one person will have Transform/Mini set up with the Added Effect materia and will do a basic attack. If I’m lucky, that enemy will turn into a frog or immediately shrink, making the combat that much easier.

    There is no thought process like that in new Final Fantasy. It’s just slash, slash, slash, combo; slash, slash, slash, item; over and over and over again until either the enemy is dead or you are.

    I get it. That’s what modern audiences want. And I know I fall squarely into the “old man yells at Cloud” demographic (ba-dum-tiss). But I had at least hoped that the remake would try to retain the old mechanics rather than just copy-pasting the button mashing of the new games.

    I will say this though… Until the remake, it never even occurred to me that Jessie was a girl.




  • Hemingways_Shotgun@lemmy.ca to Linux@lemmy.ml: When to upgrade hardware?

    The final “gate”, so to speak, will end up being your motherboard.

    At a certain point, your motherboard just won’t support a newer part and you’ll have upgraded all the existing parts as far as they can go.

    My current rig, which I’m still perfectly content with, is just under ten years old. I’ve upgraded the RAM to as much as the motherboard will allow. I’ve upgraded the video card two or three times in that span, and it’s now running a 3060. While I still see a huge improvement with that, there’s no doubt the card is being throttled somewhat by the motherboard’s throughput limitations, but for now I don’t mind. I’ve added extra cooling fans, replaced the drives with SSDs and kept the old metal spinners for extra storage.

    It still runs plenty fast enough for Blender (nothing complex, just airplane modelling and animation for X-Plane), video editing with DaVinci Resolve (as long as I use proxy clips and take it a little easy on the motion graphics), and most newer games (though of course not at ultra settings).

    The last bottleneck, the one I’ll simply never be able to get past, is that the CPU socket will never support an octa-core processor or higher. I can upgrade as much as I want, but it will never not be a quad-core.

    For now that’s fine. But that’s the hard limit that I’ve given myself. Your mileage may vary.



  • I feel embarrassed to say this as someone who is fairly techy, but I’m a little confused by the whole brouhaha.

    Is Google making changes to Android, or to AOSP?

    If Google is making changes to the Android fork they put on their own phones, then fuck 'em. Use Graphene. Use /e/OS, use Lineage… use something that forks its own branch of AOSP, and Google can pound sand, because those forks are in no way obligated to make the same changes as Google. AOSP is open source for that very reason.

    If Google is making those changes to AOSP itself, which means that anyone who uses AOSP as a base gets those changes by default, then isn’t Google obligated to keep those changes open source, in which case anyone else who uses AOSP can just remove them from their own fork?

    Someone explain like I’m a particularly dim five-year-old, please.




  • Linux by design gives the user enough rope to hang themselves with.

    And that’s certainly not a problem when dealing with tech enthusiasts who know what, when and where to touch to avoid messing things up. But when you’re dealing with getting a phone into the hands of ordinary people, that isn’t going to fly, because all of those people will at some point start mucking around inside and then expect tech support when they mess up.

    For mainstream adoption, the Linux kernel and the desktop environment must be at least somewhat locked down.


  • Just because it’s a libre phone doesn’t mean it’s necessarily a Linux phone. Or at least not any more so than Android is a Linux phone because it uses a heavily modified (almost unrecognizable) Linux kernel.

    There’s nothing in the article that says they’re just going to use a mainline Linux kernel and throw a touch-optimized version of some existing desktop on it (Ubuntu Touch, etc…).

    Heck, they could mean that they’re planning on making their own heavily modified kernel for their very own OS, so as to skip all of the trouble that trying to make mainline Linux work on a handheld device has caused so far (similar, I believe, to how SailfishOS does it).


  • It’s not that I’m disagreeing with you. I’m just not agreeing with you.

    I personally think (as unpopular an opinion as it may be) that Flatpaks largely make the choice of first distro irrelevant. The weakness in Manjaro is that you either risk using the AUR or stay on old versions of the software. And with Mint/Ubuntu/etc… you either risk adding random repos to your sources list or you use older versions of the software.

    Either way, you run the risk of a new person mucking up their system with a bad repo or a bad AUR package.

    The alternative, using Flatpaks, largely solves both issues when you need a newer version of a certain piece of software, and Flatpaks are dead simple to install/remove/update, etc…

    And I say this as someone who was super skeptical of Flatpaks for a very, very long time.


  • Exactly that.

    If I were to google how to get gum out of my child’s hair and then be directed to that same Reddit post, I’d read through it and be pretty sure which answers were jokes and which were serious; we make such distinctions, as you say, every day without much effort.

    LLMs simply don’t have that ability. And the number of average people who just don’t get that is mind-boggling to me.

    I also find it weirdly dystopian that, if you sum that up, it kind of makes it sound like in order for an LLM to make the next step towards A.I., it needs a sense of humour. It needs the ability to weed out whether the information it’s digging from is serious, or just random jack-asses on the internet.

    Which is turning it into a very very Star Trek problem.


  • The fact that any AI company thought to train their LLM on the answers of Reddit users speaks to a fundamental misunderstanding of their own product (IMO).

    LLMs aren’t programmed to give you the correct answer. They’re programmed to give you the most pervasive/popular answer on the assumption that most of the time that will also happen to be the right one.

    So when you’re getting your knowledge base from random jackasses on Reddit, where a good-faith question like “What’s the best way to get gum out of my child’s hair?” gets two good-faith answers and then a few dozen smart-ass answers that get lots of replies and upvotes because they’re funny, guess which ones your LLM is going to use (there’s a toy sketch of this below).

    People (and apparently even the creators themselves) think that an LLM is actually cognizant enough to be able to weed this out logically. But it can’t. It’s not an intelligence… it’s a knowledge aggregator. And as with any aggregator, the same rule applies:

    garbage in, garbage out
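
    If it helps to picture that failure mode, here’s a minimal, made-up sketch (in Python, with invented answers and upvote counts, and a deliberately dumb “pick the most upvoted answer” rule; this is not how any real LLM is actually built): an aggregator that only weights popularity will happily hand back the joke.

    ```python
    # Toy illustration only: rank scraped "answers" purely by upvotes,
    # with no notion of correctness. All data here is invented.
    from collections import Counter

    # Hypothetical scraped thread: (answer, upvotes)
    thread = [
        ("Work peanut butter into the gum and comb it out", 12),    # good faith
        ("Harden the gum with an ice cube, then pick it off", 9),   # good faith
        ("Just shave the kid's head", 240),                         # joke, heavily upvoted
        ("Set the hair on fire, the gum melts right out", 85),      # joke
    ]

    def most_popular_answer(answers):
        """Return whichever answer has the most upvotes. Popularity is the only signal."""
        votes = Counter()
        for answer, upvotes in answers:
            votes[answer] += upvotes
        return votes.most_common(1)[0][0]

    print(most_popular_answer(thread))
    # Prints: Just shave the kid's head
    # A popularity-weighted aggregator surfaces the funniest answer, not the best one.
    ```

    Obviously the real mechanism is statistical prediction over text rather than literal vote counting, but the “most pervasive answer wins” dynamic is the same.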