• LostWanderer@fedia.io · +45/−1 · 6 days ago

        Exactly, as I don’t expect QA done by something that can’t think or feel to know what actually needs to be fixed. AI is a hallucination engine that agrees rather than pointing out issues; in some cases it might call attention to non-issues while letting critical bugs slip by. The ethical issues are still significant, and they play into why I would refuse to buy any more Square Enix games going forward. I don’t trust them to walk this back; they are high on the AI lie. Human-made games with humans handling the QA are the only games I want.

        • NuXCOM_90Percent@lemmy.zip · +11/−5 · 6 days ago

          Exactly, as I don’t expect QA done by something that can’t think or feel to know what actually needs to be fixed

          That is a very small part of QA’s responsibility. Mostly it is about testing and identifying bugs that get triaged by management. The person running the tests is NOT responsible for deciding what can and can’t ship.

          And, in that regard… this is actually a REALLY good use of “AI” (not so much the generative kind). Imagine something like the old “A* algorithm plays Mario” demos, where it is about finding different paths to accomplish the same goal (e.g. a quest) and immediately having a record of exactly what steps led to the anomaly, for the purposes of building a reproducer.

          Which actually DOES feel like a really good use case… albeit with massive computational costs (so… “AI”).
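As a sketch of what that could look like (purely illustrative — the grid, moves, and goal are invented for the example, not anything Square Enix has described): an A* search that returns the exact input sequence it used, so any anomaly found at the goal comes with its own reproducer.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a toy grid; returns the move sequence that reached the goal,
    so a bug found there can be replayed step by step (a 'reproducer')."""
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: an admissible heuristic for 4-directional movement
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [])]  # (f = g + h, g, position, path so far)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path                    # the reproducer: a list of inputs
        if pos in seen:
            continue
        seen.add(pos)
        for name, (dr, dc) in moves.items():
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != "#":
                heapq.heappush(frontier,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [name]))
    return None  # goal unreachable -- itself a finding worth flagging

grid = ["..#",
        "...",
        "#.."]
print(a_star(grid, (0, 0), (2, 2)))  # a shortest 4-move route around the walls
```

Because the heuristic is admissible, the first path popped at the goal is optimal, so the logged reproducer is also the shortest one — handy when a human has to replay it.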

          That said: it also has all of the usual labor implications. But from a purely technical “make the best games” standpoint? Managers overseeing a rack that is running through the games 24/7 for bugs that they can then review and prioritize seems like a REALLY good move.

          • osaerisxero@kbin.melroy.org · +4 · 6 days ago

            They’re already not paying for QA, so if anything this would be a net increase in resources allocated just to bring the machines onboard to do the task

            • NuXCOM_90Percent@lemmy.zip · +3/−2 · 6 days ago

              Yeah… that is the other aspect where labor is already getting fucked over massively, so it becomes a question of how many jobs are even going away.

    • UnderpantsWeevil@lemmy.world · +22/−5 · 6 days ago

      I would initially tap the brakes on this, if for no other reason than “AI doing QA” reads more like corporate buzzwords than material policy. Big software developers should already have much of their QA automated, at least at the base layer. Further automating QA is generally good business practice, as it helps catch more bugs earlier in the dev/test cycle.

      Then consider that QA work by end users is historically a miserable and soul-sucking job. Converting those roles to debuggers and active devs does a lot for both the business and the workforce. Compared to “AI is doing the art”, this is night and day: the very definition of the “getting rid of the jobs people hate so they can do the work they love” that AI was supposed to deliver.

      Finally, I’m forced to drag out the old “95% of AI implementations fail” statistic. I’m far more worried that they’ll implement a model that costs a fortune and delivers mediocre results than that they’ll implement an AI-driven round of end-user testing.

      Turning QA over to the Roomba AI to find corners of the setting that snag the user would be Gud Aktuly.

      • Nate Cox@programming.dev · +23/−3 · 6 days ago

        Converting those roles to debuggers and active devs does a lot for both the business and the workforce.

        Hahahahaha… oh wait, you’re serious. Let me laugh even harder.

        They’re just gonna lay them off.

        • pixxelkick@lemmy.world · +2/−4 · 6 days ago

          The thing about QA is the work is truly endless.

          If they can do their work more efficiently, they don’t get laid off.

          It just means a better percentage of edge cases can get covered; even if you made QA operate at 100x efficiency, they’d still have edge cases not getting covered.

        • UnderpantsWeevil@lemmy.world · +2/−7 · 6 days ago

          They’re just gonna lay them off.

          And hire other people with the excess budget. Hell, depending on how badly these systems are implemented, you can end up with more staff supporting the testing system than you had doing the testing.

      • binarytobis@lemmy.world · +7 · 6 days ago

        I was going to say, this is one job that actually makes sense to automate. I don’t know any QA testers personally, but I’ve heard plenty of accounts of them absolutely hating their jobs and getting laid off after the time crunch anyway.

      • Mikina@programming.dev · +2/−1 · 6 days ago

        They already have a really cool solution for that, which they talked about in their GDC talk. I don’t think there’s any need to slap a glorified chatbot onto this; it already seems to work well, with just the right amount of human input to be reliable, while leaving the “test-case replay gruntwork” to a script instead of a human.

  • termaxima@slrpnk.net · +11 · 4 days ago

    Be prepared for Square Enix games to fail even EA’s QA standards in the near future 😅

  • ghost9@lemmy.world · +86/−2 · 6 days ago

    That’s a stupid idea. You’re not supposed to QA or debug games. You just release it, customers report bugs, and then you promise to fix the bugs in the next patch (but don’t).

  • Taldan@lemmy.world · +56/−3 · 6 days ago

    So Square Enix is demanding OpenAI stop using their content, but is 100% okay with using AI built off stolen content to make more money themselves.

    As a developer, it bothers me that my code is being used to train AI that Square Enix is using while trying to deny anyone else the ability to use their work

    I could go either way on whether or not AI should be able to train on available data, but no one should get to have it both ways

  • mavu@discuss.tchncs.de · +40 · 6 days ago

    Well, good luck with that. Software development is a shit show already anyway. You can find me in my Gardening business in 2027.

    • Rooster326@programming.dev · +20/−1 · edited · 6 days ago

      Good Luck. When the economy finally bottoms out the first budget to go is always the gardening budget.

      You can find me in my plumbing business in 2028.

      I deal with shit daily so it’s what we in biz call a horizontal promotion.

  • hoshikarakitaridia@lemmy.world · +58/−3 · 6 days ago

    Literally not how any of this works. You don’t let AI check your work, at best you use AI and check it’s work, and at worst you have to do everything by hand anyway.

    • UnderpantsWeevil@lemmy.world · +27/−3 · edited · 6 days ago

      You don’t let AI check your work

      From a game dev perspective, QA is often annoying and repetitive labor: endlessly criss-crossing terrain hitting different buttons to make sure you don’t snag a corner, or clicking objects in a sequence that triggers a state freeze. Hooking a PS controller up to Roomba logic and having a digital tool rapidly rerun routes and explore button combos over and over, looking for failed states, is significantly better than hoping an overworked team of dummy players can recreate the failed state by tripping into it manually.
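A toy sketch of that “Roomba logic” idea (everything here is hypothetical — the `ToyGame` class, its buttons, and the planted soft-lock stand in for a real engine hook): a seeded random-input fuzzer that hands back its full input log as a reproducer the moment the game stops responding.

```python
import random

# Hypothetical minimal game: NOT a real engine API, just enough state to
# demonstrate the technique.
class ToyGame:
    def __init__(self):
        self.x = 0           # player position
        self.air = False     # jump flag
        self.frozen = False  # the "state freeze" we want the fuzzer to find
    def press(self, button):
        if self.frozen:
            return           # soft-locked: inputs no longer do anything
        if button == "right":
            self.x += 1
        elif button == "left":
            self.x -= 1
        elif button == "jump":
            if self.x == 3:
                self.frozen = True   # planted bug: jumping at x == 3 freezes
            else:
                self.air = not self.air
    def state(self):
        return (self.x, self.air, self.frozen)

def fuzz(seed, steps=500, stall_limit=5):
    """Mash random buttons, logging every press; if the game stops
    responding, return the log as a step-by-step reproducer."""
    rng = random.Random(seed)        # fixed seed keeps each run replayable
    game, log, stalled = ToyGame(), [], 0
    for _ in range(steps):
        button = rng.choice(["left", "right", "jump"])
        before = game.state()
        game.press(button)
        log.append(button)
        # in this toy, every press changes state unless the game is frozen
        stalled = stalled + 1 if game.state() == before else 0
        if stalled >= stall_limit:
            return log               # replay these inputs to hit the freeze
    return None                      # no freeze found with this seed

for seed in range(20):
    repro = fuzz(seed)
    if repro is not None:
        print(f"seed {seed}: game froze after {len(repro)} inputs")
        break
```

Because both the game and the seeded RNG are deterministic, replaying the returned log on a fresh game instance reproduces the freeze exactly — which is the whole point of logging inputs rather than just reporting “it broke”.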

      • subignition@fedia.io · +18/−3 · 6 days ago

        There’s plenty of room for sophisticated automation without any need to involve AI.

        • UnderpantsWeevil@lemmy.world · +13/−6 · 6 days ago

          I mean, as a branding exercise, every form of sophisticated automation is getting the “AI” label.

          Past that, advanced pathing algorithms are exactly what QA systems need to validate all possible actions within a space. That’s the bread and butter of AI. It’s also generally how you’d describe simulated end users on a test system.
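For illustration, here is roughly what “validate all possible actions within a space” can mean in its simplest form (the menu graph is invented for the example): a breadth-first walk that presses every button on every reachable screen and flags screens nothing links to.

```python
from collections import deque

# Hypothetical menu graph: screen -> {button: next screen}. A stand-in for
# an exported game state machine, not any real engine's format.
MENU = {
    "title":       {"start": "save_select", "options": "options"},
    "options":     {"back": "title"},
    "save_select": {"pick": "in_game", "back": "title"},
    "in_game":     {"pause": "pause"},
    "pause":       {"resume": "in_game", "quit": "title"},
    "credits":     {"back": "title"},  # defined but nothing links here: a bug
}

def audit(graph, start):
    """Breadth-first walk pressing every button on every reachable screen;
    returns the screens a player can never reach."""
    seen, queue = {start}, deque([start])
    while queue:
        screen = queue.popleft()
        for button, nxt in graph[screen].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(set(graph) - seen)

print(audit(MENU, "title"))  # -> ['credits']
```

Real game state spaces are vastly larger than a six-screen menu, which is where the “massive computational costs” mentioned upthread come in, but the exhaustive-exploration shape of the problem is the same.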

          • subignition@fedia.io · +2/−4 · 6 days ago

            I mean, as a branding exercise, every form of sophisticated automation is getting the “AI” label.

            The article is specifically talking about generative AI. I think we need to find new terminology to describe the kind of automation that was colloquially referred to as AI before ChatGPT et al. came into existence.

            The important distinction, I think, is that these things are still purpose-built and (mostly) explainable. When you have a bunch of nails, you design a hammer. An “AI bot” QA tester the way Booty describes in the article isn’t going to be an advanced algorithm that carries out specific tests. That exists already and has for years. He’s asking for something that will figure out specific tests that are worth doing when given a vague or nonexistent test plan, most likely. You need a human, or an actual AGI, for something on that level, not generative AI.

            And explicitly with generative AI, as pertains to Square Enix’s initiative in the article, there are the typical huge risks of verifiability and hallucination. However unpleasant you may think a QA worker’s job is now, I guarantee you it will be even more unpleasant when the job consists of fact-checking AI bug reports all day instead of actually doing the testing.

        • Grimy@lemmy.world · +4/−4 · edited · 6 days ago

          If it does the job better, who the fuck cares. No one actually cares about how you feel about the tech. Cry me a river.

          • _stranger_@lemmy.world · +5 · 6 days ago

            The problem is that if it doesn’t do a better job, no one left in charge will even know enough to give a shit, so quality will go down.

    • zerofk@lemmy.zip · +7/−3 · 6 days ago

      its *

      Ironically, that’s definitely something AI could check for.