I appreciate Simon’s balanced take on how LLMs can enhance a project when used responsibly.

I’m curious, though—what are this community’s opinions on the use of LLMs in programming?

  • Mikina@programming.dev · 1 month ago

    The issue isn’t whether you can get good results or not. The issue is the skills you are outsourcing to a proprietary tool, skills you will either never learn or will slowly forget: getting information out of documentation, designing an architecture, understanding and replicating an algorithm, etc.

    You will eventually start struggling with critical thinking; there are already studies about that.

    Of course, if you use it in moderation and don’t rely on LLMs too much, you should be ok.

    But how did that work for everyone with short-form content and social networks in the last ten years? How is your attention span doing? Surely we all have managed to take short-form content in moderation, since we knew the risks to our attention span, right?

  • Wooki@lemmy.world · 1 month ago

    Had a subscription, unsubscribed 6 months ago. Simplistically:

    1. They create bad code,
    2. You stop learning. You want to program? Learn.
  • kn0wmad1c@programming.dev · 1 month ago

    The problem with this article is that he stresses that you need to check the code and step in when needed, yet relying heavily on LLMs will invariably leave you unable to tell what’s wrong, and eventually unable to even read the code, since it will happily use libraries you’ve never experimented with because the LLM can just write the code for you.

    Also, “vibe-coding” is stupid af. You take the human element out altogether: you just accept all changes without reading them and then copy/paste the errors back in without any context.

  • Solemarc@lemmy.world · 1 month ago

    It’s funny: every time I’ve asked an LLM a question, it has given me the wrong answer.

    The first time, I couldn’t remember how to read a file as a string in Python, and it got me most of the way there. I trusted the answer, thinking “yeah, that looks right”, but it was wrong: I just got the io object back, because the snippet never called the read() function.
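
    For reference, the difference was roughly this (the filename is just a placeholder):

        f = open("notes.txt")           # what I got: the file object, not its contents

        with open("notes.txt") as f:    # what I wanted: call read() to get the string
            contents = f.read()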

    The other time it was an out-of-date answer. I asked it how to do something in Bevy and it gave me an answer that was deprecated. I can sort of understand that, though; Bevy is new and not amazingly documented.

    On a different note, my senior, who is all PHP (no Python, no bash), has used LLMs to help him write Python and bash. It’s not the best code, and I’ve had to optimise his bash so it runs on CI without taking 25 minutes, but it’s definitely been useful to him; he was hired as a PHP dev.

  • oshu@lemmy.world · 1 month ago

    My experience is that an LLM is an amplifier of your output, but generally at no better quality than you can produce on your own.

    The skilled developer who uses an LLM and checks its work will get a productivity boost without a loss in quality.

    The unskilled developer who copy/pastes code from Stack Overflow can get even more sloppy code into production by using an LLM.

  • anotherandrew@mbin.mixdown.ca · 1 month ago

    I’m on the fence.

    I’ve used Perplexity to take a JavaScript fragment, identify the language it was written in, and describe what it was doing. I then asked it to refactor the fragment into something a human could understand. It nailed both, and even the variable names were meaningful (the originals were just single letters). I then asked it to port the code to C using SDL, which it did a pretty good job of.

    I also used it to “untangle” some really gnarly, mathy JavaScript and port it to C so I could better understand it. That is still a work in progress, and I don’t know enough math to tell whether it’s doing a good job, but it will at least give me some ability to work with the codebase.

    I’ve also used it to create some nice helper Python scripts, like pulling all repositories from a GitHub user account, or using YouTube’s API to pull a video’s title and author given a URL. It also wrote the skeleton of some Python scripts that interact with a RESTful API. These are the kinds of things it excelled at.
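
    A rough sketch of what that kind of helper amounts to (the username is a placeholder, it uses GitHub’s public REST API, and error handling is left out):

        import json
        import urllib.request

        def list_repos(user):
            """Return the full names of a user's public repositories."""
            names, page = [], 1
            while True:
                url = (f"https://api.github.com/users/{user}/repos"
                       f"?per_page=100&page={page}")
                with urllib.request.urlopen(url) as resp:
                    repos = json.load(resp)
                if not repos:            # an empty page means we've seen everything
                    break
                names.extend(repo["full_name"] for repo in repos)
                page += 1
            return names

        print("\n".join(list_repos("octocat")))  # placeholder account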

    My most recent success was using it to decode DTMF in a .WAV file, then create a new .WAV file with cue points at the DTMF start/end times so I could visually see what it detected and where. This was a mixed bag: I started out in Python, and it reached for FFT (the obvious but wrong choice); I then had it implement a Goertzel filter, which it did flawlessly, and it even ported that to C without any real trouble. Where it utterly failed was the WAV file creation/cue points. Part of that is because cue points are rather poorly described in any RIFF documentation, the Python wrapper for the C wave-processing library was incomplete, and various audio editors want the cue data in different ways, but none of this stopped the LLM from lying through its damn teeth, not only claiming to know how to implement it but assuring me that the slop it created functioned as expected.
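
    For anyone curious, the Goertzel part really is only a handful of lines; roughly this in Python (the samples and sample rate being whatever you pulled out of the .WAV):

        import math

        def goertzel_power(samples, sample_rate, target_freq):
            """Power of a single target frequency in a block of samples."""
            n = len(samples)
            k = round(n * target_freq / sample_rate)   # nearest DFT bin
            omega = 2.0 * math.pi * k / n
            coeff = 2.0 * math.cos(omega)
            s_prev = s_prev2 = 0.0
            for x in samples:
                s = x + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            # squared magnitude of that one bin
            return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

        # DTMF mixes one row tone with one column tone; the key pressed is
        # whichever row/column pair carries the most power.
        DTMF_ROWS = (697, 770, 852, 941)       # Hz
        DTMF_COLS = (1209, 1336, 1477, 1633)   # Hz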

    I’ve found that it tends to come apart at the seams in longer sessions. When its answers start turning nonsensical, I sometimes get a bit of benefit from starting over without all the work leading up to that point. LLMs are really good at churning out basic frameworks, which aren’t exactly difficult but can be tedious. I then take that skeleton and start hanging the meat on it, occasionally getting help from the LLM, but usually that’s the stuff I need to think about and implement myself. That is where LLMs really struggle, and I waste more time trying to explain what I want to the LLM than if I just wrote it myself.

  • tyler@programming.dev · 1 month ago

    I’ve almost completely stopped using them, unless I’m stuck at a dead end. In the end, all they have done is slow me down and make me unable to think properly anymore. They usually write way too much code, especially the tab-complete stuff, leaving me deleting code after hitting tab (what’s the point, even; IntelliSense has always been really good, and now it’s somehow worse). They’re usually wrong unless prompted multiple times. People say you can use them to generate boilerplate, but you could just use a language with little or no boilerplate, like Kotlin. They tend to introduce very subtle bugs, or they solve a problem that’s already documented on Stack Overflow, and since I wouldn’t be reaching for an LLM if I could just kagi it, they end up solving the wrong thing.

    One thing it’s decent for, if you don’t care about code quality, is converting code to a language you do not know. You’re not going to end up with good idiomatic code at the end, but it will probably function.

    None of this is to say that LLMs aren’t amazing, but if you start to depend on them, you very quickly realize that your ability to solve more complex problems atrophies. Then, when you hit a difficult problem, you waste much more time trying to solve something that might have been simpler for past you.

    • enemenemu@lemm.ee · 1 month ago

      My 2 cents:

      It’s also trained on other people’s code, so it may use outdated, inefficient, or otherwise bad code. If it were trained on my code, I’d like it much more.

  • footfaults@lemmygrad.ml · 1 month ago

    I was going to say “Who?” until I looked at his bio: he helped start Django, which I use. I need to go lie down.

  • technocrit@lemmy.dbzer0.com · 1 month ago

    Ignore the “AGI” hype—LLMs are still fancy autocomplete. All they do is predict a sequence of tokens—but it turns out writing code is mostly about stringing tokens together in the right order, so they can be extremely useful for this provided you point them in the right direction.

    I’m just super happy to see someone talking about LLMs realistically without the “AI” bullshit.

    If LLMs help people code, then that’s great. But stop with the hype, grifting, etc. Kudos to this author for a reasonable take. Extremely rare.