• CarrotsHaveEars@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    5 months ago

    Stupid! You should tell it to write the test cases first and follow TDD!

    ‘AI, write the test cases for my project! No! You stupid computer! This isn’t testing the features correctly! Ah, no! You’re now just printing the string “PASS”!’

  • FaceDeer@fedia.io
    link
    fedilink
    arrow-up
    0
    ·
    5 months ago

    “Just give me this and I’ll do the rest” is actually a pretty great workflow, in my experience. AI isn’t at the point where you can just set it loose to work on its own but as a collaborator it saves me a huge amount of hassle and time.

    • dudinax@programming.dev
      link
      fedilink
      arrow-up
      0
      ·
      5 months ago

      In my experience about 1 percent of suggestions are of that quality, and that’s only for snippets of a few lines of code at most. It almost certainly wastes more time than it saves.

      • LesserAbe@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        5 months ago

        I’m a hobbyist, but I’ve found it to be pretty helpful. Seems like main thing is chunking requests down.

        If it’s a domain I’m completely unfamiliar with then it’s not a good fit because I’m no longer able to identify where it’s gone off the rails.

          • LesserAbe@lemmy.world
            link
            fedilink
            arrow-up
            0
            ·
            5 months ago

            Just splitting something up into smaller tasks.

            I’m sure I don’t need to tell you, but you wouldn’t be like “chatgpt write me an app for telling me the weather” you’d be like “I’m building an app in such and such a framework and working with such and such an API, how would I format this request” (or whatever)

  • Tar_Alcaran@sh.itjust.works
    link
    fedilink
    arrow-up
    0
    ·
    5 months ago

    People really should remember: generative AI makes things things that look like what you want.

    Now, usually that overlaps a lot with what you actually want, but not nearly always, and especially not when details matter.

    • LeadersAtWork@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      5 months ago

      I treat AI the same way I’ve always treated Google: WITH ABSOLUTE DISDAIN Using them as a shove in the right direction and for research purposes to supplement research already being done. ChatGPT for instance is actually pretty decent at figuring out vaguely defined things if worked through. Is it perfect? Hell no. It can help narrow down the options though.

      • xantoxis@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        5 months ago

        I’m pretty anti-AI but even I’ll cop to this one. ChatGPT is good at figuring out what you’re trying to describe. Know you need a particular networking concept? Describe it a bit to ChatGPT and ask for some concepts that are similar, and the thing you’re looking for will probably be in the list.

        Looking for a particular library that you assume must exist even though you’ve never seen it? ChatGPT can give you that.

        You’re on your own after that, but it can actually save you a bit of research time.

        The problem is this: it’s sure it has the answer 100% of the time, but about 30% of the time it gives you a list of nothing but wrong answers and you can go off in the wrong direction as a result.

    • FaceDeer@fedia.io
      link
      fedilink
      arrow-up
      0
      ·
      5 months ago

      It also isn’t telepathic, so the only thing it has to go on when determining “what you want” is what you tell it you want.

      I often see people gripe about how ChatGPT’s essay writing style is mediocre and always sounds the same, for example. But that’s what you get when you just tell ChatGPT “write me an essay about X.” It doesn’t know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.

      • gbuttersnaps@programming.dev
        link
        fedilink
        arrow-up
        0
        ·
        5 months ago

        Not disagreeing with you at all, you made a pretty good point. But when engineering the prompt takes 80% of the effort that just writing the essay (or code for that matter) would take, I think most people would rather write it themselves.

        • FaceDeer@fedia.io
          link
          fedilink
          arrow-up
          0
          ·
          5 months ago

          Sure, in those situations. I find that it doesn’t take that much effort to write a prompt that gets me something useful in most situations, though. You just need to make some effort. A lot of people don’t put in any effort, get a bad result, and conclude “this tech is useless.”

      • slazer2au@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        5 months ago

        We are all annoyed at clients for not saying what they actually want in a Scope of Works, yet we do the same to LLM thinking it will fill in the blanks how we want it filled in.

        • takeda@lemmy.world
          link
          fedilink
          arrow-up
          0
          ·
          5 months ago

          Yet that’s usually enough when taking to another developer.

          The problem is that we have this unambiguous language that is understood by human and a computer to tell computer exactly what we want to do.

          With LLM we instead opt to use a natural language that is imprecise and full of ambiguity to do the same.

          • FaceDeer@fedia.io
            link
            fedilink
            arrow-up
            0
            ·
            5 months ago

            You communicate with co-workers using natural languages but that doesn’t make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.