By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?
If not, learning how to write code seems a tad trivial now.
Dunno. I’d expect to have to make several attempts to coax a working snippet out of the AI, then spend the rest of the time trying to figure out what it’s done and debugging the result. Faster to do it myself.
E.g. I once coded Tetris on a whim (45 min) and thought it’d be a good test for a UI/game developer, given the multidisciplinary nature of the game (user interaction, real-time engine, data structures, etc.). I asked Copilot to give it a shot, and while the basic framework was there, the code simply didn’t work as intended. I figured that if we’d gone into each of the elements separately, it would have taken me longer than doing it from scratch anyway.
No, because that would require it to be trained on good code, which is rather rare.
If it is trained on Stack Overflow there is no chance.
LLMs are just computerized puppies that are really good at performing tricks for treats. They’ll still do incredibly stupid things pretty frequently.
I’m a software engineer, and I am not at all worried about my career in the long run.
In the short term… who fucking knows. The C-suite and MBA circlejerk seems to have decided they can fire all the engineers because wE CAn rEpLAcE tHeM WitH AI 🤡 and then the companies will have a couple absolutely catastrophic years because they got rid of all of their domain experts.
In my experience, not at all. But sometimes they help with creativity when you hit a wall or challenge you can’t resolve.
They have been trained on internet examples where everyone has a different style/method of coding, like writing style. It’s all very messy and very unreliable. It will be years before LLMs can write “good” code, and it will take a lot of training that isn’t just scraping.
It’s the most ok’est coder, with the attention span of a 5-year-old.
For repetitive tasks, it can take a first template you write by hand and almost automatically extrapolate multiple variations from it.
Beyond that… not really. Anything beyond single-line completion quickly devolves into something messy, non-working, or worse, working but not as intended. For extremely common cases it will work fine; but extremely common cases are either moved out into shared code or take less time to write than to “generate” and check.
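To make the “template plus variations” point concrete, here’s a made-up example (the names and units are just for illustration) of the kind of repetitive code where completion does shine: you write the first function by hand and the tool extrapolates the obvious siblings.

```python
# Hand-written template: the first conversion function.
def celsius_to_fahrenheit(c: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

# The near-mechanical variations completion tends to fill in once the
# first one exists; each is trivial, but typing them all out is tedious.
def celsius_to_kelvin(c: float) -> float:
    """Convert a temperature from Celsius to Kelvin."""
    return c + 273.15

def fahrenheit_to_celsius(f: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9
```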
I’ve been using code completion/suggestion regularly, and there have been times when I was pleasantly surprised by what it produced, but even then I had to go over it and fix some things. And while I can’t quantify how often it happened, there are plenty of times when it’s convincing gibberish.
I don’t know how to program, but to a very limited extent I can sorta kinda almost understand the logic of very short and simplistic code that’s been written for me by someone who can actually code. I tried to get ChatGPT to write a shell script for me to work as part of an Apple shortcut. It had no idea. It was useless, ridiculously inconsistent and forgetful. It was the first and only time I used ChatGPT. Not very impressed.
Given that it’s smart enough to produce output that’s in the general area of correct, albeit still wrong and logically flawed, I’d guess it could eventually be carefully prodded into producing one small snippet someone might call “good”. But at that point it feels more like an accident, the way someone who has memorised a lot of French vocabulary but never actually learned French might produce a coherent sentence once in a while, after trying and failing 50 times before and failing again immediately after, without ever knowing they’d done it.
In my experience they do a decent job of whipping out mindless minutiae and things that are well-known patterns in very popular languages.
They do not solve problems.
I think for an “AI” product to be truly useful at writing code, it would need to treat the LLM as a mere component, with something facilitating checks through static analysis and maybe some other technologies, perhaps even looping the result back through the components until they’re all satisfied before finally delivering it to the user as a proposal.
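Something like this rough sketch, assuming a placeholder `llm_generate` callable standing in for whatever model API you’d actually use, with `ast.parse` and pyflakes as stand-in static checks:

```python
import ast
import subprocess

def generate_with_checks(prompt: str, llm_generate, max_rounds: int = 5) -> str:
    """Ask the model for code, run cheap static checks, and feed any
    failures back into the prompt until the checks pass or we give up."""
    feedback = ""
    for _ in range(max_rounds):
        code = llm_generate(prompt + feedback)

        # Check 1: does the candidate even parse as Python?
        try:
            ast.parse(code)
        except SyntaxError as err:
            feedback = f"\nYour last attempt had a syntax error: {err}. Please fix it."
            continue

        # Check 2: run an external linter (pyflakes reads from stdin when
        # given no file arguments) and report anything it flags.
        result = subprocess.run(
            ["pyflakes"], input=code, text=True, capture_output=True
        )
        if result.returncode != 0:
            feedback = f"\nA linter reported:\n{result.stdout}\nPlease fix these issues."
            continue

        # All checks satisfied: hand the code back to the user as a proposal.
        return code

    raise RuntimeError("No candidate passed the checks within the round limit.")
```

A real product would obviously use heavier checks (type checking, tests, sandboxed execution), but the shape is the same: the LLM proposes, something else decides whether it’s acceptable.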
It’s a decent starting point for a new language. I had to learn webdev as an embedded C coder, and using an LLM while cross-referencing the official documentation makes a new language much more approachable.
I agree, LLMs have been helpful in pointing me in the right direction and helping me rethink what questions I actually want to ask in disciplines I’m not very familiar with.
Technically it’s possible, but it’s not likely, and it’s especially not effective. From what I understand, a lot of devs who do try to use something like ChatGPT to write code end up spending as much or more time debugging it, and generally trying to get it to work, than they would have spent just writing it themselves. Additionally, you have to know how to code to figure out why it’s not working, and even when all of that is done, it’s almost impossible to get it to integrate with a larger project without rewriting the whole thing anyway.
So to answer the question you intend to ask: no, LLMs will not be replacing programmers any time soon. They may serve as a tool of dubious value, but the idea that programmers will be replaced is only taken seriously by people who manage programmers, not by the programmers themselves.
That all depends on where the data set comes from. The code you’ll get out of an LLM is the average code of the data set. If it’s scraped from the internet (which is very likely) the code you’ll get will be an amalgam of concise examples from one website, incorrect examples from another, bits from blogs with all the typos and all the gunk and garbage that’s out there.
Getting LLM code to work well takes an understanding of what the code it gives you actually does and why it’s bad. It will always be bad, because it cannot be better than the dataset, and for a dataset to be big enough to train an LLM it has to include everything they can get, trash and all. But it can be good for giving you a framework to start from. It is, however, never going to replace actual programming and an understanding of programming. The talk of LLMs completely replacing programmers comes mostly from people who do not understand coding or LLMs at all.
Can’t LLMs eventually gain some form of “sentience” and be able to self-correct? A sort of thinking-before-speaking kind of situation.
This question right here perfectly encapsulates everything wrong with LLMs right now. They could be good tools, but the people pushing them have no idea what they even are. LLMs do not make decisions. All the decisions an LLM appears to make were made in the dataset. All the things an LLM does that make it seem intelligent were done or said by a human somewhere on the internet. It is a statistical model that determines what output is most likely to come next. That is it. It is nothing else. It is not smart. It does not and cannot make decisions. It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.
Listen, think of it like this: a man decides to take the exams to become a doctor in France, but for some reason he doesn’t learn either French or medicine. No, instead he studies every past exam and all the answers to them. He gets so good at regurgitating those answers that he can even pass the exam. But at no point does he understand what any of it means, and when asked new and novel questions he gives utter nonsense answers. No matter how good he gets at memorising those answers, he will never get any better at medicine. LLMs are as likely to gain sentience as my Excel spreadsheets are.
It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish coming out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset, they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real world concepts, which suggests a deeper kind of understanding than what the “stochastic parrot” moniker wants you to believe.
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s all they really do. And in doing so, on a higher, emergent level, they can make any kind of decision you ask them to. The only question is how good those decisions are going to be, which in turn depends entirely on the training data, how good the model is, and how good your prompt is.
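For what it’s worth, here’s a toy illustration of “decide the next token from statistics alone”, using a bigram frequency model over a ten-word corpus. It’s nothing like a real transformer, but the decision step has the same shape: look at what tends to follow, and pick accordingly.

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training set".
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def next_token(token: str) -> str:
    """Pick a follow-up token with probability proportional to its count."""
    counts = bigrams[token]
    if not counts:
        return "<end>"
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one "decision" at a time.
token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```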
Absolutely, but they need a lot of guidance. GitHub Copilot often writes cleaner code than I do. I’ll write the code and then ask it to clean it up for me and DRYify it.
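As a hypothetical example of what I mean by DRYifying (not my actual code), the before/after usually looks something like this:

```python
# Before: the sort of repetition I write when I'm moving fast.
def describe_user(user: dict) -> str:
    name = user.get("name", "").strip().title() or "Unknown"
    city = user.get("city", "").strip().title() or "Unknown"
    country = user.get("country", "").strip().title() or "Unknown"
    return f"{name} ({city}, {country})"

# After asking the assistant to clean it up: the repeated cleanup
# becomes one helper, and the formatting function shrinks.
def _clean_field(user: dict, key: str) -> str:
    """Normalise one free-text field, falling back to 'Unknown'."""
    return user.get(key, "").strip().title() or "Unknown"

def describe_user_dry(user: dict) -> str:
    return f"{_clean_field(user, 'name')} ({_clean_field(user, 'city')}, {_clean_field(user, 'country')})"
```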
My dad uses LLM Python code generation quite routinely; he says the output’s mostly fine.
I use LLMs for C code, most often when I know full well how to code something but I don’t want to spend half a day expressing it and debugging it.
ChatGPT or Copilot will spit out a function or snippet that’s usually pretty close to what I want. I patch it up and move on to the tougher problems LLMs can’t do.
The LLM can type the code, but you need to know what you want and how you want to solve it.
Of course it can. It can also spit out trash. AI, as it exists today, isn’t meant to be autonomous; you simply ask it for something and it spits it out. It’s meant to work with a human on a task. Assuming you have an understanding of what you’re trying to do, an AI can probably provide you with a pretty decent starting point. It tends to be good at analyzing existing code as well, so pasting your code into GPT and asking why it’s doing a thing usually works pretty well.
AI is another tool. Professionals will get more use out of it than laymen. Professionals know enough to phrase requests that are within the scope of the AI. They tend to know how the language works, and thus can review what the AI outputs. A layman can use AI to great effect, but will run into problems as they start butting up against their own limited knowledge.
So yeah, I think AI can make some good code, supervised by a human who understands the code. As it exists now, AI requires human steering to be useful.