We already have that: there are already people whose sole job is telling the machine what to do in specific enough terms that the machine doesn’t make mistakes. They’re called programmers. People who think LLMs can replace programmers don’t understand what a programmer does. An AGI would surely make programmers obsolete, but it would also make every other job obsolete, and I don’t think we’re 10 years away from one.
I’m not talking about an “AI” replacing programmers at coding. I mean making programming and coding obsolete. A new paradigm for how software is made. It won’t be coded. There won’t be different software for different tasks, just one piece of software running everywhere and doing everything.
But yeah, like I said, 10 years is maybe too little for that.
What you’re describing is called the Singularity, and it’s an AGI, we’re not even close to anything remotely similar to that. I’m not even sure we’ll get there in my lifetime.
Don’t be a condescending little prick, mate.
I’m not talking about an AGI or a singularity. There’s a long way between what we have now and that.
What I’m talking about will happen in the meantime and will finally let me stop dealing with prima donnas who think they’re the last Coca-Cola in the desert because they can copy-paste code from Stack Overflow.
You’re talking about a computer you can ask to do anything, and it not only understands what you ask (computers can do this now) but what you mean (you need an AGI for that).
You can already replace anyone whose sole job is to copy-paste stuff from Stack Overflow, but that’s not all a programmer does.
There’s an excellent demonstration of what being a programmer is that some teachers do in a Programming 101 class: have the students describe, step by step, how to do a day-to-day task, and people always skip steps or miss corner cases. Being a programmer is knowing how to explain things to a computer unambiguously, and until computers have general intelligence they will either refuse ambiguous tasks or make wrong assumptions about them. If LLMs become advanced enough that you could “prompt” the computer to do stuff, the prompt would have to be very specific, and written in a very specific way, which would essentially become a programming language.
You’re stuck in the current paradigm of how software works. What I’m talking about is not the current paradigm, and it’s not AGI.
We don’t need AGI for what I’m talking about. You’re fixated on programmatically telling a computer how to do something, and I’m not sure whether you’re just being difficult or genuinely can’t grasp what I’m imagining.
We already have useful LLMs for different tasks. Heck, my team is developing software that uses LLMs to perform tasks we’d be so fucked on if we had to program them from scratch! Right now, not even 2 years after the first version of ChatGPT was released. Do you think this technology will stay the same, or will it keep being developed into something most of us can’t comprehend, or will even deny, like you’re doing now?
It’s your right not to agree with me, and I accept that, but don’t say I’m wrong, mate; you can’t possibly know!
And don’t talk about AGI or the singularity like it’s the next step; you’re doing yourself a disservice.
Yes, I’m stuck in the paradigm that computers are not intelligent and can’t understand what I mean; there’s a term for software that can: AGI.
Any programmer knows that using LLMs for programming ends up in one of the following cases:
1. It isn’t used in any meaningful way, i.e. it generates boilerplate like getters and setters (roughly the kind of thing sketched after this list), or it’s used instead of Google.
2. It makes the team take longer, because they need to scrutinize and understand what was generated.
3. It produces a shitty version of the program, because it doesn’t understand the need or the corner cases.
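To be concrete about case 1, the “boilerplate” I mean is roughly this kind of thing (a made-up illustration, not code from any real project): trivial getters and setters that an LLM can spit out but that carry no design decisions at all.

    // Hypothetical illustration of LLM-friendly boilerplate: a plain data
    // holder whose getters and setters involve no actual thinking.
    public class Customer {
        private String name;
        private String email;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }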
Only people who don’t understand programming think LLMs are useful as they are now. Until computers are actually intelligent, actually understand what you’re asking them to do, think through all of the corner cases, and make decisions about all of the things you didn’t specifically ask them to consider, those 3 cases will be the only outcomes for “AI” in programming.
Don’t believe me? Tell me a prompt you would use to generate a useful program (i.e. not a Hello World) and I’ll list multiple assumptions you’re making about how it needs to work that you did not include in your prompt, and that the “AI” therefore won’t cover.
You haven’t really described what you are imagining.
Proper “AI”. No more coding; you just tell the machine what to do and it does it. I don’t think it applies to the physical world, but on computers, every profession that isn’t physical will become much rarer. Either pivot to AI management or be the arms that the AI “guides” through a task.
Telling a computer specifically what to do and how to do it without making mistakes is coding. Programming is a level above that: designing the architecture of how to approach the business problem.
What the other commenter is saying is that simply being able to tell some model “build an app that does XYZ” requires AGI, because that set of instructions is not complete: the machine needs outside knowledge and the ability to make judgement calls in order to complete it.
If that isn’t what you meant, it is at least what you said. The breakdown in communication here, between humans, should also serve as another reminder of how difficult it is to convey an idea to another entity, and how that problem will remain difficult for a very long time.
OK, if you got it, you got it; if you didn’t, I can’t be bothered to spend more time trying to explain what is a very simple concept that people just don’t want to entertain.
I’m out, see you in 10 years.
He got it; you’re the one who’s not getting it. It is impossible to prompt an entire program out of an LLM. I mean, you can do it, but even a perfect LLM will give you back a steaming pile of shit, and the reason is that you asked for a steaming pile of shit without realizing it.
I’ll make you the same challenge again: write me the prompt you would give this “AI” to do something complex, and I’ll point out several of the assumptions a non-AGI piece of software could wrongly make.