I’m doing my part by writing really shitty FOSS projects for AI to steal and train on.
It seems to me that if you can explain the function of your pseudocode in enough detail for an LLM to turn it into a functional and reliable program, then the hardest part of writing the code was already done without the LLM.
2023? Like last year? Like when LLMs were more of a curiosity than anything useful?
They should be doing these studies continuously…
Edit: Oh no, I forgot Lemmy hates LLMs. Oh well, can’t blame you guys; hate is the basic reaction to what scares you, and it’s revealing.
Unlike this year, when LLMs are more of a huge scam.
We’re entering the ‘blockchain for every need’ stage. Expect massive money to flow into scams, poor ideas, and outright dangerous uses for a few years.
Before blockchain we had ‘the web’ itself in the dot-com era. Before that? I saw it in basic computing as a solution to everything.
I’m sure they will; here’s year one.
Hmmm, it’s almost like the study was testing people’s perception of AI’s usefulness vs. its actual usefulness and the results that came out.