The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of “competent graduate student” is reached, at which point I could see this tool being of significant use in research level tasks.
I genuinely hate this statement. A competent grad student can solve problems. GPT cannot solve anything; all it does is put together the shit it stole from somewhere before.
O1 is (apparently) different according to some videos I watched, as it pulls apart the question and does some reasoning steps.
I’d love to see one of those videos
like, a video of Tao giving a demonstration?
@NegentropicBoy
O1 is (apparently) different according to some videos I watched, as it pulls apart the question …
Yes
Using GPT without appearing like an idiot takes a competent grad student
This I can believe tbh. It’s a very useful tool in the hands of an expert. Otherwise it’s like giving a chimp a gun.
Maybe this is why I am surprised at people's hatred of ChatGPT. It's born of misusing a tool meant for experts, like newcomers struggling with a C++ compiler error.