OpenAI is experiencing performance problems with its new Orion LLM and must adjust its strategies to improve future models. The limited availability of high-quality training data is reportedly one cause, and the company is turning to synthetic data generated by its own models to fill the gap.
Training AI on “synthetic” data generated from other AIs sounds genius!
Seems like a bulletproof way to make AI infinitely smarter just by recursively feeding itself! Great success is on the horizon!
Research has shown that even small amounts of synthetic data injected into a training set can quickly lead to a phenomenon termed “model collapse”, though I prefer the term “Hapsburg AI” (not mine).
Basically, this is the kind of thing you announce you’re doing because it will hopefully get you one more round of investment funding while Sam Altman finishes working out how to fake his death.
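For anyone who wants to see the failure mode without reading the papers, here is a minimal toy sketch of my own (it has nothing to do with OpenAI's actual pipeline): each “generation” fits a Gaussian to the previous generation's samples, then emits fresh synthetic samples from that fit. Estimation error compounds, the fitted spread performs a downward-biased random walk, and later generations forget the tails of the real distribution entirely.

import numpy as np

# Toy "Hapsburg AI" simulation (my own sketch, not OpenAI's method):
# generation 0 is real data; every later generation is trained purely on
# samples produced by the previous generation's fitted model.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # generation 0: real data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std(ddof=1)    # "train" on current data
    data = rng.normal(mu, sigma, size=20)        # next generation: all synthetic
    if gen % 40 == 0:
        # The fitted spread tends to drift toward zero over generations:
        # the model gradually forgets the original distribution's variance.
        print(f"gen {gen:3d}: fitted sigma = {sigma:.3f}")

Swap the Gaussian for an LLM and the 20 samples for a web scrape, and you have the concern in miniature.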
I like to call it “saving the red JPEG.” One more save will surely make it better.
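The joke maps onto a real phenomenon, generation loss, and it is easy to demo. A small sketch of my own using Pillow, assuming nothing beyond a synthetic noise image: re-encode it as JPEG over and over, and the drift from the original never comes back down, because a lossy encoder can only discard information, never restore it.

import io
import numpy as np
from PIL import Image

# Toy "saving the red jpeg" demo (my illustration): JPEG is lossy, so each
# save can only throw information away relative to the original.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
img = Image.fromarray(original)

for save_count in range(1, 6):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=60)     # "one more save..."
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    drift = np.abs(np.asarray(img).astype(int) - original.astype(int)).mean()
    print(f"after save {save_count}: mean abs error vs original = {drift:.1f}")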