It is possible for AI to hallucinate elements that don’t work, at least for now. This requires some level of human oversight.
So, basically the same as LLMs, except these got lucky.
It’s like putting a million monkeys in a writers’ room, but supercharged on meth and consuming insane resources.
“We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better.”
Great, so we will eventually have black box chips running black box algorithms for corporations where every aspect of the tech is proprietary and hidden from view with zero significant oversight by actual people…
The true cyber-dystopia.
This has been going on in chess for a while as well. Computers can detect patterns that humans cannot because they have a better memory and knowledge base.
This isn’t exactly new. I heard a few years ago about a case where the AI put wires on the chip that shouldn’t have done anything, since they didn’t go anywhere, but if they were removed the chip stopped working correctly.
Yeah, I stumbled across that one a while back too, probably. Was it also the one where the initial designs would refuse to work outside room temperature ’til the AI was asked to take temperature into account?
Sounds like RF reflection used like a data capacitor or something.
Yeah, that probably sounds so unintuitive and weird to anyone who has never worked with RF.
Flashback to the 1960s: Magic and More Magic.
I remember this too; it was years and years ago (I almost want to say 2010-2015). I can’t find anything searching for it, though.
See? I want this kind of AI, not a word-dreaming algorithm that spews misinformation.
Read the article: it’s still ‘dreaming’ and spewing garbage; it’s just that in some iterations it’s gotten lucky. “Human oversight needed,” they say. The AI has no idea what it’s doing.
Yeah I got that. But I still prefer “AI doing science under a scientist’s supervision” over “average Joe can now make a deepfake and publish it for millions to see and believe”
I wonder how well it would work to use AI to develop an algorithm that generates chip designs. My annoyance with all of this stuff is how often people say, “Look! AI invented something new! It only took a few hours and 100x the resources!”
AI is mainly the capitalist dream of a drinking bird toy keeping a nuclear reactor online and paying a layman slave wages to make sure the bird does its job (obligatory “Simpsons did it”).
Maybe, but remember generative AI isn’t any kind of deductive or methodical reasoning. It’s literally “mash up the publicly available info and give a crowd-sourced version of what to add next”. This works for art because that kind of random harmony appeals to us aesthetically, and art is an area where people seek fewer constraints. But when you’re engineering, it’s the opposite. Maybe it’s useful for getting engineers out of a rut and imagining new possibilities, but that’s it. Generative AI has no idea whether what it’s smushed together is garbage or randomly insightful.
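To make the “what to add next” point concrete, here’s a toy sketch (Python, with made-up tokens and probabilities, not any real model’s API) of what a generation loop boils down to:

```python
import random

# Toy stand-in for a language model: a lookup table from the last two tokens
# to a made-up distribution over the next token. In a real LLM the distribution
# comes from a neural net, but the generation loop is conceptually the same:
# pick a plausible-looking next token, over and over.
toy_model = {
    ("the", "chip"): {"works": 0.5, "melts": 0.3, "meows": 0.2},
    ("chip", "works"): {"fine": 0.6, "somehow": 0.4},
}

def next_token(context):
    # Look up the distribution for the last two tokens; end if we know nothing.
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

text = ["the", "chip"]
while text[-1] != "<end>" and len(text) < 8:
    text.append(next_token(text))

print(" ".join(text))  # e.g. "the chip works somehow <end>"
```

Every step just samples a plausible continuation; nothing in the loop checks whether the result is true, buildable, or garbage.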
This is what almost all AI is. GPT models are a tiny subsect.
Subset
Idk, kinda the same, but instead of misinformation we get ICs that release a cloud of smoke in the shape of a cat when presented with a specific pattern of inputs (or something equally batshit crazy).
I want AI that takes a foreign-language movie and augments the actors’ faces and mouths so it looks like they are speaking my language, and also changes their voices (not a voice-over) to be in my language.
What used to take weeks of highly skilled work can now be accomplished in hours.
(…) delivers stunning high-performance devices that run counter to the usual rules of thumb and human intuition (…) Eventually, AI-created circuits will power better AI. The singularity may happen soon. This is unpredictable.




