It’s me. I write software and complain to the hardware team when I don’t have LEDs to blink for diagnostic purposes.
Exactly……
Like, do you not want a light that informs you of the evil?
Just because I installed a check-engine light in my AI doesn’t mean I designed it to be evil 😝
I feel like this just opens up a whole new line of inquiry.
For starters, how did you define “evil” and how complicated was it to design its detection? Is there an acceptable amount of evil that they can do, as a treat?
Hillary: robots must follow the three laws of robotics
Bernie: robots can have a little evil
1980s: evil robot eyes were red because that was just the cheapest option at the time and nobody wants a green or yellow-eyed robot anyway.
2000s: you have to go out of your way to install red LEDs for the evil function, along with the blue LEDs you were obviously gonna use since they’re the trendy new hotness after (finally!) having been invented in 1994.
2020s: evil robot eyes are red because everything’s got addressable RGB LEDs in it these days and the robot picked red in software (sketch below).
Innovation!
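To be fair, the 2020s version really is just one line of firmware. Here’s a minimal sketch of “the robot picked red in software,” assuming a hypothetical `set_eye_color()` driver, an `EVIL` flag, and made-up color values; in practice the call would go through whatever NeoPixel/WS2812-style library the hardware team actually gave you.

```python
# Sketch only: the EVIL flag, color choices, and set_eye_color() are all
# illustrative stand-ins, not a real driver API.

EVIL = True  # hypothetical flag raised by the evil-detection module

# Colors as (R, G, B) tuples, 0-255 per channel, as used by WS2812-style pixels.
RED = (255, 0, 0)        # evil mode
COOL_BLUE = (0, 64, 255) # the trendy non-evil default

def set_eye_color(pixel_index: int, color: tuple[int, int, int]) -> None:
    """Placeholder for a real addressable-LED driver call."""
    print(f"eye {pixel_index} -> RGB{color}")

# Two eyes, one color decision made entirely in software.
for eye in (0, 1):
    set_eye_color(eye, RED if EVIL else COOL_BLUE)
```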
And this one
A good engineer prepares for every use case.
What normally happens is that the engineer will raise the corner cases and then be told those will never happen and that they must not prepare for them. Also, they can now deliver a week earlier.
Look, turning evil shouldn’t happen, but UX best practices dictate that you should inform the user of the error so that they can troubleshoot the problem. Red-eyed murder robots are good user experience!