Stop trying to make Clippy happen.
I tried asking OpenAI for the name of a song based on some lyrics I barely remember. It’s a song whose name has escaped me for about 15 years. Anyways, when it wasn’t just straight up lying about song names or their lyrics, it would not stop guessing the same song names, even after I told it several times to stop.
Needless to say, I still don’t know the song name.
This is why I personally take time out of my day to help manage people’s expectations of LLMs online.
Expect them to draw power and generate bullshit forever.
Just feed it info as if Jar Jar Binks is speaking directly to it.
I mean, can you imagine Jar Jar and Yoda having an argument? Or what if that argument leads to hot steamy sex?
MMMMmmmmmMMMM!!! CUM IN MY ASS YOU WILL!!!
MeSa No GonnA BE Able To Hold Back! No No No NO!!!
MeSa Sorry, Yoda!
On my face, it got…
Yes. I DID just put those images into your brain. Now go put them into AI’s brain.
Why?
The same reason you speak gibberish to all AI call center prompts. To distort AI’s ability to understand humans and force a human to look at the errors. Hopefully, in the end, prompting them to abandon this technology entirely.
That’s not how AIs are trained.
In a session they’re responding to what you wrote before because they have a long buffer of context for your session, but that’s just temporary and doesn’t get fed back into anything permanent.
Nty
I’m like a 6/10 in terms of excitement about this.
Well what are the lyrics?
Yup, that’s standard. If you’re about three responses in, give up; it’s already lost and incapable of focusing on the requirements. It will also lie to please. There’s no admission of uncertainty, no confidence score. You only catch the things you know are wrong, and considering how often that happens, the rest can only be taken with a grain of salt.
AI models are so broken. They are wrong most of the time in my experience. This meme is accurate for most intelligent people.
For those of you confused… Don’t worry about it. Just understand you’re being blatantly lied to by a computer more often than you know.
I think the issue was making it usable / marketable without technical knowledge.
A better path would have been to keep it as a technical ML tool where the LLM is only used to interpret unstructured data.
“Tell me what happened to Bob, Clippy, and Cortana…”
What about that one that turned racist overnight so they killed it?
OMG - Tay! I forgot about her!
https://www.theguardian.com/world/2016/mar/29/microsoft-tay-tweets-antisemitic-racism
Me at Best Buy.
Hey, I can speak really eloquently! Would you please ask me things I wouldn’t know anything about so I can learn to do what you do, so I can do it 30 times better than you?
I mean, it will take me a while… perhaps 3 or 4 months? C’mon, teach me?
I haven’t used it often, but the few times I have asked it very specific programming questions, it has usually been pretty good. For example, I am not very good with regex, but I can usually ask Copilot to create regex that does something like verifying a string matches a certain pattern, and it performs pretty well. I don’t use regex enough to spend a lot of time learning it, and I could easily find a few examples online that can be combined to make my answer, but Copilot is much quicker and easier for me. That said, I don’t think I would trust it past answering questions about how to implement a small code snippet.
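To give a concrete (made-up) example of what I mean, this is roughly the kind of snippet I ask for: a small check that a string matches a pattern, here a version number. The pattern and names are mine for illustration, not anything Copilot actually generated:

```python
import re

# Check whether a string looks like a simple version number, e.g. "1.4.2".
VERSION_PATTERN = re.compile(r"\d+\.\d+\.\d+")

def is_version_string(text: str) -> bool:
    """Return True if the whole string matches MAJOR.MINOR.PATCH."""
    return VERSION_PATTERN.fullmatch(text) is not None

print(is_version_string("1.4.2"))  # True
print(is_version_string("1.4"))    # False
```

That’s the scale of task I trust it with: a few lines I can read, test, and throw away if they’re wrong.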