You can’t choose where you grow up. :(
Sorry guys, this is my experimental atheist AI. I never gave it any examples of Christians being oppressed, so it kinda spits out gibberish when it sees any.
No, you’re thinking of the first scene of the movie where a fly falls into the teletype machine and causes it to type ‘Buttle’ instead of ‘Tuttle’.
Ok. I’m getting tired. You bested me this round. Have a nice day.
You say it’s the goal of the proletariat to protect the revolution, but why would they? Each proletarian would benefit from the revolution’s failure: they could live better lives as the bourgeois. You talk about the proletariat like it’s some monolithic entity with a single mind and goal. You talk big about helping the individual, but you cannot see past their class. Each proletarian is a person, with needs, desires, and opinions. What father would hold the abstract ideals of the “revolution” over the life of his sick daughter? Any father I know would do anything for the safety of his children, even hoard life-saving medicine from others.
Communist logix
we need to abolish private property so everybody has equal power.
we need a class of people to maintain public ownership
After all, how can we enforce public ownership without a more powerful class of enforcers?
I thought this was more common in neurotypical people. Like, neurotypical people are a lot more likely to sort other people into categories than neurodivergent people are. Maybe it’s just the kind of people I surround myself with, or maybe I’m just projecting my own distaste for categorizing people’s identities onto others, but I haven’t seen my friends engage in any black-or-white thinking.
It’s a poop scooper. The joke is that the snake is long, it has long poops, and so the lady needs a long poop scooper to grab it.
Why do the leaders in AI know so little about it? Transformers are completely incapable of maintaining any internal state, yet techbros somehow think they will magically have one. Sometimes machine learning can be more of an art than a science, but they seem to think it’s alchemy. They think they’re making pentagrams out of acyclic graphs, but they’re really just summoning a mirror of their own stupidity.
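To make the statelessness point concrete, here’s a toy sketch in plain Python (the functions and numbers are made up for illustration and have nothing to do with any real model): a transformer-style call is a pure function of whatever context you pass it, while an RNN-style call threads a hidden state from one step to the next.

```python
# Toy illustration of "stateless" vs. "stateful" sequence models.
# These are NOT real models; they only mimic the calling convention.

def transformer_step(context_tokens):
    """A transformer-style call is a pure function of the context window
    you hand it; nothing persists between calls."""
    return sum(context_tokens) % 97  # stand-in for "predict next token"

def rnn_step(token, hidden):
    """An RNN-style call threads a hidden state through, so its answer
    depends on everything seen so far, not just the current argument."""
    return (hidden * 31 + token) % 97

tokens = [3, 1, 4, 1, 5]

# Transformer: any "memory" lives in the prompt you re-send every time.
print(transformer_step(tokens))  # same context in -> same answer out
print(transformer_step(tokens))

# RNN: memory lives in `hidden`, carried from one step to the next.
hidden = 0
for t in tokens:
    hidden = rnn_step(t, hidden)
print(hidden)
```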
It’s really unfortunate, since they drown out all the news about novel and interesting methods in machine learning. KANs, DNCs, Mamba: they all have a lot of promise, but they can’t get any recognition because transformers are the lazy, dominant default.
Honestly, I think we need another winter. All this hype is drowning out any decent research, so all we’re getting are bogus tests and experiments that are irreproducible because they’re so expensive. It’s crazy how unscientific these ‘research’ organizations are. And OpenAI is being paid by Microsoft to basically jerk off Sam Altman. It’s plain shameful.
The issue with Sonnet 3.5, in my limited testing, is that even with explicit, specific, and direct prompting, it can’t perform anywhere near human ability, and it will often make very stupid mistakes. I developed a program which essentially lets an AI program, rewrite, and test a game, but Sonnet will consistently take lazy routes, use incorrect syntax, and repeatedly call the same function over and over again for no reason. If you can program the game yourself, it’s a quick way to prototype, but unless you know how to properly format JSON and fix strange artefacts, it’s just not there yet.
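For anyone curious what that kind of setup looks like, here’s a hypothetical sketch of a write/run/fix loop. `call_model`, `extract_code`, `game.py`, and the `--selftest` flag are all stand-ins I made up, not the commenter’s actual program or any real API:

```python
# Hypothetical sketch of a "model writes, runs, and fixes a game" loop.
# call_model() is a stand-in for whatever LLM client you use; everything
# here is illustrative, not any particular vendor's API.
import json
import subprocess

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def extract_code(reply: str) -> str:
    """Pull a JSON payload like {"code": "..."} out of the model reply.
    Badly formatted JSON is exactly the kind of artefact the comment
    complains about, so this raises instead of guessing."""
    start, end = reply.find("{"), reply.rfind("}")
    return json.loads(reply[start:end + 1])["code"]

def attempt(spec: str, max_rounds: int = 5) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        code = extract_code(call_model(spec + feedback))
        with open("game.py", "w") as f:
            f.write(code)
        # Run the generated game's (hypothetical) self-test entry point.
        result = subprocess.run(["python", "game.py", "--selftest"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests passed
        # Feed the error output back so the next round can try to fix it.
        feedback = "\nPrevious attempt failed with:\n" + result.stderr
    return False
```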
Everything can be done in constant time, at least at runtime, with a sufficiently large look-up table. It’s easy! If you want to simulate the universe exactly, you just need a table with n × m entries, where n is the number of Planck volumes in the universe and m is the number of quantum fields. Then you just compute all of them at compile time, and you have O(1) time complexity at runtime.
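On a slightly smaller scale than the universe, the trick behind the joke is real: do all the work up front, and every query at runtime becomes a single table lookup. A toy sketch (the Fibonacci example and the size cap are mine, not from the comment):

```python
# Toy version of the "everything is O(1) with a big enough table" joke:
# pay the full cost once up front, then answer every query with one lookup.

MAX_N = 10_000

def _build_fib_table(limit: int) -> list[int]:
    table = [0, 1]
    for i in range(2, limit + 1):
        table.append(table[i - 1] + table[i - 2])
    return table

# "Compile time" (really import time): do all the work now.
_FIB = _build_fib_table(MAX_N)

def fib(n: int) -> int:
    """O(1) at runtime... as long as n fits in the table we prebuilt."""
    return _FIB[n]

print(fib(90))  # 2880067194370816120, no recursion or loop at call time
```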
This might be happening because of the ‘elegant’ (incredibly hacky) way OpenAI encodes multiple languages into their models. Instead of using all character sets, they use a modulo operator on each character, so that every Unicode character is represented by a small range of values. On the back end, it somehow detects which language is being spoken and uses that character set for the response. Seeing as the last line seems to be the same mathematical expression you asked about, my guess is that your equation just happened to perfectly match some sentence that would make sense in that weird language.
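Just to be clear about what that scheme would mean, below is a literal toy rendering of the modulo idea the comment is guessing at. This is not how OpenAI models actually handle text (they tokenize with byte-pair encoding); it’s only here to show why collapsing characters with a modulo would produce exactly this kind of collision.

```python
# Literal rendering of the speculated scheme: collapse every Unicode
# character into a small range with a modulo, then hope the "language"
# can be guessed back on the other end.

RANGE = 256  # made-up size of the collapsed alphabet

def collapse(text: str) -> list[int]:
    return [ord(ch) % RANGE for ch in text]

# Two completely different characters can collapse to the same code,
# which is the kind of collision that would make an equation come back
# as a sentence in some unrelated language.
a = "\u0100"            # Latin A with macron, code point 256
b = chr(0x0100 + RANGE)  # a different character, same value mod 256
print(collapse(a), collapse(b))  # [0] [0] -> collision
```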
Anything that’s Turing complete, has enough RAM, and has a C compiler can run Linux. Theoretically, you could program a CPLD to run brainfuck and you could still run Linux.
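In the spirit of that claim, here’s a minimal brainfuck interpreter in Python (my own toy, nothing to do with CPLDs): eight symbols, a tape, and a pointer are enough for Turing completeness, and in principle you can keep stacking emulation layers like this until something is running Linux, however slowly.

```python
# Minimal brainfuck interpreter: a tape, a pointer, and eight symbols.
import sys

def run_bf(program: str, tape_size: int = 30_000) -> None:
    tape = bytearray(tape_size)
    ptr = pc = 0
    # Precompute matching bracket positions for [ and ].
    jumps, stack = {}, []
    for i, ch in enumerate(program):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            sys.stdout.write(chr(tape[ptr]))
        elif ch == ",":
            data = sys.stdin.read(1)
            tape[ptr] = ord(data) % 256 if data else 0
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # jump back to the loop start
        pc += 1

# Classic "Hello World!" program.
run_bf("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++"
       "..+++.>>.<-.<.+++.------.--------.>>+.>++.")
```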
Cat.