Meanwhile most passenger trains in Germany run in push-pull mode (not double-headed, which would mean two locomotives). They have only one locomotive, but the last wagon also has a driver’s cabin, so the locomotive can push the train while still being controlled from the front.
I’m German, and I’ve never heard that before. I’d be seriously weirded out by someone saying that or teaching it to their kids
The meme only says “if … then …”. It does not imply the inverse, “if not … then not …”.
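In propositional-logic terms (a minimal formalization of that point, with P and Q standing in for the meme’s two clauses):

```latex
% "if P then Q" does not entail its inverse "if not P then not Q":
% P false, Q true makes P \to Q true but \neg P \to \neg Q false.
(P \to Q) \not\Rightarrow (\neg P \to \neg Q)

% What it IS equivalent to is its contrapositive:
(P \to Q) \equiv (\neg Q \to \neg P)
```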
Oh awesome, thank you so much!
I’d love to know what font was used for the big “Saturday” there!
Congrats, you completely missed the point. Maybe read the actual article before going on a rant that’s only tangentially related?
It is an algorithm that searches a dataset and, when it can’t find something, it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset; LLMs can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of forming actual internal models of real-world concepts, which suggests a deeper kind of understanding than the “stochastic parrot” moniker would have you believe.
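A minimal sketch of what that mode of operation looks like, with a toy stand-in for the network (the `model` function and its fake logits are assumptions for illustration; a real LLM produces logits from its learned weights, and nothing in the loop searches a stored dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def model(tokens):
    # Stand-in for a trained network: returns made-up logits over VOCAB.
    # A real LLM computes these from its learned weights alone — at no
    # point does it look anything up in its training dataset.
    return rng.normal(size=len(VOCAB))

def generate(prompt, max_new_tokens=5, temperature=1.0):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # The only "operation": sample the next token and append it.
        tokens.append(VOCAB[rng.choice(len(VOCAB), p=probs)])
    return " ".join(tokens)

print(generate(["the", "cat"]))
```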
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s really all they do. And in doing so, on a higher, emergent level, they can make any kind of decision you ask them to. The only question is how good those decisions are going to be, which in turn depends entirely on the training data, how good the model is, and how good your prompt is.
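A minimal sketch of treating next-token prediction as a decision procedure (`complete` is a hypothetical stand-in for any LLM completion call — an assumption for illustration, not a real API):

```python
# Ask the model to pick one of several options and parse its reply.
def decide(options, question, complete):
    prompt = (
        f"Question: {question}\n"
        f"Options: {', '.join(options)}\n"
        "Reply with exactly one of the options.\n"
        "Answer: "
    )
    answer = complete(prompt).strip()
    # The model's token-by-token choices surface here as one discrete
    # decision; how good it is depends on training data, model, and prompt.
    return answer if answer in options else None

# Stub completion that always answers "yes", just to make the sketch runnable.
print(decide(["yes", "no"], "Ship the release today?", lambda p: "yes"))
```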
Revenge bedtime procrastination is the term you’re looking for. Although I’m not sure that’s actually what the OP is describing
That holds only if you assume that random chance decides whether someone votes or not, and that is a big assumption to make. A lot of factors that affect your ability or willingness to vote (age, income, and education, for instance) also correlate with your political leaning, so I highly doubt that it’s a reasonable assumption.
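A toy simulation of that selection effect (all numbers here are assumptions for illustration): two groups that differ in both preference and turnout make the voter sample unrepresentative of the population:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

group_a = rng.random(n) < 0.5              # 50% of the population is group A
prefers_x = np.where(group_a,
                     rng.random(n) < 0.7,  # group A: 70% prefer X
                     rng.random(n) < 0.4)  # group B: 40% prefer X
turnout_p = np.where(group_a, 0.8, 0.5)    # turnout differs by group
votes = rng.random(n) < turnout_p

print(f"preference for X in population: {prefers_x.mean():.1%}")   # ~55%
print(f"vote share of X among voters:   {prefers_x[votes].mean():.1%}")  # ~58%
# Voters overweight group A, so X's vote share exceeds its actual support —
# exactly what happens when turnout is not decided by random chance.
```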