Pretty much exactly this: Ghost - Call Me Little Sunshine
Not sure if this is what you’re referencing, but there’s a famous quantum computing researcher named Scott Aaronson who has this at the top of his blog:
If you take nothing else from this blog: quantum computers won’t solve hard problems instantly by just trying all solutions in parallel.
His blog is good; it covers a lot of quantum computing topics at an accessible level.
Cross-posted to !bestoflemmy@lemmy.world, which is probably the closest active community we’ve got
Ha, that reminds me of Donald Knuth offering 0x$1.00 to anyone who finds a mistake in TAOCP, like this guy:
Sorry, mixed up the videos. It’s actually this one, from 2014:
https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript
Edited link above
I’ve been wondering how much of that is the back-to-school effect. I have the sense that Lemmy skews younger, but I can’t really judge, since I’ve been inactive for long stretches due to life. I’ve been trying to contribute more now.
FYI this is almost certainly a troll account:
The username is a pun on “I need to be eating”
FYI that’s not a Slipknot shirt, it’s a great band called Ghost:
Strong The Abysmal Eye vibes
Thanks for writing that out. I didn’t post this intending to break rules or stir up drama. I thought it was interesting on its own merits, in essence the same as “How I left Scientology” or “How I left Jehovah’s Witnesses”. I also thought the mention of dwindling users was interesting. If you’ll excuse the LessWrong link (which is a site with its own weird in-group thinking), here’s an essay called “Evaporative Cooling of Group Beliefs” that talks about that effect.
They really tried with Web Environment Integrity:
https://github.com/explainers-by-googlers/Web-Environment-Integrity/issues/28
There was enough pushback that they dropped that proposal, but expect to see it back in mutated form soon.
Oh, you’re from hexbear.
There is no drama. This is a useful account of escaping extremism.
Not sure how ollama integration works in general, but these are two good libraries for RAG:
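To illustrate what those libraries do under the hood, here’s a minimal sketch of the retrieval step in RAG. It uses a toy bag-of-words “embedding” so it runs with just the standard library; in a real ollama setup you’d swap `embed()` for calls to an embedding model (e.g. ollama’s embeddings endpoint) and pass the retrieved text into the chat prompt as context. The function names and toy corpus here are illustrative, not from any specific library.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. Real RAG pipelines use dense
    # vectors from an embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "ollama runs large language models locally",
    "RAG retrieves relevant documents before generating an answer",
    "bananas are a good source of potassium",
]
context = retrieve("how does RAG find relevant documents", docs)
print(context[0])  # the RAG document ranks highest
```

The generation half of RAG is then just “stuff `context` into the model prompt,” which is where the ollama integration would come in.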
This is a silly argument:
‘[…] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
EDIT: From the paper:
The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
Canonical lives and dies by the BDFL model. It allowed them to do some great work early on in popularizing Linux with lots of polish, and Canonical still does good work when forced to externally, like contributing upstream. The model falters when they have their own sandbox to play in, because under a BDFL any internal feedback like “actually this kind of sucks” just gets brushed aside. It doesn’t help that the BDFL in this case is the CEO, founder, and funder of the company, paying everyone who works there. People generally don’t like to risk their job to say the emperor has no clothes; it’s easier to shrug your shoulders and let the internet do that for you.
Here are good examples of when the internal feedback failed and the whole internet had to chime in and say that the hiring process did indeed suck:
https://news.ycombinator.com/item?id=31426558
https://news.ycombinator.com/item?id=37059857
“markshuttle” in those threads is the owner/founder/CEO.
It’s a nice change of pace to see how they interact when they’re not busy parenting Calvin
Thanks. I’m not intending to push anything here. She has some other interesting articles, and I came across this one on her blog while reading those. I didn’t see any discussion of it online and thought this would be a good community for evaluating the claims. I think pieces like this, which try to make concrete claims from data, are best discussed and picked apart, especially since it didn’t come across as bad faith to me, but I can understand that might not be a view shared by everyone.