A big issue that a lot of these tech companies seem to have is that they don’t understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.
AI is legitimately interesting technology which definitely has specialized use cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained circumstances (like chess or go). However, as a member of the general public, 99% of what people are pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.
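The chess/go point is worth making concrete: in a fully constrained game, a computer can simply search the rules. A minimal sketch, using tic-tac-toe rather than chess so the whole tree fits in a blink (toy illustration only; real chess and go engines add pruning and learned evaluation, but the principle of optimizing within fixed rules is the same):

```python
# Exhaustive minimax over tic-tac-toe: perfect play within a fully
# constrained rule set, no "understanding" required.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board is a draw:
print(minimax([None] * 9, 'X'))  # 0
```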
I do not understand all the hype around AI. I can understand the danger; people who don’t see that it’s bad are using it in place of people who know how to do things. But in my teaching, for example, I’ve never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can’t imagine any student would be inclined to cheat with it more than once after their grade comes in the first time. (In this sense, it’s actually impressive technology: we’ve had computers that can do advanced math highly accurately for a while, but we’ve finally developed one that’s worse at math than the average undergrad in a gen-ed class!)
The answer is that it’s all about “growth”. The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).
To make share price go up like that, you have to do one of two things; show that you’re bringing in new customers, or show that you can make your existing customers pay more.
For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.
And that’s what “AI” looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.
And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI-flavoured products. That’s why they’re shoving it into everything they can. If they put AI in Notepad, then they can claim that every time you open Notepad you’re “engaging” with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show “engagement”, “interest”, which they can then use to promise that down the line huge piles of money will fall out of this piñata.
The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren’t just them setting huge piles of money on fire.
> The answer is that it’s all about “growth”. The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).
As you can see, this can’t go on indefinitely. And such unpleasantness is well known from every huge technological revolution. Every time it eventually resolved, and not in favor of those on the quick-buck train.
It’s still not a dead end. The cycle of birth, growth, old age, death, rebirth from the ashes, and so on still works. It’s only the competitive, evolutionary, “fast” model that has been killed - temporarily.
These corporations will still die unless they make themselves effectively part of the state.
BTW, that’s what happened in the Germany Marx described, so despite my distaste for Marxism, some of its core ideas may be locally applicable to the process we observe.
It’s like a worldwide gold rush IMHO, but not even really worldwide. There are plenty of solutions to be developed and sold in developing countries in place of what fits Americans and Europeans and Chinese and so on, but doesn’t fit the rest. Markets are not exhausted for everyone. Just for these corporations because they are unable to evolve.
> Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.
If only Sun had survived till now, I feel they would have good days. What made them fail then would make them more profitable now. They were probably planning too far ahead, and were too careless about actually keeping the company afloat.
My point is that Sun could, unlike these corporations, function as some kind of “the phone company” or “the construction company”, etc. Basically what Microsoft pretended to be in the 00s. They were bad at choosing the right kind of hype, but good at having a comprehensive vision of computing. Except that vision and its relation to finances had schizoaffective traits.
Same with DEC.
> The point is to show “engagement”, “interest”, which they can then use to promise that down the line huge piles of money will fall out of this piñata.
Well, it’s not unprecedented for business opportunities to dry up. It’s actually normal. What’s more important, the investors supporting this are the dumber kind, and the investors investing in more real things are the smarter kind. So when these crash (for a few years, hunger will probably become a real issue, and not just in developing countries, when that happens), those who preserve power will tend to be rather insightful people.
> If only Sun had survived till now, I feel they would have good days
The problem is a lot of what Sun brought to the industry is now in the Linux arena. If Sun survived, would Linux have happened? With such a huge development infrastructure around Linux, would Sun really add value?
I was a huge fan of Sun too; they revolutionized the industry far beyond their footprint. However, their approach seemed more research-oriented or academic at times, and didn’t really work with their business model. Red Hat figured out a balance where they could develop open source while making enough to support their business. The Linux world figured out a different balance where the industry is above and beyond individual companies and doesn’t require profit.
> The problem is a lot of what Sun brought to the industry is now in the Linux arena. If Sun survived, would Linux have happened? With such a huge development infrastructure around Linux, would Sun really add value?
Linux is not better than Solaris. It was, however, circumstantially more affordable, more attractive, and more exciting than Solaris, all at the same time. Sun made a lot of strategic mistakes, but those were made in the context of having some vision.
I mean this to say that the “huge development infrastructure around Linux” is bigger, but much less efficient than that of any of the BSDs, or than that of Solaris in the past. Linux people back then would take pride in their ability to assemble bigger resources, albeit with smaller efficiency, and call that “the cathedral vs the bazaar”, with Linux as the bazaar. Well, by now one can see that the bazaar approach makes development costs bigger in the long term.
IMHO if Sun hadn’t made those mistakes, Solaris would be the most prestigious Unix and Unix-like system, but those systems would be targeted by developers similarly. So Linux would be alive, but not much more or less popular than FreeBSD. I don’t think they’d have needed Solaris to defeat all the other Unix systems. After all, in the early 00s FreeBSD had SVR4 binary-compatibility code, similar to its Linux-compatibility code, which is still there and widely used. Commercial software distributed in binaries would probably have been compiled for that, but would run on all of them. Or maybe not.
It’s hard to say.
But this

> The Linux world figured out a different balance where the industry is above and beyond individual companies and doesn’t require profit

is wrong: everything about Linux that keeps it going now is very commercial. Maybe 10 years ago one could say it’s not all about profit.
The point is the industry is not a profit driven entity, but has room for many profit driven entities.
That’s like saying your body is not a protein-driven mechanism (because there are many other things involved), but has room for proteins.
If somebody tears out half of your internal organs, you die.
If profit-driven companies stop participating in Linux, Linux dies. Today’s Linux. Linux of year 1999 wouldn’t.
That’s how even gifts can be the needle to control you.
I mean, why is this even a point of contention? BSDs played it safe in terms of politics; Linux gambled by not considering the dangers. BSDs grew more slowly; Linux took the bank. But now Linux is confined by the decisions made back then. The BSDs are more free.
There is this seeming need to discredit AI from some people that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.
But generative AIs are really good at tasks I wouldn’t have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity’s current workflow. It’s not just hype.
The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI-enabled fridges and toasters.
> The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI-enabled fridges and toasters.
This is literally the hype. This is the hype that is dying and needs to die. Because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as General AI that can “DO ALL THE THINGS!” which it’s not and never will be.
The obsession with replacing workers with AI isn’t going to die. It’s too late. The large financial company that I work for has been obsessively tracking hours saved in developer time with GitHub Copilot. I’m an older developer and I was warned this week that my job will be eliminated soon.
> The large financial company that I work for
So the company that is obsessed with money that you work for has discovered a way to (they think) make more money by getting rid of you and you’re surprised by this?
At least you’ve been forewarned. Take the opportunity to abandon ship. Don’t be the last one standing when the music stops.
I never said that I was surprised. I just wanted to point out that many companies like my own are already making significant changes to how they hire and fire. They need to justify their large investment in AI even though we know the tech isn’t there yet.
Computers have always been good at pattern recognition. This isn’t new. LLMs are not a type of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.
The same goes for code completion. They will just generate something that fills the pattern they’re told to look for. It doesn’t matter if it’s right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we’ve had code-completion software for over a decade at this point. LLMs do it less efficiently and less reliably. Their only upside is that they can sometimes recognize and suggest a pattern that those programming the other coding helpers might have missed. Outside of that, for things like generating whole blocks of code or even entire programs, you can’t even get an LLM to reliably spit out a hello-world program.
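The “recognize patterns and loosely reproduce them in semi-randomized ways” idea can be sketched with a toy word-level bigram chain. To be clear, this is not how real LLMs work internally (those are neural networks over subword tokens); it only illustrates the pattern-completion-plus-sampling principle being described:

```python
import random
from collections import defaultdict

# Toy pattern reproducer: learn which word follows which, then sample.
# It has no concept of meaning -- only of which continuations fit the
# pattern seen in training.

def train(text):
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # no known continuation for this word
        out.append(rng.choice(choices))
    return ' '.join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train(corpus)
print(generate(table, "the", 8, seed=1))
```

Every word pair it emits occurred in the training text, so the output is locally plausible while carrying no understanding of cats, dogs, or rugs.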
Goldman Sachs, quote from the article:
“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”
Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. As with other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has also only just started turning a profit), plus a great deal of enshittification once more people and companies are dependent. Or it might just be a bubble.
As humans we’re not great at predicting these things including of course me. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high costs, many others will fail and both AI enthusiasts and detractors will claim they were right all along.
Like what outcome?
I have seen gains on cell detection, but it’s “just” a bit better.
See now, I would prefer AI in my toaster. It should be able to learn to adjust the cook time to what I want no matter what type of bread I put in it. Though is that really AI? It could be. Same with my fridge. Learn what gets used and what doesn’t. Then give my wife the numbers on that damn clear box of salad she buys at Costco every time, which takes up a ton of space and always goes bad before she eats even 5% of it. These would be practical benefits to the crap that is day-to-day life. And far more impactful than search results I can’t trust.
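For what it’s worth, the learning toaster wouldn’t need anything resembling an LLM. A hypothetical sketch (the class and parameter names are made up for illustration) using plain online averaging:

```python
# Hypothetical "learning" toaster: nudge the cook time toward the
# user's corrections, per bread type. Simple statistics, no neural
# network -- which is part of the point.

class LearningToaster:
    def __init__(self, default_secs=120, rate=0.5):
        self.times = {}          # bread type -> learned cook time (secs)
        self.default = default_secs
        self.rate = rate         # how aggressively to adapt

    def cook_time(self, bread):
        return self.times.get(bread, self.default)

    def feedback(self, bread, preferred_secs):
        # Exponential moving average toward the user's preference.
        current = self.cook_time(bread)
        self.times[bread] = current + self.rate * (preferred_secs - current)

toaster = LearningToaster()
toaster.feedback("sourdough", 180)
toaster.feedback("sourdough", 180)
print(toaster.cook_time("sourdough"))  # 165.0, converging toward 180
```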
> See now, I would prefer AI in my toaster.
I agree with your wife: there’s always an aspirational salad in the fridge. For most foods, I’m pretty good at not buying stuff we won’t eat, but we always should eat more veggies. I don’t know how to persuade us to eat more veggies, but step 1 is availability. Like that Reddit meme
- Availability
- ???
- Profit by improved health
It’s been years… maybe we don’t need the Costco size, for the love of Pete.
So true.
“Built to do my art and writing so I can do my laundry and dishes” – Embodied agents is where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.
Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.
Too bad it actively makes all of your work lower quality via the “helping”.
Just like every other coworker, it’s important to know what tasks they do well and where they typically need help
Lmao, your stance is really “every coworker makes all product lower quality by nature of existence”? That’s some hardcore cope you’re smoking.
Every coworker has a specific type of task they do well and known limits you should pay attention to.
Yes, and therefore any two employees must never be allowed to speak to each other. You know, because it makes all of their work worse quality. /s
That’s quite the extreme interpretation.
I’m a lead software dev, and when deadlines are close, I absolutely divvy up tasks based on ability. We’re a webapp shop with 2D and 3D components, and I have the following on my team:
- 2 BE devs with solid math experience
- 1 senior BE without formal education, but lots of knowledge on frameworks
- 1 junior fullstack that we hired as primarily backend (about 75/25 split)
- 2 senior FE devs, one with a QA background
- 2 mid level FEs who crank out code (but miss some edge cases)
- 1 junior FE
That’s across two teams, and one of the senior FEs is starting to take over the other team.
If we’re at the start of development, I’ll pair tasks between juniors and seniors so the juniors get more experience. When deadlines are close, I’ll pair tasks with the most competent dev in that area and have the juniors provide support (write tests, fix tech debt, etc).
The same goes for AI. It’s useful at the start of a project to understand the code and gen some boilerplate, but I’m going to leave it to the side when tricky bugs need to get fixed or we can’t tolerate as many new bugs. AI is like a really motivated junior, it’s quick to give answers but slow to check their accuracy.
Though the image generators are actually good. The visual arts will never be the same after this.
Compare it to the microwave. Is it good at something? Yes. But if you shove your fucking turkey in it at Thanksgiving and expect good results, you’re ignorant of how it works. Most people are expecting language models to do shit they aren’t meant to. Most of it isn’t new technology either, but old tech that people slapped a label on. I wasn’t playing Soulcalibur on the Dreamcast against AI opponents… Yet now they are called AI opponents with no requirements to be different. GoldenEye on N64 was man vs AI. Madden 1995… AI. “Where did this AI boom come from!”
Marketing and mislabeling. Online classes, call it AI. Photo editors, call it AI.
> I wasn’t playing Soulcalibur on the Dreamcast against AI opponents…
Maybe terminology differs by region, but I absolutely played against AI as a kid. When I set up a game of Command and Conquer or something, I’d pick the number of AI opponents. Sometimes we’d call them bots (more common in FPS) or “the computer” or “CPU” (esp in Civ and other TBS), but I distinctly remember calling RTS SP opponents “AI” and I think many games used that terminology during the 90s.
What frustrates me is the opposite of what you’re saying, people have changed the meaning of “AI” from a human programmed opponent to a statistical model. When I played against “AI” 20-30 years ago, I was playing against something a human crafted and tuned. These days, I don’t play against “AI” because “AI” generates text, images, and video from a statistical model and can’t really play games. AI is something that runs in the cloud, with maybe a small portion on phones and Windows computers to do simple tasks where the network would add too much latency.
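The distinction is easy to make concrete: the “AI” of that era was a hand-written decision rule that a programmer crafted and tuned, not a learned statistical model. A hypothetical sketch (the state keys and thresholds are made up for illustration):

```python
# Sketch of the "old" sense of game AI: every behavior below was an
# explicit choice by a human programmer, tunable by hand.

def rts_opponent_action(state):
    """Pick an action for a scripted RTS opponent.

    `state` is a dict with illustrative, made-up keys.
    """
    if state["under_attack"]:
        return "defend"
    if state["army_size"] >= state["attack_threshold"]:
        return "attack"
    if state["minerals"] >= 100:
        return "build_unit"
    return "gather"

print(rts_opponent_action({"under_attack": False, "army_size": 4,
                           "attack_threshold": 10, "minerals": 150}))  # build_unit
```

Tuning such an opponent means editing the rules and thresholds directly, which is exactly what “crafted and tuned” meant then.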
But the line must go up!
The article does mention that when the AI bubble is going down, the big players will use the defunct AI infrastructure and add it to their cloud business to get more of the market that way and, in the end, make the line go up.
That’s not what the article says.
They’re arguing that AI hype is being used as a way of driving customers towards cloud infrastructure over on-prem. Once a company makes that choice, it’s very hard to get them to go back.
They’re not saying that AI infrastructure specifically can be repurposed, just that in general these companies will get some extra cloud business out of the situation.
AI infrastructure is highly specialized, and much like ASICs for the blockchain nonsense, will be somewhere between “very hard” and “impossible” to repurpose.
Assuming a large decline in demand for AI compute, what would be the use cases for renting out older AI compute hardware on the cloud? Where would the demand come from? Prices would also go down with a decrease in demand.
Relaunching Stadia?
Haha. I believe the AMD Instinct / Nvidia Datacentre GPUs aren’t that great for gaming.
Oh wow, who would have guessed that business consultancy companies are generally built on bullshitting about things they don’t really have a grasp of.
I’m buying semis. I don’t see AI, construed broadly, as ever shrinking from its current position.
I’m loading up on vacuum tubes.
They make the LLM responses “warmer”.
I’m stocked up on obsolete media formats.
Based on what, exactly?
You do you, but I think there’s a good chance we see a pullback, followed by a pivot, followed by a more sustained rise. Basically, once investors realize AI can’t deliver on the promises of the various marketing depts, they’ll pull investment, and then some new tech or application will demonstrate sustained demand.
I think we’re at that first crest, so I expect a pullback in the next few years. In short, I expect AI to experience something like what the Internet experienced at the turn of the millennium.