Modern AI data centers consume enormous amounts of power, and they look set to get even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt data centers. Microsoft appears to be planning the same: it just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg.
Microsoft jumped fully on the AI hype bandwagon with its partnership with OpenAI and its strategy of forcing GenAI down our throats. Instead of realizing that GenAI is not much more than a novel parlor trick that can’t really solve problems, they are now fully committing.
Microsoft invested $1 billion in OpenAI, and reactivating Three Mile Island is estimated at $1.6 billion. And any return on these investments is not guaranteed. Generally, GenAI is failing to live up to its promises, and there is hardly any GenAI use case that actually makes money.
This actually has the potential to do serious damage to Microsoft, so I wouldn’t say all their decisions are financially rational and sound.
On the other hand, if they ever admit the whole GenAI thing doesn’t work, they could just sell the electricity produced by the plant.
. . . The entire multi-billion-dollar hype train goes off a cliff. All the executives who backed it look like clowns, the layoffs come back to bite them - hard - and Microsoft won’t recover for a decade.
I mean . . . a boy can dream
My org’s Microsoft reps gave a demo of their upcoming Copilot 365 stuff. It can summarize an email chain, use the transcript of a Teams meeting to write a report, generate a PowerPoint of the key parts of that report, and write Python code that generates charts and whatnot in Excel. Assuming it works as advertised, this is going to be really big in offices. All of that would save a ton of time.
Keep in mind that it was a demo meant to sell Copilot.
The issue I’ve got with GenAI is that it has no expert knowledge in your field and knows nothing of your organization, your processes, your products, or your problems. It might miss something important, and it’s your responsibility to review the output. It also makes stuff up instead of admitting it doesn’t know, gives you different answers for the same prompt, and forgets everything once you exhaust the context window.
So it might work on emails full of fluff, but if you’ve got requirements from your client or some regulation you need to implement, you’ll have to review the output anyway. And then what’s the point?
And whether it works as well as they described remains to be seen. However, they did prove that there’s a legitimate use case for generative AI in most offices. It’s not just a toy.