Intel skepticism aside, I hope they can deliver on this. M-series Macs seem streets ahead in terms of battery life right now and it doesn’t feel great buying any other portable.
Honestly, a lot of that is budget.
Apple makes low-clocked, very wide SoCs, and is always the first customer of the most cutting-edge silicon node. This is very expensive, and Apple can eat the cost with their outrageous prices.
Intel (and AMD) go more for “balance,” with smaller, cheaper dies and higher peak clocks. Their OEMs also “cheap out” by bundling a bunch of bloatware that drains the battery, to pad their margins. You can find PCs with big batteries and better stock configs, but those are more expensive.
AMD is only just now getting into the “premium” game with the upcoming Strix Halo chip (roughly M2 Pro-class, spec-wise). Intel isn’t there yet, but rumor has it they will follow as well.
bloatware
Even if you remove all that crap, battery life is nowhere near the same vs the M-series chips. So while it may be a problem, it’s still not anywhere close to the reason battery life sucks.
It can be, if you run Linux and throttle the chips. Even my older G14 lasts a long time: the AMD SoC is great, it can run fanless when throttled down, and it simply has a bigger battery than razor-thin Macs.
But again, it’s just not configured this way in most laptops, which sacrifice battery for everything else because, well, OEMs are idiots.
I don’t, I just run stock. I run an E495 and get something like 3-5 hours of battery life depending on what I’m doing, and after a few years of ownership I still get around 3 hours.
On my G14, I just use the ROG utility to disable turbo and make some kernel tweaks. I’ve used ryzenadj before, but it’s been a while. And yes, I measured battery drain in the terminal (though again, it’s been a while).
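For reference, the kind of tweaks I mean look roughly like this on Linux (a sketch: the sysfs boost path varies by cpufreq driver, the ryzenadj limits are example values in mW you’d tune per chip, and some laptops expose `current_now`/`voltage_now` instead of `power_now`):

```shell
# Disable turbo/boost (path varies by cpufreq driver)
echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost

# Cap package power with ryzenadj (limits in mW; example values)
sudo ryzenadj --stapm-limit=10000 --fast-limit=12000 --slow-limit=10000

# Measure battery drain from the terminal (power_now is in microwatts)
awk '{printf "%.1f W\n", $1 / 1e6}' /sys/class/power_supply/BAT0/power_now
```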
Also, throttling often produces the opposite of extended battery life: the CPU spends more time in higher-power states to do the same amount of work, whereas at a faster clock the work completes sooner, so the CPU drops back to a lower-energy state quicker and stays there more of the time.
“Race to sleep” is true to some extent, but after a certain point the extra voltage one needs for higher clocks dramatically outweighs the benefit of the CPU sleeping longer. Modern CPUs turbo to ridiculously inefficient frequencies by default before they thermally throttle themselves.
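A toy model makes that tradeoff concrete. Suppose dynamic power scales as C·V²·f, voltage has to rise roughly linearly with frequency, and static (leakage plus platform) power burns for as long as the task runs. Every constant below is an illustrative made-up number, not a measurement of any real chip:

```python
def task_energy(f_ghz, cycles=1e9, c_eff=1e-9, v0=0.6, k=0.1, p_static=0.5):
    """Joules to finish `cycles` of work at f_ghz, then sleep.

    Dynamic energy: c_eff * V^2 per cycle, with V = v0 + k * f_ghz.
    Static energy: p_static watts for the whole runtime.
    """
    v = v0 + k * f_ghz
    runtime_s = cycles / (f_ghz * 1e9)
    return c_eff * v * v * cycles + p_static * runtime_s

# Racing to sleep helps up to a point: 2 GHz beats 1 GHz because the
# task spends less time leaking static power (~0.89 J vs ~0.99 J here),
# but the voltage needed for 5 GHz swamps that saving (~1.31 J).
```

Under these made-up constants the minimum sits around 2 GHz; real chips differ, but the shape of the curve is the point.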
Isn’t the screen eating most of the power in laptops? I just have an old T490 that I don’t use very much, so I might not be that well informed.
I thought so too, but if Apple is getting more than 2x the battery life vs competitors while having a more dense screen, then I suppose it’s not as significant as I had thought.
A denser screen shouldn’t use more power, though? Lighter, brighter, or larger, yes; denser, IMO, not so much.
There were some benchmarks showing Ryzen getting very close, and in some cases winning, with the Zen 4-based Z1 Extreme already. Those chips just aren’t in laptops.
I couldn’t find any clarification in the article, but I’m guessing these are still x86_64, and from the description it seems like they’ve stacked a lot of different components into a single CPU package. Normally both of those things would make it a power hog, so I’m not sure how it’s going to beat ARM on battery, which competes by having a smaller, simpler ISA that doesn’t need as many resources or as much complexity to process.
Extra components mean more task-specific hardware. That specialized hardware can often process the same data faster and with less power. The drawbacks are cost, complexity, and the fact that these components are only good for that one task.
CPUs are great because they are multipurpose and can do anything, given infinite time and storage. This flexibility means it isn’t as optimised.
People are not writing custom code to solve their own problems; they are running very common applications, using very common libraries, for similar functions. So for the general user, dedicated hardware for encryption, video codecs, networking, etc. will reduce power consumption and increase processing speed in practice.
Out of curiosity, this wouldn’t be automatically supported, right? Like, you’d need the OS or dependent libraries to know about these special blocks and take advantage of them, for things like encryption. Is it common to build tailored hardware for this kind of functionality, or is this Intel trying to set up a very tailored mass-market product for laptops?
You need software support to use them, but supporting this kind of hardware is already common. It does take time to develop, test, and deploy that software, though.
The software lives in kernels, drivers, and libraries, and Intel already supports things like this.
You may need to wait, or run a bleeding-edge version of your OS, before these extra features are supported.
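As a concrete illustration of how that support is layered (this describes how OpenSSL behaves in general, not anything specific to these new Intel chips): Python’s hashlib calls into OpenSSL, and OpenSSL checks CPU feature flags at runtime and routes SHA-256 to dedicated instructions (SHA-NI on x86, the ARMv8 crypto extensions on ARM) when they exist, so the application code never changes:

```python
import hashlib

# Identical application code whether SHA-256 runs in plain software
# or on dedicated SHA hardware; the library picks the path at runtime.
digest = hashlib.sha256(b"abc").hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```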
It’s somewhat common. On the media encoding/decoding front, Intel has been doing this with stuff like QuickSync, AMD with AMF and Nvidia with NVENC.
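ffmpeg makes the pattern visible: same CLI, different encoder name per vendor. These are real ffmpeg encoder names, but the filenames are placeholders and each line only works on matching hardware with a matching ffmpeg build:

```shell
ffmpeg -i input.mp4 -c:v h264_qsv   out_qsv.mp4    # Intel Quick Sync
ffmpeg -i input.mp4 -c:v h264_nvenc out_nvenc.mp4  # NVIDIA NVENC
ffmpeg -i input.mp4 -c:v h264_amf   out_amf.mp4    # AMD AMF
```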
So they’re promising ARM-beating battery life while just beginning to incorporate the kind of custom silicon that Apple has been integrating for years now?
I’ll believe it when I see it.
Well, specifically, they’re promising battery life that beats Qualcomm’s implementation of an ARM laptop SoC.
Qualcomm is significantly behind Apple. I’m not convinced that the ISA matters all that much for battery life. AMD’s x86_64 performance per watt blew Intel’s out of the water in recent generations, and Qualcomm/Samsung’s ARM chips can’t compete with Apple’s ARM chips in the mobile, tablet, or laptop space.
Yeah. I think they will struggle to match Apple, and by the time they do, Apple will have progressed further.
Another big issue is that these features need deep, well-implemented software support. That’s easy for Apple: they control all the hardware and software, write all the drivers, and can modify their kernel to their heart’s content. A better processor alone is still unlikely to match Apple’s overall performance. Intel has to support more operating systems and interface with more hardware over which they have little control. It won’t be until years after release that these processors realistically reach their potential, by which time Intel and Apple will both have released newer chips with more features that Intel users won’t be able to use for a while.
This strategy has Intel on the back foot, and they will remain there indefinitely. They really need a bolder strategy if they want to reclaim the best desktop processors. It’s pretty embarrassing that an Apple laptop with an integrated GPU completely wipes the floor with Intel desktop CPUs paired with dedicated GPUs in certain workflows; it can often be cheaper to buy the Apple device if you’re in a creative profession.
Qualcomm will have similar issues, but they won’t be limited by the inferior x86 architecture. x86 only serves backwards compatibility and Intel/AMD. ARM is used on phones because, under the same fab and power restrictions, it makes better processors. This has been known for a long time, but consumers wouldn’t accept it until Apple proved it.
I wouldn’t be surprised if these Intel chips flop initially, and Intel cuts its losses and stops developing new ones. Then we’ll see lots of articles saying Intel should never have stopped, that the chips were really competitive relative to their contemporaries, without realising the software took that much time to use them effectively.
Right now Intel and AMD have less to fear from Apple than they do from Qualcomm. The people who can and want to do their work on a Mac are already doing so; it’s businesses locked into the Windows ecosystem that drive the bulk of their laptop sales right now, and ARM laptops running Windows are the main threat in the short term.
If going wider and integrating more coprocessors gets them closer to matching Apple Silicon in performance per watt, that’s great, but Apple snatching up their traditional PC market sector is a fairly distant threat in comparison.
People overblow the importance of ISA.
Honestly a lot of the differences are business decisions. There is a balance between price, raw performance and power efficiency. Apple tend to focus exclusively on the latter two at the expense of price, while Intel (and AMD) have a bad habit of chasing cheap raw performance.
Apple does two things that are very expensive:
- They use a huge physical area of silicon for their high performance chips. The “Pro” line of M chips have a die size of around 280 square mm, the “Max” line is about 500 square mm, and the “Ultra” line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package.
- They pay top dollar for effectively exclusive access to TSMC’s new nodes. They lock up the first year or so of TSMC’s manufacturing capacity at any given node; only after that is there enough capacity to accommodate designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can go out and buy an Apple device made on TSMC’s latest node before AMD or Qualcomm have even announced the products that will use it.
Those are business decisions that others simply can’t afford to follow.
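Back-of-the-envelope yield math shows why the die sizes above are so punishing. This sketch uses the standard dies-per-wafer approximation and a simple Poisson yield model with made-up but plausible inputs (300 mm wafer, 0.1 defects/cm²):

```python
import math

def good_dies_per_wafer(die_mm2, wafer_mm=300, d0_per_cm2=0.1):
    """Approximate count of defect-free dies from one wafer."""
    r = wafer_mm / 2
    # Standard approximation: gross dies minus edge loss.
    gross = math.pi * r ** 2 / die_mm2 - math.pi * wafer_mm / math.sqrt(2 * die_mm2)
    yield_frac = math.exp(-d0_per_cm2 * die_mm2 / 100)  # Poisson defect model
    return int(gross * yield_frac)

pro, mx = good_dies_per_wafer(280), good_dies_per_wafer(500)
# A "Max"-sized die is ~1.8x the area of a "Pro" die but yields ~2.4x
# fewer good dies per wafer, so cost per good die grows faster than area.
```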
The RISC ISA isn’t simple anymore; it has more instructions than a ’90s CISC CPU.
ARM has 64-bit, 32-bit, and 16-bit (Thumb) instructions.
The legacy 8- to 32-bit Intel ISA doesn’t eat power if it isn’t used. It wastes a little silicon, but that area is extremely tiny on modern CPUs.
So they’re not just focused on marketing?