I built my PC back in 2019. I'm still using the same CPU: an i7-8700K. I have upgraded my GPU to an RTX 3070. I only play GPU-heavy games at 4K, usually with DLSS. I'm able to run most games at an acceptable framerate, 50-60 fps, though I'd say my frame times could be better. I don't run any CPU-heavy applications, but I have noticed that these latest AAA games take forever to compile shaders. My CPU is pretty outdated at this point, and I'm wondering if upgrading would even be worth it, considering my use case and that I'm mostly happy with my gaming performance. Oh, and I run Linux, if that makes a difference.
I am mostly happy with my gaming performance.
That’s the only benchmark you need, friend.
Socket 1151 is only compatible through 9th gen, and I doubt you'd see a real performance leap from a same-socket upgrade. If you're getting a new mobo instead, you might as well upgrade the RAM too, and then your GPU becomes the bottleneck, plus you'd better make sure you have adequate power for it all. Congratulations, you've bought a whole new computer.
Intel has gone out of their way to make it harder to upgrade components. You should save up to buy a whole new system when you are no longer happy with your current performance.
That’s what I was thinking too. Right now I can’t upgrade my GPU any further without also upgrading my PSU, and I hadn’t thought about the mobo socket constraint, so that limits me even more. I’ll just keep this one until it can’t keep up with the newest titles and then go AMD for my next PC.
https://pc-builds.com/bottleneck-calculator/result/0NY175/1/general-tasks/3840x2160/
Effectively no gain: it’s a pretty balanced system, so if you upgrade one component, the other becomes the bottleneck.
I didn’t even know this site existed. This is really helpful, thanks!
I went from a 7th-gen i7 to a 12th-gen a year and a half back. It was a massive difference overall. It's an AI machine running Linux, no gaming. I'm not sure when downloads went multithreaded, but that was huge by comparison: I can pull 10 GB in a few minutes at most when it took around an hour on the old machine, on the same internet connection and the same rather old OpenWRT router.
With 64 GB of DDR5, 20 logical threads, and a 16 GB GPU, I can run much larger quantized LLMs and diffusion models than most people get to play with. If I were buying for AI today, I would definitely get a 24 GB+ GPU, as many logical cores as I could reasonably afford, and as much of the fastest memory I could fit. No joke, and I have no skin in the shill game. I just wish I could run Flux cooler and faster for diffusion, and I would love to try out Command R and other even larger models, but I'm limited by total processing power. Llama.cpp will split LLM loads between the CPU and GPU, so yeah, I'm turning all of this up to 11 regularly. Skip Intel 13th/14th gen and the latest, they're junk; 12th gen is the last decent, reliable Intel hardware.
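If anyone wants to try that CPU/GPU split themselves, here's a minimal sketch using the llama-cpp-python bindings, where n_gpu_layers controls how many layers get offloaded to VRAM while the rest run on the CPU. The model path, layer count, and context size below are placeholders, not anything I'm specifically recommending.

```python
# Minimal sketch of llama.cpp's CPU/GPU layer split via llama-cpp-python.
# Assumes a GPU-enabled build (CUDA/ROCm/Vulkan) and a local GGUF model;
# the path and numbers are placeholders for whatever you're running.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.Q4_K_M.gguf",  # hypothetical quantized model file
    n_gpu_layers=30,  # layers offloaded to the GPU; remaining layers stay on the CPU
    n_ctx=4096,       # context window size
)

out = llm.create_completion(
    "Explain CPU/GPU layer offloading in one sentence.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```

Bumping n_gpu_layers up until you run out of VRAM is the usual way to find the sweet spot for a given card and quant.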


