At the same time, the trouble with local LLMs is that they're very resource-heavy. Your average household computer isn't going to be able to run one at a usable speed.
It's a lot slower than ChatGPT, but on my integrated-graphics i7 laptop it ran decently, definitely enough to be usable. Also, there are different models to play around with: some are faster but worse, and some are smarter but slower.
Phi-3 can run on pretty low specs (it needs about 4 GB of RAM) and produces relatively good output.
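If you want to try it yourself, here's a minimal sketch using the ollama Python package (just one option among local runners; it assumes you've installed Ollama from https://ollama.com and pulled the model with `ollama pull phi3` first):

```python
# Minimal local chat with Phi-3 via Ollama.
# Assumes the Ollama server is running and `ollama pull phi3` has been done.
import ollama

response = ollama.chat(
    model="phi3",  # swap in a bigger model tag to trade speed for quality
    messages=[
        {"role": "user", "content": "Explain what a local LLM is in one sentence."}
    ],
)

# Print just the assistant's reply text.
print(response["message"]["content"])
```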