Obviously there’s not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self-hosted models? Especially stuff like the Linux Foundation’s Open Model Initiative?

I feel like a lot of people just don’t know there are Apache- or CC-BY-NC-licensed “AI” models they can run on sane desktops, right now, that are incredible. I’m thinking of the most recent Command-R, specifically. I can run it on one GPU, it blows expensive API models away, and it’s mine to use.

And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open source and legally licensed datasets, cheap training… and OpenAI and such want to shut all of this down because it breaks their monopoly, where they can just outspend everyone by scaling, stealing data and destroying the planet. It’s an actual threat to them.

Again, I feel like corporate social media vs. the fediverse is a good analogy: one is kind of destroying the planet, and the other, while still niche, problematic and a WIP, kills a lot of the downsides.

  • fruitycoder@sh.itjust.works
    3 months ago

    None taken! I’ll check out AI Horde!

Are there any objectively measured benchmarks, or at least review-based metrics, for a model on a given problem set? I know the white papers tend to include them, and sometimes the git repos do, but I don’t see that info when searching through ollama, for example.

I saw your other post about ollama alternatives, and the concurrency mention in one of the projects’ READMEs sounds promising.

    • brucethemoose@lemmy.worldOP
      3 months ago

Honestly, I would get away from ollama. I don’t like it for a number of reasons, including:

- Suboptimal quants

- Suboptimal settings

- Limited model selection (as opposed to just browsing Hugging Face)

- Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing the cache, double especially if you are not on a Mac

- Frankly, a lot of attention squatting/riding off llama.cpp’s development without contributing a ton back

- Rumblings of going closed source

I could go on and on, including some behavior I just didn’t like from the devs, but I think I’ll stop, as it’s really not that bad.
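For anyone wondering what the “just browse Hugging Face” workflow looks like in practice, here’s a rough sketch: download a GGUF quant directly and point kobold.cpp at it. The repo/file names below are illustrative, not a specific recommendation, and the kobold.cpp flags are from versions I’ve seen; check `--help` on your build.

```shell
# Grab the Hugging Face CLI (comes with the huggingface_hub package)
pip install -U huggingface_hub

# Download a specific quant file, not the whole repo
# (repo and filename are examples; pick whatever quant fits your VRAM)
huggingface-cli download bartowski/c4ai-command-r-08-2024-GGUF \
    c4ai-command-r-08-2024-Q4_K_M.gguf --local-dir ./models

# Launch kobold.cpp against it; --quantkv quantizes the KV cache
# (on recent builds it wants flash attention enabled alongside it)
python koboldcpp.py --model ./models/c4ai-command-r-08-2024-Q4_K_M.gguf \
    --contextsize 8192 --usecublas --flashattention --quantkv 1
```

The point is you get any quant anyone has uploaded, with whatever sampler settings you want, instead of whatever the ollama registry happens to package.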