Obviously there’s not a lot of love for OpenAI and other corporate-API generative AI here, but how does the community feel about self-hosted models? Especially stuff like the Linux Foundation’s Open Model Initiative?

I feel like a lot of people just don’t know there are Apache/CC-BY-NC-licensed “AI” models they can run on sane desktop hardware, right now, that are incredible. I’m thinking of the most recent Command-R specifically. I can run it on one GPU, it blows expensive API models away, and it’s mine to use.
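
To give a sense of how little setup that takes, here’s a minimal sketch using llama-cpp-python with a community-quantized GGUF of Command-R. The file name is a placeholder for whatever quant you actually download, not a specific recommendation:

```python
# Minimal sketch: chatting with a local, quantized Command-R on one GPU.
# The GGUF path is a placeholder; community quants of
# CohereForAI/c4ai-command-r-v01 are available on Hugging Face.
from llama_cpp import Llama

llm = Llama(
    model_path="./c4ai-command-r-v01-Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=8192,       # context window; raise it if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who owns the model I'm talking to?"}]
)
print(out["choices"][0]["message"]["content"])
```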

And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open-source and legally licensed datasets, cheap training… and OpenAI and such want to shut all of this down because it breaks their monopoly, where they can just outspend everyone on scaling, stealing data, and destroying the planet. And it’s actually a threat to them.
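
To make the matrix-multiplication-free idea concrete: in BitNet-style models the weights are constrained to {-1, 0, +1}, so the expensive multiply-accumulates collapse into plain additions and subtractions. A toy numpy illustration of that equivalence (not any particular paper’s implementation):

```python
# Toy illustration of the "matmul-free" idea: with ternary weights,
# a matrix product needs no multiplications at all.
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))  # ternary weight matrix in {-1, 0, +1}
x = rng.standard_normal(8)            # input activations

# Standard path: a full matrix multiplication.
y_matmul = W @ x

# Multiplication-free path: add activations where w=+1, subtract where w=-1.
y_addsub = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_matmul, y_addsub)
```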

Again, I feel like corporate social media vs. the fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic, and a WIP, kills a lot of the downsides.

  • Ephera@lemmy.ml · 3 months ago

    I do think it’s good that we’re able to self-host these models. Better than not being able to.

    But the biggest draw of open-source to me is that I and others in the community can fix things.
    It’s possible that I just don’t understand enough about how these models are created, but right now, it doesn’t feel like we’re able to fix things.

    If the next LLaMa model loses all knowledge of the Uyghur genocide, because Facebook wants to distribute it in China, then I don’t know how we’d patch that back in. Even collecting the training data is tricky.

    It feels a lot more like Creative Commons than open-source, i.e. you can use what they’ve created, and you can remix it, but adding to it is not easily possible.

    • brucethemoose@lemmy.world (OP) · 3 months ago

      > I don’t know how we’d patch that back in. Even collecting the training data is tricky.

      You can just take encyclopedia articles and news articles, then train them back in. It’s easy! And it’s not expensive: maybe $100 even if it’s a really big model and you’re uncensoring a ton of topics.
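
      As a rough sketch of what “train it back in” can look like: a cheap LoRA fine-tune over a small corpus using peft + transformers. The model name and corpus file below are placeholders, not a specific recipe:

      ```python
      # Hedged sketch: re-teaching a censored topic via a small LoRA fine-tune.
      # "some-org/some-open-llm" and "uncensor_corpus.txt" are placeholders.
      from datasets import load_dataset
      from peft import LoraConfig, get_peft_model
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling, Trainer,
                                TrainingArguments)

      model_name = "some-org/some-open-llm"  # placeholder checkpoint
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      # Only the low-rank adapters train, which is why this stays cheap.
      model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                               task_type="CAUSAL_LM"))

      # Placeholder corpus: encyclopedia and news articles on the topic.
      data = load_dataset("text", data_files={"train": "uncensor_corpus.txt"})["train"]
      data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

      Trainer(
          model=model,
          args=TrainingArguments(output_dir="out",
                                 per_device_train_batch_size=1,
                                 num_train_epochs=1),
          train_dataset=data,
          data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
      ).train()
      ```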

      People uncensor models all the time; it’s an active avenue of research in the LLM community. And in fact, there are many quite good Chinese models (like Qwen2) that have been “uncensored” by the community.