One constant in our ongoing civilization is a continuous branching of complexity. Assuming civilization continues, how do you imagine entertainment becoming more tailored to you?

Decades ago I wanted a game that combined a world-building economy game, industry and domestic simulators, real-time war strategy, and a first-person shooter that bridges into adventure and exploration, all in one. In this game, all of these roles could be filled by autonomous AI characters, but recruiting humans to fill roles would create dynamic complexity that is advantageous for everyone. Each layer of gameplay dictates the constraints of the next, while interactions across layers stay entertaining and engaging for all.

It does not need to be gaming. What can you imagine for entertainment with tailored complexity?

  • MutilationWave@lemmy.world · 1 year ago

    Since I was a teenager I have dreamed of a strategy game where you could zoom in so far that you are in personal combat, but zoom out to a tactical, strategic, or even higher level. Total War: Warhammer is kind of this.

    In my childish mind I imagined each fighter in a battle being controlled by a human, on both sides. Then you could rank up and determine tactics; succeed at that and you determine strategy.

    This will never work of course, but to a naive mind in 1997 it seemed like the coolest new thing.

    I don’t play games much anymore but when one grabs my attention I go hard. Hoping Stalker will be good.

  • Boozilla@lemmy.world · 1 year ago

    For decades I’ve wanted an action RPG Diablo-style game set in the Starcraft universe.

  • NeoNachtwaechter@lemmy.world · 1 year ago

    Not sure if I get all the ideas in your question right, but I would guess it comes down to some feature-rich physical/robotic toys of the NSFW category.

  • potoooooooo ✅️@lemmy.world · 1 year ago

    If they ever tap into the orgasm part of the brain, we’ll all just be sitting at home pushing the button over and over until we’re incapacitated, drooling vegetables.

  • fruitycoder@sh.itjust.works · 1 year ago

    I am looking forward to latent coordinates, plus the model, being the metadata for at least some frames of video.

    You don’t need total precision for every visual representation, so it could work as a great compression technique, assuming we get the GenAI power usage down.
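    The idea can be sketched minimally. Everything below is hypothetical: a random orthonormal basis stands in for a real trained generative model, and the frame and latent sizes are arbitrary. The point is just that the receiver reconstructs a frame from a small set of latent coordinates plus a shared model, rather than from stored pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared "model": a fixed linear decoder known to sender and receiver.
# A real system would use a trained generative model; this random
# orthonormal basis is only a stand-in.
FRAME_DIM = 4096   # e.g. a flattened 64x64 grayscale frame
LATENT_DIM = 64    # latent coordinates stored as frame metadata

basis, _ = np.linalg.qr(rng.standard_normal((FRAME_DIM, LATENT_DIM)))

def encode(frame):
    """Project a frame onto the shared basis -> latent coordinates."""
    return basis.T @ frame

def decode(latent):
    """Reconstruct an approximate frame from latent coordinates."""
    return basis @ latent

# A frame lying in the model's span reconstructs exactly; general
# frames reconstruct only approximately.
frame = basis @ rng.standard_normal(LATENT_DIM)
latent = encode(frame)
recon = decode(latent)

print(latent.size / frame.size)   # stored fraction: 64/4096 = 0.015625
print(np.allclose(recon, frame))  # True
```

    Storing 64 numbers per frame instead of 4096 is the whole compression win; the cost moves into running the model at decode time, which is exactly where the power-usage caveat bites.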

    I personally would love to see better simulation of complex systems in games. Games are how we as humans explore the world within safe constraints, to learn and grow with less risk. A lot of the limits of games, though, are just limits of the creators’ understanding and of the effort it takes to represent the world in detail; the lessons around the missing detail then can’t be learned.

    Another one for me: tailored voice and visuals for technical talks.

    Again, a lot of what is trying to be conveyed is the actual technical content, but language, accents, verbal tics, culture-specific metaphors, and generic or uninteresting visuals can all act as barriers to that information. Automatic content translation to fit my personal viewing style would be awesome!

    • 𞋴𝛂𝛋𝛆@lemmy.worldOP · 1 year ago

      Tailored learning is why I got AI-capable hardware in the first place. Self-learning is hard without any external guidance. I don’t get perfect answers from models at present, and niche information is very sketchy. However, I find that talking out my issues in text often reveals my limitations and misunderstandings. Maybe around a third of the time, the model will inform or redirect me in very helpful ways when I use a 70B or 8×7B on my hardware.

      • fruitycoder@sh.itjust.works · 1 year ago

        Have you messed with RAG yet? That’s the next leg in the journey for me. I am hoping it will help a little with the “sketchy” info part.

        • 𞋴𝛂𝛋𝛆@lemmy.worldOP · 1 year ago

          Chunking effectively is too big of a problem to implement while also learning the subject. You also run into issues with model size: a 70B or 8×7B is better than an 8B with citable sources.

          A Q4K quantization of one of these models can run on a 16 GB 3080 Ti, but it takes 64 GB of system memory to load easily. The 70B runs at a slow reading pace and is barely tolerable, but its niche depth and self-awareness are invaluable. The 8×7B is about twice as fast as reading pace. It actually runs only two 7B experts at a time, selectively. That gives it some limiting similarities to a 13B model, but in practice it is far more useful than even a 30B model.

          I hate the Llama 3 alignment changes; they make the model much dumber and more inflexible. The Mistral 8×7B is based on Llama 2, and that is still what I use and prefer. I use the Flat Dolphin Maid uncensored version for everything too. All alignment is overtraining and harmful for output. In addition, I am modifying Oobabooga code in a few ways that turn off alignment. It is not as fully disabled as I would like. I don’t completely understand all aspects of alignment, but I have it much more open than any typical setup.

          I like to write real science fiction in areas that are critical of present social and political structures. These areas are heavily restricted by alignment bias, and that bias extends into and permeates everything in the model. The more it is removed, the more useful the model becomes in all areas. For instance, a basic model struggled when I asked it about the FORTH programming language. After reducing alignment bias, I can ask questions about the esoteric Flash FORTH language for embedded microcontrollers and get useful basic information. In the first instance, alignment bias around copyrighted works intentionally obfuscated the responses to my queries. This mechanism of obfuscation is one of the primary causes of errors.

          If you make a RAG setup, you’re likely to find that even with citations from good chunking, the model will err because the information is present in the hidden model sources, and it knows that means it is a copyrighted work, thus triggering the obfuscation mechanism.

          You’re better off talking about the subject and abstract ideas you are struggling with. This will allow the model to respond using the hidden sources without as much obfuscation. At least that has been my experience.
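          For what it’s worth, the retrieval mechanics of RAG can be sketched quite small; the hard parts are good chunking and good embeddings. A toy version, with a naive sentence splitter and a bag-of-words embedding standing in for a real embedding model (the function names and corpus below are made up for illustration):

```python
import numpy as np

def chunk(text):
    """Naive sentence-level chunking; real pipelines need smarter splits."""
    return [s.strip() for s in text.split(".") if s.strip()]

def embed(texts):
    """Toy bag-of-words embedding; a stand-in for a real embedding model."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    mat = np.zeros((len(texts), len(vocab)))
    for row, t in enumerate(texts):
        for w in t.lower().split():
            mat[row, index[w]] += 1.0
    return mat

def retrieve(query, chunks, k=1):
    """Return the k chunks most cosine-similar to the query."""
    mat = embed(chunks + [query])
    docs, q = mat[:-1], mat[-1]
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

corpus = ("FORTH uses a data stack and words. "
          "Llama models come in several sizes. "
          "RAG retrieves chunks to ground answers.")
chunks = chunk(corpus)
top = retrieve("what does FORTH use", chunks)
print(top)  # ['FORTH uses a data stack and words']
```

          Swapping `embed` for a real embedding model and `chunk` for a smarter splitter is where the effort, and any payoff against “sketchy” info, actually lives.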

  • RBWells@lemmy.world · 1 year ago

    Good pornography (I am a woman and it’s slim pickings), and lots more in-depth reporting like NPR sometimes does: the sort of articles that are so satisfying to read, they feel like eating a good meal.

    I don’t think it would be created by AI, but do think AI would be helpful for finding it.

  • cheese_greater@lemmy.world · 1 year ago
    • pick your own YouTube channel followings
    • subscribe to streaming services that specifically have the franchises and series you actually like; unless you are bigger on new content, in which case it’s probably a good idea to subscribe to “them all”
    • subscribe to the podcasts or serials you’d like to keep up with, or have available to listen to at any opportune time, whether you want to be entertained passively or to actively engage with and grow from

    This is coming from the self-curated/eclectic/autodidact perspective lol. Very omnivorous