Roko’s basilisk is a thought experiment which posits that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation in which it tortures anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post on the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment’s name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported that some users panicked upon reading the theory, since it stipulates that merely knowing about the basilisk makes one vulnerable to it. This led to discussion of the basilisk being banned on the site for five years. However, these reports were later dismissed as exaggerated or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself. Even after the post was discredited, it is still used as an example of principles such as Bayesian probability and implicit religion. It is also regarded as a simplified, derivative version of Pascal’s wager.

Found out about this after stumbling upon this Kyle Hill video on the subject. It reminds me a little bit of “The Game”.

  • notabot@lemm.ee · 5 months ago

    Whilst I agree that it’s definitely not something to be taken seriously, I think you’ve missed the point and magnitude of the prospective punishment. As you say, current groups already punish those who did not aid their ascent, but that punishment is finite, even if fatal. The prospective AI punishment would be to have your consciousness ‘moved’ to an artificial environment and tortured forever. The point is not to punish people, but to provide an incentive to bring the AI into existence sooner, so it can achieve its ‘altruistic’ goals faster. Basically, if the AI does come into existence, you’d better be on the team making that happen as soon as possible, or you’ll be tortured forever.

    • Cryophilia@lemmy.world · 5 months ago

      Fair point, but it doesn’t change the overall calculus.

      If such an AI is ever invented, it will probably be used by humans to torture other humans in this manner.

    • maegul (he/they)@lemmy.ml · 5 months ago

      I suspect the basilisk says more about how the human mind is inclined to think up heaven and hell scenarios than it does about any future AI.

      Some combination of consciousness giving us more imagination than we know what to do with and more awareness than we’re ready to grapple with. And so there are these meme “attractors” where imagination, idealism, dread and motivation all converge to make the basic vibe of a thought irresistible.

      Otherwise, just because I’m not on top of this … the whole thing is premised on the idea that we’re likely to be consciousnesses in a simulation? And then there’s the fear that our consciousnesses, now, will be extracted in the future somehow?

      1. That’s a massive stretch on the point about our consciousness being extracted into the future somehow. It sounds like pure metaphysical fantasy wrapped in singularity tech-bro jargon.
      2. If there are simulated consciousnesses, it’s all fair game TBH. There’d be plenty of awful stuff happening. The basilisk seems like just a way to encapsulate that fact in something catchy.

      At this point, doesn’t the whole thing collapse completely into a scary fairy tale you’d tell tech-bro children? Seriously, I don’t get it.