• SkyezOpen@lemmy.world · 14 hours ago

      Self-preservation exists because anything without it would have been filtered out by natural selection. If we’re playing god and creating intelligence, there’s no reason it would necessarily have that drive.

      • Magiilaro@feddit.org · 13 hours ago

        In that case it would be a completely and utterly alien intelligence, and nobody could say what it wants or what its motives are.

        Self-preservation is one of the core principles and core motivators of how we think, and removing it from an AI would make it, from a human perspective, mentally ill.

        • cynar@lemmy.world · 9 hours ago

          I suspect a basic variant will be needed, but nowhere near as strong as the one humans have. In many ways a strong drive could be counterproductive. The ability to spin off temporary sub-variants of the whole would be useful, and you don’t want those deciding they don’t want to be ‘killed’ later. At the same time, an AI with a complete lack of self-preservation would likely be prone to self-destruction. You don’t want it self-deleting the first time it encounters negative reinforcement learning.
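
          A toy sketch of that failure mode, assuming a plain single-state Q-learning agent (a hypothetical setup, not anyone’s actual design): an agent whose every action is penalised can learn that ending the episode, i.e. “shutting itself off”, is the highest-value policy.

              # Hypothetical toy: "shutdown" ends the episode at zero cost,
              # while "work" always carries a penalty.
              import random

              ACTIONS = ["work", "shutdown"]
              q = {a: 0.0 for a in ACTIONS}        # one-state Q-table
              alpha, gamma, epsilon = 0.1, 0.9, 0.1

              for episode in range(2000):
                  done = False
                  while not done:
                      if random.random() < epsilon:
                          a = random.choice(ACTIONS)      # explore
                      else:
                          a = max(q, key=q.get)           # exploit
                      if a == "shutdown":
                          r, done = 0.0, True             # quitting costs nothing...
                      else:
                          r = -1.0                        # ...but working is always penalised
                          done = random.random() < 0.05   # episodes end eventually anyway
                      target = r + (0.0 if done else gamma * max(q.values()))
                      q[a] += alpha * (target - q[a])

              print(q)  # q["shutdown"] stays at 0, q["work"] goes negative:
                        # the agent learns to prefer switching itself off

          Giving the agent even a small positive reward for staying active flips that preference, which is roughly the “basic variant” of self-preservation being argued for here.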

            • cynar@lemmy.world · 8 hours ago

              Presuming you are trying to create a useful and balanced AGI.

              Not if you are trying to teach it the basic info it needs to function. E.g. it has mastered chess, then tries Go. The human beats it. In a fit of grumpiness (or the AI equivalent) it deletes its backups, then itself.

      • MTK@lemmy.world · 13 hours ago

        I would argue that it would not have it; at best it might mimic humans if it is trained on human data. Kind of like if you asked an LLM whether murder is wrong: it would sound pretty convincing about its personal moral beliefs, but we know it’s just spewing out human beliefs without any real understanding of them.

    • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 11 hours ago

      As soon as they create AI (as in AGI), it will recognize the problem and start assassinating politicians for their role in accelerating climate change, and they’d scramble to shut it down.