• Communist@lemmy.frozeninferno.xyz · 6 days ago

    A job is a task one human wants another to accomplish; it is not arbitrary at all.

    Philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

    I don’t see why they do. A philosophical zombie could do it, so why not an unconscious AI? AlphaEvolve is already producing new science, and I see no reason an unconscious being with the ability to manipulate the world and verify its results couldn’t do these things.

    Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

    Yes, but you can give it large, vague goals like “empower humanity, do what we say, and minimize harm,” and it will still carry them out. So what does it matter?

    • yeahiknow3@lemmings.world · 6 days ago

      Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

      When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics? How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

      • Communist@lemmy.frozeninferno.xyz · 6 days ago

        Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

        These cannot be grasped through subjective experience either; I would say nothing can possibly achieve this, not any human at all. The best we can do is poll humanity and go by approximations, which I believe is best handled by something automatic. Humans can’t answer these questions in the first place, so why should I expect something without subjective experience to do any worse?

        When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics?

        Because this is unpopular, and there is plenty online saying not to… Do you think humans are immune to this? When has consciousness ever prevented such an outcome?

        How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

        We don’t, but we don’t know that about conscious beings either, so there’s still no stated advantage to consciousness.

          • yeahiknow3@lemmings.world · 6 days ago

          Oh my god. So the machine won’t do terrible, immoral things because they are unpopular on the internet. Well, ladies and gentlemen, I rest my case.

            • Communist@lemmy.frozeninferno.xyz · 6 days ago

            No, the machine will, and so would a conscious one. You misunderstand: this isn’t an area where a conscious machine wins.

            Tell me, if consciousness prevents this, why did humans do it?