• yeahiknow3@lemmings.world · 7 days ago

    Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

    • postmateDumbass@lemmy.world · 7 days ago

      Reasoning can be approximated well enough with matrix math and filtering algorithms.

      It can fly drones, dodge wrenches.
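
      E.g., a minimal sketch of that “matrix math”: one predict/update cycle of a linear Kalman filter, the standard state estimator in drone autopilots (every value below is invented for illustration):

      ```python
      # One predict/update cycle of a linear Kalman filter: the kind of
      # matrix math a drone autopilot runs to estimate its own state.
      import numpy as np

      # State = [position, velocity]; constant-velocity model, dt = 0.1 s.
      F = np.array([[1.0, 0.1], [0.0, 1.0]])  # state transition
      H = np.array([[1.0, 0.0]])              # sensor reads position only
      Q = np.eye(2) * 1e-3                    # process noise covariance
      R = np.array([[0.05]])                  # measurement noise covariance

      x = np.array([0.0, 1.0])                # current state estimate
      P = np.eye(2)                           # estimate covariance
      z = np.array([0.12])                    # new (noisy) position reading

      # Predict where we should be, then correct with the measurement.
      x = F @ x
      P = F @ P @ F.T + Q
      S = H @ P @ H.T + R                     # innovation covariance
      K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
      x = x + K @ (z - H @ x)
      P = (np.eye(2) - K @ H) @ P

      print("estimated [position, velocity]:", x)
      ```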

      The AGI that escapes won’t be the ideal philosopher-king; it will be the sociopathic teenage rebel.

      • yeahiknow3@lemmings.world · 7 days ago

        Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to build one at all, crazy or not.

          • yeahiknow3@lemmings.world · 6 days ago

            The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI; I personally don’t care whether we do. The question was: can we? The answer is No.

              • yeahiknow3@lemmings.world · 6 days ago

                Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

                For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem.
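
                For reference, a standard informal statement of the theorem (paraphrased from textbook treatments; this is the usual formulation, not a formal derivation):

                ```latex
                % Gödel's first incompleteness theorem, informally: if $T$ is a
                % consistent, effectively (recursively) axiomatized theory that
                % interprets basic arithmetic, then $T$ is incomplete:
                \exists\, G_T \quad \text{such that} \quad
                T \nvdash G_T \quad \text{and} \quad T \nvdash \lnot G_T .
                ```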

                To quote Gödel himself: “We cannot mechanize all of our intuitions.”

                Alan Turing drew a similar conclusion a few years later with the halting problem.
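
                Turing’s argument is a short diagonalization; here is a minimal sketch in Python, where halts() is a hypothetical stand-in for the decider the theorem rules out (it cannot actually be implemented):

                ```python
                # Sketch of Turing's diagonalization. Assume a total decider
                # halts(p, x) that returns True iff p(x) would halt. The
                # theorem says no such function exists; this stub marks the gap.
                def halts(program, arg):
                    raise NotImplementedError("provably impossible in general")

                def paradox(program):
                    # Do the opposite of whatever halts() predicts about
                    # running 'program' on its own source.
                    if halts(program, program):
                        while True:   # predicted to halt -> loop forever
                            pass
                    return            # predicted to loop -> halt at once

                # Consider paradox(paradox): if halts(paradox, paradox) returns
                # True, paradox loops forever; if False, it halts. Either
                # answer is wrong, so no total, correct halts() can exist.
                ```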

                In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

                • Communist@lemmy.frozeninferno.xyz · 6 days ago

                  Jobs are not arbitrary; they’re tasks humans want another human to accomplish, and an AGI could accomplish any of those that a human can.

                  For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem

                  Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?
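
                  One concrete version of this is preference-based reward modeling; a minimal sketch under toy assumptions (a linear model on synthetic data, with every number below invented for illustration):

                  ```python
                  # Minimal Bradley-Terry reward model: learn "what humans
                  # value" from pairwise preferences over featurized answers.
                  import numpy as np

                  rng = np.random.default_rng(0)
                  # Toy data: 256 preference pairs of 4-feature "answers";
                  # humans preferred the first item of each pair.
                  preferred = rng.normal(1.0, 1.0, size=(256, 4))
                  rejected = rng.normal(0.0, 1.0, size=(256, 4))

                  w = np.zeros(4)   # linear reward model: reward(x) = w @ x
                  lr = 0.1
                  for _ in range(200):
                      # P(preferred beats rejected) = sigmoid(r_pref - r_rej).
                      margin = preferred @ w - rejected @ w
                      p = 1.0 / (1.0 + np.exp(-margin))
                      # Gradient ascent on the preference log-likelihood.
                      grad = ((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
                      w += lr * grad

                  print("learned reward weights:", w)
                  ```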

                  In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

                  Humans are conscious and have gotten no closer to doing this, ever; I see no reason to believe consciousness will help at all with this matter.

                    • yeahiknow3@lemmings.world · 6 days ago

                    Feed it the entire internet and let it figure out what humans value

                    There are theorems in mathematical logic that tell us this is literally impossible. Also common sense.

                    And LLMs are notoriously stupid. Why would you offer them as an example?

                      I keep coming back to this: what we were discussing in this thread is the creation of an actual mind, not a zombie illusion. You’re welcome to make your half-assed, malfunctioning zombie LLM machine to do menial or tedious uncreative statistical tasks. I’m not against it. That’s just not what interests me.

                    Sooner or later humans will create real artificial minds. Right now, though, we don’t know how to do that. Oh well.

                    https://introtcs.org/public/index.html

    • Communist@lemmy.frozeninferno.xyz · 7 days ago

      A philosophical zombie still gets its work done; I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn’t meaningfully different.

      • yeahiknow3@lemmings.world · 7 days ago

        That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

        Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel’s incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.

        • Communist@lemmy.frozeninferno.xyz · 7 days ago

          If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity. All while you go “ah, but it’s not really reasoning.”

          What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say, “but who would want yet another machine that just does what we say?”

          Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?

          • yeahiknow3@lemmings.world · 7 days ago

            A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

            The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

            What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

            In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

            Hope that helps!

            • Communist@lemmy.frozeninferno.xyz · 7 days ago

              If there’s no way to tell the illusion from reality, tell me why it matters functionally at all.

              What functional difference is there between true thought and the illusion of it?

              Also, AGI means something that can do all economically important labor; it has nothing to do with what you said, and yours is not a common definition.

              • yeahiknow3@lemmings.world · 7 days ago

                Matter to whom?

                We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).

                Most people can’t distinguish a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter”? It would be weird to walk into a mathematics forum and ask, “Why does it matter?”

                Whether we can build an AGI is just a curious question, whose answer for now is No.

                P.S. Defining AGI in economic terms is like defining a CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

                That’s a pretty tall order if an AGI also has to do philosophy, politics, and science: all fields that require the capacity for rational deliberation and independent thought, btw.

                • Communist@lemmy.frozeninferno.xyz · 7 days ago

                  Most people can’t distinguish a correct mathematical equation from an incorrect one

                  This is irrelevant; we’re talking about something where nobody can tell the difference, not where it’s difficult.

                  What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

                  It means a job. That’s obviously not a job and obviously not what is meant; an interesting strategy from someone who just used “what most people mean when they say.”

                  That’s a pretty tall order if an AGI also has to do philosophy, politics, and science: all fields that require the capacity for rational deliberation and independent thought, btw.

                  It just has to be at least as good as a human at manipulating the world to achieve its goals; I don’t know of any other definition of AGI that factors in actually meaningful tasks.

                  An AGI should be able to do almost any task a human can do at a computer. It doesn’t have to be conscious, and I have no idea why or where consciousness factors into the equation.

                    • yeahiknow3@lemmings.world · 6 days ago

                    we’re talking about something where nobody can tell the difference, not where it’s difficult.

                    You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?

                    Seriously though, I’m out.

                    • yeahiknow3@lemmings.world · 6 days ago

                    Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.

                      You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, etc. These require consciousness.

                      Also, as you conveniently ignored, philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

                      Plus, if a machine only does what it’s told, then someone has to be telling it what to do, and that is itself a job the machine cannot do. Most of our jobs already consist of telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do unless it is itself told what to do; but then someone is telling it what to do, which is “a job.”