A philosophical zombie still gets its work done; I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn’t meaningfully different from the real thing.
That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).
Calculators, LLMs, and toasters can’t think, understand, or reason, by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself, like a human mind but perhaps more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel’s incompleteness theorems: we can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity. All while you go “ah, but it’s not really reasoning.”
what difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say “but who would want yet another machine that just does what we say?”
your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?
A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.
The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)
What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.
In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.
Hope that helps!
If there’s no way to tell the illusion from reality, tell me why it matters functionally at all.
what difference does true thought make from the illusion?
also, AGI means something that can do all economically important labor; it has nothing to do with what you said, and that’s not a common definition.
Matter to whom?
We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).
Most people can’t identify a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter”? It would be weird to enter a mathematics forum and ask “Why does it matter?”
Whether we can build an AGI is just a curious question whose answer, for now, is no.
P.S. defining AGI in economic terms is like defining CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
Most people can’t identify a correct mathematical equation from an incorrect one
this is irrelevant; we’re talking about something where nobody can tell the difference, not where it’s difficult.
What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
it means a job. That’s obviously not a job, and obviously not what is meant; an interesting strategy from someone who just appealed to “what most people mean when they say.”
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
it just has to be at least as good as a human at manipulating the world to achieve its goals; I don’t know of any other definition of AGI that factors in actually meaningful tasks.
an AGI should be able to do almost any task a human can do at a computer. It doesn’t have to be conscious, and I have no idea why or where consciousness factors into the equation.
we’re talking about something where nobody can tell the difference, not where it’s difficult.
You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, detection was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?
Seriously though, I’m out.
The existence of black holes has a functional role in physics; the existence of consciousness matters only to our subjective experience, not to our capabilities.
if I’m wrong, list a task that a conscious being can do that an unconscious one is unable to accomplish.
Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.
You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, and so on. These require consciousness.
Also, as you conveniently ignored, philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.
Plus, if a machine does what it’s told, then someone must be telling it what to do, and that is a job a machine cannot do. Most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”
A job is a task one human wants another to accomplish; it is not arbitrary at all.
philosophy, politics, science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.
I don’t see why they do; a philosophical zombie could do them, so why not an unconscious AI? AlphaEvolve is already producing new science, and I see no reason an unconscious being with the ability to manipulate the world and verify its results couldn’t do these things.
Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”
yes, but you can give it large, vague goals like “empower humanity, do what we say, and minimize harm,” and it will still pursue them. So what does it matter?