7 Comments
JG:

“Some people cast doubt on the Orthogonality Thesis on the grounds that true intelligence will include moral reasoning.”

The other issue with this is that one can be a moral realist and still believe that moral truths are inaccessible via the sort of reasoning an AI could perform. I’m a non-naturalist moral realist, and even if I’m right, it’s not at all clear to me that AI would agree. Similarly, a naturalist could believe that moral facts are *so* difficult to prove that even a superintelligent AI might get them wrong (similar to how a superintelligent AI might still not be able to solve the problem of consciousness, etc.).

Connor Jennings:

Yep, I agree!

Adam Cheklat:

There’s also “I, Robot”.

Xader:

good post; i like the chimp analogy. i understand the alignment problem, but what i fail to grasp is the common position that we should be worried about AGI arriving in the next few years.

i spent some time in an ACX open thread discussing the matter, and far more people than i would have expected seem to believe that LLMs will become superintelligent with enough compute.

i drilled a little deeper and found that almost all of these folks (AGI optimists/doomers) are presupposing epiphenomenalism as though it were established scientific fact. i saw multiple statements along the lines of “well the brain is just a sophisticated prediction algorithm too, so why couldn’t LLMs become smarter than a person?”

sure, bro, the brain is just a big, squishy binary computer and it’s synonymous with the mind. the color turquoise? that’s just matter bro. don’t worry about it.

then i had a thought: what if some people aren’t actually conscious? could there be a subset of the population composed of philosophical zombies? that would explain the confusion (apparently) experienced by some when they hear, “shouldn’t consciousness be a prerequisite for knowledge?” imagine how inscrutable that would be if all you’ve ever known (so to speak) was the objective!

obviously i’m being facetious, but it genuinely never ceases to amaze me how the unexamined physicalist ideology of our culture handicaps the ability of otherwise intelligent people to think clearly on topics like AI.

Connor Jennings:

I'm not sure I follow. The question of whether or not consciousness is immaterial, and whether or not it can cause physical events, seems quite separate from whether or not scaling will create AGI in the next couple of years.

Unless you think AGI requires that AI be conscious? I think most people just use the term to refer to an AI's abilities rather than its potential conscious states. Seems like you could have a superintelligent zombie.

Xader:

sorry, my two points bled into each other and neither was made very clearly. i believe that a true AGI would have to be capable of knowledge, and i'm skeptical that knowledge can come from anywhere other than a conscious mind.

because all intelligent beings that have ever existed share the trait of *being able to know things,* the idea that a complete automaton could become similarly intelligent strikes me as an extraordinary claim that demands better evidence than anything we've seen from an LLM.

this lack of knowledge is responsible for the propensity of LLMs to hallucinate, which, as far as i'm aware, remains a glaring and unresolved flaw. our ability to *compare information to an internal register of things we believe to be true* strikes me as totally fundamental to the process of working with that information in an intelligent way.

the folks in the ACX thread seem to completely disregard the centrality of this mental function, as though the human mind were nothing more than a computer that derives outputs from inputs deterministically and not in consultation with anything subjective. this philosophical position is just as unprovable as my own conception of the mind, but it still strikes me as self-evidently false.

so yeah, i pretty much think that AGI would have to be conscious. i guess maybe we could figure out how to imbue the quality of knowing into something totally inert, but the hardcore dualist in me says that won't happen.

Connor Jennings:

Right, I understand much better now. Well, if we're using that definition of AGI, then I also agree and think that's unlikely in the next couple of years. Pretty bold to think we'll have a conscious AI soon. I'm not sure it'll ever happen, but that's just because consciousness is a mystery.

If we take AGI to just mean an AI that can do most tasks better than most humans, though, I think we are close to that. 2027 would be my guess (I am trying not to get too attached to my career lol).
