
Interesting that Pinker was very slow to understand Bob's proposal that LLMs, in their pre-training, evolve in a manner similar to the human brain. Pinker clearly does not agree (despite his polite acknowledgement of Bob's point at the end of the podcast). Instead, Pinker continually repeats that LLMs are just next-word predictors that use statistical correlations of word frequency to respond to questions. That's why an LLM can tell you who Tom Cruise's mother is but not who her son is: the information almost never occurs in that direction on the internet. Text routinely states who Tom Cruise's mother is, and almost never names the mother first and then identifies her son. No surprise here, since the LLM is just looking at word frequency of occurrence, something Bob can't seem to absorb but Pinker (and Gary Marcus, who told Bob that repeatedly) clearly do.

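To make the frequency point concrete, here is a minimal sketch of the picture Pinker is gesturing at: a pure word-co-occurrence model, written here as a toy bigram counter in Python with an invented mini-corpus and a hypothetical predict_next helper, can answer the question in the direction the training text states it and has literally nothing to say in the reverse direction. Real LLMs learn far richer statistics than this, but the same directional asymmetry (the so-called "reversal curse") has been reported in them empirically, which is exactly the Tom Cruise example.

```python
from collections import Counter, defaultdict

# Toy corpus: the fact is always stated in one direction,
# just as celebrity parentage tends to be stated on the web.
corpus = [
    "the mother of tom cruise is mary lee pfeiffer",
    "the mother of tom cruise is mary lee pfeiffer",
    "tom cruise has a mother named mary lee pfeiffer",
]

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("is"))        # 'mary' : the forward direction is well attested
print(predict_next("pfeiffer"))  # None   : nothing in the corpus ever follows the mother's name
```
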
What was remarkable to hear is that Pinker seems to think, like Bob, that consciousness is epiphenomenal. And that's why neither Bob nor Pinker is even interested in the fact that LLMs are not sentient: they don't think sentience does anything. As a result they are missing the whole key to AGI, the goal of most AI developers (and the fear of the AI Doomers). Human intelligence is conscious, embodied, motivated, and self-interested. Since AIs lack all of these qualities, they can never produce human-level intelligence, because human intelligence requires all of those things. Neither Bob nor Pinker seems to understand embodiment, or "embodied cognition," and those concepts are key to why we can't develop AGI unless it can be made sentient. There is no evidence that it can be.

Human consciousness is largely awareness of our bodies, or, if you will, our being. Our bodies are mortal and have many needs that must be constantly maintained. Therefore humans have interests and goals that dominate their minds. They are self-motivated to keep going and to survive and thrive. AIs, on the other hand, have no motives, interests, or goals. They don't care about anything. Human intelligence is all about feelings and caring and needs and goals. AIs cannot experience these things because they are not biological organisms and do not experience anything. If you don't understand this concept of embodiment, you really cannot understand anything about AIs and their ability or inability to think like humans. It is very strange that two such well-known public intellectuals don't seem to grasp these relatively obvious points.


I had stopped listening to or reading Bob's thoughts on AI because I found his perspective so frustrating, but I listened to this podcast because I have always enjoyed Bob's past conversations with Pinker. There were portions of this discussion that I likewise enjoyed, but the fundamental problems with Bob's perspective on AI remain incredibly frustrating--I just don't understand how such a great thinker can hold such obviously wrong ideas on AI. Bob seems steadfast in his position that the pattern recognition of machine learning results in a "map of semantic meaning" that is "functionally comparable" to human understanding--despite the ever-growing evidence that LLMs fail basic tests of human understanding. Pinker made versions of this point a few times in the conversation, but did not hit it as hard as he could have. At one point, Pinker referred to the "mimicry" of AI, and that is precisely the point--LLMs are a wondrously powerful form of mimicry, but lack understanding in the sense that we have it.

There are so many vivid examples of this, with basic math being a clear one--LLMs can perfectly recite mathematical principles in words, but are very poor at actually applying them. Why? Because they can recite the words based on past patterns, but they don't "understand" the words in the way that we do. Gary Marcus and others have supplied many other examples. I have yet to hear or read Bob clearly confronting this basic dichotomy between LLMs (1) perfectly explaining certain simple ideas in words and (2) erroneously applying those simple ideas in practice. How does the "map of semantic meaning" correspond to something "functionally comparable" to human understanding yet still result in this dichotomy?

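The recite-versus-apply dichotomy can be sketched the same way (again a toy bigram counter with an invented corpus and a hypothetical complete helper, not a claim about how production LLMs are built): wording that dominates the training text gets parroted back fluently, while a novel arithmetic problem just gets the most frequent continuation ever seen after an equals sign rather than the actual sum.

```python
from collections import Counter, defaultdict

# Toy corpus: the *statement* of the rule is everywhere; worked examples are scarce.
corpus = ["to add two numbers line up their digits and carry when a column overflows"] * 10
corpus += ["2 + 2 = 4", "2 + 2 = 4", "3 + 4 = 7"]

# Bigram statistics: how often each token follows each other token.
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def complete(prompt, max_words=12):
    """Greedily extend a prompt one word at a time from bigram statistics."""
    tokens = prompt.split()
    for _ in range(max_words):
        seen = follows.get(tokens[-1])
        if not seen:
            break
        tokens.append(seen.most_common(1)[0][0])
    return " ".join(tokens)

# The rule is recited verbatim, because those exact word sequences dominate the corpus:
print(complete("to add two"))
# A novel problem gets the most frequent thing ever seen after '=', not the sum (42):
print(complete("17 + 25 =", max_words=1))
```
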
LLMs raise so many fascinating questions, and as I've written in previous posts, there is no one that I would rather hear engage with those questions than Bob--I have long held Bob in the absolute highest regard as a public intellectual of the first order. But to me, those questions do not include "is machine learning a form of evolution that is reverse engineering the architecture of human understanding?" The answer to that question is no, and arguing otherwise ignores the mechanics of these systems and the empirical evidence showing that pattern recognition / reconstruction is different than human understanding.

By focusing on a mistaken, overly anthropomorphized conception of AI, Bob is missing out on opportunities to address some really interesting questions that flow from how the models actually work. I've written about some of those questions in earlier posts. But it seems like Bob is committed to the path he is taking...


Correct and well said. Bob's position on this is inexplicable.


I very much agree with your argument that "Human consciousness is largely awareness of our bodies". However, I think you were a little hard on Steve, who gave a pretty good description of embodiment, although he didn't lean into how it is related to intentionality as much as you described. I don't think either of them is absolutely committed to a purely epiphenomenal view of consciousness.


I think you may be attributing views or lack of understanding to these two a bit liberally.

Despite this, I found your post thought-provoking.
