I've got an answer for you from your most recent 'Earthling Unplugged': "The whole thing about AI is it can in principle, as it evolves, do everything people do with their heads."

LLMs can do some things we do 'with our heads,' like manipulate language, get right answers to questions, notice and produce patterns, etc. But there is a huge class of things we do with our heads, to me the most important things, that they are *in principle* not capable of doing: they can neither understand nor express meaning.

[I know you are very impressed by what might be called a 'semantic map' found in LLMs -- that 'dog' is mapped closer to 'cat' than to 'aircraft carrier,' etc. But nothing about the ability to reproduce accurate semantic patterns of this sort from huge swaths of text suggests actual semantic engagement (actual understanding/expression of meaning).]
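
To be concrete about the kind of 'semantic map' I mean, here is a minimal sketch that measures distance between word vectors with cosine similarity. The `embedding` table and its numbers are purely made up for illustration, not taken from any real model; real LLM embeddings simply show the same geometric pattern at much higher dimension.

```python
import numpy as np

# Purely illustrative toy embeddings (not real model weights):
# each word is a point in a small vector space.
embedding = {
    "dog":              np.array([0.90, 0.80, 0.10]),
    "cat":              np.array([0.85, 0.75, 0.15]),
    "aircraft carrier": np.array([0.10, 0.20, 0.95]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'dog' lands closer to 'cat' than to 'aircraft carrier' in this toy space,
# which is the pattern the 'semantic map' observation points to.
print(cosine_similarity(embedding["dog"], embedding["cat"]))                # ~0.999
print(cosine_similarity(embedding["dog"], embedding["aircraft carrier"]))  # ~0.29
```

The point of the sketch is that the pattern is just geometry over text statistics; nothing in it requires, or demonstrates, that anything is being understood.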

None of this is to say we should stop worrying about the enormous potential harms of AI - bad-actor stuff, job-replacement stuff, etc. But I do think that we should START worrying about the discourse around AI collapsing or deflating how we view our own human capacities for understanding and expressing meaning.

Case in point, from your comments:

"LLMs help us see that language, rather than meaning something about "the world", works as a vehicle that moves the mind through meaning space. Inherently therefore, the meaning of an utterance depends on the starting point of the listener. Once again, this is obviously true; you have to believe in unempirical academic theories to believe that eg this paragraph has the same meaning to an informed reader and a Chinese child."

This is a view of meaning that is strikingly devoid of human agency. If what something means is simply 'what-it-is-taken-to-mean,' then misunderstanding is impossible -- and without the possibility of getting a meaning wrong, understanding ceases to be an achievement at all.
