NPR headline: ChatGPT promised to help her find her soulmate. Then it betrayed her

I find it intensely annoying when people ascribe intelligence, or intentionality, to statements by AIs (i.e. Large Language Models). In today’s example, a writer said that an AI “betrayed” someone. This kind of statement is a category error. It projects intelligence onto a system that, though facile with language, does not in fact engage in human reasoning at all. It just makes pronouncements that look like human speech. I really wish writers would stop using language that misleads people into thinking that AIs are, in fact, intelligent.

I began trying to imagine the words that shouldn’t be used to describe AI speech. In chatting with Philip, I said, “AIs can’t ‘promise’ anything either.”

“They can say they do, though. They can say anything.”

“They can say anything. It just doesn’t mean anything.”

“I don’t know,” he said. “It ‘means’ something, in the sense that a string of words means things. I mean, the AI can’t mean anything, because it has no agency, and no real existence. But the WORDS mean things, which is how we get this puzzlement.”

I disagree. Here’s the thing. Statements (strings of words) never mean anything on their own. The receiver always has to ascribe meaning to a statement. This is a fundamental tenet of social constructivism: You can’t transmit meaning — only words. You probably had a meaning in mind when you transmitted the words, but the other person receives the words and has to construct their own meaning from them.

In the normal case, one assumes that the statement meant something to the person who made it. When the receiver ascribes meaning to it, they make assumptions about what it means to them and what it may have meant to the speaker. And, in this way, interlocutors negotiate a shared understanding. But things don’t mean anything to AIs. So you’re projecting meaning onto something that isn’t there.

It reminds me of Wittgenstein’s “Language Game.” Wittgenstein began his philosophical inquiry with the idea that propositions (human statements) are (1) tautologies, (2) contradictions, or (3) neither. He agonized over what could be said and what could only be thought or shown. But, eventually, he came to call language a “game” and recognized that one of the principal outcomes of language was that most of what could be said had no corresponding referent in reality. I think he basically gave up on philosophy as a meaningful endeavor.

AIs are the language game as simulated by machines. Nothing they say has any referent. There is no intentionality or thought process behind their utterances. But when people see a statement, they are seduced into imagining there must be consciousness and meaning behind it. I would recommend people not give in to the temptation. AIs are not trying to accomplish anything. They do not have motives. Or goals. All they do is generate text that looks like an answer.

Do not project intelligence onto them. In fact, I would recommend not using them at all.

The people who are creating these machines obviously do have motives and goals. And it would be a mistake to believe that their goals align with yours.
