NPR headline: ChatGPT promised to help her find her soulmate. Then it betrayed her

I find it intensely annoying when people ascribe intelligence, or intentionality, to statements by AIs (i.e. Large Language Models). In today’s example, a writer said that an AI “betrayed” someone. This kind of statement is a category error. It projects intelligence onto a system that, though facile with language, does not in fact engage in human reasoning at all. It just makes pronouncements that look like human speech. I really wish writers would stop using these kinds of statements that mislead people into thinking that AIs are, in fact, intelligent.

I began trying to imagine the words that shouldn’t be used to describe AI speech. In chatting with Philip, I said, “AIs can’t ‘promise’ anything either.”

“They can say they do, though. They can say anything.”

“They can say anything. It just doesn’t mean anything.”

“I don’t know,” he said. “It ‘means’ something, in the sense that a string of words means things. I mean, the AI can’t mean anything, because it has no agency, and no real existence. But the WORDS mean things, which is how we get this puzzlement.”

I disagree. Here’s the thing. Statements (strings of words) never mean anything on their own. The receiver always has to ascribe meaning to a statement. This is a fundamental tenet of social constructivism: You can’t transmit meaning — only words. You probably had a meaning in mind when you transmitted the words, but the other person receives the words and has to construct their own meaning from them.

In a normal case, one makes the assumption that the statement meant something to the person who made it. When the receiver ascribes meaning to it, they make assumptions about what it means to themself and what it may have meant to the speaker. And, in this way, interlocutors negotiate a shared understanding. But things don’t mean anything to AIs. So you’re projecting meaning onto something that isn’t there.

It reminds me of Wittgenstein’s “language game.” Wittgenstein began his philosophical inquiry with the idea that propositions (human statements) are (1) tautologies, (2) contradictions, or (3) neither. He agonized over what could be said and what could only be thought or shown. But, eventually, he came to call language a “game” and recognized that one of the principal outcomes of language was that most of what could be said had no corresponding referent in reality. I think he basically gave up on philosophy as a meaningful endeavor.

AIs are the language game as simulated by machines. Nothing they say has any referent. There is no intentionality or thought process behind their utterances. But when people see a statement, they are seduced into imagining there must be consciousness and meaning behind it. I would recommend people not give in to the temptation. AIs are not trying to accomplish anything. They do not have motives. Or goals. All they do is generate text that looks like an answer.

Do not project intelligence onto them. In fact, I would recommend not using them at all.

The people who are creating these machines obviously do have motives and goals. And it would be a mistake to believe that their goals align with yours.

[Image: a stylish hip flask]

It’s become nearly impossible to avoid “AI,” which is increasingly shoehorned into every corner of our lives. I’ve lived through a bunch of the tech bubbles and this is by far the biggest and most intrusive. The tech-bros are convinced that robot slaves will print money for them so they can do away with all of these inconvenient human resources, impoverish them, and make them traffic their children for sex. Or, maybe, that’s just what they want you to think — to keep the bezzle going. But the fact of the matter is that today it’s nearly impossible to do anything using technology that hasn’t been tainted by so-called AI.

It seems apparent to me that the techbros have been intentionally enshittifying tools (like search) to force people to become dependent on AI. I suspect they are also using the huge pools of venture capital at their disposal to literally pay companies (cough Mozilla cough) to put AI into everything so that it becomes impossible to avoid.

It’s becoming harder and harder to define exactly what AI is. Some people distinguish between analytical and generative AI. Or by what the model is trained with. Or by where the model is run. I’m quite sure that almost no one, outside of narrow specialists, really has a good understanding. I think it’s all worth avoiding.

As an author, I strive very hard to stay away from AI. I don’t use any of the AI chatbots. I’ve used ChatGPT exactly one time. I want my writing to be unequivocally my own. I certify as such when I submit a manuscript. Toward that end, I don’t use computer operating systems with AI installed (I use Pop!_OS and an older version of macOS). I have managed to retain the Google Assistant, turning off Gemini whenever they turn it on. I use the NoAI DuckDuckGo search engine. I have all of the AI bullshit turned off in Firefox. I do most of my writing in a text editor that doesn’t have AI (although there are AI plugins you can install). I’m using the wp-disable-ai plugin for WordPress to remove the interface elements that are based on generative AI. I turn off the AI Companion in Zoom. Etc., etc., etc.
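To give one concrete example of the kind of opting out I’m describing: in Firefox, the built-in AI features can be switched off via about:config, or persistently in a user.js file. The preference names below are my understanding of recent Firefox builds — they’re not guaranteed to be stable across versions, so verify them in about:config before relying on them:

```javascript
// user.js — Firefox preferences to disable built-in AI features.
// Pref names reflect recent Firefox releases and may change; check about:config.

// Disable the AI chatbot sidebar integration.
user_pref("browser.ml.chat.enabled", false);

// Disable the local machine-learning inference engine that backs
// features such as AI-generated alt text.
user_pref("browser.ml.enable", false);
```

Place the file in your Firefox profile directory; the prefs are applied each time the browser starts.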

That said, I also use tools where it is nigh-on impossible to completely avoid AI, like Google Docs. Or Google Image Search. Or Google Maps. As Philip Brewer commented to me:

You know, it’s just about impossible to do anything on the internet and not end up using LLMs. If I use Google to check and see if there’s already a company with the same name I’m thinking to use as the name of a nefarious company in my story, Google is going to give me an AI-fied version of the search. If I read that, and then (depending on the result) either go with my fictional company name or else change it to some other fictional name, is my work now a work that used an LLM?

I don’t avoid AI only because of my authorship. I also want to make sure I’m using my brain and not becoming dependent on machines to think for me. I suspect people will discover that it is exactly like with GPS systems: There is “concrete evidence supporting the abstract contention that the rising technical order of GPS systems is dissipating human mental order in those who come to increasingly use and depend on it.” (J. Robbins, “GPS navigation…but what is it doing to us?,” 2010 IEEE International Symposium on Technology and Society, Wollongong, NSW, Australia, 2010, pp. 309–318, doi: 10.1109/ISTAS.2010.5514623; see also A. Hutchinson, “Global Impositioning Systems: Is GPS technology actually harming our sense of direction?,” The Walrus, 2009.) This is not to say that I never use GPS systems, but I try to minimize my use — using them only when absolutely necessary — because becoming dependent on them causes the parts of your brain that do that work to atrophy. Literally.

I also avoid the commercial AI systems because their creators and operators are manifestly untrustworthy. You can’t know whether the results they’re presenting to you have some hidden bias. Or an overt bias. Sometimes that bias may be as simple as, “This restaurant paid us more money to have them show up in your Google Map results.” But there are a lot of other far more subtle potential biases that might be intentionally programmed in for political or ideological purposes. I would much rather be able to inspect the underlying data directly and make my own decisions. Search engines allowed us to do that. AI summaries do not.

People are going to need to come to their own decisions about what kinds of AI use are acceptable and unacceptable. I recognize that I tend toward one extreme. But others may reasonably tend toward another. Context is important.

It is not just a slippery slope. I remember many years ago, I went bicycling with my brother on the Kal-Haven rail trail, which runs from Kalamazoo to South Haven, on the Lake Michigan shoreline. We rode out, making good time, and feeling great. Then we turned around and the ride back was a terrible slog. It felt like we were riding into a strong headwind. Upon reflection, we realized that although the rail trail looked perfectly flat, it was not level. The rail trail is all downhill from Kalamazoo to the lake. And all uphill going back. You’d never know that standing at any particular point — you can’t see the slope. I think AI is like that: it’s a continuum and it’s going to become harder and harder to know exactly where you are on the slope. Unless you have a GPS.

Note: WordPress would lurve for me to use an AI assistant to generate an image for this post. I considered doing that — just for the lulz. But, no. It’s my own, original artwork. Made by me: a human being.