
It’s become nearly impossible to avoid “AI,” which is increasingly shoehorned into every corner of our lives. I’ve lived through a bunch of the tech bubbles, and this is by far the biggest and most intrusive. The tech-bros are convinced that robot slaves will print money for them so they can do away with all of these inconvenient human resources, impoverish them, and make them traffic their children for sex. Or, maybe, that’s just what they want you to think — to keep the bezzle going. But the fact of the matter is that today it’s nearly impossible to do anything using technology that hasn’t been tainted by so-called AI.
It seems apparent to me that the tech-bros have been intentionally enshittifying tools (like search) to force people to become dependent on AI. I suspect they are also using the huge pools of venture capital at their disposal to literally pay companies (cough Mozilla cough) to put AI into everything, so that it becomes impossible to avoid.
It’s becoming harder and harder to define exactly what AI is. Some people distinguish between analytical and generative AI. Or by what the model is trained on. Or by where the model is run. I’m quite sure that almost no one, outside of narrow specialists, really has a good understanding. I think it’s all worth avoiding.
As an author, I strive very hard to stay away from AI. I don’t use any of the AI chatbots. I’ve used ChatGPT exactly one time. I want my writing to be unequivocally my own, and I certify as much when I submit a manuscript. Toward that end, I don’t use computer operating systems with AI installed (I use Pop!_OS and an older version of macOS). I have managed to retain the Google Assistant, turning off Gemini whenever they turn it on. I use the NoAI Duck Duck Go search engine. I have all of the AI bullshit turned off in Firefox. I do most of my writing in a text editor that doesn’t have AI (although there are AI plugins you can install). I use the wp-disable-ai plugin for WordPress to remove the interface elements that are based on generative AI. I turn off the AI Companion in Zoom. Etc., etc., etc.
That said, I also use tools where it is nigh-on impossible to completely avoid AI, like Google Docs. Or Google Image Search. Or Google Maps. As Philip Brewer commented to me:
You know, it’s just about impossible to do anything on the internet and not end up using LLMs. If I use Google to check and see if there’s already a company with the same name I’m thinking to use as the name of a nefarious company in my story, Google is going to give me an AI-fied version of the search. If I read that, and then (depending on the result) either go with my fictional company name or else change it to some other fictional name, is my work now a work that used an LLM?
I don’t avoid AI only because of my authorship. I also want to make sure I’m using my own brain and not becoming dependent on machines to think for me. I suspect people will discover that it is exactly like GPS systems: there is “concrete evidence supporting the abstract contention that the rising technical order of GPS systems is dissipating human mental order in those who come to increasingly use and depend on it.” (From J. Robbins, “GPS navigation…but what is it doing to us?,” 2010 IEEE International Symposium on Technology and Society, Wollongong, NSW, Australia, 2010, pp. 309-318, doi: 10.1109/ISTAS.2010.5514623 — see A. Hutchinson, “Global Impositioning Systems: Is GPS technology actually harming our sense of direction?” The Walrus, Oct. 14, 2009. http://www.journals.uchicago.edu/doi/abs/10.1086/432651). This is not to say that I never use GPS systems, but I try to minimize my use — turning to them only when absolutely necessary — because becoming dependent on them causes the parts of your brain that do that work to atrophy. Literally.
I also avoid the commercial AI systems because their creators and operators are manifestly untrustworthy. You can’t know whether the results they’re presenting to you have some hidden bias. Or an overt bias. Sometimes that bias may be as simple as, “This restaurant paid us more money to show up in your Google Maps results.” But there are many other, far more subtle, biases that might be intentionally programmed in for political or ideological purposes. I would much rather be able to inspect the underlying data directly and make my own decisions. Search engines allowed us to do that. AI summaries do not.
People are going to need to come to their own decisions about what kinds of AI use are acceptable and unacceptable. I recognize that I tend toward one extreme. But others may reasonably tend toward another. Context is important.
It is not just a slippery slope; it’s a slope you can’t even see. I remember, many years ago, bicycling with my brother on the Kal-Haven rail trail, which runs from Kalamazoo to South Haven, on the Lake Michigan shoreline. We rode out, making good time, and feeling great. Then we turned around, and the ride back was a terrible slog. It felt like we were riding into a strong headwind. Upon reflection, we realized that although the rail trail looked perfectly flat, it was not level. The trail is all downhill from Kalamazoo to the lake. And all uphill going back. You’d never know it standing at any particular point — you can’t see the slope. I think AI is like that: it’s a continuum, and it’s going to become harder and harder to know exactly where you are on the slope. Unless you have a GPS.
Note: WordPress would lurve for me to use an AI assistant to generate an image for this post. I considered doing that — just for the lulz. But, no. It’s my own, original artwork. Made by me: a human being.

Thank you for this. I am right there with you regarding any use of LLMs. I use yWriter for my drafting, with no AI tools. I don’t think I’ve ever knowingly used ChatGPT or the like. It annoys me when tools I use shove AI features where there is no need for them. And it takes constant vigilance to keep them off.