
One thing most people probably haven’t thought much about is the autonomy of so-called AIs. (Note: Large Language Models are not actually “intelligent” in the way most people mean the word; we tend to project intelligence onto their behavior. But for convenience, I’ll call them AI anyway.) Who actually controls AIs?
People assume that AIs are “trained” on “data” and then behave autonomously in response to the prompts they’re given. That’s sometimes true. But their behavior is often secretly constrained. When Google’s photo-recognition software mistakenly labeled an African American man as a gorilla, the company’s fix was a hard limit: the software would simply never report recognizing anything as a gorilla. None of this is visible to the end user. Most current AIs are probably full of hacks like this, put in to keep the AI from making common-sense blunders that would get the company in trouble. But what other kinds of hacks might be in place?
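To make the idea concrete, here’s a rough sketch of what a hard limit like that might look like: a blocklist applied to a classifier’s output after the model has done its work. To be clear, this is my own illustration, not Google’s actual code; the labels, names, and function are invented for the example.

```python
# Hypothetical sketch of a hard-coded output filter layered on top of an
# image classifier. It illustrates how a vendor can silently constrain a
# model's behavior after training, without the user ever noticing.

# Labels the vendor has decided the system must never report,
# regardless of what the underlying model predicts.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def filter_predictions(predictions):
    """Drop any prediction whose label is on the blocklist.

    `predictions` is a list of (label, confidence) pairs from some
    hypothetical classifier. The removed labels never reach the caller,
    so the constraint is invisible to the end user.
    """
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

# The raw model output might include a blocked label...
raw = [("gorilla", 0.91), ("person", 0.88), ("outdoors", 0.40)]
# ...but the user only ever sees the filtered result.
print(filter_predictions(raw))  # [('person', 0.88), ('outdoors', 0.40)]
```

The point isn’t the three lines of code; it’s that nothing about the product tells you such a filter exists, or what else might be on the list.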
If you’re a company producing an AI, there are all kinds of things you might wish your AI would do if used in particular circumstances. Or by particular people: your opponents, say. Or politicians. How tempting will it be for the corporations that make AIs to have them act in ways that benefit the corporation when the opportunity arises? Anyone who knows corporations knows it will be totally irresistible.
More importantly, when was the last time you heard of a corporation getting its network compromised? Yesterday? This morning? Ten minutes ago? It happens all the time. What happens when one of these AIs gets compromised? How do you know the AIs you’ve been using up until now haven’t already been compromised?
Humans sometimes get compromised too. If someone gets kompromat on a person (a pee tape, say), they might be able to get them to do nearly anything, even become a traitor to their country. And, of course, people are notoriously susceptible to inducements: money, sex, drugs. Or they turn mole or traitor out of revenge. There are a bunch of huge differences between human treachery and a compromised AI. But one difference should give you pause.
We have deep experience with human treachery. We all know hundreds or thousands of examples of it throughout recorded history. There is legal precedent and volumes of case law for how to handle it. We have no experience with what happens when an AI gets compromised and begins to systematically undermine the agenda of the user. Who is responsible? Who decides? What’s the liability? Nobody knows.
Personally, I don’t use AI for anything. Not for important things. Not for unimportant things. Not for anything. That may seem like an extreme position. But I think that once many people begin to use AI, they’ll quickly become dependent on it and will find it much harder to recognize the subtle ways that AI — or whoever is actually controlling it — may be using them.