Well, the world finally caught up to my story again. Firewatcher is a near-future sci-fi story. Conventional wisdom held that many of its trends would arrive circa 2100. I follow trends. I guessed 2040. Then I started writing the book, realized the trends were happening even sooner, and worked harder and faster to finish the story. That’s why the first edition of Firewatcher includes a few (ha!) missed edits and doesn’t have page numbers. (The Revised Edition caught a lot of those misses.) Today, August 20, 2024, another event hit the timeline. Someone wants to put an AI in charge of a government.
re: AI For Mayor #PNTP
As I alluded to in another of my blogs, PNTP, trust in government (really, in politicians and government employees) is down and still falling. Trust in AI is a confusing mess, but without calling it AI, we already trust algorithms with many decisions: how to hire, when to fire, how to drive, how to invest, how to get somewhere, even what to shop for and when. Now someone is running for mayor of Cheyenne, Wyoming, who, if elected, will simply let an AI run things. AI in charge by proxy.
In Firewatcher’s backstory, people distrust each other so much that they concede control directly to AI. They don’t like it. They don’t trust it. But they feel even worse about each other. As the AIs gain control, the colonists decide to leave the planet.
Much of the debate about AI is like the debate about nuclear weapons. The government spent extraordinary efforts and resources making sure no one dropped The Big One. Good. But I was worried about the little ones, too. A pocket nuke wouldn’t destroy a city, but it would be a disaster worse than hurricanes and earthquakes. It would be disruptive and fundamentally unsettling to those who lived there.
The definition of an ultimate AI has been changing as we advance the technology. We’ve already surpassed some expectations from the 1950s. Now folks worry more about AGI, Artificial General Intelligence, a further goal with greater authority. I’m more concerned with something that isn’t even a true AI. An AI wannabe can be just as dangerous. Experts may say it isn’t an AGI or even a full AI, but if the software acts like one, or is given the same authority as one, then it effectively is one, ‘it’ being some higher level of AI. An unsophisticated AI may be given more constrained authority, but the world is so interlinked that our systems are now unknowable, fragile, and likely to enable unintended consequences. If every car decides to turn left, the world can go into gridlock, though maybe not in the rural parts of Wyoming and such.
This post is less about prognostication and more about authors and anticipation. One of the benefits of science fiction is playing with ideas before we have to live with them. Scifi is society’s R&D lab. But sometimes the ideas growing there sneak out of their petri dishes and infect the world. Another reason to write and read AI stories, and to do it now.