Traveling home on the Sunday after Thanksgiving provided an interesting insight into the future of AI and autonomous vehicles.
The Sunday after Thanksgiving brings the annual I-35 post-Thanksgiving traffic jam. It’s something of a tradition as everyone simultaneously returns home. You never know precisely when or where it will happen or how bad it will be, but you know the traffic will drop from 80 mph to 0. In years past, you’d hit the slowdown and everyone would have to make a decision based on what little they could see ahead of them: get off or stay on. Getting off the interstate and taking the parallel access road might be quicker by bypassing an accident. Or it might not. It was a gamble either way. And everyone had to make that decision independently. Hence, some got off and some stayed on.
This year, we happened to have our Google Maps navigation up on one of our iPhones so that we could see ahead of time where the problems were and how bad they’d be (so we could time stops for the kids). As we approached what turned out to be an overturned semi-trailer, Google Maps said to get off the highway because that route was quicker. So we started trying to get off. But so did a surprising number of other drivers. I’m betting that most of them also had a GPS navigation system telling them the same thing at the same time. The problem is that the access road next to the interstate isn’t capable of handling the volume of traffic that hit it; it’s only one or two lanes in most places. So traffic on the access road suddenly slowed significantly. Then, before we could actually reach the exit ramp, Google Maps detected the slower traffic on the access road and told us to stay on the interstate, as that route was now faster.
The irony of it amused me enough to alleviate some of the traffic stress. But it pointed to a larger looming problem with AIs and autonomous vehicles. The traffic system is designed to accommodate thousands of drivers operating as independent decision-makers. Not everyone will do the same thing. GPS navigation systems are already eliminating some of that independence and creating new traffic problems. Autonomous vehicles will take that to the next level.
If you ask two locals (especially in smaller towns) what’s the best route to a nearby city or town, you’ll often get two different answers. Each of them has a preferred route. If, in the future, all the cars drive themselves and use the same navigation algorithms and traffic updates, they’ll all take the same routes, thereby clogging that route and leaving alternate routes open and faster. It’s quite plausible then that they’ll all receive a traffic update simultaneously and re-route, clogging the second route and opening up the first.
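That feedback loop is easy to see in a toy simulation. The sketch below (all numbers are invented for illustration; `travel_time`, the route capacities, and the decision rules are my own assumptions, not any real navigation algorithm) compares cars that all follow identical advice based on last-observed congestion against drivers who decide independently:

```python
# Toy model: two parallel routes whose travel time grows with load.
# If every car follows the same navigation advice (whichever route
# was faster at the last observation), the whole fleet switches
# together and traffic oscillates between routes. Independent
# drivers with individual judgment settle into a mixed split.
# All constants here are illustrative assumptions.

import random

CARS = 100

def travel_time(load, capacity):
    """Simple congestion model: time rises linearly with load/capacity."""
    return 10 * (1 + load / capacity)

def simulate(shared_navigation, steps=20, seed=0):
    rng = random.Random(seed)
    capacity = {"interstate": 80, "access_road": 20}
    load = {"interstate": CARS, "access_road": 0}  # everyone starts on I-35
    history = []
    for _ in range(steps):
        times = {r: travel_time(load[r], capacity[r]) for r in capacity}
        if shared_navigation:
            # Every car gets the same advice at the same time,
            # so the entire fleet moves to the "faster" route at once.
            best = min(times, key=times.get)
            load = {r: (CARS if r == best else 0) for r in capacity}
        else:
            # Each driver leans toward the faster route but decides
            # independently (modeled here as a random choice weighted
            # by the other route's travel time).
            load = {r: 0 for r in capacity}
            total = times["interstate"] + times["access_road"]
            for _ in range(CARS):
                if rng.random() < times["access_road"] / total:
                    load["interstate"] += 1
                else:
                    load["access_road"] += 1
        history.append(dict(load))
    return history

herd = simulate(shared_navigation=True)
indep = simulate(shared_navigation=False)
```

With shared navigation, every step flips all 100 cars to the other route; with independent choices, the split dampens toward a stable mix. The point isn't the specific numbers but the structure: a single shared signal turns many decisions into one.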
All this is to say that if all (weak) AIs solve the same problems in the same ways, then a sort of groupthink will emerge. We humans will then be along for the groupthink ride. As another example, if we increasingly allow search algorithms not only to answer questions for us but also to tell us which questions to ask, we will increasingly groupthink our way to the same (potentially highly objectionable) conclusions. Except it’s not really groupthink. We’ll have delegated the job of groupthink to our new AIs, the ones driving us around and teaching us about the world.