Category: Cooperation

AI and Thanksgiving Traffic

Traveling home on the Sunday after Thanksgiving provided an interesting insight into the future of AI and autonomous vehicles.

The Sunday after Thanksgiving is the annual I-35 post-Tday traffic jam. It’s something of a tradition as everyone simultaneously returns home. You never know precisely when or where it will happen or how bad it will be, but you know the traffic will drop from 80 mph to 0. In years past, you’d hit the slowdown and everyone would have to make a decision based on what little they could see ahead of them: Get off or stay on. Getting off the interstate and taking the parallel access road might be quicker by bypassing an accident. Or it might not. It was a gamble either way. And everyone had to make that decision independently. Hence, some got off and some stayed on.

This year, we happened to have our Google Maps navigation up on one of our iPhones so that we could see ahead where the problems were and how bad they'd be (and so that we could time stops for the kids). As we approached what turned out to be an overturned semi-trailer, Google Maps said to get off the highway because that route was quicker. So we started trying to get off. But so did a surprising number of other drivers. I'm betting that most of them also had a GPS navigation system telling them the same thing at the same time. The problem is that the access road next to the interstate isn't capable of handling the volume of traffic that hit it; it's only 1 or 2 lanes in most places. So traffic on the access road suddenly slowed significantly. Then, before we could actually reach the exit ramp, Google Maps detected the slower traffic on the access road and told us to stay on the interstate, as that route was now faster.

The irony of it amused me enough to alleviate some of the traffic stress. But it pointed to a larger looming problem with AIs and autonomous vehicles. The traffic system is designed to accommodate thousands of drivers operating as independent decision makers. Not everyone will do the same thing. GPS navigation systems are already eliminating some of that independence and creating new traffic problems. Autonomous vehicles will take that to the next step.

If you ask two locals (especially in smaller towns) what’s the best route to a nearby city or town, you’ll often get 2 different answers. Each of them has their preferred route. If, in the future, all the cars drive themselves and use the same navigation algorithms and traffic updates, they’ll all take the same routes, thereby clogging that route and leaving alternate routes open and faster. It’s quite plausible then that they’ll all receive a traffic update simultaneously and re-route, clogging the second route and opening up the first.
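The oscillation described above can be made concrete with a toy simulation. This is a hypothetical sketch, not a model of any real navigation system: 1,000 cars choose between two routes whose travel times grow with load, and every car follows the same "take the currently faster route" advice at the same moment. All the numbers (base times, capacities) are invented for illustration.

```python
# Toy simulation of the rerouting feedback loop: when every car
# follows identical advice simultaneously, the fleet flips back
# and forth between routes instead of settling into a balanced split.

CARS = 1000

def travel_time(load, base, capacity):
    """Simple congestion model: travel time rises linearly with load."""
    return base * (1 + load / capacity)

def simulate(steps=6):
    on_a = CARS  # everyone starts on the interstate (route A)
    history = []
    for _ in range(steps):
        t_a = travel_time(on_a, base=30, capacity=800)         # interstate
        t_b = travel_time(CARS - on_a, base=35, capacity=300)  # access road
        history.append((on_a, round(t_a, 1), round(t_b, 1)))
        # Everyone receives the same traffic update and reroutes at once:
        on_a = CARS if t_a <= t_b else 0
    return history

for on_a, t_a, t_b in simulate():
    print(f"route A: {on_a:4d} cars  (A: {t_a} min, B: {t_b} min)")
```

Run it and the whole fleet ping-pongs between the two routes every step, which is exactly the clog-then-unclog cycle the post describes. Independent drivers making their own guesses, by contrast, tend to split across routes.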

All this is to say that if all (weak) AIs solve the same problems in the same ways, then a sort of groupthink will emerge. We humans will then be along for the groupthink ride. As another example, if we increasingly allow search algorithms not only to answer questions for us but also to tell us which questions to ask, we will increasingly groupthink our way to the same (potentially highly objectionable) conclusions. Except it's not really groupthink. We'll have handed the job of groupthink over to our new AIs driving us around and teaching us about the world.

Research Institute for Humanity and Nature

Michael O'Rourke and I just returned from Kyoto, Japan, where we spoke and conducted Toolbox workshops with the environmental researchers at the national Research Institute for Humanity and Nature (RIHN).

On the first day, we spoke on problems of communication and collaboration in cross-disciplinary research. As a way of introducing the problem, I compared inter- and transdisciplinary research (collectively cross-disciplinary research, CDR) to the game Double Cranko, which comes from an old episode of M*A*S*H. The game is a cross between chess, checkers, poker, and gin (both the drink and the rummy). There are no rules; players make them up as they go along. The problem for CDR is much worse. Imagine 2 scientists from different disciplines working on a research project and 2 non-research stakeholders in that project (say, one from government and another from business). Each knows one game only, with all the rules, terms, and objectives of that game. In collaborating on this project, they have to develop a way to integrate 4 different games (chess, checkers, poker, and gin) into one game. But they don't even speak the same game language. A point we emphasized over the two days with the RIHN researchers is the need for a co-creation of meaning of ambiguous terms or concepts for effective collaboration.

In the morning workshop of the first day, we facilitated dialogues among the researchers to begin that process of co-creation of meaning. They had to negotiate various ambiguous terms that we gave them in a set of prompts. In the afternoon session, the researchers broke into their research teams to produce a concept map of their projects from which to find project-specific ambiguous terms or concepts that will have to be negotiated with their projects’ non-research stakeholders.

[cross-posted at]

Unpublished Thoughts: Ethics of Cooperation

Last week something interesting happened in Japan. A woman fell into the gap between a train and the platform. Unlike a similar instance in New York, where one man alone could save another from a subway train, no one person could save her. So 40 passengers got off the train and started pushing. Others quickly joined in. By cooperating, they managed to push the train over far enough to get her out.

This case raises some interesting questions about cooperation. What moral obligation do we have to cooperate? Can we be obligated to cooperate with others on something that we would not individually be obligated to do? And how can we best understand any moral obligation to cooperate: as a duty or as a virtue?
Continue reading

Prisoner’s Dilemma: Now with Real Prisoners

In a very cool new paper, University of Hamburg economists Khadjavi and Lange tested the Prisoner's Dilemma with actual prisoners. Interestingly enough, apparently no one had ever bothered to check how they would actually behave in the game named for them. Continue reading
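For readers unfamiliar with the game: the Prisoner's Dilemma pits individual incentive against mutual benefit. The sketch below uses the textbook payoff structure with illustrative numbers (not the stakes Khadjavi and Lange actually used) to show why defection dominates even though both players would do better cooperating.

```python
# Standard Prisoner's Dilemma payoffs (illustrative values only).
# payoffs[(a, b)] gives (player 1's payoff, player 2's payoff).
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),  # mutual cooperation: both do fairly well
    (C, D): (0, 5),  # the lone cooperator is exploited
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection: worse for both than (C, C)
}

def best_response(opponent_action):
    """Return player 1's payoff-maximizing action against a fixed opponent."""
    return max([C, D], key=lambda a: payoffs[(a, opponent_action)][0])

# Whatever the opponent does, defecting pays more -- defection is
# the dominant strategy, even though (C, C) beats (D, D) for both.
print(best_response(C), best_response(D))
```

The interest of the Khadjavi and Lange result lies in how real prisoners' behavior compares with what this dominant-strategy logic predicts.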

© 2018 Brian Robinson
