When I was still at university, I took several courses in AI, and in one of them we spent a lot of time looking at why Go was so hard for computers to play. I was therefore very impressed when DeepMind created AlphaGo two years ago and it started beating professional players, sooner than I had expected. And I am now overwhelmed by the version called AlphaGo Zero, which is so much better:
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.
It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.
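The core loop in that description – play against yourself, then tune the move predictor towards the games' outcomes – can be sketched in miniature. Below is a toy illustration, emphatically not DeepMind's method: instead of a neural network and a powerful search algorithm it uses a plain win-rate table, and instead of Go it plays a trivial take-1-or-2-stones game, but the self-play structure is the same.

```python
import random

def self_play_training(episodes=5000, epsilon=0.2, seed=0):
    """Toy self-play loop in the spirit of AlphaGo Zero's training:
    one policy plays both sides of a game, and every finished game
    updates the statistics the policy is drawn from.

    The game is trivial (a pile of stones, remove 1 or 2, whoever
    takes the last stone wins), and the "network" is just a win-rate
    table -- a deliberate simplification for illustration."""
    rng = random.Random(seed)
    wins, visits = {}, {}  # (pile, move) -> outcome statistics

    def choose(pile, greedy=False):
        moves = [m for m in (1, 2) if m <= pile]
        if not greedy and rng.random() < epsilon:
            return rng.choice(moves)  # keep exploring during training
        return max(moves, key=lambda m: wins.get((pile, m), 0)
                                        / max(visits.get((pile, m), 1), 1))

    for _ in range(episodes):
        pile, player, history = rng.randint(1, 12), 0, []
        while pile > 0:  # self-play: the same policy moves for both sides
            move = choose(pile)
            history.append((pile, move, player))
            pile -= move
            player ^= 1
        winner = player ^ 1  # whoever took the last stone
        for state, move, mover in history:  # "tune" the policy on the outcome
            visits[state, move] = visits.get((state, move), 0) + 1
            if mover == winner:
                wins[state, move] = wins.get((state, move), 0) + 1

    return lambda pile: choose(pile, greedy=True)

policy = self_play_training()
```

Starting from completely random play, the learned policy ends up leaving its opponent a multiple of 3 stones, which is the known winning strategy for this little game – the same "teach yourself from your own games" idea, just at microscopic scale.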
I’m wondering whether the same methodology could be used to create a program that plays Diplomacy.
The game of Diplomacy was invented by Allan B. Calhamer in 1954. The seven players represent the great powers of pre-WWI Europe, but unlike in many other board games, there are no dice – nothing is random. In effect it’s more like chess for seven players, except for the addition of diplomacy, i.e., negotiation. For instance, if I’m France and attack England on my own, it’s likely our units will simply bounce; to succeed, I need to convince Germany or Russia to join me, or I need to convince England I’m their friend and that it’ll be perfectly safe to move all their units to Russia or Germany without leaving any of them behind.
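To make that "bounce" concrete: in Diplomacy every unit has strength 1, supports add to that strength, the single strongest unit moving to a province gets in, and ties mean nobody moves. A minimal adjudicator for just that one rule – ignoring convoys, dislodgement, support-cutting and everything else that makes real adjudication hard, with made-up unit names – might look like this:

```python
from collections import defaultdict

def resolve(orders):
    """orders: list of (unit, target_province, number_of_supports).
    Every listed unit tries to move to its target; the unique strongest
    unit enters, and a tie for top strength means everyone bounces.
    A toy covering only this one corner of Diplomacy adjudication."""
    movers = defaultdict(list)
    for unit, target, supports in orders:
        movers[target].append((1 + supports, unit))  # base strength is 1
    outcome = {}
    for province, attempts in movers.items():
        attempts.sort(reverse=True)
        if len(attempts) == 1 or attempts[0][0] > attempts[1][0]:
            outcome[province] = attempts[0][1]  # unique strongest unit moves in
        else:
            outcome[province] = None            # standoff: nobody moves
    return outcome

# France attacking alone bounces off England's fleet...
assert resolve([("French fleet (Brest)", "English Channel", 0),
                ("English fleet (London)", "English Channel", 0)]
               ) == {"English Channel": None}
# ...but with one support (say, from Germany) France gets in.
assert resolve([("French fleet (Brest)", "English Channel", 1),
                ("English fleet (London)", "English Channel", 0)]
               ) == {"English Channel": "French fleet (Brest)"}
```

The mechanics, in other words, are simple; it's everything around them – persuading another power to provide that support – that makes the game hard.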
Implementing a computer version of Diplomacy without the negotiation aspect isn’t much use (or fun), and implementing human negotiation capabilities is a bit beyond the ability of current computational linguistics techniques.
However, why not simply let AlphaDiplomacy Zero develop its own language? It will probably look rather odd to a human observer, perhaps a bit like Facebook’s recent AI experiment:
Well, weirder than this, of course, because Facebook’s Alice and Bob started out with standard English. AlphaDiplomacy Zero might decide that “Jiorgiougj” means “Let’s gang up on Germany”, and that “Oihuergiub” means “I’ll let you have Belgium if I can have Norway.”
It would be fascinating to study this language afterwards. How many words would it have? How complex would the grammar be? Would it be fundamentally different from human languages? How would it evolve over time?
It would also be fascinating for students of politics and diplomacy to study AlphaDiplomacy’s negotiation strategies (once the linguists had translated it). Would it come up with completely new approaches?
I really hope DeepMind will try this out one day soon. It would be truly fascinating, not just as a board game, but as a study in linguistic universals and politics.
It would tick so many of my boxes in one go (linguistics, AI, Diplomacy and politics). I can’t wait!