Generation X are disappearing

Hugo Rifkind had an article in The Times today about being a Xennial (too old to be a Millennial, too young for Generation X), and I sent him the following tweet as a reply:

Hugo and 77 other people (so far) were kind enough to like it, so I thought I’d elaborate a bit on my theory.

A lot of the stuff about the Baby Boomers, Generation X and the Millennials can be traced back to Howe and Strauss’s Generations from 1991. This book examined earlier American generations and claimed to identify a four-generation cycle. They then defined the new generations that were emerging at the time and sketched out their likely futures. In particular, they expected a huge crisis once the Baby Boomers had started to retire (perhaps around 2020), which Generation X would sort out before handing over power to the Millennials.

This is clearly not what happened – the crises (9/11 and the financial crash) happened much sooner than they expected, while the Baby Boomers were still in office. They actually mentioned this possibility briefly on page 382:

What happens if the crisis comes early? What if the Millennium – the year 2000 or soon thereafter – provides Boomers with the occasion to impose their “millennial” visions on the nation and world? The generation cycle suggests that the risk of cataclysm would be very high.

Furthermore, in their historical analysis they clearly don’t assign a standard length to generations, so they would themselves have expected the generational boundaries in the 20th century to require some tweaking once the big defining events had taken place. It’s therefore completely in their spirit to revisit the definitions they suggested more than 25 years ago.

They actually don’t even stick to four generations per cycle all the time. What they call the Civil War Cycle contains only three. As they write on page 192:

[It is] America’s only three-part cycle – the one whose crisis came too soon, too hard, and with too much ghastly devastation. This cycle is no aberration. Rather, it demonstrates how events can turn out badly – and, from a generational perspective, what happens when they do.

I’m postulating that this has happened again. The crisis came so soon that at least half of Generation X hadn’t yet managed to get high enough up the housing ladder (or build up assets in other ways) to benefit from the asset boom that followed the financial crash. As a result we now have a huge split in most western societies. On the one hand, older people (Baby Boomers and older X’ers) are often asset-rich, have paid off most of their house and have a good pension; other members of this generation are less rich, but they might at least have a cheap council house that is affordable on their salary or their pension. On the other hand, younger people (Millennials and younger X’ers) don’t tend to have much wealth: they’re either renting in the private sector, or they’ve paid so much for their house that a crazy amount of their salary goes on the mortgage. They don’t have decent pensions, and they don’t really expect ever to be able to retire comfortably. They also typically grew up being told to expect a great and prosperous life, and they weren’t expecting things to turn out like this.

I was born in 1972, so right in the middle of Generation X, and I think we felt different from both the Baby Boomers and the Millennials before the financial crash. However, I now feel more and more similar to the Millennials, and further and further removed from the Boomers. So I think we might have to redefine the Baby Boomer generation as stretching all the way to the late 1960s, with the Millennial generation starting immediately afterwards. (I don’t believe it’s a clean break – whether somebody belongs in one generation or the other ultimately depends on whether they had enough assets when the economy collapsed.)

I think we can now also tell when the Millennial generation ended: The youngsters who don’t remember the time before the financial crash have a different mindset because they didn’t spend their childhood expecting a rich and easy life. They also happen to be the smartphone generation.

So to finish this blog post, let me redefine the generations as follows:

  • The Baby Boomers (too young to remember WWII, and old enough to have built up their wealth before the financial crash): Roughly 1940–1969.
  • The Car-Crash Generation (grew up expecting an easy life, but suddenly the rug got pulled from under their feet): Roughly 1970–1999.
  • The Smartphone Generation (they don’t remember the easy years, and they live their lives through their smartphones): Roughly 2000–.

AlphaDiplomacy Zero?

[Photo of a Diplomacy game by condredge]
When I was still at university, I did several courses in AI, and in one of them we spent a lot of time looking at why Go was so hard for computers to play well. I was therefore very impressed when DeepMind created AlphaGo two years ago and started beating professional players, because it happened sooner than I had expected. And I am now overwhelmed by the version called AlphaGo Zero, which is so much better:

Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.
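To make that description a little more concrete, here is a toy sketch of the self-play idea – emphatically not DeepMind’s code, just my own illustration: the game is Nim rather than Go, the “network” is a plain lookup table of move probabilities and winner predictions, and the “search” is a handful of policy-guided rollouts rather than a full Monte Carlo tree search. All the names and numbers below are made up for the example.

```python
import math
import random
from collections import defaultdict

# Toy game: Nim. Players alternate removing 1-3 stones; whoever takes the
# last stone wins. (A stand-in for Go, obviously.)
MOVES = [1, 2, 3]
START = 10

# The "network": for each state (stones left) we store move probabilities and
# a value estimate (predicted winner) for the player to move.
# It starts out knowing nothing about the game: uniform moves, value 0.
policy = defaultdict(lambda: [1 / 3, 1 / 3, 1 / 3])
value = defaultdict(float)

def legal(state):
    return [m for m in MOVES if m <= state]

def rollout(state):
    """Finish the game by sampling the current policy.
    Returns +1 if the player to move in `state` wins, otherwise -1."""
    sign = 1
    while True:
        moves = legal(state)
        weights = [policy[state][m - 1] for m in moves]
        state -= random.choices(moves, weights)[0]
        if state == 0:
            return sign        # the player who just moved took the last stone
        sign = -sign

def search(state, n_sims=30):
    """Crude stand-in for the search: score each legal move by rollouts and
    return a sharpened distribution over the legal moves. (A real AlphaGo
    Zero-style system would use the value prediction here instead of rollouts.)"""
    scores = []
    for m in legal(state):
        nxt = state - m
        if nxt == 0:
            scores.append(1.0)                  # immediately winning move
        else:                                   # our value = minus the opponent's
            scores.append(sum(-rollout(nxt) for _ in range(n_sims)) / n_sims)
    weights = [math.exp(4 * s) for s in scores]  # softmax over move scores
    total = sum(weights)
    return [w / total for w in weights]

def self_play_game(lr=0.3):
    """Play one game against itself, then nudge the policy towards the search
    results and the value towards the eventual winner."""
    history, state, player = [], START, 0
    while state > 0:
        moves = legal(state)
        probs = search(state)
        history.append((state, player, moves, probs))
        state -= random.choices(moves, probs)[0]
        if state == 0:
            winner = player
        player = 1 - player
    for s, p, moves, probs in history:
        outcome = 1.0 if p == winner else -1.0
        value[s] += lr * (outcome - value[s])             # predict the winner
        for i, m in enumerate(moves):                     # predict good moves
            policy[s][m - 1] += lr * (probs[i] - policy[s][m - 1])
        total = sum(policy[s])
        policy[s] = [x / total for x in policy[s]]

for _ in range(500):
    self_play_game()

# From 10 stones the player to move wins by leaving a multiple of 4, i.e. by
# taking 2 stones; after self-play the policy at state 10 should favour that.
print([round(p, 2) for p in policy[10]], round(value[10], 2))
```

The point is the shape of the loop – play against yourself with a search on top of the current predictions, then train the predictions towards the search results and the actual winner – which is exactly what the quote describes at vastly greater scale.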

I’m wondering whether the same methodology could be used to create a computer player for Diplomacy.

The game of Diplomacy was invented by Allan B. Calhamer in 1954. The seven players represent the great powers of pre-WWI Europe, but unlike in many other board games, there are no dice – nothing is random. In effect it’s more like chess for seven players, except for the addition of diplomacy, i.e., negotiation. For instance, if I’m France and attack England on my own, it’s likely our units will simply bounce; to succeed, I need to convince Germany or Russia to join me, or I need to convince England that I’m their friend and that it’ll be perfectly safe to move all their units to Russia or Germany without leaving any of them behind.
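For anyone who hasn’t played: the “bounce” happens because a move into a province only succeeds if it is strictly stronger than every competing claim on that province, where strength is one unit plus however many units support the move. The sketch below is only a toy adjudicator I’ve made up to illustrate that one rule – real Diplomacy adjudication has many more cases (convoys, cut supports, dislodgements and so on), and the orders are invented.

```python
from collections import defaultdict

# Invented example orders: (power, target province, number of supporting units).
# England's "North Sea" order here just represents its unit sitting there.
orders = [
    ("France", "North Sea", 0),    # France attacks on its own...
    ("England", "North Sea", 0),   # ...England defends with equal strength -> bounce
    ("Germany", "Belgium", 1),     # Germany attacks Belgium with one support
    ("England", "Belgium", 0),     # England also wants Belgium, unsupported
]

def adjudicate(orders):
    """Resolve each province: the strongest single claim (1 unit + supports)
    wins; equal top strength means everybody bounces and nobody gets in."""
    by_province = defaultdict(list)
    for power, province, supports in orders:
        by_province[province].append((power, 1 + supports))
    results = {}
    for province, claims in by_province.items():
        best = max(strength for _, strength in claims)
        winners = [power for power, strength in claims if strength == best]
        results[province] = winners[0] if len(winners) == 1 else "bounce"
    return results

print(adjudicate(orders))
# {'North Sea': 'bounce', 'Belgium': 'Germany'}
```

This is why France attacking England alone just bounces, while France with a German support would get in – and why the negotiation is the whole game.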

Implementing a computer version of Diplomacy without the negotiation aspect isn’t much use (or fun), and implementing human negotiation capabilities is a bit beyond the ability of current computational linguistics techniques.

However, why not simply let AlphaDiplomacy Zero develop its own language? It will probably look rather odd to a human observer, perhaps a bit like Facebook’s recent AI experiment:

Well, weirder than this, of course, because Facebook’s Alice and Bob started out with standard English. AlphaDiplomacy Zero might decide that “Jiorgiougj” means “Let’s gang up on Germany”, and that “Oihuergiub” means “I’ll let you have Belgium if I can have Norway.”

It would be fascinating to study this language afterwards. How many words would it have? How complex would the grammar be? Would it be fundamentally different from human languages? How would it evolve over time?

It would also be fascinating for students of politics and diplomacy to study AlphaDiplomacy’s negotiation strategies (once the linguists had translated it). Would it come up with completely new approaches?

I really hope DeepMind will try this out one day soon. It would be truly fascinating, not just as a board game, but as a study in linguistic universals and politics.

It would tick so many of my boxes in one go (linguistics, AI, Diplomacy and politics). I can’t wait!

The future belongs to small and weird languages

[Tlingit photo by David~O]
Google Translate and other current machine translation programs are based on bilingual corpora, i.e., collections of translated texts. They translate a text by breaking it into bits, finding similar bits in the corpus, selecting the corresponding bits in the other language and then stringing the translation snippets together again. It works surprisingly well, but it means that current machine translation can never get better than the existing translations (errors in the corpus will get replicated), and also that it’s practically impossible to add a language for which very few translations exist (this is, for instance, a challenge for adding Scots, because very few people translate to or from it).
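As a toy illustration of what “selecting the corresponding bits and stringing them together” amounts to, here is a greedy longest-match lookup in a tiny hand-made phrase table (English to Danish; the phrase pairs are invented for the example, not taken from any real corpus):

```python
# A tiny, hand-made "phrase table" (English -> Danish). These pairs are invented
# for the example; a real system would extract millions from translated texts.
phrase_table = {
    ("the", "wee"): "den lille",
    ("wee",): "lille",
    ("the", "dog"): "hunden",
    ("dog",): "hund",
    ("barks",): "gør",
}

def translate(sentence, table):
    """Greedy longest-match lookup: at each position take the longest chunk the
    table knows and emit its stored translation; unknown words pass through."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):     # try the longest chunk first
            chunk = tuple(words[i:j])
            if chunk in table:
                out.append(table[chunk])
                i = j
                break
        else:
            out.append(words[i])               # not in the corpus: pass through
            i += 1
    return " ".join(out)

print(translate("The wee dog barks", phrase_table))   # -> den lille hund gør
print(translate("The wee cat barks", phrase_table))   # -> den lille cat gør
```

Anything the table has never seen is simply passed through untranslated, which is precisely why adding a language like Scots is so hard: without translated texts there is nothing to look things up in.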

My prediction is that the next big breakthrough in computational linguistics will involve deducing meaning from monolingual corpora, i.e., figuring out the meaning of a word by analysing how it’s used. If somebody manages to construct a computational representation of meaning (perhaps aided by brain research), it should then in theory be possible to translate from one language into another without ever having seen a translation before, by turning language into meaning and back into another language. I’ve no idea when this is going to happen, but I presume Google and other big software companies are throwing big money at this problem, so it might not be too far away. My gut feeling would be 10–20 years from now.
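The simplest possible version of “figuring out the meaning of a word by analysing how it’s used” is to represent each word by counts of the words it occurs next to, so that words used in similar contexts end up with similar vectors. The four-sentence corpus below is made up for the example; real systems use neural embeddings trained on billions of words, but the underlying idea is the same:

```python
from collections import Counter, defaultdict
from math import sqrt

# A made-up monolingual "corpus". Each word will be represented by a count of
# the words that appear within two positions of it.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
    "the dog chased the cat",
]

window = 2
vectors = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                vectors[word][words[j]] += 1

def cosine(a, b):
    """Similarity of two context-count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

# "cat" and "dog" are used in very similar contexts, so their vectors are
# close; "cat" and "on" much less so.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["on"]))
```

The step nobody has cracked yet is turning vectors like these into a genuine, language-independent representation of meaning that you could generate another language from – that is the breakthrough I’m predicting above.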

Interestingly, once this form of machine translation has been invented, translating between two language varieties will be just as easy as translating between two separate languages. So you could translate a text in British English into American English, or formal language into informal, or Geordie into Scouse. You could even ask for Wuthering Heights as J.K. Rowling would have written it.

Also, the computer could analyse your use of language and start mimicking it – using the same words and phrases with the same pronunciation. In effect, it could start sounding like you (or like your mum, Alex Salmond or Marilyn Monroe if you so desired).

This will have huge repercussions for dialects and small languages.

At the moment, we’re surrounded by big languages – they dominate written materials as well as TV and movies, and most computer interfaces work best in them. It’s also hard to speak a non-standard variety of a big language, because speech recognition and machine translation programs tend to fall over when the way you speak doesn’t conform. Scottish people are very aware of this, as shown by the famous elevator sketch:

However, if my predictions come true, all of that will change. As soon as a corpus exists (and that can include spoken language, not just written texts), the computer should be able to figure out how to speak and understand this variety. Because translation is always easier and more accurate between similar language varieties than between very different ones, people might prefer to get everything translated or dubbed into their own variety. So you will never need to hear RP or American English again if you don’t want to – you can get everything in your own variety of Scottish English instead. Or in broad Scots. Or in Gaelic.

Every village used to have its own speech variety (its patois, to use the French term). The Reformation initiated a process of language standardisation, and this got a huge boost when all children started going to school to learn to read and write (not necessarily well, but always in the standard language). When radio was invented, the spoken language started converging, too, and television accelerated this convergence even further. We’re now in a situation where lots of traditional languages and dialects are threatened with extinction.

If computers start being good at picking up the local lingo, all of that will potentially change again. There will be no great incentive to learn a standard variety of a language if your computer can always bridge the gap when other people don’t understand you. The languages of the world might start diverging again. That will be interesting.