Okay, that's pretty awesome.
An AI that vanquished humanity at perhaps the most complex traditional game on Earth was inconceivably smart. But not smart enough to survive its own replacement by an even more awesome, alien intelligence.
Google's DeepMind researchers have just announced the next evolution of their seemingly indomitable artificial intelligence – AlphaGo Zero – which has dispensed with what may have been the most inefficient resource in its ongoing quest for knowledge: humans.
Zero's predecessor, dubbed simply AlphaGo, was described as "Godlike" by one of the crestfallen human champions it bested at the ancient Chinese board game, Go, but the new evolution has refined its training arsenal by eradicating human teachings from its schooling entirely.
The AlphaGo versions that kicked our butts at Go in a series of contests this year and last year first learned to play the game by analysing thousands of human amateur and professional games, but AlphaGo Zero is entirely self-taught, learning by 100 percent independent experimentation.
In a new study, the researchers report how that uncanny self-reliance sharpened Zero's intelligence to devastating effect: in 100 games against Zero, a previous AlphaGo incarnation – which cleaned the floor with us in 2016 – didn't pick up a single win. Not one.
Even more amazingly, that trumping came after just three days of self-play training by AlphaGo Zero, in which it distilled the equivalent of thousands of years of human knowledge of the game.
"It's like an alien civilization inventing its own mathematics," computer scientist Nick Hynes MIT told Gizmodo.
"What we're seeing here is a model free from human bias and presuppositions. It can learn whatever it determines is optimal, which may indeed be more nuanced that our own conceptions of the same."
After 21 days of self-play, Zero had progressed to the standard of its most powerful predecessor, known as AlphaGo (Master), which is the version that beat world number one Ke Jie this year, and in subsequent weeks it eclipsed that level of performance.
Aside from the self-reliance, the team behind AlphaGo Zero ascribe its Go dominance to an improved, single neural network (former versions used two in concert), and more advanced training simulations.
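To make that self-play idea concrete, here is a minimal, purely illustrative sketch in Python. It is not DeepMind's algorithm: the real AlphaGo Zero uses a deep residual network and Monte Carlo tree search, while this toy replaces the network with a lookup table and Go with a trivial two-move number game. The class name, the game rules, and all parameters are invented for illustration; only the overall shape – one model with both a policy head and a value head, trained entirely from games it plays against itself – mirrors the system described above.

```python
import random

# Toy stand-in for Go: each player picks one number from {0, 1, 2};
# the first player wins ties or anything higher, otherwise the second
# player wins. The point is the training loop, not the game.

class TinyPolicyValueNet:
    """A single table-based 'network' with a policy head (win counts)
    and a value head (expected outcome) -- a loose analogy to AlphaGo
    Zero's one combined network, where earlier AlphaGo versions used
    two separate networks."""

    def __init__(self):
        self.counts = {}   # state -> {move: win count}  (policy head)
        self.values = {}   # state -> expected outcome    (value head)

    def policy(self, state, legal_moves):
        c = self.counts.get(state, {})
        total = sum(c.get(m, 1) for m in legal_moves)
        return {m: c.get(m, 1) / total for m in legal_moves}

    def value(self, state):
        return self.values.get(state, 0.0)

    def update(self, trajectory, winner):
        for state, move, player in trajectory:
            if winner == player:              # reinforce winning moves
                c = self.counts.setdefault(state, {})
                c[move] = c.get(move, 0) + 1
            v = self.values.get(state, 0.0)
            reward = 1.0 if winner == player else -1.0
            self.values[state] = v + 0.1 * (reward - v)  # moving average

def self_play_game(net, explore=0.3):
    """Play one game purely against itself and learn from the result."""
    state, trajectory, legal = (), [], (0, 1, 2)
    for player in (0, 1):
        probs = net.policy(state, legal)
        if random.random() < explore:         # occasional exploration
            move = random.choice(legal)
        else:                                 # otherwise follow policy
            move = max(probs, key=probs.get)
        trajectory.append((state, move, player))
        state += (move,)
    winner = 0 if state[0] >= state[1] else 1
    net.update(trajectory, winner)
    return winner

random.seed(0)
net = TinyPolicyValueNet()
for _ in range(2000):
    self_play_game(net)

root = net.policy((), (0, 1, 2))
```

After a couple of thousand self-play games, with no human examples anywhere in the loop, the policy at the opening position concentrates on the strongest move (the highest number) simply because that move keeps winning games against itself.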
But just because the AI is racing ahead at such an awesome – if disquieting – pace, it doesn't necessarily mean Zero is smarter or more capable than humans in other fields away from this complex but constrained board game.
"AI fails in tasks that are surprisingly easy for humans," computational neuroscientist Eleni Vasilaki from Sheffield University in the UK told The Guardian.
"Just look at the performance of a humanoid robot in everyday tasks such as walking, running, and kicking a ball."
That may be true, but allow us our moment of silent awe as we witness the birth of this astonishingly powerful synthetic way of thinking.
It might not do everything humans can do, but it can do plenty of things we can't.
According to DeepMind, those capabilities will one day soon help Zero – or its inevitable, evolving heirs – figure out things like how biological mechanisms operate, how energy consumption can be reduced, or how new kinds of materials fit together.
Welcome to a bright new future, which clearly isn't ours alone.
The findings are reported in Nature.