Sunday, December 10, 2017

10/12/17: Rationally-Irrational AI, yet?..

In a recent post, I mused about the deep-reaching implications of Google's AlphaZero (or, in its earliest incarnation, AlphaGo) and its capability to develop systems of logic independent of humans. And now we have another breakthrough in Google's AI saga.

According to a report in the Guardian:

"AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up."

Another quote worth considering:
"After winning 25 games of chess versus Stockfish 8 starting as white, with first-mover advantage, a further three starting with black and drawing a further 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew 2."

Technically, this is impressive. But the real question worth asking at this stage is whether AI logic is capable of intuitive sensing, as opposed to relying on self-generated libraries of move permutations. The latter is a form of linear thinking, in contrast to the highly non-linear 'intuitive' logic that would be consistent with discrete 'jumps' from one tree of logical moves to another, based not on the history of past moves but on the strategy those moves reveal to the opponent. I don't think we have an answer to that yet.
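To make the distinction concrete, here is a toy sketch (entirely my own illustration, not AlphaZero's actual method) using a simple take-away game: one approach exhaustively enumerates every permutation of future moves, while the other applies a learned pattern directly, answering without any search at all.

```python
# Toy game: a single pile of stones; each player removes 1-3 stones,
# and whoever takes the last stone wins. Two ways to choose a move:

def minimax_move(stones):
    """'Linear' thinking: exhaustively search the tree of move permutations."""
    best = None
    for take in (1, 2, 3):
        if take > stones:
            continue
        # A move wins if it takes the last stone or leaves a lost position.
        if take == stones or not _can_force_win(stones - take):
            return take
        if best is None:
            best = take  # fallback in a lost position
    return best

def _can_force_win(stones):
    """True if the player to move can force a win (full recursion)."""
    return any(
        take == stones or not _can_force_win(stones - take)
        for take in (1, 2, 3) if take <= stones
    )

def policy_move(stones):
    """'Intuitive' analogue: a direct pattern (leave a multiple of 4),
    answering in constant time with no search whatsoever."""
    take = stones % 4
    return take if take else 1  # lost position: any move will do
```

Both functions pick the same winning moves, but `policy_move` never expands a game tree; whether a system like AlphaZero genuinely works this second way, rather than merely compressing the first, is exactly the open question.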

In my view, this is important because, as I argued some years ago in a research paper, such 'leaps of faith' in logical systems are indicative of the basic traits of humanity, distinguishing it from other forms of conscious life. In other words, can machines be rationally irrational, like humans?..
