Why Artificial Intelligence Is Scary

August 2017

Let's begin with the bottom line: computers are figuring out solutions to problems, and no one knows exactly how they're doing it.

A technique called "machine learning" is now leading the way in artificial intelligence. I gave a couple of examples in my earlier column on neural nets. Google taught a computer to recognize cats in images not by telling it anything about cats but simply by showing it millions of images of cats. The computer itself, using machine learning, gradually recognized patterns and figured out what constitutes catness.
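
If you're curious what "learning from examples" looks like in code, here's a tiny sketch in Python using the TensorFlow/Keras library. It's only an illustration of the idea, not Google's actual system (which learned from unlabeled video frames), and the folder names, image sizes, and network shape are my own assumptions:

```python
# A toy illustration of learning "catness" from examples rather than rules.
# This is not Google's system; the folder layout and sizes are assumptions.
import tensorflow as tf

# Labels come from subfolder names, e.g. images/cat/... and images/other/...
train_data = tf.keras.utils.image_dataset_from_directory(
    "images/", image_size=(128, 128), batch_size=32)

# A small network: layers of nodes whose weights are adjusted automatically
# as the network sees more and more examples.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "cat"
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, epochs=5)  # no rules about cats are ever written down
```

Notice that nothing in the code describes whiskers, fur, or ears; the network's weights simply shift, example by example, until its guesses line up with the labels.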

Google's machine learning technology, developed in part by its DeepMind subsidiary, now underlies many of the services the company offers, such as image recognition in Google Photos and video recommendations on YouTube.

It's also the basis, as we discussed earlier, of Google Translate. Last September, Google dropped its former phrase-based approach to translation and replaced it with a system built on neural networks. The new system used machine learning to analyze vast numbers of existing translations and recognize patterns in them. The result was much better translations.

Here's the fun part. They fed the computer translations from English to Korean and vice versa, and also translations from English to Japanese and vice versa, so that it could translate between these languages. Then somebody wondered: Would the computer be able to translate from Korean to Japanese, even though their machine learning system hadn't been fed Korean-Japanese translations? It worked. The computer produced a reasonable translation. And no one knew how it did it.

Using machine learning on the sample translations, the computer had built a model so adept at recognizing patterns that it seemed to tap into a deeper level of language, forming its own internal representation of concepts it had never been trained to understand.

And that's also the scary part. Computers are accomplishing extraordinary feats using machine learning, but there's not yet any way to know how they're arriving at their solutions.

Similarly, Google's AlphaGo used machine learning to analyze 160,000 games of Go – the famously complex ancient board game that originated in China – comprising 30 million board positions. In doing so, it learned which moves were most successful. Then Google set copies of AlphaGo to play against each other, game after game, learning all the while. The predictable result: in May, AlphaGo beat Ke Jie, the world's top-ranked Go player, in three straight games.
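
The self-play part of that recipe can be illustrated on a much smaller scale. The sketch below is my own toy example in Python: it uses the simple game of Nim (players alternate taking one to three stones; whoever takes the last stone wins) instead of Go, and a crude scoring table instead of AlphaGo's deep neural networks, but the idea of two copies playing each other and reinforcing winning moves is the same:

```python
# A heavily simplified illustration of learning "which moves are most
# successful" through self-play, using tiny Nim instead of Go.
import random

value = {}  # (stones_remaining, stones_taken) -> running score for that move

def pick(stones, explore=0.1):
    moves = [t for t in (1, 2, 3) if t <= stones]
    if random.random() < explore:
        return random.choice(moves)           # occasionally try something new
    return max(moves, key=lambda t: value.get((stones, t), 0.0))

def self_play_game():
    stones, player, history = 10, 0, {0: [], 1: []}
    while stones > 0:
        take = pick(stones)
        history[player].append((stones, take))
        stones -= take
        if stones == 0:
            winner = player                   # taking the last stone wins
        player = 1 - player
    return winner, history

for _ in range(20000):                        # two copies play and learn
    winner, history = self_play_game()
    for move in history[winner]:
        value[move] = value.get(move, 0.0) + 1
    for move in history[1 - winner]:
        value[move] = value.get(move, 0.0) - 1

print(pick(10, explore=0.0))  # usually 2, the mathematically best first move
```

No one tells the program the winning strategy; the scores simply drift toward the moves that keep showing up in winning games.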

Ke Jie had earlier said that AlphaGo would never be able to beat him, but after losing, he said, "Last year, I think the way AlphaGo played was pretty close to human beings, but today I think he plays like the God of Go."

How did AlphaGo do it? No one knows, and there is no practical way to find out. Machine learning uses neural networks with many layers, assigning weights to the connections between nodes, working in a fashion loosely similar to our brains. But the complexity of AlphaGo's model exceeds anything a human brain can trace. The number of possible board positions in Go is vastly larger than the number of atoms in the universe. Machine learning builds an incredibly complex model out of those weights and then makes decisions, but there's no way for us to follow how it makes them.
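
To see what "layers and weights" means in the smallest possible case, here's a few-line sketch in Python using the NumPy library. The sizes and the random weights are arbitrary choices of mine; the point is that even this toy decision is nothing but arithmetic over weights, and a real system such as AlphaGo has millions of learned weights:

```python
# A drastically scaled-down picture of a neural network: just layers of
# numeric weights turning inputs into a decision. The sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))   # weights from 4 inputs to 8 hidden nodes
w2 = rng.normal(size=(8, 1))   # weights from 8 hidden nodes to 1 output

def decide(x):
    hidden = np.maximum(0.0, x @ w1)          # each hidden node: a weighted sum, then a cutoff
    score = 1 / (1 + np.exp(-(hidden @ w2)))  # squash the final sum to a 0-1 score
    return score.item()

print(decide(np.array([0.2, 0.9, 0.1, 0.4])))
# With a few dozen weights you can still trace this by hand;
# with millions of them, arranged in many layers, you can't.
```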

That can create problems. Researchers at Mount Sinai Hospital in New York used machine learning to analyze 700,000 patient records. The result was a system that was extremely good at predicting disease when looking at new records. It could even anticipate who would develop schizophrenia, which experts have found notoriously difficult to predict. But then what? Should a doctor prescribe a treatment without knowing how the computer arrived at its prediction?
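
For readers who want a rough sense of what such a study involves, here's a generic sketch in Python using the scikit-learn library. The file name, the columns, and the choice of model are my assumptions; it is not the Mount Sinai system, which used deep learning on far richer data:

```python
# A generic sketch of predicting a diagnosis from patient records.
# The file name and columns are hypothetical, not the Mount Sinai data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

records = pd.read_csv("patient_records.csv")   # assumed: numeric columns plus "diagnosis"
X = records.drop(columns=["diagnosis"])
y = records["diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen records:", model.score(X_test, y_test))
# The score can be impressive, yet the model's "reasoning" is spread across
# hundreds of trees and thousands of split rules: opaque to the doctor using it.
```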

In a wonderful article in Wired magazine, David Weinberger points out that computers have surpassed humans' ability to find patterns and draw conclusions. We became an intelligent species by looking at phenomena and creating simple models to help us understand them, such as the motions of the planets. But the models human minds can create have reached a limit, and now we depend on computers to build extremely complex models that are beyond our ability to fathom. And we're increasingly dependent on the answers those models provide.

What should we think about all this? Weinberger says that maybe the world isn't as knowable as we thought. Maybe the true nature of reality is that it's beyond our ability to understand – and that artificial intelligence is helping us to appreciate this.

"Models are always reductive," he says. "They confine the investigation to the factors that we can observe and follow…. Now our machines are letting us see that even if the rules are simple, elegant, beautiful, and rational, the domain they govern is so granular, so intricate, so interrelated … that our brains and our knowledge cannot begin to comprehend it."

© 2017 by Jim Karpen, Ph.D.