Artificial General Intelligence — Is the Turing Test Useless?

Artificial intelligence (AI) is all the rage today. It permeates our lives in ways that are obvious to us and in ways that are not so obvious. Some obvious ways are search engines, game playing, Siri, Alexa, self-driving cars, ad selection, and speech recognition. Some not-so-obvious ways are finding new patterns in big-data research, solving complex mathematical equations, creating and defeating encryption methodologies, and designing next-generation weapons.

Yet AI remains artificial, not human. No AI computer has yet passed the Turing Test. AI far exceeds human intelligence in some cognitive tasks, like calculating and game playing. AI even exceeds humans in cognitive tasks that require extensive human training, like interpreting certain X-rays and pathology slides. Generally, its achievements, while amazing, are still somewhat narrow. They are getting broader, particularly in hitherto exclusively human capabilities like facial recognition. But we have not yet achieved what is called artificial general intelligence, or AGI.

AGI is defined as the point where a computer’s intelligence is equal to and indistinguishable from human intelligence. It defines a point toward which AI is supposedly heading. There is considerable debate as to how long it will take to reach AGI and even more debate whether that will be a good thing or an existential threat to humans.

Here are my conclusions:

  1. AGI will never be achieved.
  2. The existential threat still exists.

AGI will never be achieved for two reasons. First, we will never agree on a working definition of AGI that could be measured unambiguously. Second, we don’t really want to achieve it and therefore won’t really try.

We cannot define AGI because we cannot define human intelligence—or, more precisely, our definitions will leave too much room for ambiguity in measurement. Intelligence is generally defined as the ability to reason, understand, and learn. AI computers already do this, depending on how one defines these terms. More precise definitions attempt to identify the unique characteristics of human intelligence, including the ability to create and communicate memes, reflective consciousness, fictive thinking and communication, common sense, and shared intentionality.

Even if we could define all of these characteristics, it seems inconceivable that we will agree on a way to measure their combined capabilities unambiguously. It is even more inconceivable that we will ever achieve all of those characteristics in a computer.

More importantly, we won’t try. Human intelligence includes many functions that don’t seem necessary to achieve the future goals of AI. The human brain has evolved over millions of years, and many functions that are tightly integrated into our cognitive behaviors seem unnecessary, even unwanted, in future AI systems.

Emotions, dreams, sleep, control of breathing and heart rate, monitoring and regulation of hormone levels, and many other physiological functions are inextricably built into all brain activity. Do we need an angry computer? Why would we waste time trying to include those functions in future AIs? Emulating human intelligence is not the correct goal. Human intelligence makes a lot of mistakes because of human biases. Our goal is to improve on human intelligence—not emulate it.

The more likely path to future AI is NOT to emulate the human brain fully, but rather to model the brain where that is helpful—as in the parallel processing of deep neural networks and self-learning—while creating non-human, computer-based approaches to problem solving, learning, pattern recognition, and other useful functions that will assist humans. The end result will not be an AI that is indistinguishable from human intelligence by any test. Yet it will still be “smarter” in many obvious and measurable ways. The Turing Test is irrelevant.
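As a purely illustrative sketch of what “brain-inspired self-learning” means in practice (a toy example of my own, not drawn from any of the systems or references discussed here), consider a tiny neural network in Python that adjusts its own connection weights until it learns the XOR pattern—useful learning, with no pretense of emulating a human brain:

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR: a pattern no single linear unit can compute.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Two layers of randomly initialized "connections" plus bias terms.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: the network's current guesses.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the error and nudge every weight
        # (plain gradient descent on the squared error).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # approaches [0, 1, 1, 0]: the network has taught itself XOR

No one would call this network intelligent, let alone human; it simply shows how a system can improve itself from data, which is the narrow sense in which future AI borrows from the brain.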

If that is true, why would AI still be an existential threat? The concern of people like Elon Musk, Stephen Hawking, Nick Bostrom, and many other eminent scientists is that there will come a time when self-learning and self-programming AI systems reach a “cross-over” point at which they rapidly exceed human intelligence and become what is called artificial superintelligence, or ASI. The fear is that we will then lose control of an ASI in unpredictable ways. One possibility is that an ASI will treat humans much as we treat other species and eliminate us, either intentionally or unintentionally, just as we eliminate thousands and even millions of other species today.

There is no reason that a future ASI must pass through an AGI stage to pose this threat. It could still be uncontrollable by us, unfriendly to us, and never have passed the Turing Test or any other measure of human intelligence.

References

P. Saygin, I. Cicekli, and V. Akman, “Turing Test: 50 Years Later,” Minds and Machines 10:463 (2000)

“Musk and Zuckerberg bicker over the future of AI,” Engadget, July 25, 2017

D. W. Simborg, What Comes After Homo Sapiens? (DWS Publishing, September 2017)

N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014)

Don Simborg, MD

Don Simborg, MD, is the author of the recently released book, What Comes After Homo Sapiens? (http://bit.ly/2DqnyWI). Dr. Simborg earned his medical degree from Johns Hopkins School of Medicine and has a background in scientific research. He’s an expert in clinical information systems and has devised computer-based solutions to many biomedical problems. He has served on the faculties of the Johns Hopkins and University of California, San Francisco schools of medicine and published more than 100 peer-reviewed articles.