AI: Replication without understanding?

Artificial intelligence is a topic that is never far from my mind. But the first thing conjured up in people’s minds whenever I bring it up is the evil AI portrayed in movies and on TV. You know the ones: Skynet, Agent Smith, HAL, the Cylons, etc. In these stories, inevitably, as soon as the AI becomes self-aware, it turns the full brunt of its vast, incomprehensible intelligence toward the destruction or control of its human masters. Clearly, there is a deep-rooted fear in the public consciousness that we will be unable to control any self-aware AI we create, probably caused by a perception that we will never truly understand general intelligence/consciousness and how it works. I want to make a distinction here between the general field of AI, which falls into the two camps I’ll discuss below, and the study of true AI, which involves creating a self-aware agent. I believe one of the main problems limiting progress on the latter is that the tools and approaches of the former are wholly inappropriate for any endeavor to discover and recreate how a mind thinks.

The bulk of artificial intelligence research at present is divided into two main factions: 1) machine learning and 2) biologically realistic simulation. Machine learning is a conglomeration of techniques and algorithms that rely on heuristics to evolve a desired behavior. In other words, a machine learning algorithm runs itself iteratively, evaluates its output against some measure of quality, and adjusts its parameters until it performs well on test data. At the other end of the spectrum, we have biologically realistic and detailed simulations of neuronal networks (brains), such as Henry Markram’s Blue Brain Project.
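To make that evaluate-and-adjust loop concrete, here is a minimal sketch in Python of one common instance of it: gradient descent on a toy linear model. The data and parameter values are invented purely for illustration; real systems differ enormously in scale, but the core cycle is the same.

```python
import random

# Toy data: y = 2x + 1 plus noise. (Invented purely for illustration.)
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]

w, b = 0.0, 0.0   # the parameters the loop will adjust
lr = 0.01         # learning rate: how hard each adjustment nudges them

for step in range(2000):
    # 1) run the model, 2) score its output, 3) adjust the parameters
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y              # prediction minus target
        grad_w += 2 * error * x / len(data)  # mean-squared-error gradient
        grad_b += 2 * error / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")   # converges near w=2, b=1
```

Nothing in the loop knows what w and b mean; it only knows that the error went down.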

Although quite different, both of these approaches suffer from the curse of replication without understanding. The first replicates function, but given a set of neural network weights or genetic algorithm parameters, it is impossible to tell what was actually relevant to the desired function. It is all too easy to write a machine learning algorithm that latches onto a completely unintended feature. Machine learning is clearly quite useful in many cases, but, in the end, it essentially only produces smarter computers. I believe machine learners will never amount to true intelligence and will always be limited by the intentions and capacities of their human creators.
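A contrived sketch of that failure mode, with invented data: suppose the feature we intend the learner to use is noisy, while an accidental artifact (say, a hypothetical watermark present on all training examples of one class) separates the classes perfectly. A learner that simply optimizes training accuracy will pick the artifact, and nothing in its parameters announces that it did.

```python
import random

random.seed(1)

def make_example(label, spurious_correlated):
    # The feature we *intend* the learner to use: noisy, weakly separates classes.
    intended = label + random.gauss(0, 1.5)
    # An accidental artifact (hypothetical watermark): in training it happens
    # to match the label perfectly; in deployment it is uncorrelated.
    accidental = label if spurious_correlated else random.choice([0, 1])
    return (intended, accidental, label)

train = [make_example(label, True) for label in [0, 1] * 100]
test = [make_example(label, False) for label in [0, 1] * 100]

def accuracy(data, feature, threshold=0.5):
    return sum((ex[feature] > threshold) == ex[2] for ex in data) / len(data)

# A crude learner: keep whichever single feature best fits the training set.
chosen = max([0, 1], key=lambda f: accuracy(train, f))
print("feature chosen: ", "accidental" if chosen == 1 else "intended")
print("train accuracy: ", accuracy(train, chosen))   # ~1.0
print("test accuracy:  ", accuracy(test, chosen))    # collapses to ~0.5
```

Inspecting the learned decision afterward (a threshold on feature 1) tells us nothing about whether feature 1 was the one we meant.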

The second approach, biologically realistic simulation, could conceivably reproduce the physiological dynamics of real brains, and perhaps one day even recreate complex cognitive functioning, but these models are even more inscrutable in terms of allowing a clear understanding of the processes at work. When you have thousands of parameters to adjust to make something function, can you say you truly understand how it functions?
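For a sense of where those parameters come from, here is a sketch of a leaky integrate-and-fire neuron, near the simplest end of the biological-realism spectrum. The constants below are illustrative textbook-style values, not fitted to any real cell; projects like Blue Brain use far more detailed neuron models, multiplying these knobs by orders of magnitude.

```python
# A leaky integrate-and-fire neuron: even this minimal model carries a
# handful of free parameters. (Values are illustrative, not fitted.)
tau_m   = 20.0    # membrane time constant (ms)
v_rest  = -65.0   # resting potential (mV)
v_th    = -50.0   # spike threshold (mV)
v_reset = -70.0   # post-spike reset (mV)
r_m     = 10.0    # membrane resistance (megohms)
dt      = 0.1     # integration time step (ms)

v, spike_times = v_rest, []
for step in range(int(200 / dt)):              # simulate 200 ms
    t = step * dt
    i_inj = 2.0 if 50 <= t < 150 else 0.0      # injected current pulse (nA)
    # Euler step of: tau_m * dv/dt = -(v - v_rest) + r_m * i_inj
    v += dt * (-(v - v_rest) + r_m * i_inj) / tau_m
    if v >= v_th:                              # threshold crossed: spike
        spike_times.append(round(t, 1))
        v = v_reset                            # and reset
print(f"{len(spike_times)} spikes at (ms): {spike_times}")
```

Every one of those constants must be tuned against data before the model behaves, and a whole-brain simulation multiplies them into the thousands.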

So is it possible to actually understand the intelligences we create? I believe it is. We’ve just been looking in the wrong places. Instead of trying to replicate the hardware or function, we should try to replicate the software. If we wanted to figure out how a computer worked, we could recreate the chassis and every bolt, try to mimic the transistors as closely as possible, even try to recreate the same current levels that we can measure during normal operation (or as closely as we can get it to normal operation given that we’ve opened it up and attached measuring devices). Alternatively, we could declare that we don’t care how it works, just that it works, and try to recreate its functioning with wild guesses based on our intuition and a lot of training instances. But neither of these gets us even close to understanding the complex code that governs a computer’s ultimate functioning: the software.

This raises the question: what is the software of a brain?

It’s not that we don’t study it. In fact, we’ve studied it for hundreds of years, and still avidly do to this day. That field is psychology, a broad discipline encompassing everything from cognitive psychology to clinical practice. Cognitive psychology, a relatively new subfield, focuses on investigating how minds work using scientifically rigorous methods, and I believe it holds the best chance of uncovering the secrets of true AI. However, I believe progress here is currently limited by two factors: 1) a lack of sophisticated computational and modeling techniques, and 2) no overarching paradigm to guide and organize understanding.

This is not to say that important advancements aren’t currently being made in psychology, but I believe that if we are ever to attain the level of understanding of consciousness and intelligence required to create true AI, major changes are needed. We need to wholly incorporate sophisticated modeling tools and computational simulation into the field of psychology, similar to what has happened with biology. Since the advent of these technological tools and frameworks, biology has exploded into extremely fruitful subfields such as bioinformatics and systems biology. Perhaps the same innovation can occur in psychology, leading to new, more complex models that can be tested computationally and empirically.
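As a taste of what “models tested computationally” can mean, here is a sketch of one model family cognitive psychologists already use: a drift-diffusion model of two-choice decisions, in which noisy evidence accumulates until it hits a bound, yielding both a choice and a reaction time. The parameter values below are illustrative, not fitted to any experiment.

```python
import random

random.seed(42)

def ddm_trial(drift=0.3, bound=1.0, noise=1.0, dt=0.001):
    # Noisy evidence drifts toward one of two bounds (+bound or -bound).
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * random.gauss(0, dt ** 0.5)
        t += dt
    return ("A" if evidence > 0 else "B", t)

trials = [ddm_trial() for _ in range(2000)]
p_a = sum(1 for choice, _ in trials if choice == "A") / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(choose A) = {p_a:.2f}, mean RT = {mean_rt:.2f} s")
```

The simulated choice proportions and reaction-time distributions can then be compared directly against human behavioral data: exactly the computational-plus-empirical testing argued for above.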

The second factor is crucial, but it’s also the most difficult to address. Before Darwin’s theory of evolution, biology arguably amounted to little more than detailed taxonomy and cataloguing. An understanding of natural selection, however, provided a scaffolding to which all other insights could be anchored, imbuing them with larger significance. Evolution supplied a crucial context that gave individual findings meaning, and hence the ability to understand much larger pieces of the puzzle at a time. Both AI and cognitive psychology are still a long way from having anything resembling an overarching framework like evolution. But perhaps simply recognizing this limitation is a step in the right direction.

So can true artificial intelligence eventually arise from the understanding created by marrying cognitive psychology with modeling? It’s no golden ticket, and nothing in life is guaranteed, but I think it’s our best shot. I also think there’s no chance of accidentally creating an evil megalomaniacal artificial intelligence hellbent on destroying all humankind, but that’s a discussion for a whole other blog post.

—Jane Wang

*Cover photo from http://www.referenceforbusiness.com/encyclopedia/A-Ar/Artificial-Intelligence-AI.html#b