The Greek artificial intelligence engineer, novelist and science writer George Zarkadakis is not convinced that we will ever develop artificial intelligence capable of thinking and acting on a par with a human being. If it is to become possible, we will need to do away with our age-old distinction between hardware and software and instead build a new kind of AI from scratch.
During the 1980s, the Austrian-born roboticist and futurist Hans Moravec made an observation that still shapes the field of artificial intelligence today. Moravec’s paradox stipulates that while it is comparatively easy to make computers perform at adult level on tasks humans find hard, such as high-level logical reasoning, it is extraordinarily difficult to give them the simple, low-level sensorimotor skills that, say, a one-year-old baby possesses.
This conundrum, so the argument goes, can in theory be resolved simply by increasing the machine’s computational power.
Many empirically minded computer scientists and technology experts believe that if we keep making computers faster, they will eventually be able to make decisions, think and act just as a human being can.
Whether you believe this or not really comes down to where you stand on the disembodiment of information: the idea that information exists independently of any body and is therefore immaterial.
This is an old idea that pre-dates computers, even modernity. The myth that body and mind are separate entities has been embedded in Western civilisation for thousands of years. Christianity, in which body, soul and spirit are seen as distinct components of a human being, is a typical example of this archetypal myth in action. And, like all good mythologies, it keeps being retold in a variety of ways.
Think of the classic dystopian robot film genre: movies like Blade Runner or The Terminator. Have you ever noticed the way the machine is always programmed so that its body is separated from its software, which is essentially its mind?
Sitting in a cafe in west London, sipping an espresso, George Zarkadakis, an artificial intelligence engineer, novelist, and popular science writer, is attempting to explain why the body and mind cannot be separated, particularly when we think in terms of the development of AI.
“Research in neuroscience and biology tells us you can’t separate the body from the mind”, he begins. “It cannot be extracted. We are our brains. So this idea of the disembodiment of information is simply incorrect”. If futurists and technology enthusiasts are ever going to fully master artificial intelligence, Zarkadakis believes the following question is key to furthering that understanding: is there a pre-existing pattern that shapes the brain, or is it the way the neurons in the brain connect that gives rise to the mind?
“If you are an idealist, you believe the former”, says Zarkadakis. “If you are an empiricist, you believe the latter. I happen to be an empiricist in that respect”.
“And if you accept the latter premise, which is the one predominantly supported by the scientific evidence in biology, then the future of robotics is doomed to failure.

“We expect the robots of the future to be like us. But they won’t be. They will be something different”, he says. “There is nothing beyond our brains. There is not a pattern there. And if there is a pattern there, it is a product of our brain, and not the other way around”.
I’m spending this Tuesday afternoon with Zarkadakis in a cafe on a busy high street in west London because he’s outlining, in thorough detail, the main premise of a book he has recently published, entitled In Our Own Image: Will artificial intelligence save or destroy us?
Ambitious in scope, the narrative touches on a wide range of subjects, including cybernetics, neuroscience, information theory, the mystery of consciousness, literature, the philosophy of mind, and the fundamental principles of logic, which stretch all the way back to ancient Athens, to Plato and Aristotle.
Using these subjects as a guiding path, Zarkadakis proposes a thesis that seeks to answer a fundamental question: is it possible to build an artificial mind that thinks and acts just like a human being does?
An artificial intelligence engineer by training, Zarkadakis takes a somewhat philosophical approach to tease out numerous arguments over the course of this fascinating book.
His thesis thus becomes an open-ended narrative, heavily influenced by the philosophy of mind and literature as well as empirical science, leaving us with many questions to ponder rather than setting one argument in stone. So why does Zarkadakis believe that artificial intelligence, as technology is currently built, cannot mimic the human brain?
The main problem artificial intelligence faces, Zarkadakis says, is also one that the field of neuroscience is presently facing: trying to understand where, why, and how consciousness takes place in the human brain.
“In mind philosophy, one of the biggest questions right now has to do with what we call the hard problem”, says Zarkadakis. “This is essentially the question: how does consciousness come about?”
If we cannot solve this problem, Zarkadakis believes, “the whole of science is based on very loose foundations”.
The dead end of computer science
I cite an interview I did some time ago with Daniel Dennett, the American philosopher of mind who likes to use scientific experiments to back up his theories of consciousness. Dennett is mentioned several times in Zarkadakis’ latest book.
When Dennett and I previously met in London, after the publication of his book Intuition Pumps and Other Tools for Thinking, he claimed it is an empirically proven fact that there is no single place in the brain where human thought and awareness of being come together. Dennett categorically states that there is nothing mysterious about consciousness, and he believes there is nothing a human being can do that a computer cannot.
Zarkadakis, however, disagrees. He points to what he sees as a fundamental flaw in Dennett’s argument: the idea that the world and the universe can be reduced to computations. Zarkadakis believes that for artificial intelligence to make any significant progress in the coming years, we must instead accept the premise that complexity arises from simplicity. This idea comes from cybernetics, the science of communication and control in animals, humans and machines. Cybernetic theory tries to explain how complex systems can exhibit behaviours that do not exist in their individual parts but emerge through combination and synergy.
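To make that emergence idea concrete, here is a minimal sketch (my illustration, not an example from the book): Rule 110, an elementary cellular automaton, updates each cell from nothing more than its two neighbours, yet the global patterns it produces are rich enough to support universal computation.

```python
# Illustrative only: complex global behaviour from a trivially simple
# local rule -- the cybernetic idea of emergence through interaction.
RULE = 110  # the 8-entry update table packed into one byte

def step(cells: list[int]) -> list[int]:
    """Apply Rule 110 once, treating the row as circular."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the 3-cell neighbourhood as a number 0..7 ...
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ... and look up the cell's new state in the rule's bit table.
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 64
row[32] = 1
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The point is not the automaton itself but the principle: nothing in the one-line update rule “contains” the structures that appear; they emerge from combination, which is exactly the behaviour cybernetics set out to explain.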
“Robotics and computing control theory all come from this central idea”, Zarkadakis says. Cybernetics, which takes its name from the Greek word for steersman (kybernetes), was introduced by the mathematician Norbert Wiener in the 1940s. It may be seen as an outdated scientific theory today, but according to Zarkadakis it is still one of the most insightful and ambitious scientific syntheses of all time.
So what makes him so sure? “Well one of the major discoveries of neuroscience — when it comes to consciousness — is that our brains resemble a cybernetic system,” says Zarkadakis. “This is not a surprise: it’s been a basic assumption about the brain since the beginning. It’s just that neuroscience now confirms this theory”.
Zarkadakis postulates that if we accept these neuroscientific results about the brain, then we really need to re-examine the way we build computers. “Right now, artificial intelligence is not based on how the different parts interact with each other”, he says. “Our computer technology is based on a completely different premise: that of logic, and on the premise of separation between hardware and software”.

While computing power and capacity have increased exponentially over recent decades, many in the artificial intelligence industry have begun to lower their expectations of how advanced machines can actually become. During the late 1980s, when Zarkadakis was completing his PhD, he says there was an almost religious belief in the field that computers could become self-aware. Since then, he claims, the whole definition of AI has changed.
“What AI can now promise is a technology that can produce deep learning systems from the huge wealth of data that we have. These can come up with new understandings of science, the environment and economics, and also help to create beneficial new drugs”.
So does Zarkadakis believe computers cannot become conscious in the way the human mind is? He hesitates slightly before committing to an answer. “My feeling is that it’s not going to go there”, he finally replies. But is there another possible route by which this could change, I ask him. “Well, if technology companies start exploring new machines that don’t have a separation between software and hardware, it is certainly possible”, says Zarkadakis.
“These machines would not be coded, and they would be trained to resemble the human brain. Firstly, the brain receives sensory input. Then, it accesses memory. But as the brain interacts with its outside and inside environment, it changes its connectome. So there isn’t a programme that runs in the background”.
Zarkadakis believes that if computers begin to be built from the bottom up, in an entirely new way, the consequences could be revolutionary for AI. “In traditional computers we have one machine and a separation between hardware and software. And depending on what software you put in, [the machine] changes: it can be moody, happy, smart, or whatever”.
“It doesn’t have a personality or a self. But a neuromorphic computer [one that resembles a mammalian brain, with no distinction between hardware and software, ed.] has the potential for forming a self and a personality, simply because the way those electronics decide to connect will be unique. This uniqueness comes about from how our bodies interact with the environment. But in order for the computer to have sensory experience of the outside world, it would have to be embodied in some kind of robotic body”.
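As a loose illustration of what “no separation between hardware and software” can mean, here is a toy Hopfield-style associative memory (my sketch, not Zarkadakis’s design): its memories live entirely in the connection weights, which are rewritten by experience rather than loaded as a program.

```python
import numpy as np

# Toy Hopfield-style network. The "behaviour" is stored in the
# connection matrix W itself -- there is no separate program that
# consults the memories, loosely echoing a brain whose connectome
# changes as it experiences the world.

rng = np.random.default_rng(0)
N = 64                                         # number of model neurons
patterns = rng.choice([-1, 1], size=(3, N))    # three "experiences"

# Hebbian learning: neurons that fire together wire together.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)                         # no self-connections

# Recall: corrupt a stored pattern, then let the wiring pull the
# state back toward the nearest memory.
state = patterns[0].copy()
state[:16] *= -1                               # flip a quarter of the bits
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)    # synchronous update

print("memory recovered:", bool(np.array_equal(state, patterns[0])))
```

With only a few stored patterns the recall usually succeeds; the point is that nothing resembling software was ever installed, only weights shaped by exposure to data.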
Zarkadakis believes that while this is the most obvious road by which computers could evolve intelligence on a par with human beings, there is also another route, albeit one that is slightly more complicated. So far, he posits, our understanding of human intelligence is anthropocentric, in that it regards the human mind as something exceptional and distinct from the rest of nature.
The antithesis to this form of human-centred intelligence is what some philosophers and mathematicians have defined as universal intelligence: this involves forgetting you are human and accepting that you are merely an agent made of matter.
“If that agent is able to achieve goals that are meaningful to its own existence, regardless of what the environment is like, then one should accept that this is an intelligent creature”, Zarkadakis explains. “An intelligent agent is something autonomous, something that is in itself.
“So if we define intelligence like that, it’s obvious that human intelligence is not the measure of all things. It’s part of something bigger”. “Theoretically, if you go a few steps deeper in that thinking, why not develop machines that are capable of evolving faster than humans?” Zarkadakis asks rhetorically.
“Humans took thousands of years to evolve. In fact, some people argue that evolution has actually finished in humans. But computers have evolved, thanks to Moore’s Law, very quickly. So if you can accelerate evolution, so the argument goes, computers can reach beyond human intelligence”.
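The “accelerated evolution” argument is easy to make concrete. Below is a deliberately simple genetic algorithm (my illustration, with an arbitrary bit-string target standing in for any fitness measure; none of this is from the book): selection, crossover and mutation run in milliseconds per generation, where biological generations take decades.

```python
import random

# Minimal genetic algorithm: a population of bit-strings evolves
# toward a target. In silico, a "generation" takes microseconds,
# which is the whole force of the accelerated-evolution argument.

TARGET = [1] * 32                    # arbitrary goal for illustration
POP, GENS, MUT = 50, 200, 0.02       # population, generations, mutation rate

def fitness(genome):
    """Count how many bits match the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                                   # perfect genome found
    survivors = pop[: POP // 2]                 # selection
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]               # crossover
        child = [bit ^ 1 if random.random() < MUT else bit
                 for bit in child]              # mutation
        children.append(child)
    pop = survivors + children

print(f"best fitness {fitness(pop[0])}/{len(TARGET)} after {gen + 1} generations")
```

Whether such acceleration could ever scale to human-level intelligence is, as Zarkadakis goes on to argue, another matter entirely.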
Much of my conversation with Zarkadakis today continues in this haphazard, scattered fashion. He spends considerable time, as many philosophers tend to, teasing out an argument at length, only to break it down again by finding a flaw somewhere in it and setting up a binary philosophical opposition.
Many of his theories, while interesting, are hypothetical. They are predominantly theories of the mind, not always built from scientific evidence gathered through laboratory experiments.
So does he believe this concept of universal intelligence, allowing computers to evolve, is possible?
“Theoretically it is”, he admits. Practically, though, can it happen? “I don’t think so”, he replies. “The reason computers have evolved as they have is based entirely on Moore’s Law, which has nearly ended already. Nature has limits. You cannot infinitely miniaturise transistors into an ever smaller space. There are all sorts of smart ways to squeeze the last juice out of miniaturisation, but this is coming to an end a lot faster than the years 2030 or 2040 that certain futurists have been claiming are the limits”.
Ultimately, though, the next stage in the evolution of intelligent machines is impossible to describe, predict, imagine or comprehend, says Zarkadakis, primarily because, if that evolution ever happens, it will produce an intelligence many orders of magnitude greater than that of our own species.
We need to start a debate about whether or not this is the route we want AI to take, says Zarkadakis. If not, we will have to start thinking about making global, collective decisions to regulate research on artificial intelligence, or, to put it more precisely, on artificial consciousness.
Such a treaty – if practically possible to enforce – would virtually obliterate the possibility of an AI singularity: a concept that would see a computer network redesigning itself, thus creating a machine with an intellectual capacity far greater than any human being could ever dream of.
“Take, for example, the military in the United States: it’s already accelerating the evolution of autonomous drones which can take life-and-death decisions in the theatre of war”.
“What you have then created is a machine that is capable of killing people. That is really where the important debate around AI needs to start, because AI is being used to take decisions about life and death. It’s a Terminator scenario”.
We’ve been chatting for nearly two hours now. Before we part ways, I return to the title of Zarkadakis’ book, particularly the latter half of it, which is loaded with an open-ended question: will artificial intelligence destroy us? “I think we have control over it”, says Zarkadakis, rather optimistically.
“If we can come together to limit the nature of biological weapons through international diplomacy, then we can certainly come together to negotiate how artificial intelligence and robots progress”.