— By Emil Rijcken

In 1991, Guszti Eiben received his PhD in computer science from the Eindhoven University of Technology. He was first introduced to AI during his master’s in mathematics at the Eötvös Loránd University in Budapest. At the time, AI was still esoteric, and Guszti Eiben could only daydream about its possibilities. Now, as a professor of AI at the Vrije Universiteit Amsterdam and a special visiting professor at the University of York, he has the tools to realize all his sci-fi dreams. Professor Eiben is a pioneer in evolutionary computing; he wrote the textbook that is now used at many universities, was the first in the world to let two robots have a baby, and is constantly pushing the field to go beyond the status quo.

In 2018, a consortium with his UK colleagues received a two-million-euro grant to research Autonomous Robot Evolution (ARE). The ARE project is building an EvoSphere, an evolutionary robot habitat that serves as a tool to study evolution and the emergence of intelligence. Fundamental evolutionary questions, relevant to both biologists and computer scientists, can now be answered. But the EvoSphere is also expected to push engineering forward; applications include autonomous robots that colonize space or clean up nuclear reactors on Earth. Professor Eiben postulates that if evolution can create intelligence, then artificial evolution can create artificial intelligence.

Professor Eiben, continuing your analogy: God created offspring, and you have created artificial offspring. Is it fair to say that you are an artificial god?

Definitely not, and I disagree with the first part of your statement, that ‘God created offspring’, because evolution created offspring. This is a big difference in starting points. To put it simply, in the beginning, physics became chemistry, and chemistry became biology. Along with that last transition, evolution emerged. We are still in the grand process of evolution, ‘the greatest show on earth’. I can only change the substrate and play a minor role in this process. The only form of evolution we know and can analyze is carbon-based: life on Earth. People like me do not introduce new principles; we just introduce evolutionary principles on a new medium.

The sequence is as follows: in the 19th century, Darwin described evolution in wetware. In the 20th century, evolution in software, ‘evolutionary computing’, was invented by computer scientists. Now, in the 21st century, we are working on evolution in hardware. So, there are two grand transitions:

    • wetware to software
    • software to hardware

The result is evolution in hardware, which is different from carbon-based life as we know it.

You are considered a pioneer of evolutionary computing; what are your main contributions?

In the late 20th century, a community arose that used search algorithms based on the Darwinian principles of selection and reproduction to solve problems, and it developed what is now known as evolutionary computing. Yet, in the early years, no one knew exactly how to fine-tune evolutionary algorithms. Firstly, I contributed to the methodology. I have ‘burnt’ three PhD students investigating and optimizing the hyperparameters of evolutionary processes. I demonstrated how the quality of evolutionary processes depends on these hyperparameters and put the issue on the research agenda. Also, I optimized the optimizer to improve the outcome of the evolutionary process.

Secondly, after waiting a long time for a good textbook, I wrote one myself (with Jim Smith, a friend and colleague from the UK). Our textbook is now perhaps the textbook of evolutionary computing; it is used at many universities and has recently been translated into Chinese.

Thirdly, I studied fundamental questions about reproduction mechanisms. For example, I investigated what would happen if we had more than two parents. In biology, we know two kinds of reproduction: sexual and asexual. Asexual reproduction consists of mutations only, while sexual reproduction requires two parents and is used by higher life forms such as Homo sapiens and fish.

As a mathematician, I saw the number of parents simply as a reproduction parameter. It does not have to be limited to two; theoretically, it could be three, four, or even more. I investigated what happens to evolution when more parents reproduce and demonstrated that having more than two parents in a crossover operator accelerates evolution. Depending on the kind of crossover operator, the optimal number is somewhere between two and ten.
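As a rough illustration of the multi-parent idea, here is a minimal sketch of a crossover that samples each gene from one of several parents; the bit-string representation and this specific operator are illustrative assumptions, not the exact operators from Professor Eiben’s studies.

```python
import random

def multi_parent_crossover(parents):
    """Create one child by sampling each gene from one of the parents.

    `parents` is a list of equal-length genotypes (here: lists of bits).
    With two parents this reduces to uniform crossover; with more parents
    the child mixes genetic material from all of them.
    """
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

# Example: combine four parent genotypes into one child.
parents = [[random.randint(0, 1) for _ in range(10)] for _ in range(4)]
print(multi_parent_crossover(parents))
```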

Lastly, and perhaps my most significant contribution: I promoted the optimization of active rather than passive objects. In classical evolutionary computing, objects like routes for a travelling salesman or the design of an industrial object are subject to optimization. I started optimizing things with agency. In that domain, I investigate and evolve active artefacts, agents, organisms, or simulated robots. In a way, these are all the same type of entity; they have a body and a brain. Having both makes them much harder to evolve but also much more interesting to study.

Before we dive into details, could you give a short introduction: what is evolutionary computing?

It is a collection of search algorithms with its own style: the evolutionary style. This style has adopted the principles of reproduction/variation and selection from biological evolution.

One could argue that all search algorithms have the same properties, namely: generate and test. The ‘generate’ step is equivalent to reproduction/variation, and the ‘test’ step is used for selection. Evolutionary algorithms are unique within the big family of search algorithms because they use a population of solutions and crossover as a search operator, combining two or more solutions. This is unique because other search methods iterate over a single solution, applying perturbations (mutations) to produce new solutions. Also, the stochastic character of selection and reproduction in a population is an important special feature.

In conclusion, evolutionary algorithms are population-based, stochastic search methods. Evolutionary computing is motivated by evolutionary principles, and its search steps can use multiple points in the search space to generate new points. We do not exactly simulate or emulate (carbon-based) evolutionary mechanisms, but we use evolutionary principles.
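A minimal sketch of such a population-based, stochastic search loop might look like this; the OneMax toy objective, the tournament selection and all parameter values are illustrative choices, not a specific algorithm from the interview.

```python
import random

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 20, 16, 100, 0.05

def fitness(genome):
    # Toy objective (OneMax): count the number of ones.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # One-point crossover combining two parent solutions.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    def select():
        # Stochastic parent selection: binary tournament.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b
    population = [mutate(crossover(select(), select())) for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))
```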

Is evolutionary computing a form of artificial intelligence?

Definitely. To position it further: roughly, AI can be divided into symbolic/top-down and sub-symbolic/bottom-up approaches. In symbolic AI, the algorithm designer is most prominent in setting the rules, whereas the designer is less prominent in sub-symbolic approaches. In the 20th century, the top-down approach was dominant, while the sub-symbolic approach has become dominant in the 21st century. With sub-symbolic AI, algorithms find solutions based on a predefined method. However, the resulting rules that produce the output are sometimes very opaque and hard to explain. Evolutionary computing is a form of sub-symbolic, bottom-up, 21st-century AI.

There are many applications of your work; which application are you most excited about?

All of them are very interesting, but some stand out because they are more visible than others.
Allow me to explain one thing first: evolution is not necessarily uncontrolled. Evolution can be supervised and controlled by humans; we refer to this as ‘breeding’. Farmers can breed species such as crops and animals. Of the two principal operations of evolution, reproduction and selection, humans have already been able to control selection for thousands of years. For instance, farmers can decide which bull and cow are coupled to make calves. Influencing the selection component of evolution for several generations effectively steers evolution towards desirable outcomes.*

Given this context, the most visible applications of my work are ‘robot breeding farms’, where evolution happens under human supervision. Humans can aid, direct, and accelerate the evolutionary process towards the desired result. Given enough resources (read: funding), we could have robot breeding farms within five years. Such a breeding farm would employ evolution as a design approach, running many generations under supervision and stopping evolution once a good solution emerges. This approach does not replace traditional ways of designing robots, but it has a niche complementary to the usual applications.

A good example of this niche is robots used for monitoring the rainforest. This problem is very complex because we have no idea what kind of robot is optimal for that environment; should the robot have wheels, legs, or both? Should it be small, to sneak through holes in the vegetation, or big, so that it can trample down obstacles? Robots designed through classic engineering methods only work in static, predictable and structured environments (e.g. warehouses). But if the environment is complex, dynamic, and not known in advance, finding a good design is very hard, and evolution is your friend. The way I put it to my students is: when the going gets tough, evolution gets going.

Based on this idea, we can use robot breeding farms to get a well-designed body and brain that operates well in our mockup forest. After that, we create many copies of the optimal robots and send them out to monitor the real environment. This is one of my favourite applications because we can do it quite quickly, and it is relevant for society.

In the long term, we could have evolutionary processes that operate without direct oversight from humans. This means a hands-free, almost open-ended evolutionary process. However, this raises both ethical and safety issues about runaway evolution. At the same time, this approach has highly useful applications, for example in space research. We could send an evolutionary robot colony to another planet and have them do what life did on Earth. First, they need to evolve and adjust to the circumstances to survive and operate for a long time. Once they can survive, they could take on human-related tasks, such as building houses or making the planet habitable for humans. This is a different approach from the breeding farm, as it is neither directed nor controlled.

Suppose that a multitude of robots is sent out to perform a task. How do robots choose between performing their task and finding a mating partner?

A time-sharing system is the most logical. For example, task execution can be (almost) permanent, while mating can be occasional, perhaps triggered by time (e.g. a mating season) or an event (e.g. meeting another robot).
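A toy sketch of such a time-sharing policy, assuming a periodic ‘mating season’ and a proximity event as the triggers (both are illustrative assumptions, not a protocol from the interview):

```python
def choose_activity(t, partner_nearby, season_period=100, season_length=10):
    """Toy time-sharing policy: task execution is the default; mating is
    triggered by time (a periodic 'mating season') or by an event
    (meeting another robot)."""
    in_season = (t % season_period) < season_length
    if partner_nearby:
        return "attempt_mating"
    if in_season:
        return "seek_mate"
    return "execute_task"

# The robot works most of the time and only occasionally tries to mate.
print([choose_activity(t, partner_nearby=(t % 70 == 0)) for t in range(0, 120, 10)])
```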

Are these actual applications, or is this a hypothetical discussion on possibilities?

The breeding farm is an actual application; it is a collaboration between the University of York, the Bristol Robotics Lab, Edinburgh Napier University and the Vrije Universiteit Amsterdam. Our goal is to develop robots capable of cleaning up nuclear power plants. As is typical for such visionary projects, we will not quite make it. However, we learn a lot and will know how to make it work if we get another four years of funding.

The space application with hands-free, autonomous robots is for the future and will take another ten or fifteen years. However, these applications are not just a matter of money and engineering. There are fundamental technical, scientific and ethical questions that need to be addressed first: how can we set up such a system so that it operates, does what we want, and does no harm?

Which fundamental questions are you most excited about?

My two favourites are ‘How can intelligence evolve from a non-intelligent beginning?’ and ‘What is the interaction between the body and the brain behind (evolved) intelligent behaviour?’. The premise is that intelligence is not just in the brain but also in the body. All forms of intelligence we know are hosted in a body, and we do not know of any intelligence that does not need one. This indicates that intelligence needs both the brain and the body. More specifically, behaviour is always determined by the body, the brain, and the environment.

Humans can walk on two legs on solid ground very well. But if you put them into the water, they sink. If the body does not change, but the environment does, then the behaviour needs to change, e.g., swimming instead of walking. This is fascinating both from a fundamental and a practical perspective.
For example, an interesting question we investigated is: ‘What is more important for intelligent behaviour, a good body or a good brain? And how do we get it via evolution?’. There were many caveats to answering this question because the answer can depend on the experimental setup, the given robot design, and environmental details. Yet, I quantified the question, answered it through experiments, and published the result at an annual artificial life conference with a student and a colleague. In our evolutionary robot system, the body turned out to be more important for intelligent behaviour than the brain.

What was your experimental setup?

We designed a system with an essential property: all possible bodies and brains could be combined into a working robot, and we measured the behaviour of each combination. Simply put, we found a technical solution so that even a fish’s brain could be put into a human’s body and work.

We generated 25 bodies and 25 brains, resulting in 625 combinations arranged in a table, and evaluated each one of them. Then, we looked at the standard deviation of the columns and the standard deviation of the rows. If the standard deviation is low, then that part is more important. To understand why, imagine that the rows are bodies. In that case, each row pairs one body with 25 different brains, resulting in 25 fitness values. If these fitness values lie in a small range, it does not matter what brain you put on that body; you always get approximately the same intelligence. However, if the fitness values are spread more widely, then the intelligence depends on the brain to a greater extent.

This is how we quantified our naïve question into a scientific question. After formulating the question, we ‘only’ had to run the simulations and fill out the body-brain matrix. In the end, we found that the spread is always smaller when the body is fixed. In this system, the body was the stronger determinant of behaviour quality.
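In code, the analysis described above amounts to comparing the spread of fitness values per body (row) with the spread per brain (column); the random matrix below is only a stand-in for the real body-brain fitness data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the 25 x 25 body-brain fitness matrix (rows: bodies, columns: brains).
fitness = rng.random((25, 25))

spread_per_body = fitness.std(axis=1)   # spread over 25 brains for each fixed body
spread_per_brain = fitness.std(axis=0)  # spread over 25 bodies for each fixed brain

# If the spread with a fixed body is smaller, the body determines behaviour
# quality more than the brain does (the finding reported in the interview).
print(spread_per_body.mean(), spread_per_brain.mean())
```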

It is not just the engineering that I find interesting about this question. I am especially fascinated by the fundamental, even philosophical aspects, the interplay between body and brain and how they develop simultaneously through evolution.

So, the body is more important for intelligence. Yet, humans have a fixed set of body parts. The optimal number of parents is greater than two. Yet, humans have only two parents. Did humans get stuck in a local optimum?

No, the optimal number of parents is also determined by practicality. More parents are less practical and require more effort and luck to mate.

Is an EvoSphere the same as a robot breeding farm?

No, not necessarily; the EvoSphere is a generic concept, while a breeding farm is one specific subtype. In the latter, the human is in the loop and supervises the selection and the infant learning process in the ‘robot nursery’.

In contrast to the breeding farm, the EvoSphere also allows for open-ended robot evolution without direct human oversight. The EvoSphere is a generic system architecture that consists of three components. The first one, the Robot Fabricator or ‘Birth Clinic’, produces robot offspring. In evolutionary terms, a genotype (the robotic DNA) is converted into a phenotype, a real robot. The second one, the Training Center or ‘Nursery’, is where ‘newborn’ robots learn optimal body control. This stage is called the ‘infant period’. During this period, robots learn new skills and cannot produce children. After the infant period, the robots become fertile and move on to the ‘arena’, where they operate and produce children. This is a generic system architecture applicable to all robot evolutionary systems, regardless of how the details are implemented.
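Sketched in code, the three-stage architecture could look like the following; the decoding and learning steps are trivial placeholders, not the ARE project’s actual implementation.

```python
# Minimal placeholders for the real decoding and learning steps.
def decode_body(genotype):  return genotype[: len(genotype) // 2]
def decode_brain(genotype): return genotype[len(genotype) // 2:]
def learn_controller(body, brain): return brain  # infant learning would refine this

def birth_clinic(genotype):
    """Robot Fabricator ('Birth Clinic'): genotype -> phenotype (a real robot)."""
    return {"body": decode_body(genotype), "brain": decode_brain(genotype), "fertile": False}

def nursery(robot):
    """Training Center ('Nursery'): the infant period; learn body control, then become fertile."""
    robot["brain"] = learn_controller(robot["body"], robot["brain"])
    robot["fertile"] = True
    return robot

def arena(robot):
    """Arena: fertile robots operate and may produce offspring (omitted here)."""
    return robot

robot = arena(nursery(birth_clinic(list(range(8)))))
print(robot["fertile"])  # True: only after the infant period can the robot reproduce
```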

Which EvoSphere component do you prefer working with?

In the last couple of years, I have been most interested in and challenged by the infancy period, which is similar to machine learning but also completely different. The best way to explain the difference is by inverting the words ‘machine learning’ into ‘learning machines’. The point is that with ‘learning machines’, you are discussing machines, either simulated or physical, that are capable of learning. Notably, a learning machine generates its own training data by performing actions, whereas in machine learning, users feed the algorithm predefined training data sets.

Learning machines are a big challenge in the context of my research: every new robot has a different body (e.g. more legs, fewer wheels, different sensors, the camera on the other side) that needs a body-dependent controller. Thus, each robot represents a new learning problem: how to control the given body optimally and ensure that the robot can operate, e.g. walk, perform tasks, survive, and reproduce.

Human babies spend a year learning to walk and grasp objects. Evolvable robots also must develop their ‘hand-eye coordination’ quickly after birth. The problem for robots is more challenging because human babies always have the same body plan as their parents (e.g. two hands, five fingers on each hand). In contrast, robot offspring can have completely different bodies than their parents. We are using some learning techniques from machine learning, such as reinforcement learning and neural networks. However, although many machine learning algorithms are potentially helpful, we do not know anything about a robot’s morphology (body plan) in advance and cannot make any assumptions about it. Hence, we need learning methods that work on all possible robots in our design space. Each new robot produced by evolution is the equivalent of a new dataset in traditional machine learning.
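The contrast between ‘machine learning’ and ‘learning machines’ can be sketched as a loop in which the robot produces its own data by acting with whatever body it happens to have; the random-search learner below is only a stand-in for the reinforcement learning or neuroevolution methods actually used.

```python
import random

def learn_controller(evaluate, n_params, trials=200):
    """Learning-machine loop: the robot generates its own data by trying
    controllers on its (arbitrary, previously unknown) body and keeping the best.

    `evaluate(params)` runs a controller on the robot and returns a score,
    e.g. distance walked; it plays the role that a fixed dataset plays in
    ordinary machine learning.
    """
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = [random.uniform(-1, 1) for _ in range(n_params)]
        score = evaluate(candidate)          # acting in the world produces the data
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy stand-in for "run this controller on this body and measure performance".
toy_evaluate = lambda params: -sum((p - 0.5) ** 2 for p in params)
print(learn_controller(toy_evaluate, n_params=4))
```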

This learning problem is only a stepping stone to the really big challenge: finding out how evolution and learning influence each other. This question has been discussed for more than a hundred years and arose in the biology community, which invented notions like ‘Lamarckian evolution’ and the ‘Baldwin effect’ that the early AI community later picked up. Forty-year-old papers investigate the combination of learning and evolution in settings that I would now describe as artificial life systems. This is a prominent issue for evolving robots because learning in the infancy stage is essential. It transforms the theoretical question about the interaction between evolution and learning into a practical one: how to combine evolution and learning in robots to maximize efficiency and efficacy?

Ultimately, I am interested in the combination of evolution and learning in one system; it would be a significant avenue to realize a new level of AI. I believe that future AI will be produced by autonomous processes rather than human developers encoding the solution. I call this phenomenon ‘second-order engineering’ or ‘second-order development’. The standard approach for developing an AI system (robotic or otherwise) is based on a developer who analyzes the problem, does a literature search, and designs and implements the target system. This is typical ‘first-order engineering’. With second-order engineering, we develop an evolutionary system that develops a solution for us, rather than us constructing an AI or robot system directly. I am convinced that second-order engineering will become more prominent in future AI.

What is the role of humans in second-order engineering?

Humans should specify the components of the evolutionary system. For instance, define the genetic language used in the genotypes, specify adequate mutation and crossover operators, formulate the fitness function, and determine the conditions for reproduction. If learning is applied, then the learning method(s) need to be defined as well.
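In code, this human role could amount to filling in a specification along the following lines; the field names are illustrative, not the interface of any existing system.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EvolutionarySystemSpec:
    """What the human engineer specifies in second-order engineering."""
    genotype_space: object                      # the genetic language, e.g. a grammar or vector space
    mutate: Callable                            # mutation operator on genotypes
    crossover: Callable                         # crossover operator combining parent genotypes
    fitness: Callable                           # fitness function scoring a robot
    reproduction_condition: Callable            # when and which robots are allowed to reproduce
    learning_method: Optional[Callable] = None  # infant-learning method, if learning is applied
```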

There are two critical issues here: sample efficiency and safety. Biological evolution is highly wasteful. It creates a lot of solutions, most of which die before they become fertile. An artificial evolutionary system cannot be too wasteful because the time scale is weeks or months rather than millions of years. Additionally, we need to ensure safety in an inherently stochastic and adaptive system that produces real robots in the real world. The obvious dangers are runaway evolution and the emergence of unwanted or dangerous robot properties. Safety and ethics are essential. Yet, not much is known about these issues as we are just starting to learn about them. However, we need to be aware of the ethical and safety issues from the first moment onwards.

In earlier media appearances, you discussed centralized reproduction. Is this a safety measure you propose?

Yes, it is, as it can help prevent runaway evolution. My solution is to reject distributed reproduction mechanisms such as laying eggs, becoming pregnant or cell division, because these would allow robots to reproduce anywhere and in any way, without us having the option to stop them. Instead, I insist that we only build evolutionary systems with a centralized unit for the (re)production of robots, the first component of an EvoSphere. This unit serves as a safety switch; once it is turned off, reproduction stops, and there will be no more robot offspring. The existing robots may not drop down ‘dead’ immediately, but at least they will not reproduce further.
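A minimal sketch of that safety property, assuming a hypothetical BirthClinic class: all reproduction passes through one centralized unit that can be switched off.

```python
class BirthClinic:
    """Centralized (re)production unit: the only place where offspring can be made."""

    def __init__(self):
        self.enabled = True

    def switch_off(self):
        # The safety switch: existing robots keep operating, but no new offspring appear.
        self.enabled = False

    def reproduce(self, parent_genotypes, recombine):
        if not self.enabled:
            return None  # reproduction stopped; runaway evolution is prevented
        return recombine(parent_genotypes)

clinic = BirthClinic()
child = clinic.reproduce([[0, 1, 1], [1, 0, 1]], recombine=lambda ps: ps[0][:2] + ps[1][2:])
clinic.switch_off()
print(child, clinic.reproduce([[0, 1, 1], [1, 0, 1]], recombine=lambda ps: ps[0]))
```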

How is the fitness function defined for autonomous robots?

The research community’s standard approach is to have one task and equate fitness with task performance (e.g. for a robot that should be fast: fast robots will have many children, and slow robots will not). This guarantees that evolution creates robots that are good at that task. I am trying to nudge the research community to go further, to take on more complex tasks with practical relevance and to consider multiple tasks simultaneously. To survive, robots need to be good at many tasks.

Let’s assume we are sending a robot colony to a distant planet. There are multiple tasks to perform. How would you define the fitness function?

Here, we should distinguish between skills and tasks. The number of combinable, elementary skills necessary for complex tasks is relatively small, fewer than ten. Take locomotion: a robot has to walk. Then, locomotion should be targeted: a robot should learn how to move to a specific target and avoid obstacles. Subsequently, a robot needs to learn to manipulate objects. Robots will learn this set of elementary skills in the ‘robot school’, and the skills then serve as stepping stones for performing more complex tasks.
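One simple way to combine such a small set of elementary skills into a single fitness value is a weighted sum; the skill names, weights and normalization below are illustrative assumptions rather than the project’s actual fitness function.

```python
def multi_skill_fitness(scores, weights=None):
    """Combine per-skill scores (each normalized to [0, 1]) into one fitness value."""
    skills = ["locomotion", "targeted_locomotion", "obstacle_avoidance", "object_manipulation"]
    weights = weights or {s: 1.0 for s in skills}
    return sum(weights[s] * scores.get(s, 0.0) for s in skills) / sum(weights.values())

print(multi_skill_fitness({"locomotion": 0.9, "targeted_locomotion": 0.6,
                           "obstacle_avoidance": 0.8, "object_manipulation": 0.3}))
```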

Let’s focus on the evolution of things and second-order engineering. Can you imagine a context in which a robot would develop consciousness, morality or emotions in order to perform tasks?

Let me first define ‘the evolution of things’. Earlier, I described the transition from wetware to software and from software to hardware. A similar sequence applies to evolution: from the evolution of living organisms to the evolution of code (evolutionary computing) and then to the evolution of things (robots).

The question about consciousness is hard to answer; it is more fundamental and philosophical. I cannot say whether they will or could have consciousness. The following analogy is the easy way out: if it walks like a duck, looks like a duck and quacks like a duck, then it probably is a duck. If the robots’ actions match our standards of morality, we could call them moral robots, regardless of the mechanism that drives this behaviour. Moral behaviour is designable and desirable; the robots need to adhere to our standards.

But could this also be an outcome of an evolutionary process?

Having evolution or any other adaptive, emergent process at work does not mean that we cannot control it. We must develop the technology and science to control these emergent processes and ensure they respect our constraints, which we could call moral or ethical borders. However, setting such constraints comes down to one of the biggest questions in bottom-up, sub-symbolic AI: how to limit evolutionary processes without disabling them? In other words, how to keep evolution within our ethical borders without curtailing its behaviour too much. I have no answer and can only emphasize that this needs further attention.

Suppose you have one robot that can choose between two robots for reproduction. Both robots are identical in terms of functionality. Is it plausible that a robot bases its decision on aesthetics?

Based on our engineering perspective, we are inclined to choose robots based on functionality and usefulness. But life does not work like that. The idea that you propose is very good, and we are investigating it right now. This approach differs from the usual evolutionary algorithms in two ways. Firstly, the selection is not made centrally, while almost all artificial evolutionary systems have a centralized protocol, ‘the manager’ (technically the main evolutionary loop), that decides which robots mate with which other robots, based on complete information about each population member. This is a desirable property for algorithmists but not for the artificial life community. Therefore, the first change is to enable robots to select mating partners themselves. Secondly, the selection criteria for mating partners should be changed. Currently, two robots can meet each other and decide whether they want to have a “baby” purely based on utility (task performance). In the new approach, we extend or replace this criterion with one related to the morphology of the robots: beauty, if you will. Typically, utility is linked to behaviour: ‘Tell me how many soil samples you collected in the forest, and I will tell you whether I want to have a baby with you’. Alternatively, you can change it to: ‘I look at you, and I will tell you whether I want to have a baby with you’. So yes, aesthetics-based selection is possible, hugely exciting, and we have just started investigating it.
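A toy sketch of that decentralized mate choice, contrasting a utility-based criterion with an aesthetic (morphology-based) one; the attributes, thresholds and the notion of ‘appeal’ are illustrative assumptions.

```python
import random

def wants_baby_utility(me, other, threshold=0.5):
    """'Tell me how many soil samples you collected...': decide on task performance."""
    return other["task_performance"] >= threshold

def wants_baby_aesthetic(me, other):
    """'I look at you...': decide on how appealing the other's morphology is to me."""
    appeal = 1 - abs(me["preferred_limb_count"] - other["limb_count"]) / 10
    return random.random() < appeal   # more appealing bodies are chosen more often

me = {"preferred_limb_count": 6}
other = {"task_performance": 0.3, "limb_count": 5}
print(wants_baby_utility(me, other), wants_baby_aesthetic(me, other))
```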

Is this aesthetics-based approach interesting for engineers as well, or primarily for the artificial life community?

For engineers, it is less interesting, as they are utility-oriented. It is primarily interesting for artificial life, theoretical biology or philosophy. An interesting question is: what kind of bodies/morphologies do you get if you have morphologically driven selection? The peacock is a prime example that I always have in mind. Peacocks have a fantastic morphological feature: their massive tails. But although the tails are beautiful, they are utterly useless and even dangerous. The tails make peacocks easier for predators to catch, and they require the peacock to eat more food. Yet, this morphological property heavily influences whether a peacock will reproduce or not. I am curious to see whether we would see this phenomenon evolving in a robot system as well. If our evolutionary mechanisms capture the fundamental properties, we could create an artificial evolutionary system with the same attractors as carbon-based evolution. Carbon-based evolution took millions of years to develop and is very complex. Artificial evolution has only been developed for a few decades and is much less complex, so it is not a done deal that this is possible.

However, it is extremely exciting; finding such effects would indicate that we understand the essence of evolutionary systems, that there is something fundamental about evolution as such that we can capture, regardless of the substrate. That would also give a hint about life on other planets. If all kinds of evolutionary systems are similar, then evolution on other planets could also be similar.

You state that, just as artificial intelligence has changed our view of intelligence, artificial life is likely to change our view of life. How do you think our view of life will change as a result of artificial life?

The notion of life will no longer be restricted to carbon-based life, which is the only kind of life we currently know. If many scientists agree that evolving robot systems constitute life, it will be acknowledged that life can have a different basis. Other life forms can be digital, mechatronic or based on new materials with new forms of actuation and sensing. This means that the criteria for determining whether something is living will change; they need to be about functionality rather than about ‘incarnation’ or instantiation. A broader definition of life will enable more generalizable research on life. As a scientist, you do not want to draw conclusions based on a single sample. However, currently, we only have one sample of life. More samples would lead to better-founded conclusions and better insights into what life is about.

In this regard, it is important to note that life as we know it is only moderately observable, hardly controllable, and not really programmable, which makes it hard to study experimentally. But robots and artificial organisms are observable, controllable, and programmable. For instance, it is possible to capture robot communication by registering Wi-Fi signals, and internal processes can be logged on a black box inside the robot. In principle, this could cover everything: all sensory inputs, all information processing in the robot brain, all control commands, battery levels, etcetera. Such data can be stored and analyzed offline, or used in a control loop to probe the system online, during its operation. This provides us with an extended set of tools to study and understand life and intelligence.

Ultimately, evolutionary robot systems represent a radically new kind of research instrument that can help us understand the emergence of intelligence. The key open question here is: ‘How did intelligence emerge?’, and as of today, even the simplest answers are lacking. For example, is the process of acquiring intelligence linear, stepwise, or does it follow a hockey-stick curve? Evolving artificial life systems allow us to investigate these questions, which would be an enrichment of artificial intelligence as we know it.

That concludes the interview, Professor Eiben. Do you have any last remarks?

Emergent intelligence and second-order engineering carry very significant risks. These risks have to be addressed from the beginning and from the ground up while developing such systems. If we only try to mitigate them once they occur, it will be too late.

*Directly interfering with reproduction became possible after genetic manipulation was invented. This is now ethically debated but technically possible.