Professor Carlos Artemio Coello Coello is a pioneer in the field of multi-objective optimization through bio-inspired metaheuristics. He completed his PhD in computer science at Tulane University in 1996 and is a full professor in the computer science department of CINVESTAV-IPN in Mexico City. Renowned for his groundbreaking contributions, he has more than 68,000 citations and an h-index of 102, according to Google Scholar. Notably, he was among the 300 most cited computer scientists in the 2016 Shanghai Global Ranking of Academic Subjects developed by OSPHERE.

I’d like to delve into your personal journey. During your PhD at Tulane University, which you completed in 1996, you had to identify a suitable research topic. Can you briefly tell me the story that led you to work on evolutionary multi-objective optimization?

This is a long story, so I will try to be brief. When I got to Tulane for my master’s and then my PhD in computer science, I didn’t know what topic I wanted to work on. I knew I didn’t want to do software engineering or databases. First I tried programming languages, then robotics; neither worked out. Then one day, by accident, I read a paper that used genetic algorithms to solve a structural optimization problem. I decided to dedicate a course assignment to this paper, developed my own genetic algorithm and wrote the analysis software myself. This got me very excited, as I could now see how a genetic algorithm was able to produce good solutions to a complex optimization problem relatively easily. That excitement for evolutionary algorithms has stayed with me my entire life.

However, although two professors at Tulane worked with evolutionary algorithms, I decided to go with a robotics professor. He did not know much about evolutionary computing, and neither did I, but we decided we could work together. As a consequence, he could not help me find a suitable topic. Professor Bill Buckles, who worked with evolutionary algorithms, recommended that I work on multi-objective optimization, as few people had been applying evolutionary algorithms in that domain. After looking for related papers, I found my PhD topic. Serendipitously, it all came together without being planned. I believe that many great things come about by serendipity rather than by planning.

Can you elaborate on what sparked your interest in evolutionary computing?

There is a large difference between classical optimization and using evolutionary algorithms. Classical optimization mostly depends on mathematics and calculus, whereas evolutionary algorithms are inspired by natural phenomena. It fascinates me how nature has adapted species in different ways, aiming only for survival, and how such a mechanism can be so powerful at improving a particular individual. With evolutionary algorithms, we simulate this process, albeit as a coarse, low-quality version of what happens in nature.

Evolutionary algorithms have a simple framework that mirrors intricate natural phenomena, yet paradoxically yields exceptional problem-solving capabilities. Despite my efforts to understand why they are so good, I am still puzzled. I have read many papers related to natural evolution, and I try to follow new findings in popular science magazines rather than in the technical literature.

The relationship between algorithmic and natural evolution has always fascinated me. If circumstances permitted — the knowledge, time, and skills — I would devote the rest of my career to trying to understand how they operate.

How has the multi-objective optimization field evolved?

Though the domain of multi-objective optimization is relatively narrow, my journey began in an era when opportunities were abundant because there were few researchers. This allowed me to explore a diverse array of topics. The landscape has since evolved, but I’ve observed that despite a proliferation of papers, a distinct perspective is still lacking.

Why is this perspective lacking?

Researchers are somewhat hesitant to embrace challenging problems and to push the boundaries of research topics. Additionally, we struggle to provide robust explanations for our methodologies: we are well-equipped with techniques for specific problems, yet we lack a deeper comprehension of those techniques’ underlying principles. Most people focus on proposing, not on understanding. This realization has prompted a shift in my focus.

What role do you take in this development?

As I’ve matured, my priority has shifted from mere proposition to understanding. I believe that if no one else undertakes this task, it falls upon us to do so. While it is a challenging endeavour to dissect the mechanisms and reasons behind algorithmic efficacy, I consider this pursuit essential for real scientific advancement. With a deeper understanding, we might need only two or three methods for a problem rather than 200. If there is no way to classify all these methods, one cannot justify a new tool, and I don’t think it makes much sense to continue in this direction. Of course, people will keep producing, and that’s fine. But if we lack understanding, I think we will end up with a field with no future. Ultimately, my objective is to direct my efforts toward grasping existing tools before determining the need for novel ones.

How can we move towards more understanding of existing methods?

We should spend more time trying to understand the things we already have. Then we can assess what we really need and work based on the domain’s needs instead of the desire for more publications. If we lack a tool that meets a genuine need, then let’s work on developing it. Research should move in the direction of need rather than in the direction of producing numbers.

Are these questions centered around understanding why specific algorithms work?

Well, it’s not only about why they work. The question of why certain algorithms work is undoubtedly crucial, but our inquiries shouldn’t be limited to just that. A critical aspect to delve into is how to best match algorithms to applications. When presented with multiple algorithms, practitioners often grapple with deciding which one is optimal for a particular application, whether it’s for combinatorial or continuous optimization. The ambiguity lies in discerning the ideal scenarios for each algorithm.

Today, since we do not have algorithms designed for specific tasks that need no further characterization, it is equally important to understand and perhaps categorize the general algorithms we do have. We should aim to extract more information about how they operate and evaluate whether they truly are universally applicable or whether they should be tied to specific tasks.

Beyond algorithms, there are tools and techniques such as scalarizing functions, crossover operators, mutation operators and archiving techniques. There is a plethora of all of these. Yet, only a select few are commonly used, often because they’ve been employed historically rather than due to an intrinsic understanding of their efficacy. We should be addressing questions like: “Why use one method over another?” It’s these broader, nuanced inquiries that our domain needs to focus on.
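To make the scalarizing-function example concrete, here is a minimal sketch comparing two classic choices, a weighted sum and a weighted Chebyshev function. The candidate objective vectors and weights are hypothetical, and both objectives are assumed to be minimized:

```python
import numpy as np

def weighted_sum(f, w):
    """Weighted-sum scalarization of an objective vector f with weights w."""
    return np.dot(w, f)

def chebyshev(f, w, z_star):
    """Weighted Chebyshev scalarization relative to the ideal point z*."""
    return np.max(w * np.abs(f - z_star))

# Hypothetical objective vectors (both objectives minimized) and equal weights.
candidates = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
w = np.array([0.5, 0.5])
z_star = candidates.min(axis=0)  # component-wise ideal point

best_ws = candidates[np.argmin([weighted_sum(f, w) for f in candidates])]
best_ch = candidates[np.argmin([chebyshev(f, w, z_star) for f in candidates])]
```

The weighted sum is simpler but cannot reach solutions in non-convex regions of the Pareto front, whereas the Chebyshev function can; this is exactly the kind of "why use one method over another" question that deserves systematic study.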

Can you explain how evolutionary algorithms function in multi-objective optimization?

Evolutionary algorithms start with a collection of solutions, usually generated randomly. These solutions initially have low quality, but through the selection process they gradually evolve towards the Pareto front. It’s important to note, however, that while a Pareto front is generated, users typically don’t require all the solutions within it; a few, or only one, are selected in the end. But selecting the right solution on the Pareto front is not optimization; it is decision-making.

With decision-making, a subset, or even a single solution, is selected from the Pareto front based on the user’s preferences. Determining a user’s preferences can be straightforward if they have a clear trade-off in mind, but when preferences are uncertain, the algorithm generates several possibilities for users to evaluate and select from. This diverges from optimization and delves into decision-making. Thus, in multi-objective optimization, there are three distinct stages: modeling, optimization, and decision-making.
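The selection pressure described here rests on the notion of Pareto dominance. A minimal sketch, using hypothetical solutions with two objectives to be minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(pop):
    """Keep only the solutions not dominated by any other; an evolutionary
    algorithm pushes its population toward this set, the Pareto front."""
    return [s for s in pop if not any(dominates(o, s) for o in pop if o != s)]

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = non_dominated(pop)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

In a full algorithm, this dominance check drives selection generation after generation, while a separate mechanism keeps the surviving solutions spread along the front.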

I primarily focus on the optimization aspect. Other researchers, particularly in operations research, delve into decision-making, and some combine both in interactive approaches: running the optimizer for a few iterations, then seeking user input on the desired direction and generating solutions based on the user’s preferences. These interactive methods can be effective, but crafting concise and meaningful user queries is crucial to avoid overwhelming the user.

In an earlier conversation, you mentioned that the most important criterion by which you select PhD students is their passion. How do you assess passion?

Ideally, students are passionate but also excellent programmers and mathematicians. Unfortunately, students with all these skills are rare, so a balance must be found; one could say this is a multi-objective optimization problem in itself. Passion weighs heavily compared to other traits and skills in my assessment.

Passion is intricate to define but easier to recognize. When I encounter it, a sort of sixth sense guides me in differentiating genuine passion from feigned enthusiasm. One telltale sign is students who consistently go beyond the scope of assigned tasks, constantly exceeding expectations. However, this is not the sole indicator. Passionate individuals exhibit an insatiable curiosity, not only asking numerous questions about their topic but also independently delving into related areas. They bridge concepts, linking seemingly disparate elements to their work, an essential trait in research, which thrives on creative connections. For me, this indicates a true passion for the craft. Such students possess a research-oriented spirit, not solely seeking prescribed answers but uncovering avenues to enrich their understanding.

The final element involves leveraging and cultivating their skills. Even if a student excels primarily in passion, their other abilities need not be lacking. It’s rare to find a student embodying every desirable trait; more often, students excel in one facet while maintaining proficiency in others. For instance, a student might excel in passion, possess good, albeit not extraordinary, programming skills, and demonstrate solid mathematical foundations. Striking a balance among these attributes constitutes a multi-objective problem, aiming to extract the most from a student based on their unique skill set.

Why is passion so important?

I recall having a few students who were exceptional in various aspects but lacked that spark of passion. The work we engaged in, as a result, felt rather mundane and uninspiring to me. A passionate student not only strives for their own growth but also reignites my enthusiasm for the subject matter. They challenge me, push me deeper into the topic, and make the collaborative process more stimulating. On the other hand, a student who is merely going through the motions, focusing just on task completion without the drive to delve deeper, doesn’t evoke the same excitement. Such situations tend to become more about ticking boxes to ensure they graduate rather than an enriching exchange of knowledge and ideas. Simply put, without passion, the experience becomes transactional, devoid of the vibrancy that makes academic collaboration truly rewarding.

You prefer making a few valuable contributions rather than many papers that simply follow a research-by-analogy approach. Since there is typically little novelty in research by analogy, should it be conducted at universities?

The question raises a fundamental consideration: the objectives of universities in research endeavours. Research by analogy certainly has its place — it’s necessary, and over time, it has incrementally pushed the boundaries of knowledge in specific directions. For instance, in the context of multi-objective optimization, significant progress has occurred over the past 18 years, leading to the development of improved algorithms. This success validates the role of research by analogy.

However, the potential downside lies in overreliance on research by analogy, which could stifle the reception of truly innovative ideas. Novel ideas, when introduced, might face resistance within a system that largely values incremental work. Consequently, a harmonious coexistence between the two modes of research is essential. Institutions, evaluation systems, and academic journals should incentivize both. Research by analogy serves as a foundation for steady progress, while the cultivation of groundbreaking ideas drives the field forward. The coexistence ensures that while we build upon existing knowledge, we simultaneously embrace avenues leading to unforeseen territories. A future devoid of either approach would be less than optimal; therefore, fostering a balanced ecosystem ensures that the field remains vibrant, adaptive, and poised for growth.

Do you incentivize this as well in your journal?

I do my best, but it’s challenging as it’s not solely within my control. The outcome hinges on the contributions of Associate Editors and reviewers. While I strive not to reject papers with novel ideas, it’s not always feasible. Unfortunately, I must admit that encountering papers with genuinely new concepts is becoming increasingly rare. Notably, this year, I reviewed a paper for a conference featuring an exceptionally intriguing idea that captivated me. This stands as the most remarkable discovery I’ve encountered in the past 15 years. However, such occurrences are not frequent.

Computational intelligence was historically divided into evolutionary computing, fuzzy logic, and neural networks. The last decade witnessed groundbreaking developments in neural networks, particularly transformer models. What role can evolutionary computing play in this new landscape?

I posit that evolutionary algorithms, traditionally used in evolving neural architectures, have potential yet to be fully harnessed. There’s a possibility of designing robust optimizers that can seamlessly integrate with existing algorithms, like Adam, to train neural networks. There have been a few endeavours in this domain, such as particle swarm approaches, but these efforts primarily focus on smaller-scale problems. However, I anticipate the emergence of more complex challenges in the years ahead.

Additionally, someone I know firmly believes that deep learning performance can be replicated using genetic programming. The idea could be described as “deep genetic programming.” By incorporating layered trees in genetic programming, the structure would resemble that of deep learning. This is a relatively uncharted territory, divergent from the conventional neural network approach. The potential benefits? Possibly it might offer more computational efficiency or even heightened accuracy. But the real advantage remains to be explored.

While there are researchers using genetic programming for classification, it’s not a widespread application. Genetic programming has more often been harnessed for building heuristics, especially hyper-heuristics for combinatorial optimization. I suspect its limited use on individual classification problems stems from the computational costs involved. Yet I’m hopeful that with time and technological progress, we’ll see a shift.

In summary, evolutionary computing still has vast areas to explore, be it in augmenting neural networks or challenging them with unique methodologies. There’s ample room for coexistence and innovation.

Do you perceive the neural network focus as a trend or a structural shift due to their superior performance?

Many AI people will tell you that it’s fashionable. I am not so sure; I think deep neural networks are a very powerful tool, and they will be difficult to outperform. Their performance is such that I find it hard to envision any imminent rival easily surpassing them, especially considering the extensive research and development invested in this space. Perhaps in a decade or more we might witness changes, but presently they appear unmatched.

Yet, AI is not solely about the tasks deep learning is known for. There are numerous AI challenges and domains that aren’t necessarily centered around what deep learning primarily addresses. Shifting our focus to those broader challenges could be beneficial.

One vulnerability to highlight in deep learning models is their sensitivity to one-pixel attacks. By tweaking just one pixel, often imperceptibly to the human eye, these models can be deceived. Recently, evolutionary algorithms have been employed to execute these attacks, shedding light on the frailties of neural networks. Beyond merely pinpointing these weaknesses, there’s an opportunity for evolutionary algorithms to enhance model resilience against such vulnerabilities. This is a promising avenue that integrates the strengths of both deep learning and evolutionary algorithms.
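As an illustration of the idea, here is a minimal sketch of an evolutionary one-pixel attack. The "classifier" is a hypothetical linear stand-in (a real attack would query a trained network), and the search loop, population size, and pixel encoding are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier on 8x8 images with pixels in [0, 1].
W = rng.normal(size=(8, 8))
def predict(img):
    # Decision rule chosen only for illustration, not a real model.
    return 1 if (W * (img - 0.5)).sum() > 0 else 0

img = np.full((8, 8), 0.5)  # the "clean" input
orig_label = predict(img)

def fitness(cand):
    # Lower is better: score margin pushed toward the opposite class.
    r, c, v = cand
    x = img.copy()
    x[r, c] = v
    s = (W * (x - 0.5)).sum()
    return s if orig_label == 1 else -s

# (1 + lambda) evolutionary search over a single (row, col, value) perturbation.
best = (0, 0, img[0, 0])
for gen in range(200):
    children = [(int(rng.integers(8)), int(rng.integers(8)), float(rng.uniform(0, 1)))
                for _ in range(10)]
    best = min(children + [best], key=fitness)
    r, c, v = best
    adv = img.copy()
    adv[r, c] = v
    if predict(adv) != orig_label:  # label flipped: one-pixel attack succeeded
        break
```

Against a real deep network, the same loop applies with `predict` and `fitness` querying the model's class probabilities instead of this linear score; in the literature, differential evolution is a common choice for the search.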

This marks the end of our interview. Do you have a last remark?

I’d like to stress that research, regardless of the domain, holds captivating allure for those driven by passion. Passion serves as a vital ingredient for anyone dedicating their career to research. Utilizing tools can be satisfying, but true research involves unearthing solutions to uncharted problems and forging connections between seemingly disparate elements. Cultivating interest among the younger generation is paramount. Science constantly requires fresh minds, brimming with creativity, prepared to tackle progressively intricate challenges. Given the critical issues such as climate change, pollution, and resource scarcity, science’s role in crafting sophisticated solutions becomes pivotal for our survival. Although not everyone may be inclined to research, for those drawn to it, it’s a rewarding journey. While not a path to instant wealth, it offers immense satisfaction in solving complex problems and contributing to our understanding of the world. It’s a source of excitement, pleasure, and accomplishment, something I’ve personally cherished throughout my journey in the field.