Interview: Luc Steels

By Tom Lenaerts


What is your academic background and how were you drawn into the field of Artificial Intelligence?

I actually started by studying languages and literature, without any awareness of computation at all: first at UFSIA in Antwerp, and afterwards as one of the first generation of UIA students, also in Antwerp. When I started, there was no computer in sight. You have to imagine the distance between language studies and computers at that time. Yet there was a course in linguistics about language processing by Jacques Noël from Liège, and he was very well informed. He talked about processing and information retrieval, but he also gave a few courses on what was happening in Computational Linguistics and AI. He lectured about Terry Winograd, for example, and about what was going on at MIT or Stanford.

These lectures motivated me to participate in a summer school on computational linguistics in Pisa in 1972, organized by Antonio Zampolli, a big man in language processing in Italy who died a few years ago. Being there was incredible, because many important people like Terry Winograd, Bill Woods, Roger Schank, Charles Fillmore, and Martin Kay were there. I mean, all these people who had built these amazing systems like SHRDLU were presenting at that meeting. This was an incredible world that opened up for me, and I had the unique opportunity to make contact with them, entering an exciting new field. That was how I got into it, and I then did my thesis in this area. After my PhD, I somehow managed to get into MIT. As a result, I found myself in 1977 in the office of Marvin Minsky as a new student at the AI Lab of MIT, starting a new adventure.

How do you look back on the evolution of AI research in Belgium?

When I was at MIT, there was nothing going on in Belgium. At the MIT AI Lab, with Seymour Papert and Minsky as my advisors, I got catapulted into the middle of the action (Patrick Winston, Gerry Sussman, David Marr and many other important AI researchers were also at MIT at that time). There was also an interesting connection with Stanford, since John McCarthy was very good friends with Minsky. AI, as a field, was still very young, and there were maybe five labs in the US performing research in that field. While at MIT, I was also one of the first users of the LISP machine. After MIT I went to work in a company, Schlumberger, which was one of the first to start applying AI, more specifically expert systems. Through this experience, I got to know people like Edward Feigenbaum and many other key people from Stanford, who were collaborating on our Dipmeter Advisor project: we built one of the first industrial expert systems for the interpretation of logging data from oil exploration. I thus experienced both the world of the LISP machine at MIT and the world of expert systems, whose initial ideas were coming out of Stanford.

After a short stay in Paris learning more about technology transfer, I came back to Belgium in 1982 to start the AI lab. But in Belgium the landscape was pretty empty. I remember that I was proudly shown the CDC Cyber computer, where you could log in remotely so that you no longer had to use punched cards, whereas I was already used to desktop computers with windows, a mouse interface, a local area network, laser printers, etc. MIT and other AI labs were totally ahead of what was available in Belgium. So, when I came here, I imported all this technology: we had the first local area network, the first LISP machines (in Belgium and even on the continent), the first laser printer from AGFA Gevaert, and so on. I tried to equip the lab with the most advanced technologies, and we used them to develop our first AI projects. 1982 and ‘83 were also the years in which the European framework programs started. With my contacts, we submitted an AI proposal, which was accepted. A lot of money suddenly appeared in Brussels, and the university didn’t know how to handle this.

After the start of the AI lab in 1982, I also founded the Belgian AI association. We had our first meeting in September of that year at the Vrije Universiteit Brussel; I actually recently found the poster of that event again. There was a group working on LISP in Liège, and there was some interest in logic programming at UCL as well as in Leuven with Maurice Bruynooghe. Computer science as a department was nonexistent. Here and there, there was somebody in a mathematics department or in engineering, but basically there was no serious infrastructure for computer science, no real labs. The mathematicians were totally surprised that I actually wanted to have computers and even some rooms to put computers in. “Why do you need computers?” they asked. Mathematicians interested in computer science were very much influenced by Dijkstra at that time: you first prove the correctness of your program before you type it in.

So, doing AI research in Belgium was very difficult. My experience has taught me that it was actually always a struggle, and that things were therefore very much dependent on European grants. There were some sporadic opportunities in Belgium, like an action by the Minister of Science in 1986 (Guy Verhofstadt, at that time), which was in reality targeting computer science and not AI. Nonetheless, he had the vision that something big had to be done. There was also a big action around that time by the IWONL (which afterwards became IWT and is now part of the Agency for Innovation and Entrepreneurship of Flanders), and then there was a big wave of interest in expert systems in the 80’s. We collaborated with many companies in that period, like ACEC in Charleroi, BELL in Antwerp, BARCO and many more. We built different applications and even had some spinoff companies avant la lettre.

It was also in that period, in the context of the ESPRIT framework, that we made contact with the AI community in the Netherlands. I had a lot of interactions with the group of Bob Wielinga and Joost Breuker in Amsterdam, starting, I think, in 1984-1985. We had a few big projects together on the knowledge level in AI. Together with Bob, I also started the AI Communications journal. I was program chairman of the first big AI conference in Europe, organized in Brighton, and I was one of the founding members of ECCAI (now EurAI), which explains why this organization is legally a ‘Belgian’ organization.

It was a fantastic time, since everything had to be set up and there was a lot of interest from industry. At some point in that period the lab easily consisted of 30 people.

You mentioned this limited support for AI research and the need to get European funding to perform this kind of research in Belgium. Compared to Belgium, the Netherlands seems to have invested more in AI, with specialized bachelor programs, etc. Why not in Belgium?

It is not because we did not try. The reason, I think, is that Belgium is first of all a very small country, and then you have Flanders, which is even smaller. What I see, and not only in AI, is that there are people who are very much ahead in their field; they are explorers. But when it is a matter of following up and institutionalizing their efforts, then it doesn’t work. It’s partly the lack of money due to the scale. As a consequence, most of these explorers move away, leading labs in other locations in Europe or beyond. People coming from the VUB AI lab can be found in labs everywhere in Europe and the US, and sometimes they created their own groups, like for instance Walter Daelemans in Antwerp, whose group is like a spinoff of this lab. More recently, Tony Belpaeme created his robotics lab in Plymouth, and Pierre-Yves Oudeyer, who was also here, created a fantastic lab in Bordeaux. These people went elsewhere since there was sadly no strong basis to build things in Brussels.

Can you tell us more about your vision for the future of AI?

First of all, I always moved between academia and industry. I find this very important. Nowadays, universities are pushed to do application projects, but actually, if you really want to do good applications, you should become a start-up or go into a company. It is very frustrating for both sides if you don’t: industrial applications are best done in industry, otherwise it is simply cheap labor from the university, which ends up being exploited. Moreover, from the other side, industry will say: “This is not industry-quality software that is being delivered here”. Of course not, it’s done by PhD students and postdocs. So, in my opinion, this model is not good. That was also my reason for developing activities in Paris with Sony. This allowed me to do really fantastic things, and that lab is still doing that. I also always tried to find out how I could move forward in AI. Of course, we have done a lot of applications to earn money to pay for our computers, people, postage stamps, telephone, etc. You have to do this. But it was always, I would say, a sort of application-driven basic research. We did this for our knowledge engineering research in the 80’s, and in the 90’s for behavior-based robotics.

Yet, whenever I felt that we were reaching limits, I started to look around. In this way, we made the transition to the Artificial Life community in the late 80’s/early 90’s, with people like Chris Langton, Rodney Brooks, Rolf Pfeifer, and many more visiting the lab. I completely changed the lab at that time: in the 80’s we were working with LISP machines and knowledge-based systems, and then when you came back in the 90’s we had small robots and LEGO vehicles driving around. The motivation was to find new avenues for AI, and that movement was also very successful in terms of research and spinoffs: many people now have an iRobot, which is maybe not such a fantastically intelligent robot, but it is in the living rooms of millions of people, and it was originally programmed in LISP, by the way.

But then I again felt the limitations of staying in that area and decided to concentrate on language. But instead of doing the standard thing, I thought we should focus on the evolution of language, and that is of course due to the influence of Artificial Life thinking: evolution, self-organization, the use of complex systems and all that. So, from the late nineties, I tried to import evolution into the language field.

This also explains why I chose to embed myself in an evolutionary biology lab in Barcelona, in a way becoming a student again, in order to really understand how these people think. This choice was also motivated by the opportunity of working very closely with Eörs Szathmáry, a top evolutionary biologist. Just as it was very important in the 70’s to really understand how computer scientists were thinking, I now find it important to understand how evolutionary biologists think.

But in the meantime, there is this new explosion of neural network research. We also worked on neural networks in Brussels in the 80’s with a group of complex systems researchers. Tony Bell, for example, who is now in California, did his thesis here in the 80’s on that topic. He became pretty big in this field. Neural network research was a wave at that time that went down again. This particular wave is now booming again and is kind of rolling over the rest, but you can predict that it will slow down.

Notwithstanding this wave, I think the real future of AI is in evolutionary thinking. If you look at genetic algorithms, which we already worked on with people like Bernard Manderick in the 90’s, they offer a very crude and limited view of evolutionary biology. We can learn so much more from real biology, both molecular biology and evolution.
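To see what Steels means by "crude", it helps to look at what a classic genetic algorithm actually is. The sketch below is a hypothetical toy (not code from the VUB lab): it evolves bit-strings toward the textbook OneMax objective using the three standard operators, tournament selection, one-point crossover, and bit-flip mutation. Everything about real biology, development, ecology, open-ended innovation, is absent.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits (the classic OneMax toy problem)."""
    return sum(bits)

def evolve(pop_size=30, genome_len=20, generations=60, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    # Random initial population of bit-string "genomes".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if one_max(a) >= one_max(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, genome_len)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with a small per-bit probability.
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))  # should converge to or near the optimum of 20
```

The entire "biology" here is a fixed fitness function and two string operators, which is exactly the limitation Steels points at: nothing in this loop can invent a new kind of structure the way living systems do.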

Will you continue with language evolution in the future? Are you moving to real humans now, leaving robots behind?

No, no, for me this area of language evolution is very important and exciting. But it is extremely difficult to get money for it. You must also remember that this problem of money has always existed. I remember one proposal we wrote on genetic algorithms in ’92. The review came back stating that genetic algorithms are totally useless, will never find any application, and that our project should therefore not be accepted. Of course, now genetic algorithms are an industry in themselves, with big conferences and different spinoffs. We also did a lot of work in the nineties with Walter Van De Velde on agents for electronic commerce. We had an IWONL project to show the industry what it could do. At that time, the World-Wide Web was in its infancy, and people who had internet at home were extremely rare. We had one of the first websites in Belgium. We showed this to companies with different demos: you could browse to find products, you could see pictures, you could buy, … all very advanced at that time. Then at the end the sponsors said: “Well, Mr. Steels, how many people have access to the internet?”. At that time, I had to admit it was only a few hundred, yet in hindsight … So, it was always a struggle to convince people. Afterwards they tell you, “why didn’t you tell us?”, but we actually did. And now this is happening again with this new interest in AI. Everybody talks about this wave of statistical processing of data. We have good people in Belgium doing this, but not the critical mass to make the difference. So now the industry is coming again with a wish list of things they want: “Can you do this project, can you do that, …”. But we don’t have the human resources, as there was no support a decade ago to grow these resources. This should clearly change.

Looking at your career in AI, you have done so many different projects. Was there ever a topic you could not investigate due to lack of time, people, etc.? Does anything stand out?

At the moment, this research in language evolution is, let’s say, a little bit on the back burner, because we don’t have the money. We had it for a while, as you know, but then we moved to grammar, and it became impossible to get money for it. Even the AI people would say, “why do you need this?”. It’s very technical, and when you use the word grammar, people are afraid because they remember primary school. The linguists are not interested, because people like Chomsky believe that language doesn’t evolve, so that’s the end of that. We are doing, I think, important work in grammar, very technical work with a few excellent people here, but it is a small community. Nevertheless, it may have a big impact in the future.

But my personal interest is in something bizarre that is happening, which is that AI is now in the collective imagination, appearing as a kind of wonder, you know, a magical thing. You have technical projects in which they talk about immortality, about brain-computer interfaces, about mind uploading, about agents that are replicas of yourself. I’m interested in the cultural impact of AI on collective beliefs: how people are trying to make sense of themselves, the difference between mind and body, the future of humanity. What you see is that all the things we work with, like agents, the cloud and so on, get interpreted in terms of concepts that are actually religious concepts, or used to be religious concepts. For instance, the afterlife: an agent is like an angel, and then of course the concept of the devil also emerges. I find this cultural impact very fascinating. At the moment, the Ars Electronica festival is opening in Linz, which is the biggest festival in the world about technology and art, and the theme this year is AI. There is a talk on AI and spirituality, with the keynote lecture given by a Buddhist monk, which is quite amazing. I’m studying this phenomenon, but in order to play a role (you could write papers, but nobody would read them), I wrote an opera instead.

There is currently huge interest from companies in AI, which they mostly equate with data science and deep learning. What is your perspective on this “hype”?

In principle, I think it is great! But we have to manage it somehow. I think there are two roads to AI: the knowledge-based road, which has dominated AI research in the past, and the data-based one. In the knowledge-based approach, you try to get a grip on the knowledge of humans and then use that to build a system. In the beginning, this was done by analysis, by talking to the human expert, and also by machine learning; but at that time it was symbolic machine learning, like inductive logic programming. The other perspective is that of data science, which in many cases actually consists of statistical approaches, sometimes with neural networks, and particularly deep learning. But it is essentially statistical analysis of data.
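The two roads can be contrasted in a few lines of code. In this hypothetical toy (all names, numbers, and the loan scenario are invented for illustration), the knowledge-based road is an expert writing a decision rule directly, while the data-based road estimates a similar decision boundary statistically from past cases:

```python
# Knowledge-based road: a human expert states the rule explicitly.
def loan_rule(income, debt):
    """Hand-crafted expert rule (hypothetical): approve if the debt ratio is low."""
    return "approve" if debt / income < 0.4 else "reject"

# Data-based road: the decision boundary is estimated from labelled examples.
def fit_threshold(examples):
    """Pick the debt-to-income cut-off that best reproduces past decisions."""
    ratios = sorted((debt / income, label) for income, debt, label in examples)
    best_t, best_correct = 0.0, -1
    for t in [r for r, _ in ratios] + [1.0]:
        correct = sum((label == "approve") == (r < t) for r, label in ratios)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Invented historical data: (income, debt, past decision).
history = [(50000, 10000, "approve"), (40000, 30000, "reject"),
           (60000, 18000, "approve"), (30000, 20000, "reject")]
t = fit_threshold(history)

# Both roads classify a new applicant; the expert rule can also say WHY.
print(loan_rule(45000, 9000))
print("approve" if 9000 / 45000 < t else "reject")
```

The point of the contrast: the expert rule carries its own explanation (the threshold means something to a human), whereas the fitted threshold is only as good, and as interpretable, as the data behind it, which is exactly the tension Steels returns to below when discussing explainable AI.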

The great thing about AI is that it is a very dynamic, open and creative field. There are always other sciences that at some point feed into it. In the beginning, this was logic. The direct line from the great logicians, the logical empiricists like Reichenbach or Russell, to McCarthy, who was a student of Reichenbach, provided an infusion from the logic side. Afterwards there was an infusion from computer science, followed by an infusion from biology with genetic algorithms and things like that. Currently we see a similar infusion from statistics and complex systems science. AI is a field that is open enough to have this inflow, which turns things upside down. This is great; it is a very good feature, because there are many fields, like for instance linguistics, where the field is like a fortress: new ideas don’t get in. They create schools of thought, which are tightly controlled, which is bad. So, I think these new waves are all very good, but people have to realize that a lot of the machinery behind the semantic web, for example, is grounded in knowledge-based AI; see the work of Frank van Harmelen and colleagues in Amsterdam, for instance. If you do data-driven AI, this is all very good, but there are very strong limitations to it, just as there are limitations to knowledge-based AI, such as the effort needed to do knowledge analysis.

Problems for the data-driven approaches are things like explanation and robustness, in the sense that with minimal changes to the input, the system will tell you there is an elephant in the picture whereas before it was identified as a flower. Also, the fact that it is produced by statistics makes it limited by definition, because it takes the past and then makes a prediction based only on that. This does not work for human language, for instance, as it is an open system. You can easily invent new language on the spot; I mean new words, or reusing grammatical constructions in another way. That is also why the infusion into AI from biology is so important: biology is also an open system. Innovation happens in living systems, sometimes very quickly. We understand a bit of it, but not everything. To me, all these ideas of how life, species and structures have emerged are essential to understand, as they provide the insights to really go to the next phase in AI.

Also, the merging of the data-driven and knowledge-driven approaches is essential to getting explanations. We will need knowledge-based AI and language, contextual representations and all that, to produce an explanation of why a system has drawn a conclusion. This is extremely important, as people want to know why the system has made a decision about them: do they need to stay in prison or not, can they get a loan or not, all these things. The system should not just provide numbers or list the neurons that fired in a totally non-transparent network.

In Europe, there is a regulation coming up, which will be obligatory from 2018, stating that any system that makes a decision affecting a person has to provide explanations and is accountable. When you don’t agree with a decision made by an insurance system, for instance, and the response is that the AI system decided this, then there is a basis in law to challenge that company. It is particularly relevant in the social domain, if systems start to decide whether you can have access to social housing; all these kinds of things are happening. Also in law: there was an article in De Morgen yesterday on legal expert systems. In the past the knowledge was coming from experts, and now this is replaced by inference from data. This is a black box. There is going to be a big clash. Solving this problem of making AI explainable is one of the key topics that we should work on. It is not obvious how this should be done.

As you already mentioned, AI is an open and creative endeavor. In your 2007 article on the future of AI, you argued that it is the design aspect, building a system in order to understand it, that makes AI unique. Is this still the case?

This is really a very important point. I think the importance of AI has been to take ideas from psychology, from linguistics, from philosophy, statistics, sociology, biology, physics, etc., and to turn them into operational systems. At the moment, biologists also believe that this is a good thing, and they are developing so-called synthetic biology. They understand that if you build something, you understand it better. In physics, I think Feynman once said: “What I cannot create, I do not understand”. In linguistics, this idea is far from accepted: “Why would we implement our grammar?”. This idea, namely that you validate and work out your ideas through programming, i.e. building models, is still considered a weird idea in the humanities. In most areas of psychology, they don’t do this either. That is for me the importance and the role of AI with respect to understanding the mind. You get new ideas when you do that; you are creative, using your talent as an engineer or inventor. That is really what it is about.

This creativity is also visible in your work in the context of the arts. Can you tell us more about that?

There are of course now a lot of projects in the context of computational creativity; I won’t go into that. Creativity is indeed the thing that interests me most. Also, the reason for looking at language evolution is that you are forced to think about the creative aspects of language. This interest in creativity is also what I find fascinating in biology. I think that it is at the core of biology. The big thinkers in biology, that is what they worry about: “where does it come from?”. But also in physics, with people working on the origins of the cosmos for example, or in sociology, where there is also a shift away from pure observation: where do new conventions come from, or structures like cities, city-states or the notion of a republic? In all these fields, this thinking about origins is essential. That is why, for me, the origins of intelligence or the origins of language are really a way to think about where we can go in the future with AI.

Is this vision of AI as a tool to study origins in line with your future perspective on AI?

Certainly! To me this became clear when I was making the TV series called “Science at the Edge of Chaos” in the 90’s. I went around and talked to a lot of people, like Ilya Prigogine, Christian de Duve, or Manfred Eigen. They were all in this series. I just thought “Who are the most interesting guys?” and then went to talk to them with a camera. If you have a camera and say “Television”, they have time to talk to you. Stuart Kauffman, Chris Langton and Benoit Mandelbrot were all in this series. Then I really understood that this question of origins is what motivates most of these people. But there is nobody working on the origins of intelligence in AI. The difficulty is that you first need to know how to build it, which was done in the first decades, and then how it can learn. But learning is not the same as emerging, because you learn what is already there, not what can be created.

The BNAIC conference has seen a decrease in participants, especially in the number of senior AI researchers, who have shifted their focus to the international level. What role could the BNAIC still play in this highly internationalized scene?

The first thing I would like to say, and I speak for myself but it might also be true for the others, is that there are so many things. There are many demands put on us, partly due to the fact that money mostly comes from European projects. The local projects are too small. These European projects have their own dynamic, which is actually bad: there are review meetings and network meetings, with their summer schools inside the networks, and so on. All these things require our attention. Before, things were more open, and now they are closed, which is a pity. So, it is not out of a lack of interest, but just because there are so many things happening. The second thing is that I believe that BNAIC is extremely important for the younger generation. I always encourage the younger generation to participate and to establish their network also in a local context. The people who go are quite happy with the level of attention they get for their work. I’m not at all pessimistic; on the contrary, I think the BNAIC is a very important conference. Maybe we should just go more often ourselves. I have been an invited speaker before, and will be again this year. The previous time was in Delft, where I gave a talk about the ten big ideas of AI. It was very exciting, as I got lost and could not find the location of the auditorium. I arrived only when they were already announcing me on stage! I was a bit stressed out.

But, you see, there are conferences like IJCAI, ECAI, AAAI, etc. Last year we also organized one of the Stanford AAAI Symposia in Palo Alto about our work in grammar. I guess people work more at an international level, but in my opinion BNAIC remains important.

Any advice for the BNVKI board?

In Belgium, there are not many groups in AI; this is something that I hope will change with this new wave of attention for AI. This is also why I recently organized a debate at the Academy of Sciences and Arts in Brussels. In my opinion, we need a new big action for AI in Flanders. It doesn’t have to be as big as IMEC (an institute in micro-electronics with steady structural funding), but something like it. Not with a few PhD students here and there; that is not going to do it. In the Netherlands, the situation is better because they have good educational programs specific to AI and a number of good institutes, so things are much better there. If you look at the universities in Belgium, in relation to the importance that AI now has economically, something should be done. Integrating AI techniques within other disciplines is excellent, but it is not the same as educating for AI research. You need people who take AI as their problem, and not image processing or some problem in economics. In my experience, it takes 5 years for somebody who has done computer science to really understand AI.

What advice would you give to a student interested in AI in terms of roads to explore or pitfalls to avoid?

I would say that a very solid introduction to computer science is essential. Not just a bit of programming in Java or some other language: you should have the ability to write an interpreter or to design your own new programming language. You should know how to deal with parallel architectures, know what is going on in web technologies, etc. You need the mindset of a computer scientist in the first place. You can then bring in an additional field, for example linguistics or psychology or neuroscience, but just as well physics or sociology. My experience is that it is easy for a computer scientist to learn about grammar, but for someone coming from linguistics to learn computer science, that is very difficult.

Any final comments?

I would like to say the following: I started out in AI in 1972. My first conference was the AISB in Edinburgh, where I had a paper on case grammar. It was very small, and Maggie Boden came to talk to me afterwards, encouraging my work. I’m still doing AI now. For me, AI has been, and still is, one of the most (and maybe the most) interesting, creative and dynamic fields of science of the past 50 years. It has never been boring. When you assume that you have reached a steady state, something new and exciting pops up and blows your mind, like now with deep learning and other topics. But there will be other things coming afterwards. I’m sure the absorption of ideas from biology is the next big thing. I cannot predict when this will happen, but at some point AI systems will become so complex that they cannot learn anymore from humans, and it will not be possible to engineer them. We will then have to switch to self-developing systems that interact in sound ecosystems that are constantly evolving, not only at the material level of embodied AI but also at the mental and cultural level. AI is going to be an incredibly exciting field for a long time to come. We need the brightest people in order to make breakthroughs.

October 19, 2017 | Articles