Interview with Jaap van den Herik, Founding Father of the BNVKI

by Emil Rijcken

 

The Benelux Association for Artificial Intelligence (BNVKI) celebrates its fortieth birthday this year. Although we might only be at the dawn of the AI revolution, a world without AI around us is already unthinkable. This was not the case forty years ago; at that time, barely anyone knew the term. During the last few decades, AI has grown immensely. The world champions of chess and Go don’t stand a chance against computers, cars can drive themselves, and some claim the Turing test has been passed. While some professionals fear their profession will become extinct, others seem far from being replaced by AI.

During this interview with Jaap van den Herik, one of the Founding Fathers of the BNVKI, we will recapitulate the advances over four decades of AI. Moreover, we will discuss what might be ahead of us.

In 1981, Jaap was one of the 19 founders of the BNVKI. Together with Bob Wielinga and Denis de Champeaux de Laboulaye, he led the organization, which was in the hands of Amsterdam and Delft. They had the ambition to place AI prominently on the research agenda in the Netherlands. When Jaap was awarded his doctorate in 1983 on ‘Computer Chess, Chess World and Artificial Intelligence’ (see Relevant References, henceforth RR), one of his promotors, Professor Adriaan de Groot, did not believe his statement that a computer would one day beat the world’s greatest chess players. The statement was proven right when Kasparov, the human world Chess Champion (1985-2000), was defeated by IBM’s DEEP BLUE in 1997.

Being a visionary, professor van den Herik (henceforth vdH) predicted in 1991 that machines would judge court cases and replace judges in the future (RR, 1991). This King’s Gambit was a comical statement at the time but is becoming more realistic as technology evolves (see the series ‘The Queen’s Gambit’). During his career, vdH has awarded doctorates to 91 PhD students, contributed to the establishment of various organizations*, received a HUMIES Award in 2014 for his research on computer chess, and, upon his retirement as professor of Law and Computer Science, was appointed Officer in the Order of Orange-Nassau, a royal award for individuals who have made a special contribution to society.

 

Professor van den Herik, you focused on computer chess in your PhD dissertation. Meanwhile, you were an excellent chess player yourself (2290 rating); could you still beat your algorithm?

Yes, I could. Early chess programs were not very good, and I could beat them quite easily until the mid-1980s and play on par with them until 1988. The actual start of the development was in November 1966, when Richard Greenblatt developed the MAC HACK VI chess program at MIT. It was the best in the world. However, it was only rated 1243 when playing in the Massachusetts Amateur Championship. After that, a full range of computer chess programs was built, culminating in a playing strength beyond that of the human world champion.

 

What have been the main contributions of your dissertation?

My PhD thesis (Delft, 1983) was one of the first theses on Artificial Intelligence in the Netherlands*. It was multidisciplinary (computer science, psychology, and philosophy). It started with a description of AI and the history of computer chess. In my research, I worked on methods for knowledge minimization, knowledge classification, the use of equivalence classes, the combination of classes, and the complicated combined evaluation in chess endgames. This was followed by an impact analysis of programs that would defeat the human world champion. I collected ideas in personal interviews with Claude Shannon, Herb Simon, Ken Thompson, Donald Michie, Adriaan de Groot, Mikhail Botvinnik, Max Euwe, Anatoly Karpov, Jan Timman, Genna Sosonko, and many others. Lastly, I focused on philosophical questions regarding computers, intuition, and creativity.

In advance of the ceremonial defense, the Delft University of Technology had arranged a press conference to discuss my research and thoughts on the potential impact of AI on society. My expectations were very high; I predicted that a chess computer would beat the world champion one day.

 

What did the term ‘artificial intelligence’ mean back then?

Nowadays, there are endless AI applications. In the beginning (the 1950s), this was not the case, and AI could be divided into four domains:

  1. Chess and Checkers
  2. Knowledge Representation
  3. Problem Solving
  4. Language Translation

 

What is your definition of AI?

Just as AI has evolved with time, so has my definition. I used Herb Simon’s definition until my PhD defense: ‘an AI program should mimic human thinking’.

However, I soon realized computer programs would outclass humans in chess, and my ideas on the definition shifted. Computer chess was an excellent example for AI researchers, yet ‘mimicking’ made the definition too restrictive. My second definition (1983-2000) was based on Donald Michie’s definition dealing with the human window. My definition was: ‘an AI program is a computer program that provides insights into the human thinking processes’. The goal of this definition was to build programs within the scope of the human window, meaning that they are executable and understandable by humans.

Then, ‘learning’ and ‘deep learning’ entered the scene, and now I am inclined to separate the term ‘artificial intelligence’ from ‘natural intelligence’. Obviously, ‘intelligence’ refers to clever behavior or a clever solution to a complex problem. But we should distinguish human intelligence and artificial intelligence from each other. My current definition (2000-now) reads: ‘AI is the ability to address issues in the real world in an adequate way’.

 

The developments in computer chess have similarities with developments in artificial intelligence; could you explain the developments per decade?

Early chess computers were based on search algorithms and were both rule- and library-based. Then, computing power increased exponentially, and so did the performance of chess computers. Generally, each decade is characterized by specific developments.

The fifties: the emphasis in this decade was on search algorithms: tree search and evaluation. Claude Shannon and Alan Turing quantified all pieces and then summed the estimated values of these pieces. The position with the highest value was preferred. John von Neumann proceeded similarly.
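As a toy illustration of this fifties-style approach (not the historical programs; the piece values and the position format are assumptions for the sketch), a material evaluation can look as follows:

    # Toy sketch of a Shannon/Turing-style material evaluation.
    # Piece values and the position representation are illustrative assumptions.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def evaluate(position):
        """position: dict mapping piece symbols to (white_count, black_count)."""
        return sum(PIECE_VALUES[piece] * (white - black)
                   for piece, (white, black) in position.items())

    def best_position(candidate_positions):
        # The candidate position with the highest material balance is preferred.
        return max(candidate_positions, key=evaluate)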

The sixties: in this decade, the emphasis was on knowledge and knowledge representation; special attention went to positional characteristics (e.g., developing pieces, open files). A prime example of this decade was Richard Greenblatt’s MAC HACK VI chess program.

The seventies: the developments of the fifties (search) and sixties (knowledge) were combined in the seventies, of which my dissertation is a good example. Combining both aspects was made possible by significant increases in computing power (from 200 nodes per second for MAC HACK VI in 1966 to 160,000 nodes per second for BELLE (Ken Thompson) in 1980).
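A minimal sketch of how search and static knowledge combine, assuming hypothetical helpers (legal_moves, apply_move, evaluate) rather than any code from the era, is a fixed-depth minimax that calls an evaluation function at the leaves:

    # Sketch only: search (fifties) plus evaluation knowledge (sixties).
    # legal_moves, apply_move and evaluate are hypothetical helper functions.
    def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # knowledge: static evaluation at the leaf
        values = (minimax(apply_move(position, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves)
        return max(values) if maximizing else min(values)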

The eighties: computing power increased even further in the eighties with the introduction of parallelism. The DEEP THOUGHT chess program defeated the human chess grandmaster Bent Larsen in 1988. It combined 64 chess-playing chips and considered up to 500,000 positions per second.

The nineties: distributed systems were investigated and used in computer chess. Tasks were distributed and executed through scheduling. Two Dutch programs dominated the first half of this decade: GIDEON (Ed Schröder) and FRITZ (Frans Morsch) won the World Computer Chess Championship (WCCC) in 1992 and 1995, respectively. Then, in 1995, IBM’s DEEP BLUE I project started. It had 36 processors but lost 4-2 against human chess world champion Garry Kasparov in 1996. After that defeat, DEEP BLUE II was developed, with more computing power than its predecessors; it could evaluate up to 2.5 million positions per second. As a result, in 1997, DEEP BLUE II defeated Kasparov by 3.5-2.5.

In conclusion, DEEP BLUE marked the start of a new era of chess programs with advanced computing (based on the RS/6000 computer) and the introduction of machine learning. They used the so-called ‘Dap Tap’ for finding patterns in the opening libraries and later in the search processes. However, the findings and techniques developed by IBM were not publicly available. They were regarded as a single point of knowledge.

The 2000s: in this decade, Frans Morsch made over-human playing strength publicly available through his commercial products FRITZ and DEEP FRITZ. In 2002, FRITZ played an 8-game match with the new world champion Vladimir Kramnik; the result was 4-4. Then, in 2006, DEEP FRITZ won a 6-game match with Kramnik by 4-2. It was the end of human superiority in chess.

The chess community changed drastically. During the world championship matches, the public was no longer allowed to enter the playing hall, since all spectators knew the best move via their telephones; only the world champion and the contender did not.

The 2010s: in research, most advances were achieved by incorporating machine learning, and later deep learning and neural networks. In 2004, I started a project on Evolutionary Computing with Omid David Tabibi, Nathan Netanyahu, and Moshe Koppel. The topic was using genetic algorithms to tune the evaluation function so that the chess algorithm could learn from scratch (i.e., a program that only knows how the chess pieces move). Our contribution to the GECCO 2014 conference was awarded the HUMIES Award 2014. It was a breakthrough since, in simple words, it showed the power of randomized learning. As early as 2012, the idea led to a collaboration with Jos Vermaseren (Nikhef). We then applied the concept to Feynman diagrams and formulated a Monte-Carlo Tree Search approach for HEPGAME (High Energy Physics Game). The proposal was accepted as an ERC Advanced research project. Moreover, in this decade, the rise of DEEPMIND’s performances in computer Go was predominant. ALPHAZERO did ring a bell for all AI researchers.
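As a hedged, toy illustration of the idea of evolving evaluation weights with a genetic algorithm (the genome layout and fitness function below are stand-in assumptions; the actual GECCO/HUMIES work used games and far more elaborate schemes):

    # Toy sketch: a genetic algorithm evolving the weights of an evaluation
    # function. The fitness here is an illustrative stand-in, not the method
    # used in the published work.
    import random

    WEIGHTS_PER_GENOME = 5  # e.g. pawn, knight, bishop, rook, queen terms

    def random_genome():
        return [random.uniform(0, 10) for _ in range(WEIGHTS_PER_GENOME)]

    def fitness(genome):
        # Assumption: in practice, fitness came from game play or from
        # comparing the tuned evaluation against reference decisions.
        target = [1, 3, 3, 5, 9]
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    def evolve(population_size=50, generations=100, mutation_rate=0.1):
        population = [random_genome() for _ in range(population_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: population_size // 2]   # selection
            children = []
            while len(children) < population_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, WEIGHTS_PER_GENOME)
                child = a[:cut] + b[cut:]                   # one-point crossover
                if random.random() < mutation_rate:
                    i = random.randrange(WEIGHTS_PER_GENOME)
                    child[i] += random.gauss(0, 1)          # small mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)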

The 2020s: although this decade has just started, I expect the Bidirectional Encoder Representations from Transformers (BERT) to mark the next era of state-of-the-art computer games (among them chess). BERT is a transformer-based machine learning technique initially proposed for natural language processing, but its strong capabilities in pattern analysis lend themselves well to chess and other games.

 

Monte-Carlo Tree Search is an essential algorithm in modern computer chess programs. What is it, and what was your role in its development?

Bruno Bouzy was the first researcher to publish on random search in a game tree for Go, in 2004. I was privileged to be the editor of the book. Bouzy had two gifted students, viz. Rémi Coulom (who presented the first ideas in Turin, 2006) and Guillaume Chaslot (who received a research position in Maastricht). Chaslot et al. (2008) designed and published the formal description of Monte-Carlo Tree Search (MCTS) (see RR).

MCTS is an effective tree-search technique characterized by building a search tree node by node according to the outcomes of simulated playouts. The process can be broken down into four steps; a minimal code sketch follows the two lists below.

  1. Selection – starting at root R, recursively select optimal child nodes until a leaf node L is reached.
  2. Expansion – if L is not a terminal node (i.e., it does not end the game), create one or more child nodes and select one C.
  3. Simulation – run a simulated playout from C until a result is achieved.
  4. Backpropagation – update the current node sequence with the simulation result.

Each node must contain two important pieces of information.

  1. An estimated value based on simulation results.
  2. The number of times it has been visited.
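A minimal sketch of these four steps, assuming a hypothetical game interface (legal_moves, play, is_terminal, result) and the common UCT selection rule, which the description above does not prescribe; player alternation and reward signs are omitted for brevity:

    # MCTS sketch following the four steps above.
    # Assumptions: game.legal_moves(state) returns [] for terminal states;
    # rewards are taken from one player's perspective only.
    import math
    import random

    class Node:
        def __init__(self, state, game, parent=None):
            self.state = state
            self.parent = parent
            self.children = []
            self.untried = list(game.legal_moves(state))  # moves not yet expanded
            self.visits = 0      # number of times this node has been visited
            self.value = 0.0     # sum of simulation results (estimated value)

        def uct_child(self, c=1.4):
            # UCT rule: exploit high average value, explore rarely visited children.
            return max(self.children,
                       key=lambda n: n.value / n.visits
                       + c * math.sqrt(math.log(self.visits) / n.visits))

    def mcts(root_state, game, iterations=1000):
        root = Node(root_state, game)
        for _ in range(iterations):
            node = root
            # 1. Selection: descend through fully expanded, non-terminal nodes.
            while not node.untried and node.children:
                node = node.uct_child()
            # 2. Expansion: if the node is not terminal, expand one untried move.
            if node.untried:
                move = node.untried.pop(random.randrange(len(node.untried)))
                node.children.append(Node(game.play(node.state, move), game, parent=node))
                node = node.children[-1]
            # 3. Simulation: random playout from this node until a result is reached.
            state = node.state
            while not game.is_terminal(state):
                state = game.play(state, random.choice(game.legal_moves(state)))
            reward = game.result(state)
            # 4. Backpropagation: update the visited node sequence back to the root.
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Advice: play the move leading to the most-visited child of the root.
        return max(root.children, key=lambda n: n.visits)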

 

Will there ever be an ‘optimal chess computer’?

Although there are approximately  different positions in chess, I believe that the game can be solved and expect this to happen around 2035.

 

What would be the rating of an optimal chess computer?

By then, this question will be irrelevant, or we will have formulated a different interpretation of playing strength.

 

In this interview, we focus on the past and the future of AI. Your passion is twofold: chess and law. Why law as well?

In 1987, I was invited to join the Leiden Faculty of Law to make them familiar with modern developments in computer science. Inspired by Alan Turing’s (1950) question ‘Can machines think?’, the step from computers playing chess to computers judging court cases seemed minor at first glance. However, I understood very well that the above question was audacious. Moreover, up to 1990, law and AI had received very little attention in the scientific world. Therefore, the invitation to go in that direction was exciting. Please note that initially, in 1988, I knew very little of law, and it took me three years of hard work to develop a proper understanding.

 

In 1991, you predicted that computers would replace human judges in the future. Can you elaborate on this statement?

Whether computers will replace judges at some point in time is something I cannot predict (see https://www.universiteitvannederland.nl/college/kan-een-computer-een-rechter-vervangen). It depends on more than task performance alone. Society and government will decide on acceptance. Moreover, the full range of tasks of lawyers, judges, and paralegals is a topic of research, and there is not yet a formal definition of what it means for a computer to be qualified. Still, my prediction is that computers will perform both simple and complex legal tasks on par with or better than humans in the foreseeable future. Hence, in my opinion, empirical evidence will show us the best way forward for society (see RR).

 

Talking to a computer seems much different than talking to a human. Human judges can take the emotions of suspects into account and adjust their speech and non-verbal communication accordingly. How would this apply to computer-based judges?

I foresee that computers will be able to understand emotions in the future. Again, only time can tell whether society is prepared to accept such capabilities when exhibited by computers. I believe that the descendants of BERT will have a great future.

 

You don’t believe in the ‘computing paradigm’, namely that computers perform analyses and structure data, while humans work on ethics, intuition, and creativity.

In my opinion, the computing paradigm certainly applies to our interaction with computers nowadays. However, I do not exclude that computers will be capable of handling work on ethics, intuition, and creativity by the end of this century. Starting at the end of your list: creativity will never be a problem for computer scientists to realize. According to Michie, creativity is one of the least valuable capabilities of human beings since it can best be mimicked by genetic algorithms (or random search).

Intuition is another cup of tea. De Groot stated: “Playing at the level of a world chess champion requires intuition. Intuition cannot be programmed. So, a computer will never play on that level”. Currently, my own opinion is that “Intuition is programmable” (see RR, my Valedictory Address in Tilburg, 2016).

Ethics is the real issue. To what extent ethics can be incorporated in computer programs cannot be answered briefly, mainly since ethics is culture-dependent. Here, I remark that in law, we see many cultural differences across jurisdictions. In my opinion, each local and global legal system can be implemented in a formal system endowed with conditions expressing the human measures. So, ethics is the research challenge of the future.

 

What is your definition of ethics? And isn’t ethics inherently subjective?

Formulating a definition of ethics is difficult, as more than 170 definitions exist. Ethics can also be called a moral discipline; it is concerned with what is morally right and wrong. Please note that the term applies equally to any system or theory of moral values or principles.

Indeed, ethics is subjective, both for humans and for computer systems. There is no such thing as being objectively right or wrong; there are just different approaches to ethical reasoning. I have thought about how a computer would handle this, but I cannot formulate suggestions for future research other than searching for human measures.

 

What is the ‘human measure’?

Recently, the human measure has been embraced in policy execution. Overly tight regulations can limit the ability to execute legislation at the cost of the human measure. As a result, formulating a definition of this measure becomes relevant. However, formulating a definition comes down to the philosophical question ‘what is a human?’. My definition of the human measure is: in execution, the human measure means that the executor takes individual circumstances into account within the legal frameworks.

 

Individual circumstances are not limited to a fixed set. How can a computer learn to handle each unique circumstance?

It is impossible to learn each unique case a priori. But this holds both for humans and for machines. Why should a computer learn an adequate judgment for all subsets (or all elements) if a human has not done so either? Of course, in the practice of both (humans and machines), some sets of circumstances will be missed, but I assume that the approximations will be sufficient in relation to the human measures.

 

You predict that computers will realize the human measure eighty years from now. Will it be based on characteristics as predefined by humans? If so, isn’t that a loss of information since some characteristics are tacit?

It could be based on predefined characteristics, which will undoubtedly be the case at the beginning of this line of research. This would indeed mean loss of information. But I am sure human investigators will catch up, maybe with the help of computer assistants. As there are so many challenges ahead, I still predict a realization within eighty years from now. We have to march on before identifying the new challenges more precisely.

 

Most AI algorithms are trained by learning patterns in vast amounts of data and can perform well on problems related to the past. But what happens if a new ‘out-of-context’ problem arises?

Algorithms will base their decisions on analogy, on distance measures, and on newly developed metrics. Such decision-making could be sufficient for some out-of-context problems too.

 

Are the algorithms we have nowadays adequate for developing an AI-driven judge? If not, what still needs to be developed?

At this moment, the algorithms are not sufficient. Probably, we need new ways of computing. However, we should keep in mind from the start that perfect tuning does not have to be achieved by then. Moreover, some argue that we could obtain good algorithms through quantum computing. Still, I would not advise waiting that long.

 

Humans are biased, and so will computers if they are trained on the decisions of biased humans. If we only train algorithms based on past data, AI judges will be biased forever. How can we prevent this from happening?

This is an excellent and vital question. We should invest sufficient energy and money in research aimed at handling biases.

 

AI judges would require a massive leap towards artificial general intelligence (AGI), in which an intelligent agent can learn any intellectual task that a human being can perform. A Nature publication (RR, 2020) states that AGI will not be realized. Do you think it will?

The paper contains many truths and might be right. However, I believe there will be AI judges at some point, but the future is still open. Furthermore, I cannot foresee what AGI developments are expected to bring us.

 

This marks the end of our interview. Do you have any last remark?

Discussing past AI developments is relatively easy; predicting the future is more challenging. Currently, we are still some eighty years from having AI judges. I cannot foresee all the intermediate challenges ahead of us, but I trust these will be investigated adequately once raised. I cannot think of arguments stopping AI judges from being realized.

*In 1981, Denis de Champeaux de Laboulaye completed his PhD thesis, titled ‘Algorithms in Artificial Intelligence’, in which he addressed, among other topics, the (n²-1)-puzzle.

 

Relevant References (RR)

Bouzy, B. and B. Helmstetter (2004). Monte-Carlo Go Developments, Advances in Computer Games 10: Many Games, Many Challenges, pp. 159-174 (eds. H. J. van den Herik, H. Iida and E. A. Heinz), Springer.

Chaslot, G.M.J.-B., M.H.M. Winands, H.J. van den Herik, J.W.H.M. Uiterwijk and B. Bouzy (2008). Progressive Strategies for Monte-Carlo Tree Search, New Mathematics and Natural Computation, Vol. 4, No. 3, pp. 343-357.

Coulom, R. (2007). Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search, Computers and Games (CG 2006), pp. 72-83 (eds. H.J. van den Herik, P. Ciancarini and H.H.L.M. Donkers). Springer.

David, O.E., H.J. van den Herik, M. Koppel, and N.S. Netanyahu (2013). Genetic algorithms for evolving computer chess programs. IEEE Transactions on Evolutionary Computation, Vol. 18, No. 5, pp. 779-789. doi:10.1109/TEVC.2013.2285111. ISSN 1089-778X.

Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, Vol. 7, No. 1, pp. 1-9.

Herik, H.J. van den (1983). Computerschaak, Schaakwereld en Kunstmatige Intelligentie. TH Delft, June 21, 1983, 630 pp. ISBN 90-6233-103-3. (In Dutch)

Herik, H.J. van den (1991). Kunnen Computers Rechtspreken? Inaugural address, Leiden University, June 21, Gouda Quint, Arnhem. ISBN 90-6000-842-1. (In Dutch)

Herik, H.J. van den (2016). Intuition is Programmable. Valedictory Address, Tilburg University, January 29. ISBN 978-94-6167-268-1.

 

Distinctions from the Field

*BNVKI – Benelux Association for Artificial Intelligence (Honorary member)

CSVN – Computer Chess Association of the Netherlands (Honorary member)

JURIX – Foundation for JURIdical eXpert systems (Honorary chair)

SIKS – School of Information and Knowledge Systems (Honorary member)

ECCAI / EurAI – European Community for Artificial Intelligence (Fellow)

ICCA / ICGA  – International Computer Chess (Games) Association (Honorary Editor)