The nations that lead in the development and use of artificial intelligence (AI) will shape the future of technology and significantly improve their economic competitiveness. Those that fall behind, by contrast, can expect to lose ground.

The United States has emerged as the early frontrunner in AI, but China is challenging its lead. The European Union, meanwhile, continues to fall behind, and within the EU the Netherlands lags in research output: the share of AI researchers among all Dutch researchers is far below average, as is the country's contribution to global AI research. Yet, with 115% growth in AI publications between 2013 and 2018, the Netherlands has been one of the fastest-growing countries globally (after the US and Japan, with 151% and 135%, respectively). And with an average citation impact score of 2.08, Dutch AI research quality is among the highest in the world (#4, behind the US, Canada, and the UK with 2.63, 2.19, and 2.09, respectively).

In this interview with Professor Tibor Bosse, we assess the state of AI in the Netherlands and the future interaction between AI systems and humans. Professor Bosse is a member of the NL AI Coalition's strategy team, which aims to strengthen the position of the Netherlands by stimulating, supporting, and organizing Dutch activities in AI. He is also the figurehead of the NWA route Big Data and the acting chair of the BNVKI. In his research, Professor Bosse focuses on social AI: the interaction between humans and AI.

 

Professor Bosse, is the Dutch AI glass half-full or half-empty?

You already sketched the situation quite accurately in your introduction. Indeed, our percentage contribution to global AI research is not so large, but our citation impact is still among the highest on various lists and rankings. In addition, the full part of the glass is that Dutch AI research has historically been very strong. Unlike in many other countries, AI has existed here as a field in its own right for over 30 years. Just look at the BNVKI, our association: we recently celebrated our 40th birthday, which shows that we have been active for decades. Because of that long history, the field is relatively well organized: we have a good overview of where the various subdisciplines of AI are located, we know our strengths, and we have a good infrastructure. We have also had strong education for a long time, and that is a significant difference that distinguishes the Netherlands from other countries. We have dedicated AI programmes, in which AI is not just a subdiscipline of computer science but stands on its own, and these programmes have scored high on various rankings for a long time.

In principle, this puts us in a good position. However, developments are moving so rapidly that AI is defined entirely differently than it was 30 years ago. The investments here, for example, are incomparable with those in countries such as the US and China, and competing with them is tough. That's the empty part of the glass.

Ultimately, I am optimistic, so the glass is half full.

 

What are the NL AI Coalition (NLAIC) and the strategy team?

The NLAIC is a large public-private collaboration which aims to accelerate and connect AI developments in the Netherlands. It involves the Dutch Government and hundreds of companies, knowledge institutions, and societal partners that contribute to AI development. Our slogan is ‘algorithms that work for everyone’, emphasizing the goal of making AI accessible. When we were founded in 2019, the aim was to stimulate economic growth in the Netherlands and to put the Netherlands on the map as an AI powerhouse. Recently, the NLAIC submitted a bid to the Dutch National Growth Fund, and a large budget was awarded for investing in AI over the entire knowledge chain, ranging from fundamental research to applications.

Within the strategy team, we align the strategy outlined for the upcoming years. However, our role is advisory only; we do not have decision-making power. My role within the strategy team is to represent the scientific field, which I do together with two other representatives.

The money allocated by the growth fund helps us reach our goals. Yet it is also money that we must share with the entire country, so my goal is to ensure that the research budget is well spent.

 

What questions is the strategy team currently addressing?

The strategy team advises on issues like strategy, policy, stakeholder management, and the preparation of investment incentives. In addition, we evaluate the progress of the NLAIC's activities against its objectives. Many of our questions deal with financial instruments, such as the bid for the growth fund. For instance, we value attracting new talent, which is crucial for academia. We have to provide an answer to the 'brain drain', in which academic talent leaves Dutch academia for better positions abroad or in industry. Our fellowship program is a financial instrument designed to let universities retain or attract talented AI researchers, and we have financial instruments at the European level as well. In addition to the technical aspects of AI, we also value the human side of AI; money has therefore recently been allocated to ELSA (Ethical, Legal and Social Aspects) labs. Lastly, we organize events to connect with society and Dutch citizens.

 

The NLAIC focuses on five building blocks. Could you describe each of these building blocks and the current state in the Netherlands?

The NLAIC clusters its activities into five main themes: Human Capital, Research and Innovation, Data Sharing, Human-Centric AI, and Startups and Scale-ups. These building blocks are essential for ground-breaking impact in social and economic application areas. Each building block has its own working group, in which participants tackle cross-sectoral challenges. For instance, the Data Sharing working group aims to break down barriers to sharing data. Machine learning is impossible without data, and the more relevant data is available, the better the predictive value. However, in the Netherlands, data is often kept locked away, primarily for legal or commercial reasons. The working group therefore aims to better organize responsible data sharing. Similarly, the other working groups address other relevant challenges.
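As an illustrative aside (our sketch, not part of the interview): the point that predictive value grows with the amount of relevant data can be demonstrated in a few lines of Python with scikit-learn, by training the same model on increasingly large subsets of a toy dataset and measuring accuracy on a fixed held-out test set.

    # Sketch: the same classifier, trained on growing subsets of a toy
    # dataset, tends to improve on a fixed held-out test set as data grows.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    for n in (100, 300, 600, len(X_train)):
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        model.fit(X_train[:n], y_train[:n])  # train on the first n samples only
        print(f"{n:4d} samples -> test accuracy {model.score(X_test, y_test):.3f}")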

 

The Rathenau Institute published a report comparing different countries' AI strengths and focus areas. What drives a country's focus?

It varies per country. In the Netherlands, it has chiefly been a bottom-up approach. Traditionally, some universities have been strong in certain areas and have continued that focus. Some groups focused on machine learning and have now moved to deep learning; others are strong in natural language processing; and agent systems and logic have also been strong areas at some Dutch universities.

At an aggregate level, there are significant differences in the mechanisms for agenda-setting. In the US, many innovations are driven by big tech companies, whereas in China the government aims to be the global AI leader by 2030 and invests its funds accordingly. Europe follows neither of these approaches, which perhaps explains why we have lagged behind in recent years: Europe consists of many different countries, each with its own positioning, and it is not easy to create a shared strategy. Yet our position has allowed us to focus more on the human side of AI. The research topics that get published depend heavily on whether the approach is bottom-up or top-down, and since different countries follow different approaches, we see a wider variety of publications too.

 

What are these differences?

The bottom-up approach generally assigns much more value to human values than the top-down approach does. As a result, fundamental issues such as privacy and transparency are more important here, and that sets the agenda for both technically and socially oriented AI research. For instance, since transparency is a crucial issue in AI, we need to understand how an algorithm makes its decisions. That steers our agenda towards fundamental research into 'explainable AI' and towards social research into the conditions under which people accept and trust an algorithm's answers. In the US and China, such research is less apparent.
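To make 'explainable AI' concrete (a hypothetical illustration of ours, not an example from the interview): one of the simplest explainable models is a shallow decision tree, whose learned rules can be printed and inspected directly, so a user can see why the model reached a given decision.

    # Sketch: fit a deliberately shallow decision tree and print its rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(data.data, data.target)

    # The printed rules are human-readable, so each classification can be
    # traced back to explicit threshold tests on the input features.
    print(export_text(tree, feature_names=list(data.feature_names)))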

 

What role does the international AI positioning of the Netherlands play in the strategy team’s decision making?

Historically, the Netherlands has always been an important player in AI. Dutch universities have been conducting strong AI research for 40 years, and our country has several excellent educational AI programmes and a good ecosystem. However, due to recent global developments and the delay in establishing a national AI strategy, we are gradually falling behind. We have recently lost a lot of talent to other countries, where working conditions and salaries are sometimes much more attractive. Since it would be very harmful to depend too much on developments abroad, we try to better exploit our opportunities to build and maintain a strong and distinctive position for the Netherlands in AI research and industry.

 

Many reports use the number of publications as a proxy for a country’s AI success. Yet, science is open, and we might as well benefit from publications abroad. Do we really need many publications?

There has been a trend toward too many publications in recent years. In the end, however, quality is more important than quantity: papers should be read and have an impact. Rather than focusing on quantity, we need to emphasize data and algorithm sharing for better quality, and to some extent this is already happening. NWO and the KNAW, two established Dutch research organizations, are investing more in changing the system towards quality. Instead of the number of publications, scientists are now more likely to be promoted based on other factors that reflect impact.

 

The Netherlands is a global leader in planning and decision making. How come?

It is one of the key areas that have been identified as strengths from a technical AI perspective. A few years ago, I was in the working group responsible for the Dutch AI Manifesto, in which we sketched the AI landscape in the Netherlands. The seven strengths we identified were agent systems, computer vision, information retrieval, machine learning, knowledge representation, natural language processing, and planning and decision making.

 

Should the Netherlands specialize in one area and strengthen its position as a global leader, or should we diversify our focus?

As part of the 'VSNU kennistafel', a working group uniting the AI representatives of all Dutch universities, I have had various discussions about this trade-off. Traditionally, the Netherlands has followed the polder model of decision making, which is consensus-based, and that culture is rooted in our discussions too. In the discussions we had, it was hard to get everyone to agree on one approach.

Focusing on only one or two key areas would not work, in my opinion. Having said that, we do need to specialize in some areas to obtain grants from the government, and we would be better positioned if we moved in that direction.

 

To what extent do the different stakeholders’ visions align within the strategy team?

Industrial goals do not always align with academic goals, but that is why we have representatives from all sectors. Several strategy team members represent big companies, and my goal is to guard academia's interests. However, even within academia the interests are not unified: there is an ongoing 'competition' between the beta, tech-oriented AI researchers (who historically owned the discipline) and the more socially and ethically oriented AI researchers, and an ongoing debate about how much weight each perspective should be given.

 

Is it correct to say that you focus mainly on the human-centred side?

I have experience with both sides, which is unique and probably explains why I was asked for this role. For 25 years, I worked in a computer science department. I often say that as a little boy I was only interested in computers, but over time I became increasingly interested in humans. At some point I made the shift, and now I work in a social science faculty. However, my research is still relatively technical: I focus on the interaction between humans and intelligent systems, both by developing new algorithms and by evaluating them experimentally.

 

Can you give an example of a misalignment in stakeholders’ interests?

Typically, industry expects a higher pace than academia and the government do; the latter two tend to first test and come to trust new algorithms before implementing them, while industrial parties are more focused on economic growth. And at the academic level, we debate the balance between technical and human-oriented research.

 

Does this focus on economic growth explain the US’s vast growth?

Indeed, that makes a big difference, and we do not have these big tech companies in Europe.

 

In addition to the focus areas, the NLAIC also promotes beneficial social effects. How does it do this?

We try to create an infrastructure in which developments at the industrial level are embedded in discussions about societal values, such as data sharing. We also organize events to educate and train people: AI is not just coming, it is already here, so we need people who can work with these algorithms. And we want to train the people who are confronted with AI in their daily lives, to increase 'AI literacy'.

 

You mentioned the synergy between social and technical sciences in your inaugural speech. Can you elaborate on this?

At a high level, the claim is that AI is huge and impacts all facets of our daily lives, while being too complex to be studied from one perspective only. We need technical people focusing on developing, improving, and scaling algorithms. Yet we also need people who understand the implications of algorithms: how people receive them and how they impact our lives.

My goal is to connect both perspectives in my research, and I focus on social AI, which concerns all social interactions between humans and human-like intelligent systems.

 

Do you have one topic or idea that fascinates you the most?

I am fascinated by anthropomorphism, the phenomenon of assigning human-like properties to computers even though we know they do not possess them. In my research, I approach this phenomenon from two angles. On the technical side, I try to build better algorithms that give the impression that robots are human-like, by processing speech and reading non-verbal behaviour. On the social science side, we test the impact of these new algorithms. The latter feeds back into future algorithms, creating a co-evolution of technology and people.

 

Which domain questions need to be answered for you to end your academic career satisfied, many years from now?

The dot on the horizon is a situation in which we have fully natural interactions with social AI systems and almost forget that they are only artefacts.

From a technical perspective, this means we need new algorithms. We already have algorithms that produce somewhat human-like language or detect facial emotions, but these systems are easily fooled, and deploying them in real applications will go wrong at some point.

One important thing to note is that the goal will never be to fully copy or replace humans. We can make interactions smoother while still acknowledging that robots are not people and have their own strengths and weaknesses.

 

Why do you want to make them more natural if you don’t want to replace humans?

I expect such intelligent systems to be more effective; it will be easier to give commands if they understand what you mean, have a theory of mind, and understand your needs. But they should not try to mimic humans at all levels. For example, robots do not need to look like humans; a human-like appearance could raise the wrong expectations.

There are many areas where social AI systems could assist us in our daily lives. In healthcare, for instance, they could take over repetitive tasks from doctors, giving them time to focus on more complex tasks.

 

Do you think intelligent systems can be empathetic or emotional?

Generally, empathy and emotion have several components. One of these is purely behavioural, such as expressing empathy. This component is relatively easy to achieve, and robots might even express empathy better than humans do; however, the expression does not mean that a robot feels anything. The second component concerns the experiential side of emotions, a subjective phenomenon. This component is far more complex, and there is a philosophical debate about whether it is attainable in machines. I do not exclude that option, but I have not seen much progress in that direction. Therefore, I believe more in the weak notion of empathetic and emotional intelligent systems: they can learn 'to understand' the user's problem and express themselves empathetically, without experiencing empathy.
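To illustrate this weak, purely behavioural notion (a toy sketch of ours; the cue words and replies are invented for illustration): such a system can detect surface cues in a user's message and produce an empathetic expression without experiencing anything itself.

    # Toy sketch of behavioural 'empathy': match surface cues and return an
    # empathetic expression. Nothing is felt or experienced by the system.
    NEGATIVE_CUES = {"sad", "lonely", "worried", "tired", "afraid"}

    def empathetic_reply(message: str) -> str:
        words = set(message.lower().replace(",", " ").split())
        if words & NEGATIVE_CUES:  # a negative cue was detected
            return "I'm sorry to hear that. That sounds hard. Do you want to talk about it?"
        return "Thanks for sharing! Tell me more."

    print(empathetic_reply("I feel sad and tired today"))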

 

What arguments do people use who think that robots can have empathy and emotions at the experiential level?

My reasoning might be a simplification, but it comes down to the following.

Humans have empathy and feel emotions.

Humans are just information processing systems.

Computers are also just information processing systems.

Therefore, there is no fundamental reason why artificial systems could not experience the same things. Experience is an emergent process: the experience of emotions emerges from all the local interactions between neurons in the brain, and some philosophers deem the same phenomenon possible in computer systems. However, no one has achieved that yet.

 

How do we know humans have these feelings? Do we only suspect we do because we express them?

Indeed, scientific methods lack the right tools to measure this objectively. It is all based on introspection: we feel these emotions inside and share them with others, and therefore we assume that we all have them.

 

Would there be a benefit to developing intelligent systems with the experiential component?

One benefit could be that we would better understand experience and consciousness in humans. It would be a huge breakthrough for humanity if we could replicate that; it would solve the mystery of consciousness. However, I cannot immediately think of any practical benefits. We have enough conscious beings on our planet already. Why create more?

 

In one of your publications, you mention the notion of robotic needs. Does a robot have needs, and do we need to consider them?

I do not think a robot has biological needs like humans do. All its needs are programmed, and a robot does not experience them. Interestingly enough, humans sometimes consider robots' needs even though we know robots do not feel anything. For example, people tend to be polite when talking to chatbots, and when people watch movies in which robots are broken down, they may still feel empathy towards the robot. The question is whether we should empathize this way or not.

The answer is twofold. On the one hand, such empathy can hinder us: we may mistakenly assume that our companion robot cares about us, leading to unrealistic expectations. On the other hand, in a world where robots are omnipresent, treating many of them badly might lead us to treat humans worse too; we might become more egocentric and stop considering the emotions of others.

 

That marks the end of the interview. Is there a last remark that you would like to make?

In the discussions about the role of AI, I find it essential to note the following. AI is, in principle, a very powerful invention that impacts many people, leading to many good things and potentially also to bad things. It is essential to stress that we should see AI as complementary to human intelligence rather than as a replacement for humans. We should strive for a society in which algorithms are used in addition to human intelligence, to people's benefit.