The AI & Games session consisted of three presentations. The games discussed were completely different, ranging from a logical game via an emotional one to a 7×9 board game. The first presentation was titled “How much does it help to know what she knows you know? An agent-based simulation study”, authored by Harmen de Weerd, Rineke Verbrugge and Bart Verheij. Harmen gave the presentation.

The main topic was how to deal with the Theory of Mind. The key issue is: which thoughts do we attribute to others? To answer this question, people use the Theory of Mind. We may distinguish higher-order steps of reasoning, such as occur in the sentence: Alice knows that Rob knows that Carol will have a surprise party. The scientific question is: to what extent are non-human entities able to use the Theory of Mind? Chimpanzees are very fast in the “false belief” task, but here we see a competition between a large brain and cognitive abilities. A good approach is to build agent-based models that compete with each other. Of course, game theory can help us to a large extent, e.g., with the game “Rock, Paper, Scissors”. Moreover, we can use bounded rationality. Harmen showed a game between Blue and Green, in which Blue had a first-order theory of mind and Green a zero-order one. Usually some knowledge pays off when facing “zero knowledge”, so Blue wins by a large margin against Green. But what about second order against first order (and third against second, fourth against third, etc.)? Moreover, we can endow the players with a learning mechanism. De Weerd showed a plethora of such competitions. He received many questions, e.g., on the role of complexity, on learning within the order of theory of mind assigned to a player, on how to deal with opponent knowledge, and on the prisoner’s dilemma. The discussion was very lively.
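To give a flavour of the difference between orders of theory of mind, here is a minimal sketch in Python of a repeated Rock–Paper–Scissors match between a zero-order agent, which simply counters its opponent’s most frequent move, and a first-order agent, which keeps an internal model of such a zero-order opponent and best-responds to what that model would play. This is an illustration under simple assumptions, not the authors’ actual simulation model; all class and function names are hypothetical.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value
COUNTER_MOVE = {loser: winner for winner, loser in BEATS.items()}   # move that beats a given move

class ZeroOrderAgent:
    """Ignores the opponent's reasoning: predicts that the opponent will
    repeat its most frequent past move and plays the counter to that."""
    def __init__(self):
        self.opponent_history = Counter()

    def choose(self):
        if not self.opponent_history:
            return random.choice(MOVES)
        predicted = self.opponent_history.most_common(1)[0][0]
        return COUNTER_MOVE[predicted]

    def observe(self, opponent_move):
        self.opponent_history[opponent_move] += 1

class FirstOrderAgent:
    """Attributes a zero-order theory of mind to the opponent: it feeds an
    internal ZeroOrderAgent with its *own* past moves, asks that model what
    the opponent would play next, and counters it."""
    def __init__(self):
        self.inner_model = ZeroOrderAgent()

    def choose(self):
        predicted_opponent_move = self.inner_model.choose()
        return COUNTER_MOVE[predicted_opponent_move]

    def observe(self, my_move):
        self.inner_model.observe(my_move)  # the opponent is learning about *my* moves

def play(rounds=1000):
    zero, first = ZeroOrderAgent(), FirstOrderAgent()
    first_order_wins = 0
    for _ in range(rounds):
        m_zero, m_first = zero.choose(), first.choose()
        if BEATS[m_first] == m_zero:
            first_order_wins += 1
        zero.observe(m_first)
        first.observe(m_first)
    return first_order_wins

print(play(), "out of 1000 rounds won by the first-order agent")
```

In a match of a thousand rounds the first-order agent wins nearly every round after the opening, in line with the observation that some knowledge pays off against “zero knowledge”; the higher-order and learning variants discussed in the talk require richer simulations than this sketch.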

The second presentation was titled “Why is it so hard to say sorry?”, authored by The Anh Han, Luís Moniz Pereira, Francisco Santos and Tom Lenaerts. The Anh Han gave the presentation.

He nicely started the lecture with a slide containing just “Sorry?” and promised to discuss models and results. The models should embody the answers to two questions: (1) Why do people apologise? and (2) Why is it so difficult? Of course, it all depends on the environment (situation awareness). Are you standing in front of a court, or are you being reproached by your parents? In both cases, the model should have different knowledge at its disposal. A real apology is an act that is made willingly. However, can a fake apology, given by a human or a machine, be equally effective? Questions to be addressed in this area of model building are: (1) Does an apology need to be sincere (assuming the goal is that it functions properly)? (2) Can an apology by itself lead to high levels of cooperation (many marriages may benefit from it)? and (3) Are abundance and sincerity two criteria that lead to improved relations? Subsequently, The Anh continued his presentation by telling us about direct reciprocity, indirect reciprocity, and commitments. The question now is: should we build an apology model with or without commitment? First, we see that an apology is more frequent in a commitment relationship. Second, an apology needs to be sincere in order to function properly. Third, The Anh’s conclusion on the question of what commitment brings to sincerity reads as follows: (a) a severe commitment will abort sincerity, but (b) apology supported by commitment still prevails. The analysis of apologies should be continued to obtain deep insight into the whys, the whens and the whats of an apology. Obviously, there were many questions from the audience.
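The flavour of these conclusions can be conveyed with a small, purely illustrative simulation; it is not the authors’ evolutionary model, which works with populations and commitment fees. The sketch below plays a noisy iterated prisoner’s dilemma in which player A apologises, at a chosen cost, after every defection it committed, while player B keeps cooperating only if the apology it receives is costly enough to count as sincere. The payoff values, the noise level and all function names are assumptions made for the sketch.

```python
import random

# Payoffs of a noisy prisoner's dilemma: temptation > reward > punishment > sucker.
REWARD, SUCKER, TEMPTATION, PUNISHMENT = 3.0, 0.0, 5.0, 1.0
NOISE = 0.05   # chance that an intended cooperation slips into a defection

def payoff(my_move, their_move):
    if my_move == "C":
        return REWARD if their_move == "C" else SUCKER
    return TEMPTATION if their_move == "C" else PUNISHMENT

def simulate(apology_cost, sincerity_threshold, honest=True, rounds=10_000):
    """Player A apologises by transferring `apology_cost` to B after every
    defection it committed. If `honest`, A only defects by mistake; otherwise
    A defects on purpose and still sends the (possibly free) apology.
    Player B keeps cooperating only as long as each defection against it is
    followed by an apology it considers sincere (cost >= sincerity_threshold)."""
    total_a = total_b = 0.0
    b_cooperates = True
    for _ in range(rounds):
        a_move = "C" if honest else "D"
        if a_move == "C" and random.random() < NOISE:
            a_move = "D"                      # execution noise
        b_move = "C" if b_cooperates else "D"
        total_a += payoff(a_move, b_move)
        total_b += payoff(b_move, a_move)
        if a_move == "D":                     # A apologises after defecting
            total_a -= apology_cost
            total_b += apology_cost
            b_cooperates = apology_cost >= sincerity_threshold
        else:
            b_cooperates = True
    return round(total_a / rounds, 2), round(total_b / rounds, 2)

print("sincere apology, honest partner  :", simulate(2.0, 1.0))
print("fake apology vs demanding partner:", simulate(0.0, 1.0, honest=False))
print("fake apology vs naive partner    :", simulate(0.0, 0.0, honest=False))
```

With a sincere (costly) apology, the pair stays close to full cooperation despite the noise, whereas a free “sorry” is only profitable against a partner who does not demand sincerity, echoing the conclusion that an apology needs to be sincere in order to function properly.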

The third presentation was titled “Complexity and Retrograde Analysis of the Game Dou Shou Qi” by Jan van Rijn and Jonathan Vis. The presentation was by both researchers.

“This game is better known as Jungle Chess”, was the start of their presentation. The audience was immediately captivated: games, AI and chess; the old BNAIC times had returned, and the atmosphere was as in those days. First the rules, then the strategy, and finally the exhaustive enumeration. Two very interesting topics were discussed at length: (1) the complexity of the game, and (2) its characterisation. Jungle Chess is PSPACE-hard. It resembles Stratego, but it is a two-player game of perfect information. Each player has eight pieces: Elephant, Lion, Tiger, Panther, Dog, Wolf, Cat, and Rat. Moreover, there are traps, which give the game a 3D flavour. The challenge is whether it can be proven that the game is PSPACE-complete. The hardness proof builds on a circuit game, apparently in 2D, but it turned out to be 3D (see above), and it is investigated as such following the work of Robert Hearn (US). White Panthers can go back to the input square, so additional White Panthers come into play. The Black pieces have other gadgets. The fascinating lecture ended abruptly with the statement: “that’s Jungle Chess.” The audience was still waiting for Dr. Livingstone (to signify the Jungle’s climax) but understood that it was the end of the session. Mark Winands expressed the feelings of many attendees: “This is an excellent game for a student task in Games and AI.” So be it. We look forward to the next new game in 2014, and of course to part 2 of Jungle Chess.
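The retrograde-analysis part of the talk lends itself to a small sketch. The Python skeleton below shows the standard backward sweep over a game’s state space; the helper functions all_positions, terminal_value, successors and predecessors are placeholders that would have to implement the actual Dou Shou Qi rules (for instance, restricted to small endgames with only a few pieces), so this illustrates the general technique rather than the authors’ implementation.

```python
from collections import deque

def retrograde_analysis(all_positions, terminal_value, successors, predecessors):
    """Backward induction over a finite game graph.
    `terminal_value(pos)` returns "WIN" or "LOSS" for the player to move in a
    terminal position, and None otherwise; the result maps every decided
    position to its game-theoretic value for the player to move."""
    value = {}        # pos -> "WIN" / "LOSS" for the player to move
    remaining = {}    # pos -> number of successors not yet known to be "WIN"
    queue = deque()

    for pos in all_positions():
        v = terminal_value(pos)
        if v is not None:
            value[pos] = v
            queue.append(pos)
        else:
            remaining[pos] = len(list(successors(pos)))

    while queue:
        pos = queue.popleft()
        for pred in predecessors(pos):        # positions one move earlier
            if pred in value:
                continue
            if value[pos] == "LOSS":
                value[pred] = "WIN"           # a move reaches a position lost for the opponent
                queue.append(pred)
            else:
                remaining[pred] -= 1
                if remaining[pred] == 0:      # every move reaches a position won
                    value[pred] = "LOSS"      # by the opponent, so pred is lost
                    queue.append(pred)
    return value
```

Positions that never receive a value in this sweep cannot be forced to a terminal position by either side and can be treated as draws.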