The session was composed of three presentations. The first, by Decebal Constantin Mocanu, titled “Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines”, introduced a new form of restricted Boltzmann machine tailored for automatic transfer between reinforcement learning (RL) tasks. There were questions about the similarity of the domains used for transfer and about the need for a new three-way machine.

The second talk, “Neurally Plausible Reinforcement Learning of Working Memory Tasks”, was presented by Jaldert Rombouts and introduced a biologically plausible neural network model for learning certain partially observable tasks. The approach tries to learn an internal state representation in order to overcome the need to remember the full history. The extent to which such techniques are applicable to more general partially observable learning tasks was briefly discussed.

Kristof Van Moffaert presented the last talk, “Scalarized Multi-Objective Reinforcement Learning: Novel Design Techniques”. This work investigates the Chebyshev scalarization function for use in multi-objective RL. There were questions about some of the experiments that evaluated this new scalarization function.
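As background (not drawn from the talk itself), Chebyshev scalarization collapses a vector of per-objective values into a single scalar by taking a weighted maximum distance to a utopian reference point, which a greedy policy then minimizes. The following is a minimal sketch in Python; the weights, utopian point, and Q-values are all hypothetical and only illustrate the mechanism:

    import numpy as np

    def chebyshev_scalarize(q_values, weights, utopian):
        """Chebyshev scalarization of a vector of per-objective Q-values.

        q_values : shape (n_objectives,), Q_o(s, a) for each objective o
        weights  : shape (n_objectives,), non-negative, typically summing to 1
        utopian  : shape (n_objectives,), reference (utopian) point z*

        Returns SQ(s, a) = max_o w_o * |Q_o(s, a) - z*_o|.
        """
        return np.max(weights * np.abs(np.asarray(q_values) - np.asarray(utopian)))

    # Greedy action selection minimizes the Chebyshev distance to the
    # utopian point (lower is better, unlike linear scalarization).
    weights = np.array([0.6, 0.4])
    utopian = np.array([10.0, 10.0])            # hypothetical reference point
    q_table = {                                  # hypothetical per-action Q-vectors
        "left":  np.array([8.0, 3.0]),
        "right": np.array([6.0, 7.0]),
    }
    greedy = min(q_table, key=lambda a: chebyshev_scalarize(q_table[a], weights, utopian))
    print(greedy)  # "right": the more balanced trade-off across objectives

Unlike a linear weighted sum, this non-linear scalarization penalizes the worst-off objective, which is one reason it is of interest for multi-objective RL.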