The first presentation in this session was given by David Roschewitz from Maastricht University, who presented joint work with Kurt Driessens and Pieter Collins titled “Simultaneous Ensemble Generation and Hyperparameter Optimization for Regression”. The authors presented a method to simultaneously generate ensembles and tune the hyperparameters of the member models for regression problems. In addition, they investigated the use of robust loss functions as well as different methodologies for determining the size of the ensemble. For the problems tested, they observed that the MSE loss function outperforms the robust loss function. Experimental results on several challenging problems showed that, for models with tunable hyperparameter spaces, the proposed techniques significantly outperform single regressors.
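To illustrate the general idea (not the authors' exact algorithm), the sketch below reuses the models trained during a random hyperparameter search as members of an averaging ensemble, so ensemble generation and hyperparameter tuning share a single search budget; the model class, search space, and ensemble-size rule are assumptions for illustration only.

```python
# Sketch: ensemble generation as a by-product of hyperparameter search.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
candidates = []
for _ in range(20):                              # hyperparameter search budget
    params = {"hidden_layer_sizes": (int(rng.integers(8, 64)),),
              "alpha": 10.0 ** rng.uniform(-5, -1)}
    model = MLPRegressor(max_iter=500, random_state=0, **params).fit(X_tr, y_tr)
    candidates.append((mean_squared_error(y_val, model.predict(X_val)), model))

# One simple way to fix the ensemble size: keep the k best candidates
# and average their predictions.
k = 5
ensemble = [m for _, m in sorted(candidates, key=lambda c: c[0])[:k]]
y_pred = np.mean([m.predict(X_val) for m in ensemble], axis=0)
print("ensemble validation MSE:", mean_squared_error(y_val, y_pred))
```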

The second presentation in this session was given by Dr. Marieke van Vugt from the University of Groningen. The title of the paper is “Distracted in a Demanding Task: A Classification Study with Artificial Neural Networks”, which is joint work with Stefan Huijser and Niels Taatgen. The authors analyzed spatial complex working memory tasks by means of Artificial Neural Network (ANN) based classifiers. More precisely, they aim at predicting whether subjects’ thoughts were focused on the task or distracted, based on recorded eye-tracking features and task performance. They found that trial-to-trial task performance is the strongest predictor of distracted thought, whereas eye-tracking features (e.g., pupil size, blink duration, fixation duration) are much less predictive.
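A minimal sketch of this kind of classification setup is given below, assuming a hypothetical per-trial feature table (pupil size, blink duration, fixation duration, trial performance) and synthetic labels; the authors' actual features, preprocessing, and network design may differ.

```python
# Sketch: ANN classifier predicting on-task vs. distracted from per-trial features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 400
# Hypothetical per-trial features (synthetic data for illustration).
X = np.column_stack([
    rng.normal(3.5, 0.4, n_trials),   # pupil size (mm)
    rng.normal(150, 40, n_trials),    # blink duration (ms)
    rng.normal(250, 60, n_trials),    # fixation duration (ms)
    rng.uniform(0, 1, n_trials),      # trial-to-trial task performance
])
# Synthetic labels: distraction made to depend mostly on task performance.
y = (X[:, 3] + 0.1 * rng.normal(size=n_trials) < 0.5).astype(int)  # 1 = distracted

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```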

The third presentation in this session was given by Dimitrios Bountouridis from Utrecht University. The title of the paper is “Melody Retrieval and Classification Using Biologically-Inspired Techniques”, which is joint work with Dan Brown, Hendrik Vincent Koops, Frans Wiering and Remco Veltkamp. The paper aims at enhancing melody retrieval and classification using bioinformatics-based techniques. With regard to efficient classification, the authors employed BLAST (Basic Local Alignment Search Tool) and observed its limitations for complex retrieval tasks. Therefore, they also examined profile hidden Markov models (profile HMMs), which are able to capture salient and robust properties of musical content, called profiles or prototypes. Their experimental results show that BLAST and profile HMMs can be reliable and efficient solutions for large-scale melody classification and retrieval, respectively, without the incorporation of musical heuristics.
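The sketch below illustrates only the underlying idea of alignment-based melody classification: melodies encoded as interval sequences and a query assigned to the class whose prototype gives the best local-alignment score. The prototypes and scoring parameters are hypothetical; the paper's BLAST and profile-HMM pipelines are considerably more elaborate.

```python
# Sketch: classify a query melody by local-alignment score against class prototypes.
def local_align(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score between two symbol sequences."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + sub,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Hypothetical prototypes: pitch-interval sequences (in semitones) per tune family.
prototypes = {
    "family_A": [0, 2, 2, -4, 5, -1],
    "family_B": [0, -3, 7, -2, -2, 1],
}
query = [2, 2, -4, 5]
print(max(prototypes, key=lambda fam: local_align(query, prototypes[fam])))
```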

The last presentation in this session was given by Jaap Kamps from the University of Amsterdam, who presented joint work with Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn and W. Bruce Croft titled “Neural Ranking Models with Weak Supervision”. They proposed to leverage large amounts of unlabeled data to infer “noisy” or “weak” labels and to use that signal for learning supervised models. In particular, the authors used classic unsupervised IR models such as BM25 as a weak supervision signal for training deep neural ranking models. They further studied the impact of weak supervision on various neural ranking models with different ranking architectures and objectives. They also analyzed the behavior of the models to understand what they learn, how the different models relate to each other, and how much training data is needed to go beyond the weak supervision signal. They demonstrated that, in the ranking problem, the performance of deep neural networks trained on a limited amount of supervised data improves significantly when they are initialized from a model pre-trained on weakly labeled data.
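A minimal sketch of the weak-supervision idea follows: BM25 scores over unlabeled query-document pairs serve as noisy targets for training a small neural model. The toy corpus, term-overlap features, and pointwise regression objective are assumptions for illustration; the actual work uses large query logs, learned text representations, and ranking-specific architectures and objectives.

```python
# Sketch: BM25 scores as weak labels for training a small neural ranker.
import numpy as np
from rank_bm25 import BM25Okapi              # pip install rank-bm25
from sklearn.neural_network import MLPRegressor

docs = ["neural ranking with weak supervision",
        "bm25 is a classic unsupervised retrieval model",
        "profile hidden markov models for melodies"]
tokenized = [d.split() for d in docs]
bm25 = BM25Okapi(tokenized)

query = "weak supervision for neural ranking".split()
weak_labels = bm25.get_scores(query)          # unsupervised scores as noisy targets

# Toy feature: query-document term overlap (a stand-in for learned representations).
features = np.array([[sum(t in d for t in query)] for d in tokenized], dtype=float)

ranker = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ranker.fit(features, weak_labels)
print(np.argsort(-ranker.predict(features)))  # documents ranked by the weak ranker
```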