The Knowledge Representation session consisted of three presentations. The topics were diverse, ranging from clinical trials, through requirements engineering and argumentation-based techniques, to Linked Open Data. This session was one of the last sessions of the first day.
The first presentation was titled “Feasibility estimation for clinical trials”, authored by Zhisheng Huang, Frank van Harmelen, Annette ten Teije, and André Dekker. Zhisheng was presenting. The presentation was motivated by the observation that at least 90% of trials are extended by at least 6 weeks because investigators fail to enroll patients on schedule. Therefore, at design time it is important to have good insight into how the choice of eligibility criteria affects the recruitment rate. The authors presented an elegant mathematical model to achieve this goal, together with results on both real and synthetic patient data. To increase the reproducibility of the results, the datasets have been made available online.
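The paper's actual model is not reproduced here, but the core intuition — that each extra eligibility criterion shrinks the eligible pool and stretches enrollment time — can be sketched with a toy calculation. This is my own illustration, assuming criteria filter patients independently, not the authors' model:

```python
# Toy recruitment-time estimate -- NOT the model from the paper.
# Assumption: eligibility criteria filter patients independently,
# so the eligible fraction is the product of per-criterion pass rates.

def eligible_fraction(pass_rates):
    """Expected fraction of the candidate pool meeting all criteria."""
    frac = 1.0
    for p in pass_rates:
        frac *= p
    return frac

def weeks_to_enroll(target, candidates_per_week, pass_rates):
    """Weeks needed to enroll `target` patients, given a weekly
    inflow of candidates and per-criterion pass rates."""
    rate = candidates_per_week * eligible_fraction(pass_rates)
    return target / rate

# Adding one more criterion with a 0.5 pass rate doubles the
# expected enrollment time:
base = weeks_to_enroll(100, 50, [0.8, 0.6])         # ~4.2 weeks
strict = weeks_to_enroll(100, 50, [0.8, 0.6, 0.5])  # ~8.3 weeks
```

Even this crude sketch shows why design-time insight matters: a single seemingly mild criterion can have a large effect on the schedule.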
The second presentation was titled “Capturing Evidence and Rationales with Requirements Engineering and Argumentation-Based Techniques” by Sepideh Ghanavati and Marc van Zee. Marc van Zee was presenting. The presentation discussed a problem from Requirements Engineering. It was noted that in URN, the User Requirements Notation, discussions between stakeholders, and the evidence they are based on, cannot be traced back. The authors propose an extension to URN to capture these discussions, using a hybrid approach based on evidential reasoning, a technique that was used previously in describing criminal cases.
The third presentation was titled “LOD Laundromat: A Uniform Way of Publishing Other People’s Dirty Data” by Wouter Beek, Laurens Rietveld, Hamid Bazoobandi, Jan Wielemaker and Stefan Schlobach. Wouter Beek gave the presentation, using a mix of prepared slides and a live demonstration of the working system. LOD stands for Linked Open Data. The goal of the paper is to make proper data publishing easier: many published datasets do not contain clean data, leaving the Linked Open Data Cloud with a high level of dirty data. The LOD Laundromat removes the dirty data without human intervention, using an automated system of standards-compliant parsing to clean up the data. The Laundromat also provides real-time visualizations of the crawled data.
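The real LOD Laundromat pipeline is far more sophisticated, but the basic idea of using strict, standards-compliant parsing as a cleaning step can be sketched in a few lines. The filter below is a simplified, hypothetical illustration, not the actual system; a real parser would follow the full N-Triples grammar, including escape sequences:

```python
import re

# Simplified N-Triples line filter -- illustrative only, not the
# LOD Laundromat implementation. Lines that do not parse as a
# <subject> <predicate> <object> . triple are dropped.
IRI = r'<[^<>"{}|^`\\\x00-\x20]*>'
BNODE = r'_:[A-Za-z][A-Za-z0-9]*'
LITERAL = (r'"(?:[^"\\]|\\.)*"'                 # quoted string
           r'(?:\^\^' + IRI +                   # optional datatype
           r'|@[a-zA-Z]+(?:-[a-zA-Z0-9]+)*)?')  # or language tag
TRIPLE = re.compile(
    r'^\s*(?:' + IRI + '|' + BNODE + r')\s+'    # subject
    + IRI + r'\s+'                              # predicate
    + r'(?:' + IRI + '|' + BNODE + '|' + LITERAL + r')'  # object
    + r'\s*\.\s*$')

def clean(lines):
    """Keep only syntactically valid triples, drop the rest."""
    return [ln for ln in lines if TRIPLE.match(ln)]

dirty = [
    '<http://ex.org/a> <http://ex.org/p> "ok" .',
    'not a triple at all',
    '<http://ex.org/a> <http://ex.org/p> <http://ex.org/b> .',
]
# clean(dirty) keeps the first and third line
```

The key design point, as in the talk, is that cleaning is fully automatic: no human judges individual triples; the standard itself decides what survives the wash.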
Although the three presentations differed in topic, each was interesting and drew numerous questions from the audience.
by Aske Plaat