Existing Questionnaires: IVA Proceedings 2013-2018
In this study, we collected 189 constructs from IVA Proceedings 2013-2018 and developed an Evaluation Instrument Model for human interaction with ASAs. The model contains 7 main categories: (1) Agent's Basic Properties, (2) Agent's Social Traits, (3) Agent's Impression Left by Interaction, (4) Agent's Role Performance, (5) (Human-Agent) Interaction Quality, (6) Human's Impression Left by Interaction, and (7) Human's Attributes to Support Interaction.
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Deborah Richards, Amal Abdulrahman, and Willem-Paul Brinkman. 2019. What are We Measuring Anyway? - A Literature Survey of Questionnaires Used in Studies Reported in the Intelligent Virtual Agent Conferences. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (IVA '19). Association for Computing Machinery, New York, NY, USA, 159–161. https://doi.org/10.1145/3308532.3329421 and its presentation slides at IVA 2019 (.pptx)
- Siska Fitrianie, Merijn Bruijnes, and Willem-Paul Brinkman. 2018. Technical Report: Existing Questionnaires - IVA Proceedings 2013-2018, No. ASAEvalInst-TR#01, Date: 21-December-2018. (.pdf)
Resulting data:
Study 1: Defining Categories
In this study, we asked participants to place the 189 constructs into the categories of the Evaluation Instrument Model, and to add further categories if necessary. Using a 50% agreement cut-off, we found that 89 constructs (47%) could be placed into one particular category, 99 constructs (52%) could be placed into two categories, and only 1 construct could not be placed into any category. Of the 188 placed constructs, 11 fell outside the 7 main categories (i.e. External Variables, Process Variables, Outcome Variables, and Other).
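As an illustration, the 50% agreement rule can be sketched in a few lines of Python. The category names, vote counts, and the `assign_categories` helper below are hypothetical, not part of the study materials:

```python
from collections import Counter

def assign_categories(votes, cutoff=0.5):
    """Assign a construct to every category chosen by at least
    `cutoff` of the participants (hypothetical helper)."""
    counts = Counter(votes)
    n = len(votes)
    return sorted(c for c, k in counts.items() if k / n >= cutoff)

# Hypothetical votes from 10 participants for one construct:
votes = ["Agent's Social Traits"] * 6 + ["Interaction Quality"] * 4
assign_categories(votes)  # only "Agent's Social Traits" reaches the cut-off
```

With a 5-5 split, both categories reach the 50% cut-off, which is how a construct ends up placed in two categories at once.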
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Deborah Richards, Andrea Bönsch, and Willem-Paul Brinkman. 2020. The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis. In Proc. of the 20th ACM International Conference on Intelligent Virtual Agents (IVA '20). ACM, New York, NY, USA, Article 21, 1–8. https://doi.org/10.1145/3383652.3423873 and its presentation video - .mp4.
Note: The article also includes the results of Study 2 - Defining Constructs.
- Siska Fitrianie, Merijn Bruijnes, and Willem-Paul Brinkman. 2019. Technical Report: Study 1 - Define Categories, No. ASAEvalInst-TR#02, Date: 15-May-2019. (.pdf)
- Presentation at Workshop on Methodology and/of Evaluation of IVAs - IVA Conference 2019
Resulting data:
Study 2: Defining Constructs
In this study, we asked participants to organize 177 constructs into groups in 7 card-sorting tasks corresponding to the 7 main categories of the Evaluation Instrument Model. Using a 50% agreement cut-off, we found that 25 cards (14%) could not be included in any card-sorting group. Based on these groups, we identified a final set of 19 constructs, some of which have dimensions.
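The grouping step can be illustrated with a card-pair co-occurrence computation: a pair of cards whose co-occurrence proportion reaches the 50% cut-off would end up in the same group. The sorts, card names, and `cooccurrence` helper below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sorts):
    """Fraction of participants who placed each pair of cards in the
    same group. `sorts` holds one list of groups per participant."""
    pair_counts = Counter()
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                pair_counts[pair] += 1
    n = len(sorts)
    return {pair: k / n for pair, k in pair_counts.items()}

# Hypothetical sorts by 4 participants (cards are construct names):
sorts = [
    [["trust", "credibility"], ["warmth"]],
    [["trust", "credibility", "warmth"]],
    [["trust", "credibility"], ["warmth"]],
    [["trust"], ["credibility", "warmth"]],
]
cooc = cooccurrence(sorts)
# ("credibility", "trust") co-occurs in 3 of 4 sorts (0.75), above the cut-off
```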
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Deborah Richards, Andrea Bönsch, and Willem-Paul Brinkman. 2020. The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis. In Proc. of the 20th ACM International Conference on Intelligent Virtual Agents (IVA '20). ACM, New York, NY, USA, Article 21, 1–8. https://doi.org/10.1145/3383652.3423873 and its presentation video (.mp4, on YouTube)
Note: The article also includes the results of Study 1 - Defining Categories.
- Siska Fitrianie, Merijn Bruijnes, and Willem-Paul Brinkman. 2020. Technical Report: Study 2 - Define Construct, No. ASAEvalInst-TR#03, Date: 25-January-2020. (.pdf)
Resulting data:
- Existing IVA-constructs in 7 categories: aImpr.csv, aProp.csv, aRole.csv, aSoc.csv, hAttr.csv, hImpr.csv, and intQ.csv
- Experts' discussion on defining the construct sets (.xlsx)
- 19 constructs and their dimensions (.pdf)
Study 3: Collecting Questionnaire Items
In this study, we collected 431 relevant questionnaire items for the 19 constructs and their dimensions.
The result of the study is reported in:
Resulting data:
Study 4: Defining Questionnaire Items
In this study, we asked participants to validate the questionnaire items collected in the previous study. This resulted in 207 content-validated items for the 19 constructs and their dimensions.
The result of the study is reported in:
Resulting data:
Study 5: Collecting Prototypical ASAs
In this study, we asked members of the workgroup to join efforts in collecting existing artificial social agents, including, where available, a video link and a description of each. Three sets of videos were then selected as stimuli for the subsequent studies.
The result of the study is reported in:
Resulting data:
- Experts' discussion on the list of prototypical ASAs (.xlsx)
- 56 short video clips (30 s each) of prototypical ASAs (.mp4)
- 3 experts' prediction scores on 54 ASAs (.csv): expert A, expert B, and expert C; or their average ratings of the ASAs, used for selection purposes
Study 6: Reliability Analysis of the Questionnaire Items
This study is the first into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers registered on an online crowdsourcing platform, who were asked to use the questionnaire instrument to rate an interaction between an agent and a human user shown in a 30-second video clip (produced in Study 5: Collecting Prototypical ASAs). The results of this study were used to analyze the internal consistency between the items of the questionnaire's measurement constructs. The analysis resulted in 131 reliability-analyzed questionnaire items for the 19 constructs and their dimensions.
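Internal consistency of a construct's items is conventionally quantified with Cronbach's alpha. A minimal sketch, using hypothetical ratings rather than the study data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds one item's scores across participants)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical ratings from 5 participants on 3 items of one construct:
items = [
    [1, 2, 3, 2, 3],
    [1, 2, 3, 3, 3],
    [2, 2, 3, 2, 3],
]
alpha = cronbach_alpha(items)  # high alpha suggests consistent items
```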
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, and Willem-Paul Brinkman. 2021. Questionnaire Items for Evaluating Artificial Social Agents - Expert Generated, Content Validated and Reliability Analysed. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents (IVA '21). Association for Computing Machinery, New York, NY, USA, 84–86. https://doi.org/10.1145/3472306.3478341 see its presentation video (.mp4) and its poster (.pdf).
Note: This paper also includes the results of Study 3 - Collecting Questionnaire Items and Study 4 - Defining Questionnaire Items.
- Siska Fitrianie, Merijn Bruijnes, and Willem-Paul Brinkman. 2021. Technical Report: Study 6 - Analyzing Reliability of Questionnaire Items, No. ASAEvalInst-TR#07, Date: 17-June-2021. (.pdf)
Resulting data:
Study 7: Construct Validity: Convergent and Discriminant Validity Analysis
Study 7 is the second study into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers on an online crowdsourcing platform, who were asked to use the questionnaire instrument to rate an interaction between an agent and a human user shown in a 30-second video clip. Each participant was randomly assigned to rate one of 14 different ASA prototypes. The gathered data were analyzed to examine the association of the questionnaire items with the latent constructs, i.e. construct validity. The analysis included several factor analysis models and resulted in the selection of 90 items for inclusion in the long version of the ASA questionnaire. In addition, a representative item for each construct or dimension was selected to create a 24-item short version of the ASA questionnaire. Whereas the long version is suitable for a comprehensive evaluation of human-ASA interaction, the short version allows a quick analysis and description of the interaction with the ASA. To support reporting ASA questionnaire results, we also put forward an ASA chart, which provides a quick overview of an agent's profile.
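The item-selection logic can be caricatured as follows. The 0.7 loading cut-off, the item texts, and the `select_items` helper are illustrative assumptions only; the actual selection relied on several factor analysis models, as described above:

```python
def select_items(loadings, long_cutoff=0.7):
    """Split items into a long version (loading >= cutoff) and a
    short version (best-loading item per construct).
    `loadings` maps (construct, item) -> factor loading."""
    long_version = {key for key, l in loadings.items() if l >= long_cutoff}
    best = {}
    for (construct, item), l in loadings.items():
        if construct not in best or l > best[construct][1]:
            best[construct] = (item, l)
    short_version = {c: item for c, (item, l) in best.items()}
    return long_version, short_version

# Hypothetical loadings of items on their intended constructs:
loadings = {
    ("Likeability", "The agent is pleasant"): 0.84,
    ("Likeability", "The agent is friendly"): 0.78,
    ("Usability", "The agent is easy to use"): 0.91,
    ("Usability", "Using the agent is hard"): 0.52,  # dropped from long version
}
long_v, short_v = select_items(loadings)
```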
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. The artificial-social-agent questionnaire: establishing the long and short questionnaire versions. In Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA '22). Association for Computing Machinery, New York, NY, USA, Article 18, 1–8. https://doi.org/10.1145/3514197.3549612
- Siska Fitrianie, Merijn Bruijnes, and Willem-Paul Brinkman. 2022. Technical Report: Study 7 - Construct Validity: Convergent and Discriminant Validity analysis, No.ASAEvalInst-TR#08, Date: 13-July-2022. (.pdf)
- Study data management plan
- Study ethical committee checklist
- Study approval from the TUDelft Ethical Committee
Resulting data:
- Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. Artificial Social Agent Questionnaire Instrument. (2022). https://doi.org/10.4121/19650846 4TU.ResearchData.
- Siska Fitrianie, Merijn Bruijnes, Fengxiang Li, Amal Abdulrahman, and Willem-Paul Brinkman. 2022. Data and analysis underlying the research into the Artificial-Social-Agent Questionnaire: Establishing the long and short questionnaire versions. (2022). https://doi.org/10.4121/19758436 4TU.ResearchData.
- Crowd-workers' answers on 131 questionnaire items (.csv)
Study 8: Cross Validation Final Questionnaire Set
In Study 8, we determined the generalization performance of the long and short questionnaire versions resulting from Study 7 (i.e. cross-validation: fitting the model on data from a new set of ASAs). We recruited 544 new Prolific Academic crowd-workers; data collection took place between 5 and 15 September 2022. Each participant was randomly assigned to rate an interaction, shown in a video, between a human user and one of 15 selected agents (13 ASAs, one zombie, and one fish).
In the journal article below, we report the reliability, validity, and interpretability of the ASAQ based on the combined data of three studies: the reliability study (Study 6) and two follow-up studies aimed at construct validation and cross-validation (Study 7 and Study 8). The result is a questionnaire that can capture more than 80% of the constructs that studies in the intelligent virtual agent community investigate, with acceptable levels of reliability, content validity, construct validity, and cross-validity. The long version of the ASAQ is suitable for a comprehensive evaluation of human-ASA interaction, while the short version allows a quick analysis and description of the interaction with the ASA. The ASAQ is also supported by two charts for reporting questionnaire results and giving a quick overview of an agent's profile: the ASAQ Chart can be used to compare the ASAQ results of up to 4 ASAs on the original -3 to 3 scale, while the ASAQ Percentile Chart can be used to contrast ASAQ results with the ASAQ Representative Set, a dataset of representative ASAs and their unique participants' ASAQ ratings. The article also gives instructions for practical use, such as sample size estimation, and explains how to interpret and present results.
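A percentile-chart entry amounts to a percentile rank of one construct score against the reference ratings. The sketch below uses invented reference ratings and a hypothetical `percentile_of` helper, not the actual ASAQ Representative Set:

```python
from bisect import bisect_right

def percentile_of(score, reference):
    """Percentile rank of one construct score against reference
    (normative) ratings: the share of ratings at or below the score."""
    ranked = sorted(reference)
    return 100.0 * bisect_right(ranked, score) / len(ranked)

# Hypothetical reference ratings for one construct on the -3..3 scale:
reference = [-2, -1, -1, 0, 0, 1, 1, 2, 2, 3]
percentile_of(1, reference)  # the agent scores at or above 70% of the set
```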
The result of the study is reported in:
- Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, and Willem-Paul Brinkman. 2025. The Artificial Social Agent Questionnaire (ASAQ) - Development and evaluation of a validated instrument for capturing human interaction experiences with artificial social agents. International Journal of Human-Computer Studies (2025), 103482. https://doi.org/10.1016/j.ijhcs.2025.103482
- Study data management plan
- Study ethical committee checklist
- Study approval from TUDelft Ethical Committee
Resulting data:
- Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, and Willem-Paul Brinkman. 2025. Data and Analysis Underlying the Research into The Artificial Social Agent Questionnaire (ASAQ) - Development and Evaluation of a Validated Instrument for Capturing Human Interaction Experiences with Artificial Social Agents. https://doi.org/10.4121/4fe035a8-45ff-4ffc-a269-380d09361029. 4TU.ResearchData.
Study 9: Concurrent Analysis and Normative Dataset Development
In Study 9, we compare the newly developed Artificial-Social-Agent (ASA) questionnaire with other existing relevant questionnaires (i.e., concurrent validity) and, at the same time, develop a normative dataset for the ASA questionnaire based on widely used and accessible ASAs. Data are gathered by asking participants (crowd-workers) to rate their interaction with their most familiar ASA simultaneously on the ASA questionnaire and on the other (existing) questionnaires. We analyze whether the results from the ASA questionnaire correlate well with those from the selected existing questionnaires. In parallel, per selected ASA, we collect the participants' ASA questionnaire results to develop a normative dataset.
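Concurrent validity comes down to correlating paired scores from the two instruments. A self-contained Pearson correlation sketch, with invented per-participant scores:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant scores on an ASAQ construct and on an
# existing questionnaire measuring a similar construct:
asaq = [1.2, -0.5, 2.0, 0.3, -1.1]
existing = [3.9, 2.1, 4.6, 3.0, 1.8]
r = pearson(asaq, existing)  # a high r would indicate concurrent validity
```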
Translation of the ASAQ
Translating the ASAQ into languages other than English allows studies to include other populations and to compare them. We have therefore carried out several projects translating the ASAQ into other languages. Each translation consists of three construction cycles that include forward and backward translations and involve bilingual crowdworkers from an online crowdsourcing platform. As this is only a starting point, we will continue to encourage more translation projects in the future. We also hope that the translation projects and the translated versions of the ASAQ will motivate researchers to study human-ASA interactions among different populations around the world and to study cultural similarities and differences in this area.
- Chinese:
- Fengxiang Li, Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, Fu Guo, and Willem-Paul Brinkman. 2023. Mandarin Chinese translation of the Artificial-Social-Agent questionnaire instrument for evaluating human-agent interaction. Frontiers in Computer Science, Sec. Human-Media Interaction, Volume 5. https://doi.org/10.3389/fcomp.2023.1149305
- Dutch and German:
- Nele Albers, Andrea Bönsch, Jonathan Ehret, Boleslav A. Khodakov, and Willem-Paul Brinkman. 2024. German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions. In Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents (IVA '24). Association for Computing Machinery, New York, NY, USA, Article 33, 1–4. https://doi.org/10.1145/3652988.3673928
- Nele Albers, Andrea Bönsch, Jonathan Ehret, Boleslav A. Khodakov, and Willem-Paul Brinkman (2024). German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions: Final Questionnaires, Data, Analysis Code and Appendix. 4TU Data Repository https://doi.org/10.4121/a1457cc7-424a-4bb1-aeac-1288b5178fbe