Publication

Existing Questionnaires: IVA Proceedings 2013-2018

In this study, we collected 189 constructs from IVA Proceedings 2013-2018 and developed an Evaluation Instrument Model for human interaction with ASAs. The model contains 7 main categories: (1) Agent's Basic Properties, (2) Agent's Social Traits, (3) Agent's Impression Left by Interaction, (4) Agent's Role Performance, (5) (Human-Agent) Interaction Quality, (6) Human's Impression Left by Interaction, and (7) Human's Attributes to Support Interaction.

The result of the study is reported in:

Resulting data:

 

Study 1: Defining Categories

In this study, we asked the participants to place 189 constructs into categories of the Evaluation Instrument Model. We also asked them to add more categories if necessary. Using a 50% agreement cut-off value, we found that 89 constructs (47%) could be placed into one particular category, whereas 99 constructs (52%) could be placed into two particular categories, and only 1 construct could not be placed into any category. Among these 188 constructs, 11 were placed outside the 7 main categories (i.e., under External Variables, Process Variables, Outcome Variables, and Other).
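The 50% agreement cut-off can be sketched as follows. This is a hypothetical illustration, not the study's actual analysis code; it assumes we have, per construct, a count of how many participants placed it in each category (participants could endorse more than one category, so counts may sum to more than the number of participants):

```python
def categories_meeting_cutoff(placements, n_participants, cutoff=0.5):
    """Return the categories that at least `cutoff` of participants chose.

    placements: dict mapping category name -> number of participants who
    placed the construct in that category.
    """
    return sorted(cat for cat, count in placements.items()
                  if count / n_participants >= cutoff)
```

A construct ends up in one category, two categories, or none, depending on how many categories clear the cut-off.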

The result of the study is reported in:

Resulting data:

 

Study 2: Defining Constructs

In this study, we asked the participants to organize 177 constructs into groups using 7 card-sorting tasks corresponding to the 7 main categories of the Evaluation Instrument Model. Using a 50% agreement cut-off value, we found that 25 cards (14%) could not be included in any card-sorting group. Based on these groups, we identified a final set of 19 constructs, some of which have dimensions.

The result of the study is reported in:

Resulting data:

 
 

Study 4: Defining Questionnaire Items

In this study, we asked the participants to validate the questionnaire items collected in the previous study. This resulted in 207 content-validated items for the 19 constructs and their dimensions.

The result of the study is reported in:

Resulting data:

 

Study 5: Collecting Prototypical ASAs

In this study, we asked members of the workgroup to join efforts in collecting existing artificial social agents, including their video links and descriptions (if available). Three sets of videos were then selected as stimuli for subsequent studies.

The result of the study is reported in:

Resulting data:

 

Study 6: Reliability Analysis of the Questionnaire Items

This study is the first study into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers registered on an online crowdsourcing platform. They were asked to use the questionnaire instrument to rate an interaction between an agent and a human user, which was displayed in a 30-second video clip (resulting from Study 5: Collecting Prototypical ASAs). The results of this study were used to analyze the internal consistency between items of the questionnaire's measurement constructs. The analysis resulted in 131 reliability-analyzed questionnaire items for the 19 constructs and their dimensions.
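Internal consistency of a construct's items is commonly summarized with Cronbach's alpha. The following is a minimal sketch of that computation, not the study's actual analysis code; it assumes ratings are arranged with one row per respondent and one column per item of a single construct:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of sum scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

Values closer to 1 indicate that the items of a construct vary together across respondents.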

The result of the study is reported in:

Resulting data:

 

Study 7: Construct Validity: Convergent and Discriminant Validity Analysis

Study 7 is the second study into the validation of the questionnaire instrument for evaluating human interaction with an artificial social agent. It involved crowdworkers on an online crowdsourcing platform. They were asked to use the questionnaire instrument to rate an interaction between an agent and a human user, which was displayed in a 30-second video clip. Each participant was randomly assigned to rate one of 14 different ASA prototypes. The gathered data were analyzed to examine the association of the questionnaire items with the latent constructs, i.e., construct validity. The analysis included several factor-analysis models and resulted in the selection of 90 items for inclusion in the long version of the ASA questionnaire. In addition, a representative item of each construct or dimension was selected to create a 24-item short version of the ASA questionnaire. Whereas the long version is suitable for a comprehensive evaluation of human-ASA interaction, the short version allows quick analysis and description of the interaction with the ASA. To support reporting ASA questionnaire results, we also put forward an ASA chart, which provides a quick overview of an agent's profile.

The result of the study is reported in:

Resulting data:

 

Study 8: Cross-Validation of the Final Questionnaire Set

In Study 8, we determined the generalization performance of the long and short questionnaire versions resulting from Study 7 (i.e., cross-validation: fitting the model on a data set from a new set of ASAs). We recruited 544 new Prolific Academic crowd-workers; data collection took place between 5 and 15 September 2022. Each participant was randomly assigned to rate an interaction, shown in a video, between a human user and one of 15 selected agents (13 ASAs, one zombie, and one fish).

In the journal article below, we report the reliability, validity, and interpretability of the ASAQ based on the combined data from three studies: the reliability study (Study 6) and two follow-up studies (Study 7 and Study 8, aimed at construct validation and cross-validation). The result is a questionnaire that can capture more than 80% of the constructs that studies in the intelligent virtual agent community investigate, with acceptable levels of reliability, content validity, construct validity, and cross-validity. The long version of the ASAQ is suitable for a comprehensive evaluation of human-ASA interaction, while the short version allows quick analysis and description of the interaction with the ASA. The ASAQ is also supported by two charts for reporting questionnaire results and giving a quick overview of an agent's profile. The ASAQ Chart can be used to compare the ASAQ results of up to 4 ASAs on the original -3 to 3 scale, while the ASAQ Percentile Chart can be used to contrast ASAQ results with the ASAQ Representative Set, a dataset of representative ASAs and their unique participants' ASAQ ratings. The article also gives instructions for practical use, such as sample-size estimation, and explains how to interpret and present results.
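Contrasting an agent's ASAQ result with the ASAQ Representative Set, as the ASAQ Percentile Chart does, amounts to computing a percentile rank against the representative scores. A minimal sketch (hypothetical, not the published chart code) under the assumption that the representative set provides a list of construct scores:

```python
def percentile_rank(score, representative_scores):
    """Percentage of representative-set scores at or below the given score."""
    below = sum(1 for s in representative_scores if s <= score)
    return 100.0 * below / len(representative_scores)
```

For example, an agent scoring at or above four of five representative scores would sit at the 80th percentile for that construct.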

The result of the study is reported in:

Resulting data:

  • Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, and Willem-Paul Brinkman. 2025. Data and Analysis Underlying the Research into The Artificial Social Agent Questionnaire (ASAQ) - Development and Evaluation of a Validated Instrument for Capturing Human Interaction Experiences with Artificial Social Agents. https://doi.org/10.4121/4fe035a8-45ff-4ffc-a269-380d09361029. 4TU.ResearchData.
 

Study 9: Concurrent Analysis and Normative Dataset Development

In Study 9, we compare the newly developed Artificial-Social-Agent (ASA) questionnaire with other existing relevant questionnaires (i.e., concurrent validity) while, at the same time, developing a normative dataset for the ASA questionnaire based on widely used and accessible ASAs. Data are gathered by asking participants (crowd-workers) to rate their interaction with their most familiar ASA simultaneously on the ASA questionnaire and the other (existing) questionnaires. We analyze the correlation between results from the ASA questionnaire and the selected existing questionnaires to examine whether the ASA questionnaire correlates well with them. In parallel, per selected ASA, we collect the participants' ASA questionnaire results to develop a normative dataset.
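Concurrent validity is typically examined by correlating, across participants, the scores a construct receives on the two instruments. A minimal Pearson-correlation sketch (illustrative only, not the study's analysis code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A high correlation between an ASAQ construct and the corresponding scale of an established questionnaire would support concurrent validity.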

 

Translation of the ASAQ

To expand the use of the ASAQ to studies conducted in languages other than English, translating the ASAQ into more languages allows for the inclusion of other populations and the ability to compare them. That is why we have carried out several projects translating the ASAQ into other languages. Each translation consists of three construction cycles that include forward and backward translations and involve bilingual crowdworkers from an online crowdsourcing platform. Because this is a starting point, we will continue to encourage more translation projects in the future. We also hope that the translation projects and the translated versions of the ASAQ will motivate researchers to study human-ASA interactions among different populations around the world and to study cultural similarities and differences in this area.

  • Chinese:

    Fengxiang Li, Siska Fitrianie, Merijn Bruijnes, Amal Abdulrahman, Fu Guo, and Willem-Paul Brinkman. 2023. Mandarin Chinese translation of the Artificial-Social-Agent questionnaire instrument for evaluating human-agent interaction. Frontiers in Computer Science, Sec. Human-Media Interaction, Volume 5. https://doi.org/10.3389/fcomp.2023.1149305

  • Dutch and German:
    • Nele Albers, Andrea Bönsch, Jonathan Ehret, Boleslav A. Khodakov, and Willem-Paul Brinkman. 2024. German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions. In Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents (IVA '24). Association for Computing Machinery, New York, NY, USA, Article 33, 1–4. https://doi.org/10.1145/3652988.3673928
    • Nele Albers, Andrea Bönsch, Jonathan Ehret, Boleslav A. Khodakov, and Willem-Paul Brinkman (2024). German and Dutch Translations of the Artificial-Social-Agent Questionnaire Instrument for Evaluating Human-Agent Interactions: Final Questionnaires, Data, Analysis Code and Appendix. 4TU Data Repository https://doi.org/10.4121/a1457cc7-424a-4bb1-aeac-1288b5178fbe