Component-Based Usability Questionnaire (CBUQ)
With the Component-Based Usability Questionnaire (CBUQ) it is possible to measure the perceived usability of a specific part of an interactive system; the questionnaire can therefore be used for component-based usability testing. To measure perceived usability, the CBUQ uses Davis' six Perceived Ease-of-Use (PEOU) statements from the Technology Acceptance Model (Table 1).
No | Statement |
---|---|
1 | Learning to operate [name] would be easy for me |
2 | I would find it easy to get [name] to do what I want it to do |
3 | My interaction with [name] would be clear and understandable |
4 | I would find [name] to be flexible to interact with |
5 | It would be easy for me to become skillful at using [name] |
6 | I would find [name] easy to use |
For each part, the questionnaire includes these six statements, where [name] is replaced by the name of the part, i.e. the interaction component. To help a person identify an interaction component, the statements should be accompanied by a clear description of the component and, if possible, a picture. Each statement is rated on a 7-point Likert scale. Below is an example taken from a questionnaire to measure the perceived usability of the File Control of an MP3 player.
File Control
Below are six statements about the File Control. Please rate the likelihood of each statement if you were to use the File Control in the future. You can indicate your rating by placing an X in one of the seven circles after each statement.
Description
The File Control (see figures on the right) can be used to search and open a single or a group of MP3 files.
Statement | 1 (extremely unlikely) | 2 (quite unlikely) | 3 (slightly unlikely) | 4 (neither) | 5 (slightly likely) | 6 (quite likely) | 7 (extremely likely) |
---|---|---|---|---|---|---|---|
Learning to operate the File Control would be easy for me | O | O | O | O | O | O | O |
I would find it easy to get the File Control to do what I want it to do | O | O | O | O | O | O | O |
My interaction with the File Control would be clear and understandable | O | O | O | O | O | O | O |
I would find the File Control to be flexible to interact with | O | O | O | O | O | O | O |
It would be easy for me to become skilful at using the File Control | O | O | O | O | O | O | O |
I would find the File Control easy to use | O | O | O | O | O | O | O |
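The statement text for each component follows directly from the six templates in Table 1. A minimal Python sketch of this substitution (using the File Control as an example) could look as follows:

```python
# Sketch: instantiate the six PEOU statement templates for one
# interaction component by replacing [name].
PEOU_TEMPLATES = [
    "Learning to operate [name] would be easy for me",
    "I would find it easy to get [name] to do what I want it to do",
    "My interaction with [name] would be clear and understandable",
    "I would find [name] to be flexible to interact with",
    "It would be easy for me to become skillful at using [name]",
    "I would find [name] easy to use",
]

def statements_for(component_name: str) -> list[str]:
    """Return the six CBUQ statements for one interaction component."""
    return [t.replace("[name]", component_name) for t in PEOU_TEMPLATES]

# Example: the File Control component of the MP3 player
for statement in statements_for("the File Control"):
    print(statement)
```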
A CBUQ questionnaire includes several sections, each measuring the perceived usability of a specific interaction component. To avoid a possible order effect, the order in which these sections appear in the questionnaire can be varied, e.g. counterbalanced. For example, 50% of the participants rate the interaction components in the order A, B, C, D, while the other 50% rate them in the order D, C, B, A.
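A minimal sketch of such a counterbalanced assignment, assuming participants are simply alternated between the two section orders:

```python
# Sketch: counterbalance the order of questionnaire sections by
# alternating two reversed orders across participants.
components = ["A", "B", "C", "D"]

def section_order(participant_index: int) -> list[str]:
    """Even-numbered participants get A..D, odd-numbered get the reverse."""
    return components if participant_index % 2 == 0 else list(reversed(components))

for p in range(4):
    print(p, section_order(p))
```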
The perceived usability score of an interaction component is calculated by taking the average score of the six items. This value can be compared with the norm value of 5.29, e.g. with a one-sample Student's t-test. Above this break-even point, the argument can be made that the perceived usability is more comparable to a norm set of easy-to-use interaction components; below this point, the perceived usability is more comparable to a norm set of less easy-to-use interaction components.
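For illustration, a sketch of this scoring and comparison, assuming per-participant ratings on the six items are available and using SciPy's one-sample t-test (the ratings below are made up):

```python
# Sketch: CBUQ scoring and comparison against the 5.29 norm value.
import numpy as np
from scipy import stats

# Made-up data: rows = participants, columns = the six PEOU items (1-7).
ratings = np.array([
    [6, 5, 6, 5, 6, 6],
    [7, 6, 6, 5, 7, 6],
    [5, 5, 4, 4, 5, 5],
    [6, 6, 6, 6, 7, 7],
])

NORM = 5.29  # break-even point reported for the CBUQ

# Perceived usability score per participant: mean of the six items.
scores = ratings.mean(axis=1)

# One-sample t-test of the component's scores against the norm value.
t_stat, p_value = stats.ttest_1samp(scores, NORM)
print(f"mean = {scores.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```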
When reporting the results of a CBUQ questionnaire, please include a reference to the following journal publication:
- Brinkman, W.P., Haakma, R., and Bouwhuis, D.G. (2009). Theoretical foundation and validity of a component-based usability questionnaire. Behaviour & Information Technology, 28(2), 121-137.
A preliminary version of the published paper is available. Furthermore, an example study that uses the CBUQ questionnaire, including instructions, questionnaires, and data analysis, is also available. Note that in this example the results of the ratings are compared with the rationale given in an open question about the usability rating. This, however, is not part of a standard CBUQ questionnaire.
Dialogue Experience Questionnaire (DEQ)
The Dialogue Experience Questionnaire (DEQ) was developed to measure users' dialogue experience with a computer avatar. The operationalisation of the dialogue experience construct is presented in Table 2.
(Sub)dimension | Explanation |
---|---|
flow | |
--dialogue speed | Pauses between responses of the computer and the user, and the accompanying feeling this caused |
--interruption | The computer talking before the user was finished talking |
--correctness locally | Correctness of the responses from the computer on the user's responses |
--correctness globally | Correctness of the entire dialogue and consistency between the different question lines |
interaction | |
--involvement | The impression the user got from the avatars and their shown interest in what the user said |
--discussion satisfaction | The feeling the user got during the question phase and how the user experienced the answers and attention from the avatars. |
--reality | How natural the conversation felt |
The score for a sub-dimension is calculated by averaging the responses to the items of that sub-dimension. Items are scored on a 7-point Likert scale ranging from strongly disagree (1) to strongly agree (7). The value of items marked with * should first be reversed (reversed score = 8 - item value). Both an English and a Dutch version of the questionnaire exist.
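A minimal sketch of this scoring rule, assuming the item responses and the set of reverse-scored items for a sub-dimension are known (the item labels here are hypothetical):

```python
# Sketch: DEQ sub-dimension scoring with reverse-scored items.
def subdimension_score(responses: dict[str, int], reversed_items: set[str]) -> float:
    """Average the item responses, reversing marked items first (8 - value)."""
    adjusted = [
        8 - value if item in reversed_items else value
        for item, value in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Made-up responses (1 = strongly disagree ... 7 = strongly agree).
responses = {"flow_1": 6, "flow_2": 5, "flow_3*": 2}
print(subdimension_score(responses, reversed_items={"flow_3*"}))
```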
When reporting the results of the questionnaire, please include a reference to the following journal publication:
- ter Heijden, N., and Brinkman, W.P. (2011). Design and evaluation of a virtual reality exposure therapy system with automatic free speech interaction. Journal of CyberTherapy and Rehabilitation, 4(1), 41-55.
A preliminary version of the published paper is available.
Artificial Social Agent Questionnaire (ASAQ)
I am involved in the Open Science Foundation workgroup that aims to develop an evaluation instrument for Artificial Social Agents (ASAs). This instrument will help researchers make claims about people's perceptions, attitudes, and beliefs towards their agents. It will allow agents to be compared across user studies and, importantly, it helps in replicating our scientific findings. This is essential for the community if we want to make valid claims about the impact that our social agents can have in domains such as health, entertainment, and education. Information about the Open Science Foundation workgroup can be found on the website or on OSF. The ASAQ has a long version (90 items) and a short version (24 items), and is supported by an ASA profile generator.
References
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.