Informal feedback rather than performance measurements: user-centred evaluation in Scrum projects
Reykjavik University.
Uppsala University.
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. ORCID iD: 0000-0002-2411-6417
2014 (English). In: Behaviour and Information Technology, ISSN 0144-929X, E-ISSN 1362-3001, Vol. 33, no. 11, pp. 1118-1135. Article in journal (Refereed). Published.
Abstract [en]

The main features of the Scrum process are intense communication between different stakeholders and rapid feedback based on the regular delivery of working software. Integrating traditional user-centred evaluation activities in this context is challenging. Hence, this paper presents an interview study of 21 informants, categorised into four different professional roles. The main contribution of the paper is an overview of the types of user-centred evaluation conducted by information technology professionals in various Scrum projects. Results show that various forms of feedback are indeed gathered on the usability and user experience of the software, system or service being developed. However, the user-centred evaluations conducted are typically informal, involve few users, gather qualitative empirical data and are performed during short, unplanned sessions. Performance measurements gathering quantitative data are seldom used. Informants in the business specialist role merely ask users for their opinions, whereas the other roles use several user-centred evaluation activities to gather feedback on their designs. Generally, feedback is gathered throughout the whole project, but evaluation is often conducted early in the project or even before actual development starts. Finally, these results are discussed in relation to previous studies in the area.

Place, publisher, year, edition, pages
2014. Vol. 33, no. 11, pp. 1118-1135.
Keyword [en]
user-centred evaluation, usability, user experience, Scrum, human-centred activities, agile development
National Category
Human Computer Interaction
URN: urn:nbn:se:kth:diva-95301
DOI: 10.1080/0144929X.2013.857430
ISI: 000342237100002
Scopus ID: 2-s2.0-84907249124
OAI: diva2:527507

QC 20141027. Updated from submitted to published.

Available from: 2012-05-21. Created: 2012-05-21. Last updated: 2014-10-27. Bibliographically approved.
In thesis
1. User Centred Evaluation in Experimental and Practical Settings
2012 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The objective of this thesis is to obtain knowledge regarding how effective user centred evaluation methods are and how user centred evaluations are conducted by IT professionals. This is achieved by exploring user centred evaluation in experimental and practical settings. The knowledge gained in these studies should inspire suggestions for further research and for improvements to the user centred evaluation activity.

Two experimental studies were conducted. One compares the results of using three user centred evaluation methods, and the other examines two factors in conducting heuristic evaluation. The results show that, of the three methods, the think-aloud evaluation method was the most effective at finding realistic usability problems. The number of critical problems found during think-aloud evaluation increases if heuristic evaluation is conducted prior to the think-aloud evaluations.

Further, two studies of user centred evaluation in practical settings were performed. The IT professionals participating in those studies were using the software development process Scrum to plan their work. The results show that user centred evaluation is infrequently conducted in Scrum projects, compared to testing activities like acceptance testing. The main type of evaluation is qualitative. Few participants measure user performance or use surveys to gather quantitative results on the usability and the user experience. IT professionals get feedback from users in an informal way and gather informal feedback from peers. Many participants use a mixture of methods for gathering feedback on their work.

The outcome of this thesis shows that IT professionals should be encouraged to include users whenever possible when evaluating software, for example by using the think-aloud method. Conducting heuristic evaluation prior to think-aloud evaluations is also recommended. In addition, IT professionals are encouraged to evaluate their software frequently and informally, rather than waiting for the right time to conduct a thorough quantitative evaluation.

To advance this field further, researchers who want to improve the evaluation activity for IT professionals should study how user centred evaluation methods can be combined efficiently and how the use of qualitative evaluation methods can be made more effective.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2012. xii, 80 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2012:05
Keyword [en]
User centred evaluation, Scrum, evaluation methods, agile software development
National Category
Human Computer Interaction
URN: urn:nbn:se:kth:diva-95302
ISBN: 978-91-7501-357-2
Public defence
2012-06-08, F3, Lindstedtsvägen 26, KTH, Stockholm, 13:15 (English)
QC 20120522. Available from: 2012-05-22. Created: 2012-05-21. Last updated: 2012-05-22. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Lárusdóttir, Marta K.; Gulliksen, Jan
By organisation
Media Technology and Interaction Design, MID
In the same journal
Behaviour and Information Technology
