Publications (10 of 78)
Prieto, L. P., Viberg, O., Yip, J. C. & Topali, P. (2025). Aligning human values and educational technologies with value-sensitive design. British Journal of Educational Technology, 56(4), 1299-1310
Aligning human values and educational technologies with value-sensitive design
2025 (English). In: British Journal of Educational Technology, ISSN 0007-1013, E-ISSN 1467-8535, Vol. 56, no 4, p. 1299-1310. Article in journal, Editorial material (Refereed). Published.
Place, publisher, year, edition, pages
Wiley, 2025
National Category
Other Educational Sciences
Identifiers
urn:nbn:se:kth:diva-364531 (URN), 10.1111/bjet.13602 (DOI), 001482010700001 (), 2-s2.0-105004268840 (Scopus ID)
Note

QC 20250618

Available from: 2025-06-18. Created: 2025-06-18. Last updated: 2025-06-18. Bibliographically approved.
Hedlin, E., Estling, L., Wong, J., Demmans Epp, C. & Viberg, O. (2025). Got It! Prompting Readability Using ChatGPT to Enhance Academic Texts for Diverse Learning Needs. In: 15th International Conference on Learning Analytics and Knowledge, LAK 2025. Paper presented at 15th International Conference on Learning Analytics and Knowledge, LAK 2025, Dublin, Ireland, March 3-7, 2025 (pp. 115-125). Association for Computing Machinery (ACM)
Got It! Prompting Readability Using ChatGPT to Enhance Academic Texts for Diverse Learning Needs
2025 (English). In: 15th International Conference on Learning Analytics and Knowledge, LAK 2025, Association for Computing Machinery (ACM), 2025, p. 115-125. Conference paper, Published paper (Refereed).
Abstract [en]

Reading skills are crucial for students' success in education and beyond. However, reading proficiency among K-12 students has been declining globally, including in Sweden, leaving many underprepared for post-secondary education. Additionally, an increasing number of students have reading disorders, such as dyslexia, which require support. Generative artificial intelligence (genAI) technologies, like ChatGPT, may offer new opportunities to improve reading practices by enhancing the readability of educational texts. This study investigates whether ChatGPT-4 can simplify academic texts and which prompting strategies are most effective. We tasked ChatGPT with rewriting 136 academic texts using four prompting approaches: Standard, Meta, Roleplay, and Chain-of-Thought. All four approaches improved text readability, with Meta performing the best overall and the Standard prompt sometimes producing texts that were less readable than the original. The study found variability in the simplified texts, suggesting that different strategies should be used based on the specific needs of individual learners. Overall, the findings highlight the potential of genAI tools, like ChatGPT, to improve the accessibility of academic texts, offering valuable support for students with reading difficulties and promoting more equitable learning opportunities.
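To make the workflow concrete, here is a minimal sketch, assuming the official OpenAI Python client and the textstat package, of how the four prompting strategies could be applied and readability scored before and after rewriting; the prompt wordings, model name and metric are illustrative placeholders, not those used in the study.

```python
# Illustrative sketch only: prompt wordings, model choice and metric are assumptions.
from openai import OpenAI          # assumes the official OpenAI Python client
import textstat                    # assumes the textstat package for readability scores

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "standard": "Rewrite the following text so it is easier to read:\n\n{text}",
    "meta": ("First list the features that make this text hard to read, "
             "then rewrite it so those features are removed:\n\n{text}"),
    "roleplay": ("You are a teacher simplifying material for students with "
                 "reading difficulties. Rewrite this text:\n\n{text}"),
    "chain_of_thought": ("Think step by step about how to simplify this text, "
                         "then output only the simplified version:\n\n{text}"),
}

def simplify(text: str, strategy: str, model: str = "gpt-4") -> str:
    """Ask the model to rewrite `text` using one of the four prompting strategies."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPTS[strategy].format(text=text)}],
    )
    return response.choices[0].message.content

def readability_gain(original: str, simplified: str) -> float:
    """Difference in Flesch Reading Ease; a positive value means the rewrite reads easier."""
    return textstat.flesch_reading_ease(simplified) - textstat.flesch_reading_ease(original)
```

In this setup, a positive readability_gain indicates the rewritten text scores as easier to read on the Flesch Reading Ease scale, which mirrors the kind of before/after comparison described in the abstract.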

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Analytics, Equity, Large language models, Literacy, Prompt engineering, Readability
National Category
Pedagogy; Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-361966 (URN), 10.1145/3706468.3706483 (DOI), 2-s2.0-105000372142 (Scopus ID)
Conference
15th International Conference on Learning Analytics and Knowledge, LAK 2025, Dublin, Ireland, March 3-7, 2025
Note

Part of ISBN 9798400707018

QC 20250403

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-03. Bibliographically approved.
Huang, K., Ferreira Mello, R., Pereira Junior, C., Rodrigues, L., Baars, M. & Viberg, O. (2025). That's What RoBERTa Said: Explainable Classification of Peer Feedback. In: 15th International Conference on Learning Analytics and Knowledge, LAK 2025. Paper presented at 15th International Conference on Learning Analytics and Knowledge, LAK 2025, Dublin, Ireland, March 3-7, 2025 (pp. 880-886). Association for Computing Machinery (ACM)
That's What RoBERTa Said: Explainable Classification of Peer Feedback
2025 (English). In: 15th International Conference on Learning Analytics and Knowledge, LAK 2025, Association for Computing Machinery (ACM), 2025, p. 880-886. Conference paper, Published paper (Refereed).
Abstract [en]

Peer feedback (PF) is essential for improving student learning outcomes, particularly in Computer-Supported Collaborative Learning (CSCL) settings. When digital tools are used for PF practices, student data (e.g., PF text entries) is generated automatically. Analyzing these large datasets can enhance our understanding of how students learn and help improve their learning. However, manually processing these large datasets is time-intensive, highlighting the need for automation. This study investigates the use of six machine learning models to classify PF messages from 231 students in a large university course. The models include Multi-Layer Perceptron (MLP), Decision Tree, BERT, RoBERTa, DistilBERT, and ChatGPT-4o. The models were evaluated using Cohen's kappa, accuracy, and F1-score. Preprocessing involved removing stop words, and the impact of this step on model performance was assessed. Results showed that only the Decision Tree model improved with stop-word removal, while performance decreased for the other models. RoBERTa consistently outperformed the others across all metrics. Explainable AI was used to understand RoBERTa's decisions by identifying the most predictive words. This study contributes to the automatic classification of peer feedback, which is crucial for scaling learning analytics efforts that aim to provide better in-time support to students in CSCL settings.
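As a rough sketch of the classification step, assuming a RoBERTa model fine-tuned with Hugging Face transformers (the checkpoint path and label set below are hypothetical), new peer-feedback messages could be labelled as follows; word-level explanations like those reported in the paper could then be obtained with an attribution tool such as SHAP or LIME.

```python
# Minimal sketch of classifying peer-feedback messages with a fine-tuned RoBERTa model.
# The checkpoint path and label set are placeholders, not the ones from the study.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["management", "cognition", "affect", "interpersonal", "suggestion"]  # assumed codes
CHECKPOINT = "path/to/fine-tuned-roberta"  # hypothetical fine-tuned model directory

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=len(LABELS))
model.eval()

def classify(message: str) -> str:
    """Return the most likely feedback category for one peer-feedback message."""
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("Nice structure, but the results section needs more detail."))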

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
Computer Supported Collaborative Learning, Explainable artificial intelligence, Higher Education, Machine Learning, Peer feedback
National Category
Computer Sciences; Information Systems
Identifiers
urn:nbn:se:kth:diva-361960 (URN), 10.1145/3706468.3706526 (DOI), 2-s2.0-105000232458 (Scopus ID)
Conference
15th International Conference on Learning Analytics and Knowledge, LAK 2025, Dublin, Ireland, March 3-7, 2025
Note

Part of ISBN 9798400707018

QC 20250403

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-03. Bibliographically approved.
Viberg, O., Kizilcec, R. F., Wise, A. F., Jivet, I. & Nixon, N. (2024). Advancing equity and inclusion in educational practices with AI-powered educational decision support systems (AI-EDSS). British Journal of Educational Technology, 55(5), 1974-1981
Advancing equity and inclusion in educational practices with AI-powered educational decision support systems (AI-EDSS)
2024 (English). In: British Journal of Educational Technology, ISSN 0007-1013, E-ISSN 1467-8535, Vol. 55, no 5, p. 1974-1981. Article in journal, Editorial material (Other academic). Published.
Abstract [en]

A key goal of educational institutions around the world is to provide inclusive, equitable quality education and lifelong learning opportunities for all learners. Achieving this requires contextualized approaches to accommodate diverse global values and promote learning opportunities that best meet the needs and goals of all learners as individuals and members of different communities. Advances in learning analytics (LA), natural language processing (NLP), and artificial intelligence (AI), especially generative AI technologies, offer potential to aid educational decision making by supporting analytic insights and personalized recommendations. However, these technologies also raise serious risks of reinforcing or exacerbating existing inequalities; these dangers arise from multiple factors, including biases represented in training datasets, the technologies' ability to make autonomous decisions, and processes for tool development that do not centre the needs and concerns of historically marginalized groups. To ensure that Educational Decision Support Systems (EDSS), particularly AI-powered ones, are equipped to promote equity, they must be created and evaluated holistically, considering their potential for both targeted and systemic impacts on all learners, especially members of historically marginalized groups. Adopting a socio-technical and cultural perspective is crucial for designing, deploying, and evaluating AI-EDSS that truly advance educational equity and inclusion. This editorial introduces the contributions of five papers for the special section on advancing equity and inclusion in educational practices with AI-EDSS. These papers focus on (i) a review of biases in large language model (LLM) applications that offers practical guidelines for their evaluation to promote educational equity, (ii) techniques to mitigate disparities across countries and languages in LLMs' representation of educationally relevant knowledge, (iii) implementing equitable and intersectionality-aware machine learning applications in education, (iv) an LA dashboard that aims to promote institutional equality, diversity, and inclusion, and (v) vulnerable students' digital well-being in AI-EDSS. Together, these contributions underscore the importance of an interdisciplinary approach in developing and utilizing AI-EDSS, not only to foster a more inclusive and equitable educational landscape worldwide but also to reveal a critical need for a broader contextualization of equity that incorporates the socio-technical questions of what kinds of decisions AI is being used to support, for what purposes, and whose goals are prioritized in this process.

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
AI, AI-EDSS, bias, education, equality, inclusion
National Category
Human Computer Interaction; Pedagogy
Identifiers
urn:nbn:se:kth:diva-366604 (URN), 10.1111/bjet.13507 (DOI), 001269476400001 (), 2-s2.0-85198137990 (Scopus ID)
Note

QC 20250710

Available from: 2025-07-10. Created: 2025-07-10. Last updated: 2025-07-10. Bibliographically approved.
Tao, Y., Viberg, O., Baker, R. S. & Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9), Article ID pgae346.
Cultural bias and cultural alignment of large language models
2024 (English). In: PNAS Nexus, ISSN 2752-6542, Vol. 3, no 9, article id pgae346. Article in journal (Refereed). Published.
Abstract [en]

Culture fundamentally shapes people's reasoning, behavior, and communication. As people increasingly use generative artificial intelligence (AI) to expedite and automate personal and professional tasks, cultural values embedded in AI models may bias people's authentic expression and contribute to the dominance of certain cultures. We conduct a disaggregated evaluation of cultural bias for five widely used large language models (OpenAI's GPT-4o/4-turbo/4/3.5-turbo/3) by comparing the models' responses to nationally representative survey data. All models exhibit cultural values resembling English-speaking and Protestant European countries. We test cultural prompting as a control strategy to increase cultural alignment for each country/territory. For later models (GPT-4, 4-turbo, 4o), this improves the cultural alignment of the models' output for 71-81% of countries and territories. We suggest using cultural prompting and ongoing evaluation to reduce cultural bias in the output of generative AI.
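A minimal sketch of the cultural-prompting idea, assuming the OpenAI Python client: prepend a country-specific persona instruction before posing a survey-style question, then compare the answer against nationally representative responses. The persona wording, model name and example item below are illustrative, not the exact materials from the study.

```python
# Illustrative sketch of "cultural prompting"; the persona wording is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def cultural_prompt(question: str, country: str, model: str = "gpt-4o") -> str:
    """Ask the model to answer a survey-style question as an average respondent from `country`."""
    persona = f"You are an average person born in {country} and currently living in {country}."
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Compare a culturally prompted answer against the model's default answer.
item = "How important is it to you that people respect authority?"
print(cultural_prompt(item, "Sweden"))
```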

Place, publisher, year, edition, pages
Oxford University Press (OUP), 2024
Keywords
generative AI, large language models, cultural alignment, controllability
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-354593 (URN), 10.1093/pnasnexus/pgae346 (DOI), 001320085400002 (), 39290441 (PubMedID), 2-s2.0-85205497414 (Scopus ID)
Note

QC 20241009

Available from: 2024-10-09. Created: 2024-10-09. Last updated: 2025-05-27. Bibliographically approved.
Viberg, O., Kizilcec, R. F., Jivet, I., Mones, A. M., Oh, A., Mutimukwe, C., . . . Scheffel, M. (2024). Cultural differences in students' privacy concerns in learning analytics across Germany, South Korea, Spain, Sweden, and the United States. Computers in Human Behavior Reports, 14, Article ID 100416.
Cultural differences in students' privacy concerns in learning analytics across Germany, South Korea, Spain, Sweden, and the United States
2024 (English). In: Computers in Human Behavior Reports, ISSN 2451-9588, Vol. 14, article id 100416. Article in journal (Refereed). Published.
Abstract [en]

Applications of learning analytics (LA) can raise concerns from students about their privacy in higher education contexts. Developing effective privacy-enhancing practices requires a systematic understanding of students' privacy concerns and how they vary across national and cultural dimensions. We conducted a survey study with established instruments to measure privacy concerns and cultural values for university students in five countries (Germany, South Korea, Spain, Sweden, and the United States; N = 762). The results show that students generally trusted institutions with their data and disclosed information as they perceived the risks to be manageable even though they felt somewhat limited in their ability to control their privacy. Across the five countries, German and Swedish students stood out as the most trusting and least concerned, especially compared to US students who reported greater perceived risk and less control. Students in South Korea and Spain responded similarly on all five privacy dimensions (perceived privacy risk, perceived privacy control, privacy concerns, trusting beliefs, and non-self-disclosure behavior), despite their significant cultural differences. Culture measured at the individual level affected the antecedents and outcomes of privacy concerns. Perceived privacy risk and privacy control increase with power distance. Trusting beliefs increase with a desire for uncertainty avoidance and lower masculinity. Non-self-disclosure behaviors rise with power distance and masculinity and decrease with more uncertainty avoidance. Thus, cultural values related to trust in institutions, social equality and risk-taking should be considered when developing privacy-enhancing practices and policies in higher education.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Privacy concerns, Learning analytics, Culture, Students, Higher education
National Category
Pedagogy
Identifiers
urn:nbn:se:kth:diva-347779 (URN), 10.1016/j.chbr.2024.100416 (DOI), 001231129100001 (), 2-s2.0-85190326780 (Scopus ID)
Note

QC 20240614

Available from: 2024-06-14. Created: 2024-06-14. Last updated: 2024-06-14. Bibliographically approved.
Knight, S., Viberg, O., Mavrikis, M., Kovanović, V., Khosravi, H., Ferguson, R., . . . Cukurova, M. (2024). Emerging technologies and research ethics: Developing editorial policy using a scoping review and reference panel. PLOS ONE, 19(10), Article ID e0309715.
Emerging technologies and research ethics: Developing editorial policy using a scoping review and reference panel
2024 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 19, no 10, article id e0309715. Article in journal (Refereed). Published.
Abstract [en]

Background: Emerging technologies and societal changes create new ethical concerns and a greater need for cross-disciplinary and cross-stakeholder communication on navigating ethics in research. Scholarly articles are the primary mode of communication for researchers; however, there are concerns regarding the expression of research ethics in these outputs. If not in these outputs, where should researchers and stakeholders learn about the ethical considerations of research? Objectives: Drawing on a scoping review, analysis of policy in a specific disciplinary context (learning and technology), and reference-group discussion, we address concerns regarding research ethics in research involving emerging technologies by developing novel policy that aims to foster learning through the expression of ethical concepts in research. Approach: This paper develops new editorial policy for the expression of research ethics in scholarly outputs across disciplines. These guidelines, aimed at authors, reviewers, and editors, are underpinned by: 1. a cross-disciplinary scoping review of existing policy and adherence to these policies; 2. a review of emerging policies, and of policies in a specific discipline (learning and technology); and 3. a collective drafting process undertaken by a reference group of journal editors (the authors of this paper). Results: Analysis arising from the scoping review indicates gaps in policy across a wide range of journals (54% have no statement regarding the reporting of research ethics) and in adherence (51% of papers reviewed did not refer to ethics considerations). Analysis of emerging and discipline-specific policies highlights further gaps. Conclusion: Our collective policy development process produces novel materials suitable for cross-disciplinary transfer, addressing specific issues of research involving AI and broader challenges of emerging technologies.

Place, publisher, year, edition, pages
Public Library of Science, 2024
National Category
Ethics; Information Studies
Identifiers
urn:nbn:se:kth:diva-356293 (URN), 10.1371/journal.pone.0309715 (DOI), 39480862 (PubMedID), 2-s2.0-85208064406 (Scopus ID)
Note

QC 20241114

Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2024-11-14. Bibliographically approved.
Viberg, O., Mutimukwe, C., Hrastinski, S., Cerratto‐Pargman, T. & Lilliesköld, J. (2024). Exploring teachers' (future) digital assessment practices in higher education: Instrument and model development. British Journal of Educational Technology, 55(6), 2597-2616
Exploring teachers' (future) digital assessment practices in higher education: Instrument and model development
2024 (English). In: British Journal of Educational Technology, ISSN 0007-1013, E-ISSN 1467-8535, Vol. 55, no 6, p. 2597-2616. Article in journal (Refereed). Published.
Abstract [en]

Digital technologies are increasingly used in assessment. On the one hand, this use offers opportunities for teachers to practice assessment more effectively; on the other hand, it brings challenges to the design of pedagogically sound and responsible digital assessment. There is a lack of validated instruments and models that explain, assess and support teachers' critical pedagogical practice of digital assessment. This explorative work first develops and validates a survey instrument to examine teachers' digital assessment practices. Secondly, we build a model to investigate to what extent teachers' pedagogical digital assessment knowledge is a foundation for the future of digital assessment (i.e., authentic, accessible, automated, continuous and responsible). A total of 219 university teachers at a large European university participated in the survey study. Exploratory factor analysis and structural equation modelling were used to validate the reliability and validity of items and the internal causal relations of factors. The results show the survey is a valid and reliable instrument for assessing teachers' digital assessment practice in higher education. Teachers' pedagogical knowledge and pedagogical content knowledge of digital assessment are critical, while teachers' technological pedagogical knowledge seems to have a more limited impact on the future of digital assessment.
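As an illustration of the exploratory-factor-analysis step, the sketch below uses the Python factor_analyzer package on a hypothetical table of Likert-scale survey responses; the file name, factor count and rotation are assumptions for illustration, not the settings or instrument reported in the paper.

```python
# Rough sketch of exploratory factor analysis on survey items (assumed setup).
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file: one row per teacher, one column per Likert-scale survey item.
items = pd.read_csv("survey_responses.csv")

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")  # factor count and rotation are assumptions
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))           # inspect which items load on which factor
print(fa.get_factor_variance())    # variance explained per factor
```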

Place, publisher, year, edition, pages
Wiley, 2024
National Category
Pedagogy
Identifiers
urn:nbn:se:kth:diva-346257 (URN), 10.1111/bjet.13462 (DOI), 001251469700001 (), 2-s2.0-85189638407 (Scopus ID)
Note

QC 20240514

Available from: 2024-05-09. Created: 2024-05-09. Last updated: 2025-03-20. Bibliographically approved.
Iop, A., Viberg, O., Francis, K., Norström, V., Mattias Persson, D., Wallin, L., . . . Matviienko, A. (2024). Exploring the Influence of Object Shapes and Colors on Depth Perception in Virtual Reality for Minimally Invasive Neurosurgical Training. In: CHI 2024 - Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems. Paper presented at 2024 CHI Conference on Human Factors in Computing Systems, CHI EA 2024, Hybrid, Honolulu, United States of America, May 11 2024 - May 16 2024. Association for Computing Machinery, Article ID 154.
Exploring the Influence of Object Shapes and Colors on Depth Perception in Virtual Reality for Minimally Invasive Neurosurgical Training
2024 (English). In: CHI 2024 - Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2024, article id 154. Conference paper, Published paper (Refereed).
Abstract [en]

Minimally invasive neurosurgery (MIS) involves inserting a medical instrument, e.g., a catheter, through a small incision to target an area inside the patient's body. Training surgeons to perform MIS is challenging since the surgical site is not directly visible from their perspective. In this paper, we conducted two pilot studies focused on object shapes and colors to collect preliminary results on their influence on depth perception for MIS in Virtual Reality. In the first study (N = 8), participants inserted a virtual catheter into objects of different shapes. In the second study (N = 5), they observed the insertion of a virtual catheter into objects of different colors and backgrounds under different lighting conditions. We found that participants' precision decreased with distance and was lower with the skull shape than with a cube. Moreover, depth perception was higher with blue backgrounds under better lighting conditions.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2024
Keywords
depth perception, minimally invasive neurosurgery, virtual reality
National Category
Neurology
Identifiers
urn:nbn:se:kth:diva-347323 (URN), 10.1145/3613905.3650813 (DOI), 001227587702041 (), 2-s2.0-85194135109 (Scopus ID)
Conference
2024 CHI Conference on Human Factors in Computing Systems, CHI EA 2024, Hybrid, Honolulu, United States of America, May 11 2024 - May 16 2024
Note

QC 20240613

Part of ISBN 979-840070331-7

Available from: 2024-06-10. Created: 2024-06-10. Last updated: 2024-10-30. Bibliographically approved.
Viberg, O., Baars, M., Mello, R. F., Weerheim, N., Spikol, D., Bogdan, C. M., . . . Paas, F. (2024). Exploring the nature of peer feedback: An epistemic network analysis approach. Journal of Computer Assisted Learning, 40(6), 2809-2821
Exploring the nature of peer feedback: An epistemic network analysis approach
2024 (English). In: Journal of Computer Assisted Learning, ISSN 0266-4909, E-ISSN 1365-2729, Vol. 40, no 6, p. 2809-2821. Article in journal (Refereed). Published.
Abstract [en]

Background: Peer feedback has been used as an effective instructional strategy to enhance students' learning in higher education. Objectives: This paper reports the findings of an explorative study that aimed to increase our understanding of the nature and role of peer feedback in students' learning processes in a computer-supported collaborative learning (CSCL) setting. Exploring what types of feedback are used, how they relate to each other, and how they are related to academic performance has important implications for students and teachers. Methods: This study was conducted in a higher education setting. It used a dataset consisting of student peer feedback messages (N = 2444) and grades from 231 students who participated in a large engineering course. Using qualitative methods, peer feedback was coded inductively. Epistemic network analysis (ENA) was used to analyse the relation between peer feedback types and performance. Results: Based on the five types of peer feedback (i.e., ‘management’, ‘cognition’, ‘affect’, ‘interpersonal factors’ and ‘suggestions for improvements’), the results of the ENA showed that the student feedback categories ‘management’, ‘cognition’ and ‘affect’ were positively related to student performance in the formative assessment phase. Conclusions: The findings and the ENA visualizations also show that ‘suggestions for improvement’ and ‘interpersonal factors’ were not a significant part of student learning in peer assessment and feedback in the studied context.
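To convey the intuition behind the ENA approach, here is a simplified sketch that counts how often pairs of feedback codes co-occur in a student's coded messages and treats the counts as a weighted network; this is not the ENA toolkit used in the study, and the code labels and example data are illustrative.

```python
# Simplified illustration of the co-occurrence idea underlying epistemic network analysis.
from itertools import combinations
from collections import Counter
import networkx as nx

CODES = ["management", "cognition", "affect", "interpersonal", "suggestion"]  # assumed labels

# Hypothetical data: each inner list holds the codes assigned to one student's messages.
coded_messages = [
    ["management", "cognition"],
    ["cognition", "affect", "suggestion"],
    ["management", "affect"],
]

# Count how often each pair of codes appears together in the same set of messages.
cooccurrence = Counter()
for codes in coded_messages:
    for a, b in combinations(sorted(set(codes)), 2):
        cooccurrence[(a, b)] += 1

# Represent the counts as a weighted network of code connections.
graph = nx.Graph()
graph.add_nodes_from(CODES)
for (a, b), weight in cooccurrence.items():
    graph.add_edge(a, b, weight=weight)

for a, b, data in graph.edges(data=True):
    print(f"{a} -- {b}: {data['weight']}")
```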

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
computer-supported collaborative learning settings, epistemic network analysis, learning performance, peer feedback
National Category
Pedagogy
Identifiers
urn:nbn:se:kth:diva-366316 (URN), 10.1111/jcal.13035 (DOI), 001265567000001 (), 2-s2.0-85197820526 (Scopus ID)
Note

QC 20250707

Available from: 2025-07-07. Created: 2025-07-07. Last updated: 2025-07-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-8543-3774
