That's What RoBERTa Said: Explainable Classification of Peer Feedback
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. ORCID iD: 0009-0004-3218-8818
Cesar School, Recife, Brazil.
Instituto Federal Goiano, Ceres, Brazil.
Computing Institute - Federal University of Alagoas, Maceió, Brazil.
2025 (English). In: 15th International Conference on Learning Analytics and Knowledge, LAK 2025, Association for Computing Machinery (ACM), 2025, p. 880-886. Conference paper, Published paper (Refereed)
Abstract [en]

Peer feedback (PF) is essential for improving student learning outcomes, particularly in Computer-Supported Collaborative Learning (CSCL) settings. When digital tools are used for PF practices, student data (e.g., PF text entries) is generated automatically. Analyzing these large datasets can enhance our understanding of how students learn and help improve their learning. However, manually processing these large datasets is time-intensive, highlighting the need for automation. This study investigates the use of six machine learning models to classify PF messages from 231 students in a large university course. The models include Multi-Layer Perceptron (MLP), Decision Tree, BERT, RoBERTa, DistilBERT, and ChatGPT-4o. The models were evaluated based on Cohen's kappa, accuracy, and F1-score. Preprocessing involved removing stop words, and the impact of this step on model performance was assessed. Results showed that only the Decision Tree model improved with stop-word removal, while performance decreased in the other models. RoBERTa consistently outperformed the others across all metrics. Explainable AI was used to understand RoBERTa's decisions by identifying the most predictive words. This study contributes to the automatic classification of peer feedback, which is crucial for scaling learning analytics efforts that aim to provide better in-time support to students in CSCL settings.
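The stop-word removal preprocessing whose effect the study evaluates can be sketched as a simple text-cleaning function. This is a minimal illustration only: the stop-word list and whitespace tokenization below are assumptions for the example, not the paper's actual pipeline.

```python
# Minimal sketch of stop-word removal as a preprocessing step.
# STOP_WORDS here is a small illustrative subset, not the list used in the study.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "on", "that"}

def remove_stop_words(text: str) -> str:
    """Lowercase the text, tokenize on whitespace, and drop stop words."""
    tokens = text.lower().split()
    return " ".join(t for t in tokens if t not in STOP_WORDS)

print(remove_stop_words("The feedback is clear and helpful"))
# -> "feedback clear helpful"
```

As the abstract notes, this step helped only the Decision Tree model; for the transformer models, stop words evidently carry useful contextual signal, so removing them hurt performance.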

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025. p. 880-886
Keywords [en]
Computer-Supported Collaborative Learning, Explainable artificial intelligence, Higher education, Machine learning, Peer feedback
National Category
Computer Sciences; Information Systems
Identifiers
URN: urn:nbn:se:kth:diva-361960
DOI: 10.1145/3706468.3706526
Scopus ID: 2-s2.0-105000232458
OAI: oai:DiVA.org:kth-361960
DiVA id: diva2:1949633
Conference
15th International Conference on Learning Analytics and Knowledge, LAK 2025, Dublin, Ireland, March 3-7, 2025
Note

Part of ISBN 9798400707018

QC 20250403

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-03. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Huang, Kevin; Viberg, Olga
