Ask and distract: Data-driven methods for the automatic generation of multiple-choice reading comprehension questions from Swedish texts
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-7327-3059
2023 (English). Doctoral thesis, compilation (Other academic). Alternative title:
Fråga och distrahera: Datadrivna metoder för automatisk generering av flervalsfrågor för att bedöma läsförståelse av svenska (Swedish)
Abstract [en]

Multiple choice questions (MCQs) are widely used for summative assessment in many different subjects. Tasks in this format are particularly appealing because they can be graded swiftly and automatically. However, the process of creating MCQs is far from swift or automatic, and requires considerable expertise both in the specific subject and in test construction.
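The appeal of automatic grading can be made concrete with a minimal sketch: an MCQ reduces to a stem, a list of options, and the index of the key, so scoring is a single comparison. The type and field names below are illustrative and not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str            # the question text
    options: list[str]   # answer alternatives: one key, the rest distractors
    key: int             # index of the correct alternative

def grade(mcq: MCQ, chosen: int) -> bool:
    """Grading an MCQ is one comparison, hence swift and automatic."""
    return chosen == mcq.key

# A toy Swedish reading-comprehension item (content is invented).
q = MCQ(stem="Vad köpte Anna?",
        options=["en bok", "en film", "ett brev"],
        key=0)
```

Creating the item itself, in particular plausible distractors, is the hard, expertise-heavy part that the thesis seeks to automate.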

This thesis explores methods for automatic MCQ generation for assessing the reading comprehension abilities of second-language learners of Swedish. We lay the foundations for MCQ generation research for Swedish by collecting two datasets of reading comprehension MCQs, and by designing and developing methods for generating whole MCQs or their parts. An important contribution is the methods (designed and applied in practice) for the automatic and human evaluation of the generated MCQs.

The best currently available method (as of June 2023) for generating MCQs for assessing reading comprehension in Swedish is ChatGPT (although only around 60% of the generated MCQs were judged acceptable). However, ChatGPT is neither open-source nor free. The best open-source and free-to-use method is the fine-tuned version of SweCTRL-Mini, a foundational model developed as part of this thesis. Nevertheless, all explored methods are still far from practically useful, but the reported results provide a good starting point for future research.

Abstract [sv]

Flervalsfrågor används ofta för summativ bedömning i många olika ämnen. Flervalsfrågor är tilltalande eftersom de kan bedömas snabbt och automatiskt. Att skapa flervalsfrågor manuellt går dock långt ifrån snabbt, utan är en process som kräver mycket expertis inom det specifika ämnet och även inom provkonstruktion.

Denna avhandling fokuserar på att utforska metoder för automatisk generering av flervalsfrågor för bedömning av läsförståelse hos andraspråksinlärare av svenska. Vi lägger grunden för forskning om generering av flervalsfrågor för svenska genom att samla in två datamängder bestående av flervalsfrågor som testar just läsförståelse, och genom att utforma och utveckla metoder för att generera hela eller delar av flervalsfrågor. Ett viktigt bidrag är de metoder för automatisk och mänsklig utvärdering av genererade flervalsfrågor som har utvecklats och tillämpats i praktiken.

Den bästa för närvarande tillgängliga metoden (i juni 2023) för att generera flervalsfrågor som testar läsförståelse på svenska är ChatGPT (dock bedömdes endast cirka 60% av de genererade flervalsfrågorna som acceptabla). ChatGPT har dock varken öppen källkod eller är gratis. Den bästa metoden med öppen källkod som är också gratis är den finjusterade versionen av SweCTRL-Mini, en “foundational model” som utvecklats som en del av denna avhandling. Alla utforskade metoder är dock långt ifrån användbara i praktiken, men de rapporterade resultaten ger en bra utgångspunkt för framtida forskning.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023, pp. viii, 67
Series
TRITA-EECS-AVL ; 2023:56
Keywords [en]
multiple choice questions, question generation, distractor generation, reading comprehension, second-language learners, L2 learning, Natural Language Generation
Keywords [sv]
flervalsfrågor, frågegenerering, distraktorsgenerering, läsförståelse, andraspråkslärande, L2-inlärning, Natural Language Generation
National subject category
Language Technology (Computational Linguistics)
Research subject
Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-336531. ISBN: 978-91-8040-661-1 (printed). OAI: oai:DiVA.org:kth-336531. DiVA id: diva2:1797477
Public defence
2023-10-17, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Note

QC 20230915

Available from: 2023-09-15. Created: 2023-09-14. Last updated: 2023-09-25. Bibliographically approved.
List of papers
1. Quinductor: A multilingual data-driven method for generating reading-comprehension questions using Universal Dependencies
2024 (English). In: Natural Language Engineering, ISSN 1351-3249, E-ISSN 1469-8110, pp. 217-255. Journal article (Refereed). Published.
Abstract [en]

We propose a multilingual data-driven method for generating reading comprehension questions using dependency trees. Our method provides a strong, deterministic, and inexpensive-to-train baseline for less-resourced languages. While a language-specific corpus is still required, its size is nowhere near those required by modern neural question generation (QG) architectures. Our method surpasses QG baselines previously reported in the literature in terms of automatic evaluation metrics, and shows good performance in human evaluation.

Place, publisher, year, edition, pages
Cambridge University Press (CUP), 2024
Keywords
Natural language generation, Evaluation, Multilinguality, Question generation, Reading comprehension
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-326862 (URN). 10.1017/s1351324923000037 (DOI). 000939777300001 (). 2-s2.0-85189534486 (Scopus ID)
Research funder
Vinnova, 2019-02997
Note

QC 20230515

Available from: 2023-05-15. Created: 2023-05-15. Last updated: 2024-04-18. Bibliographically approved.
2. Automatically generating question-answer pairs for assessing basic reading comprehension in Swedish
2022 (English). Conference paper, published paper (Refereed).
Abstract [en]

This paper presents an evaluation of the quality of reading comprehension questions automatically generated from Swedish text using the Quinductor method: a lightweight, data-driven, non-neural method for automatic question generation (QG). The evaluation shows that Quinductor is a viable QG method that can provide a strong baseline for neural-network-based QG methods.

National subject category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-326889 (URN). 10.48550/arXiv.2211.15568 (DOI)
Conference
The 9th Swedish Language Technology Conference (SLTC 2022), Stockholm, Sweden, 23-25 November 2022
Research funder
Vinnova, 2019-02997
Note

QC 20230515

Available from: 2023-05-15. Created: 2023-05-15. Last updated: 2023-09-14. Bibliographically approved.
3. Minor changes make a difference: a case study on the consistency of UD-based dependency parsers
2021 (English). In: Proceedings of the Fifth Workshop on Universal Dependencies (UDW, SyntaxFest 2021), Association for Computational Linguistics (ACL), 2021, pp. 96-108. Conference paper, published paper (Refereed).
Abstract [en]

Many downstream applications use dependency trees, and thus rely on dependency parsers producing correct, or at least consistent, output. However, dependency parsers are trained using machine learning, and are therefore susceptible to unwanted inconsistencies due to biases in the training data. This paper explores the effects of such biases in four languages (English, Swedish, Russian, and Ukrainian) through an experiment where we study the effect of replacing numerals in sentences. We show that such seemingly insignificant changes in the input can cause large differences in the output, and suggest that data augmentation can remedy the problems.
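The perturbation underlying this experiment can be sketched as a simple probe: swap numerals in a sentence, parse each variant, and compare the resulting trees. A minimal illustration of the replacement step only; the function name is ours, and no parser is invoked here.

```python
import re

def replace_numerals(sentence: str, new_value: str) -> str:
    """Replace every standalone digit sequence in a sentence.

    The syntactic structure is unchanged, so a consistent dependency
    parser should produce an identical tree for every variant.
    """
    return re.sub(r"\b\d+\b", new_value, sentence)

# Variants of one sentence; feeding each to a parser and diffing the
# trees reveals whether the parser reacts to this irrelevant change.
variants = [replace_numerals("Hon köpte 3 äpplen.", str(n)) for n in (7, 42, 100)]
```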

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2021
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-326888 (URN). 2-s2.0-85138675937 (Scopus ID)
Conference
UDW 2021, the 5th Workshop on Universal Dependencies, held as part of SyntaxFest 2021, Sofia, 21-25 March 2021
Research funder
Vinnova, 2019-02997
Note

Part of proceedings: ISBN 978-195591717-9

QC 20230515

Available from: 2023-05-15. Created: 2023-05-15. Last updated: 2023-09-14. Bibliographically approved.
4. BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset
2021 (English). In: Proceedings of the 14th International Conference on Natural Language Generation, 2021, pp. 387-403. Conference paper, published paper (Refereed).
Abstract [en]

An important part of constructing multiple-choice questions (MCQs) for reading comprehension assessment is the distractors: the incorrect but preferably plausible answer options. In this paper, we present a new BERT-based method for automatically generating distractors using only a small-scale dataset. We also release a new such dataset of Swedish MCQs (used for training the model), and propose a methodology for assessing the generated distractors. Evaluation shows that, from a student's perspective, our method generated one or more plausible distractors for more than 50% of the MCQs in our test set. From a teacher's perspective, about 50% of the generated distractors were deemed appropriate. We also provide a thorough analysis of the results.

Keywords
Multiple-choice questions, Reading comprehension, Small scale, Student perspectives, Swedish, Teachers, Test sets
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-302480 (URN). 2-s2.0-85123291566 (Scopus ID)
Conference
14th International Conference on Natural Language Generation, INLG 2021, virtual/online, 20-24 September 2021
Research funder
Vinnova, 2019-02997
Note

Part of proceedings: ISBN 978-1-954085-51-0

QC 20220301

Available from: 2021-09-24. Created: 2021-09-24. Last updated: 2023-09-14. Bibliographically approved.
5. Quasi: a synthetic Question-Answering dataset in Swedish using GPT-3 and zero-shot learning
2023 (English). In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), 2023, pp. 477-491. Conference paper, published paper (Refereed).
Abstract [en]

This paper describes the creation and evaluation of a synthetic dataset of Swedish multiple-choice questions (MCQs) for reading comprehension using GPT-3. Although GPT-3 is trained mostly on English data, with only 0.11% of Swedish text in its training material, the model still managed to generate MCQs in Swedish. About 44% of the generated MCQs turned out to be of sufficient quality, i.e. they were grammatically correct and relevant, with exactly one answer alternative being correct and the others being plausible but wrong. We provide a detailed analysis of the errors and shortcomings of the rejected MCQs, as well as an analysis of the level of difficulty of the accepted MCQs. In addition to giving insights into GPT-3, the synthetic dataset could be used for training and evaluating special-purpose MCQ-generating models.

National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-327972 (URN)
Conference
The 24th Nordic Conference on Computational Linguistics (NoDaLiDa 2023), 22-24 May 2023, Tórshavn, Faroe Islands
Note

QC 20230602

Available from: 2023-06-02. Created: 2023-06-02. Last updated: 2023-09-14. Bibliographically approved.
6. SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish
(English). Manuscript (preprint) (Other academic).
Abstract [en]

We present SweCTRL-Mini, a large Swedish language model that can be used for inference and fine-tuning on a single consumer-grade GPU. The model is based on the CTRL architecture by Keskar et al. (2019), which means that users of the SweCTRL-Mini model can control the genre of the generated text by inserting special tokens in the generation prompts. SweCTRL-Mini is trained on a subset of the Swedish part of the mC4 corpus and a set of Swedish novels. In this article, we provide (1) a detailed account of the training data and text pre-processing steps used, to the extent that it is possible to check whether a specific phrase or source was part of the training data, and (2) an evaluation of the model on both discriminative tasks, using automatic evaluation methods, and generative tasks, using human referees. We also compare the generative capabilities of the model with those of GPT-3. SweCTRL-Mini is fully open and available for download.

Keywords
Large Language Models, Swedish, Transformers, Neural Networks, Language Models, CTRL, Evaluation
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-329437 (URN)
Research funder
Vinnova, 2019-02997
Note

Submitted to Language Resources and Evaluation, ISSN 1574-020X

QC 20230627

Available from: 2023-06-21. Created: 2023-06-21. Last updated: 2023-09-14. Bibliographically approved.
7. Generation and Evaluation of Multiple-choice Reading Comprehension Questions for Swedish
(English). Manuscript (preprint) (Other academic).
Abstract [en]

Multiple-choice questions (MCQs) provide a widely used means of assessing reading comprehension. The automatic generation of such MCQs is a challenging language-technological problem that also has interesting educational applications. This article presents several methods for automatically producing reading comprehension MCQs from Swedish text. Unlike previous approaches, we construct models to generate the whole MCQ in one go, rather than using a pipeline architecture. Furthermore, we propose a two-stage method for evaluating the quality of the generated MCQs, first evaluating on carefully designed single-sentence texts, and then on texts from the SFI national exams. An extensive evaluation of the MCQ-generating capabilities of 12 different models, using this two-stage scheme, reveals that GPT-based models surpass smaller models that have been fine-tuned for this specific problem using small-scale datasets.

Keywords
Natural Language Generation, Natural Language Processing, Question Generation, Distractor Generation, Reading Comprehension, Multiple-choice Questions
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-329400 (URN)
Research funder
Vinnova, 2019-02997
Note

QC 20230627

Available from: 2023-06-20. Created: 2023-06-20. Last updated: 2023-09-14. Bibliographically approved.
8. UDon2: a library for manipulating Universal Dependencies trees
2020 (English). In: Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020), 2020, pp. 120-125. Conference paper, poster (Refereed).
Abstract [en]

UDon2 is an open-source library for manipulating dependency trees represented in the CoNLL-U format, and is compatible with Universal Dependencies. UDon2 is aimed at developers of downstream Natural Language Processing applications that require manipulating dependency trees at the sentence level (complementing other available tools geared towards working with treebanks).
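For context, CoNLL-U (the representation UDon2 manipulates) stores one token per line with ten tab-separated fields. Below is a minimal, dependency-free reading of the fields most relevant to tree manipulation; this is an illustrative sketch only and does not use UDon2's actual API.

```python
def parse_conllu_sentence(block: str) -> list[dict]:
    """Parse one CoNLL-U sentence block into token records.

    Fields per line: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL,
    DEPS, MISC. HEAD holds the ID of the parent token (0 = root).
    """
    tokens = []
    for line in block.strip().splitlines():
        if line.startswith("#"):                   # sentence-level comments
            continue
        fields = line.split("\t")
        if "-" in fields[0] or "." in fields[0]:   # multiword / empty tokens
            continue
        tokens.append({"id": int(fields[0]), "form": fields[1],
                       "head": int(fields[6]), "deprel": fields[7]})
    return tokens

# A two-token Swedish example ("Hon läser" = "She reads").
sent = ("1\tHon\thon\tPRON\t_\t_\t2\tnsubj\t_\t_\n"
        "2\tläser\tläsa\tVERB\t_\t_\t0\troot\t_\t_")
root = next(t for t in parse_conllu_sentence(sent) if t["head"] == 0)
```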

National subject category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-288878 (URN)
Conference
28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (online), 8-13 December 2020
Research funder
Vinnova
Note

QC 20210115

Available from: 2021-01-14. Created: 2021-01-14. Last updated: 2023-09-14. Bibliographically approved.
9. Textinator: an Internationalized Tool for Annotation and Human Evaluation in Natural Language Processing and Generation
2022 (English). In: LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, ed. Calzolari, N., Bechet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mazo, H., Odijk, H., Piperidis, S., European Language Resources Association (ELRA), 2022, pp. 856-866. Conference paper, published paper (Refereed).
Abstract [en]

We release an internationalized annotation and human evaluation bundle, called Textinator, along with documentation and video tutorials. Textinator allows annotating data for a wide variety of NLP tasks, and its user interface is offered in multiple languages, lowering the entry threshold for domain experts. The latter is, in fact, quite a rare feature among annotation tools; it allows controlling for possible unintended biases introduced by hiring only English-speaking annotators. We illustrate the rarity of this feature with a thorough systematic comparison of Textinator to previously published annotation tools along 9 different axes (internationalization being one of them). To encourage researchers to design their human evaluation before starting to annotate data, Textinator offers an easy-to-use tool for human evaluation, allowing surveys with potentially hundreds of evaluation items to be imported in one click. We conclude by presenting several use cases of annotation and evaluation projects conducted using pre-release versions of Textinator. The presented use cases do not represent Textinator's full annotation or evaluation capabilities; interested readers are referred to the online documentation for more information.

Place, publisher, year, edition, pages
European Language Resources Association (ELRA), 2022
Keywords
annotation tool, human evaluation tool, natural language processing, natural language generation
National subject category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-324335 (URN). 10.5281/zenodo.6497334 (DOI). 000889371700090 (). 2-s2.0-85144462359 (Scopus ID)
Conference
13th International Conference on Language Resources and Evaluation (LREC), June 20-25, 2022, Marseille, France
Note

QC 20230228

Available from: 2023-02-28. Created: 2023-02-28. Last updated: 2023-09-14. Bibliographically approved.

Open Access in DiVA

Summary (1316 kB), 271 downloads
File information
Filename: FULLTEXT01.pdf. File size: 1316 kB. Checksum: SHA-512
7c183be6441bc0b63a2218c1da3f3e4d494b4c63a5d9db0bbafa9d29af3e1566e27a2f9c9cd64679fb200b9b75ab9cc6016ef972b935efb7f3c9eeb1eb23f18d
Type: fulltext. Mimetype: application/pdf

Person

Kalpakchi, Dmytro

Total: 271 downloads
The number of downloads is the sum of downloads for all full texts. It may include, for example, earlier versions that are no longer available.
