Ask and distract: Data-driven methods for the automatic generation of multiple-choice reading comprehension questions from Swedish texts
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-7327-3059
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title
Fråga och distrahera : Datadrivna metoder för automatisk generering av flervalsfrågor för att bedöma läsförståelse av svenska (Swedish)
Abstract [en]

Multiple-choice questions (MCQs) are widely used for summative assessment in many different subjects. Tasks in this format are particularly appealing because they can be graded swiftly and automatically. However, the process of creating MCQs is far from swift or automatic, and requires considerable expertise both in the specific subject and in test construction.

This thesis explores methods for the automatic generation of MCQs for assessing the reading comprehension abilities of second-language learners of Swedish. We lay the foundations for MCQ generation research for Swedish by collecting two datasets of reading comprehension MCQs, and by designing and developing methods for generating whole MCQs or their parts. An important contribution is the set of methods, designed and applied in practice, for the automatic and human evaluation of the generated MCQs.

The best currently available method (as of June 2023) for generating MCQs for assessing reading comprehension in Swedish is ChatGPT (although still only around 60% of the generated MCQs were judged acceptable). However, ChatGPT is neither open-source nor free. The best open-source and free-to-use method is the fine-tuned version of SweCTRL-Mini, a foundational model developed as part of this thesis. Nevertheless, all explored methods are still far from practically useful, but the reported results provide a good starting point for future research.

Abstract [sv]

Flervalsfrågor används ofta för summativ bedömning i många olika ämnen. Flervalsfrågor är tilltalande eftersom de kan bedömas snabbt och automatiskt. Att skapa flervalsfrågor manuellt går dock långt ifrån snabbt, utan är en process som kräver mycket expertis inom det specifika ämnet och även inom provkonstruktion.

Denna avhandling fokuserar på att utforska metoder för automatisk generering av flervalsfrågor för bedömning av läsförståelse hos andraspråksinlärare av svenska. Vi lägger grunden för forskning om generering av flervalsfrågor för svenska genom att samla in två datamängder bestående av flervalsfrågor som testar just läsförståelse, och genom att utforma och utveckla metoder för att generera hela eller delar av flervalsfrågor. Ett viktigt bidrag är de metoder för automatisk och mänsklig utvärdering av genererade flervalsfrågor som har utvecklats och tillämpats i praktiken.

Den bästa för närvarande tillgängliga metoden (i juni 2023) för att generera flervalsfrågor som testar läsförståelse på svenska är ChatGPT (dock bedömdes endast cirka 60% av de genererade flervalsfrågorna som acceptabla). ChatGPT har dock varken öppen källkod eller är gratis. Den bästa metoden med öppen källkod som är också gratis är den finjusterade versionen av SweCTRL-Mini, en “foundational model” som utvecklats som en del av denna avhandling. Alla utforskade metoder är dock långt ifrån användbara i praktiken, men de rapporterade resultaten ger en bra utgångspunkt för framtida forskning.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023, p. viii, 67
Series
TRITA-EECS-AVL ; 2023:56
Keywords [en]
multiple choice questions, question generation, distractor generation, reading comprehension, second-language learners, L2 learning, Natural Language Generation
Keywords [sv]
flervalsfrågor, frågegenerering, distraktorsgenerering, läsförståelse, andraspråkslärande, L2-inlärning, Natural Language Generation
National Category
Language Technology (Computational Linguistics)
Research subject
Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-336531
ISBN: 978-91-8040-661-1 (print)
OAI: oai:DiVA.org:kth-336531
DiVA, id: diva2:1797477
Public defence
2023-10-17, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20230915

Available from: 2023-09-15 Created: 2023-09-14 Last updated: 2023-09-25. Bibliographically approved
List of papers
1. Quinductor: A multilingual data-driven method for generating reading-comprehension questions using Universal Dependencies
2024 (English) In: Natural Language Engineering, ISSN 1351-3249, E-ISSN 1469-8110, p. 217-255. Article in journal (Refereed). Published
Abstract [en]

We propose a multilingual data-driven method for generating reading comprehension questions using dependency trees. Our method provides a strong, deterministic and inexpensive-to-train baseline for less-resourced languages. While a language-specific corpus is still required, its size is nowhere near those required by modern neural question generation (QG) architectures. Our method surpasses QG baselines previously reported in the literature in terms of automatic evaluation metrics and shows good performance in terms of human evaluation.

Place, publisher, year, edition, pages
Cambridge University Press (CUP), 2024
Keywords
Natural language generation, Evaluation, Multilinguality, Question generation, Reading comprehension
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-326862 (URN)
10.1017/s1351324923000037 (DOI)
000939777300001 ()
2-s2.0-85189534486 (Scopus ID)
Funder
Vinnova, 2019-02997
Note

QC 20230515

Available from: 2023-05-15 Created: 2023-05-15 Last updated: 2024-04-18. Bibliographically approved
2. Automatically generating question-answer pairs for assessing basic reading comprehension in Swedish
2022 (English) Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an evaluation of the quality of automatically generated reading comprehension questions from Swedish text, using the Quinductor method. This method is a light-weight, data-driven but non-neural method for automatic question generation (QG). The evaluation shows that Quinductor is a viable QG method that can provide a strong baseline for neural-network-based QG methods. 

National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-326889 (URN)
10.48550/arXiv.2211.15568 (DOI)
Conference
The 9th Swedish Language Technology Conference (SLTC 2022), Stockholm, Sweden, 23–25 November 2022
Funder
Vinnova, 2019-02997
Note

QC 20230515

Available from: 2023-05-15 Created: 2023-05-15 Last updated: 2023-09-14. Bibliographically approved
3. Minor changes make a difference: a case study on the consistency of UD-based dependency parsers
2021 (English) In: Proceedings of the Fifth Workshop on Universal Dependencies (UDW, SyntaxFest 2021), Association for Computational Linguistics (ACL), 2021, p. 96-108. Conference paper, Published paper (Refereed)
Abstract [en]

Many downstream applications use dependency trees, and thus rely on dependency parsers producing correct, or at least consistent, output. However, dependency parsers are trained using machine learning, and are therefore susceptible to unwanted inconsistencies due to biases in the training data. This paper explores the effects of such biases in four languages – English, Swedish, Russian, and Ukrainian – through an experiment where we study the effect of replacing numerals in sentences. We show that such seemingly insignificant changes in the input can cause large differences in the output, and suggest that data augmentation can remedy the problems.
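The perturbation at the core of this experiment can be illustrated with a short sketch. This is not the paper's code: the parsing and tree-comparison steps are omitted, and only the numeral-replacement step is shown. A consistent parser should produce the same tree structure for every variant.

```python
import re

def replace_numerals(sentence: str, replacement: str) -> str:
    """Replace every standalone digit sequence in the sentence
    with the given replacement numeral."""
    return re.sub(r"\b\d+\b", replacement, sentence)

# Generate perturbed variants of one sentence; in the experiment,
# each variant would then be parsed and the trees compared.
original = "The train leaves at 7 and arrives at 9."
variants = [replace_numerals(original, str(n)) for n in (8, 100, 1000)]
```

Feeding each variant to a parser and diffing the resulting head/deprel assignments is then enough to expose the inconsistencies the paper describes.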

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2021
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-326888 (URN)
2-s2.0-85138675937 (Scopus ID)
Conference
UDW 2021 - 5th Workshop on Universal Dependencies, held as part of SyntaxFest 2021, Sofia, 21-25 March 2021
Funder
Vinnova, 2019-02997
Note

Part of proceedings ISBN 978-195591717-9 

QC 20230515

Available from: 2023-05-15 Created: 2023-05-15 Last updated: 2023-09-14. Bibliographically approved
4. BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset
2021 (English) In: Proceedings of the 14th International Conference on Natural Language Generation, 2021, p. 387-403. Conference paper, Published paper (Refereed)
Abstract [en]

An important part of constructing multiple-choice questions (MCQs) for reading comprehension assessment is the distractors: the incorrect but preferably plausible answer options. In this paper, we present a new BERT-based method for automatically generating distractors using only a small-scale dataset. We also release a new such dataset of Swedish MCQs (used for training the model), and propose a methodology for assessing the generated distractors. Evaluation shows that from a student's perspective, our method generated one or more plausible distractors for more than 50% of the MCQs in our test set. From a teacher's perspective, about 50% of the generated distractors were deemed appropriate. We also provide a thorough analysis of the results.
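The final selection step in such a pipeline can be sketched as follows. This is an illustrative sketch, not the paper's method: the candidate scores would in practice come from a masked language model such as BERT, but here they are represented by a hypothetical pre-computed dictionary so the example stays self-contained.

```python
def select_distractors(candidates: dict[str, float],
                       correct_answer: str,
                       k: int = 3) -> list[str]:
    """Pick the k highest-scoring candidate distractors that
    differ from the correct answer (case-insensitive)."""
    pool = {c: s for c, s in candidates.items()
            if c.lower() != correct_answer.lower()}
    return sorted(pool, key=pool.get, reverse=True)[:k]

# Hypothetical MLM scores for candidates filling a gap in a sentence.
scores = {"hund": 0.40, "katt": 0.30, "häst": 0.15, "bil": 0.05}
distractors = select_distractors(scores, correct_answer="katt")
```

A real system would add further filters, e.g. excluding candidates with the wrong part of speech or ones too semantically close to the answer.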

Keywords
Multiple-choice questions, Reading comprehension, Small scale, Student perspectives, Swedish, Teachers, Test sets
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-302480 (URN)
2-s2.0-85123291566 (Scopus ID)
Conference
14th International Conference on Natural Language Generation, INLG 2021, Virtual/Online, 20-24 September 2021
Funder
Vinnova, 2019-02997
Note

Part of proceedings: ISBN 978-1-954085-51-0

QC 20220301

Available from: 2021-09-24 Created: 2021-09-24 Last updated: 2023-09-14. Bibliographically approved
5. Quasi: a synthetic Question-Answering dataset in Swedish using GPT-3 and zero-shot learning
2023 (English) In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), 2023, p. 477-491. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes the creation and evaluation of a synthetic dataset of Swedish multiple-choice questions (MCQs) for reading comprehension using GPT-3. Although GPT-3 is trained mostly on English data, with only 0.11% of Swedish texts in its training material, the model still managed to generate MCQs in Swedish. About 44% of the generated MCQs turned out to be of sufficient quality, i.e., they were grammatically correct and relevant, with exactly one answer alternative being correct and the others being plausible but wrong. We provide a detailed analysis of the errors and shortcomings of the rejected MCQs, as well as an analysis of the level of difficulty of the accepted MCQs. In addition to giving insights into GPT-3, the synthetic dataset could be used for training and evaluation of special-purpose MCQ-generating models.
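The structural part of the acceptance criteria above (grammaticality and plausibility aside) can be sketched as automatic checks. The field names below are illustrative assumptions, not the paper's actual data format:

```python
def structurally_valid(mcq: dict) -> bool:
    """Check the structural criteria for an acceptable MCQ:
    exactly one option marked correct, and all options distinct."""
    options = mcq["options"]
    n_correct = sum(1 for o in options if o["is_correct"])
    texts = [o["text"] for o in options]
    return n_correct == 1 and len(set(texts)) == len(texts)

# Hypothetical generated MCQ in the assumed format.
mcq = {
    "question": "Vad heter Sveriges huvudstad?",
    "options": [
        {"text": "Stockholm", "is_correct": True},
        {"text": "Göteborg", "is_correct": False},
        {"text": "Malmö", "is_correct": False},
    ],
}
```

Checks like these only gate out malformed items; judging relevance and distractor plausibility still requires human review, as in the paper.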

National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-327972 (URN)
Conference
The 24th Nordic Conference on Computational Linguistics (NoDaLiDa 2023), 22-24 May 2023, Tórshavn, Faroe Islands
Note

QC 20230602

Available from: 2023-06-02 Created: 2023-06-02 Last updated: 2023-09-14. Bibliographically approved
6. SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We present SweCTRL-Mini, a large Swedish language model that can be used for inference and fine-tuning on a single consumer-grade GPU. The model is based on the CTRL architecture by Keskar et al. (2019), which means that users of the SweCTRL-Mini model can control the genre of the generated text by inserting special tokens in the generation prompts. SweCTRL-Mini is trained on a subset of the Swedish part of the mC4 corpus and a set of Swedish novels. In this article, we provide (1) a detailed account of the utilized training data and text pre-processing steps, to the extent that it is possible to check whether a specific phrase/source was a part of the training data, and (2) an evaluation of the model on both discriminative tasks, using automatic evaluation methods, and generative tasks, using human referees. We also compare the generative capabilities of the model with those of GPT-3. SweCTRL-Mini is fully open and available for download.
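The control mechanism described, steering genre through special tokens in the prompt, can be sketched as follows. The `:nyheter:` token format is an illustrative assumption, not SweCTRL-Mini's documented control vocabulary:

```python
def build_prompt(control_token: str, text: str) -> str:
    """Prefix the generation prompt with a CTRL-style control
    token so the model conditions on the desired genre."""
    return f"{control_token} {text}"

# Hypothetical prompt asking for news-style continuation.
prompt = build_prompt(":nyheter:", "Regeringen meddelade i dag att")
```

In the CTRL family of models, such prefixes were seen during training attached to texts of a given source or genre, so the model learns to associate the token with that distribution at generation time.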

Keywords
Large Language Models, Swedish, Transformers, Neural Networks, Language Models, CTRL, Evaluation
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-329437 (URN)
Funder
Vinnova, 2019-02997
Note

Submitted to Language Resources and Evaluation, ISSN 1574-020X

QC 20230627

Available from: 2023-06-21 Created: 2023-06-21 Last updated: 2023-09-14. Bibliographically approved
7. Generation and Evaluation of Multiple-choice Reading Comprehension Questions for Swedish
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Multiple-choice questions (MCQs) provide a widely used means of assessing reading comprehension. The automatic generation of such MCQs is a challenging language-technological problem that also has interesting educational applications. This article presents several methods for automatically producing reading comprehension MCQs from Swedish text. Unlike previous approaches, we construct models to generate the whole MCQ in one go, rather than using a pipeline architecture. Furthermore, we propose a two-stage method for evaluating the quality of the generated MCQs, first evaluating on carefully designed single-sentence texts, and then on texts from the SFI national exams. An extensive evaluation of the MCQ-generating capabilities of 12 different models, using this two-stage scheme, reveals that GPT-based models surpass smaller models that have been fine-tuned on this specific problem using small-scale datasets.

Keywords
Natural Language Generation, Natural Language Processing, Question Generation, Distractor Generation, Reading Comprehension, Multiple choice Questions
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-329400 (URN)
Funder
Vinnova, 2019-02997
Note

QC 20230627

Available from: 2023-06-20 Created: 2023-06-20 Last updated: 2023-09-14. Bibliographically approved
8. UDon2: a library for manipulating Universal Dependencies trees
2020 (English) In: Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020), 2020, p. 120-125. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

UDon2 is an open-source library for manipulating dependency trees represented in the CoNLL-U format, compatible with Universal Dependencies. UDon2 is aimed at developers of downstream Natural Language Processing applications that require manipulating dependency trees at the sentence level (complementing other available tools geared towards working with treebanks).
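The kind of sentence-level tree access UDon2 provides can be illustrated with a minimal, dependency-free CoNLL-U reader. This sketch does not use UDon2's actual API; it only shows the 10-column format the library operates on:

```python
def read_conllu(block: str) -> list[dict]:
    """Parse one CoNLL-U sentence block into a list of token dicts,
    using the standard ten CoNLL-U columns."""
    fields = ["id", "form", "lemma", "upos", "xpos",
              "feats", "head", "deprel", "deps", "misc"]
    tokens = []
    for line in block.strip().splitlines():
        if line.startswith("#"):  # skip sentence-level comments
            continue
        tokens.append(dict(zip(fields, line.split("\t"))))
    return tokens

def root(tokens: list[dict]) -> dict:
    """Return the token whose head is 0, i.e. the syntactic root."""
    return next(t for t in tokens if t["head"] == "0")

# A tiny Swedish example sentence: "Hon läser boken."
sent = ("1\tHon\thon\tPRON\t_\t_\t2\tnsubj\t_\t_\n"
        "2\tläser\tläsa\tVERB\t_\t_\t0\troot\t_\t_\n"
        "3\tboken\tbok\tNOUN\t_\t_\t2\tobj\t_\t_")
```

UDon2 itself offers richer operations (subtree search, pruning, serialization) implemented natively for speed; the point here is only the per-sentence tree view.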

National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-288878 (URN)
Conference
28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), 8-13 December 2020
Funder
Vinnova
Note

QC 20210115

Available from: 2021-01-14 Created: 2021-01-14 Last updated: 2023-09-14. Bibliographically approved
9. Textinator: an Internationalized Tool for Annotation and Human Evaluation in Natural Language Processing and Generation
2022 (English) In: LREC 2022: Thirteenth International Conference on Language Resources and Evaluation / [ed] Calzolari, N., Bechet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mazo, H., Odijk, H., Piperidis, S., European Language Resources Association (ELRA), 2022, p. 856-866. Conference paper, Published paper (Refereed)
Abstract [en]

We release an internationalized annotation and human evaluation bundle, called Textinator, along with documentation and video tutorials. Textinator allows annotating data for a wide variety of NLP tasks, and its user interface is offered in multiple languages, lowering the entry threshold for domain experts. The latter is, in fact, quite a rare feature among annotation tools, one that allows controlling for possible unintended biases introduced by hiring only English-speaking annotators. We illustrate the rarity of this feature by presenting a thorough systematic comparison of Textinator to previously published annotation tools along 9 different axes (with internationalization being one of them). To encourage researchers to design their human evaluation before starting to annotate data, Textinator offers an easy-to-use tool for human evaluations, allowing surveys with potentially hundreds of evaluation items to be imported in one click. We finish by presenting several use cases of annotation and evaluation projects conducted using pre-release versions of Textinator. The presented use cases do not represent Textinator's full annotation or evaluation capabilities, and interested readers are referred to the online documentation for more information.

Place, publisher, year, edition, pages
European Language Resources Association (ELRA), 2022
Keywords
annotation tool, human evaluation tool, natural language processing, natural language generation
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-324335 (URN)
10.5281/zenodo.6497334 (DOI)
000889371700090 ()
2-s2.0-85144462359 (Scopus ID)
Conference
13th International Conference on Language Resources and Evaluation (LREC), JUN 20-25, 2022, Marseille, FRANCE
Note

QC 20230228

Available from: 2023-02-28 Created: 2023-02-28 Last updated: 2023-09-14. Bibliographically approved

Open Access in DiVA

Summary: FULLTEXT01.pdf (1316 kB, application/pdf)

Authority records

Kalpakchi, Dmytro
