kth.se | Publications KTH
Automatic Evaluation of the Pataka Test Using Machine Learning and Audio Signal Processing
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-6529-1211
Karolinska Institutet, Stockholm, Sweden. ORCID iD: 0000-0001-8452-0043
2025 (English). In: Acta Logopaedica, E-ISSN 2004-9048, Vol. 2. Article in journal (Refereed). Published.
Abstract [en]

This study presents an automated deep learning approach to evaluating the oral diadochokinesis (pataka) test, a widely used clinical tool for assessing syllable repetition speed in motor speech disorders. Addressing the limitations of manual assessment, including subjectivity, time constraints, and inter-rater variability, we developed a system built on the Wav2Vec2 speech recognition model, combined with audio preprocessing (resampling, mono conversion, and normalisation) and temporal alignment techniques for syllable detection. In an initial assessment of the method, the system was evaluated on 16 recordings from two healthy speakers, analysed by three speech and language pathologists (SLPs) and compared to ground truth measurements. Results demonstrated superior accuracy for the machine learning system, with a mean squared error (MSE) of 0.07 compared to 1.18 for the human raters. A Wilcoxon signed-rank test confirmed the model's alignment with ground truth: the model's estimates did not differ significantly from it (p = 0.98), whereas the SLP ratings did (p = 0.00043). While the system occasionally missed syllables (1–2 per recording), its precision in calculating syllables per second (SPS) and its temporal consistency highlight its potential as a supplementary clinical tool. Key innovations include a user-friendly offline interface for data security and visualisations (Mel spectrograms, timing evenness, and distinctness metrics) that support clinical interpretation. The study has limitations: the sample is small and homogeneous, and the system's performance is constrained by unresolved challenges in detecting subtle articulation errors. Future work will expand validation to diverse populations, including speakers with dysarthria, and refine human-in-the-loop integration to mitigate missed syllables.
This study underscores the feasibility of combining deep learning with signal processing to enhance objectivity in speech assessments, offering a scalable solution to standardise the oral diadochokinesis test while preserving clinical expertise.
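The measures named in the abstract (syllables per second, timing evenness, and the MSE used to compare raters against ground truth) can be computed directly from detected syllable onset times. The sketch below is purely illustrative; the function names are hypothetical and this is not the paper's implementation:

```python
from statistics import mean, pstdev

def syllables_per_second(onsets, duration):
    """Rate of detected syllables over the recording length (seconds)."""
    return len(onsets) / duration

def timing_evenness(onsets):
    """Std. dev. of inter-syllable intervals; lower means more even repetition."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return pstdev(intervals)

def mse(predicted, truth):
    """Mean squared error between per-recording SPS estimates and ground truth."""
    return mean((p - t) ** 2 for p, t in zip(predicted, truth))
```

For example, four onsets detected in a 2-second recording give an SPS of 2.0, and perfectly regular onsets give a timing evenness of 0.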

Place, publisher, year, edition, pages
CLINTEC/Logopedi, Karolinska Institutet, 2025. Vol. 2
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-371730
DOI: 10.58986/al.2025.41035
OAI: oai:DiVA.org:kth-371730
DiVA, id: diva2:2007075
Note

QC 20251019

Available from: 2025-10-17. Created: 2025-10-17. Last updated: 2025-10-19. Bibliographically approved.
In thesis
1. Evaluation of Artificial Intelligence in the Medical Domain: Speech, Language and Applications
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This doctoral thesis investigates the potential of advanced speech and language technologies, driven by deep learning, to improve clinical diagnostics and patient care, primarily within the Swedish healthcare context. The research encompasses eight key papers, which are presented across three main sections:

(1) Data Capture and Machine Learning for Speech: This section explores the use of multimodal data and advanced speech processing techniques for clinical applications. It includes research on utilizing multimodal data capture (speech, gaze, and digital pen input) from clinical interviews to identify potential digital biomarkers for the early detection and differentiation of dementia (Paper A). It also develops an automated deep learning system to evaluate the oral diadochokinesis test for motor speech disorders, which demonstrates higher accuracy than human raters and proposes a human-in-the-loop clinical interface (Paper B). Furthermore, this section evaluates the performance of Automatic Speech Recognition (ASR) systems, comparing word error rates between native (L1) and non-native (L2) Swedish speakers (Paper C), and investigates data augmentation techniques to improve ASR accuracy for individuals with aphasia, demonstrating a path towards more inclusive technology (Paper D).

(2) Evaluation of LLMs in the Medical Domain: This section focuses on establishing robust methods for assessing Large Language Models (LLMs) within a medical context. It details the development of a specialized Swedish Medical LLM Benchmark, comprising over 2600 questions across various medical domains, designed to assess LLM performance in a clinically relevant, language-specific manner (Paper E). Additionally, the medical reasoning capabilities of LLMs, such as DeepSeek R1, are rigorously assessed, focusing on their capacity for general medical diagnostic reasoning (Paper F).

(3) Application and Best Practice for Working with AI in Healthcare: This section addresses the practical, ethical, and user experience (UX) considerations for implementing AI in healthcare. It proposes a novel user interface paradigm through an AI-powered journaling application designed for personal health management, illustrating a low-risk, user-centric approach to AI integration (Paper G). Complementing this, it develops harm reduction strategies for the thoughtful use of LLMs in the medical domain, providing perspectives for both patients and clinicians to maximize utility while mitigating risks, thereby establishing best practices for responsible AI engagement (Paper H).

Collectively, this work advances the field by providing new tools and methodologies for early disease detection using speech and multimodal data, establishing robust evaluation methods for ASR and LLMs in the medical domain, and offering pathways and frameworks for responsible, user-centered, and effective AI implementation in healthcare.
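The word error rate compared across speaker groups in Paper C is a standard Levenshtein-based metric. A minimal, self-contained sketch of its usual definition (pure Python; not the thesis code) is:

```python
def word_error_rate(reference, hypothesis):
    """Minimum word-level edit distance, normalised by reference length.
    Assumes a non-empty reference string."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

One substituted word in a three-word reference, for instance, yields a WER of 1/3.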

Abstract [sv]

This doctoral thesis investigates the potential of advanced speech and language technologies, driven by deep learning, to improve clinical diagnostics and patient care, primarily within Swedish healthcare. The research comprises eight key papers, presented in three main sections:

(1) Data capture and machine learning for speech: This section explores the use of multimodal data and advanced speech processing techniques for clinical applications. It includes research on the use of multimodal data capture from clinical interviews to identify digital biomarkers for dementia (Paper A). Further, an automated deep learning system is developed to evaluate the oral diadochokinesis test in motor speech disorders, which shows higher accuracy than human raters and proposes a human-in-the-loop clinical interface (Paper B). The section also evaluates the performance of automatic speech recognition (ASR) systems by comparing error rates between native and second-language speakers of Swedish (Paper C) and investigates data augmentation techniques to improve ASR accuracy for people with aphasia (Paper D).

(2) Evaluation of large language models (LLMs) in the medical domain: This section focuses on establishing robust methods for assessing large language models (LLMs) in a medical context. It describes the development of a specialized Swedish medical LLM benchmark, consisting of over 2600 questions across various medical domains, intended to evaluate LLM performance in a clinically relevant and language-specific way (Paper E). In addition, the medical reasoning ability of LLMs, such as DeepSeek R1, is rigorously assessed, with a focus on their capacity for general medical diagnostic reasoning (Paper F).

(3) Applications and best practice for AI in healthcare: This section addresses practical, ethical, and user experience (UX) considerations when implementing AI in healthcare. A novel user interface paradigm is proposed through an AI-powered application for keeping a personal health journal; it is designed for personal health management and illustrates a low-risk, user-centred approach to AI integration (Paper G). As a complement, harm reduction strategies are developed for the thoughtful use of LLMs in the medical domain. These strategies offer perspectives for both patients and clinicians on maximizing benefit while minimizing risks, thereby establishing best practice for responsible AI engagement (Paper H).

Taken together, this work contributes to the field by providing new tools and methods for early disease detection using speech and multimodal data, establishing robust evaluation methods for ASR and LLMs in the medical domain, and offering guidance and frameworks for a responsible, user-centred, and effective implementation of AI in healthcare.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025. p. xxi, 82
Series
TRITA-EECS-AVL ; 2025:83
Keywords
Large Language Models (LLMs), Automatic Speech Recognition (ASR), Neurodegenerative Disorders, Swedish Language, Clinical Diagnostics, AI Ethics, Medical Reasoning, Multimodal Data, Tal- och språkteknologi, maskininlärning, djupinlärning, automatisk taligenkänning (ASR), stora språkmodeller (LLM), medicinsk diagnostik, digitala biomarkörer, afasi, demens, hälso- och sjukvård, användarupplevelse (UX), harm reduction, AI-integration
National Category
Artificial Intelligence
Research subject
Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-371738
ISBN: 978-91-8106-404-9
Public defence
2025-12-12, https://kth-se.zoom.us/j/69936124469, Kollegiesalen, Brinellvägen 8, Stockholm, 13:00 (English)
Opponent
Supervisors
Note

QC 20251022

Available from: 2025-10-22. Created: 2025-10-17. Last updated: 2025-11-13. Bibliographically approved.

Open Access in DiVA

fulltext (2214 kB), 68 downloads
File information
File name: FULLTEXT01.pdf
File size: 2214 kB
Checksum: SHA-512
6a312751220bfca3987f79def01177a7702f762c6f09c72b06a249ba02a9c1e278145b93efb4e1517fb33d0c014299ad82128d1b906181a19f71a9732cb6746f
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Authority records

Moell, Birger

Search in DiVA

By author/editor
Moell, Birger; Sand Aronsson, Fredrik
By organisation
Speech, Music and Hearing, TMH
Computer Sciences

Search outside of DiVA

Google | Google Scholar
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 616 hits