Simulating human walking: a model-based reinforcement learning approach with musculoskeletal modeling
Su, Binbin. KTH, School of Engineering Sciences (SCI), Engineering Mechanics. ORCID iD: 0000-0002-5592-5372
Gutierrez-Farewik, Elena. KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics; Karolinska Institutet, Department of Women's and Children's Health, Stockholm, Sweden. ORCID iD: 0000-0001-5417-5939
2023 (English). In: Frontiers in Neurorobotics, ISSN 1662-5218, Vol. 17, article id 1244417. Article in journal (Refereed). Published.
Abstract [en]

Introduction: Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like if the algorithm does not involve a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we integrated a reinforcement learning algorithm with a musculoskeletal model, including trunk, pelvis, and leg segments, to develop control models that drive the model to walk.

Methods: We first simulated human walking without imposing a target walking speed, allowing the model to settle on a stable walking speed on its own, which was 1.45 m/s. A range of other speeds was then imposed for simulation, based on this self-selected walking speed. All simulations were generated by solving the Markov decision process problem with the covariance matrix adaptation evolution strategy (CMA-ES), without any reference motion data.

Results: Simulated hip and knee kinematics agreed well with experimental observations, but ankle kinematics were less well predicted.

Discussion: We finally demonstrate that our reinforcement learning framework also has the potential to model and predict pathological gait that can result from muscle weakness.
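The methods described above (a Markov decision process solved with CMA-ES over a muscle-driven walking controller, with no reference motion data) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the toy environment ToyGaitEnv, the linear feedback controller parametrization, and the reward terms are hypothetical stand-ins introduced only for illustration; the ask/tell loop reflects standard usage of the Python cma package, assuming that package is the optimizer of choice.

```python
"""Minimal sketch: tuning a feedback walking controller with CMA-ES.

All environment dynamics and reward terms are placeholders, NOT the
paper's musculoskeletal model; only the CMA-ES ask/tell pattern is standard.
"""
import numpy as np
import cma  # pip install cma


class ToyGaitEnv:
    """Hypothetical stand-in for a musculoskeletal walking simulator."""

    def __init__(self, target_speed=1.45, horizon=200, seed=0):
        self.target_speed = target_speed  # m/s; the paper's self-selected speed
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def rollout(self, controller):
        """Run one episode and return the accumulated reward."""
        speed, total_reward = 0.0, 0.0
        for _ in range(self.horizon):
            obs = np.array([speed - self.target_speed, self.rng.normal(0.0, 0.05)])
            excitation = controller(obs)               # muscle excitations in [0, 1]
            speed += 0.05 * (excitation.mean() - 0.5)  # toy forward dynamics
            effort = 1e-2 * float(np.sum(excitation ** 2))
            total_reward += -abs(speed - self.target_speed) - effort
        return total_reward


def make_controller(params, n_muscles=4):
    """Linear feedback policy: excitation = clip(W @ obs + b, 0, 1)."""
    W = params[: 2 * n_muscles].reshape(n_muscles, 2)
    b = params[2 * n_muscles:]
    return lambda obs: np.clip(W @ obs + b, 0.0, 1.0)


def negative_return(params, env):
    """CMA-ES minimizes, so return the negated episode reward."""
    return -env.rollout(make_controller(params))


if __name__ == "__main__":
    env = ToyGaitEnv()
    n_params = 4 * 2 + 4  # W (4x2) plus bias (4)
    es = cma.CMAEvolutionStrategy(np.zeros(n_params), 0.5,
                                  {"popsize": 16, "maxiter": 50})
    while not es.stop():
        candidates = es.ask()  # sample controller parameters
        es.tell(candidates, [negative_return(x, env) for x in candidates])
    print("best return:", -es.result.fbest)
```

In a setup closer to the paper's, the rollout would call a forward-dynamics musculoskeletal simulation, and candidate parameters would define the mapping from sensed states to muscle excitations, with reward terms for walking at (or settling on) a target speed without falling.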

Place, publisher, year, edition, pages
Frontiers Media SA, 2023. Vol. 17, article id 1244417
Keywords [en]
human and humanoid motion analysis, motion synthesis, optimization, optimal control, kinematics, CMA-ES, reflex-based control
National Category
Robotics and automation; Neurology
Identifiers
URN: urn:nbn:se:kth:diva-339586
DOI: 10.3389/fnbot.2023.1244417
ISI: 001091049400001
PubMedID: 37901705
Scopus ID: 2-s2.0-85174803144
OAI: oai:DiVA.org:kth-339586
DiVA id: diva2:1812112
Note

QC 20231115

Available from: 2023-11-15. Created: 2023-11-15. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text, PubMed, Scopus

Authority records

Su, Binbin; Gutierrez-Farewik, Elena

