A method of detecting human movement intentions in real environments
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics. KTH MoveAbility Lab. ORCID iD: 0000-0002-4679-2934
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics. KTH MoveAbility Lab.
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics. KTH MoveAbility Lab. ORCID iD: 0000-0002-2232-5258
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics. KTH MoveAbility Lab. ORCID iD: 0000-0001-5417-5939
2023 (English). In: 2023 International Conference on Rehabilitation Robotics (ICORR), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed).
Abstract [en]

Accurate and timely movement intention detection can facilitate exoskeleton control during transitions between different locomotion modes. Detecting movement intentions in real environments remains a challenge due to unavoidable environmental uncertainties, and false movement intention detection may induce risks of falling and general danger for exoskeleton users. To this end, in this study, we developed a method for detecting human movement intentions in real environments. The proposed method is capable of online self-correction by implementing a decision fusion layer. Gaze data from an eye tracker and inertial measurement unit (IMU) signals were fused at the feature-extraction level and used to predict movement intentions with two different methods. Images from the scene camera embedded in the eye tracker were used to identify terrains using a convolutional neural network. The decision fusion was made based on the predicted movement intentions and the identified terrains. Four able-bodied participants wearing the eye tracker and seven IMU sensors took part in the experiments, completing the tasks of level-ground walking, ramp ascending, ramp descending, stair ascending, and stair descending. The recorded experimental data were used to test the feasibility of the proposed method. An overall accuracy of 93.4% was achieved when both feature fusion and decision fusion were used. Fusing gaze data with IMU signals improved the prediction accuracy.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023.
Series
International Conference on Rehabilitation Robotics ICORR, ISSN 1945-7898
Keywords [en]
Robotic exoskeletons, movement intention prediction, eye tracker, wearable sensor
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-341996
DOI: 10.1109/ICORR58425.2023.10304774
ISI: 001103260000102
PubMedID: 37941205
Scopus ID: 2-s2.0-85176437253
OAI: oai:DiVA.org:kth-341996
DiVA, id: diva2:1825199
Conference
International Conference on Rehabilitation Robotics (ICORR), SEP 24-28, 2023, Singapore, Singapore
Note

Part of proceedings ISBN: 979-8-3503-4275-8

QC 20240109

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | PubMed | Scopus

Authority records

Liu, Yixing; Wan, Zhao-Yuan; Wang, Ruoli; Gutierrez-Farewik, Elena
