kth.se Publications
Multimodal data fusion framework enhanced robot-assisted minimally invasive surgery
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Italy.
2022 (English). In: Transactions of the Institute of Measurement and Control, ISSN 0142-3312, E-ISSN 1477-0369, Vol. 44, no. 4, p. 735-743. Article in journal (Refereed). Published.
Abstract [en]

The growing adoption of robot-assisted minimally invasive surgery (RAMIS) promotes human-machine interaction (HMI). Identifying the surgeon's various behaviors can enhance the RAMIS procedure for a redundant robot, bridging intelligent robot control and activity recognition strategies in the operating room, covering both hand gestures and human activities. In this paper, to enhance identification in dynamic situations, we propose a multimodal data fusion framework that provides multiple sources of information for improved accuracy. First, a multi-sensor hardware structure is designed to capture varied data from several devices, including a depth camera and a smartphone. Furthermore, the robot control mechanism can switch automatically between different surgical tasks. The experimental results demonstrate the efficiency of the multimodal framework for RAMIS by comparing it with a single-sensor system. Implementation on a KUKA LWR4+ in a surgical robot environment indicates that surgical robot systems can work alongside medical staff in the future.

Place, publisher, year, edition, pages
SAGE Publications, 2022. Vol. 44, no. 4, p. 735-743
Keywords [en]
event-based control, human activity recognition, minimally invasive surgery, multimodal data fusion, redundant manipulator
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-335694
DOI: 10.1177/0142331220984350
ISI: 000682757500001
Scopus ID: 2-s2.0-85099569459
OAI: oai:DiVA.org:kth-335694
DiVA, id: diva2:1795050
Note

QC 20230907

Available from: 2023-09-07. Created: 2023-09-07. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Zhang, Longbin

By organisation
Vehicle Engineering and Solid Mechanics