kth.se Publications (DiVA)

Publications (6 of 6)
Zhang, Y., Rajabi, N., Taleb, F., Matviienko, A., Ma, Y., Björkman, M. & Kragic, D. (2025). Mind Meets Robots: A Review of EEG-Based Brain-Robot Interaction Systems. International Journal of Human-Computer Interaction, 1-32
Mind Meets Robots: A Review of EEG-Based Brain-Robot Interaction Systems
2025 (English) In: International Journal of Human-Computer Interaction, ISSN 1044-7318, E-ISSN 1532-7590, p. 1-32. Article in journal (Refereed) Published
Abstract [en]

Brain-robot interaction (BRI) empowers individuals to control (semi-)automated machines through brain activity, either passively or actively. In the past decade, BRI systems have advanced significantly, primarily leveraging electroencephalogram (EEG) signals. This article presents an up-to-date review of 87 curated studies published between 2018 and 2023, identifying the research landscape of EEG-based BRI systems. The review consolidates methodologies, interaction modes, application contexts, system evaluation, existing challenges, and future directions in this domain. Based on our analysis, we propose a BRI system model comprising three entities: Brain, Robot, and Interaction, depicting their internal relationships. We especially examine interaction modes between human brains and robots, an aspect not yet fully explored. Within this model, we scrutinize and classify current research, extract insights, highlight challenges, and offer recommendations for future studies. Our findings provide a structured design space for human-robot interaction (HRI), informing the development of more efficient BRI frameworks.

Place, publisher, year, edition, pages
Informa UK Limited, 2025
Keywords
EEG based, brain-robot interaction, interaction mode, comprehensive review
National Category
Vehicle and Aerospace Engineering
Identifiers
urn:nbn:se:kth:diva-361866 (URN)
10.1080/10447318.2025.2464915 (DOI)
001446721000001 ()
2-s2.0-105000309480 (Scopus ID)
Note

QC 20250402

Available from: 2025-04-02. Created: 2025-04-02. Last updated: 2025-04-02. Bibliographically approved
Taleb, F., Medbouhi, A. A., Marchetti, G. L. & Kragic Jensfelt, D. (2025). Towards Discovering the Hierarchy of the Olfactory Perceptual Space via Hyperbolic Embeddings. In: Science Communications Worldwide (Ed.), 22nd annual Computational and Systems Neuroscience (COSYNE) conference, Montreal and Mont Tremblant, Quebec, Canada, March 27 - April 1, 2025. Paper presented at Computational and Systems Neuroscience (COSYNE), Montreal, Canada.
Towards Discovering the Hierarchy of the Olfactory Perceptual Space via Hyperbolic Embeddings
2025 (English) In: 22nd annual Computational and Systems Neuroscience (COSYNE) conference, Montreal and Mont Tremblant, Quebec, Canada, March 27 - April 1, 2025 / [ed] Science Communications Worldwide, 2025. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Human olfactory perception is understudied across the whole spectrum of neuroscience, from computational to systems neuroscience. In this study, we explore the hierarchy underlying human olfactory perception by embedding perceptual data in hyperbolic space. Previous research emphasizes the significance of hyperbolic geometry in gaining insights into the neural encoding of natural odorants, owing to the exponential growth of hyperbolic space, which makes it well suited to encoding hierarchical data. We employ a contrastive learning approach over the Poincaré ball to embed olfactory perceptual data in hyperbolic space. The results indicate the emergence of a hierarchical representation in the hyperbolic space, which could have implications for understanding the structure of the olfactory perceptual space in the brain. Our findings suggest that the human brain may encode olfactory perceptions hierarchically, with higher odor perceptual certainty corresponding to deeper levels in the hierarchical representation.
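The Poincaré-ball embedding described in the abstract rests on the hyperbolic distance function. A minimal sketch of that geometry in pure Python (all names are our own illustration, not the authors' code):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points strictly inside the unit (Poincare) ball."""
    sq_norm = lambda x: sum(c * c for c in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# Points near the boundary behave like deep leaves of a tree: the shortest
# path between two opposite "leaves" passes through the origin (the root),
# so hyperbolic distance grows far faster than the Euclidean gap.
root, leaf_a, leaf_b = (0.0, 0.0), (0.9, 0.0), (-0.9, 0.0)
```

In such an embedding the norm of a point serves as a depth proxy, which is one way the reported link between perceptual certainty and hierarchy depth can be read off the representation.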

Keywords
hyperbolic geometry, olfaction, representation
National Category
Neurosciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-363971 (URN)
10.57736/0d78-6155 (DOI)
Conference
Computational and Systems Neuroscience (COSYNE), Montreal, Canada, March 27-30, 2025
Note

QC 20250602

Available from: 2025-06-01. Created: 2025-06-01. Last updated: 2025-06-02. Bibliographically approved
Taleb, F., Vasco, M., Ribeiro, A. H., Björkman, M. & Kragic Jensfelt, D. (2024). Can Transformers Smell Like Humans?. In: Advances in Neural Information Processing Systems 37 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024. Paper presented at 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Vancouver, Canada, December 9-15, 2024. Neural Information Processing Systems Foundation
Can Transformers Smell Like Humans?
2024 (English) In: Advances in Neural Information Processing Systems 37 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Neural Information Processing Systems Foundation, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

The human brain encodes stimuli from the environment into representations that form a sensory perception of the world. Despite recent advances in understanding visual and auditory perception, olfactory perception remains an under-explored topic in the machine learning community due to the lack of large-scale datasets annotated with labels of human olfactory perception. In this work, we ask whether pre-trained transformer models of chemical structures encode representations that are aligned with human olfactory perception, i.e., can transformers smell like humans? We demonstrate that representations encoded by transformers pre-trained on general chemical structures are highly aligned with human olfactory perception. We use multiple datasets and different types of perceptual representations to show that the representations encoded by transformer models can predict: (i) labels associated with odorants provided by experts; (ii) continuous ratings provided by human participants with respect to pre-defined descriptors; and (iii) similarity ratings between odorants provided by human participants. Finally, we evaluate the extent to which this alignment is associated with physicochemical features of odorants known to be relevant for olfactory decoding.
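Alignment claims of this kind are often checked by rank-correlating model-derived similarities with human ratings over the same odorant pairs. A toy, self-contained sketch (a Spearman correlation without tie handling; the data and names are illustrative stand-ins, not the paper's pipeline):

```python
def ranks(xs):
    """Rank positions of xs (0-based; this toy assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman rank correlation between two equal-length sequences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pairwise similarities for the same four odorant pairs:
model_sim = [0.91, 0.12, 0.55, 0.34]   # e.g. cosine similarity of embeddings
human_sim = [0.88, 0.05, 0.61, 0.29]   # e.g. participant similarity ratings
alignment = spearman(model_sim, human_sim)
```

A rank correlation near 1 would indicate that the model orders odorant pairs by similarity the same way human raters do, which is the sense of "alignment" the abstract describes.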

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2024
National Category
Neurosciences; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-361995 (URN)
2-s2.0-105000466521 (Scopus ID)
Conference
38th Conference on Neural Information Processing Systems, NeurIPS 2024, Vancouver, Canada, December 9-15, 2024
Note

QC 20250408

Available from: 2025-04-03. Created: 2025-04-03. Last updated: 2025-04-08. Bibliographically approved
Taleb, F., Vasco, M., Rajabi, N., Björkman, M. & Kragic, D. (2024). Challenging Deep Learning Methods for EEG Signal Denoising under Data Corruption. In: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024 - Proceedings. Paper presented at 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, United States of America, July 15-19, 2024. Institute of Electrical and Electronics Engineers (IEEE)
Challenging Deep Learning Methods for EEG Signal Denoising under Data Corruption
2024 (English) In: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Capturing informative electroencephalogram (EEG) signals is a challenging task due to the presence of noise (e.g., due to human movement). In extreme cases, data recordings from specific electrodes (channels) can become corrupted and entirely devoid of information. Motivated by recent work on deep-learning-based approaches for EEG signal denoising, we present the first benchmark study on the performance of EEG signal denoising methods in the presence of corrupted channels. We design our study considering a wide variety of datasets, models, and evaluation tasks. Our results highlight the need for assessing the performance of EEG deep-learning models across a broad suite of datasets, as provided by our benchmark.
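The corruption scenario the benchmark studies can be simulated by overwriting selected channels of a recording before handing it to a denoising model. A minimal sketch, assuming a plain channels-by-samples list layout (the function name and modes are our own, not the authors' benchmark code):

```python
import random

def corrupt_channels(recording, bad_channels, mode="zero", seed=0):
    """Return a copy of a multi-channel recording with the listed channels
    corrupted, leaving the original untouched. The two modes mirror common
    failure cases: a dead electrode ("zero") and a noise-swamped one ("noise")."""
    rng = random.Random(seed)
    out = [list(ch) for ch in recording]
    for c in bad_channels:
        if mode == "zero":
            out[c] = [0.0] * len(out[c])
        elif mode == "noise":
            out[c] = [rng.gauss(0.0, 1.0) for _ in out[c]]
        else:
            raise ValueError(f"unknown mode: {mode}")
    return out

eeg = [[0.1, 0.2, 0.3], [1.0, 1.1, 1.2], [2.0, 2.1, 2.2]]  # 3 channels x 3 samples
corrupted = corrupt_channels(eeg, bad_channels=[1])
```

Evaluating a denoiser on both `eeg` and `corrupted` and comparing reconstruction error is the shape of experiment such a benchmark runs across datasets and models.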

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
data corruption, deep learning, EEG, signal denoising, signal noise
National Category
Signal Processing; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-358866 (URN)
10.1109/EMBC53108.2024.10782132 (DOI)
40039138 (PubMedID)
2-s2.0-85214969123 (Scopus ID)
Conference
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, United States of America, July 15-19, 2024
Note

Part of ISBN 9798350371499

QC 20250128

Available from: 2025-01-23. Created: 2025-01-23. Last updated: 2025-05-27. Bibliographically approved
Xia, H., Zhang, Y., Rajabi, N., Taleb, F., Yang, Q., Kragic, D. & Li, Z. (2024). Shaping high-performance wearable robots for human motor and sensory reconstruction and enhancement. Nature Communications, 15(1), Article ID 1760.
Shaping high-performance wearable robots for human motor and sensory reconstruction and enhancement
2024 (English) In: Nature Communications, E-ISSN 2041-1723, Vol. 15, no. 1, article id 1760. Article in journal (Refereed) Published
Abstract [en]

Most wearable robots, such as exoskeletons and prostheses, can operate with dexterity, yet wearers do not perceive them as part of their bodies. In this perspective, we contend that integrating environmental, physiological, and physical information through multi-modal fusion, incorporating human-in-the-loop control, utilizing neuromuscular interfaces, employing flexible electronics, and acquiring and processing human-robot information with biomechatronic chips should all be leveraged towards building the next generation of wearable robots. These technologies could improve the embodiment of wearable robots. With optimizations in mechanical structure and clinical training, the next generation of wearable robots should better facilitate human motor and sensory reconstruction and enhancement.

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-344345 (URN)
10.1038/s41467-024-46249-0 (DOI)
38409128 (PubMedID)
2-s2.0-85186407087 (Scopus ID)
Note

QC 20240315

Available from: 2024-03-13. Created: 2024-03-13. Last updated: 2025-02-09. Bibliographically approved
Rajabi, N., Chernik, C., Reichlin, A., Taleb, F., Vasco, M., Ghadirzadeh, A., . . . Kragic, D. (2023). Mental Face Image Retrieval Based on a Closed-Loop Brain-Computer Interface. In: Augmented Cognition: 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings. Paper presented at 17th International Conference on Augmented Cognition, AC 2023, held as part of the 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, July 23-28, 2023 (pp. 26-45). Springer Nature
Mental Face Image Retrieval Based on a Closed-Loop Brain-Computer Interface
2023 (English) In: Augmented Cognition: 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings, Springer Nature, 2023, p. 26-45. Conference paper, Published paper (Refereed)
Abstract [en]

Retrieval of mental images from measured brain activity may facilitate communication, especially when verbal or muscular communication is impossible or inefficient. Existing work focuses mostly on retrieving the observed visual stimulus, whereas our interest is in retrieving the imagined mental image. We present a closed-loop brain-computer interface (BCI) framework to retrieve mental images of human faces. We utilize EEG signals as binary feedback to determine the relevance of an image to the target mental image. We employ the feedback to traverse the latent space of a generative model and propose new images closer to the actual target image. We evaluate the proposed framework on 13 volunteers. Unlike previous studies, we do not restrict the possible attributes of the resulting images to predefined semantic classes. Subjective and objective tests validate the ability of our model to retrieve face images similar to the actual target mental images.
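The closed loop described above (generate a candidate, read binary relevance feedback, move through the latent space) can be pictured with a toy random-search sketch. The hidden target and the comparative feedback function stand in for the mental image and the EEG classifier; the names and search strategy are illustrative, not the authors' method:

```python
import math
import random

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def traverse_latent(start, feedback, step=0.3, iters=200, seed=0):
    """Traverse a latent space using only binary feedback: perturb the
    current point and keep the proposal whenever feedback approves it."""
    rng = random.Random(seed)
    z = list(start)
    for _ in range(iters):
        proposal = [c + rng.gauss(0.0, step) for c in z]
        if feedback(z, proposal):  # True = "proposal looks more like the target"
            z = proposal
    return z

# Toy stand-in for the EEG relevance signal: a hidden target vector that
# the feedback function can compare against, as a participant's brain would.
target = [1.0, -2.0]
fb = lambda old, new: euclid(new, target) < euclid(old, target)
z_final = traverse_latent([0.0, 0.0], fb)
```

In the actual framework the accepted point would be decoded by the generative model into a new face image for the next feedback round; the sketch only shows why binary feedback alone suffices to make progress toward the target.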

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
Brain-Computer Interface, EEG, Generative Models, Mental Image Retrieval
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-337884 (URN)
10.1007/978-3-031-35017-7_3 (DOI)
001286423000003 ()
2-s2.0-85171440140 (Scopus ID)
Conference
17th International Conference on Augmented Cognition, AC 2023, held as part of the 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, July 23-28, 2023
Note

Part of ISBN 9783031350160

QC 20231010

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-02-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-4482-1460