Publications (10 of 20)
Akander, J., Bakhtiari, H., Ghadirzadeh, A., Mattsson, M. & Hayati, A. (2024). Development of an AI Model Utilizing Buildings' Thermal Mass to Optimize Heating Energy and Indoor Temperature in a Historical Building Located in a Cold Climate. Buildings, 14(7), Article ID 1985.
2024 (English). In: Buildings, E-ISSN 2075-5309, Vol. 14, no 7, article id 1985. Article in journal (Refereed). Published.
Abstract [en]

Historical buildings account for a significant portion of the energy use of today's building stock, and the energy-saving measures that can be applied are usually limited by antiquarian and aesthetic restrictions. The purpose of this case study is to evaluate the use of the building structure of a historical stone building as a heating battery, i.e., to periodically store thermal energy in the building's structures without physically changing them. The stored heat is later utilized at times of high heat demand to reduce peak loads as well as the overall heat supply. With the help of artificial intelligence, in the form of a convolutional neural network deep learning model, the heat supply to the building is controlled using weather forecasts and a binary occupancy calendar to optimize energy use and power demand while sustaining comfortable indoor temperatures. The study indicates substantial savings in total energy (approximately 30%) and in peak energy (approximately 20%, based on daily peak powers) for the studied building and suggests that the method can be applied to other, similar cases.
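
As a rough illustration of the control idea in the abstract, the following is a minimal sketch of how a small 1-D convolutional network could map an hourly weather forecast and a binary occupancy calendar to a heat-supply setpoint. The architecture, layer sizes, and 24-hour horizon are illustrative assumptions, not the authors' published model.

# A minimal sketch, not the authors' published model: a small 1-D CNN
# mapping an hourly weather forecast and a binary occupancy calendar to
# a heat-supply setpoint. Layer sizes and the 24 h horizon are assumed.
import torch
import torch.nn as nn

class HeatSupplyCNN(nn.Module):
    def __init__(self, horizon=24):
        super().__init__()
        # Two input channels: forecast outdoor temperature and occupancy.
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * horizon, 1),  # scalar heat-supply setpoint
        )

    def forward(self, forecast, occupancy):
        x = torch.stack([forecast, occupancy], dim=1)  # (batch, 2, horizon)
        return self.net(x)

model = HeatSupplyCNN()
forecast = torch.randn(1, 24)                     # hypothetical hourly forecast
occupancy = torch.randint(0, 2, (1, 24)).float()  # 1 = occupied, 0 = empty
setpoint = model(forecast, occupancy)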

Place, publisher, year, edition, pages
MDPI AG, 2024
Keywords
district heating, deep learning, artificial intelligence (AI), historical building, energy storage, peak shaving
National Category
Energy Engineering
Identifiers
urn:nbn:se:kth:diva-351431 (URN), 10.3390/buildings14071985 (DOI), 001276631300001 (), 2-s2.0-85199592335 (Scopus ID)
Note

QC 20240819

Available from: 2024-08-19. Created: 2024-08-19. Last updated: 2024-08-19. Bibliographically approved.
Rajabi, N., Chernik, C., Reichlin, A., Taleb, F., Vasco, M., Ghadirzadeh, A., . . . Kragic, D. (2023). Mental Face Image Retrieval Based on a Closed-Loop Brain-Computer Interface. In: Augmented Cognition: 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings. Paper presented at 17th International Conference on Augmented Cognition, AC 2023, held as part of the 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, Jul 23 2023 - Jul 28 2023 (pp. 26-45). Springer Nature
2023 (English). In: Augmented Cognition: 17th International Conference, AC 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings, Springer Nature, 2023, p. 26-45. Conference paper, Published paper (Refereed).
Abstract [en]

Retrieval of mental images from measured brain activity may facilitate communication, especially when verbal or muscular communication is impossible or inefficient. Existing work focuses mostly on retrieving an observed visual stimulus, while our interest is in retrieving an imagined mental image. We present a closed-loop BCI framework to retrieve mental images of human faces. We utilize EEG signals as binary feedback to determine the relevance of an image to the target mental image, and we employ this feedback to traverse the latent space of a generative model, proposing new images closer to the actual target image. We evaluate the proposed framework on 13 volunteers. Unlike previous studies, we do not restrict the possible attributes of the resulting images to predefined semantic classes. Subjective and objective tests validate the ability of our model to retrieve face images similar to the actual target mental images.
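
To make the closed-loop traversal concrete, here is a minimal sketch of the kind of feedback loop the abstract describes, assuming a pretrained face generator (decode_face) and an EEG relevance classifier (eeg_relevance), both hypothetical stand-ins; the update rule shown is a simple illustrative heuristic, not the paper's algorithm.

import numpy as np

def retrieve(decode_face, eeg_relevance, dim=512, candidates=8, steps=20, lr=0.5):
    rng = np.random.default_rng(0)
    z = rng.standard_normal(dim)                 # current estimate of the target
    for _ in range(steps):
        # Propose candidate latents around the current estimate.
        props = z + rng.standard_normal((candidates, dim))
        # Show each decoded face; the EEG classifier returns 1 if the
        # recorded response marks the image as relevant to the target.
        labels = np.array([eeg_relevance(decode_face(p)) for p in props])
        if labels.any():
            # Nudge the estimate toward the mean of the relevant candidates.
            z += lr * (props[labels == 1].mean(axis=0) - z)
    return decode_face(z)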

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
Brain-Computer Interface, EEG, Generative Models, Mental Image Retrieval
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-337884 (URN), 10.1007/978-3-031-35017-7_3 (DOI), 001286423000003 (), 2-s2.0-85171440140 (Scopus ID)
Conference
17th International Conference on Augmented Cognition, AC 2023, held as part of the 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, Jul 23 2023 - Jul 28 2023
Note

Part of ISBN 9783031350160

QC 20231010

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-02-07. Bibliographically approved.
Ahlberg, S., Axelsson, A., Yu, P., Shaw Cortez, W. E., Gao, Y., Ghadirzadeh, A., . . . Dimarogonas, D. V. (2022). Co-adaptive Human-Robot Cooperation: Summary and Challenges. Unmanned Systems, 10(02), 187-203
2022 (English). In: Unmanned Systems, ISSN 2301-3850, E-ISSN 2301-3869, Vol. 10, no 02, p. 187-203. Article in journal (Refereed). Published.
Abstract [en]

The work presented here is a culmination of developments within the Swedish project COIN: Co-adaptive human-robot interactive systems, funded by the Swedish Foundation for Strategic Research (SSF), which addresses a unified framework for co-adaptive methodologies in human-robot co-existence. We investigate co-adaptation in the context of safe planning/control, trust, and multi-modal human-robot interaction, present novel methods that allow humans and robots to adapt to one another, and discuss directions for future work.

Place, publisher, year, edition, pages
World Scientific Pub Co Pte Ltd, 2022
Keywords
Co-adaptive systems, human-in-the-loop systems, human-robot interaction
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-310041 (URN), 10.1142/S230138502250011X (DOI), 000761503800006 (), 2-s2.0-85116890059 (Scopus ID)
Note

QC 20220321

Available from: 2022-03-21. Created: 2022-03-21. Last updated: 2025-02-09. Bibliographically approved.
Demir Kanik, S. U., Yin, W., Güneysu Özgür, A., Ghadirzadeh, A., Björkman, M. & Kragic, D. (2022). Improving EEG-based Motor Execution Classification for Robot Control. In: Proceedings 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022: Social Computing and Social Media: Design, User Experience and Impact. Paper presented at Social Computing and Social Media: Design, User Experience and Impact - 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 - July 1, 2022 (pp. 65-82). Springer Nature
2022 (English). In: Proceedings 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022: Social Computing and Social Media: Design, User Experience and Impact, Springer Nature, 2022, p. 65-82. Conference paper, Published paper (Refereed).
Abstract [en]

Brain-computer interface (BCI) systems have the potential to provide a communication tool using non-invasive signals, with applications in fields including neuro-rehabilitation and entertainment. Interpreting multi-class movement intentions in a real-time setting to control external devices such as robotic arms remains one of the main challenges in the BCI field. We propose a learning framework to decode upper-limb movement intentions before and during movement execution (ME), with the inclusion of motor imagery (MI) trials. The design of the framework allows the system to evaluate the uncertainty of the classification output and respond accordingly. The EEG signals collected during MI and ME trials are fed into a hybrid architecture consisting of convolutional neural networks (CNN) and long short-term memory (LSTM) networks, with limited pre-processing. The proposed approach shows the potential to anticipate the intended movement direction before the onset of the movement, while waiting to reach a certainty level, potentially by observing more EEG data from the beginning of the actual movement, before sending control commands to the robot, so that undesired outcomes are avoided. The results indicate that both the accuracy and the confidence level of the model improve with the introduction of MI trials right before the movement execution. Our results confirm the potential of the proposed model to contribute to real-time and continuous decoding of movement directions for robotic applications.
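
A minimal sketch of a CNN-LSTM hybrid of the general kind described, with a confidence gate that withholds robot commands until the classifier is sufficiently certain. Channel counts, layer sizes, and the 0.9 threshold are assumptions for illustration, not the paper's configuration.

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    # Illustrative hybrid: a 1-D CNN extracts features across EEG
    # channels, an LSTM models their temporal evolution.
    def __init__(self, channels=32, classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, classes)

    def forward(self, eeg):                       # eeg: (batch, channels, time)
        feats = self.conv(eeg).transpose(1, 2)    # (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])              # logits from the last step

def maybe_command(logits, threshold=0.9):
    # Send a command only once the classifier is certain enough;
    # otherwise wait and accumulate more EEG data (single trial assumed).
    probs = torch.softmax(logits, dim=-1)
    conf, direction = probs.max(dim=-1)
    return int(direction) if conf.item() >= threshold else None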

Place, publisher, year, edition, pages
Springer Nature, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13315
Keywords
brain computer interface
National Category
Neurosciences; Signal Processing; Robotics and automation
Identifiers
urn:nbn:se:kth:diva-318297 (URN), 10.1007/978-3-031-05061-9_5 (DOI), 000911435700005 (), 2-s2.0-85133032331 (Scopus ID)
Conference
Social Computing and Social Media: Design, User Experience and Impact - 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 - July 1, 2022
Note

QC 20230307

Available from: 2022-09-19. Created: 2022-09-19. Last updated: 2025-02-05. Bibliographically approved.
Gert, A. L., Czeszumski, A., Keshava, A., Ghadirzadeh, A., Kalthoff, T., Ehinger, B., . . . Koenig, P. (2021). Coordinating with a Robot Partner Affects Action Monitoring Related Neural Processing. Psychophysiology, 58, S60-S60
2021 (English). In: Psychophysiology, ISSN 0048-5772, E-ISSN 1469-8986, Vol. 58, p. S60-S60. Article in journal, Meeting abstract (Other academic). Published.
Place, publisher, year, edition, pages
Wiley, 2021
Keywords
EEG, Human-Robot-Interaction, ERP
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-304198 (URN), 000706408100225 ()
Note

QC 20211103

Available from: 2021-11-03. Created: 2021-11-03. Last updated: 2025-02-09. Bibliographically approved.
Czeszumski, A., Gert, A. L., Keshava, A., Ghadirzadeh, A., Kalthoff, T., Ehinger, B. V., . . . König, P. (2021). Coordinating With a Robot Partner Affects Neural Processing Related to Action Monitoring. Frontiers in Neurorobotics, 15, Article ID 686010.
2021 (English). In: Frontiers in Neurorobotics, ISSN 1662-5218, Vol. 15, article id 686010. Article in journal (Refereed). Published.
Abstract [en]

Robots are starting to play a role in our social landscape, and they are progressively becoming responsive, both physically and socially. This raises the questions of how humans react to and interact with robots in a coordinated manner and what the neural underpinnings of such behavior are. This exploratory study aims to understand the differences between human-human and human-robot interactions at the behavioral level and from a neurophysiological perspective. For this purpose, we adapted a collaborative dynamical paradigm from the literature. We asked 12 participants to hold two corners of a tablet while collaboratively guiding a ball around a circular track, either with another participant or with a robot. At irregular intervals, the ball was perturbed outward, creating an artificial error in the behavior that required corrective measures to return to the circular track. Concurrently, we recorded electroencephalography (EEG). In the behavioral data, we found an increased velocity and positional error of the ball from the track in the human-human condition vs. the human-robot condition. For the EEG data, we computed event-related potentials. We found a significant difference between human and robot partners, driven by significant clusters at fronto-central electrodes. The amplitudes were stronger with a robot partner, suggesting different neural processing. All in all, our exploratory study suggests that coordinating with robots affects action-monitoring-related processing. In the investigated paradigm, human participants treat errors during human-robot interaction differently from those made during interactions with other humans. These results could improve communication between humans and robots through the use of neural activity in real time.
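
For readers unfamiliar with the technique, a minimal sketch of the standard event-related-potential computation used in such studies follows: epoch the continuous EEG around each perturbation event, baseline-correct, and average across trials per condition. The sampling rate and epoch window here are illustrative, not the study's parameters.

import numpy as np

def erp(eeg, events, fs=500, pre=0.2, post=0.8):
    # eeg: (channels, samples); events: sample indices of perturbations.
    a, b = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[:, e - a:e + b] for e in events])  # (trials, ch, time)
    epochs = epochs - epochs[:, :, :a].mean(axis=2, keepdims=True)  # baseline
    return epochs.mean(axis=0)  # trial average = the event-related potential

# erp_human = erp(eeg, human_partner_events)
# erp_robot = erp(eeg, robot_partner_events)   # compare at fronto-central sites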

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
human-robot interaction, social neuroscience, joint action, ERP, EEG, embodied cognition, action monitoring
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-301830 (URN), 10.3389/fnbot.2021.686010 (DOI), 000690873200001 (), 34456705 (PubMedID), 2-s2.0-85113457169 (Scopus ID)
Note

QC 20210915

Available from: 2021-09-15. Created: 2021-09-15. Last updated: 2025-02-09. Bibliographically approved.
Ghadirzadeh, A., Chen, X., Yin, W., Yi, Z., Björkman, M. & Kragic, D. (2021). Human-Centered Collaborative Robots With Deep Reinforcement Learning. IEEE Robotics and Automation Letters, 6(2), 566-571
2021 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 6, no 2, p. 566-571. Article in journal (Refereed). Published.
Abstract [en]

We present a reinforcement learning based framework for human-centered collaborative systems. The framework is proactive and balances the benefits of timely actions against the risk of taking improper actions by minimizing the total time spent to complete the task. The framework is learned end-to-end in an unsupervised fashion, addressing the perception uncertainties and the decision making in an integrated manner. On an example packaging task, the framework is shown to provide more time-efficient coordination between human and robot partners than alternatives in which the perception and decision-making systems are learned independently using supervised learning. Two important benefits of the proposed approach are that tedious annotation of motion data is avoided and that the learning is performed online.
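
A minimal sketch of the trade-off the framework optimizes: each time step has a cost, and acting on an uncertain perception risks an additional penalty, so a policy maximizing this kind of return learns to balance waiting for more observations against acting early. The per-step reward shown is an illustrative stand-in, not the paper's exact formulation.

def step_reward(acted, correct, time_cost=1.0, error_penalty=20.0):
    # Every step costs time, so waiting is never free; acting on a wrong
    # perception costs extra, modeling the rework an improper action causes.
    reward = -time_cost
    if acted and not correct:
        reward -= error_penalty
    return reward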

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Human-Centered robotics, human-robot collaboration, reinforcement learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-289870 (URN), 10.1109/LRA.2020.3047730 (DOI), 000608677400003 (), 2-s2.0-85099106192 (Scopus ID)
Note

QC 20210212

Available from: 2021-02-12. Created: 2021-02-12. Last updated: 2024-01-17. Bibliographically approved.
Chen, X., Ghadirzadeh, A., Björkman, M. & Jensfelt, P. (2020). Adversarial Feature Training for Generalizable Robotic Visuomotor Control. In: 2020 International Conference on Robotics And Automation (ICRA). Paper presented at 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, 31 May 2020 through 31 August 2020 (pp. 1142-1148). Institute of Electrical and Electronics Engineers (IEEE), Article ID 9197505.
2020 (English). In: 2020 International Conference on Robotics And Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 1142-1148, article id 9197505. Conference paper, Published paper (Refereed).
Abstract [en]

Deep reinforcement learning (RL) has enabled training action-selection policies end-to-end, by learning a function which maps image pixels to action outputs. However, its application to visuomotor robotic policy training has been limited by the challenge of large-scale data collection when working with physical hardware. A suitable visuomotor policy should perform well not just for the task setup it has been trained for, but also for all varieties of the task, including novel objects at different viewpoints surrounded by task-irrelevant objects. However, it is impractical for a robotic setup to collect enough interactive samples in an RL framework to generalize well to novel aspects of a task. In this work, we demonstrate that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks and then transfer the acquired policy to other, novel task domains. We propose to leverage deep RL's capability to learn complex visuomotor skills for uncomplicated task setups, and then exploit transfer learning to generalize to new task domains given only still images of the task in the target domain. We evaluate our method on two real robotic tasks, picking and pouring, and compare it to a number of prior works, demonstrating its superiority.
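
A minimal sketch of domain-adversarial feature training of the general kind the abstract describes, using the well-known gradient-reversal trick: a domain classifier tries to distinguish source from target features, while the reversed gradient pushes the encoder toward domain-invariant features. Module names and loss choices are assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass, negated gradient on the backward
    # pass: the standard gradient-reversal trick for domain-adversarial
    # feature learning.
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

def adversarial_step(encoder, policy_head, domain_head, batch):
    imgs, actions, domains = batch      # domains: 0 = source, 1 = target
    feats = encoder(imgs)
    # In the paper's setting, action labels exist only for the source
    # domain; this sketch glosses over that split.
    policy_loss = nn.functional.mse_loss(policy_head(feats), actions)
    # The domain classifier learns to tell domains apart, while the
    # reversed gradient pushes the encoder toward domain-invariant features.
    dom_logits = domain_head(GradReverse.apply(feats))
    domain_loss = nn.functional.cross_entropy(dom_logits, domains)
    return policy_loss + domain_loss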

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-285992 (URN), 10.1109/ICRA40945.2020.9197505 (DOI), 000712319500124 (), 2-s2.0-85092725429 (Scopus ID)
Conference
2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, 31 May 2020 through 31 August 2020
Note

QC 20211216

Part of proceeding: ISBN 978-1-7281-7395-5

Available from: 2020-11-16. Created: 2020-11-16. Last updated: 2025-02-09. Bibliographically approved.
Bütepage, J., Ghadirzadeh, A., Öztimur Karadag, Ö., Björkman, M. & Kragic, D. (2020). Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks. Frontiers in Robotics and AI, 7, Article ID 47.
2020 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 7, article id 47. Article in journal (Refereed). Published.
Abstract [en]

Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner; they require the ability to predict and adapt to one's partner during an interaction. In this work we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution is a novel probabilistic latent variable model which predicts not in joint space but in latent space. To test the proposed method, we collect human-human and human-robot interaction data for four interactive tasks: "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks.
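
A minimal, deterministic sketch of the latent-space prediction idea (the paper's actual model is a probabilistic latent variable model): encode the observed human motion into a latent trajectory, predict its continuation in latent space, and decode a matching robot joint configuration. All dimensionalities and module choices are illustrative assumptions.

import torch
import torch.nn as nn

latent = 16
encode_human = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, latent))
predict_latent = nn.GRU(latent, latent, batch_first=True)
decode_robot = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, 7))

human_motion = torch.randn(1, 30, 50)      # 30 frames of human pose features
z = encode_human(human_motion)             # latent trajectory, (1, 30, latent)
z_pred, _ = predict_latent(z)              # predicted continuation in latent space
robot_joints = decode_robot(z_pred[:, -1]) # matching 7-DoF robot joint command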

Place, publisher, year, edition, pages
Frontiers Media SA, 2020
Keywords
deep learning, generative models, human-robot interaction, imitation learning, sensorimotor coordination, variational autoencoders
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-277188 (URN), 10.3389/frobt.2020.00047 (DOI), 000531230100001 (), 33501215 (PubMedID), 2-s2.0-85084053889 (Scopus ID)
Note

QC 20200714

Available from: 2020-07-14. Created: 2020-07-14. Last updated: 2025-02-09. Bibliographically approved.
Arndt, K., Hazara, M., Ghadirzadeh, A. & Kyrki, V. (2020). Meta Reinforcement Learning for Sim-to-real Domain Adaptation. In: 2020 IEEE International Conference On Robotics And Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), May 31 - June 15, 2020, held online (pp. 2725-2731). IEEE
2020 (English). In: 2020 IEEE International Conference On Robotics And Automation (ICRA), IEEE, 2020, p. 2725-2731. Conference paper, Published paper (Refereed).
Abstract [en]

Modern reinforcement learning methods suffer from low sample efficiency and unsafe exploration, making it infeasible to train robotic policies entirely on real hardware. In this work, we address the problem of sim-to-real domain transfer by using meta learning to train a policy that can adapt to a variety of dynamic conditions, together with a task-specific trajectory generation model that provides an action space facilitating quick exploration. We evaluate the method by performing domain adaptation in simulation and analyzing the structure of the latent space during adaptation. We then deploy the policy on a KUKA LBR 4+ robot and evaluate its performance on the task of hitting a hockey puck toward a target. Our method shows more consistent and stable domain adaptation than the baseline, resulting in better overall performance.
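
A minimal sketch of the adaptation phase under one plausible reading of the approach: a meta-trained policy is conditioned on a low-dimensional latent context, and only that context is optimized against real-robot returns while the policy weights stay frozen. The interfaces (rollout_reward in particular) are hypothetical stand-ins.

import torch

def adapt_context(policy, rollout_reward, context, steps=10, lr=0.1):
    # Adapt only the low-dimensional context to the real dynamics,
    # keeping the meta-trained policy weights frozen.
    context = context.clone().requires_grad_(True)
    opt = torch.optim.Adam([context], lr=lr)
    for _ in range(steps):
        # rollout_reward is assumed to run one episode with the policy
        # conditioned on `context` and return a differentiable surrogate
        # of the episode return.
        loss = -rollout_reward(policy, context)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return context.detach()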

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-306450 (URN), 10.1109/ICRA40945.2020.9196540 (DOI), 000712319502005 (), 2-s2.0-85092690778 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 31 - June 15, 2020, held online
Note

QC 20211217

Conference ISBN: 978-1-7281-7395-5

Available from: 2021-12-17. Created: 2021-12-17. Last updated: 2025-02-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6738-9872
