1 - 7 of 7
  • 1.
    Karaoguz, Hakan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Object Detection Approach for Robot Grasp Detection. 2019. In: 2019 International Conference on Robotics and Automation (ICRA), ed. Howard, A.; Althoefer, K.; Arai, F.; Arrichiello, F.; Caputo, B.; Castellanos, J.; Hauser, K.; Isler, V.; Kim, J.; Liu, H.; Oh, P.; Santos, V.; Scaramuzza, D.; Ude, A.; Voyles, R.; Yamane, K.; Okamura, A. Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 4953-4959, article id 8793751. Conference paper (Refereed).
    Abstract [en]

    In this paper, we focus on the robot grasping problem with parallel grippers using image data. For this task, we propose and implement an end-to-end approach. To detect good grasping poses for a parallel gripper from RGB images, we employ transfer learning for a Convolutional Neural Network (CNN) based object detection architecture. Our results show that the adapted network either outperforms or is on par with state-of-the-art methods on a benchmark dataset. We also performed grasping experiments on a real robot platform to evaluate our method's real-world performance.
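
    The approach summarised above fine-tunes a pretrained CNN object detector so that it outputs grasp candidates instead of object categories. The sketch below illustrates that general transfer-learning pattern in PyTorch, using torchvision's Faster R-CNN as a stand-in detector and a discretised grasp orientation as the class label; the detector choice, orientation encoding, and hyperparameters are illustrative assumptions, not details taken from the paper.

    ```python
    # Hypothetical sketch: fine-tune a pretrained detector to output grasp
    # candidates as boxes plus a discretised orientation class. The use of
    # torchvision's Faster R-CNN is an illustration only, not necessarily
    # the network used in the paper.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_ORIENTATION_BINS = 18               # grasp angle discretised into classes
    num_classes = NUM_ORIENTATION_BINS + 1  # +1 for the background class

    # Start from COCO-pretrained weights (transfer learning).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the classification head so each detected box is scored against
    # grasp-orientation classes instead of COCO object categories.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Fine-tune on (image, grasp-rectangle) pairs from a grasping dataset.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    def train_step(images, targets):
        """One optimisation step; `targets` holds boxes and orientation labels."""
        model.train()
        loss_dict = model(images, targets)  # the detector returns a dict of losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```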

  • 2.
    Karaoǧuz, Hakan
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Işil Bozma, H.
    Merging appearance-based spatial knowledge in multirobot systems. 2016. In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 5107-5112. Conference paper (Refereed).
    Abstract [en]

    This paper considers the merging of appearance-based spatial knowledge among robots having compatible visual sensing. Each robot is assumed to retain its knowledge in its individual long-term spatial memory, where i) place knowledge and its spatial relations are retained in an organized manner in the place and map memories, respectively; and ii) a 'place' refers to a spatial region designated by a collection of associated appearances. In the proposed approach, each robot communicates with another robot, receives its memory and then merges the received knowledge with its own. The novelty of the merging process is that it is done in two stages: merging of place knowledge followed by merging of map knowledge. As each robot's place memory is processed as a whole or in portions, the merging process scales easily with respect to the amount and overlap of the appearance data. Furthermore, the merging can be done in a decentralized manner. Our experimental results with a team of three robots demonstrate that the resulting merged knowledge enables the robots to reason about learned places.
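
    The two-stage merging described above (place knowledge first, map knowledge second) can be pictured with a small data-structure sketch. The Python below is an illustrative simplification under assumed representations: places stored as appearance descriptor vectors matched by cosine similarity, and the map stored as an adjacency set over place indices. The paper's actual memory organisation and matching criteria are not reproduced here.

    ```python
    # Illustrative two-stage merge of spatial memories, simplified stand-in only.
    import numpy as np

    class SpatialMemory:
        def __init__(self):
            self.places = []    # list of np.ndarray appearance descriptors
            self.edges = set()  # pairs (i, j): places observed as adjacent

        def _match(self, descriptor, threshold=0.8):
            """Return the index of a known place with matching appearance, else None."""
            for idx, known in enumerate(self.places):
                sim = float(np.dot(descriptor, known) /
                            (np.linalg.norm(descriptor) * np.linalg.norm(known)))
                if sim > threshold:
                    return idx
            return None

        def merge(self, other):
            """Merge another robot's memory into this one."""
            # Stage 1: merge place knowledge, recording how indices map over.
            index_map = {}
            for j, descriptor in enumerate(other.places):
                match = self._match(descriptor)
                if match is None:                       # unknown place: adopt it
                    self.places.append(descriptor)
                    index_map[j] = len(self.places) - 1
                else:                                   # known place: reuse it
                    index_map[j] = match
            # Stage 2: merge map knowledge using the remapped place indices.
            for (a, b) in other.edges:
                self.edges.add((index_map[a], index_map[b]))
    ```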

  • 3.
    Klamt, Tobias
    et al.
    Univ Bonn, Autonomous Intelligent Syst, Bonn, Germany..
    Chen, Xi
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Karaoǧuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Behnke, Sven
    Univ Bonn, Autonomous Intelligent Syst, Bonn, Germany..
    et al.,
    Flexible Disaster Response of Tomorrow: Final Presentation and Evaluation of the CENTAURO System. 2019. In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, vol. 26, no. 4, pp. 59-72. Article in journal (Refereed).
    Abstract [en]

    Mobile manipulation robots have great potential for roles in support of rescuers on disaster-response missions. Robots can operate in places too dangerous for humans and therefore can assist in accomplishing hazardous tasks while their human operators work at a safe distance. We developed a disaster-response system that consists of the highly flexible Centauro robot and suitable control interfaces, including an immersive telepresence suit and support-operator controls offering different levels of autonomy.

  • 4.
    Kragic, Danica
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH.
    Karaoǧuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Krug, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Interactive, collaborative robots: Challenges and opportunities. 2018. In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, pp. 18-25. Conference paper (Refereed).
    Abstract [en]

    Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put into use at the beginning of the 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted and structured for new or slightly changed products, is still an important open problem. Industrial robots today are still largely preprogrammed for their tasks, unable to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance of challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In the envisioned future factory setups, home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.

  • 5.
    Mancini, Massimiliano
    et al.
    Sapienza Univ Rome, Rome, Italy.;Fdn Bruno Kessler, Trento, Italy..
    Karaoguz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ricci, Elisa
    Fdn Bruno Kessler, Trento, Italy.;Univ Trento, Trento, Italy..
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Caputo, Barbara
    Italian Inst Technol, Milan, Italy..
    Knowledge is Never Enough: Towards Web Aided Deep Open World Recognition. 2019. In: 2019 International Conference on Robotics and Automation (ICRA), ed. Howard, A.; Althoefer, K.; Arai, F.; Arrichiello, F.; Caputo, B.; Castellanos, J.; Hauser, K.; Isler, V.; Kim, J.; Liu, H.; Oh, P.; Santos, V.; Scaramuzza, D.; Ude, A.; Voyles, R.; Yamane, K.; Okamura, A. Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 9537-9543, article id 8793803. Conference paper (Refereed).
    Abstract [en]

    While today's robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably see new objects in unconstrained settings, and thus will always have visual knowledge gaps. However, standard visual modules are usually built on a limited set of classes and are based on the strong prior that an object must belong to one of those classes. Identifying whether an instance does not belong to the set of known categories (i.e. open set recognition) only partially tackles this problem, as a truly autonomous agent should be able not only to detect what it does not know, but also to dynamically extend its knowledge about the world. We contribute to this challenge with a deep learning architecture that can dynamically update its known classes in an end-to-end fashion. The proposed deep network, based on a deep extension of a non-parametric model, detects whether a perceived object belongs to the set of categories known by the system and learns it without the need to retrain the whole system from scratch. Annotated images of the new category can be provided by an 'oracle' (i.e. human supervision) or by autonomous mining of the Web. Experiments on two different databases and on a robot platform demonstrate the promise of our approach.
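
    The abstract above describes a deep extension of a non-parametric model that rejects unknown objects and adds new classes without retraining from scratch. The sketch below shows one way such an open-world loop can look: a nearest-class-mean style classifier over frozen deep features, with a distance threshold for rejection and a per-class mean computed from a few annotated images. The backbone, threshold, and update rule are assumptions for illustration, not the paper's actual architecture.

    ```python
    # Hedged sketch of an open-world recognition loop: reject unknowns by
    # distance, add a new class from a few labelled images without retraining.
    import torch
    import torchvision

    # Frozen feature extractor (ImageNet-pretrained backbone, classifier removed).
    backbone = torchvision.models.resnet18(weights="DEFAULT")
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    class_means = {}          # class name -> mean feature vector
    REJECT_DISTANCE = 25.0    # assumed open-set rejection threshold

    @torch.no_grad()
    def embed(images):
        return backbone(images)            # (N, 512) feature vectors

    @torch.no_grad()
    def classify(image):
        """Return the predicted class, or None if the object looks unknown."""
        feature = embed(image.unsqueeze(0))[0]
        if not class_means:
            return None
        distances = {name: torch.norm(feature - mean).item()
                     for name, mean in class_means.items()}
        best = min(distances, key=distances.get)
        return best if distances[best] < REJECT_DISTANCE else None

    @torch.no_grad()
    def learn_new_class(name, annotated_images):
        """Add a class from a few labelled images (oracle or web-mined)."""
        class_means[name] = embed(annotated_images).mean(dim=0)
    ```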

  • 6.
    Mancini, Massimiliano
    et al.
    Sapienza Univ Rome, Rome, Italy.;Fdn Bruno Kessler, Trento, Italy..
    Karaoǧuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ricci, Elisa
    Fdn Bruno Kessler, Trento, Italy.;Univ Trento, Trento, Italy..
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Caputo, Barbara
    Sapienza Univ Rome, Rome, Italy.;Italian Inst Technol, Milan, Italy..
    Kitting in the Wild through Online Domain Adaptation. 2018. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), ed. Maciejewski, A. A.; Okamura, A.; Bicchi, A.; Stachniss, C.; Song, D. Z.; Lee, D. H.; Chaumette, F.; Ding, H.; Li, J. S.; Wen, J.; Roberts, J.; Masamune, K.; Chong, N. Y.; Amato, N.; Tsagwarakis, N.; Rocco, P.; Asfour, T.; Chung, W. K.; Yasuyoshi, Y.; Sun, Y.; Maciekeski, T.; Althoefer, K.; Andrade-Cetto, J.; Chung, W. K.; Demircan, E.; Dias, J.; Fraisse, P.; Gross, R.; Harada, H.; Hasegawa, Y.; Hayashibe, M.; Kiguchi, K.; Kim, K.; Kroeger, T.; Li, Y.; Ma, S.; Mochiyama, H.; Monje, C. A.; Rekleitis, I.; Roberts, R.; Stulp, F.; Tsai, C. H. D.; Zollo, L. IEEE, 2018, pp. 1103-1109. Conference paper (Refereed).
    Abstract [en]

    Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems are needed that can adapt to any possible change in the working conditions. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, we provide images of the same objects acquired under various conditions in which camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which allows a model to be continuously adapted to the current working conditions. Unlike standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its ability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
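
    The online adaptation idea summarised above relies on batch-normalization layers: at deployment, their statistics can be refreshed from incoming target-domain images while all learned weights stay fixed, so no target images are needed at training time. The PyTorch sketch below shows that mechanism in a generic form; the model, momentum value, and update schedule are illustrative assumptions rather than the paper's exact procedure.

    ```python
    # Minimal sketch of online adaptation through batch-normalization statistics.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights="DEFAULT")

    def adapt_online(model, frame_batch, bn_momentum=0.1):
        """Update only BN running statistics from the current working conditions."""
        # Freeze all parameters: no gradients, no weight updates.
        for p in model.parameters():
            p.requires_grad_(False)
        # Put BN layers (and only BN layers) in training mode so a forward pass
        # refreshes their running_mean / running_var from the new domain.
        model.eval()
        for m in model.modules():
            if isinstance(m, torch.nn.BatchNorm2d):
                m.train()
                m.momentum = bn_momentum
        with torch.no_grad():
            model(frame_batch)   # the statistics update happens in the forward pass
        model.eval()             # back to inference mode for prediction
        return model
    ```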

  • 7.
    Sibirtseva, Elena
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kontogiorgos, Dimosthenis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Nykvist, Olov
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Karaoǧuz, Hakan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. 2018. In: Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2018, IEEE, 2018. Conference paper (Refereed).
    Abstract [en]

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions - head-mounted display, projector, and a monitor as the baseline - using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall showed a preference for the projector condition over the monitor and head-mounted display conditions.
